Tracy Hautenen Kriel
Architecture Sr. Manager, Data Design & Metadata Mgmt at an insurance company with 10,001+ employees
Real User
Top 5 Leaderboard
Seeing a picture that shows you how the data relates to each other helps you better understand what the data is and how to use it

Pros and Cons

  • "The visual data models for helping to overcome data source complexity and enabling understanding and collaboration around maintenance and usage are excellent. A picture speaks 1,000 words. Seeing a picture that shows you how the data relates to each other helps you better understand what the data is and how to use it. Pairing that information with a dictionary, which has the definitions of the tables and columns or the entities and attributes, ensures that the users understand what the data is so that they can use it best and most successfully."
  • "I would like to see the reporting capabilities be more dynamic and more inclusive of information. The API is very sparsely understood by people across the user community."

What is our primary use case?

We use the erwin Data Modeler tool to document conceptual, logical, and physical data design. Business data models capture the understanding of the data from a business perspective, which can then drive physical design to ensure data is represented and used correctly.

How has it helped my organization?

The automated generation of the DDL ensures that the data store matches the data design exactly. It also ensures that governed standards are followed and implemented successfully.

What is most valuable?

We use the diagrams and data dictionary capabilities to help users understand the data environments, as well as how the data relates to each other. We use the naming standard master file to govern and ensure that we have consistent naming and abbreviations across and within data stores. We use the forward engineering templates to standardize and govern the generation of the data definition language that is used to actually make the changes to the data stores. We also use the Compare capability to ensure that we have up-to-date production data models. And we are looking forward to the integration of the Data Modeler metadata with the Data Intelligence Suite in R2.
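A naming standard master file is essentially a governed abbreviation glossary applied to logical names. Here is a minimal sketch of the idea in Python; the glossary entries and the derivation rule are hypothetical illustrations, not erwin's actual mechanism:

```python
# Toy illustration of a naming standard "master file": a governed glossary
# of approved abbreviations used to derive physical column names from
# logical attribute names. Entries are made up for the example.
ABBREVIATIONS = {
    "customer": "CUST",
    "number": "NBR",
    "address": "ADDR",
    "effective": "EFF",
    "date": "DT",
}

def physical_name(logical_name: str) -> str:
    """Map each word to its approved abbreviation and join with underscores."""
    words = logical_name.lower().split()
    return "_".join(ABBREVIATIONS.get(w, w.upper()) for w in words)

print(physical_name("Customer Number"))  # CUST_NBR
```

Governing abbreviations this way is what keeps a column from appearing as CUST_NBR in one data store and CUSTOMER_NUM in another.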

The visual data models for helping to overcome data source complexity and enabling understanding and collaboration around maintenance and usage are excellent. A picture speaks 1,000 words. Seeing a picture that shows you how the data relates to each other helps you better understand what the data is and how to use it. Pairing that information with a dictionary, which has the definitions of the tables, columns, the entities, and attributes, ensures that the users understand what the data is so that they can use it best and most successfully.

Its ability to compare and synchronize data sources with data models in terms of accuracy and speed for keeping them in sync is excellent. 
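At its core, that compare works like a diff between what the model says and what the database has. The following is an illustrative toy in Python, assuming simple name-to-type dictionaries; erwin's Complete Compare covers far more metadata than this:

```python
# Sketch of a model-vs-database compare: diff the columns the model defines
# against the columns a live table actually has, and emit the ALTER
# statements needed to bring the database back in line with the model.
def compare_columns(model_cols: dict, db_cols: dict, table: str) -> list:
    actions = []
    for col, dtype in model_cols.items():
        if col not in db_cols:
            # Column exists in the model but not in the database.
            actions.append(f"ALTER TABLE {table} ADD COLUMN {col} {dtype};")
        elif db_cols[col] != dtype:
            # Column exists in both, but the datatypes have drifted apart.
            actions.append(
                f"ALTER TABLE {table} ALTER COLUMN {col} SET DATA TYPE {dtype};"
            )
    for col in db_cols:
        if col not in model_cols:
            # Column exists in the database but was removed from the model.
            actions.append(f"ALTER TABLE {table} DROP COLUMN {col};")
    return actions
```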

We don't typically use the configurable workspace and modeling canvas because while the platform allows for the flexibility to dynamically include multiple colors and multiple themes, feedback from business users is that the multiple colors and themes can become overwhelming. When you do that, you need to include a key so that people understand what the colors mean.

Its ability to generate database code from a model for a wide array of data sources cuts our development time. By how much depends on the number of changes that are required within the data store. It is certainly better to automate the forward engineering of the DDL creation, rather than having someone manually type it all out and then possibly make a human error with spelling irregularities.

Its code generation ensures accurate engineering of data sources. It decreases development time because it's automated.
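Forward engineering boils down to rendering governed DDL from model metadata instead of hand-typing it. Here is a toy sketch with a made-up model structure; erwin drives this from templates rather than code like this:

```python
# Toy forward-engineering pass: render a CREATE TABLE statement from model
# metadata so nobody hand-types the DDL (and mistypes a column name).
def forward_engineer(table: str, columns: list) -> str:
    """columns: list of (name, datatype, nullable) tuples from the model."""
    col_defs = []
    for name, dtype, nullable in columns:
        null_sql = "NULL" if nullable else "NOT NULL"
        col_defs.append(f"    {name} {dtype} {null_sql}")
    return f"CREATE TABLE {table} (\n" + ",\n".join(col_defs) + "\n);"

ddl = forward_engineer("CUSTOMER", [
    ("CUST_ID", "INTEGER", False),
    ("CUST_NM", "VARCHAR(100)", True),
])
```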

What needs improvement?

I would like to see the reporting capabilities be more dynamic and more inclusive of information. The API is very sparsely understood by people across the user community.

I would also like to see a greater amount of integration with the erwin Data Intelligence Suite and the erwin Web Portal for the diagram delivery. That would be beneficial to all.

For how long have I used the solution?

I have been using erwin for twenty years. 

What do I think about the stability of the solution?

It's very stable, especially having been available for use for so many years.

What do I think about the scalability of the solution?

It is scaling well to include the new data structures, rather than being stagnant and only continuing to support the older DBMS types.

We have over 100 Data Modelers in my company and the users of the metadata go into the 1,000s.

We have an administrator who is responsible for the software upgrades, we have a governance community in the Center of Excellence, and we have the actual Data Modelers themselves who provide the delivery of the physical data models. We have data architects who create business, conceptual, and logical data models. And then, of course, we have our developers who use the data model information to understand the code that they are writing. We also have the business users who use the diagrams and the data dictionaries to understand the data so that they use it correctly.

Data Modeler is being used very extensively. We are considered power users within the community of users.

As new applications are developed, we may or may not need new licenses for erwin Data Modeler.

Which solution did I use previously and why did I switch?

I have used SILVERRUN, which is a very old tool that has actually been sunset. I have also used SAP Sybase PowerDesigner. The primary reason for choosing PowerDesigner over erwin Data Modeler was that we were able to program the PL/SQL right into Sybase PowerDesigner. At the time, it had the capability to order the run of the PL/SQL. So Sybase PowerDesigner would not only make the changes to the database via the DDL, but it also generated the PL/SQL code that moved the data from source to target. That's a capability that erwin Data Modeler has never had. I don't know if it is on the roadmap for the future, but I also do not see it as a requirement for erwin Data Modeler going forward, because there are many ETL tools readily available.

I've also used IDERA. The interesting feature of IDERA that differentiates it from erwin Data Modeler is that the model repository actually separates the logical data models from the physical data models, whereas in erwin it's basically the flip of a switch: it's not a true logical model, it's a logical representation of the physical data model.

I think the other thing that sets erwin Data Modeler apart is the Model Mart repository, which protects a company's intellectual property within the data models and makes them available across the company, so that the information is shared with anyone who has an erwin Data Modeler license. That was not available in SILVERRUN, and it was also not available in PowerDesigner when I used it, about 15 years ago. It is available in IDERA.

How was the initial setup?

I find the setup straightforward. It is very easy to install. It took minutes.

What was our ROI?

We have seen ROI.

The reusability of some of the information within erwin Data Modeler, coupled with the capability to govern information such as the data domains, the naming standard master file, and the generation of the DDL, ensures that there is consistency across and within data stores. Every piece of automation also reduces the time to deliver the information, because of the automation and governance built into the tool.

Whether or not the accuracy and speed of the solution in transforming complex designs into well-aligned data sources make the cost of the tool worth it would be a judgment call. I do think it is worth it. But of course, in this day and age, when people are offshoring all of their work to save money, one has to consider the cost of any investment.

What's my experience with pricing, setup cost, and licensing?

I think that the pricing is reasonable. It has what is called concurrent licensing, where a number of people can share an erwin license. I think that pricing is a little bit high, but that is a personal opinion.

What other advice do I have?

The biggest lesson that I've learned is actually with a lack of data modeling. We have teams who have complained that data modeling takes too long. They would rather have developers manually code the DDL, which creates a lot of mistakes, increases the backlog, and increases not only the time to delivery but the cost to delivery. There is a lack of understanding of the agile methodology around data modeling and the incorporation of the emergent design happening in the scrum teams with the intentional design of the data architect creating a data model. Given an opportunity to follow the correct path and perform data modeling, we have seen a significant return on investment with decreases in delivery time and decreases in project cost.

I would rate erwin Data Modeler a ten out of ten. 

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
SA
Senior Data Warehouse Architect at a financial services firm with 1,001-5,000 employees
Real User
Top 20
Support for Snowflake is very helpful from the data modeling perspective, and JDBC/native connectivity simplifies the push mechanism

Pros and Cons

  • "The logical model gives developers, as well as the data modelers, an understanding of exactly how each object interacts with the others, whether a one-to-many, many-to-many, many-to-one, etc."
  • "We are planning to move, in 2021, into their server version, where multiple data modelers can work at the same time and share their models. It has become a pain point to merge the models from individual desktops and get them into a single data model, when multiple data modelers are working on a particular project. It becomes a nightmare for the senior data modeler to bring them together, especially when it comes to recreating them when you want to merge them."

What is our primary use case?

We use erwin DM as a data modeling tool. All projects in the data warehouse area go through the erwin model first, get reviewed, and get approved. That's part of the project life cycle. We then execute the scripts out of DM into Snowflake, which is our target database. Any changes that happen after that also go through erwin, and we then make a master copy of the erwin model.

Our solution architecture for projects that involve erwin DM and Snowflake is an on-prem Data Modeler desktop version, and we have a SQL database behind it and that's where the models are stored. In terms of erwin Data Modeler, Snowflake is the only database we're using.

We are not utilizing a complete round-trip from DM for Snowflake. We are only doing one side of it. We are not doing reverse-engineering. We only go from the data model to the physical layer.
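That one-way trip from the data model to the physical layer hinges on mapping logical datatypes to Snowflake physical types before the DDL is emitted. A simplified sketch follows; the mapping table is an assumption for illustration, not erwin's actual Snowflake support matrix:

```python
# Simplified logical-to-Snowflake type mapping, as applied when generating
# the physical layer. Real modeling tools carry much richer type metadata.
LOGICAL_TO_SNOWFLAKE = {
    "integer": "NUMBER(38,0)",
    "decimal": "NUMBER(38,9)",
    "text": "VARCHAR",
    "datetime": "TIMESTAMP_NTZ",
    "flag": "BOOLEAN",
}

def to_physical(logical_type: str) -> str:
    # Fall back to VARIANT for semi-structured or unmapped types.
    return LOGICAL_TO_SNOWFLAKE.get(logical_type.lower(), "VARIANT")
```

The emitted DDL is then run against Snowflake over JDBC or the native connector, which is the push mechanism mentioned below.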

How has it helped my organization?

We use erwin Data Modeler for all enterprise data warehouse-related projects. It is very vital that the models should be up and running and available to the end-users for their reporting purposes. They need to be able to go through them and to understand what kinds of components and attributes are available. In addition, the kinds of relationships that are built in the data warehouse are visible through erwin DM. It is very important to keep everybody up to the mark and on the same page. We distribute erwin models to all the business users, our business analysts, as well as the developers. It's the first step for us. Before something gets approved we generally don't do any data work. What erwin DM does is critical for us.

erwin DM's support for Snowflake is very helpful from the data modeling perspective and, obviously, the JDBC and native connectivity also helps us in simplifying the push mechanism we have in erwin DM. 

What is most valuable?

Primarily, we use erwin for data modeling only, using the functionality that is available to do logical and physical models. Those are the areas we use the most: the conceptual model first, then the logical model, and then the physical model.

When we do the conceptual data model, we will look at the source and how the objects in the source interact, and that will give us a very clear understanding of how the data is set up in the source environment. The logical model gives developers, as well as the data modelers, an understanding of exactly how each object interacts with the others, whether a one-to-many, many-to-many, many-to-one, etc. The physical model, obviously, helps in executing the data model in Snowflake, on the physical layer.

Compatibility and support for cloud-based databases is very important in our environment because Snowflake is the only database to which we push our physical data structures. So any data modeling tool we use should be compatible with a cloud data warehouse, like Snowflake. It is definitely a very important functionality and feature for us.

What needs improvement?

We are planning to move, in 2021, into their server version, where multiple data modelers can work at the same time and share their models. It has become a pain point to merge the models from individual desktops and get them into a single data model, when multiple data modelers are working on a particular project. It becomes a nightmare for the senior data modeler to bring them together, especially when it comes to recreating them when you want to merge them. That's difficult. So we are looking at the version that will be a server-based model, where the data modelers can bring the data out, they can share, and they can merge their data models with the existing data model on the server.

The version we're not using now—the server version—would definitely help us with the pain point when it comes to merging the models. When you have the desktop version, merging the models, two into one, requires more time. But when we go over to the server, the data models can automatically pull and push.

We will have to see what the scalability is like in that version.

Apart from that, the solution seems to be fine.

For how long have I used the solution?

I've been using erwin DM for years, since the early 2000s and onwards. It's a very robust tool for data modeling purposes.

What do I think about the scalability of the solution?

We have five to seven data modelers working on it at any moment in time. We have not seen any scalability issues, slowness, or that it is not supporting that level of use, because it's all desktop-based.

When we go into the server model, where the web server is involved, we will have to see. And the dataset storage in the desktop model is also very limited, so I don't think going to the server model is going to impact scalability.

In our company, erwin DM is used only in the data warehouse area at this moment. I don't see any plans, from the management perspective, to extend it. It's mostly for ER diagrams and we will continue to use it in the same way. Depending on the usage, the number of concurrent users might go up a little bit.

How are customer service and technical support?

I have interacted with erwin's technical support lately regarding the server version and they have been very proactive in answering those questions as well as following up with me. They ask if they have resolved the issue or if anything still needs to be done. I'm very happy with erwin's support.

What other advice do I have?

The biggest lesson I have learned from using erwin DM, irrespective of whether it's for Snowflake or not, is that having the model upfront and getting it approved helps in reducing project go-live time. Everybody is on the same page: all the developers, how they interact, how they need to connect the various objects to generate their ETL processes. It also definitely helps business analysts and end-users to understand how to write their Tableau reports. If they want to know where the objects are, how they connect to each other, and whether they are a one-to-one or one-to-many relationship, etc., they can get it out of this solution. It's a very central piece of the development and the delivery process.

We use Talend as our ETL and BI vendor for workload. We don't combine it with erwin DM. Right now, each is used for its own specific need and purpose. erwin DM is mostly for our data modeling purposes, and Talend is for integration purposes.

Overall, erwin DM's support for Snowflake is very good. It's very stable and user-friendly and our data modelers live, day in and day out, on it. No complaints. There is nothing that impacts their performance.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
RH
President at a tech services company with 51-200 employees
Real User
Top 5 Leaderboard
Beautiful model for the new microservices world that is easy to use

Pros and Cons

  • "It produces monthly savings of hundreds of thousands of dollars. Think about a company like Costco and all of the point-of-sale systems and applications in its stores. If all of those applications had their own data model, trying to integrate them, upgrade them, and manage different versions of the same model throughout the store would be an absolute nightmare. It's phenomenally expensive. This helps reduce that cost significantly. I'm talking on the order of hundreds of thousands of dollars."
  • "The navigation is a little bit of a challenge. It's painful. For example, if you've got a view open and you want to try to move from side to side, the standard today is being able to drag and drop left and right. You can't really do that in the model. Moving around the model is painful because it doesn't follow the Windows model today."

What is our primary use case?

I was part of a standards organization and we built a data model that is a standard data model for use in retail. That data model has now been released in version 7.3 and is implemented all over the world. We don't implement the model; we've built the logical model, and the companies build their own physical models from there.

It is a retail data model, which means that it handles the operational side of retail; there are somewhere around 8,000 attributes in it. It has around 10 groupings of things. We have a grouping on transactions, and there are all kinds of transactions that can occur in retail. The whole customer life cycle is covered, as are inventory, items, and all that. The use case is retail operations. It's massive. There are hundreds of use cases in this.

How has it helped my organization?

We don't implement, we simply tell other people how to do it. It's a beautiful model for the new microservices world, so we can help people understand how to fit this into their world. In terms of us actually doing something and implementing it and all that, that's really not in scope for what we do.

erwin is easy. In the microservices world, having a unified retail model like this one, which is a standard, allows two companies to interoperate easily. In fact, the whole reason the model was created, back in 1993, was that about half a dozen major retail CIOs got together and said, "We've got to have a standard model, because every time we buy a new point of sale system, we need to re-architect our entire enterprise." They started building the model then, and the beauty of it is that it does precisely what they intended. A retailer can now integrate two vendors' systems easily, as long as they both follow the same model. It reduces their cost of integration dramatically, and it is quite a powerful model in and of itself.

It produces monthly savings of hundreds of thousands of dollars. Think about a company like Costco and all of the point-of-sale systems and applications in its stores. If all of those applications had their own data model, trying to integrate them, upgrade them, and manage different versions of the same model throughout the store would be an absolute nightmare. It's phenomenally expensive. This helps reduce that cost significantly. I'm talking on the order of hundreds of thousands of dollars.

What is most valuable?

erwin is pretty easy. I've been using it for so long it's like second nature. 

The visual data models are pretty easy for helping to overcome data source complexity and enabling understanding and collaboration around maintenance and usage. It's easy to add, change, and update things. We get feedback from retailers. For example, if somebody wants to update something in the item area because they want to use a new item identifier, it's just a matter of going in and adding it to the enumerations for that. Or somebody might come in and say, "We're using a little bit of a different pricing model, so we need to add this information into the pricing area." Or people will say, "We need to add Bitcoin," so we can go in and add Bitcoin and the attributes you need to support it, and do it very easily. At this point, we're not adding new capabilities, we're simply expanding existing ones.

What needs improvement?

The navigation is a little bit of a challenge. It's painful. For example, if you've got a view open and you want to try to move from side to side, the standard today is being able to drag and drop left and right. You can't really do that in the model. Moving around the model is painful because it doesn't follow the Windows model today.

Otherwise, it's got everything I need and it's not hard to use for me.

What do I think about the stability of the solution?

The stability is great. We don't have any problems. 

How are customer service and technical support?

I actually did use their support. I had some issues getting it installed, which had to do with the fact that they had given me a copy of Data Modeler to support the standard data model, and getting that approved and authorized was a bit of a challenge. I went through the help desk and they got it done pretty easily for me. I had a unique problem.

Which solution did I use previously and why did I switch?

I had used dBase. This was a long time ago; I used dBase to build a model for the oil industry. It was 1980s vintage, so there is no comparison.

How was the initial setup?

The initial setup is straightforward. You can install it without a lot of hassle.

What's my experience with pricing, setup cost, and licensing?

They gave us a copy because of supporting a standards data model, so pricing and all that is really not something I can compare. I think it's a bit expensive, but it supports and does what we want.

Which other solutions did I evaluate?

At one point we had a data modeler who wanted to switch to Embarcadero, and it turned out that that was a huge mess so we dropped it. It didn't last very long, but it was a data modeler who came in and wanted to do it in Embarcadero. I think she had an agreement with them and got a bonus for trying to get it converted or approved to convert but it was such a huge mess we didn't do it.

Our model is huge; it's got 8,000 attributes in it. Being able to go through and validate that every one of those 8,000 attributes properly converted over to the correct place in Embarcadero was such a massive job that we didn't mess with it. And it's not just the attributes, but also the relationships and table names. It was a huge job, so we didn't do it. I suspect that if we had gone to Embarcadero, it would have been just fine, but it was just too big of a job.

What other advice do I have?

erwin DM is good. It does the job and it's been around a long time, so I think it would be a good one to use. I don't have any problems with it.

I would rate erwin DM a nine out of ten. Nothing is perfect. I don't have any real issues with it. It does everything we need it to do. It's really good.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Scott Pennah
Data Architect at Teknion Data Solutions
Real User
Top 10
Its ability to standardize data types and some common attributes is pretty powerful

Pros and Cons

  • "We use the macros with naming standards patterns, domains, datatypes, and some common attributes. As far as other automation goes, a feature of the Bulk Editor is mass updates. When it sees that something is nonstandard or inaccurate, it will export the metadata out. Then, I can easily see which entities and attributes are not in line with the standard, and I can easily make changes to what was uploaded to the Bulk Editor. When taking on a new project, it can save you about half a day on a big project across an entire team."
  • "The Bulk Editor needs improvement. If you had something that was a local model to your local machine, you could connect to the API, then it would write directly into the repository. However, when you have something that is on the centralized server, that functionality did not work. Then, you had to export out to a CSV and upload up to the repository. It would have been nice to be able to do the direct API without having that whole download and upload thing. Maybe I didn't figure it out, but I'm pretty sure that didn't work when it was a model that sat on a centralized repository."

What is our primary use case?

My previous employer's use case was around data warehousing. We used it to house our models and data dictionaries. We didn't do anything with BPM, etc. The company that I left prior to coming to my current company had just bought erwin EDGE. Therefore, I was helping to see how we could leverage the integration between erwin Mapping Manager and erwin Data Modeler, so we could forward engineer our models and source-to-target mappings, then map our data dictionary into our business definitions.

We didn't use it to capture our sources; it was more target-specific. We would just model and forward engineer our targets, and we managed source-to-target mappings in Excel. Only when the company first got erwin EDGE did we start to look at leveraging erwin Mapping Manager to manage source-to-target mappings, but that was still a PoC.

As far as erwin DM being source-specific, we didn't do anything with that. It was always target-focused.

How has it helped my organization?

It improved the way we were able to manage our models. I come from a corporate background, working for some big banks. We had a team of about 10 architects who were spread out, but we were able to collaborate very well with the tool.

It was a good way to socialize the data warehouse model within our own team and to our end users. 

It helped manage some of the data dictionary stuff, which we could extract out to end users. It provided a repository of the data warehouse models, centralizing them. It also was able to manage the metadata and have the dictionary all within one place, socializing that out from our repository as well.

Typically, for an engineer designing and producing the DDL out of erwin, we will execute it into the database, then they have a target that they can start coding towards. 

What is most valuable?

  • Being able to manage the domains.
  • Ability to standardize our data types and some common attributes, which was pretty powerful. 
  • The Bulk Editor: I could extract the metadata into Excel (or something) and be able to make some mass changes, then upload it back.

We use the macros with naming standards patterns, domains, datatypes, and some common attributes. As far as other automation goes, a feature of the Bulk Editor is mass updates. When it sees that something is nonstandard or inaccurate, it will export the metadata out. Then, I can easily see which entities and attributes are not in line with the standard, and I can easily make changes to what was uploaded to the Bulk Editor. When taking on a new project, it can save you about half a day on a big project across an entire team.
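That export, fix, and re-import loop can be sketched with a CSV round trip. The column names and the naming rule here are made up for illustration:

```python
import csv
import io
import re

# Hypothetical naming standard: governed UPPER_SNAKE_CASE identifiers.
NAME_RULE = re.compile(r"^[A-Z][A-Z0-9_]*$")

def flag_nonstandard(exported_csv: str) -> list:
    """Return (entity, attribute) pairs whose names violate the standard,
    as you would spot them in a Bulk Editor style metadata export."""
    reader = csv.DictReader(io.StringIO(exported_csv))
    return [(row["entity"], row["attribute"])
            for row in reader
            if not NAME_RULE.match(row["attribute"])]
```

The corrected rows would then be uploaded back, which is where the mass-update time savings come from.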

What needs improvement?

The Bulk Editor needs improvement. If you had something that was a local model to your local machine, you could connect to the API, then it would write directly into the repository. However, when you have something that is on the centralized server, that functionality did not work. Then, you had to export out to a CSV and upload up to the repository. It would have been nice to be able to do the direct API without having that whole download and upload thing. Maybe I didn't figure it out, but I'm pretty sure that didn't work when it was a model that sat on a centralized repository.

For how long have I used the solution?

I have been using erwin since about 2010. I used it last about a year ago at my previous employer. My current employer does not have it.

What do I think about the stability of the solution?

We only had one guy who would keep up with it. Outside of the server, as far as adding and removing users and doing an upgrade which I would help with sometimes, there were typically only two people on our side maintaining it.

What do I think about the scalability of the solution?

There are about 10 users in our organization.

How was the initial setup?

There were a couple of little things that you had to remember to do. We ran into a couple of issues more than once when we did an upgrade or install. It wasn't anything major, but it was something where you really had to remember how to do it.

It takes probably a few hours. If you do everything correctly, then everything is ready to go.

What about the implementation team?

There were two people from our side who deployed it, a DBA and myself. 

We didn't go directly through erwin to purchase the solution. We used Sandhill Consulting, who provided someone for the setup. We had used them since purchasing erwin. They used to put on workshops, tips and tricks, etc. They're pretty good.

What was our ROI?

Once you start to get into using all the features, it is definitely worth the cost.

Which other solutions did I evaluate?

erwin Mapping Manager, which I have PoC'd a few times, is something I would always use to produce ETL code. I have also used WhereScape for several years, and that type of functionality is very useful when producing ETL from your model. It provides a lot of savings. When you're not dealing with something extremely complex, but just with a lot of repeatable-type stuff, you get a pretty standard, robust model. Being able to produce the ETL code that way is a huge saving.

What other advice do I have?

The ability to compare and synchronize data sources with data models in terms of accuracy and speed for keeping them in sync is pretty powerful. However, I have never actually used the models as something that associates source. It is something I would be interested in trying to learn how to use and get involved with that type of feature. It would be nice to be able to have everything tied in from start to finish.

I am now working with cloud and Snowflake. Therefore, I definitely see some very good use cases and benefits for modeling the cloud with erwin. For example, there is so much more erwin can offer for doing something automated with SqlDBM. 

I would rate this solution as an eight out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
Jose Luis Leon
Data Management & Automation Manager at a consultancy with 11-50 employees
Reseller
Top 5Leaderboard
Different members can work on the same model, regardless of where they are located

Pros and Cons

  • "The ability to collaborate between different members across the organization is the most valuable feature. It gives us the ability to work on the same model, regardless of where we are physically."
  • "We had some data integration projects, where we needed to integrate it for about 100 databases. Doing that manually is crazy; we can't do that. With erwin, it was much easier to identify which tables and columns could be used for the integration. That means a lot in terms of time and effort as well as my image to the customer, because they can see that we are providing value in a very short time."
  • "I am not so happy with its speed. Sometimes, it can have problems with connections."

What is our primary use case?

We use it to create models, do some reverse engineering in the case of existing databases, and compare models, e.g., what is in the design versus what is in reality.
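The design-versus-reality comparison described above can be sketched generically. The following is a minimal illustration, not erwin's actual comparison engine, of diffing a designed schema against one reverse-engineered from a live database; the table and column names are hypothetical:

```python
# Minimal sketch of a "design vs. reality" schema comparison.
# The schema dicts are hypothetical stand-ins for a designed model
# and a schema reverse-engineered from a live database.

def diff_schemas(designed, actual):
    """Report tables and columns present in the design but missing in the
    database, and vice versa."""
    report = {"missing_tables": [], "extra_tables": [], "column_drift": {}}
    for table, cols in designed.items():
        if table not in actual:
            report["missing_tables"].append(table)
            continue
        missing = sorted(set(cols) - set(actual[table]))
        extra = sorted(set(actual[table]) - set(cols))
        if missing or extra:
            report["column_drift"][table] = {"missing": missing, "extra": extra}
    report["extra_tables"] = sorted(set(actual) - set(designed))
    return report

designed = {"customer": ["id", "name", "email"], "order": ["id", "customer_id"]}
actual = {"customer": ["id", "name"], "invoice": ["id"]}
print(diff_schemas(designed, actual))
```

A real tool compares far more (types, constraints, indexes), but the principle is the same: two metadata snapshots, one diff report.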

How has it helped my organization?

It provides us with a visual representation of the database, which helps me manage the complexity of the models. We can see if someone has made changes to anything, which is very important from a development perspective. It helps us maintain control of the work.

We had some data integration projects, where we needed to integrate it for about 100 databases. Doing that manually is crazy; we can't do that. With erwin, it was much easier to identify which tables and columns could be used for the integration. That means a lot in terms of time and effort as well as my image to the customer, because they can see that we are providing value in a very short time.

The solution's code generation ensures accurate engineering of data sources. This accuracy reduces our development time a lot. It is very easy to go into the graphical model, change something, and generate the scripts. It now takes minutes (less than an hour).
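Generating scripts from a graphical model, i.e., forward engineering, can be illustrated with a toy generator. This is a hedged sketch and not erwin's template engine; the in-memory model format and the table/column names are assumptions for illustration:

```python
# Toy forward-engineering sketch: emit CREATE TABLE DDL from a simple
# in-memory model. Illustrative only; not erwin's template engine.

def generate_ddl(model):
    """model: {table_name: [(column_name, sql_type, nullable), ...]}
    Returns the DDL statements as a single string."""
    statements = []
    for table, columns in model.items():
        cols = ",\n".join(
            f"    {name} {sql_type}{'' if nullable else ' NOT NULL'}"
            for name, sql_type, nullable in columns
        )
        statements.append(f"CREATE TABLE {table} (\n{cols}\n);")
    return "\n\n".join(statements)

model = {"customer": [("id", "INTEGER", False), ("name", "VARCHAR(100)", True)]}
print(generate_ddl(model))
```

The value of the real tool is that the model stays the single source of truth: change the diagram, regenerate, and the DDL always matches the design.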

What is most valuable?

The ability to collaborate between different members across the organization is the most valuable feature. It gives us the ability to work on the same model, regardless of where we are physically.

I like the accuracy. It is very precise.

What needs improvement?

I am not so happy with its speed. Sometimes, it can have problems with connections.

erwin's automation of reusable design rules and standards is good, but it could be better.

For how long have I used the solution?

About 30 years.

What do I think about the stability of the solution?

It is pretty good. I haven't had any problems with crashes, etc.

We have a consultant who is responsible for the maintenance.

What do I think about the scalability of the solution?

The solution's scalability is good. However, there isn't a clear explanation of how to go from 10 to 20 users, which is something that customers ask us.

In my company, there are currently five data managers who use erwin.

How are customer service and technical support?

I like their technical support. They try very hard to solve the problem.

They are not supporting old versions of some databases anymore, so I don't always have the tools that I need. I would like them to keep the support for the older versions.

How was the initial setup?

The standard edition is quite straightforward to set up. It is just clicking, "Next, Next, Next." This takes less than an hour to set up.

It gets complicated when we set up the group edition. We need to start a database. Sometimes, erwin support is needed for the setup. The setup for the group edition can take two days to a week, depending on the database.

What about the implementation team?

We also sell erwin to some of our customers. Usually, we create some sort of implementation steps to ensure that it will work.

What was our ROI?

We have seen ROI in terms of time, e.g., consulting time and the ability to answer customers faster. This has improved the image of the company.

The solution’s ability to generate database code from a model for a wide array of data sources cuts development time from two weeks to one day or even hours. This is one of the features that I like.

What's my experience with pricing, setup cost, and licensing?

The price should be lower in order to be on the same level as its competitors.

Which other solutions did I evaluate?

I have worked with Toad, Sparx, and the free version of Oracle Data Modeler. erwin DM's competitors are cheaper, but the look and feel of erwin is more user-friendly, professional, mature, and enterprise level.

What other advice do I have?

I recommend using erwin Data Modeler. You should have a good business case to convince the finance team, as the price is high for Latin America.

I would rate this solution as nine out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner