Alan Xu - PeerSpot reviewer
Consultant at Beijing Essential Data Tech Co., Ltd
Reseller
Top 10 Leaderboard
The data lineage feature is valuable but there is a lack of support in the China region
Pros and Cons
  • "The data lineage feature is very valuable."
  • "There is a lack of local support in the China region."

What is our primary use case?

Our company is the solution's only partner and reseller in China. We use the solution to provide data lineage to our customers' production environments. Most of our customers are in the mid-sized range. 

What is most valuable?

The data lineage feature is very valuable. 

What needs improvement?

There is a lack of local support in the China region. 

The solution needs to be available in the Chinese language. 

For how long have I used the solution?

I have been using the solution for 15 years. 

What do I think about the stability of the solution?

On occasion, we experience some issues with performance, so I rate the stability a seven out of ten. 

What do I think about the scalability of the solution?

The solution is scalable.

How are customer service and support?

Local support is lacking in the China region. We try to seek support but also have to do our own research to resolve technical issues. 

How was the initial setup?

The setup is easy and there are only a few steps. You just download the package and install it in the customer's environment. 

What about the implementation team?

We implement the solution for customers and deployments take several days. We handle everything so the customer can just start using the solution. One person can handle setup and deployment. 

What's my experience with pricing, setup cost, and licensing?

There are two license options and the pricing is reasonable.  

What other advice do I have?

The solution is the best option in the market. I rate it a seven out of ten. 

Disclosure: My company has a business relationship with this vendor other than being a customer: Reseller
PeerSpot user
ElbertTarrosa - PeerSpot reviewer
Enterprise Data Architect at Unionbank Philippines
Real User
Top 20 Leaderboard
A high performing solution for designing and deploying enterprise datasets
Pros and Cons
  • "I have worked with erwin Data Modeler for quite some time and familiarity is its most valuable feature."
  • "The reverse engineering in Oracle Databases needs improvement, as there are issues."

What is our primary use case?

The solution is primarily used for banking data models.

What is most valuable?

I have worked with erwin Data Modeler for quite some time and familiarity is its most valuable feature. 

What needs improvement?

The reverse engineering in Oracle Databases needs improvement, as there are issues. 

For how long have I used the solution?

I have been using erwin Data Modeler for twelve years.

What do I think about the stability of the solution?

It is a stable solution. I rate the stability an eight out of ten. 

What do I think about the scalability of the solution?

The solution's scalability depends on the device that you use it on. 

How are customer service and support?

The technical support team is good. I rate the support an eight out of ten.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

Previously, I used SAP PowerDesigner.

How was the initial setup?

The initial setup is straightforward. The deployment takes a few minutes when entering the license and installing it. 

What's my experience with pricing, setup cost, and licensing?

It is not a very expensive solution. Only the licensing and maintenance fee needs to be paid. 

What other advice do I have?

I recommend erwin Data Modeler because it is a good data modeling solution in comparison to others available in the market.

I rate the overall solution an eight out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Independent Consultant at a tech consulting company with 1-10 employees
Real User
Complete Compare is good for double checking your work and ensuring that your model reflects the database design
Pros and Cons
  • "The generation of DDL saved us having to write the steps by hand. You still had to go in and make some minor modifications to make it deployable to the database system. However, for the data lineage, it is very valuable for tracing our use of data, especially personal confidential data through different systems."
  • "The report generation has room for improvement. I think it was version 8 where you had to use Crystal Reports, and it was so painful that the company I was with just stayed on version 7 until version 9 came out and they restored the data browser. That's better than it was, but it's still a little cumbersome. For example, you run it in erwin, then export it out to Excel, and then you have to do a lot of cosmetic modification. If you discover that you missed a column, then you would have to rerun the whole thing. Sometimes what you would do is just go ahead and fix it in the report, then you have to remember to go back and fix it in the model. Therefore, I think the report generation still could use some work."

What is our primary use case?

The use case was normally to update data model designs for transaction processing systems and data warehouse systems. Part of our group also was doing data deployment, though I personally didn't do it. The work I did was mostly for the online transaction systems and for external file designs.

I didn't use it for data sources. I used the solution for generation of code for the target in the database. Therefore, I went from the model to the database by generating the DDL code out of erwin.
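To give a rough picture of that model-to-database step, here is a minimal sketch, assuming the forward-engineered DDL has been saved to a .sql file and the target database is reachable over ODBC. The file name, connection string, and the simple semicolon splitting are placeholders, not our actual setup.

```python
# Minimal sketch: apply a forward-engineered DDL script to the target database.
# The file name, ODBC connection string, and semicolon-based statement
# splitting are assumptions for illustration only.
import pyodbc

DDL_FILE = "warehouse_model.sql"              # hypothetical export from the model
CONN_STR = "DSN=TargetDW;UID=deploy;PWD=change_me"  # hypothetical ODBC connection

with open(DDL_FILE, encoding="utf-8") as f:
    script = f.read()

# Split on semicolons; real scripts with procedures may need a smarter splitter.
statements = [s.strip() for s in script.split(";") if s.strip()]

conn = pyodbc.connect(CONN_STR)
cursor = conn.cursor()
for stmt in statements:
    cursor.execute(stmt)   # create tables, indexes, constraints, etc.
conn.commit()
conn.close()
print(f"Applied {len(statements)} DDL statements.")
```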

We had it on-premise. There was a local database server on SQL Server, and we each had a client that we installed on our machines.

How has it helped my organization?

At one of my previous jobs, we had a lot of disparate databases that people built on their PCs, which were under their desks. We were under a mandate to bring all of that into a controlled environment that our DBAs could monitor, tune, etc. Therefore, this was a big improvement. I would put the data from whatever source into an Excel spreadsheet, convert it into a SQL file (putting in the commas), and then I could reverse engineer that SQL into a data model. That saved us a tremendous amount of time compared to building the data model from scratch.
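As a rough sketch of that spreadsheet-to-SQL step (the CSV layout and file names are made up for illustration), the idea is to turn a column inventory into CREATE TABLE statements that the modeling tool can then reverse engineer:

```python
# Minimal sketch: turn a column inventory kept in a spreadsheet (saved as CSV)
# into CREATE TABLE statements that a modeling tool can reverse engineer.
# The CSV layout (table_name, column_name, data_type, nullable) is an assumption.
import csv
from collections import defaultdict

tables = defaultdict(list)
with open("column_inventory.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        null_clause = "" if row["nullable"].upper() == "Y" else " NOT NULL"
        tables[row["table_name"]].append(
            f'    {row["column_name"]} {row["data_type"]}{null_clause}'
        )

with open("reverse_engineer_me.sql", "w", encoding="utf-8") as out:
    for table, columns in tables.items():
        out.write(f"CREATE TABLE {table} (\n" + ",\n".join(columns) + "\n);\n\n")
```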

I educated a number of my colleagues who were in data architecture and writing the DDL by hand. I showed them, "You do it this way from the model." That way, you never have to worry about introducing errors or having a disconnect between what is in the model and the database. I was able to get management support for that. We enhanced the accuracy of our data models.

What is most valuable?

I do like the whole idea of being able to identify your business rules. In my last position, I got acquainted with using it for data lineage, which is so important now with the current regulatory environment because there are so many laws or regulations that need to be adhered to. 

If you're able to show where the data came from, then you know the source. For example, I was able to use user-defined properties (UDPs) on one job where we were bringing in the data from external XML files. I would put it at the UDP level, where the data came from. On another job, we upgraded a homegrown database that didn't meet our standards, so we changed the naming standards. I put in the formally known UDPs so I could run reports, because our folks in MIS who were running the reports were more familiar with the old names than the new names. Therefore, I could run the report so they could see, "This is where you find what you used to call X, and it is now called Y." That helped. 

The generation of DDL saved us having to write the steps by hand. You still had to go in and make some minor modifications to make it deployable to the database system. However, for the data lineage, it is very valuable for tracing our use of data, especially personal confidential data through different systems.

Complete Compare is good for double checking your work, how your model compares with prior versions, and making sure that your model reflects the database design. At my job before my last one, every now and then the DBAs would go in and make updates to correct a production problem, and sometimes they would forget to let us know so we could update the model. Therefore, periodically, we would go in and compare the model to the database to ensure that there weren't any new indexes or changes to the sizes of certain data fields without our knowing it. However, at the last job I had, the DBAs wouldn't do anything to the database unless it came from the data architects so I didn't use that particular function as much.

If the source of the data is an OLTP system and you're bringing it into a data warehouse, erwin's ability to compare and synchronize data sources with data models, in terms of accuracy and speed, is excellent for keeping them in sync. We did a lot of our source-to-target work with Informatica. We used erwin to sometimes generate the spreadsheets that we would give our developers. This was a wonderful feature that isn't very well-known nor well-publicized by erwin. 

Previously, we were manually building these Excel spreadsheets. By using erwin, we could click on the target environment, which is the table that we wanted to populate. Then, it would automatically generate the input to the Excel spreadsheet for the source. That worked out very well.
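A minimal sketch of that kind of source-to-target spreadsheet generation looks like this; the table names, columns, and the default one-to-one mapping are illustrative only, not what erwin actually emits:

```python
# Minimal sketch: build a source-to-target mapping spreadsheet (as CSV) for the
# developers, given the target table's column list pulled from the model.
# Table names, column names, and the default one-to-one mapping are illustrative.
import csv

target_table = "DW_CUSTOMER"
target_columns = ["CUSTOMER_ID", "FIRST_NAME", "LAST_NAME", "CREATED_DT"]
source_table = "CRM_CUST"   # hypothetical source system table

with open(f"{target_table}_source_to_target.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Source Table", "Source Column", "Target Table",
                     "Target Column", "Transformation Rule"])
    for col in target_columns:
        # Default to a straight move; the modeler fills in real rules later.
        writer.writerow([source_table, col, target_table, col, "direct move"])
```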

What needs improvement?

When you do a data model, you can depict the tables. However, sometimes I would find it quicker to just do a screenshot of the tables in the data model, put it in a Word document, and send it to the software designers and business users to let them see how I organized the data. We could also share the information on team calls, so everybody could see it. That was quicker than trying to run reports out of erwin, because sometimes we got mixed results which took more time than they were worth. If you're just going in and making changes to a handful of tables, I didn't find the reporting capabilities that flexible or easy to use. 

The report generation has room for improvement. I think it was version 8 where you had to use Crystal Reports, and it was so painful that the company I was with just stayed on version 7 until version 9 came out and they restored the data browser. That's better than it was, but it's still a little cumbersome. For example, you run it in erwin, then export it out to Excel, and then you have to do a lot of cosmetic modification. If you discover that you missed a column, then you would have to rerun the whole thing. Sometimes what you would do is just go ahead and fix it in the report, then you have to remember to go back and fix it in the model. Therefore, I think the report generation still could use some work.
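One way to make that cosmetic rework repeatable, so a missed column only means rerunning a script instead of redoing the formatting by hand, is a small clean-up step like the sketch below. It assumes the report was exported to CSV, and the column names are made up:

```python
# Minimal sketch: repeatable clean-up of a report exported from the modeling
# tool, so cosmetic fixes don't have to be redone by hand after each rerun.
# The export file name and the column names are assumptions.
import csv

KEEP = ["Entity Name", "Attribute Name", "Logical Data Type", "Definition"]

with open("model_report_export.csv", newline="", encoding="utf-8") as src, \
     open("model_report_clean.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=KEEP)
    writer.writeheader()
    for row in reader:
        # Keep only the columns we report on, trimming stray whitespace.
        writer.writerow({k: (row.get(k) or "").strip() for k in KEEP})
```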

I don't see that it helped me that much in identifying data sources. Instead, I would have to look at something like an XML file, then organize and design it myself.

For how long have I used the solution?

I started working with Data Modeler when I was in the transportation industry. However, that was in the nineties, when it was version 1 and less than $1,000.

What do I think about the stability of the solution?

I found it pretty stable. I didn't have any problems with it. 

When you're working with ModelMart, once in a while the connection would drop. What I don't like is that if you don't consistently save, you could lose a lot of changes. That's something that I think should work more like Word. If for some reason your system goes down, there's an interruption, or you just forget or get distracted by a phone call, you can go back and find that something happened. You might have lost hours' worth of work. That was always painful.

What do I think about the scalability of the solution?

I have worked on databases that had as many as a thousand tables. In terms of volume and versioning, it is fine. We've used ModelMart to house versions, which introduces another level of complexity in keeping the versioning consistent. 

There is a big learning curve with using ModelMart. Therefore, a lot of groups don't really fully utilize it the way they should. You need somebody to go in there every now and then to clean things up. We had some pretty serious standards around when you deployed it to production and how you moved it in ModelMart. We would use Complete Compare there. It scaled well that way. 

In terms of the number of users, we had 20 to 30 different data architects using it. I don't know that everybody was on it full-time, all the time. I never saw a conflict where we were having trouble because too many people were using it. From that point, it was fine.

I think the team got as large as it was going to get. In fact, right now they're on a hiring freeze because of COVID-19.

How are customer service and technical support?

Over a period of five or 10 years, the few times I've had to go all the way through to erwin, I talked to the same young lady, who is very good. She understood the problem, worked it, and would give me the solution within two phone calls. This was very good.

Which solution did I use previously and why did I switch?

Prior to erwin, I had used Bachman and IEF. Bachman I liked better, but IEF was way too cumbersome. 

Bachman was acquired by another company and disappeared from the marketplace. The graphics were very pretty on Bachman. Its strongest feature was reverse engineering databases. I found erwin just as robust with its reverse engineering. 

IEF also disappeared from the marketplace, and I didn't use it very much. I didn't like it, as it was way too cumbersome. You needed a local administrator. It was really tough. It promised to generate code and databases, and it was supposed to be an all-encompassing CASE tool. I just don't think it really delivered on that promise.

It could very well be that the coding of those solutions didn't keep up with the latest languages. There was a real consolidation of data modeling tools in the last 15 to 18 years. Now, you've only got erwin and maybe Embarcadero. I don't think there's anything else. erwin absorbed a lot of the other solutions but didn't integrate them very well. We were suffering when it didn't work. However, with the latest versions, I think they've overcome a lot of those problems.

How was the initial setup?

Usually, the companies already had erwin in place. We had one company where the DBAs would sort of get us going.

The upgrades were complex. They required a lot of testing. About a year ago, we held off doing them because, although we wanted to upgrade to the latest version, we were in the midst of a very big system upgrade. Nobody wanted to take the time. It took one of our architects working with other internal organizations, and then there were about three or four of us who tried to do the testing of the features. It was a big investment of time, and I thought that it should have been more straightforward. I think companies would be more willing to upgrade if it wasn't so painful.

The upgrade took probably two months because nobody was working on it full-time. They would work on it while they could. One of the architects ended up working late, over the weekends, and everything trying to get it ready before we could roll it out to the entire team.

For the upgrades, there were at least half a dozen people across three different groups. There were three or four data architects in our group, then we had two or three desktop support and infrastructure people for the server issues.

What about the implementation team?

I think they used Sandhill for the initial installation.

If it's the first time, I recommend engaging a third-party integrator, like Sandhill, which I found to be very good and responsive.

What's my experience with pricing, setup cost, and licensing?

We always had a problem keeping track of all the licenses. All of a sudden you might get a message that your license expired and you didn't know, and it happens at different times. At GM Finance, they engaged Sandhill to help us manage it. I was less involved because of the use of Sandhill, who was very helpful when we had trouble with our license. I remember you had to put in a long string of characters and be very careful that you didn't cut and paste it in an email, but that you generated it. It was so sensitive and really difficult until the later upgrades.

If there was a serious problem, it was usually around the licensing, where there was some glitch. Then, we would call Sandhill, who would help us out with it. That's an area where we had to involve a third party for technical difficulties.

I wish it wasn't so expensive. I would love to personally buy a copy of my own and have it at home, because the next job that I'm looking at is probably project management and I might not have access to the tool. I would like to keep my ability to use the tool. Therefore, they should probably have a pricing for people like me who want to just use the solution as an independent consultant, trying to get started. $3,000 is a big hit.

I think you buy a block of users because I know the company always wanted to manage the number of licenses. 

Which other solutions did I evaluate?

I really haven't spent a lot of time on other data modeling tools. I have heard people complain about erwin quite a bit, "Oh, we wish we had Embarcadero," or something like that. I haven't worked with those tools, so I really can't say that they're better or worse than erwin, since erwin is the only data modeling tool that I've used in the last 15 years.

What other advice do I have?

There might be some effort to do some cloud work at my previous place of employment, but I wasn't on those projects. I don't think they've settled on how they're going to depict the data.

Some of the stuff in erwin Evolve, and the way in which it meshes with erwin Data Modeler, was very cool.

Sometimes, your model would get corrupted, but you could reverse engineer it and go back in, then regenerate the model by using the XML that was underlying the model. This would repair it. When I showed this to my boss, he was very impressed. He said, "Oh man, this is where we used to always have to call Sandhill." I replied, "You don't have to do that. You need to do this." That worked out pretty well.

Biggest lesson learnt: The value of understanding your data in a graphical way has been very rich in communicating to developers and testers when they recognize the relationships and the business rules. It made their lives so much easier in the capturing of the metadata and business English definitions, then generating them. Everybody on the team could understand what this data element or group of data elements represented. This is the biggest feature that I've used in my development and career.

I would rate this solution as an eight out of 10. 

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Enterprise Data Architect at a energy/utilities company with 1,001-5,000 employees
Real User
Makes logical and conceptual models easy to look at, helping us to engage and collaborate with the business side
Pros and Cons
  • "It's important to create standard templates — Erwin is good at that — and you can customize them. You can create a standard template so that your models have the same look and feel. And then, anyone using the tool is using the same font and the same general layout. erwin's very good at helping enforce that."
  • "Another feature of erwin is that it can help you enforce your naming standards. It has little modules that you can set up and, as you're building the data model, it's ensuring that they conform to the naming standards that you've developed."
  • "I would like to see improved reporting and, potentially, dashboards built on top of that. Right now, it's a little manual. More automated reporting and dashboard views would help because currently you have to push things out to a spreadsheet, or to HTML, and there aren't many other options that I know of. I would like to be able to produce graphs and additional things right in the tool, instead of having to export the data somewhere else."

What is our primary use case?

We use it for our conceptual business-data model, for logical data modeling, and to generate physical database schemas. We also create dimensional modeling models.

How has it helped my organization?

One of the ways Data Modeler has benefited our company is that it gives us the ability to engage with the business alongside IT, because it's friendly. It has friendly views that we can use when we meet with them. They can follow them and understand them. That increases the quality and accuracy of our IT solutions.

The solution's ability to generate database code from a model for a wide array of data sources helps cut development time. We generate all the DDL for our hub through a modeling exercise and generate the alter statements and maintenance through the erwin modeling tool. I would estimate that reduces development time by 30 to 40 percent because it's so accurate. We don't have to go back in. It takes care of the naming standards and the data types. And because we use OData, we generate our service calls off of those schemas too. So that's also more accurate because it uses what we've created from the model all the way through to a service call with OData.
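As a loose illustration of generating service calls from the same schemas, here is a minimal sketch that derives OData query URLs from the entities in a model. The service root, entity names, and selected fields are hypothetical, not our actual services:

```python
# Minimal sketch: derive OData query URLs from the entity sets defined in the
# model, so the service calls stay aligned with the generated schema.
# The service root, entity names, and selected fields are illustrative.
SERVICE_ROOT = "https://api.example.com/odata/v4"   # hypothetical service root

entities = {
    "Customer": ["CustomerId", "FirstName", "LastName"],
    "Order":    ["OrderId", "CustomerId", "OrderDate"],
}

def build_query(entity: str, top: int = 100) -> str:
    # $select keeps the call in step with the modeled attributes.
    select = ",".join(entities[entity])
    return f"{SERVICE_ROOT}/{entity}?$select={select}&$top={top}"

for name in entities:
    print(build_query(name))
```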

What is most valuable?

I find the logical data modeling very useful because we're building out a lot of our integration architecture. The logical is specific to my role, since I do conceptual/logical, but I partner with a team that does the physical. And we absolutely see value in the physical, because we deploy databases for some of those solutions.

I would rate erwin's visual data models very highly for helping to overcome data source complexity. We have divided our data into subject areas for the company, and we do a logical data model for every one of those subject areas. We work directly with business data stewards. Because the logical and the conceptual are so easy to look at, the business side can be very engaged and collaborate on those. That adds a lot of value because they're then governing the solutions that we implement in our architecture.

We definitely use the solution's ability to compare and synchronize data sources with data models. We have a data hub that we've built to integrate our data. We're able to look at the data model from the source system, the abstracted model we do for the hub, and we can use erwin to reverse-engineer a model and compare them. We also use these abilities for the lifecycle of the hub. If we make a change, we can run a comparison report and file it with the release notes.
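The comparison can be pictured as a diff between two table-to-column maps. The sketch below is not Complete Compare itself, just an illustration of the kind of report it produces; the schemas are made up:

```python
# Minimal sketch: diff two schema snapshots (e.g., the reverse-engineered source
# and the hub model) the way a compare report would, listing added, dropped,
# and changed columns. The schemas shown are illustrative only.
source_model = {
    "CUSTOMER": {"CUSTOMER_ID": "INT", "NAME": "VARCHAR(100)"},
    "ORDERS":   {"ORDER_ID": "INT", "ORDER_DT": "DATE"},
}
hub_model = {
    "CUSTOMER": {"CUSTOMER_ID": "INT", "NAME": "VARCHAR(200)"},
    "ORDERS":   {"ORDER_ID": "INT", "ORDER_DT": "DATE", "STATUS": "CHAR(1)"},
}

for table in sorted(set(source_model) | set(hub_model)):
    src_cols = source_model.get(table, {})
    hub_cols = hub_model.get(table, {})
    for col in sorted(set(src_cols) | set(hub_cols)):
        if col not in src_cols:
            print(f"{table}.{col}: only in hub model")
        elif col not in hub_cols:
            print(f"{table}.{col}: only in source model")
        elif src_cols[col] != hub_cols[col]:
            print(f"{table}.{col}: type {src_cols[col]} -> {hub_cols[col]}")
```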

What needs improvement?

I would like to see improved reporting and, potentially, dashboards built on top of that. Right now, it's a little manual. More automated reporting and dashboard views would help because currently you have to push things out to a spreadsheet, or to HTML, and there aren't many other options that I know of. I would like to be able to produce graphs and additional things right in the tool, instead of having to export the data somewhere else. And that should work in an intuitive way which doesn't require so much of my time or my exporting things to a spreadsheet to make the reporting work.

For how long have I used the solution?

I've used the erwin Data Modeling tool since about 1990. I work more with the Standard Edition, 64-bit.

What do I think about the stability of the solution?

It's stable. This specific tool has been around a long time and it has matured. We don't encounter many defects and, when we do, a ticket is typically taken care of within a couple of days.

What do I think about the scalability of the solution?

We're using standalone versions, so we don't need to scale much. In the Workgroup Edition we've got it on a server and we have concurrent licensing, and we've had no issues with performance. It can definitely handle multiple users when we need it to.

At any time we have six to 10 people using the Workgroup Edition. They are logical data modelers and DBAs.

We've already increased the number of people using it and we've likely topped-out for a while, but we did double it each year over the past three years. We added more licenses and more people during that time. It has probably evolved as far as it's going to for our company because we don't have more people in those roles. We've met our objectives in terms of how much we need.

How are customer service and technical support?

I would rate erwin's technical support at seven out of 10. One of the reasons is that it's inconsistent. Sometimes we get responses quickly, and sometimes it takes a couple of days. But it's mostly good. It's online, so that's helpful. But we've had to follow up on tickets that we just weren't hearing a status on from them.

They publish good forums so you can see if somebody else is having a given problem and that's helpful. That way you know it's not just you.

Which solution did I use previously and why did I switch?

We did not have a previous solution.

How was the initial setup?

I've brought this tool into four different companies, when I came to each as a data architect. So I was always involved early on in establishing the tool and the usage guidelines. The setup process is pretty straightforward, and it has improved over the years.

To install or make updates takes an hour, maybe.

A lot of the implementation strategy for Data Modeler in my current company was the starting of a data governance and data architecture program. Three years ago, those concepts were brand-new to this company. We got the tool as part of the new program.

For deployment and maintenance of the solution we need one to two people. Once it's installed, it's very low maintenance.

What about the implementation team?

We did it ourselves, because we have experience.

What was our ROI?

We're very happy with the return on investment. It has probably exceeded the expectations of some, just because the program is new and they hadn't seen tools before. So everyone is really happy with it.

erwin's automation of reusable design rules and standards, especially compared to those of basic drawing tools, has been part of our high ROI. We're using a tool that we keep building upon, and we are also able to report on it and generate code from it. So it has drastically improved what was a manual process for doing those same things. That's one of the main reasons we got it.

What's my experience with pricing, setup cost, and licensing?

We pay maintenance on a yearly basis, and it's a low cost. There are no additional costs or transactional fees.

The accuracy and speed of the solution in transforming complex designs into well-aligned data sources make the cost of the tool worth it.

Which other solutions did I evaluate?

We looked at a couple of solutions. Embarcadero was one of them.

erwin can definitely handle more DBMSs and formats. It's not just SQL. It has a long list of interfaces with Oracle and SQL Server and XSD formats. That's a very rich set of interfaces. It also does both reverse- and forward-engineering well, through a physical and logical data model. And one of the other things is that it has dimensional modeling. We wanted to use it for our data warehouse and BI, and I don't believe Embarcadero had that capability at the time. Most tools don't have all of that, so erwin was more complete. erwin also has several choices for notation and we specifically wanted to use IDEF notation. erwin is very strong in that.

The con for erwin is the reporting, compared to other tools. The interface and reporting could be improved.

What other advice do I have?

My advice would depend on how you're going to be using it. I would definitely advise that, at a minimum, you maintain logical and physical views of the data. That's one of the strengths of the tool. Also, while this might sound like a minor thing, it's important to create standard templates — Erwin is good at that — and you can customize them. You can create a standard template so that your models have the same look and feel. And then, anyone using the tool is using the same font and the same general layout. erwin's very good at helping enforce that. You should do that early on so that you don't have to redo anything later to make things look more cohesive.

Another feature of erwin is that it can help you enforce your naming standards. It has little modules that you can set up and, as you're building the data model, it's ensuring that they conform to the naming standards that you've developed. I think that's something that some people don't realize is there and don't take advantage of.

The biggest lesson I have learned from using this solution faces in two directions. One is the ability to engage the business to participate in the modeling. The second is that the forward-engineering and automation of the technical solution make it more seamless all the way through. We can meet with the business, we can model, and then we can generate a solution in a database, or a service, and this tool is our primary way for interacting with those roles, and producing the actual output. It's made things more seamless.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Senior Project Manager at a tech services company with 51-200 employees
Real User
Stable, scales well, satisfactory support, and saves time during project reengineering
Pros and Cons
  • "There is absolutely no problem with the stability."
  • "The erwin ETL functionality has room for improvement when it comes to mapping databases with a classic entity-relationship model to a data warehouse model."

What is our primary use case?

For the first 30 years of my career, I worked on many small projects. Since erwin was released, I used it to help develop projects up until about two years ago. At that time, I moved to a new company and I still use erwin in my current role.

When I moved to the new company, I recommended erwin and explained it to my colleagues and my clients. When the most recent version was released, I looked at the licensing and became familiar with its new features and benefits.

I have developed a couple of projects myself in the past two years, including one that had to do with mail in Serbia, which was an interesting project, and another that had to do with handling automotive equipment maintenance. One of the projects is something that I started from the beginning, whereas the other was reengineered, with changes made and new features added.

I have also worked with erwin from a higher-level role. Rather than developing smaller projects, I have taken responsibility for a much larger project worth several million Euros.

How has it helped my organization?

In general, if you start using erwin from the beginning of a project then it provides a lot of benefits. You have to start with the process modeling, and then find data and create an entity, and the process continues. Essentially, you have to have something before you create the data model. However, if you're talking about reengineering a project that has existing data models or existing processes, then the benefits of using erwin are really big. You can save 50% of the time if you're working on reengineering existing processes or existing data models.

The visual data models are okay for helping to overcome data source complexity. If the project is started with erwin from the beginning then I can create the database, stored procedures, and everything that I need. However, when it comes to reengineering an existing product, and if the database changes then some of the stored procedures, as well as other things also need to change. For example, in one project, the original database was Informix and the new one is Microsoft SQL Server.

What needs improvement?

The erwin ETL functionality has room for improvement when it comes to mapping databases with a classic entity-relationship model to a data warehouse model. If you have a legacy database like Informix, Oracle, SQL Server, or something similar, then you need to create a data warehouse database. These use completely different logic and you need to create some procedures to map the tables.
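To picture the kind of mapping procedure I mean, here is a minimal sketch that generates a dimension-load statement from a declarative mapping between a legacy table and a warehouse table. The table names, columns, and the simple INSERT ... SELECT pattern are illustrative assumptions, not erwin output:

```python
# Minimal sketch: generate a dimension-load statement from a declarative mapping
# between a legacy OLTP table and a warehouse dimension. Names and the simple
# INSERT ... SELECT pattern are illustrative only.
mapping = {
    "target": "DIM_CUSTOMER",
    "source": "LEGACY.CUSTOMER",
    "columns": {                    # target column -> source expression
        "CUSTOMER_KEY": "CUST_ID",
        "FULL_NAME":    "FIRST_NM || ' ' || LAST_NM",
        "CITY":         "CITY_NM",
    },
}

targets = ", ".join(mapping["columns"])
sources = ", ".join(mapping["columns"].values())
sql = (
    f"INSERT INTO {mapping['target']} ({targets})\n"
    f"SELECT {sources}\n"
    f"FROM {mapping['source']};"
)
print(sql)
```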

The number of supported databases should be extended.

To have more documentation or available knowledge on how to connect is very important. This is probably the most important issue that I have experienced. Specifically, I would like more information on how to connect, how to transfer, and how to do the mapping from a legacy database.

If you try to open a file from an older version of erwin, you can only open files from one version back. This is all that they support, so they need to add the option of opening all older versions. As it is now, they push people to buy a new version every year.

For how long have I used the solution?

We have been using erwin since the beginning when it was first released by Logic Works in 1993.

What do I think about the stability of the solution?

There is absolutely no problem with the stability.

What do I think about the scalability of the solution?

In terms of scalability, there is not enough long-term support for each version of erwin. In the past, the file extension for erwin models was ER1. After that, the file extension was ERW, and now it is ERAN, which created some confusion.

In my current company, I am the only person using erwin because we are not specialists in development. In my previous company, five or six people were using it.

How are customer service and support?

The support is okay and I am satisfied with it. However, it's a little slower getting support for the role that I'm in now, as compared to when I was at my previous company.

In the past, the support was always okay. Within a few hours, I either had an answer or was at least speaking with them. We sent emails to discuss how to solve the problem.

Overall, I'm really satisfied with the support.

Which solution did I use previously and why did I switch?

I have used several other modeling tools in the past, including SAP PowerDesigner and Bizagi. My experience with them has depended on what I needed to do. For example, Bizagi has a completely different way of developing a model. I am not satisfied with it because they don't follow the rules for relational modeling.

On the other hand, PowerDesigner is quite a good tool that works well. It's a complex tool that can be used for data modeling and process modeling. They use BPMN methodology and, in terms of functionality, it has enough. From a cost perspective, it is cheaper than erwin.

How was the initial setup?

The initial setup is straightforward, it was no problem.

The installation can be done in five minutes. The new version may take a little longer, but it is very fast.

What about the implementation team?

When we implement, we start by analyzing the process.

We start with the global entities, looking at the system at a higher level without getting into the relationship model. Then we look for the relationships and foreign keys, and after that we search for the stored procedures and functions.

We first create the keys, the primary and alternative keys in the tables and entities, and at the end we develop the indexing. The indexing requires daily analysis once the database is put into operation; by looking at the speed of everything, you can change the indexing to make your database faster.

What was our ROI?

In my previous company, we had a really large return on investment from using erwin. In one of the systems that we re-engineered, there were more than 2,000 tables. If these had to be created from the beginning, then it would have taken a really long time to collect all of the information. When it comes to reengineering, the database usually stays the same, with perhaps 20% to 30% of the model being modified.

In my current company, we are trying to educate our clients on using erwin. Many of them are not using it in their everyday business. The problem is that bigger organizations, like government departments, usually want to have somebody from outside their own organization develop the solution.

What's my experience with pricing, setup cost, and licensing?

The price of erwin Data Modeler is very expensive, in particular for this part of the world. I think that for the United States and Europe, the price is probably okay. However, in Serbia, the salary of an IT engineer is perhaps 50% of what it is in the United States. Because of this, erwin needs to have a different pricing model for different countries.

For example, you cannot sell products in places like Serbia, Croatia, Bosnia, Bulgaria, Romania, and other places in this part of Europe at the same price as countries like Germany, Norway, or the United States. This is something that needs to change from a licensing perspective.

What other advice do I have?

In terms of erwin's code generation and the accurate engineering of data sources, for some of the databases it is quite okay. However, for others, it does not exactly follow the rules of the database in the way that I want when generating the model.

There are two ways to generate the database from a model. The first is to create a schema, which is a textual file that contains everything needed to create the complete database structure. The second is to have erwin connect to the database directly. In this case, erwin creates the database itself.

In some cases, it is better to first create a DB schema, which is an SQL file where you can look for syntax errors or other problems in the code. Once complete, you can create the database, including the tables and everything else.
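A small sanity check over that kind of generated schema file might look like the sketch below, which just inventories the CREATE statements before anything is run against the database. The file name and the simple regular expression are assumptions about a typical DDL export:

```python
# Minimal sketch: inventory a generated DDL schema file before executing it,
# so syntax or naming problems can be spotted in the text first.
# The file name and the simple CREATE-statement regex are assumptions.
import re

with open("db_schema.sql", encoding="utf-8") as f:
    script = f.read()

pattern = re.compile(r"CREATE\s+(TABLE|INDEX|VIEW|PROCEDURE)\s+([\w.]+)",
                     re.IGNORECASE)
creates = pattern.findall(script)

for kind, name in creates:
    print(f"{kind.upper():10} {name}")
print(f"{len(creates)} CREATE statements found.")
```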

When I start to use erwin in a project, it is normally right after I analyze the process. The second thing I do is look at the global entities, so I can view the system from a high level without dealing with the relationship model. After that, I start looking for relationships, creating the primary and alternative keys in the table. I then start looking for foreign keys. At that stage, I begin to look for stored procedures and functions. After this, I work on the creation of indexes.

The indexing needs to be analyzed daily, once the database is put into operation. This helps with database performance. When you change the indexing, the database gets faster.

My advice for anybody who is planning to use erwin is that sometimes, it should be used to develop models right from the beginning. It will depend on the project, as well as the organization and the experience that they have with erwin. It is also possible to have different people and different teams from the same company working on one model. For example, we have three development centers that are all working on the same model.

The biggest lesson that I have learned from using erwin DM is that it pushes you to use the notation and methodology exactly. You must follow the rules. Several years ago, they started adding tools and options that are used to verify a model, and this functionality helps to point out mistakes in the models. Once the model is correct, you can move on to working with the databases and the specifics of each one. You can move very easily between databases such as Informix, Oracle, and MySQL, without losing much time.

I would rate this solution a ten out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Senior Data Warehouse Architect at a financial services firm with 1,001-5,000 employees
Real User
Support for Snowflake is very helpful from the data modeling perspective, and JDBC/native connectivity simplifies the push mechanism
Pros and Cons
  • "The logical model gives developers, as well as the data modelers, an understanding of exactly how each object interacts with the others, whether a one-to-many, many-to-many, many-to-one, etc."
  • "We are planning to move, in 2021, into their server version, where multiple data modelers can work at the same time and share their models. It has become a pain point to merge the models from individual desktops and get them into a single data model, when multiple data modelers are working on a particular project. It becomes a nightmare for the senior data modeler to bring them together, especially when it comes to recreating them when you want to merge them."

What is our primary use case?

We use erwin DM as a data modeling tool. All projects in the data warehouse area go through the erwin model first and get reviewed and approved. That's part of the project life cycle. We then push the scripts out of DM into Snowflake, which is our target database. Any changes that happen after that also go through erwin, and we then make a master copy of the erwin model.

Our solution architecture for projects that involve erwin DM and Snowflake is an on-prem Data Modeler desktop version, and we have a SQL database behind it and that's where the models are stored. In terms of erwin Data Modeler, Snowflake is the only database we're using.

We are not utilizing a complete round-trip from DM for Snowflake. We are only doing one side of it. We are not doing reverse-engineering. We only go from the data model to the physical layer.

How has it helped my organization?

We use erwin Data Modeler for all enterprise data warehouse-related projects. It is very vital that the models should be up and running and available to the end-users for their reporting purposes. They need to be able to go through them and to understand what kinds of components and attributes are available. In addition, the kinds of relationships that are built in the data warehouse are visible through erwin DM. It is very important to keep everybody up to the mark and on the same page. We distribute erwin models to all the business users, our business analysts, as well as the developers. It's the first step for us. Before something gets approved we generally don't do any data work. What erwin DM does is critical for us.

erwin DM's support for Snowflake is very helpful from the data modeling perspective and, obviously, the JDBC and native connectivity also helps us in simplifying the push mechanism we have in erwin DM. 
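As a rough sketch of what that push into Snowflake can look like with the native Python connector (the account, credentials, and file name are placeholders, and our actual mechanism may differ):

```python
# Minimal sketch: push forward-engineered DDL into Snowflake with the Python
# connector. Account, credentials, and the DDL file name are placeholders.
import snowflake.connector

with open("hub_model.sql", encoding="utf-8") as f:
    statements = [s.strip() for s in f.read().split(";") if s.strip()]

conn = snowflake.connector.connect(
    account="xy12345",          # hypothetical account identifier
    user="DEPLOY_USER",
    password="change_me",
    warehouse="DEPLOY_WH",
    database="EDW",
    schema="HUB",
)
cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)       # CREATE TABLE / ALTER TABLE statements
finally:
    cur.close()
    conn.close()
```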

What is most valuable?

Primarily, we use erwin for data modeling only, the functionality which is available to do logical models and the physical model. Those are the two areas which we use the most: we use a conceptual model first and the logical model, and then the physical model.

When we do the conceptual data model, we will look at the source and how the objects in the source interact, and that will give us a very clear understanding of how the data is set up in the source environment. The logical model gives developers, as well as the data modelers, an understanding of exactly how each object interacts with the others, whether a one-to-many, many-to-many, many-to-one, etc. The physical model, obviously, helps in executing the data model in Snowflake, on the physical layer.

Compatibility and support for cloud-based databases is very important in our environment because Snowflake is the only database to which we push our physical data structures. So any data modeling tool we use should be compatible with a cloud data warehouse, like Snowflake. It is definitely a very important functionality and feature for us.

What needs improvement?

We are planning to move, in 2021, into their server version, where multiple data modelers can work at the same time and share their models. It has become a pain point to merge the models from individual desktops and get them into a single data model, when multiple data modelers are working on a particular project. It becomes a nightmare for the senior data modeler to bring them together, especially when it comes to recreating them when you want to merge them. That's difficult. So we are looking at the server-based version, where the data modelers can bring the data out, share their models, and merge them with the existing data model on the server.

The version we're not using now—the server version—would definitely help us with the pain point when it comes to merging the models. When you have the desktop version, merging the models, two into one, requires more time. But when we go over to the server, the data models can automatically pull and push.

We will have to see what the scalability is like in that version.

Apart from that, the solution seems to be fine.

For how long have I used the solution?

I've been using erwin DM for years, since the early 2000s and onwards. It's a very robust tool for data modeling purposes.

What do I think about the scalability of the solution?

We have five to seven data modelers working on it at any moment in time. We have not seen any scalability issues or slowness, or any sign that it does not support that level of use, because it's all desktop-based.

When we go into the server model, where the web server is involved, we will have to see. And the dataset storage in the desktop model is also very limited, so I don't think going to the server model is going to impact scalability.

In our company, erwin DM is used only in the data warehouse area at this moment. I don't see any plans, from the management perspective, to extend it. It's mostly for ER diagrams and we will continue to use it in the same way. Depending on the usage, the number of concurrent users might go up a little bit.

How are customer service and technical support?

I have interacted with erwin's technical support lately regarding the server version and they have been very proactive in answering those questions as well as following up with me. They ask if they have resolved the issue or if anything still needs to be done. I'm very happy with erwin's support.

What other advice do I have?

The biggest lesson I have learned from using erwin DM, irrespective of whether it's for Snowflake or not, is that having the model upfront and getting it approved helps in reducing project go-live time. Everybody is on the same page: all the developers, how they interact, how they need to connect the various objects to generate their ETL processes. It also definitely helps business analysts and end-users to understand how to write their Tableau reports. If they want to know where the objects are, how they connect to each other, and whether they are a one-to-one or one-to-many relationship, etc., they can get it out of this solution. It's a very central piece of the development and the delivery process.

We use Talend as our ETL and BI vendor for workload. We don't combine it with erwin DM. Right now, each is used for its own specific need and purpose. erwin DM is mostly for our data modeling purposes, and Talend is for integration purposes.

Overall, erwin DM's support for Snowflake is very good. It's very stable and user-friendly and our data modelers live, day in and day out, on it. No complaints. There is nothing that impacts their performance.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Data Architect at Teknion Data Solutions
Real User
Its ability to standardize data types and some common attributes is pretty powerful
Pros and Cons
  • "We use the macros with naming standards patterns, domains, datatypes, and some common attributes. As far as other automations, a feature of the Bulk Editor is mass updates. When it sees something is nonstandard or inaccurate, it will export the better data out. Then, I can easily see which entities and attributes are not inline or standard. I can easily make changes to what was uploaded to the Bulk Editor. When taking on a new project, it can save you about a half a day on a big project across an entire team."
  • "The Bulk Editor needs improvement. If you had something that was a local model to your local machine, you could connect to the API, then it would write directly into the repository. However, when you have something that is on the centralized server, that functionality did not work. Then, you had to export out to a CSV and upload up to the repository. It would have been nice to be able to do the direct API without having that whole download and upload thing. Maybe I didn't figure it out, but I'm pretty sure that didn't work when it was a model that sat on a centralized repository."

What is our primary use case?

My previous employer's use case was around data warehousing. We used it to house our models and data dictionaries. We didn't do anything with BPM, etc. The company that I left prior to coming to my current company had just bought erwin EDGE. Therefore, I was helping to see how we could leverage the integration between erwin Mapping Manager and erwin Data Modeler, so we could forward engineer our models and source-to-target mappings, and map our data dictionary to our business definitions.

We didn't use it to capture our sources. It was more target-specific. We would just model and forward engineer our targets, then we managed the source-to-target mappings in Excel. Only when the company first got erwin EDGE did we start to look at leveraging erwin Mapping Manager to manage the source-to-target mappings, but that was still a POC. 

As far as using erwin DM for anything source-specific, we didn't do anything with that. It was always target-focused. 

How has it helped my organization?

It improved the way we were able to manage our models. I come from a corporate background, working for some big banks. We had a team of about 10 architects who were spread out, but we were able to collaborate very well with the tool.

It was a good way to socialize the data warehouse model within our own team and to our end users. 

It helped manage some of the data dictionary stuff, which we could extract out to end users. It provided a repository of the data warehouse models, centralizing them. It also was able to manage the metadata and have the dictionary all within one place, socializing that out from our repository as well.

Typically, for an engineer designing and producing the DDL out of erwin, we will execute it into the database, then they have a target that they can start coding towards. 

What is most valuable?

  • Being able to manage the domains.
  • Ability to standardize our data types and some common attributes, which was pretty powerful. 
  • The Bulk Editor: I could extract the metadata into Excel (or something) and be able to make some mass changes, then upload it back.

We use the macros with naming standards patterns, domains, datatypes, and some common attributes. As far as other automation, a feature of the Bulk Editor is mass updates. When something is nonstandard or inaccurate, I can export the metadata out. Then, I can easily see which entities and attributes are not in line with the standard, and I can easily make changes to what gets uploaded back through the Bulk Editor. When taking on a new project, it can save you about half a day on a big project across an entire team.
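The kind of mass clean-up I am describing can be pictured like the sketch below: the exported metadata is scanned for names or datatypes that break the standard, and a corrected copy is written back for upload. The CSV layout, the naming rule, and the type substitutions are assumptions for illustration, not erwin's actual export format:

```python
# Minimal sketch: scan metadata exported from the model for nonstandard names
# and datatypes, then write a corrected copy for re-upload. The CSV layout, the
# naming rule (UPPER_SNAKE_CASE), and the type substitutions are assumptions.
import csv
import re

TYPE_FIXES = {"TEXT": "VARCHAR(255)", "NUMBER": "DECIMAL(18,2)"}
NAME_RULE = re.compile(r"^[A-Z][A-Z0-9_]*$")

with open("model_metadata.csv", newline="", encoding="utf-8") as src, \
     open("model_metadata_fixed.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if not NAME_RULE.match(row["attribute_name"]):
            print(f"Nonstandard name: {row['entity_name']}.{row['attribute_name']}")
        # Substitute standard datatypes where a nonstandard one was used.
        row["data_type"] = TYPE_FIXES.get(row["data_type"].upper(), row["data_type"])
        writer.writerow(row)
```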

What needs improvement?

The Bulk Editor needs improvement. If you had something that was a local model to your local machine, you could connect to the API, then it would write directly into the repository. However, when you have something that is on the centralized server, that functionality did not work. Then, you had to export out to a CSV and upload up to the repository. It would have been nice to be able to do the direct API without having that whole download and upload thing. Maybe I didn't figure it out, but I'm pretty sure that didn't work when it was a model that sat on a centralized repository.

For how long have I used the solution?

I have been using erwin since about 2010. I used it last about a year ago at my previous employer. My current employer does not have it.

What do I think about the stability of the solution?

We only had one guy who would keep up with it. Outside of the server work, as far as adding and removing users and doing upgrades (which I would help with sometimes), there were typically only two people on our side maintaining it.

What do I think about the scalability of the solution?

There are about 10 users in our organization.

How was the initial setup?

There were a couple of little things that you had to remember to do. We ran into a couple of issues more than once when we did an upgrade or install. It wasn't anything major, but it was something where you really had to remember how to do it.

It takes probably a few hours. If you do everything correctly, then everything is ready to go.

What about the implementation team?

There were two people from our side who deployed it, a DBA and myself. 

We didn't go directly through erwin to purchase the solution. We used Sandhill Consulting, who provided someone for the setup. We had used them since purchasing erwin. They used to put on workshops, tips and tricks, etc. They're pretty good.

What was our ROI?

Once you start to get into using all the features, it is definitely worth the cost.

Which other solutions did I evaluate?

With erwin Mapping Manager, which I have PoC'd a few times, it was something I always wanted to use to produce ETL code. I have also used WhereScape for several years, and that type of functionality is very useful when producing ETLs from your model. It provides a lot of savings. When you're not dealing with something extremely complex, just something with a lot of repeatable-type stuff, you get a pretty standard, robust model. It's a huge saving to be able to do that with ETL code.

What other advice do I have?

The ability to compare and synchronize data sources with data models in terms of accuracy and speed for keeping them in sync is pretty powerful. However, I have never actually used the models as something that associates source. It is something I would be interested in trying to learn how to use and get involved with that type of feature. It would be nice to be able to have everything tied in from start to finish.

I am now working with cloud and Snowflake. Therefore, I definitely see some very good use cases and benefits for modeling the cloud with erwin. For example, there is so much more erwin can offer for doing something automated with SqlDBM. 

I would rate this solution as an eight out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
PeerSpot user
Erika Lara - PeerSpot reviewer
Senior IT Auditor at Banking
Real User
Top 20 Leaderboard
I like the graphic interface and responsive support team, but the solution is very difficult to set up
Pros and Cons
  • "The principal feature that I liked is that the solution has a very graphic interface."
  • "I would like the solution to be more user-friendly to deploy."

What is our primary use case?

We used this solution to upload and document our database models from the legacy systems.

What is most valuable?

The principal feature that I liked is that the solution has a very graphic interface.

What needs improvement?

I think the interface sometimes looks old-fashioned when compared to other solutions, so maybe that can be improved. 

Also, I would like the solution to be more user-friendly to deploy.

What do I think about the stability of the solution?

After those initial problems, the solution became stable.

What do I think about the scalability of the solution?

The solution only works with one server and one database, so I don't think it's scalable enough.

How are customer service and support?

The solution has a good support center. They provide responses quickly, in about a day. 

How was the initial setup?

The solution was a little hard to set up. We had to request help from our provider because we had some technical problems with getting the solution to work. There were some problems with configuration, and it took about three months to get the solution working.

What other advice do I have?

My advice to those considering this solution would be that they should first evaluate what they need. I suggest they maybe do a POC to evaluate their use cases and then work at finding a solution, and run all the necessary tests before starting to work with the solution.

I would rate this solution as a seven out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user