What is our primary use case?
We work on different platforms such as SQL Server, Oracle, DB2, Teradata, and NoSQL. Requirements come in through an Excel spreadsheet, a Mapping Document, which contains information about the Source and Target, their mappings, and the transformation rules. We understand the requirements and start building the conceptual model, then the logical model. Once we have these Data Models built in the erwin Data Modeler tool, we generate PDF Data Model diagrams and take them to the team (DBAs, BSAs, QA, and others) to explain the model diagram. Once everything is reviewed, we go on to discuss the physical Data Model. This is one aspect of the requirements, from the Data Warehouse perspective.
The other aspect of the requirements can come from the operational systems, where the application requirements might arrive as DDL files with a .sql extension. We reverse engineer those files and have the models generated within erwin Data Modeler. For some of them, we follow the same templates as they are. For others, once we reverse engineer and have the model within erwin, we change entity names and table names and capture metadata according to RBC standards. We have standards defined internally, and we apply these standards to the Data Models.
How has it helped my organization?
There are different access-level permissions given to different users: Data Modelers, Data Architects, Database Administrators, etc. These permissions include read, write, and delete options. Some team members have only read-only access to the Data Models, while others have more. This helps us secure and maintain the Data Models.
The solution’s ability to generate database code from a model for a wide array of data sources cuts development time, but only in those scenarios where we have the data model built in the erwin tool. E.g., I can generate a DDL for the DBAs to create tables on the database. In other scenarios, it is the DBAs who access the erwin tool with read-only access; they fetch the DDLs from the models that we created. Once the DDL is generated from the erwin tool, it is all about running the script on the database to create the tables and relationships. In some other scenarios, we might add an index or a default value based on the requirements. 90 percent of the work is done by the tool.
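The hand-off itself is simple: the DBAs just execute the generated script. A minimal sketch of that step, assuming a made-up two-table DDL and using SQLite purely for illustration (our DBAs actually run these scripts on platforms like SQL Server or Oracle):

```python
import sqlite3

# A DDL script of the kind erwin generates from the physical model.
# Table, column, and constraint names here are hypothetical.
ddl_script = """
CREATE TABLE customer (
    customer_id   INTEGER PRIMARY KEY,
    full_name     TEXT NOT NULL,
    created_date  TEXT
);

CREATE TABLE account (
    account_id    INTEGER PRIMARY KEY,
    customer_id   INTEGER NOT NULL REFERENCES customer(customer_id),
    balance       REAL DEFAULT 0.0
);

-- Requirement-driven extras, such as an index, are sometimes added by hand.
CREATE INDEX ix_account_customer ON account(customer_id);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl_script)  # running the script creates tables and relationships

tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
print(tables)  # ['account', 'customer']
```

Because the script comes straight out of the model, the tables and relationships on the database match the reviewed diagram by construction.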
The solution’s code generation ensures accurate engineering of data sources, with almost no development time. We have been using this solution for so long that all the code it generates is accurate to the requirements. Once we generate the DDLs out of erwin, the development team does a quick line-by-line review of the script, then simply runs it on the database and looks into other requirements, such as indexes. So, there is less effort on the development side to create tables or build a database.
What is most valuable?
We have a very large number of operational and Data Mart Data Models inside the erwin tool, with a huge volume of metadata captured. When we are working on a very large requirement, there is an option called the Bulk Editor where we can load large files into erwin in one shot to build the Data Model in much less time. All the built-in features are easy to use.
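For a large requirement, we prepare the input file outside the tool first, pulling attribute metadata out of the mapping document. A minimal sketch of generating such a file with Python's `csv` module; the column layout shown is a hypothetical example, not the Bulk Editor's actual import schema:

```python
import csv
import io

# Attribute metadata collected from the mapping document.
# The fields shown are illustrative, not the Bulk Editor's actual schema.
attributes = [
    {"entity": "Customer", "attribute": "Customer_ID", "datatype": "INTEGER",
     "definition": "Surrogate key for the customer."},
    {"entity": "Customer", "attribute": "Full_Name", "datatype": "VARCHAR(100)",
     "definition": "Customer's legal name."},
]

buffer = io.StringIO()
writer = csv.DictWriter(
    buffer, fieldnames=["entity", "attribute", "datatype", "definition"])
writer.writeheader()
writer.writerows(attributes)

bulk_file = buffer.getvalue()
print(bulk_file)
```

Loading one prepared file in a single shot is what saves the time, compared with keying hundreds of entities and attributes in by hand.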
We make use of the solution’s configurable workspace and modeling canvas. All the available features help us build our Data Model: show the entities and the relationships between them, define the data types, and add descriptions for the entities and attributes. With all of this, we can produce the PDF version of the Data Model diagram and send it out for any team to review.
Not to forget the version-saving feature. Every time we save changes to the Data Models, whether adding, deleting, or modifying, the tool automatically creates a new Data Model version, so we don't lose any work. We can go back to a previous version, reverse all the changes, and make it the current version if needed.
What needs improvement?
Some source operational systems give us DDLs to work with, and they contain content that should not be part of the DDL before we reverse engineer it in erwin DM. Therefore, we manually edit those scripts and then reverse engineer them within the tool. It does take some time to edit these DDL scripts generated by the source operational systems. What I would suggest: it would be helpful if there were a place within the erwin tool to import the file and automatically eliminate all the unnecessary lines of code, leaving just the clean code to generate the table/data model.
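To illustrate the kind of clean-up we currently do by hand, here is a minimal sketch of a script that strips unwanted lines from a source DDL before reverse engineering. The filtering rules, and the `GRANT`/`COMMENT ON` statements used as examples of unwanted content, are assumptions for illustration, not erwin functionality:

```python
import re

# Raw DDL as exported by a source operational system (hypothetical sample).
# The comment, GRANT, and COMMENT ON lines are examples of content
# we do not want carried into the model.
raw_ddl = """\
-- Generated by the source system export
CREATE TABLE orders (
    order_id INTEGER NOT NULL,
    status   VARCHAR(10)
);
GRANT SELECT ON orders TO reporting_role;
COMMENT ON TABLE orders IS 'Operational orders feed';
"""

# Patterns for lines we strip before reverse engineering.
UNWANTED = re.compile(r"^\s*(--|GRANT\b|COMMENT ON\b)", re.IGNORECASE)

clean_ddl = "\n".join(
    line for line in raw_ddl.splitlines() if not UNWANTED.match(line)
)
print(clean_ddl)
```

Something like this built into the tool's import step, with the rules configurable per source system, is essentially what I am suggesting.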
For how long have I used the solution?
I have been using this tool for five years. I have used this tool at my previous companies as well as in my current company.
What do I think about the stability of the solution?
One recent scenario we came across in our day-to-day activities: the Data Models are growing in very large numbers, and for some reason the performance was a bit low. It was suggested that we upgrade to the newer version, erwin Data Modeler 2019 R1, so we are already in the process of moving to it. Once we migrate, we will do all the user testing to see how the performance has improved over the previous version. If there are still any performance issues or other feature errors, we will get back to the support team.
So far, whenever we have moved to a newer version, there has always been a positive result. We keep that version until we see a newer one. Every six months or once a year, we get in touch with the erwin support team to ask whether any new features or enhancements have been added in the newest version, and whether it is the right time to move to it or to stick with our current version. They make suggestions based on our use cases and requirements.
For deployment and maintenance of this solution, five to 10 people are needed: e.g., two people from our team, two DBAs, and two people from the server team and other teams.
What do I think about the scalability of the solution?
What we have so far is a huge volume of data. We have a very large number of Data Models for Operational Systems and Data Marts, and there is still room for extension and expansion.
Within my current company, this product has been accessed by Data Modelers, Database Administrators, Data Architects, and Data Scientists. 50 to 100 people have access to this solution.
How are customer service and technical support?
Once a year or every two years, we upgrade to the latest version. If we are looking for any new features or enhancements for new use cases or requirements, we get in touch with the erwin support team. They are very helpful in understanding our needs and providing the best possible suggestions and solutions, with very impressive SLAs. They really guide us and give us a solution when we have to upgrade versions.
Which solution did I use previously and why did I switch?
I have not used another solution with my current company. While I have used other solutions before, the majority of the time, I have been with erwin Data Modeler.
How was the initial setup?
Whenever there is a new release, we do the testing and installation from scratch. The initial setup is straightforward. The product downloads onto your system; once you double-click the installer, it gives you basic instructions, like any other product. You just have to click "next," as everything is configured already.
Some things may be company-specific requirements; for these, you have to make sure you select the right options. Apart from that, everything is straightforward until you get to the last page, where you enter your server details and select the Windows credentials to log in, and that part is company-specific.
Once we have it in the production environment, privileges are given only to Data Modelers, who can read, write, and delete to design the Data Model.
What about the implementation team?
This is implemented in-house. The software is packaged by the Application Support team, which deploys it to the production environment through our internal Software Center application. Downloading and installing this solution takes about 40 to 50 minutes.
What was our ROI?
We haven't moved away from this product in a very long time. I am sure the company has seen benefits and profits from the solution, saving a lot of work effort and resources.
The accuracy and speed of the solution in transforming complex designs into well-aligned data sources makes the cost of the tool definitely worth it.
What's my experience with pricing, setup cost, and licensing?
My company bought the license for three years, and it's not an individual license. While you can buy a license for each individual, that would be very expensive. There are what are called concurrent licenses, where you purchase licenses in bulk and 15 to 20 people can access the license and the models. Concurrent licenses scale to the number of users, and the cost is proportional.
Which other solutions did I evaluate?
When I joined the company, the product was already here. Our internal team holds a meeting to discuss new releases of this product. When we talk to the erwin support team, we ask, "What are the newest features? Will they be beneficial for our company based on our requirements and use cases?" Once everyone has given their opinion, we move forward with upgrading to the newer version, considering the performance, new features, and enhancements.
What other advice do I have?
For our use cases and requirements, we are very happy with the erwin product. If we come across any issues or have any doubts about the tool, we get really good support from the erwin support team.
It definitely has a positive impact on overall solutioning because of how it lets us design and capture data. This is definitely something any company that works with data should look into, specifically when there are many database platforms and huge volumes of data involved. It is definitely scalable as well: we are one of the biggest financial institutions and have very large Data Models inside this tool.
The biggest lesson learnt from using this solution is how we can capture metadata along with the data structure of the database models. Sometimes, when we go to the business to show the designs of the conceptual/logical model, they want to understand what the table and each field are about. We have an option to go into each entity/attribute to add the respective information and show them the metadata captured for those entities and attributes.
I would rate the newest release as 9.5 out of 10. When our requirements and use cases change, we move the solution to a newer version and everything works fine. We are happy with that. However, as time goes on, over a year or two, we might come across situations where we look for better enhancements of features or newer features.
Which deployment model are you using for this solution?