erwin Data Intelligence by Quest Benefits

RD
Data Architect at NAMM California

erwin DI needs the Data Modeler, obviously, to be able to harvest data directly from an existing database, or even a brand-new one as you're designing it. That is a huge step in the right direction, although erwin has been known for that for 30 years. But the ability to take that model and interface it directly with the data governance side makes updates easy. It makes it simple for me to move from a development/design stage, through each environment, into production, and to update the documentation using the data harvester, the Metadata Management tool, and the data cataloging module. That really brings it all together.

If I were to note any downside, it's that there are multiple modules and you can't have one without the other if you want to be world-class. But when you have them all, it makes life really easy for something like data profiling of an existing database to know if you want to keep it or not, given that there are so many legacy changes all the way through. The way we do it, when we make a change to a database or we add a database, the model is mapped, we import it, and then we have the data stewards populate any of our descriptions in their glossaries. The tool allows us to see all that instantly, unlike before.

I mentioned we have a data steward program, which is not part of the tool. While the solution has ways of raising issues and requesting data access within it, we're still stumbling with that. Sometimes it's just easier to talk to people. But we find that getting requests, getting data, and updating it is actually a much easier process now.

In addition, the fact that I can always refer back to a centralized location with executive approval has helped me. 

For our business analysts and data analysts, especially for some of the wannabes and the data steward program, we have been able to centralize a tremendous amount of data into a common standard. One of our mandates was to have a Tableau-type business-intelligence component. We went live with our entire enterprise data warehouse, all the tools, in January of 2019, even though we started in 2016. We spent most of that year in massive amounts of discovery just around our organization's members. We didn't even get to claims or provider contracting because they are so complex. The tool itself has expedited our reaching brand-new levels of insight into our members, because now things are becoming standardized.

People can refer to an inventory of reports and see that we no longer have the same report in 20 different places with 20 people supporting them. Now, there is one report in Tableau with one dataset. That dataset has become a centralized dictionary/glossary/terminology inside the tool. Anybody who needs access to our data can get it.

It enables efficiency. In our marketing department alone, the number of new ways they have to think about our membership and growth has completely changed. They have access to data to make decisions.

Executives can now look at what we call a scorecard of our PCPs because we now have standardized sales figures. Everybody knows what they mean and how they are calculated.

Very high-end statistics and calculations are now easily designed. Anybody can go look at them; they know where to go. And if they want something because it helps their business grow, it's almost a 24-hour turnaround, as opposed to a four-week SDLC process. It has expedited our process. The goal was to build a foundation and then, for the next couple of years, really expand it. We hit that, and I don't think we could have done it without these tools.

Recently we had to bring on a brand-new entity, a brand-new medical group. One of the minimum requirements was that we had to take 10 years of historical data from whatever system they had and convert it, transform it, map it, and load it into our existing source of truth. We did this about four years ago for another entity, and it took us almost nine months just to get a dataset that somebody could use. This last time, it took us three weeks from start to finish because, alongside the governance tool, we have erwin's Mapping Manager and harvester. It also allows us to do source-to-target mapping, so we have all our source-to-target mappings in our own repository, and all our targets in the EDW are already mapped. Our goal was to bring in a 100 percent source of truth. We had a complete audit, from when the data came in from outside the building to its location in the building. Then we would transform it into our EDW into whatever attributes, facts, or dimensions we wanted. The tool allowed us to do that in a matter of hours, compared to what used to take months.

Another thing with their DI, not necessarily governance, but some of their other tools — which, of course, all feed back there — is that as soon as we do it, it's available to anybody. Not that a lot of people look at it, because a lot of times they just come and ask us, but the difference is that we're giving them the right answers within minutes. We don't have to tell them, "Well, let me go back and search it for six days."

We have downstream departments, like our risk department, which manages our Medicare patients and makes sure that we are taking care of them, which is a very data-intensive process. Our ability to bring in historical data from an old system, a different type of computer system, and convert it to look just like ours, no matter what it looked like before, is all because we have a data governance program. People can look at the changes from before and after and determine whether they need certain data.

A year ago, if somebody in our company's "left hand" brought in new data, no one but that left hand would know about it. Today, if somebody brings in data, all my data stewards know about it and can choose to subscribe to it or not, today or later. And that is a matter of flipping a switch for them, once we have brought it in and published it to anybody in the company. That's really important, for example, from the point of view of a human being. If someone has been around for 20 years, it would be nice if we had all their records. Because of our data governance and what we built, all those records are maintained and associated with that person, and that's huge from a medical point of view. Data governance is helping us become an even better company because we know our data and how to use it.

The fact that erwin DI for Data Governance has affected our speed of analysis is a given. The DBAs are starting to use it more, and even some of our executives want to get to it for the data dictionary. It can happen that somebody from one of our departments sends them something and it doesn't make sense to them. Our goal was that, if that happened, we would investigate and centralize it. We ended up creating our own dashboard reports on our Tableau server and publishing them to the same parties, so we could get rid of old habits and focus on new ones that have now been validated and verified, with the rules checked.

Data governance gives us a real-time inventory. Every time there's a new request or a new ask, we put it in there, track it, and make sure that our attributes are the same. If they're not, we have an explanation with a description for the different contexts in which the data is being used.

In addition, part of our ingest of an ask is that we take a first look at it and we provide as-is documentation so that the functional design can be tracked. That's a huge advantage. That has saved huge amounts of time in our development cycle, either for data exchange or interfacing, or even application development. The ability to just pull up the database, to be able to look at the fields and know what's important and what isn't important, note the definitions — we're able to support that kind of functionality. I'm one of the data architects here, and we work with everybody to make sure that our features and our epics are managed properly. For me to be able to quickly assess something, within a few minutes, to be able to say, "Here's the impact, here's what we have to do," and then hand it off to the full-blown design teams; that saves a month, easily. And that's especially true when there are 10 or 15 requests a week.

As for how the solution’s data cataloging, data literacy, and automation have affected the data used by decision-makers in our organization, on a scale of one to 10, I would give it a seven. It depends on which stakeholder or executive we're talking about. But has it had an impact? Every one of them has brand new reports, reports that didn't exist a year ago. Every one of them now sees data in a standardized format. The data governance tool might not have a direct impact on that, but it has an indirect impact due to the fact that we now govern our data. We treat data as an asset because of the tool. It's not cheap, it's an expensive tool. But my project has a monthly executive steering committee and, for 36 months, they never had a question and never second-guessed anything we did, and they loved any and all tools. So being able to sit with them and say, "Hey, we had an issue," and immediately give them a visual diagram — show what happened with the databases and what somebody may have misinterpreted — is huge; just huge. For everyone from our chief operating officer to our network operations, physicians' contracting, our medical management group, and our quality improvement group, it definitely has impacted the company.

We've only taken it out to about 50 percent of what it can do. There's so much it can do that we still don't do, because we ourselves are maturing into the program. It really has helped when it comes to harvesting or data profiling. For those processes, it's beautiful — hands-down the best so far. I love the data profiling.

KK
Senior Director at a retailer with 10,001+ employees

I represent IT teams, and a lot of times different business teams want to do data analysis. Before using erwin Data Intelligence Suite, they used to constantly come to the IT teams to understand things like how the data is organized and what type of queries or tables they should use. It used to take a lot of my team's time to answer those questions, and some of them were pretty repetitive. With erwin Data Intelligence Suite, they can now do self-service. There is a business user portal through which they can search different tables, and they can search in different ways. If they already know the table name, they can search for that table directly and find the definition of each column, which helps them understand how to use that table. In some cases, they may not know the exact table name, but they may know, for example, a business metric. In that case, they can search by the business metric and, inside the tool, those business metrics are linked to the underlying tables from which they get calculated, so they can get to the table definitions through that route as well. This is helping all of our business analysts do self-service analytics, and, at the same time, we can enforce some governance around it.

Because we enabled self-service for different business analysts, it has improved our speed. It has easily reduced at least 20% of the time that my IT team had to spend answering questions from different business teams. The benefit is probably even more for business teams, and I think they are faster by at least 30% in terms of being able to get the data that they need and perform their analysis based on it. Overall, I would expect at least 25% savings in time.

It has a big impact in terms of the transparency of the data. Everybody is able to find the data by using the catalog, and they can also see how the data is getting loaded through different tables. It has given a lot of transparency. Based on this transparency, we were able to have a good discussion about how the data should be organized so that we can collaborate better with the business in terms of data organization. We were also able to change some of our table structures, data models, etc.

By using the data catalog, we have definitely improved in terms of maturity as a data-driven decision-maker organization. We are now getting to a level where everybody understands the data. Everyone understands how it is organized, and how they can use this data for different business decisions. The next level for us would be to go and use some of these advanced features such as AI Data Match.

In terms of the effect of the data pipeline on our speed of analysis, understanding the data pipeline and the data flow is helpful in identifying a problem and resolving it quickly. A lot of times there is some level of ambiguity, and businesses don't understand how the data flows. Understanding the data pipeline helps them quickly identify problems and bottlenecks in the data flow and solve them. For example, they can identify the data set that is required for a specific analysis and then bring in the data from another system.

In terms of the money and time that the real-time data pipeline has saved us, it is hard to quantify the amount in dollars. In terms of time, it has saved us 25% time on the analysis part.

It has allowed us to automate a lot of stuff for data governance. By using Smart Data Connectors, we are automatically able to pull metric definitions from our reporting solution. We are then able to put an overall governance and approval process on top of that. Whenever a new business metric needs to be created, the data stewards who have Write access to the tool can go ahead and create those definitions. Other data stewards can peer-review their definitions. Our automated workflow then takes that metric to an approved state, and it can be used across the company. We have definitely built a lot of good automation with the help of this tool.

It definitely affects the quality of data. A lot of times, different teams may have different levels of understanding, and they might have different definitions of a particular metric. A good example is customer lifetime value. This metric is used by multiple departments, but each department can have its own definition. In such a case, they will get different values for the metric and won't make consistent decisions across the company. If they have a common definition, which is governed by this tool, then everybody can reference that. When they do the analysis, they will get the same result, which leads to better-quality decision-making across the company.

It affects data delivery in terms of making correct decisions. Ultimately, we are using all of this data to get insights and then make decisions based on them. It is not so much the cost as the risk that it affects.

TZ
Analyst at Roche

Data Intelligence allows us to automate multiple tasks we had previously done manually, such as restructuring the metadata for our purposes, setting up ETL flows, and defining the data tables we create. It also enables us to standardize our approach and our technical processes.

The ability to automate metadata harvesting and ingestion from common industry sources is pretty nice. Data Intelligence helps us better understand the systems we use. We automatically connect to a specific system. For example, we might connect to our Oracle or Snowflake databases, or to Teradata Cloud, and automatically ingest all the metadata. We transform it and browse through what we can use. Automation also simplifies the mapping processes. We don't need to recreate anything manually.

Data Intelligence's standard data connectors for the metadata manager are easy to use. We can connect the systems and have everything in place. 

The automation capabilities help us create ETL flows and map the tables in our system. It frees up our staff who would otherwise need to spend time generating all those pieces manually. Data Intelligence lets us connect to those reports and change the metadata automatically. We get a picture of the target lineage, so we can check the dependencies of one data object on another.

The lineage functionality that erwin offers is only a recommendation, and it isn't fully validated. We cannot trust shared data because anyone can modify a work in progress. Based on the latest information in erwin, we can make high-level assumptions and trust the data in the process.

Our project must be validated, and erwin is a non-validated tool. We can only use outcomes that we can validate. We can check the data lineage and see the potential data flows from sources to targets. However, we cannot fully track the data that we have there.

Data Intelligence saves time on data discovery and helps us understand our data through standardization. It helps us connect to data services and simplifies the process. That's one of the significant benefits of using the Data Intelligence suite. It's difficult to estimate how much time it saves us. In the early stages of a project, we need to spend a lot of time on integration. Data Intelligence simplifies and standardizes the mapping. It's hard to say how much because I would need to compare the time spent manually generating the metrics in Excel versus doing it in Data Intelligence. 

Roy Pollack
Advisor Application Architect at CPS Energy

Data Intelligence has provided more profound insights into legacy data movements, lineages, and definitions in the short term. We have linked three critical layers of data, providing us with an end-to-end lineage at the column level.

Our long-term plans include adding other systems to complete the end-to-end picture of the data lineage. We also intend to better utilize the Business Glossary and Mind Map features. This will require commitment from a planned data governance program, which may still be a year or more into the future.

KW
Senior Solution Architect at a pharma/biotech company with 10,001+ employees

Data Intelligence creates a single source of truth for all of our metadata. This solution is better for data warehousing, but the metadata features speed up our development work. It's easy to create and manage mappings because we can export them to Informatica and pick up the work where we left off.

We are using the connectors for Snowflake and our data warehouse. The data connectors work well. We've never had any bugs or other issues when new versions of the connectors are released. 

The solution allows us to deliver data pipelines faster and cheaper. The alternative is to write the code from scratch, so it's almost 30 percent faster.

JG
Release Train Engineer (RTE) at a pharma/biotech company with 10,001+ employees

The solution saves us time and reduces the number of bugs by automatically generating software, rather than manually creating it.

The solution saves time in data discovery and understanding our entire organization's data.

TH
Architecture Sr. Manager, Data Design & Metadata Mgmt at an insurance company with 10,001+ employees

One of the ways this is helping to improve our delivery is through the increased understanding of what the data is, so that we're not mapping incorrect data from a source to a target. 

We also have additional understanding of where our best data is. For example, when you think of the HL7 FHIR work and the need to map customer data to a specific FHIR profile, we need to understand where our best data is, as well as the definition of the data so that we are mapping the correct data. Healthcare interoperability requires us to provide the customer with the data they request when they request it. There are multiple levels of complexity in doing that work. The Data Intelligence Suite is helping us to manage and document all of those complexities to ensure that we are delivering the right data to the customer when they request it.

erwin DI also provides us with a real-time, understandable data pipeline. One of the use cases that we didn't talk about is that we set up batch jobs to automate the metadata ingestion, so that we always have up-to-date and accurate metadata. It saves us a great deal because we always know where our metadata is, and what our data is, versus having to spend weeks hunting down information. For example, if we needed to make a change to a datastore, and we needed to understand the other datastores that are dependent on that data, we know that at a moment's notice. It's not delayed by a month. It's not a case of someone either having to manually look through Excel spreadsheet mapping documents or needing to get a new degree in a software tool such as Informatica or DataStage or Ab Initio, or even reading Python. We always know where our data is, and anybody can look that up, whether they're a business person who doesn't know anything about Informatica, or a developer who knows everything about creating data movement jobs in Informatica, but who does not understand the business terminology or the data that is being used in the tool.

The solution also automates critical areas of our data governance and data management infrastructure. The data management is, obviously, key in understanding where the data is and what the data is. And the governance can be done at multiple levels. You have the governance of the code sets versus the governance of the business terms and the definitions of those business terms. You have the governance of the business data models and how those business data models are driving the physical implementation of the actual databases. And, of course, you have the governance of the mapping to make sure that source-to-target mapping is done and is being shared across the company.

In terms of how this affects the quality and speed of delivery of data, I did use-case studies before we brought the Data Intelligence Suite into our company. Some of those use cases included research into impact analysis taking between six and 16 weeks to figure out where data is, or where an impact would be. Having the mapping documents drive the data lineage and impact analysis in the Data Intelligence Suite means that data investigation into impact analysis takes minutes instead of weeks. The understanding of what the data is, is critical to any company. And being able to find that information with the click of a button, versus having to request access to share drives and Confluence and SharePoint drives and Alation, and anywhere that metadata could be, is a notable difference. Having to ask people, "Do you have this information?" versus being able to go and find it yourself saves incredible amounts of time. And it enables everyone, whether it's a business person or a designer, or a data architect, a data modeler, or a developer. Everyone is able to use the tool and that is extremely important, because you need a tool that is user-friendly, intuitive, and easily understood, no matter your technical capabilities.

Also, the things around production that the solution can do have been very helpful to us. This includes creating release reports, so that we know what production looked like prior to an implementation versus what it looks like afterward. It helps with understanding any new data movement that was implemented versus what it was previously. Those are the production implementations that are key for us right now.

Another aspect is that the solution’s data cataloging, data literacy, and automation have been extremely important in helping people understand what the data is so that they use it correctly. That happens at all levels.

The responsiveness of the tool has been fantastic. The amount of time that it takes to do work has been so significantly decreased. If you were creating a mapping document, especially if you were doing it in an Excel spreadsheet, you would have to manually type in every single piece of information: the name of the system, the name of the table, the name of the column, the data type, the length of the column. Any information that you needed to put into a source-to-target mapping document would have to be manually entered.

Especially within the Mapping Manager, the ability to automatically create the mapping document through drag-and-drop functionality of the metadata that is in the system catalog, within the Metadata Manager, results in savings on the order of weeks or days. When you drag and drop the information from the metadata catalog into the mapping document, the majority of the mapping document is filled out, and the only thing that you have to do manually is put in the information about the type of data movement or transformation that you're going to do on the data. And even some of that is automated, or could be automated. You're talking about significant time savings.

And because you have all of the information right there in the tool, you don't have to look at different places to find the understanding of the data that you're working with. All of the information is right there, which is another time savings. It's like one-stop shopping. You can either go to seven stores to get everything you want, or you can go to one store. Which option would you choose? Most people would prefer to go to one store. And that's what the Data Intelligence Suite gives you: one place.

I can say, in general, that a large number of hours are saved, depending on the work being done, because of the automation capabilities and the ability to instantly understand what your data is and where it is. We are working on creating metrics. For example, we have one metric where it has taken someone hours of research to understand what the data is and where it is and to map it to a business term, versus less than two minutes to map 600 physical columns to a business term.

AS
Architect at an insurance company with 10,001+ employees

The benefit of the solution has been the adoption by a lot of business partners who use and leverage our data through our governance processes. We have metrics on how many users have been capturing and using it. We have data consultants and other data governance teams who are set up to review these processes and ensure that nobody is bypassing them. We use this tool in the middle of our work processes for utilization of data on the tail end, letting the business do self-service and build their own solutions without heavy IT involvement.

When we manage our data processes, we know that there are upstream sources and downstream systems, and we know that they could be impacted by changes coming in from the source, thanks to the lineage and impact analysis that this tool brings to the table. We have been able to identify system changes that could impact all downstream systems. That is a big plus because IT and production support teams are now able to use this tool to identify the impact of any issues with the data or any data quality gaps. They can notify all the recipients upfront, with proper business communications, of any impacts.

For any company mature enough to have implemented any of these data governance rules or principles, these are the building blocks of the actual process. It is critical because we want the business to self-serve. We can build data lakes or data warehouses using our data pipelines, but if nobody can actually use the data or see what information is available without going through IT, that defeats the whole purpose of doing this additional work. It is a data platform that allows any business process to come in and self-serve, building its own processes without a lot of IT dependencies.

There is a data science function where a lot of critical operational reporting can be done. Users leverage this tool to be able to discover what information is available, and it's very heavily used.

If we need to start capturing additional metadata, we can define our own user-defined attributes and then start capturing it. The tool provides all the information that we want to manage. For our own processes, we have some special tags that we have been able to configure quickly through this tool to start capturing that information.

We have our own homegrown solutions built around the data that we are capturing in the tool. We build our own pipelines and have our own homegrown ETL tools built using Spark and cloud-based ecosystems. We capture all the metadata in this tool and all the transformation business rules are captured there too. We have API-level interfaces built into the tool to pull the data at the runtime. We then use that information to build our pipelines.
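To give a sense of the pattern, here is a minimal sketch of a metadata-driven Spark step along the lines of what we build. The endpoint URL, JSON layout, column names, and storage paths are illustrative placeholders, not erwin DI's actual API:

    import requests
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    # Hypothetical governance-catalog endpoint serving source-to-target mappings.
    CATALOG_URL = "https://governance.example.com/api/mappings/claims_load"

    def fetch_mappings(url):
        """Pull the mapping entries captured in the governance tool."""
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        # Assumed shape: [{"source_column": ..., "target_column": ..., "rule": "direct"}, ...]
        return response.json()["mappings"]

    def apply_mappings(df, mappings):
        """Project and rename columns according to the harvested mapping."""
        selected = [
            col(m["source_column"]).alias(m["target_column"])
            for m in mappings
            if m.get("rule", "direct") == "direct"  # only direct moves in this sketch
        ]
        return df.select(*selected)

    if __name__ == "__main__":
        spark = SparkSession.builder.appName("mapping-driven-load").getOrCreate()
        mappings = fetch_mappings(CATALOG_URL)
        source_df = spark.read.parquet("s3://landing-zone/claims/")   # example landing path
        target_df = apply_mappings(source_df, mappings)
        target_df.write.mode("overwrite").parquet("s3://clean-zone/claims/")

Because the mapping rules live in the catalog rather than in the job code, the same pipeline skeleton can be reused whenever the business updates the metadata.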

This tool allows us to bring in any data stewards in the business area to use this tool and set up the metadata, so we don't have to spend a lot of time in IT understanding all the data transformation rules. The business can set up the business metadata, and once it is set up, IT can then use the metadata directly, which feeds into our ETL tool.

Impact analysis is a huge benefit because it gives us access to our pipeline and data mapping. It captures the source systems from which the data came. For each source system, there is good lineage so we can identify where it came from. Then, it is loaded into our clean zone and data warehouse, where I have reports, data extracts, API calls, and the web application layer. This provides access to all the interfaces and how the information has been consumed. Impact analysis, at the IT and field levels, lets me determine:

  • What kind of business rules are applied. 
  • How data has been transformed from each stage. 
  • How the data is consumed and moved to different data marts or reporting layers. 

Our visibility is now huge, creating a good IT and business process. With confidence, people can assess where the information is, who is using it, and what applications are impacted if that information is unavailable or inaccurate, or if there are any issues at the source. Impact analysis is a very strong use case of this tool.

JC
Works at an insurance company with 5,001-10,000 employees

The biggest impact for us is that erwin generates DDL extremely quickly. We're able to pull in metadata, map it to a target, generate DDL to create the tables, and generate SSIS packages. Previously, especially going back 10 to 15 years, hundreds of hours had to be spent manually performing these tasks. This solution automates it and gets it 90% done. We can then pass it off to a developer to create the items in SSIS.
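As a rough illustration of that kind of code generation, the short sketch below turns harvested column metadata into a CREATE TABLE statement. The table and column definitions are invented for the example, and the SSIS package generation the tool also does is beyond this sketch:

    # Example target table and column metadata; names and types are made up.
    target_table = "dw.ClaimFact"
    columns = [
        {"name": "claim_id",     "type": "BIGINT",        "nullable": False},
        {"name": "member_id",    "type": "BIGINT",        "nullable": False},
        {"name": "service_date", "type": "DATE",          "nullable": True},
        {"name": "paid_amount",  "type": "DECIMAL(12,2)", "nullable": True},
    ]

    def generate_ddl(table, cols):
        """Render a CREATE TABLE statement from harvested column metadata."""
        lines = [
            "    {} {} {}".format(c["name"], c["type"], "NULL" if c["nullable"] else "NOT NULL")
            for c in cols
        ]
        return "CREATE TABLE {} (\n{}\n);".format(table, ",\n".join(lines))

    print(generate_ddl(target_table, columns))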

MT
Business Intelligence BA at an insurance company with 10,001+ employees

The automated data lineage is very useful. We used to work in Excel, and there is no way to trace the lineage of the data. Since we started working with DI, we have been able to quickly trace the lineage, as well as do an impact analysis.

We do not use the ETL functionality. I do know, however, that there is a feature that allows you to export your mapping into Informatica.

Using this product has improved our process in several ways. When we were using Excel, we did not know for sure that what was entered in the database was what had been entered into Excel. One of the reasons for this is that Excel documents contain a lot of typos. Often, we don't know the data type or the data length, and these are some of the reasons that lineage and traceability are important. Prior to this, it was zero. Now, because we're able to create metadata from our databases, it's easier for us to create mappings. As a result, the typos virtually disappeared because we just drag-and-drop each field instead of typing it. 

Another important thing is that with Excel, it is too cumbersome or next to impossible to document the source path for XSD files. With DI, since we're able to model it in the tool, we can drag and drop and we don't have to type the source path. It's automatic.

This tool has taken us from having nothing to being very efficient. It's really hard to compare because we have never had these features before.

The data pipeline definitely improved the speed of analysis in our use case. We have not timed it but having the lineage, and being able to just click, makes it easier and faster. We believe that we are the envy of other departments that are not using DI. For them to conduct an impact analysis takes perhaps a few minutes or even a few hours, whereas, for us, it takes less than one minute to complete.

We have automated parts of our data management infrastructure, and it has had a positive effect on our quality and speed of delivery. We have a template that the system uses to create SQL code for us. The code handles moving the data and, for direct-move fields, we don't need a person to code the operation. Instead, we just run the template.
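The sketch below shows the spirit of such a template for direct-move fields, generating an INSERT ... SELECT from a small mapping list. The table and column names are invented; the real template lives inside the tool:

    # Each entry: (source table, source column, target column); names are invented.
    direct_moves = [
        ("stage.policy", "POLICY_NO", "policy_number"),
        ("stage.policy", "EFF_DT",    "effective_date"),
        ("stage.policy", "STATUS_CD", "status_code"),
    ]
    TARGET_TABLE = "dw.policy_dim"

    TEMPLATE = "INSERT INTO {target} ({target_cols})\nSELECT {source_cols}\nFROM {source};"

    def render_direct_move_sql(target, moves):
        """Fill the template with the direct-move field mappings."""
        source = moves[0][0]                          # single source table in this sketch
        source_cols = ", ".join(m[1] for m in moves)
        target_cols = ", ".join(m[2] for m in moves)
        return TEMPLATE.format(target=target, target_cols=target_cols,
                               source_cols=source_cols, source=source)

    print(render_direct_move_sql(TARGET_TABLE, direct_moves))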

The automation that we use is isolated and not for everything, but it affects our cost and risk in a positive way because it works efficiently to produce code.

It is reasonable to say that DI's generation of production code through automated code engineering reduces the cost from initial concept to implementation. However, it is only a small percentage of our usage.

With respect to the transparency and accuracy of data movement and data integration, this solution has had a positive impact on our process. If we bring a new source system into the data warehouse and the interconnection between that system and us is through XML then it's easier for us to start the mapping in DI. It is both efficient and effective. Downstream, things are more efficient as well. It used to take days for the BAs to do the mapping and now, it probably takes less than one hour.

We have tried the AIMatch feature a couple of times, and it was okay. It is intended to help automatically discover relationships and associations in data and I found that it was positive, albeit more relevant to the data governance team, of which I am not part. I think that it is a feature in its infancy and there is a lot of room for improvement.

Overall, DI's data cataloging, data literacy, and automation have helped our decision-makers because when a source wants to change something, we immediately know what the impact is going to be downstream. For example, if a source were to say "Okay, we're no longer going to send this field to you," then immediately we will know what the impact downstream will be. In response, either we can inform upstream to hold off on making changes, or we can inform the departments that will be impacted. That in itself has a lot of value.

JM
Analytics Delivery Manager at DXC

This use case is a one-time system conversion; the solution has no life after the migration. The value is in the acceleration, accuracy, quality, and completeness of the migration's source-to-target mapping and the generated data management code.

The use case involves extracting and staging the source application data, targeting ~700 large objects from the overall application set of ~2,400 relational tables. Each table extract has light join and selection criteria, which are injected into the source metadata. The application itself is moving to a next-generation application that performs the same business function. Our client is in health and human services welfare administration in the United States. This use case doesn't include ongoing data governance for our client, at least at this point.

erwin DIS has enabled us to automate critical areas of data management infrastructure. That's where we see the benefit: increased speed, improved quality, and reduced costs.

erwin DIS's generation of data management code through automated code engineering has reduced the time it takes to go from initial concept to implementation for the work currently in progress. There has not been a production delivery as of yet; that's still another year and a half out. This is a multi-year project to which this use case applies.

erwin has affected the transparency and accuracy of data movement and data integration quite a bit through the various report facilities. We can make self-service reporting available through the business user portal. erwin DIS has provided the framework and the capability to be transparent and to have stakeholder involvement throughout the exercise.

Through business user portals and workflows, we're able to provide effective stakeholder reviews, as well as stakeholder access to all of the information and knowledge that's collected. The facility itself offers quite a few capabilities, including user-defined parameters to capture data knowledge and organizational change information, which project stakeholders can use and apply throughout the program. The client and stakeholders utilize the business user portal for extended visibility, which is a big benefit.

We're interested in the AIMatch feature. It's something that we had worked with AnalytiX DS early on to actually develop some of the ideas for. We were somewhat instrumental in bringing some of that technology in, but in this particular case, we're not using it. 

SA
Sr. Manager, Data Governance at an insurance company with 501-1,000 employees

We have only had it a couple of months. I am working with the DBAs to get what I would call a foundational installation of the data in place. My company doesn't have a department called Data Governance, so I'm having to do some of this work in the cracks of my workday, but I'm expecting it to be well received.

EL
Delivery Director at a computer software company with 1,001-5,000 employees

The standard data connectors for automation, metadata harvesting, and ingestion are easy to use.

The solution enables us to deliver data pipelines faster and with a 20 to 30 percent reduction in cost.

erwin provides an immediate view of the technical details required to manage our data landscape. Data transparency is increasing, which helps our IT operations.

It delivers up-to-date and detailed data lineage, which is important.

erwin helps us with data discovery and understanding of our entire organization's data.

The solution provides visibility into our organization's data for our IT, data governance, and business users which we require to build the compound report.

The asset discovery and collaboration provided by the data quality feature is good.

Its ability to affect data users' confidence levels when they are utilizing data is admirable.

erwin's capacity to tackle data quality challenges and provide the information users need to make well-informed decisions is effective in overseeing the regional database and metadata.

Ahmad AlRjoub
Data Management Consultant at CompTechCo

erwin enhances our overall data governance process, improving the business as well as our ability to meet compliance and reporting requirements. 

The solution helps us save time and understand our data better. For example, you can search for keywords or discover assets by classification, feedback, and rating. You can sort assets by the highest or lowest rating. It has reduced the time spent on these tasks by about 20 percent. 

MJ
Solution Architect at a pharma/biotech company with 10,001+ employees

It is improving just a small piece of our company. We are an extremely big company, so across the company as a whole, there is probably a near-zero adoption rate, because I think it is only implemented in the development team of our platform.

If you look at this from the perspective of the platform that we are delivering, the adoption rate is around 90 percent because almost every area and step somehow touches the tool. We, as a program, are delivering a data-oriented platform, and erwin DI is helping us build that for our customers. 

The tool is not like Outlook, which everyone in the company uses, or SharePoint, which is company-wide. We are using it in our program as a tool to help my technical analysts, data modelers, developers, and so on.

MN
Practice Director - Digital & Analytics Practice at HCL Technologies

Companies will say that data is their most valuable asset. If you, personally, have an expensive car or a villa, those are valued assets and you make sure that the car is taken for service on a regular basis and that the house is painted on a regular basis. When it comes to data, although people agree that it is one of the most valued assets, the way it is managed in many organizations is that people still use Excel sheets and manual methods. In this era, where data is growing humongously on a day-to-day basis—especially data that is outside the enterprise, through social media—you need a mechanism and process to handle it. That mechanism and process should be amply supported with the proper technology platform. And that's the type of technology platform provided by erwin, one that stitches data catalogs together with business glossaries and provides intelligent connectors and metadata harvesters. Gone are the days where you can use Excel sheets to manage your organization. erwin steps up and changes the game to manage your most valued asset in the best way possible.

The solution allows you to automate critical areas of your data governance and data management infrastructure. Manual methods for managing data are no longer practical. Rather than that, automation is really important. Using this solution, you can very easily search for something and very easily collaborate with others, whether it's asking questions, creating a change request, or creating a workflow process. All of these aspects are really important. With this kind of solution, all the actions that you've taken, and the responses, are in one place. It's no longer manual work. It reduces the complexity a lot, improves efficiency a lot, and time management is much easier. Everything is in a single place and everybody has an idea of what is happening, rather than one-on-one emails or somebody having an Excel sheet on their desktop.

The solution also affects the transparency and accuracy of data movement and data integration. If people are using Excel sheets, there is my version of truth versus your version of truth. There's no source of truth. There's no way an enterprise can benefit from that kind of situation. Bringing in standardization across the organization happens only through tools like metadata harvesters, data catalogs, business glossaries, and stewardship tools. This is what helps bring transparency.

The AIMatch feature, to automatically discover and suggest relationships and associations between business terms and physical metadata, is another very important aspect because automation is at the heart of today's technology. Everything is planned at scale. Enterprises have many data users, and the number of data users has increased tremendously in the last four or five years, along with the amount of data. Applications, data assets, databases, and integration technologies have all evolved a lot in the last few years. Going at scale is really important and automation is the only way to do so. You can't do it working manually.

erwin DI’s data cataloging, data literacy, and automation have reduced a lot of complexities by bringing all the assets together and making sense out of them. It has improved the collaboration between stakeholders a lot. Previously, IT and business were separate things. This has brought everybody together. IT and business understand the need for maintaining data and having ownership for that data. Becoming a data-literate organization, with proper mechanisms and processes and tools to manage the most valued assets, has definitely increased business in terms of revenues, customer service, and customer satisfaction. All these areas have improved a lot because there are owners and stewards from business as well as IT. There are processes and tools to support them. The solution has helped our clients a lot in terms of overall data management and driving value from data.

BM
Data Program Manager at a non-tech company with 201-500 employees

This type of solution was key to moving our entire company in the right direction by getting everyone to think about data governance.
