Over 265,117 professionals have used IT Central Station research.
Compare the best Test Data Management vendors based on product reviews, ratings, and comparisons.
All reviews and ratings are from real users, validated by our triple authentication process.
The total ranking of a product, represented by the bar length, is based on a weighted aggregate score.
The score is calculated as follows: the product with the highest count in each area receives the maximum available score (20 points for Reviews; 16 points each for Views, Comparisons, and Followers). Every other product is assigned points in proportion to the #1 product in that area. For example, if a product has 80% as many reviews as the product with the most reviews, then its score for Reviews is 20 (the maximum for that area) * 80% = 16. For Average Rating, the maximum score is 32 points, awarded linearly based on our rating scale of 1-10. If a product has fewer than ten reviews, the point contribution for Average Rating is reduced: a one-third reduction in points for products with 5-9 reviews, and a two-thirds reduction for products with fewer than five reviews. Reviews that are more than 24 months old, as well as those written by resellers, are excluded from the ranking algorithm entirely.
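The scoring rules above can be sketched in a few lines of Python. This is an illustration of the arithmetic as described, not IT Central Station's actual code; the function names are invented for the example.

```python
# Illustrative weights from the text: Reviews = 20 points;
# Views, Comparisons, and Followers = 16 points each;
# Average Rating = up to 32 points on the 1-10 scale.

def category_score(count, top_count, weight):
    """Points in one area: proportional to the #1 product's count."""
    return weight * count / top_count if top_count else 0.0

def rating_score(avg_rating, num_reviews):
    """Average Rating: up to 32 points, awarded linearly on the 1-10
    scale, reduced when a product has fewer than ten reviews."""
    points = 32 * avg_rating / 10
    if num_reviews < 5:
        points *= 1 / 3   # two-thirds reduction
    elif num_reviews < 10:
        points *= 2 / 3   # one-third reduction
    return points

# The worked example from the text: 80% of the leader's review count.
print(category_score(80, 100, 20))  # 16.0
```

Running the example reproduces the figure in the text: 80% of the leading product's reviews yields 20 × 80% = 16 points.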
Test Data Management (TDM) refers to the supervision and administration of the enterprise architectures, methods, and policies used to manage data throughout the information lifecycle, whether in-house or sourced from outside vendors.
Test Data Management is important to IT Central Station users who are tasked with managing data across an entire organization. Standard Operating Procedures (SOPs) are put in place so that Test Data Management stays consistent with the guidelines and procedures developed for the daily functioning of a company. IT managers and DevOps teams are looking for robust test automation that is flexible, dynamic, and efficient to deploy. To adhere to company directives, they use key software on-premises, in the cloud, in hybrid clouds, and across mobile devices. Test Data Management should deliver data security and fast data copying, and should increase virtualization and automation efficiency. The strategy for developing and facilitating a TDM implementation should address data verification, data confidentiality, disk space, and prolonged test duration.
IT managers and DevOps teams must align with critical data compliance and confidentiality requirements. Because sensitive data handling, business classification, and policy-driven data masking are integral to security, IT Central Station users look for a safe and secure environment in which to test data, and prefer a Test Data Management solution that ensures each test begins from a consistent data state, which is important for producing predictable data at the end of testing. Overseeing visible test results and their effects on a database is vital, and would be nearly impossible to achieve manually. IT and DevOps teams need to monitor data effectively in order to serve enterprise needs and outcomes; deploying Test Data Management is therefore the best way to maintain protocol and serve the professionals involved.
Test Data Management Reviews
Read reviews of Test Data Management that are trending in the IT Central Station community:
Your trust is our top concern, so companies can't alter or remove reviews.
A lot of people, when they first started looking at the tool, started immediately jumping in and looking at the data masking, the data subsetting that it can do, and it works fantastically to help with the compliance issues for masking their... more»
When I look at the return on investment, there are not only huge financial gains on it. In fact, when I recently ran the numbers, we had about $1.1 million in savings on just the financials from 2016 alone. What it came down to is, when we... more»
It's cool that right now with this tool, they're doing a lot of things to continuously improve it. I think Test Data Management as a strategy across the whole organization, has really picked up a lot of momentum, and CA’s been intelligent to... more»
TDM has tons of great solutions involved in one package. For me personally, what I find to be most valuable is its ability to do synthetic data creation. I love that because it has a lot of flexibility and you do not have to worry about one... more»
The benefits of doing something like the data creation is that you are going to be able to totally have control of your data from the get-go. You are not worrying about "what you see is what you get" kind of results from a production set.... more»
As the solution continues to evolve, the one thing I like about it is the API-friendly layers that they have added into the realm. So, I just would like to see more support around that and more usability. If we can continue to expand upon that,... more»
You've got the basic services of the TDM tool which allows you to mask data, it allows you to create synthetic data, but I think what really sets TDM apart from the other competitors, is the kind of the added extras that you get with doing... more»
We've got a centralized COE in terms of test data management within our organization; the benefits are really threefold in terms of cost, quality, and time to market. In terms of quality, through test data management, data is... more»
I think the big area for exploitation for us is a feature that already exists within the tool. The TCO element is something massive. I talked earlier on about the maturity and the structure that it gives you to... more»
We use these functions:
* Archive – archive data to PST files; we do not archive data to the archive DBMS
* Data privacy – masking of PII and PHI data
* Submitter – test data management
This product archives all types of relational databases.... more»
Archive: Moving data that is no longer needed in daily processing onto archive files, resulting in:
* Faster query performance.
* Shorter batch execution trails.
* Stopping/slowing the growth of disk space, so we don't have to constantly add... more»
IBM treats non-production deploys with a lower priority to resolve problems. When we archive, we select a similar-sized non-production environment first, before we archive production. We need the volume to determine how big to make the... more»
There are a lot of great features with Test Data Manager. It's allowed us to really revolutionize the way that we are doing our testing. We use Test Data Manager to create synthetic data for our testing purposes. We have a legacy application that has been around for a very long time, and a lot of the people that originally developed it are no longer with the... more»
It's really allowed us to focus on the craft of testing, and not focus on the creating data, spending time setting up the scenario that we need, or trying to find that sort of thing. It's allowed us to be more focused on our efforts, it's allowed us to be faster. We run an agile shop, and so we've been able to use that data in our pipeline as we deliver stuff in... more»
One of the most valuable features to us is synthetic data generation. We generate a lot of synthetic data for our performance testing and for bulking up our performance environment to see how much load it can sustain. We've been doing it for... more»
It has really changed the culture in the company because nobody could ever imagine generating millions of records. Even production systems have just a couple of million records. When you want to test your applications with three or four times... more»
The solution can really improve on non-relational data structures because that's a big industry use case which we are foreseeing, with non-relational database structures. I talk about databases. I talk about request-response pairs; the... more»
The most important feature I see is how to have a centralized view of the test data; how efficiently you can use the test data across different business units, starting from generating the data that you need to use, to how to use it,... more»
Now, though, my company is going through a process to be more agile, which is basically the theme of a recent CA conference I attended. While we are trying to go through the agile journey, there are some building blocks that need to be in... more»
I think one thing we would like to see is how quickly it can be used like a SaaS product. You can just plug in incoming data that we have from different sources; how quickly that can be integrated and how the test data can be generated. That... more»
It allows us to create a testing environment that is repeatable. And we can manage the data so that our testing becomes automated, everything from actually performing the testing to also evaluating the results. We can automate that process.... more»
The interface, based on our unique test case - because we are an extremely unique platform - could be better. We have to do multiple steps just to create a single output. We understand that, because we are a niche architecture, it's not high on... more»
Test Data Manager allows you to do synthetic data generation. It gives you a high level of confidence in your data that you're creating. It also keeps you out of the SOX arena, because there's no production data within that environment. The... more»
We have certain aspects of our data that we have to self-generate. The VIN number is one that we have to generate and we have to be able to generate on the fly. TDM allows us to generate that VIN number based upon whether it's a truck, car,... more»
I would probably like to see improvement in the ease of the rule use. I think sometimes it gets a little cumbersome setting up some of the rules. I'd like to be able to see a rule inside of a rule inside of a rule; kind of an iterative process.
The most valuable feature is the Portal that comes with the tool. That helps make it look much more user-friendly for the users. Also its ease of use - even for developers it's not that complicated. It gives us the ability to:
* mask the data
* sub-set the data
* synthetically generate test data
* create test data for specific business case scenarios
and more.
Primary Use Case
We have been having serious trouble delivering and providing development environments and test environments for the diverse systems that we use at the bank. What we are trying to do is improve the time for that delivery, and that is why we are exploring the CA tools for this. We already have the Service Virtualization tool, and now we are entering the test data management phase.
Improvements to My Organization
We realized early on what we were looking for. Now, we are trying to get everybody to work with the DevOps mindset. That is the main advantage and benefit that we are seeing here.
Valuable Features
They are able to provide some technical data, then provide that data to multiple teams for testing purposes. That is the most valuable part for us....
The product provides four main features: Data discovery to profile data and mask sensitive data. Data subsetting to take a subset of production data from all schema tables while preserving referential integrity. Data masking to create masking... more»
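The subsetting-with-referential-integrity idea mentioned above can be made concrete with a minimal, product-agnostic sketch (the table and column names here are invented for illustration; this is not the product's actual mechanism):

```python
# Toy parent/child tables: orders reference customers via customer_id.
customers = [{"id": 1}, {"id": 2}, {"id": 3}]
orders = [
    {"id": 10, "customer_id": 1},
    {"id": 11, "customer_id": 2},
    {"id": 12, "customer_id": 3},
]

# Take a subset of the parent table (here, 2 of 3 customers) ...
subset_ids = {c["id"] for c in customers[:2]}
customer_subset = [c for c in customers if c["id"] in subset_ids]

# ... then keep only child rows whose foreign key points at a retained
# parent, so no order in the subset references a missing customer.
order_subset = [o for o in orders if o["customer_id"] in subset_ids]

print(len(customer_subset), len(order_subset))  # 2 2
```

The essential point is the second filter: every child row in the subset still resolves to a parent row, which is what "preserving referential integrity" means in practice.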
I did the implementation for many clients. It is mostly used to mask production data and get values that are realistic but not real. With this, they create development and testing environments with data that looks like it’s real. Developers... more»
Currently, it's the complex data mining that we do out there. Any financial institution runs into the same challenges that we face: referential integrity across all databases, and finding that one unique customer piece... more»
The benefit is that it removes manual intervention. A lot of the time that we've spent previously was always manually, as an individual running SQL scripts against databases, or manually going through a UI to create data. These solutions... more»
I think the biggest one will be - all financial institutions are based on mainframes, so they're never going to go away. Opportunities to increase functionality and efficiencies within the mainframe solution, within this TDM product.... more»
We have moved data creation from manual or limited and costly automated processes to a set of weekly data builds and an On-Demand offering capable of delivering versatile data that meets the needs of our many teams.
An increase in the types of programmatic capabilities could allow the tool to be more powerful. For instance, often data inserts in one table are contingent upon entries or flags from another. In these situations, there is no way to choose to... more»
One of the features that I wanted, which I think is going to be released, is to be able to create virtualized data sets, or virtualized databases. That's a feature we're going to take advantage of. All of our developers will be able to have their own virtual copy of a golden copy of our database, and be able to do transactions against their virtual copy, and... more»
* Creates an automated process for production data extraction
* Does scrubbing
* Does sub-setting and movement of data
* Tests regions across various platforms, including legacy and distributed platforms
The tool needs to have more test data management features, like synthetic data generation and data reservation. Optim is capable of extracting/masking production data and moving it to the test region for test data provisioning. However, there... more»
The most valuable features for us are masking, data profiling, and creating data subsets. More specifically, we are able to assist our clients with data privacy and the regulatory recommendations that come from the government. We help them to... more»
CA Test Data Manager is enormously helpful to us. We assist our customers by speeding up the application development process using real-time test data and synthetic test data, which mimics the real test data.
BI/DWH/ETL Expert (Informatica PowerCenter/TDM) at a tech services company with 1,001-5,000 employees
Mar 22 2017
What do you think of Informatica Test Data Management?
The ability to mask the same data consistently and in a repeatable way across heterogeneous data sources (e.g., Oracle, XML). This is done without the need to store original and masked value pairs anywhere physically.
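One way such consistent, repeatable masking can be implemented without storing original/masked value pairs is to derive each masked value deterministically from the original with a keyed hash. This is a hedged illustration of the general technique, not Informatica's actual method; the `SECRET_KEY` and `mask_value` names are hypothetical.

```python
import hashlib
import hmac

SECRET_KEY = b"masking-key"  # hypothetical key; in practice held in a secure store

def mask_value(value: str, length: int = 10) -> str:
    """Deterministically mask a value with a keyed hash (HMAC-SHA256),
    so the same input yields the same masked output in every data
    source, and no table of original/masked pairs needs to be kept."""
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:length]

# The same customer ID masks identically whether it came from an Oracle
# table or an XML file, which keeps joins across sources intact.
print(mask_value("CUST-12345") == mask_value("CUST-12345"))  # True
```

Because the mapping is a function of the input and the key, re-running the masking job at any time, on any source, reproduces the same masked values.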
Improvements to My Organization
* Provided more realistic and consistent test data dumps to external developers
* Decreased the DWH ETL development and testing cycles
* Allowed more thorough testing to be performed on each layer (ODS, DWH, and DM)
Use of Solution
I have used this solution for five years.
Stability Issues
Both TDM and the underlying ETL tool are quite stable. There is a resilience period that you can configure in order to avoid downtime, at least during shorter repository database outages.
Scalability Issues
The benefit is that it's something you can repeat over and over again, once a solution is in place. You can benefit from it as often as you want to, without having to constantly look for support.
An additional feature would probably be data reservation that is more robust, where you can actually use it consecutively. It is a difficult thing to do manually, so data reservation, for us, is very important. Also, maybe they could make... more»