it_user572823 - PeerSpot reviewer
AVP Quality Assurance at GM Financial
Video Review
Vendor
Gives you confidence in data that you're creating and keeps you out of the SOX arena, because there's no production data within that environment.

What is most valuable?

Test Data Manager allows you to do synthetic data generation. It gives you a high level of confidence in the data you're creating. It also keeps you out of the SOX arena, because there's no production data within that environment. The more controls you can put in and the cleaner you keep your data, the better off you are. There are some laws coming into effect in the next year or so that are going to really scrutinize production data being in the lower environments.

How has it helped my organization?

We have certain aspects of our data that we have to self-generate. The VIN is one that we have to generate, and we have to be able to generate it on the fly. TDM allows us to generate that VIN based upon whether it's a truck, a car, etc. We're in the auto loan business.
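To make the VIN example concrete, here is a minimal Python sketch of on-the-fly VIN generation. It is not TDM's rule syntax: the WMI prefixes are invented for illustration, and only the check-digit step follows the standard published VIN scheme (weighted transliteration, mod 11).

```python
import random
import string

# Letters I, O, and Q are not allowed in VINs.
VIN_CHARS = "ABCDEFGHJKLMNPRSTUVWXYZ0123456789"
TRANSLIT = {c: v for c, v in zip("ABCDEFGHJKLMNPRSTUVWXYZ",
                                 [1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4,
                                  5, 7, 9, 2, 3, 4, 5, 6, 7, 8, 9])}
TRANSLIT.update({d: int(d) for d in string.digits})
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def check_digit(vin17: str) -> str:
    """Compute the position-9 check digit from all 17 positions."""
    total = sum(TRANSLIT[ch] * w for ch, w in zip(vin17, WEIGHTS))
    rem = total % 11
    return "X" if rem == 10 else str(rem)

def synthetic_vin(vehicle_type: str) -> str:
    # Hypothetical WMI prefixes per vehicle type -- illustration only.
    wmi = {"truck": "1FT", "car": "1G1"}[vehicle_type]
    body = "".join(random.choices(VIN_CHARS, k=14))
    vin = wmi + body                       # 17 chars, placeholder check digit
    return vin[:8] + check_digit(vin) + vin[9:]

print(synthetic_vin("truck"))
```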

What needs improvement?

I would like to see improvement in the ease of rule use. Setting up some of the rules sometimes gets a little cumbersome. I'd like to be able to nest a rule inside a rule inside a rule; kind of an iterative process.

What do I think about the stability of the solution?

TDM has been around for a couple of years. I used it at my previous company as well. It's been really stable. It's a tool that probably doesn't get fully utilized. We intend to take it further by partnering it with the SV solution and generating the data for the service virtualization aspect.


What do I think about the scalability of the solution?

Scalability is similar to SV; it's relatively easy to scale. It's a matter of how you want to set up your data distribution.

How are customer service and support?

We were very pleased with the technical support.

Which solution did I use previously and why did I switch?

When you have to generate the loan volume that we need – 50 states, various tax laws, etc. – I needed a solution with which I could produce quality data that fits the targeted testing we need, any extra test cases, etc. We're more concentrated on being very succinct in the delivery and the time frame in which we need to get the testing done.

I used CA at my previous company. I have a prior working relationship with them.

How was the initial setup?

The initial setup was done internally. We were able to follow the instructions that were online when we downloaded it and get the installation done. We did have a couple of calls into the technical support area, and they were able to resolve them fairly quickly.

What other advice do I have?

In my experience, generating synthetic data can often be cumbersome. With TDM's rules in place, you can generate it knowing your data is going to be very consistent. When we want a particular loan to come through with a particular credit score, we can select and generate the data out of TDM, which creates a data file for my front-end script, using DevTest.

I also push the service virtualization recording to respond to the loan's request to the credit bureau, returning a certain credit score, which gets us within the target zone for the loan we're looking for, to trigger a rule.
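As a rough illustration of that flow – hypothetical field names, not actual TDM or DevTest artifacts – the sketch below generates a loan feed whose applicants are pinned to a target credit-score band, and builds the lookup a virtual bureau service could answer from, keeping the test deterministic.

```python
import csv
import random

def make_loan_record(target_score_band=(700, 719)) -> dict:
    """One synthetic loan row whose bureau response will land in the
    credit-score band that triggers the rule under test."""
    return {
        "applicant_id": f"APP{random.randint(100000, 999999)}",
        "state": random.choice(["TX", "CA", "NY"]),
        "loan_amount": random.randrange(15_000, 60_000, 500),
        "expected_score": random.randint(*target_score_band),
    }

rows = [make_loan_record() for _ in range(10)]
with open("loan_feed.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)

# The virtual bureau service answers each applicant_id with its
# expected_score, so the loan always lands in the target zone.
bureau_stub = {r["applicant_id"]: {"credit_score": r["expected_score"]}
               for r in rows}
```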

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Practice Manager (Testing Services) at a financial services firm with 1,001-5,000 employees
Video Review
Vendor
Includes basic services which allow you to mask data and create synthetic data. It also includes test matching which accelerates test cycles and allows automation to happen.

What is most valuable?

You've got the basic services of the TDM tool, which allow you to mask data and create synthetic data, but what really sets TDM apart from the other competitors is the added extras you get for doing true test data management: things like the cubing concepts that Grid-Tools' Datamaker really brings to bear within test data management teams. You've also got test matching as well, which massively accelerates test cycles, gives real stability, and allows automation to happen.

How has it helped my organization?

We've got a centralized COE for test data management within our organization; the benefits are really threefold, in terms of cost, quality, and time to market. In terms of quality: data is the glue that holds systems together, and therefore, if I understand my test data, I understand what I'm testing. Through the tooling, and the maturity of the tooling, we're bringing an added quality aspect to what we test and how we test, and to the risk-based testing approach we might take.

In terms of speed to market, because we don't manually produce data anymore – we use intelligent profiling techniques and test data matching – we massively reduce the time we spend finding data, and we can also produce data on the fly, which turns around test data cycles. In terms of cost, because we're doing it a lot quicker, it's a lot cheaper.

We have a centralized test data management team that caters for all development within my organization. We've created an organization that is much more effective and optimized in terms of the time it takes to identify data and get into test execution in the right way.

What needs improvement?

I think the big area of exploitation for us is a feature that already exists within the tool. The TCO element is massive; I talked earlier about the maturity and structure it gives to testing. It's a game changer in terms of articulating the impact of change: no project goes swimmingly the first time, and therefore the ability to assess the impact on tests by making simple process changes is a massive benefit.

What do I think about the stability of the solution?

The stability of the solution is really fine. The really big question is the stability of the underlying system it's trying to manipulate; the tool is the tool, and it does what it needs to do.

What do I think about the scalability of the solution?

Within our organization we have many, many platforms and many, many different technologies. One of the interesting challenges we always have, especially when we're doing performance testing, is whether we can get those volumes of data in sufficient time. We use things like data explosion quite often, and it does what it needs to do, and does it very quickly.
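Data explosion, as used here, boils down to multiplying a small seed set into load-test volumes while keeping key columns unique. A toy sketch of the idea, not the tool's implementation:

```python
import itertools

def explode(seed_rows, multiplier, unique_key="customer_id"):
    """Multiply a small seed set into performance-test volumes,
    re-keying each copy so uniqueness constraints still hold."""
    counter = itertools.count(1)
    for copy in range(multiplier):
        for row in seed_rows:
            out = dict(row)
            out[unique_key] = f"{row[unique_key]}-{copy}-{next(counter)}"
            yield out

seed = [{"customer_id": "C001", "balance": 120.50},
        {"customer_id": "C002", "balance": 87.10}]
big = list(explode(seed, multiplier=500_000))  # ~1M rows from 2 seeds
```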

How are customer service and technical support?

We work in an organization where we use many tools from many different suppliers. The relationship my organization has with CA is a much richer one; it's not just tool support.

Which solution did I use previously and why did I switch?

Originally, we used to spend hours and hours of spreadsheet time manually creating and keying data: massively inefficient and massively error-prone. Clearly, as part of a financial institution, we need to conform to regulations. Therefore, we needed an enterprise solution to make sure we could deliver regulation-compliant test data to suit our projects.

The initial driver in buying any tooling is the problem statement: what's the driver to get these things in? Once you realize there is so much more than just the regulatory bit – as I say, the time, cost, and quality aspects it can bring to testing – that's really the bigger benefit than just regulatory compliance.

How was the initial setup?

We've had the tool for about four or five years now within the organization. As you might expect, we first got the guys in not knowing anything about the tool and not really knowing how to deploy it, so we called on the CA guys to come in and show us how the tool works, but also how to apply it within our organization. We had a problem case we wanted to address, we used that as the proving item, and that's really where we started our journey toward a dedicated test data management function.

Which other solutions did I evaluate?

Important evaluation criteria: to be honest, it's got to be around what the tool does. A lot of the tools on the market do the same thing, so the questions are whether there are things that differentiate those tools, and what the organization's problem statement is that they're trying to fulfill. Once you've got the tool, that's great, but you need the people and process; without that – and it comes back to the relationship you have with the CA guys – you've just got shelfware and a tool. We went through a proper RFP selection process where we set our criteria, invited a few of the vendors in to demonstrate what they could do for us, and picked the one best suited to us.

What other advice do I have?

Rating: no one's perfect. You've got to go in the top quartile, so probably eight upwards. In terms of test data management solutions, I think it's the best out there. The way the tool is going, it's moving into other areas in TCO, and the integration with SV is a massive thing for us.

My recommendation is that this is absolutely best in breed. As well as buying the tool, it would be a mistake not to also invest in understanding how the tool integrates into the organization, and how to bring it to the tools team, the testing teams, and the environment teams you need to work with.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
it_user779256 - PeerSpot reviewer
Solutions Architect at American Express
Real User
Allows me to generate and manage synthetic data, but the interface could be better
Pros and Cons
  • "It allows us to create a testing environment that is repeatable. And we can manage the data so that our testing becomes automated, everything from actually performing the testing to also evaluating the results."

    What is our primary use case?

    Generate synthetic test data.

It has performed fine. It provides us with the capabilities we were anticipating.

    How has it helped my organization?

    It allows us to create a testing environment that is repeatable. And we can manage the data so that our testing becomes automated, everything from actually performing the testing to also evaluating the results. We can automate that process. Plus, we're no longer using production data.

    What is most valuable?

1. I am able to maintain metadata information based on the structures, and
2. I am able to generate and manage synthetic data from those.

    What needs improvement?

The interface – based on our unique use case, because we are an extremely unique platform – could be better. We have to do multiple steps just to create a single output. We understand that, because we are a niche architecture, it's not high on their list, but eventually we're hoping it becomes integrated and seamless.

As noted in my answer on "initial setup", I would like to see that I don't have to do three steps; rather, that it's all integrated into one. Plus, I'd like to know more about their API, because I want to be able to actually call it directly, passing in specific information so that I can tune the results to my specific needs for that test case. And make it so I can do it for multiple messages in one call.
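What the reviewer is asking for might look something like the following. The endpoint, payload fields, and generator name are all hypothetical, since this is a wished-for API rather than a documented one:

```python
import requests

# Hypothetical endpoint and payload -- illustrating the kind of call
# the reviewer wants, not a documented TDM API.
payload = {
    "generator": "platform_messages",  # made-up generator name
    "count": 25,                       # multiple messages in one call
    "overrides": {"card_type": "corporate", "currency": "USD"},
}
resp = requests.post("https://tdm.example.com/api/generate",
                     json=payload, timeout=30)
resp.raise_for_status()
messages = resp.json()["messages"]
```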

    What do I think about the stability of the solution?

    Stability is fine. It's stable. It's not like it crashes or anything like that, because it's just a utility that we use to generate data. Once we generate the data, we capture it and maintain it. We don't use the tool to continually generate data, we only generate it for the specific test case, and then don't generate it again. But it gives us the ability to handle all the various combinations of variables, that's the big part.

    What do I think about the scalability of the solution?

For our platform, scalability probably isn't really an issue. We're not planning on using it the way it was intended, because we're not going to use it to continually generate more data. We only want to generate specific output that we will then maintain separately and reuse. So the only time we will generate anything is when a different test case is needed, a different condition that we need to be able to create. So, scalability is not an issue.

    How are customer service and technical support?

Tech support is great. We've had a couple of in-house training sessions, and it's coming along fine. We're at a point now where we're trying to leverage some other tools, like Agile Designer, to start managing the knowledge we're capturing, so that we can then begin automating the construction of this component with Agile Designer as well.

    Which solution did I use previously and why did I switch?

    We didn't have a previous solution.

    How was the initial setup?

The truth is that I was involved in setup, but they didn't listen to me. "They" are other people in the company I work for. It wasn't CA that did anything right or wrong; it was that the people who decided how to set it up didn't understand. So we're struggling with that, and we will probably transition over. Right now we have it installed on laptops, and it shouldn't be; it should be server-based. We should have a central point where we can maintain everything.

So, the setup is fairly straightforward, except for the fact that there are three steps we have to go through: a pre-process, then the generation of our information, and then a post-process we have to perform, only because of the unique characteristics of our platform.
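Until the steps are integrated, a thin wrapper can at least present them as one command. A sketch under the assumption that each step is a standalone script (the script and config names are invented):

```python
import subprocess

def run_generation(config: str) -> None:
    """Chain the pre-process, generate, and post-process steps into one
    command, so callers see a single integrated step."""
    for step in ("pre_process.py", "generate_data.py", "post_process.py"):
        subprocess.run(["python", step, "--config", config], check=True)

run_generation("platform_specific.yaml")
```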

    Which other solutions did I evaluate?

    In addition to CA Test Data Manager, we evaluated IBM InfoSphere Optim. Those were the two products that were available to our company at the time when I proposed the idea of using it in this way.

    We chose CA because they had the capability of doing relationship mapping between data variables.

    What other advice do I have?

The most important criterion when selecting a vendor is support. And obviously it comes down to: do they offer the capabilities I'm interested in, at a reasonable price, with good support?

    I rate it at seven out of 10 because of those three steps I have to go through. If they get rid of those, make it one step, and do these other things, I'd give it a solid nine. Nothing's perfect.

    For my use, based on the products out there that I have researched, this is the best one.

    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    PeerSpot user
    it_user778575 - PeerSpot reviewer
    QA Director at Sogeti UK
    Real User
    We are able to create test data for specific business case scenarios; it's user-friendly
    Pros and Cons
    • "The most valuable feature is the Portal that comes with the tool. That helps make it look much more user-friendly for the users. Also its ease of use - even for developers it's not that complicated."
    • "They should make the Portal a little more user-friendly, make it even easier to configure things directly from the Portal."
    • "There were some issues with initial setup. It wasn't as smooth as we had thought. We ran into a network issue, a firewall issue, things like that. It wasn't something we could not fix. We worked with CA support and with the client's team to fix it. But there were issues, it took a lot of time to install and configure."

    What is our primary use case?

    We are using it to implement test data management.

    It is a new implementation so there were some challenges. But so far, it has been good.

    What is most valuable?

    The most valuable feature is the Portal that comes with the tool. That helps make it look much more user-friendly for the users. 

    Also its ease of use - even for developers it's not that complicated.

    It gives us the ability to 

    • mask the data
    • sub-set the data
    • synthetically generate test data
    • create test data for specific business case scenarios

    and more.
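Of those capabilities, masking is the easiest to picture. The sketch below is a generic illustration of deterministic pseudonymization, not the tool's algorithm: the same input always yields the same token, so joins across tables keep lining up after masking.

```python
import hashlib

def mask_pii(row: dict, fields=("name", "ssn", "email")) -> dict:
    """Replace sensitive fields with stable tokens; other fields
    pass through untouched."""
    masked = dict(row)
    for field in fields:
        if masked.get(field) is not None:
            digest = hashlib.sha256(str(masked[field]).encode()).hexdigest()[:10]
            masked[field] = f"{field.upper()}_{digest}"
    return masked

# Same input, same token -- so customer rows masked in two different
# tables still join on the masked value.
print(mask_pii({"name": "Ada Lovelace", "ssn": "123-45-6789", "acct": 42}))
```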

    What needs improvement?

    • Addition of more data sources.
    • Make the Portal a little more user-friendly, make it even easier to configure things directly from the Portal.

    For how long have I used the solution?

    Less than one year.

    What do I think about the stability of the solution?

    It is stable, but it is not where even CA wants it to be. There have been numerous releases going on and there are still some we are waiting for. But, overall it's good.

    What do I think about the scalability of the solution?

It is scalable. This particular tool is used by a certain type of engineer: TDM engineers. But the recipient of the tool can be anybody, so it can be scaled to as many licenses as the customer is willing to pay for. It's kind of expensive.

    How are customer service and technical support?

    Tech support has been very helpful.

They have been as responsive as they can be. I'm assuming they're very busy, and they are. They usually respond within the same day. And the requests that go to the technical support side are usually not that simple either, so I can understand that.

    Which solution did I use previously and why did I switch?

    We are partners with CA, so this was one of the strategic directions my company also wanted to take. And CA had the near-perfect solution, which we thought we should invest in, together.

    How was the initial setup?

    It was good. There were some issues. It wasn't as smooth as we had thought.

    We ran into a network issue, a firewall issue, things like that. It wasn't something we could not fix. We worked with the CA support and with the client's team to fix it. But there were issues, it took a lot of time to install and configure.

    Which other solutions did I evaluate?

We are a consulting company, so when we go to a client we do an evaluation. Often we have to tell them about the different products we evaluated. In this case, CA TDM has competition: Informatica has a similar product called Informatica TDM, and IBM has a similar product called IBM InfoSphere Optim. These are the main competitors of CA.

    What other advice do I have?

    When selecting a vendor the important criteria are 

    • ease of use
    • responsiveness of the technical support
• forward-looking products. By that I mean: do they have a plan for the next three months, six months, a year, rather than just making the product and then forgetting about it?

For this particular area, test data management – because I am involved in evaluating other companies' products as well – CA so far is the leader. I personally compare each feature across all the companies we evaluate, and so far CA is number one. There is still some improvement to be done, which CA is aware of. But I would advise a colleague that we can start with CA.

    Disclosure: My company has a business relationship with this vendor other than being a customer: Partner.
    PeerSpot user
    it_user797949 - PeerSpot reviewer
    Domain Manager at KeyBank National Association
    Video Review
    Real User
    Enables us to incorporate automation and self-service to eliminate all of our manual efforts
    Pros and Cons
    • "It removes manual intervention. A lot of the time that we've spent previously was always manually, as an individual running SQL scripts against databases, or manually going through a UI to create data. These solutions allow us to incorporate automation and self-service to eliminate all of our manual efforts."
    • "Core features that we needed were synthetic data creation, and to be able to do complex data mining and profiling across multiple databases with referential integrity intact across them. CA's product actually came through with the highest score and met the most of our needs."
    • "All financial institutions are based on mainframes, so they're never going to go away. There are ppportunities to increase functionality and efficiencies within the mainframe solution, within this TDM product."

    How has it helped my organization?

    The benefit is that it removes manual intervention. A lot of the time that we've spent previously was always manually, as an individual running SQL scripts against databases, or manually going through a UI to create data. These solutions allow us to incorporate automation and self-service to eliminate all of our manual efforts.

    What is most valuable?

Currently, it's the complex data mining that we do out there. Any financial institution runs into the same challenges we face: referential integrity across all databases, and finding that one unique piece of customer information that meets all the criteria we're looking for. All the other functions, such as subsetting and data creation, are fabulous as well.
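Conceptually, that kind of mining reduces to a query that insists every related table agrees before a row is picked. A sketch against a hypothetical schema (customers, accounts, loans), not the reviewer's actual databases:

```python
import sqlite3

# Find one customer whose rows satisfy the criteria in *every* related
# table, so the selected test data hangs together referentially.
conn = sqlite3.connect("test_copy.db")
row = conn.execute("""
    SELECT c.customer_id
    FROM customers c
    JOIN accounts a ON a.customer_id = c.customer_id
    JOIN loans    l ON l.customer_id = c.customer_id
    WHERE c.state = 'OH'
      AND a.status = 'OPEN'
      AND l.product = 'HELOC'
    LIMIT 1
""").fetchone()
print(row)
```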

    What needs improvement?

I think the biggest one will be the mainframe: all financial institutions are based on mainframes, so they're never going to go away. There are opportunities to increase functionality and efficiencies within the mainframe solution, within this TDM product. Certainly, it does what we need, but there are always opportunities to greatly improve it.

    What do I think about the stability of the solution?

    Stability for the past year and a half has been very good. We have not had an outage that has prevented us from doing anything. It has allowed us to connect to the critical databases that we need, so no challenges.

    What do I think about the scalability of the solution?

    We haven't run into any issues at this point. So far we think that we're going to be able to get where we need to. In the future, as we expand, we may have a need to increase the hardware associated with it and optimize some query language, but I think we'll be in good shape.

    Which solution did I use previously and why did I switch?

We were not using a previous solution. It was all home-grown items: a lot of automated scripting and some performance scripting, in addition to manual efforts.

As we looked at the solutions, some of the core features we needed were synthetic data creation, and the ability to do complex data mining and profiling across multiple databases with referential integrity intact across them. CA's product actually came through with the highest score and met most of our needs.

    What other advice do I have?

    I'd rate it about an eight. It provides the functionality that we're needing. There are always opportunities for improvement and I don't ever give anyone a 10, so it's good for our needs.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
Front Line Manager at an energy/utilities company with 1,001-5,000 employees
    Real User
    Leaderboard
    We use most of the test matching features in our testing processes
    Pros and Cons
    • "I like the integration a lot. We use the test matching feature and the ability to make and find data."
    • "When I run my App Test, I can directly connect with TDM. I can publish all my data into one table in TDM, then run my App Test directly."
    • "When we publish a lot data into the target, sometimes it is not able to handle it."
    • "The relationship between cables needs to be added."
    • "A lot of research, data analysis, and work needs to be done on the source system in the tool which requires a data expert. This tool definitely requires a senior person to work on this issue, as it might be a challenge for a tester."

    What is our primary use case?

We do a lot of creating data mods and use the test matching features. We also established the subsetting and cloning process, based on the client's requirements. Creating data mods and using the test matching features are what we have found fits our purpose. We use most of the test matching features in our testing processes, and the integration with App Test is also something we use heavily.

    What is most valuable?

We can always get different sets of data and obtain the most recent data. They have production refreshes, from which we can get subsets of the data to do our testing. We can also lock data down for a particular user's test, or generate data. E.g., if the data is not there, there are multiple forms and variables through which data can go, so we can generate it.

When I run my App Test, I can directly connect with TDM. I can publish all my data into one table in TDM, then run my App Test directly. Essentially, one test runs with 100 sets of data: I click one test and it runs against all 100 sets.

    I like the integration a lot. We also use the test matching feature and the ability to make and find data.
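The publish-then-run pattern described above maps naturally onto data-driven testing. A hedged pytest sketch, where the CSV stands in for the table published from TDM, and both the CSV name and calculate_quote (the call into the system under test) are hypothetical:

```python
import csv
import pytest

from myapp.billing import calculate_quote  # hypothetical system under test

# Hypothetical export of the rows published from TDM into one table.
with open("tdm_published_rows.csv", newline="") as f:
    ROWS = list(csv.DictReader(f))

@pytest.mark.parametrize("row", ROWS)
def test_quote_calculation(row):
    # One test definition, executed once per published row -- the
    # "one test, 100 sets of data" pattern described above.
    result = calculate_quote(row["usage_kwh"], row["tariff"])
    assert abs(result - float(row["expected_total"])) < 0.01
```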

    What needs improvement?

The relationship between tables needs to be added.

A lot of research, data analysis, and work needs to be done on the source system in the tool, which requires a data expert. This tool definitely requires a senior person, as it might be a challenge for a tester.

    For how long have I used the solution?

    One to three years.

    What do I think about the stability of the solution?

There have been major upgrades in the last couple of years. There was a stability issue when I started to work on it. After we upgraded to 4.0, a lot of those problems were solved, and there were advanced features as well. With 4.0, the product is now really stable and working fine.

The stability issue was that we used to log in and see errors. We had to work around them; for example, we would log in through the admin, run some queries, and log back in again. After the upgrade was done, everything now appears to be fine.

    What do I think about the scalability of the solution?

When we publish a lot of data into the target, sometimes it is not able to handle it. What we normally do is create the data margins of the source and try to publish short sets of data into the target. Many times the publish will fail; I think the reason is the huge sets of data that we publish. I am not sure if it is a tool issue, but I have seen it many times: it is not able to publish a huge set of data (thousands and thousands of rows), while with a few sets it works. When we try to publish a lot of data, there is a publish error every time. Though I have not tried it lately, we have seen this before.

    How are customer service and technical support?

    Tech support for TDM is fine. I have logged maybe one or two issues for TDM. Most of our issues are for App Test. 

    Which solution did I use previously and why did I switch?

    We did not use another solution before TDM.

    How was the initial setup?

    The setup was pretty straightforward.

    What's my experience with pricing, setup cost, and licensing?

Know all your data requirements before buying this product. There are a lot of features in this tool, which may or may not be useful to a particular company. Make sure of what your data requirements are.

    What other advice do I have?

I would certainly recommend this product because of the vast variety of data it provides for testing and its different features, like subsetting and cloning. I have heard of products that can do cloning and similar things, but there are many additional features in this tool that are very useful for testing and finding defects.

We found we mostly use one or two features, so you need to be very clear on what you need before choosing a product. Test Data Manager is good, and there are a lot of advantages you get from using the tool, especially for testing and integrating with App Test, which is something we use a lot.

    Disclosure: My company has a business relationship with this vendor other than being a customer: Partner.
    PeerSpot user
    it_user558156 - PeerSpot reviewer
    Quality Assurance at a logistics company with 1,001-5,000 employees
    Real User
    With synthetic data generation, we can test applications with three or four times the production load. We would like to see it generate synthetic data for non-relational DBs.

    What is most valuable?

One of the most valuable features to us is synthetic data generation. We generate a lot of synthetic data for our performance testing, bulking up our performance environment to see how much load it can sustain. We've been doing it for relational data structures.

At a recent conference, I was talking to the product management team. We have a big use case for synthetic data generation for non-relational data structures. They have it on their roadmap, but we would love to see that come out very soon. With modernization, relational databases are going away and non-relational databases are coming up. That's a big use case for us, especially with the graph database. We have a big, huge graph database, and we need to generate a lot of synthetic data for it.

    How has it helped my organization?

It has really changed the culture in the company, because nobody could ever imagine generating millions of records. Even production systems have just a couple of million records. When you want to test your applications with three or four times the production load, you can never actually achieve it, because there is no other way besides synthetic data generation. You can't have that volume of data in your DBs. Even if you subset your entire production, you would get just 1x of it. To get 3x or 4x, you have to go to either data cloning or synthetic data generation.

    What needs improvement?

The solution can really improve on non-relational data structures, because that's a big industry use case we are foreseeing. And it's not only databases; I'm also talking about request-response pairs: the services data generation. We use it so much for virtualization. If we could create web services request-response pairs non-relationally, supporting GET, POST, and so on, that would be a big win for us.
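For the request-response-pair use case, the generation itself is easy to sketch; the gap the reviewer points to is tooling support. A minimal illustration emitting synthetic pairs a virtual service could be trained on (the message shapes are invented):

```python
import json
import random

def make_rr_pair() -> dict:
    """One synthetic GET request plus its canned response."""
    account = str(random.randint(10**9, 10**10 - 1))
    return {
        "request":  {"method": "GET",
                     "path": f"/accounts/{account}/balance"},
        "response": {"status": 200,
                     "body": {"account": account,
                              "balance": round(random.uniform(0, 5000), 2)}},
    }

with open("rr_pairs.json", "w") as f:
    json.dump([make_rr_pair() for _ in range(100)], f, indent=2)
```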

    For how long have I used the solution?

I've been using CA Test Data Manager since it was first released as Datamaker, about 2.5 years ago, and I've been using it pretty regularly since then. It has undergone a big, big transformation, and there is a lot of good stuff coming up.

    What do I think about the stability of the solution?

We still use the old thick-client version of it, but we have seen the demos as it moves to the web interface. I think it's going to be very stable going down the line.

    What do I think about the scalability of the solution?

It is not very scalable, because generating even a couple of million records takes six to seven hours. If cloud muscle power could be included with it – if the synthetic data generation could be done using a cloud instance; it's all synthetic data, so there is no PII in it – if you could have a cloud feature where the data is generated in the cloud, on a machine with many GB of RAM, that would be great for us.

How are customer service and technical support?

Technical support is getting better. It's getting better and slower at the same time. When I started my interaction with Grid-Tools, they used to work on the bleeding edge of technology. Whatever enhancements we submitted, the turnaround time was a couple of weeks, and we would get whatever new features we needed. The processes were really ad hoc: rather than writing support tickets, you would literally reach out to somebody you knew who worked on the product, and they would pass your ticket or enhancement request from person to person. Now the process is much more streamlined, but we have lost that turnaround-time capability.

    What other advice do I have?

When selecting a vendor, my personal requirements would be: the tool should be stable, and there should be a knowledge repository for it. A PPT presentation just gives you an introduction to the tool and its capabilities. To really get your hands dirty, you need an in-depth video or documentation to work from.

I think the more webinars you do, the better. If you can record the webinars and archive them, that would be great. And if you could solve some more complex use cases in your demos, that would be great too. Most companies give you a demo of new features with zero complexity; then, when you're looking at the demo and trying to solve your own use cases, you just get stuck. You can't proceed any further, because your use cases are really more complex than what was shown in the demo. On that front, if they can come up with more in-depth videos showing real, complex use cases, that's going to be great.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    Senior Technology Architect at a tech vendor with 10,001+ employees
    Real User
    Top 20
    Enables continuous testing and integration by generating the required data in advance
    Pros and Cons
    • "The combination of extract, mask, and load, along with synthetic data, is what is generally needed by any of our clients, and CA TDM has really good compatibility in both of the areas."
    • "CA is one of the few tool suites that has end-to-end features. Whatever role you are playing, whatever personality you are trying to address, it has that feature. For example, CA Service Virtualization goes hand-in-hand with TDM."
    • "It has a feature called TDM Portal, where testers can find test data by themselves, based on multiple models. They can reserve the data so that it belongs to one group or individual. Obviously, that data is not available to anybody else... This feature is for one environment. But if a different group of testers wanted that data for a different environment, they can't use it via CA TDM. That feature doesn't exist."

    What is our primary use case?

TDM is something people do all the time; it's not something you do from scratch. For every client there is a different scenario, and there are a lot of use cases, but a couple of them are common everywhere. One is data that is not there in production: how do you create that data? Synthetic data creation is one use-case challenge that is common across the board.

In addition, the people who do the testing are not very conversant with the back end or with the different types of databases, mainframes, etc. Most of the time they don't write very good SQL, so they struggle to find the data they are going to test with. Data mining is therefore a major concern in most places.

The use cases are diverse. You cannot point to many common things and say this will work or this will not. Every engagement, even though it's a TDM scenario, is different. Some places have very good documentation, so you can start directly with extraction, masking, and loading. In most places that is not possible, because the documentation is not there. There are multiple use cases; one size does not fit all.

    In the testing cycle, when there is a need for test data management tools, we use CA TDM to put up the feed.

    How has it helped my organization?

If you take DevOps as an example: suppose development has happened and the binary code has been deployed to a certain server. To do continuous testing or continuous integration, you need the test data. CA TDM has a feature where it can generate the required data beforehand and keep it with your test cases, however you need it. If you are using JIRA, it will put the test data in JIRA; if you are using ALM, it will give the data to HPE ALM. By the time you run your test cases in an automated way, the data is already there.

And the data provided will work across all of these cycles. For example, if you want to run a set of automated scenarios, it will work with that; if you want it to work in various regression cycles, it will work with that; and the same data, or a different set you provide, will work in unit cycles as well. CA has the ability to provide all the data on demand as well as on the fly.
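The shape of that hand-off is simple to sketch: pre-generate one data row per test case and publish it where the automated run (or the JIRA/ALM attachment step) can pick it up. Hypothetical identifiers throughout:

```python
import csv

# Hypothetical: one pre-generated data row per automated test case,
# written where the CI run or the JIRA/ALM attachment step finds it.
test_cases = ["TC-101", "TC-102", "TC-103"]

with open("testdata_by_case.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["test_case", "customer_id", "order_state"])
    for i, tc in enumerate(test_cases, start=1):
        writer.writerow([tc, f"CUST{i:05d}", "PENDING"])
```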

    What is most valuable?

    The combination of extract, mask, and load, along with synthetic data, is what is generally needed by any of our clients, and CA TDM has really good compatibility in both of these areas.

    CA is one of the few tool suites that has end-to-end features. Whatever role you are playing, whatever personality you are trying to address, it has that feature. For example, CA Service Virtualization goes hand-in-hand with TDM. In addition, TDM has automation. CA has most features that complement the whole testing cycle.

CA TDM has open APIs. Suppose we are going to use a set of Excel data, pull out the feed, and help Service Virtualization by providing a set of dynamic responses to the requests the service layer is getting; how do we do that? We can use the API layer once the whole process stabilizes, after three months or so. With other tools, that takes longer. This open API capability is good in CA TDM.

    What needs improvement?

There are multiple things that could be improved in CA TDM. It has a feature called TDM Portal, where testers can find test data by themselves, based on multiple models. They can reserve the data so that it belongs to one group or individual; obviously, that data is then not available to anybody else. Without this tool, if somebody goes through the back end via SQL and pulls that data, you can't do anything. But through the Portal, if somebody reserves the data, it's theirs. However, this feature is for one environment. If a different group of testers wants that data in a different environment, they can't reserve it via CA TDM; that feature doesn't exist. You have to build a portal or bridge the two environments. That is a big challenge.
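The limitation reads like a keying problem: reservations are scoped to a single environment. As a toy model – not the Portal's implementation – keying the reservation on both row and environment shows what cross-environment reservation would take:

```python
import sqlite3

conn = sqlite3.connect("reservations.db")
with conn:
    conn.execute("""CREATE TABLE IF NOT EXISTS reservation (
        row_key TEXT NOT NULL,
        env     TEXT NOT NULL,     -- reservations are per environment
        owner   TEXT NOT NULL,
        PRIMARY KEY (row_key, env) -- same row, separate claim per env
    )""")

def reserve(row_key: str, env: str, owner: str) -> bool:
    """Return True if the row was free in this environment and is now ours."""
    try:
        with conn:
            conn.execute("INSERT INTO reservation VALUES (?, ?, ?)",
                         (row_key, env, owner))
        return True
    except sqlite3.IntegrityError:
        return False  # someone else already holds it in this environment

print(reserve("CUST00042", "SIT", "team-alpha"))   # True: claimed
print(reserve("CUST00042", "SIT", "team-beta"))    # False: already held
print(reserve("CUST00042", "UAT", "team-beta"))    # True: different env
```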

    For how long have I used the solution?

    Three to five years.

    What do I think about the scalability of the solution?

You can envision TDM happening at three layers: the application layer, a cluster layer, and an end-to-end layer. The data required for the first and second levels is quite different; you can't use first-level data at the second level. And the data required for the third layer, end-to-end testing, is very different from the first two.

    So when we look at scalability, we have to see how we are creating the "journey" from one layer to another. For example, if we are working in the customer area and then we jump to payments, we have to see what the common things are that we can scale and what areas we have not tested and address them.

    How was the initial setup?

The initial setup is always complex. I have been working in testing environments for the last 10 or 11 years, and from what I have seen, most companies lack the basic building blocks for testing.

Suppose I have a system X, which gives data to system Y, which gives data to system Z. Nobody has a clue how that data gets there for testing, because that end-to-end testing has never happened. We cannot give someone data that will be rejected by system Z; we have to give him data that will pass across all the systems. That means we have to understand the mapping file behind it. However, the mapping file is often not there, so we have to create it.
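A mapping file in this sense can be as simple as a per-field rename table recording how each attribute is known as it flows from X to Y to Z. A hypothetical sketch:

```python
# Hypothetical mapping file: how a field is named as it flows X -> Y -> Z.
FIELD_MAP = {
    "cust_no":   {"system_y": "customer_id", "system_z": "CUST_ID"},
    "open_date": {"system_y": "opened_on",   "system_z": "OPEN_DT"},
}

def propagate(row_x: dict) -> dict:
    """Rename fields so a record created for system X is also valid
    downstream in system Z -- the end-to-end concern raised above."""
    return {FIELD_MAP[k]["system_z"]: v
            for k, v in row_x.items() if k in FIELD_MAP}

print(propagate({"cust_no": "C001", "open_date": "2018-04-01"}))
```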

We have to ask about the various models: are they logical or physical? Somebody may have created a set of logical data models 20 years back, but it is not usable now. We have to work with the tool to create that set of data.

We also have to consider the scheme of values. IMS is different from an RDBMS. We have to find out which segment has more data, which segment is complete, and which segment is giving data to other systems. When we talk to the people who are working on a data set that is 20 or 30 years old, 90 percent of the time they don't have a clue. They are working with various tools, but they don't have a clue how it happens.

So there are always multiple challenges at the start. But then we do due diligence for six or eight weeks, and it clears up all the cobwebs: what is there, what is not there, and the roadmap. That puts a foot forward so we can say, "Okay, this is how we should move, and this is what we should be able to achieve in a given timeline."

    The initial deployment will take a minimum of three to four weeks.

    The second step is a PoC or a pilot to run with a set of use cases.

    Which other solutions did I evaluate?

Apart from CA TDM, I've used IBM InfoSphere Optim, which was the number-one TDM tool for quite some time, and I've used Delphix. Now a couple more tools have come onto the market, like K2View. At one point, about two years back, CA TDM was the only tool that could do synthetic data.

CA TDM and Optim have different ways of working. Compared with Optim, CA TDM has the major advantage of synthetic data creation; no other tool was able to do that. Only in the last two years has IBM Optim come up with synthetic data capabilities, but what it does is create a superset: if you have sample data, it will create a superset of that data. That is not how CA, or the other tools, approach it.

There are multiple sites that also create synthetic data, but the major challenge comes into play once you need to put that data back into the database.

    What other advice do I have?

There are, let's say, five market-standard tools you can choose from. If you choose CA TDM, bring out all your questions for your PoC journey; you have four weeks to get answers to whatever questions you have. There is a set of experts at CA, and partners have expertise as well. Both will be able to answer your questions.

Next, you need to supply a roadmap. For example: "I need X, Y, and Z to be tackled first." And the roadmap that comes out of the due diligence needs to be followed word for word, so proper planning is essential.

There are three teams at the base of your TDM journey: a central data commandment team, a federated team, and a third for creating the small tools you might require at a given point in time. To start, you need three to four people. But we have gotten into all types of data – big data, RPA, performance, etc. – and wherever data is needed, our team provides it. In a bank, for example, where I did two rounds of due diligence – one lasting eight weeks and the other, three years later, lasting six weeks – we even implemented bots. When we started there, the team was 50. Even though we automated the whole thing, more than anyone might have imagined, the team is still 40-plus.

    Disclosure: My company has a business relationship with this vendor other than being a customer: Preferred Partner.
    PeerSpot user