What is our primary use case?
Data masking, which is exactly what this tool was created for. We are going to use it to populate test and development environments.
We manage a lot of customer data, and the idea is to avoid approving or granting broad permissions to read all of it. We need to mask the data, but we still need to work with it, which means developers need access to a lot of data.
We needed a tool that makes it easy to provide developers with anonymized data. This is probably the only tool with so many sophisticated features. We need those features for masking/anonymizing data while preserving its statistical distribution, and for preparing large volumes of test/dev data.
How has it helped my organization?
This tool is super fast and it has solved many of our issues. It is also much better than many other solutions on the market. We've already tested several, and this one currently looks the best.
We can deliver data, first, securely; second, safely; and third, without extra permissions. We no longer need to go through a whole procedure to grant developers access to production data. And anything built on the masked data will work with production data, because it is almost the same data, just not real: the structure and the context are the same, but the values are different.
The features are very technical and are exactly what we need. We have rules, especially from security and compliance, and we need to handle our customer data very securely and carefully. There is no other product that gives you these capabilities.
What is most valuable?
- There are lots of filters, templates, vocabularies, and very fast functions to mask data according to your needs, including masking that preserves the statistical distribution.
The functionality of this tool is something that changed our work. We need to manage the data, and developers need to work on realistic data. On the other hand, you don't want to give that data to the developers, because it is customer data they shouldn't see. This tool can deliver an environment that is safe for developers. They can work on a large amount of proper, realistic data, but even though it looks real, it isn't, because it is masked. For the developer it's perfectly usable: instead of a customer's actual date of birth, he gets a different date of birth. It's realistic data, but not the exact data, because it's already masked.
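To illustrate the idea (this is my own sketch of the concept, not TDM's actual function; the name `mask_dob` and the secret parameter are hypothetical), a deterministic, realism-preserving date-of-birth mask might look like this:

```python
import hashlib
from datetime import date, timedelta

def mask_dob(dob: date, secret: str = "rotate-me") -> date:
    """Deterministically replace a date of birth with a plausible one.

    Hashing the original value together with a secret keeps the mapping
    stable across runs (so the same customer always gets the same masked
    date), while shifting the date by up to two years so the real DOB
    is never exposed.
    """
    digest = hashlib.sha256(f"{secret}:{dob.isoformat()}".encode()).digest()
    offset = int.from_bytes(digest[:4], "big") % 1461 - 730  # -730..+730 days
    return dob + timedelta(days=offset)
```

The masked value is still a valid date of roughly the right vintage, which is what makes the data "actual but not true" for a developer.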
The whole process is done by functions which are compiled on the source environment itself. Normally, you take the data from the source, you process it - for example, mask it - and then you load the masked data into the destination. With this solution, it's completely different.
The functions are compiled inside the source environment, which means they are amazingly fast, and the data is masked on the source already. When you extract it, you are already taking masked data from the source, so you can copy it even over an unencrypted pipe.
These are two advantages you cannot find anywhere else. Most tools - Informatica, for example - take the data as it is, in its original, unmasked form; then you mask it on the Informatica server, and then you send it to the destination. Here, in TDM, you take data that is already masked.
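The difference between the two architectures can be sketched like this (my own illustration; the SQL and the UDF names `mask_name`/`mask_dob` are hypothetical, not TDM's real API):

```python
# Push-down style: the masking functions live inside the source
# database, so the SELECT already returns masked rows and the data
# makes only one hop, source -> destination.
PUSH_DOWN_SQL = """
SELECT customer_id,
       mask_name(full_name) AS full_name,   -- hypothetical in-database UDFs
       mask_dob(birth_date) AS birth_date
FROM customers
"""

# Middle-server style (the Informatica pattern described above): raw,
# unmasked rows leave the source and are masked on a server in between
# before being loaded, which adds a hop and exposes real data in transit.
def middle_server_copy(source_rows, mask, load):
    for row in source_rows:   # unmasked data crosses the wire here
        load(mask(row))       # masking happens on the intermediate server
```

With push-down, the extra hop disappears entirely, which is why the extracted stream can even travel over an unencrypted pipe.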
What needs improvement?
If you want to automate something, you need to figure it out yourself; there is no easy way. The software is Windows-only, and I am missing terminal tools and an API for it.
The fact that it runs on Windows might be a problem from some perspectives. From our perspective it is, because we need a separate team to deploy to our Windows machines. It's a con, though not a big one.
They have already improved this product since our testing of it, so it may be that the following no longer applies.
The interface is definitely one you need to get used to. It's not a modern interface that is clear and easy to navigate; it's an interface from some years back that you have to get to know.
Also, we are using a specific database: not Oracle or Microsoft SQL Server, but Teradata. There are some things their software doesn't have. For example, it doesn't deliver data in the fastest possible way; there are faster methods.
We asked CA if there was any possibility of implementing our suggestions and they promised us they would, but I haven't seen the product for some time, so maybe they have already been implemented. The requests were very specific to the database we use, Teradata. That was one of the real issues.
Overall, there was not much, in fact, to improve.
For how long have I used the solution?
Less than one year.
What do I think about the stability of the solution?
We didn't face any issues with stability.
The only problems we had, and asked CA to solve, were some very specific things related to our products. They were not core issues; it was more, "We would like to have this because it's faster, or that because it's more robust or valuable."
What do I think about the scalability of the solution?
I cannot answer because we only did a PoC, so I have no idea how it will work if there are a couple of designers working with the tool.
Still, I don't foresee any issues, because only a few people will work on the design of the masking and the rest will be done at the scripting level, so it's possible we won't see any scalability problems at all.
How are customer service and technical support?
During the PoC we had a support person from CA assigned to us who helped in any way we needed.
Which solution did I use previously and why did I switch?
We didn't use any other solution; we simply needed to have this implemented and tried to figure it out. We looked at what was available on the market, and TDM was our very first choice.
How was the initial setup?
I didn't do the setup by myself, it was done by a person from CA. It didn't look hard. It looked pretty straightforward, even with configuration of the back-end database.
Which other solutions did I evaluate?
After doing our PoC we tried to figure out if there was any other solution that might fit. We tried and, from my perspective - because I was responsible for the whole project - there was no solution we could use in the same way or even a similar way. This product fits our compliance and security requirements very tightly, which is important.
There aren't any real competitors on the market. I think they simply found a niche and started to develop it. We really tried; there are many options out there, but some features are specific to this product, features you might need if, for example, you work for a big organization. Those features aren't in any other product.
There are many solutions for masking data - there are even very basic Python modules you can use - but you need to take the data from the source, mask it, and deliver it to the destination. If you have a big organization like ours and you have to copy a terabyte of data, that will take hours. With this solution, that terabyte is done in a couple of minutes.
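For comparison, the "basic Python" approach mentioned above can be as simple as this (my own sketch; `pseudonymize_email` is a hypothetical helper, and in practice you would still have to extract, mask, and reload every row):

```python
import hashlib

def pseudonymize_email(email: str) -> str:
    """Hash the local part of an address so the masked value is
    irreversible but still shaped like a real email."""
    local, _, domain = email.partition("@")
    token = hashlib.sha256(local.encode()).hexdigest()[:12]
    return f"{token}@{domain}"
```

This works, but it runs row by row on whatever machine executes it, which is exactly the extra hop that makes a terabyte take hours instead of minutes.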
What other advice do I have?
We did a proof of concept with TDM to see if the solution fits our needs. We did it for a couple of months, did some testing, did some analysis, and tried to determine if it fit our way of working. Now we are going to implement it in production.
If you have a big amount of data to mask and need to deliver it conveniently and easily, there is no other solution. Configuration is easy. It's built slightly differently - the design is slightly different from any other tool - but the delivery of the masked data is much smoother than in any other solution. You don't need a stepping stone: you don't need to copy data somewhere, mask it, and then send it, because the data you copy is already masked. It is masked on the fly, before it is copied to the destination. You don't need a server in the middle. In my opinion, this is the biggest feature this software has.