
IBM InfoSphere Optim Test Data Management (TDM) Overview

What is IBM InfoSphere Optim Test Data Management (TDM)?

IBM Optim solutions manage data from requirements to retirement. They boost performance, empower collaboration, and improve governance across applications, databases and platforms. By managing data properly over its lifetime, organisations are better equipped to support business goals with less risk.

IBM InfoSphere Optim Test Data Management (TDM) is also known as IBM Optim Test Data Management, Optim Test Data Management, IBM InfoSphere Optim Test Data Management.

Buyer's Guide

Download the Test Data Management Buyer's Guide including reviews and more. Updated: November 2021

IBM InfoSphere Optim Test Data Management (TDM) Customers

Dignity Health

Archived IBM InfoSphere Optim Test Data Management (TDM) Reviews (more than two years old)

SK
Technical Lead at Wipro Technologies
Consultant
Data archiving reduces application performance issues and storage

What is our primary use case?

Data archiving on some of the packaged applications, test data management and data masking on custom as well as packaged applications.

How has it helped my organization?

Performance improvement in the application by archiving the obsolete data on some of the packaged applications.

What is most valuable?

  • Data archiving, which reduces application performance issues and storage.
  • Test data management.
  • Data masking for GDPR.

What needs improvement?

The self-service user interface for test data management needs improvement in order to better cater to test data needs.

For how long have I used the solution?

More than five years.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
OM
Data Modeler Manager / Application Development Manager at CIGNA Corporation
Real User
Even when a masking-on-demand feature was provided, we decided to write our own to meet the actual business needs

What is our primary use case?

Provide production-like data to lower environments, allowing us to eliminate risks of data exposure.

How has it helped my organization?

While allowing us to eliminate all sensitive data in lower environments, it also allows us to integrate in-house developed libraries.

What is most valuable?

Even when a masking-on-demand feature was provided, we decided to write our own to meet the actual business needs. Currently embedded in an Oracle instance, it allows us to reach all kinds of RDBMS.

What needs improvement?

Areas of improvement: 

  • Installation is too cumbersome.
  • The GUI is very clunky, with a Windows 3.1 look and feel.

Additional features: unless I missed something, direct connectivity to DB2i, rather than going through a federated server.

For how long have I used the solution?

More than five years.

What do I think about the stability of the solution?

Fairly stable; I moved from 9.1 through 11.3 FP6 with no incidents.

What do I think about the scalability of the solution?

IBM InfoSphere Optim Test Data Management is available on the following platforms: Windows, Linux, *nix, and z/OS. This presents a great opportunity to scale code and develop multiplatform libraries, as I did: update the code once and deploy to all platforms.

How are customer service and technical support?

Great, up to the point when you actually figure it out and no longer require assistance. Unless you need support to interpret the log files.

How was the initial setup?

Fairly simple once the documents were digested and skimmed for the target platforms.

What about the implementation team?

In-house, by myself.

What was our ROI?

N/A.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
SK
Technical Lead at Wipro Technologies
Consultant
Data Archiving saved storage and improved production performance

What is our primary use case?

I have worked with the IBM Optim tool and implemented all four of its solutions - Data Archiving (DG), Test Data Management (TDM), Data Masking (DP), and Application Retirement - in automobile, finance, and healthcare organizations.

How has it helped my organization?

Data archiving saved much storage and improved production performance. We were able to achieve data privatization.

What is most valuable?

Its data masking feature is very effective, and the latest improvement is UMask, which I expect to be another powerful add-on.

What needs improvement?

The Core Optim was much better than the newer front-end release Designer & Manager, which added more complexity. However, the newer release has more elaborate reports.


For how long have I used the solution?

More than five years.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
EE
Database Security Consultant at a tech services company
User
Improved the quality of test data in non-production environments

What is our primary use case?

I have implemented IBM Optim Test Data Management on several clients from the financial and healthcare industries. Dozens of database servers were covered and running RDBMS, like Oracle, MS SQL Server, and DB2.

How has it helped my organization?

My clients improved the quality of test data in their non-production environments and the privacy of sensitive information.

What is most valuable?

Data masking features are the most valuable, as non-production environments are used by diverse parties and data privacy has to be guaranteed.

What needs improvement?

I would like to be able to combine different masking functions on the same column map, making more complex masking without the need to write a Lua procedure.
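The wish to combine several masking functions on one column map, without writing a Lua procedure, is essentially function composition. A minimal Python sketch of the idea follows; the masking functions and the `column_map` structure are invented for illustration and are not Optim's API.

```python
# Conceptual sketch: chain several small masking functions over one column.
def shuffle_digits(value: str) -> str:
    """Deterministically rotate the digits so the format is preserved.
    (A toy transform, not a secure mask.)"""
    digits = [c for c in value if c.isdigit()]
    rotated = digits[1:] + digits[:1]
    it = iter(rotated)
    return "".join(next(it) if c.isdigit() else c for c in value)

def mask_tail(value: str, keep: int = 4) -> str:
    """Replace all but the last `keep` characters with '*'."""
    return "*" * max(len(value) - keep, 0) + value[-keep:]

def compose(*funcs):
    """Apply masking functions left to right to a single column value."""
    def masked(value):
        for f in funcs:
            value = f(value)
        return value
    return masked

# Hypothetical column map: one column, two stacked masking functions.
column_map = {"card_number": compose(shuffle_digits, mask_tail)}
print(column_map["card_number"]("4111-2222-3333-4444"))
```

Chaining keeps each masking function small and testable, which is the effect the reviewer currently has to hand-build in a single Lua procedure.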


For how long have I used the solution?

More than five years.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user642165
IT Leader in Global Technology, Corporate Systems at a insurance company with 10,001+ employees
Vendor
The data privacy function masks PII and PHI data.

What is most valuable?

We use these functions:

  • Archive – archive data to PST files, we do not archive data to the archive DBMS
  • Data privacy – masking of PII and PHI data
  • Subsetter – test data management

This product archives all types of relational databases. It is the original archive solution for business.

How has it helped my organization?

Archive

Moving data that is no longer needed in daily processing on to archive files, resulting in:

  • Faster query performance.
  • Shorter batch execution times.
  • Stops/slows the growth of disk space. We don’t have to constantly add disk space to the DBMS. After data is archived, we do not shrink the DB on non-processing days. We also do not allow the tool to delete the data that was archived. Instead, we work with the application DBAs to move the data onto Oracle partitions, rather than deleting data and watching the commits process on the transaction logs; a misstep can throw the database into recovery mode. We simply drop the partition holding the data that was archived.

Data Privacy

  • All PII/PHI data in non-production environments is masked.
  • Used during end-to-end testing. By using the same rules for all data, we ensure data integrity across all of our applications.
  • Ensures customer data is not exposed in non-production environments.

Subsetter (Test Data Management)

  • In the past, PeopleSoft admin would build non-production environments using chunks of transaction data, including all the static/reference data, resulting in broken chained data and a lot of useless data.
  • By selecting logical slices of data, we can ensure 100% data integrity and smaller databases. We can also pick and choose specific types of data to test and then combine that with data privacy. The result is an efficient test bed of data that is optimized for performance and testing criteria, and is PII/PHI compliant.
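The "logical slice" approach described above can be illustrated with a small sketch: start from chosen rows in a driving table and pull only the related child rows, so the subset stays referentially intact. The tables, columns, and driving key below are invented; a real subset would follow the application's actual foreign-key relationships.

```python
# Illustrative slice extraction over in-memory tables (invented schema).
orders = [
    {"order_id": 1, "customer_id": 10},
    {"order_id": 2, "customer_id": 20},
    {"order_id": 3, "customer_id": 10},
]
order_lines = [
    {"order_id": 1, "sku": "A"},
    {"order_id": 2, "sku": "B"},
    {"order_id": 3, "sku": "C"},
]

def extract_slice(customer_ids):
    """Return orders for the chosen customers plus their child rows,
    so no order line ever points at a missing parent order."""
    picked = [o for o in orders if o["customer_id"] in customer_ids]
    ids = {o["order_id"] for o in picked}
    lines = [l for l in order_lines if l["order_id"] in ids]
    return picked, lines

picked, lines = extract_slice({10})
print(len(picked), len(lines))   # 2 orders and their 2 lines
```

Because every child row is selected through its parent's key, the slice carries no dangling references, which is the property the reviewer contrasts with "broken chained data".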

What needs improvement?

IBM treats non-production deployments with lower priority when resolving problems. When we archive, we select a similar-sized non-production environment first, before we archive production. We need the volume to determine how big to make the archive files and how long the archive jobs will run.

For more detail:

As the project manager for an archive solution, we first test our solution before deploying it to production. This involves the IT-AD folks who maintain the application and the business folks who own the data. Like any project plan, it has a start date and an end date, with fudge built in to accommodate unforeseen delays.

We build the archive solution in a non-production environment with a production-like volume of data; call this our test environment. When we encounter a problem that we can't resolve, we reach out to IBM. They build an environment to simulate the conditions that are causing our outage. Because this is not a production environment, IBM assigns a lower priority to the resolution. In the past it has taken from weeks up to two months for a fix, delaying the completion of the project plan beyond the fudge added to it.


For how long have I used the solution?

I have been using this product since it was originally sold by Princeton Softech.

What do I think about the stability of the solution?

It is a complex environment to maintain between keeping the Optim server releases working with the DBMS releases and the OS file system releases. We have a development environment (used for testing new software, releases, etc.), QA for testing the application archive, production for archiving production environments by application, and DR (disaster recovery). All changes to production need to be in sync with DR in case of a datacenter outage.

We keep all archived PST data on EMC Centera drives, which have bi-directional replication between production and DR sites.

What do I think about the scalability of the solution?

Archive – There are two deployment methods:

  • Lazy Susan option – We deploy once and can expand effortlessly. This solution does not need any additional hardware to archive a new application. Zero scalability issues.
  • Standalone option – Each new archive requires new hardware (Optim Server, Optim meta-data DBMS, Citrix).

Data Privacy:

  • It is deployed onto multiple mainframe engines.

How are customer service and technical support?

IBM technical support uses labs to mimic our environment, which is fine. But we archive a non-production environment first, prior to archiving an application's production database. Doing it this way, we can keep our business customers calm, as we can show we archive every record (record count) and reconcile down to the penny. If we are off, we do not move the archive to production. But IBM treats our archiving of non-production databases with lesser priority. In the past, we have waited months for IBM to resolve a problem. Our only recourse has been to light a fire under the sales reps.

One issue is that IBM only wants to deal with one person. It makes sense from their point of view, to have a single point of contact. But from the customer side, it is a pain. Especially when the person is on vacation or out sick, everything grinds to a halt.

Which solution did I use previously and why did I switch?

Optim established itself in-house because of its archive feature. We expanded into the data privacy and subsetter functions in later years.

EMC offers a product that archives PeopleSoft modules, which we purchased. Sadly, we could not get it to work properly. We soon discovered performance and scalability issues. When we tried archiving data, it was very sluggish. In the end, we used it to archive standalone tables, which proved useless.

How was the initial setup?

Initial setup was straightforward.

What's my experience with pricing, setup cost, and licensing?

Pricing is based on the number of CPUs or mainframe MIPS on the application DBMS. Over time, it made more sense to obtain an enterprise license with unlimited deploys.

Which other solutions did I evaluate?

All products go through a proof-of-concept to ensure they work as advertised and work within our environment.

I met with Informatica, as they offer a similar product that performs the same functionality as Optim. When I asked why Informatica is better than IBM, I was told their technical support would smile when we asked questions. Needless to say, there is no reason to leave IBM.

What other advice do I have?

  • Reduces the need to keep expanding data storage
  • Faster query performance with shorter batch execution times
  • Consistent data masking and data integrity in non-production environments
  • Efficient test data management
Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user651513
Director-Projects at a tech services company with 10,001+ employees
Consultant
We can use match keys and compare the records before and after masking. I would like to see a drop-down like interface to generate a policy for masking.

What is most valuable?

I really appreciate using the match keys and comparing the records before and after masking. This is done by joining multiple tables to form a single record. This helps to get a faster validation of data.

How has it helped my organization?

The product has helped to streamline the Test Data Management (TDM) process. Without this product, participation from several stakeholders, such as DBAs, admins, and the environment refresh team, was required.

What needs improvement?

Scripting complex data masking requirements in Lua could be replaced with a simpler, user-friendly interface.

Instead of a free text editor, which is used to code the functions (mostly "if-else" conditions), a drop-down like interface can be created to join various conditions and generate a policy for masking.
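The suggested drop-down interface amounts to representing masking rules as data instead of free-text if-else code. Below is a hedged Python sketch of a first-match-wins policy; the rule fields and mask functions are assumptions for illustration, not the product's format.

```python
# A masking policy as an ordered list of (condition, mask) rules,
# evaluated like stacked if/elif branches; first matching rule wins.
POLICY = [
    {"when": lambda row: row["type"] == "ssn",
     "mask": lambda v: "***-**-" + v[-4:]},
    {"when": lambda row: row["type"] == "email",
     "mask": lambda v: "user@" + v.split("@")[1]},
    {"when": lambda row: True,               # catch-all default
     "mask": lambda v: "*" * len(v)},
]

def apply_policy(row):
    """Apply the first rule whose condition matches the row."""
    for rule in POLICY:
        if rule["when"](row):
            return rule["mask"](row["value"])

print(apply_policy({"type": "ssn", "value": "123-45-6789"}))  # ***-**-6789
```

A GUI could then build `POLICY` from drop-downs rather than asking the user to write the branching logic by hand, which is the improvement the reviewer is asking for.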

For how long have I used the solution?

I have used the product for three years.

What do I think about the stability of the solution?

We have not encountered any issues with respect to stability.

What do I think about the scalability of the solution?

We faced some challenges with a combination of complex referential integrity and a higher volume of data.

How are customer service and technical support?

We have enjoyed excellent technical support.

Which solution did I use previously and why did I switch?

IBM Optim was the first solution that we used.

How was the initial setup?

This response is specific to a z/OS environment. The initial setup is complex and requires assistance from mainframe administrators.

The initial setup should be made simpler with a step-by-step or a one-click installation guide.

What's my experience with pricing, setup cost, and licensing?

Pricing and licensing is higher compared to other products.

Which other solutions did I evaluate?

We evaluated and compared three other COTS TDM solutions.

What other advice do I have?

I recommend that others complete a PoC/pilot before starting an implementation. A diverse technology landscape might require a hybrid Test Data Management solution.

Disclosure: My company has a business relationship with this vendor other than being a customer: Cognizant has partnership with IBM.
it_user660612
Systems Analyst at a consultancy with 201-500 employees
Consultant
It provides data masking with consistency across databases and applications.

What is most valuable?

Data masking with consistency across databases and applications is the most valuable feature.
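Consistency across databases and applications is usually achieved by making the mask deterministic: the same input always yields the same masked value, wherever it appears. A conceptual sketch using a keyed hash; the key and token format are invented, not the product's implementation.

```python
# Deterministic masking: derive the masked token from the real value
# with a keyed hash, so every database masks a given value identically.
import hashlib
import hmac

SECRET = b"rotate-me"   # hypothetical per-environment masking key

def mask_id(value: str) -> str:
    """Map a sensitive identifier to a stable, non-reversible token."""
    digest = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return "CUST-" + digest[:8]

# The same customer ID masks to the same token in every application.
print(mask_id("123-45-6789") == mask_id("123-45-6789"))  # True
```

Because the token depends only on the input and the key, joins between two masked databases still line up, which is what cross-application consistency requires.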

How has it helped my organization?

We only do data masking as of now because it is a compliance requirement. The benefits of sub-setting have not been realized yet; no application has made use of that service.

What needs improvement?

Licensing could be improved. The PVU licensing mechanism is a limiting factor to expanding the use of the tool.

For how long have I used the solution?

I have been using IBM Optim for five years.

What do I think about the stability of the solution?

We did have some stability issues.

What do I think about the scalability of the solution?

We did have some scalability issues.

How are customer service and technical support?

I would give technical support a rating of 7/10.

Which solution did I use previously and why did I switch?

We did not have a previous solution.

How was the initial setup?

The setup was complex for me because I was not familiar with ODBC. I had to learn how to use it in order to connect to the different databases.

What's my experience with pricing, setup cost, and licensing?

Discuss the PVU limitations when it comes to extending the service with new apps.

Which other solutions did I evaluate?

I have no idea if our architecture team looked at alternatives. I started to use the tool as developer.

What other advice do I have?

There are many flavors of Optim. Make sure you understand what you want.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
ITCS user
Product & Project Manager at a tech services company with 51-200 employees
Consultant
Automates recurring processes that create and manage non-production environment data.

What is most valuable?

  • Optimizing and automating recurring processes that create and manage non-production environment data.
  • Creating right-sized fictionalized test databases.
  • Protecting sensitive data in non-production environments: Sensitive data such as national IDs, credit card numbers, email addresses and confidential corporate information can be masked to protect it from misuse and fraud.
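As an illustration of this kind of protection, the sketch below masks a card number and an email address while preserving their shape, so format checks in test code still pass. The functions and patterns are assumptions for illustration, not Optim's built-in masking routines.

```python
# Shape-preserving masking sketches for two common sensitive fields.
def mask_credit_card(number: str) -> str:
    """Keep separators and the last four digits; hide the rest."""
    kept = 0
    out = []
    for c in reversed(number):
        if c.isdigit():
            kept += 1
            out.append(c if kept <= 4 else "#")
        else:
            out.append(c)          # preserve '-' and spaces
    return "".join(reversed(out))

def mask_email(addr: str) -> str:
    """Hide the local part, keep the domain so routing-style tests work."""
    local, _, domain = addr.partition("@")
    return local[0] + "***@" + domain

print(mask_credit_card("4111-1111-1111-1234"))  # ####-####-####-1234
print(mask_email("jane.doe@example.com"))       # j***@example.com
```

Keeping the field's shape intact is what lets masked data flow through validation and UI code in non-production environments without special-casing.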

How has it helped my organization?

This product gave us comprehensive test data management capabilities for creating right-sized, fictionalized test databases, and it accurately provides subsets of referentially intact data.

For how long have I used the solution?

We have been using this solution for six years.

What do I think about the stability of the solution?

IBM Optim TDM is quite stable. We have never had any stability problems.

What do I think about the scalability of the solution?

IBM Optim solutions are highly scalable. We have never had any scalability issues.

How are customer service and technical support?

I rate the level of technical support a good 7/10.

Which solution did I use previously and why did I switch?

We didn't use any different solution before.

How was the initial setup?

Initial setup is a little bit complex.

What's my experience with pricing, setup cost, and licensing?

Pricing and licensing models may vary depending on the server infrastructure.

Which other solutions did I evaluate?

Before choosing this product, we also evaluated Informatica Test Data Management and Oracle Data Masking.

What other advice do I have?

Optim Test Data Management and Privacy solutions help you create right-sized test databases, automate test processes, and mask sensitive data to protect privacy, maximizing the accuracy of testing when using production data for test purposes.

Disclosure: My company has a business relationship with this vendor other than being a customer.
Jyotirmay Mishra
Senior Project Manager /Senior Solution Architect at Cognizant
Consultant
The Data Growth feature enables business policies, data retention and access control.

What is most valuable?

The Data Archive or Data Growth feature is one of the very useful solutions that help organizations to manage and support their database archiving strategies. This feature also enables business policies, data retention and access control.

How has it helped my organization?

This feature helps organizations plan test data strategies based on data retention policies.

What needs improvement?

In the IBM Optim tool, the Synthetic Test Data Generation feature is lagging behind.

What do I think about the stability of the solution?

We have not encountered any stability issues.

What do I think about the scalability of the solution?

We have not encountered any scalability issues.


Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user653673
User at a tech services company with 10,001+ employees
Consultant
Provides a data growth solution. Does not support new test data generation.


What is most valuable?

  • Data growth solution
  • Data masking and test data management
  • Data decommission

What needs improvement?

The tool does not support new test data generation.

The Optim tool is helpful for extracting (subsetting and masking) data from a production-like environment, but test data generation is not part of IBM Optim.

For how long have I used the solution?

I have used it for three years.

What was my experience with deployment of the solution?

Sometimes, we have had deployment issues.

What do I think about the stability of the solution?

Only very rarely have we encountered stability issues.

How is customer service and technical support?

Customer Service:

Customer service is good.

Technical Support:

Technical support is not bad.

How was the initial setup?

The latest version of Optim, with Designer and Manager, is quite complicated, but it includes various features.

Disclosure: My company has a business relationship with this vendor other than being a customer: My company is an IBM Premier Business Partner.
AS
Technology Analyst at a tech services company with 10,001+ employees
Real User
I value data masking, data extraction/sub-setting, data comparison, and data load.

What is most valuable?

I value data masking, data extraction/sub-setting, data comparison, and data load.

How has it helped my organization?

  • Creates an automated process for production data extraction
  • Does scrubbing
  • Does sub-setting and movement of data
  • Tests regions across various platforms including legacy and distributed platforms

What needs improvement?

The tool needs to have more test data management features like synthetic data generation and data reservation.

Optim is capable of extracting/masking production data and moving it to the test region for test data provisioning. However, there are various scenarios where the production data cannot fully meet the testing requirements. In such cases, the testers have to set up the data manually, as Optim does not provide any functionality to generate data from scratch. Hence, synthetic data generation is a very important feature that must be present in a TDM tool. Other tools, such as CA TDM, have this feature.
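For readers unfamiliar with the missing capability, synthetic test data generation means producing plausible rows from scratch rather than deriving them from production. A rough, hypothetical Python sketch; the field names and value pools are invented for illustration.

```python
# Minimal synthetic-data sketch: reproducible rows from a seeded RNG.
import random

def generate_customers(n, seed=42):
    """Produce n synthetic customer rows; the seed makes runs repeatable,
    which matters for rerunning the same test suite."""
    rng = random.Random(seed)
    first = ["Ana", "Ben", "Chi", "Dee"]
    last = ["Lopez", "Ng", "Okafor", "Smith"]
    rows = []
    for i in range(1, n + 1):
        rows.append({
            "customer_id": i,
            "name": f"{rng.choice(first)} {rng.choice(last)}",
            "balance": round(rng.uniform(0, 10_000), 2),
        })
    return rows

rows = generate_customers(3)
print(len(rows), rows[0]["customer_id"])
```

Dedicated TDM generators add far more (realistic distributions, referential links, locale-aware formats), but the core idea is the same: data is manufactured to the test's requirements instead of copied from production.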

There are other important TDM features that are missing from IBM Optim, such as:

  • Data reservation, which gives a data user the ability to reserve certain data so that no one else uses the same data for testing.

  • Data mining, which provides the capability to search for data that matches the test requirements.

For how long have I used the solution?

I have used the product for three years.

What do I think about the stability of the solution?

I did not encounter any stability issues.

What do I think about the scalability of the solution?

I did not encounter any scalability issues.

How are customer service and technical support?

I would give technical support a rating of 7/10.

Which solution did I use previously and why did I switch?

In the past, in-house scripts were being used for production data extraction/masking. We moved to Optim to utilize an industry standard tool to enable the entire enterprise to use a consistent solution for synchronous data refresh across applications.

How was the initial setup?

The initial setup was easy but time-consuming. There is a lot of one-time development effort involved in setting up and implementing the Optim-based automated process.

What's my experience with pricing, setup cost, and licensing?

I cannot comment on pricing as it varies for each customer and agreement with IBM.

Which other solutions did I evaluate?

Other options were evaluated including tools like CA Test Data Manager.

What other advice do I have?

Optim is a great tool for enterprise data movement, data masking, data sub-setting, data comparison, and data aging.

However, it lacks certain capabilities critical for central test data management operations like data generation. The requirement scope/goal should be clearly defined before selecting the tool.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user599013
Senior Test Analyst at a retailer with 1,001-5,000 employees
Vendor
It has more data privacy options than older technologies. Initial setup was a bit tedious and time consuming.

What is most valuable?

  • Data masking
  • Test data environment setup and management

How has it helped my organization?

It makes it really easy to obfuscate big tables involving lots of columns and to manage hundreds of tables without putting in much effort.

What needs improvement?

Data masking with the InfoSphere Optim products creates intermediary extract files and writes the data to persistent storage, which means that extraction, masking, and insertion run as separate processes, making the overall process take longer.
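The complaint is that extraction, masking, and insertion run as separate passes over intermediate files. A generator pipeline illustrates the streaming alternative in miniature; the source and sink below are stand-ins for real database I/O, and the field names are invented.

```python
# Streaming extract -> mask -> load, with no intermediate files:
# each row flows through all three stages before the next is read.
def extract(rows):
    for row in rows:          # stand-in for reading from the source DB
        yield dict(row)       # copy so the source rows stay untouched

def mask(stream):
    for row in stream:        # mask in flight
        row["ssn"] = "***-**-" + row["ssn"][-4:]
        yield row

def load(stream):
    return list(stream)       # stand-in for inserting into the target

source = [{"id": 1, "ssn": "123-45-6789"},
          {"id": 2, "ssn": "987-65-4321"}]
target = load(mask(extract(source)))
print(target[0]["ssn"])       # ***-**-6789
```

Whether a real tool can stream this way depends on its architecture; the sketch only shows why avoiding a persistent intermediate extract shortens the end-to-end run.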

For how long have I used the solution?

I have used IBM InfoSphere Optim for five years.

What do I think about the stability of the solution?

We have not had any stability problems.

What do I think about the scalability of the solution?

We have had some scalability issues.

Which solution did I use previously and why did I switch?

We previously used IBM DataStage. There is a lot less flexibility in the InfoSphere DataStage Pack compared to the InfoSphere Optim products for selecting a specific subset of data to be masked. DataStage is driven by supplied SQL statements, whereas InfoSphere Optim is driven by traversing the database model and picking up related data elements from a starting point. Optim has more data privacy options.

How was the initial setup?

Initial setup was a bit tedious and time consuming.

What other advice do I have?

It’s very helpful in the mainframe environment.

Disclosure: I am a real user, and this review is based on my own experience and opinions.