Talend Data Quality Review

It lowers development time from weeks to a day

What is our primary use case?

We use it to load our big data system with S3 and Redshift. We also use it to process in HL7 from hospitals in real-time.

How has it helped my organization?

It lowers development time from weeks to a day.

What is most valuable?

The ease of transforming data by feeding inputs into tMap and tJavaRow components makes life so much easier.

What needs improvement?

There is one place where I would appreciate an upgrade, if it is possible. If the SQL input components could dynamically determine the schema from the SQL alone, it would eliminate the step of manually creating and saving a schema for use in the tMap with the Postgres and Redshift components. This would make things even easier. When a component does guess the schema, it tends to bring back every column from the table specified in the component's table name, or every column from every table. Sometimes, the SQL draws from multiple tables and applies some transformations to the data.

I do not know whether it is even possible, but if the column names and types could be figured out automatically, that would be amazing.
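For what it is worth, most database drivers already expose result-set metadata that could drive this kind of inference. A minimal sketch in Python, using sqlite3 purely as an illustrative stand-in for the Postgres/Redshift drivers (the table and column names here are invented for the example), shows how the output column names of an arbitrary multi-table query with a transformation fall out of the driver itself, with no saved schema:

```python
import sqlite3

# Invented stand-in tables; in Talend these would live in Postgres or Redshift.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE visits (patient_id INTEGER, ward TEXT, cost REAL)")

# Arbitrary SQL spanning multiple tables, with a computed column.
sql = """
    SELECT p.name, v.ward, v.cost * 1.1 AS billed
    FROM patients p JOIN visits v ON v.patient_id = p.id
"""

# The driver describes the columns of the query result itself, not of any
# single table, so the names need no manually maintained schema.
cur = conn.execute(sql)
columns = [d[0] for d in cur.description]
print(columns)  # ['name', 'ward', 'billed']
```

Column types are the harder half: sqlite3 leaves them unreported here, and other drivers only know them once the database has parsed the query, which may be why the current behavior falls back to per-table guessing.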

For how long have I used the solution?

More than five years.

What other advice do I have?

I have not run into anything we could not use Talend to find a solution for.

Disclosure: I am a real user, and this review is based on my own experience and opinions.