CA Process Automation Review
Provides the ability to import objects as new versions of existing objects and to make the prior version the current version.


What is most valuable?

As a developer, the biggest time-savers were its import/export features and versioning. Using template-based process objects and code, I was able to make extremely complex changes across hundreds of objects with a great deal of success and ease; the work took minutes, whereas making the same changes through the GUI would have taken days. The ability to import objects as new versions of existing objects, and to make the prior version the current version, aided in development, bug fixes, and deployments.

How has it helped my organization?

I used the product to develop a process orchestration “framework” to be used in systems integrations between customer and internal ITSM applications (e.g., primarily for Change Management, CMDB, Incident Management, Problem Management, and Request Management). The framework, which made excellent use of stored procedures, included the following dynamic request-driven features:

  • request routing (e.g., ensuring a request from “ACME” gets to its intended recipient)
  • request sequencing (e.g., ensuring an update request from “ACME” is held until proof of the insert request from “ACME” has completed, and ensuring requests are processed based on the order they were submitted to the framework, as opposed to the order in which they were received by the framework)
  • exception handling (e.g., retrying a step within a particular process, resubmitting requests, timing-out requests, and reporting exceptions back to requesters)
  • data validation (e.g., ensuring each request contained the data needed for its successful delivery)
  • data transformations (e.g., data look-ups and data overrides)
  • data delivery (e.g., primarily using SOAP and RESTful web services for both synchronous and asynchronous request deliveries and acknowledgements)
  • request acknowledgement (e.g., during synchronous request delivery, the framework was capable of producing an acknowledgement request on behalf of the recipient, if one was required)
  • relationship maintenance (e.g., the framework supported one-to-one, one-to-many, and many-to-many record relationships between customer and internal ITSM applications)
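
As a rough illustration of how the routing and sequencing rules above looked inside the tool, the sketch below mimics a Run JavaScript operator. The dataset fields (Process.requestSource, Process.insertConfirmed, and so on) are invented for illustration; the real checks were driven by the stored procedures rather than hard-coded values.

  // Hypothetical sequencing/routing check in the style of a Run JavaScript operator.
  var source   = Process.requestSource;    // e.g., "ACME"
  var type     = Process.requestType;      // e.g., "update"
  var parentOk = Process.insertConfirmed;  // set earlier by a Query Database operator

  if (type == "update" && parentOk != "Y") {
      // Hold the update until proof of the matching insert request has completed.
      Process.routeTo = "HOLD_FOR_INSERT";
  } else {
      // Otherwise, route to the recipient resolved from the look-up tables.
      Process.routeTo = Process.recipientQueue;
  }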

To accomplish all of this, I made use of CA Process Automation objects (e.g., Processes, Start Request Forms, Calendars, Schedules, and Datasets) and operators (e.g., Run JavaScript, Invoke SOAP Method, Execute Process, and Query Database). Using template objects and template code in combination with extremely robust stored procedures and tables, developers were able to release brand-new integrations in every two-week development cycle, with very few defects discovered during testing. By the end of testing, it was proven that the product and the framework were capable of serving every design, performance, and support requirement for our day 1 and day 2 operational needs.
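
As a small, hypothetical example of that template code, the data-validation step could reduce to a handful of required-field checks inside a Run JavaScript operator; the field names here are placeholders, and the real templates pulled their rules from the stored procedures and tables.

  // Hypothetical required-field validation from a template Run JavaScript operator.
  var missing = "";

  if (Process.requestId == null || Process.requestId == "") { missing += "requestId "; }
  if (Process.sourceSystem == null || Process.sourceSystem == "") { missing += "sourceSystem "; }
  if (Process.payload == null || Process.payload == "") { missing += "payload "; }

  // Downstream operators report the exception back to the requester
  // whenever validationError is non-empty.
  Process.validationError = missing;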

What needs improvement?

CA offers minimal public information about the performance drain that the use of certain objects and operators introduces. As an example, swim lanes provide an excellent means of organizing operators within a process, but they can introduce substantial performance issues. As another example, it’s better to perform verbose JavaScript execution within a Run JavaScript operator than within another operator’s pre- or post-execution script. As yet another example, it’s better to hard-code variables in the process dataset than to create them at run-time. The biggest issue for me is the lack of support for current JavaScript methods and functions, which makes scripts longer than they need to be. It seemed I could only rely on the methods and functions available in ECMA 1 (released in 1997). That wasn’t a deal-breaker, though, because the product can extend its capabilities by including other code libraries.
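
For example, because the newer built-ins are missing, helpers that modern JavaScript provides for free have to be written by hand or pulled in from a shared library. The sketch below shows the sort of boilerplate involved; the helper names are mine, not CA’s.

  // Stand-in for the ES5 Array.prototype.indexOf, which isn't available.
  function arrayIndexOf(arr, item) {
      for (var i = 0; i < arr.length; i++) {
          if (arr[i] == item) {
              return i;
          }
      }
      return -1;
  }

  // Stand-in for the ES5 String.prototype.trim.
  function trimString(s) {
      var start = 0, end = s.length;
      while (start < end && s.charAt(start) == " ") { start = start + 1; }
      while (end > start && s.charAt(end - 1) == " ") { end = end - 1; }
      return s.substring(start, end);
  }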

For how long have I used the solution?

We used it from December 2014 to April 2016.

What do I think about the stability of the solution?

It’s very important that all of the database servers reside within the same VLAN as the application servers; otherwise, there will be numerous performance issues. Additionally, it’s important to work with CA in order to determine the proper application configuration based on the number of requests you expect to process, the size of those requests, and the number of process instances required to satisfy the objectives of those requests.

How are customer service and technical support?

Technical support is unequivocally excellent; in fact, the best customer service and technical support I’ve experienced during my 12 years in IT.

Which solutions did we use previously?

Before CA Process Automation, we used BMC Atrium Orchestrator (AO); however, that product was extremely difficult for our support staff to support and had performance issues due to the size and frequency of our requests. Most of our developers found it difficult to troubleshoot in AO. Before AO, we used Opalis (now Microsoft System Center Orchestrator). Opalis is exceptionally easy to develop in and support; however, when developed to accommodate all of our integration architects’ design, performance, and support requirements, it suffered from substantially slower performance than AO and CA Process Automation (up to 30 minutes per request). I should add that I had no training on CA Process Automation, while I did have training on AO and Opalis, yet CA Process Automation was still easier to grasp, develop in, deploy, and support than either.

How was the initial setup?

The initial setup was straightforward. CA produces a great deal of documentation, which is extremely helpful during installs, reinstalls, and updates. The production configuration was somewhat difficult due to our particular requirements and infrastructure.

What about the implementation team?

Both a vendor team and an in-house team implemented it. The orchestration framework was developed in-house. Some integrations utilized custom APIs (developed by third-party developers) that allowed customers to interact with the framework. I would suggest that all development work be done in-house and expedited by using template process objects, dataset objects, start request forms, and code. (You don’t want a third-party developer’s work to cause performance issues against work that has proven its performance capabilities.)

What other advice do I have?

It’s a very good product, but whether to use any product really depends on your design, performance, and support requirements. There are other products built for high performance, scalability, and request-driven behavior. IBM DataPower Gateway, for example, provides a much faster and simpler alternative to the framework we developed within CA Process Automation (because it relies exclusively on XSL) and offers built-in request sequencing if you include IBM WebSphere MQ in the mix; however, working with IBM DataPower requires more technical acumen than working with CA Process Automation.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
