It's primarily used for automation. We're a pretty complex environment. We have hundreds and hundreds, if not thousands, of applications. We have more than 200 Agile teams which are doing builds across five big locations in the US.
We have five large business units: financial, property and casualty, enterprise applications, marketing, and emerging business. And then we have a shared-services organization. Across this, we have more than 200 Agile teams that do build work. A lot of the time, these Agile teams focus on developing and testing the work that has been handed to them. When Tosca came in, one of the things we started thinking about was how Tosca could help us facilitate some end-to-end testing.
When I say end-to-end, that doesn't mean in one particular business solution area or in one particular department, but rather, how do we go across departments? If we have to create a retirement plan, the work is not just in the retirement area. It has to flow through a lot of different applications and different business units, and that flow is what makes it end-to-end.
The way we are organized right now is by department. Initially, our scope was to see if we could do end-to-end, but we reduced the scope because that would have meant that many people had to be ready at the same time to consume the work. So what we said is that if we can use Tosca to do end-to-end for an application, then we can use orchestrator tools like Jenkins or Concourse to create an end-to-end flow from a business perspective.
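A sketch of that orchestration idea, illustrative only, since the real flow lives in Jenkins or Concourse pipeline definitions; the function and application names here are hypothetical:

```python
# Illustrative sketch: each application owns its own Tosca suite, and an
# orchestrator (Jenkins or Concourse in practice) chains the suites into
# one business-level end-to-end flow. All names here are hypothetical.

def run_suite(application):
    """Stand-in for triggering one application's Tosca suite run."""
    print(f"Running end-to-end suite for {application}...")
    return True  # a real runner would return the suite's pass/fail result

def business_flow(applications):
    """Chain per-application suites in order; stop at the first failure."""
    for app in applications:
        if not run_suite(app):
            print(f"Flow halted: {app} failed")
            return False
    return True

# A retirement-plan flow might cross several business units' applications:
ok = business_flow(["plan-setup", "financial-core", "statements"])
```

The design point is that each application's suite stays independently runnable, and the orchestrator, not Tosca, supplies the cross-department sequencing.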
To give you some numbers, we will be harvesting savings of almost $1 million in the first area where we tried this. We had 36 manual testers and we were able to go down to 14 quality engineers, so we are seeing some savings. That was the biggest. In other areas, we are seeing savings from reducing headcount by three or four people.
In terms of cases covered by testing automation with Tosca, it's very difficult to put a number on that. Where Tosca has really made a difference is where we had manual testing only and the percentage was zero. In the area that I just mentioned, where we went from 36 to 14 testers, they were at zero percent automation and they're already at 40 percent. The goal is to be 80 percent by mid next year.
Out of our 200 teams, we have not finished all the waves yet for Tosca. Around 50 percent of them are already using it, if not more. In January of 2018, the number of associates we had doing manual testing — I'm not even talking about contractors — was around 370. Our projection is that by the end of next year, we are hoping to go down to 230 associates. That's about a 37 percent decrease in the test-analyst workforce. Most of that is going to be enabled via automation using Tosca.
We have seen lead-time change impacts as well.
We are seeing defect numbers drop, and we are still operationalizing. The biggest impact has been in Speedplay, where we were doing testing manually. Now, with automation, we are able to execute some of those regression tests sooner.
Tosca enables us to run the entire regression test suite regardless of where changes have been made. We now have much more confidence in the areas where we've implemented it. We have a high level of confidence in the changes that we are making because we know we have a regression suite that we can run.
We have seen workflow improve. Before, we wouldn't have thought of coordinating among different applications. For example, application A does their testing and tells application B, "Here is what we need." Application B goes into their cycle and they do their work and say, "Okay, we're ready." That might have taken a day, two days, five days, or a week. Now, we have examples where we have been able to directly call the database for application B and retrieve the information. What used to take six days, we are able to do in six hours.
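The time saving comes from replacing a human handoff with a direct query. A minimal sketch of that pattern, using an in-memory SQLite table with made-up names in place of application B's real database:

```python
import sqlite3

# Illustrative sketch: instead of waiting for application B's team to run
# a cycle and hand data back, a test step queries application B's database
# directly. The table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policies (policy_id TEXT, status TEXT)")
conn.execute("INSERT INTO policies VALUES ('P-100', 'ACTIVE')")

def fetch_status(policy_id):
    """Retrieve application B's current state for a record, on demand."""
    row = conn.execute(
        "SELECT status FROM policies WHERE policy_id = ?", (policy_id,)
    ).fetchone()
    return row[0] if row else "UNKNOWN"

# The downstream test can assert on application B's state immediately,
# with no multi-day handoff in between:
assert fetch_status("P-100") == "ACTIVE"
```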
One more example where Tosca helped a lot recently was when there was an issue where our retirement plan holders were seeing incorrect information on their PDF statements. There were 32,000 PDF statements that needed to be validated after the fix was done and it would have taken 60,000 hours for one person. So we had a couple of our folks create scripts. It took them two days and they executed all the 32,000 PDF validations in one day. Tosca didn't only help us in terms of time, because it was more an issue of cost-avoidance, but it helped us gain the trust of business because business signed off on it. Business was the one which had said, "We need to validate each and every PDF manually," and that's where that estimate had come from.
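The scripts' internals aren't described above, but the batch-validation pattern looks roughly like the following; `extract_balance` and the field names are placeholders standing in for whatever PDF extraction the real scripts performed:

```python
# Sketch of the batch-validation pattern (not the actual scripts): pull the
# field affected by the fix out of each statement and compare it against
# the expected value, collecting any mismatches for manual review.

def extract_balance(statement):
    """Placeholder for real PDF text extraction of the affected field."""
    return statement["balance"]

def validate(statements, expected):
    """Return the IDs of statements whose extracted value is wrong."""
    failures = []
    for s in statements:
        if extract_balance(s) != expected[s["id"]]:
            failures.append(s["id"])
    return failures

# Two sample statements; the real run iterated over all 32,000.
statements = [{"id": "S1", "balance": "100.00"},
              {"id": "S2", "balance": "250.00"}]
expected = {"S1": "100.00", "S2": "250.00"}
assert validate(statements, expected) == []
```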
We are seeing a lot of these success stories. We expect, once we have the full implementation done next year, that we'll see many more examples.
Overall, by next year, we will be looking at a total reduction in testing costs of about $14 million, across the span of three years. That includes both build and run. Build is very difficult for us to harvest because if we reduce the money in build — if we take away two people — the demand grows and they add two developers. But overall, from a build and run perspective, we are looking at forecast savings of $14 million. This year alone, we have proved that we have reduced the overall spend by $3.5 million.
Another big use case where we have been helped a lot is Tosca BI, which helps with large-volume validation from a data perspective. For a lot of our data lines, we have an automation framework, but it doesn't do as well when it comes to comparing huge volumes of data from source to target. That's a place where Tosca BI has helped us because it can do those large file and data comparisons in a very short time.
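One common way such large source-to-target comparisons are done (a sketch of the general technique, not of Tosca BI's internals) is to hash each row on both sides and diff the hash sets:

```python
import hashlib

# Sketch of a source-to-target reconciliation: hash each row on both sides
# and diff the hash sets, so mismatches surface without a field-by-field
# comparison of every record pair.

def row_hash(row):
    """Stable fingerprint of one row's values."""
    return hashlib.sha256("|".join(map(str, row)).encode()).hexdigest()

def diff(source_rows, target_rows):
    src = {row_hash(r) for r in source_rows}
    tgt = {row_hash(r) for r in target_rows}
    return {"missing_in_target": src - tgt,
            "unexpected_in_target": tgt - src}

source = [(1, "Alice", 100), (2, "Bob", 200)]
target = [(1, "Alice", 100), (2, "Bob", 201)]  # one value drifted in the load
result = diff(source, target)
```

A drifted row shows up once on each side of the diff, which is enough to flag it for investigation.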
Model-Based Test Automation is the most valuable feature, because you can create reusable components. Even though we are using a scriptless automation tool, there still needs to be an understanding of how to create reusable components, how to keep refactoring, and how to keep the regression test scripts at a healthy level. We are coupling Tosca with some other risk-based testing tools as well, but automation is primarily what we're using Tosca for: the scriptless, model-based technology that is driving automation for us.
It has other features for requirements which we are just starting to look at. We do have another requirements tool which is enterprise-wide. Not everyone is using Tosca in our company. We still have a mix of a couple of tools. Even though Tosca has a great requirements feature, and some teams are using it, they are still expected to use our standard enterprise requirements tool. It's a choice that the users are making regarding whether they use the other features of Tosca, like requirements and risk-based testing, or not.
There have been some setbacks because of upgrades. While Tosca has been around for a while, Tricentis has catered to smaller clients and I don't think they have done such a large, at-scale transition or transformation before, or worked with a company like ours which is doing an enterprise-wide transformation. When we go to their customer advisory-board meetings, upgrades have been on the agenda as an issue. They have been working a lot to make upgrades seamless.
There have been cases where we have needed customization because things haven't worked with the out-of-the-box functionality of Tosca. Customizations are in VB and C#, and those are not a "go-forward" for our company from a technology perspective, so we have asked Tricentis to do all the customizations for us. There have been cases where we have had to go back because of the upgrades that they have done. We had to redo and re-scan things.
Since we operate at such a large scale, we want to limit ourselves to one or two upgrades a year. That was our biggest complaint; when we went to California this year in May, we told them they need to make their upgrade process more seamless. Initially, it felt like anytime we took an upgrade, we had to go back and re-scan everything. There was a combination of having to do re-scanning but also our not knowing how we should do things. In the last six months or so, we have reached a place where that has been much better. The last upgrade that we took was much more seamless than the first upgrade we took this year. They have made great strides in helping us do that.
With regression testing, the challenge we are now facing is data. That's a whole other effort that we are working on, as test data is a problem. This is especially true where a system gets data from five other areas. It is very dependent on their data. Until we are in a place where we can do end-to-end testing, or we can virtualize their data, even though we may have 100 percent automation, it does not help. We are working with Tricentis on this, and we are looking at some other tools as well.
From a testing-automation perspective, we are still continuing the journey. It's going to go into 2020. We have areas that we have not touched yet. We are heading there, but we are also starting to take a look at data to see how we can combine the automation that we have done with test data to have an automated CI/CD pipeline. We have gained a lot of confidence by implementing automation using Tosca. If we hit any roadblocks, it's more from a data perspective.
From where they were at when we started with Tosca in 2018, to where they are now, they have made huge changes and enhancements to their features. It's much better. And I think they have gained as much from partnering with us on this large-scale enterprise as we have.
Tricentis is pretty open to helping and working on any enhancements and patches. If you ask Tricentis, they will say qTest has all the capabilities we are looking for in terms of integration. We are going with JIRA, and we have tools like Hexawise and GitHub. One thing we would like to see is integration of Tosca with those. We know their qTest is integrated with everything, but not everybody is necessarily going to take qTest. We are looking at qTest as an option for replacing defect management, but we are not sure we will be going that route. If companies don't have qTest and only have Tosca, integration is an area where there is room for improvement in Tosca.
Finally, Tosca is on-prem right now. We have VDIs that have the Tosca agents installed. We have a very aggressive cloud roadmap as a company for moving our applications to the cloud. We are trying to work with Tricentis to help us make the move. We would love it if the Tosca agents could be installed on Docker instead of VDI. But I don't think Tricentis is ready for that yet. I don't think Tosca is actively on the cloud, so we are using the on-prem.
We have been using Tosca for a little bit more than one-and-a-half years. The last quarter of 2017 was the first time we started looking at Tosca and our actual implementation started in 2018. We are now going full-fledged with Tosca.
The stability has been pretty good so far. Whatever we build, we have been able to use it on a regular basis. The only challenges, as I mentioned before, were when we did the upgrades. We had to go back and re-scan. Apart from that, the stability has been pretty good.
The way we are going about it, it is a scalable model. We are looking at reusability. We are trying to put some kind of governance model in place, where we will have people going back to review and analyze how things are being used. We are trying to come up with an assessment framework for Tosca.
We have some areas that have Ruby. When we go to an area that already has something implemented in Ruby, we have an automation assessment framework from a Ruby perspective, where we ask our Ruby experts to assess their framework. If the assessment comes out that their framework is pretty solid, stable, maintainable, and they have the skillset, we leave it alone. If the assessment comes out that it is not maintainable, people are not using it, or it's not providing the value, we recommend using Tosca. In the same way that we have an automation assessment for Ruby, we are trying to work on some kind of assessment framework for Tosca. If someone has been using Tosca for two years, we want to be able to go in, assess their framework and say, "Hey, here are the things you have been doing that are great. But you don't have any modules that you're reusing. Why is that?" We are working on implementing something like that, which will help us from a reusability and maintainability perspective.
With every release they are adding great features. This year, we have taken two upgrades and they have added patches for some defects, like the one that caused the problem with the PDFs. We have 12 or 13 incidents that are open, which include three customization requests that Tricentis is working on with us. But with every release, they have actually added features, not just closed defects, which is great. The challenge is that, in a big company like ours, we cannot afford to take every feature release, every upgrade. That creates a lot of work and a lot of testing, because we need to test everything. Tricentis has been able to give us their roadmap and, at this point, we are planning to take on 13.1, which will be around January/February, 2020.
They do a lot of good releases. It's just that at our scale, the way we are operating — we have 200 licenses and I'm pretty sure we'll need more — we cannot afford to take every upgrade.
We still have other applications that we have not finished in terms of setting them up with Tosca. This is going to go through 2020. Then we will be looking at ways of having people go back to the areas where it has been implemented to see how things are going and what was not implemented. When we are deploying Tosca, we are not doing it for all applications. A particular business solution or department might have 50 applications out of which ten might be critical. We have only put Tosca in place for one or two of those, but we put the right structure in place for them to be able to extend it to the other applications. We intend to go back and see if they have been able to extend or if they need help in increasing the automation presence. We do expect to see an increase in automation usage and coverage, moving forward.
We had someone from Tricentis here until September or mid-October. He was not a very technical resource, but he was our go-to person who would reach out. We also have a customer-facing rep, so I usually text him if I have any issues. We have biweekly calls with tech support.
The challenge we have seen, at times, is that when an incident is opened or when a ticket is opened, it is treated as normal, just like anything else. But there are times where they are very high-priority for us and that's where escalations come into play. It's frustrating when different people are looking at an incident at different times. We have raised some of these concerns.
A recent example is that we needed a customization for a big area. It's a big data space and we have been waiting on this customization for a couple of weeks. We recently escalated and they said, "Hey, they will start development in mid-December or late December." We said, "That's absolutely not going to work out for us." I texted my contact and he immediately texted back saying, "I have raised it to the highest level. This person is on vacation, we'll reach out to you next week."
So they provide pretty good response. Our customer rep is pretty good at escalating.
The tech support is good; they reach out to us. We get it, that they're all busy, but we keep pressing that if they want to work with a company like ours, which is doing a large, at-scale, enterprise-wide transformation, if they cannot meet us where we need them to or if they cannot meet our SLA, it's not going to work out.
Sometimes, it's one person at Tricentis who is working on something for us. If that person is on vacation, we are stuck. We have said, "Hey, only one person?" but again, they are also growing. We are most probably one of their biggest enterprise-wide transformations. They have clients with tons of licenses, but they are all department-level automation transformations. This is their first enterprise-wide where we are doing it across the company. We have buy-in from the highest level, our CIO.
Tricentis has done a good job of keeping stride with us. Is there a place for improvement? Absolutely. They need to change their operating model to be able to cater to large-scale enterprises if that's the direction they want to go in. It was very evident in the beginning that they're not used to working with a big company. They have come a long way and we have been pretty vocal about it too.
But they have been pretty good at it. We do face our challenges and we do have to do escalations and so forth for some of these areas, because our business units and our partners don't understand delays. They will say "Okay, if Tosca is not able to help us, we are going somewhere else." Tricentis understands that and they give us high priority. This is an issue that will be there for any company, any tool, irrespective. I don't think this is something unique to Tricentis. We have tech debt on our side and sometimes we ask them to work around our tech debt, which is not what they would expect.
Overall, the tech support is pretty good. We have a way of reaching out to their management, a direct email address that takes it to the highest level of escalation. We have a good working relationship and we are not unhappy. We are pretty satisfied. Some frustration will always be there, but they have been able to work around things pretty well for us and they turn around things pretty well.
Tricentis is still growing and they are understanding what it means to work with a large, at-scale company. We do escalations every week to their upper management. But from a tool perspective, it's doing great. It's working out really well for us in terms of automation. They are at a better place when it comes to being able to release in a way that is not causing people to go back. In a company like ours, if people have to put in a lot of effort for upgrades, or if they have to go back and re-scan and re-work a lot, they will just move away to other tools. That's something we have made very clear to Tricentis. We have said, "If you don't give us what we want and if you're not able to meet us where we are, and at the speed at which we want, it will not take long for us to move away." So far, they have been great at meeting us where we are, or escalating and getting the right people to do what we want done.
In terms of model-based, scriptless automation, Tosca is the first of its kind in our organization. We have been using Ruby Cucumber and Selenium, but they are all solutions where development and coding skills are needed. This is our first model/scriptless tool.
We have an automation solution which was primarily Ruby Cucumber, but Ruby requires development skills. We had not been able to penetrate it into all the places we wanted it to be used. We have a lot of legacy and mainframe applications and there was a lot of downward testing still going on. On the digital applications and the go-forward applications, we have a good footprint from a Ruby perspective and test automation. But in all these legacy applications and some of the older technology applications, we had manual testing going on.
We had a big goal as a company. We had done some benchmarking studies and what came out from them was that we are pretty good from a quality perspective when compared to our peers, but we are very expensive. So we were charged with a couple of things: to bring down the expense and increase speed, while maintaining the quality if not improving the quality. That's what led us on the path to Tricentis Tosca.
Using Ruby requires more hands-on coding and development skills. Tosca is something that needs a technical mindset or aptitude, but even our manual testers were able to make the shift from manual testing to automation using Tosca. That was a big driver: How do we move from manual testing to more automation, and how do we not impact the big workforce that we had in manual testing? We did not want to have to let go of all our manual testers, and we didn't have enough skills from a Ruby perspective. With this effort, we have successfully been able to convert around 80 percent of our manual testers to automators, using Tosca.
There are areas in our company which have Ruby, some of the digital areas, where it works. So we are letting those areas stay with Ruby.
When we did the initial setup — and this is a lesson we learned as a company — it was not directly with Tricentis. We worked with one of our global service providers, Accenture. They are the ones who proposed some of these tools to us and they are the ones who helped with the initial setup. They were here with us for 2018. In 2019, we took it on from them, and then Tricentis came onboard and we started having a direct relationship with Tricentis.
Looking back, I think it would have been much better if we had had a direct relationship with Tricentis from the get-go. There were some things that were done where Tricentis came in and said, "Hey, this is how you should have done this." That was a lesson learned for us as a company where, going forward, even if we have global service providers, we will hopefully have direct contact with the vendor as well. That's something we didn't do.
We did take a complex area with COBOL and all that, so we needed customizations, but we did it through a GSP and it was okay. It went pretty well. I wouldn't say the setup was very complex since Accenture did the heavy handholding, so it was easy for us. Had we done it, it would have been a different story, but Accenture paved the way for us.
For the initial setup, we had five or six folks from our side. Accenture's team was offshore but there were three or four of their team here. It was a pretty complex application. There were ten to 12 people who worked on it to get it going. Once the initial baseline was set, we had people automating it, working with business, coming up with the list of features, prioritizing the features, and then coming up with a roadmap.
When we go into these waves, it's a nine-month plan. The first week is a diagnostic, where we consider which applications we even want to attempt to start automating. Then we do a two-week design period where we consider which flows we want to automate. What are the critical flows or critical business functionalities within the application, with the most manual testing or the most frequent testing, that we want to automate? We identify five workflows, usually critical and high-priority, that we will try to automate using Tosca. Then we have a one-month "test-and-learn" where two or three people from our side partner with the department testers, and they automate things during that month. Once that's done, we provide a recommendation saying, "Hey, we did it. Tosca seems to be the better option. Here are the benefits you will get." There is then an operationalized phase of usually six, and up to nine, months after that. At the end of those six to nine months, we should be able to go from 36 people down to 20.
We come out with a big plan that we present to the department. They all agree. We talk about what skillsets are needed going forward. How many will be flexible staff, how many will be contractors, how many will be associates? We replicate that plan every time we go to a business. That is a plan that Accenture helped us put together and it has been very successful. That's something which Tricentis is trying to give as an example to some of their bigger clients because it's been a pretty successful implementation so far. It is a reusable, repeatable process that we have come up with. Every group knows what to expect out of it.
The learning curve for Tosca is not steep compared to some technical products. It has been pretty okay, as long as the people have the mindset. So when we hire quality engineers, we don't ask them questions about Tosca. We ask them questions about their mindset to see if they have an automation-first mindset. What we are seeing from a skillset perspective is that 80 percent of the manual testing quality engineers are able to make a shift but at a lower level.
For entry-level or mid-level people, it's very easy to make the shift. At a higher level, we do expect our quality engineers to have some technical aptitude, like SQL programming skills. In addition to Tosca, we are also using Tosca BI, which needs SQL skills. So we do need some technical skills, but we are seeing much better success in transforming our manual-testing population to using Tosca and Tosca BI than we have seen with any other development or programming language. So the learning curve is pretty good. It's easy and not too steep.
We have been doing the implementation in a wave approach. We didn't go with a big bang saying, "Okay, we want to implement Tosca everywhere," because that would not work. Instead, we did an enterprise-wide transformation in stages. We looked at testing spend for each department and started with the ones that had the highest testing spend. We put together a two-and-a-half-year to three-year roadmap saying, "Here is how we are going to hit the big applications." We have 40 major application suites, each of which can be a combination of applications.
We defined roles, because each application suite needs an application suite quality engineer who is going to be accountable. We had a whole process that we worked through. It was not just a tool transformation but a transformation in terms of people and technology. Technology came at the end. It was more transforming peoples' mindsets and making sure they understood what we were trying to do. We wanted them to understand the intent because with Tosca, it's very easy to fall into the same trap as with anything else. If there are practices where people have wrong naming conventions or don't understand the value of versioning that Tosca provides, there is a need to create best practices. Tricentis has been pretty good at teaming up with us to share their best practices and working with us to come up with best practices that work for our company.
For ROI, when we originally planned, we had thought it would be somewhere around $13 to $20 million. Right now, we're looking at $14 million over the three-year period.
We have a three-year license. The one thing I'll say is that it is very expensive. That's the reason we are not giving access to Tosca to everyone in the company, because it's license-based. The licenses are concurrent, but still, only our quality-engineer workforce gets access to Tosca. Developers don't get access. Test automation developers don't get access. Others don't get access either. We have around 200 licenses and the cost is around $1.4 million a year.
We have found benefits with Tosca. That's the reason we went with it. Ruby and other open-source solutions are free, but we were not able to get the skillsets we needed for them and maintenance was an issue. With Tosca, we see benefits from that perspective. We don't have to lose our workforce. We can harvest the subject-matter expertise that people already have and use them for this. We did see a lot of benefit in moving to Tosca. That's why we were ready to take on a licensed tool, in comparison to an open-source tool.
For Tosca, there are no other costs.
Tosca BI is twice the cost of Tosca so we have limited Tosca BI licenses as well. There are things that come along with Tosca from Tricentis and we're looking at some of those tools.
We work with Accenture, Cognizant and Tara. They are our global service providers. We did an RFP, but Accenture only put Tosca on the table. They didn't give us any other options for scriptless.
We went online, went to some conferences, and talked to people who are using Tosca. We did industry-review research and the like. But in the end, we were able to prove out Tosca and that's the reason we went with it.
Before looking at a tool, it's not just about implementing the tool. It's about having the right mindset to be able to implement a tool. With Tosca, we didn't go and say, "We want to use Tosca. Go." We have a whole training where we talk about what needs to shift, how you need to shift, why we are doing it, and making people understand the intent behind using some of these tools. Otherwise it does not work. We have seen enough failures with other implementations to know that if we don't have the right mindset, these things don't work.
Also, having the right partnership and setting expectations with the vendor is very important. If they have good best-practices and if they know what they're doing, that will help. Having a long-term vision is very important, instead of just saying, "Okay, I'm going to implement Tosca. Go," and not knowing where you want to go or what you want the future to look like. Those are things that helped us a lot in our implementations of Tosca.
One lesson, as I said, is that working with the tool vendor helps. Not that Accenture didn't do a great job. They did. They helped us a lot, especially from a process and best-practices perspective. But having Tricentis in there would have been good.
Tosca was relatively new in the US when we started using it. The roadmap was not fully baked. I would love to work with them on their cloud roadmap. I still don't have a great answer when it comes to cloud roadmap for Tosca, from Tricentis. That's something we're still pushing them on.
Tricentis also has a great relationship with Salesforce and that has helped us a lot. We're trying to push them to have a similar kind of relationship with Guidewire, with Guidewire 10 coming and the cloud option. That's another thing which we will be using Tosca for. We have been asking Tricentis to have some kind of partnership with Guidewire, and they have been asking us if we can introduce them. We're working through that. We have a user group for that with other Tosca users that are in Guidewire and who are moving to Guidewire 10.
It would be great if Tricentis could take a look ahead at some of these big vendors, packaged applications, and form some relationships with them. That would help companies like us who are heavy users of these packaged applications. We would like to see something similar to what they have done with SAP and with Salesforce. Guidewire is big right now. That's an area they have not done anything in yet.
We were an IBM shop. We used to use Rational Team Concert and Rational Requirements Composer. We're shifting to JIRA for requirements. We're still looking at what we will use for persistent requirements, but right now it's JIRA. Even if users want to use Tosca for requirements, and there are teams who are doing it because it helps us get good coverage, we still use JIRA and then we try to integrate from JIRA to Tosca, but we have not done that yet.
Tosca has a risk-based functionality, which we are not using, which they added after we started with Tosca. It can take a look at the data and say, "Hey, you have 200 test cases and you are getting 50 percent coverage based on the requirements that you have in Tosca. You can get the same coverage using 100 test cases." The reason we didn't use Tosca's risk-based testing is that risk-based testing is applicable irrespective of the tool. Since Tosca is not the only standard at our company — we have multiple tools — we had to choose a tool that we could use across the enterprise and not be dependent on the Tosca licenses. So we use another tool for risk-based testing.
When you talk about redundancy, there are two aspects. One is: We have been using Tosca for three years; we have 2,000 scripts. Even though Tosca has all the functionality by being model-based with reusability, not everyone understands that. In these past three years, new people might have come in and added test scripts and test cases without knowing that something already existed. They may have created duplication and redundancy. Can Tosca go and help us with that? I don't think so. We are actually looking for a tool that can help us do that. We were looking at Saffron AI Suite, which is an Intel product, but Intel decided not to support it. We're still looking for a tool that can help us with duplication.
But what we are doing from a data perspective is the following: If I say, "Hey, here is my data, here is my test case, here are the data elements that are needed. Tell me what is the minimum number of scripts I need to get maximum coverage?" Tosca can do that. But as I said, we didn't want to depend on Tosca because we're not using Tosca across the enterprise. We're using another tool called Hexawise to help us do that and it's something that we're implementing across the company. It is much more cost-effective for us than having Tosca licenses for everyone. Tosca is expensive. We're trying to use the output from Hexawise to create test cases in Tosca to help us get that minimum number of test cases we need to get the maximum coverage. That coupling has been working well for us.
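The "minimum scripts for maximum coverage" idea is pairwise (all-pairs) combinatorial testing, which tools like Hexawise automate. A small greedy sketch of the technique, with hypothetical parameters:

```python
from itertools import combinations, product

# Greedy all-pairs reduction: keep picking the candidate test case that
# covers the most not-yet-covered pairs of parameter values, until every
# pair appears in at least one chosen case. Parameter values below are
# hypothetical examples.

def all_pairs(value_lists):
    uncovered = set()
    for i, j in combinations(range(len(value_lists)), 2):
        for va in value_lists[i]:
            for vb in value_lists[j]:
                uncovered.add((i, va, j, vb))
    candidates = list(product(*value_lists))
    chosen = []
    while uncovered:
        def gain(case):
            return sum((i, case[i], j, case[j]) in uncovered
                       for i, j in combinations(range(len(case)), 2))
        best = max(candidates, key=gain)
        chosen.append(best)
        for i, j in combinations(range(len(best)), 2):
            uncovered.discard((i, best[i], j, best[j]))
    return chosen

params = [["Chrome", "Edge"], ["401k", "IRA"], ["NY", "TX"]]
cases = all_pairs(params)
# The full cross product would need 8 cases; the greedy pick needs fewer
# while still covering every value pair at least once.
```

The savings grow quickly with more parameters: the cross product explodes combinatorially while the pairwise set grows roughly with the square of the largest value counts.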
Maintenance is better with Tosca. The way Tosca is structured, it tells you where your tests are failing and the like. If I have nine quality engineers for an application and they are spread across three build teams, if they have the continuous integration implemented, whatever issues or errors come up, we expect them to keep maintaining things on a go-forward basis. Maintenance is absolutely easier with Tosca, provided it has been implemented the right way. We're not differentiating between people who are doing build work versus people who are doing maintenance. It's the same people. We expect them to build the test cases and maintain them as well.
I would rate Tosca at about eight out of ten. We're pretty happy with the results we've seen with Tosca.