Tricentis qTest Overview

Tricentis qTest is the #3 ranked solution in our list of top Quality Management Tools. It is most often compared to Tricentis Tosca.

What is Tricentis qTest?

Tricentis is the global leader in enterprise continuous testing, widely credited for reinventing software testing for DevOps, cloud, and enterprise applications. The Tricentis AI-based continuous testing platform provides a new and fundamentally different way to perform software testing: an approach that's totally automated, fully codeless, and intelligently driven by AI. It addresses both agile development and complex enterprise apps, enabling enterprises to accelerate their digital transformation by dramatically increasing software release speed, reducing costs, and improving software quality.

Tricentis qTest is also known as qTest.

Tricentis qTest Customers

McKesson, Accenture, Nationwide Insurance, Allianz, Telstra, Moët Hennessy-Louis Vuitton (LVMH PCIS), and Vodafone

Pricing Advice

What users are saying about Tricentis qTest pricing:
  • "Our license price point is somewhere between $1,000 and $2,000 a year."
  • "For the 35 concurrent licenses, we pay something like $35,000 a year."
  • "We're paying a little over $1,000 for a concurrent license."
  • "We signed for a year and I believe we paid $24,000 for Flood, Manager, and the qTest Insights. We paid an extra for $4,000 for the migration support."
  • "The price I was quoted is just under $60,000 for 30 licenses, annually, and that's with a 26.5 percent discount."
  • "It's quite a few times more costly than other tools on the market."

Tricentis qTest Reviews

Ryan O'Neill
Sr. Manager Quality Assurance at Forcepoint LLC (Formerly Raytheon|Websense)
Real User
Provides a central point of reference for tracking bugs and failures, who owns the issue and its status

Pros and Cons

  • "The test automation tracking is valuable because our automated testing systems are distributed and they did not necessarily have a single point where they would come together and be reported. Having all of them report back to qTest, and having one central place where all of my test executions are tracked and reported on, is incredibly valuable because it saves time."
  • "I wouldn't say a lot of good things about Insights, but that's primarily because, with so many test cases, it is incredibly slow for us. We generally don't use it because of that."

What is our primary use case?

I use it for test case management. I manage testers and I use qTest in order to schedule and track test case execution within my testing group.

We're on the cloud version.

How has it helped my organization?

The solution’s reporting enables test team members to research errors from the run results. That has definitely sped up productivity because it allows multiple engineers to be aware of the failures, all at once and in one place. There's no duplication of effort because everybody knows what's going on and who's working on it, through qTest, as opposed to people seeing an email that something's wrong. In the latter scenario they might all run off to try to fix it and then you're duplicating effort through a lot of people working on it and not communicating with each other. Having qTest as the central point when there's a failure means we can easily track if a bug has been created on the issue, who owns it, who created it, and what its status is. All of those are linked right in qTest so you can automatically see if this failure is being tracked and who is tracking it.

Previously we were using a product called Zephyr. It did not have history based on the test cases; at least, it didn't have the history the way I wanted to track it. It didn't show all the defects that were generated by that test case, and it didn't track and display those defects' statuses within JIRA. qTest continuously links my test cases either back to the requirement that generated them or to any defects that were created because of them, and that link from qTest to JIRA is what allows me to be much more efficient, because I'm now not running between multiple systems. I'm not saying to my testers, "Hey, who's working on this? What was the problem with that? Why don't we run this?" All of that information is located right there in the solution.

My personal efficiency has been increased because I have a single point of truth within qTest, to always be able to see what the status of my tests is. My team's efficiency has been increased, again, because of the lack of duplication of their efforts. They always know what's assigned to them and what they own and what its status is. And they don't have to manually connect test cases from one system to the next, because they're automatically linked and the information is automatically shared. There are a lot of efficiencies built into that link between qTest and my ticketing systems, as well as, of course, by using qTest in my automation systems. Those links are really what has turned things up.

qTest has probably doubled our efficiency. There has been a 100 percent improvement in the time the testers and I spend on managing our test cases.

We have also used the product for our execution of open-source test automation frameworks. In our case specifically, that would be Cypress and pytest. I wouldn't say that ability has affected productivity. I don't think it has a multiplying effect when it comes to doing automation faster. Its multiplier comes after you've created the automation. At that point, executing it and getting the results are a lot faster. We still execute test case automation the same way we always did. We put a JSON file into Jenkins and Jenkins executes the test cases. But now, instead of just executing them and being done with it, it executes them and reports the results back to qTest. It's the same process, just with an extra step. Because of that reporting, we have a central point of truth. We don't have to look at Jenkins and try to figure out what happened, because it's not a very good interface to get an overall view of the health of a system. That's what qTest is.
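
To make that reporting step concrete, here is a minimal sketch of what a post-run upload from Jenkins to qTest could look like. The endpoint path, payload fields, environment-variable names, and IDs are assumptions based on qTest's v3 REST API, not this reviewer's actual setup; verify them against your instance's API documentation.

```python
# A post-Jenkins upload step: attach one automation result to an existing
# qTest test run. Endpoint path, payload fields, and IDs are assumptions
# based on qTest's v3 REST API; check your instance's API docs before use.
import datetime
import os

import requests

QTEST_URL = os.environ["QTEST_URL"]      # e.g. "https://yourco.qtestnet.com" (hypothetical)
QTEST_TOKEN = os.environ["QTEST_TOKEN"]  # an API bearer token
PROJECT_ID = 12345                       # hypothetical project ID
TEST_RUN_ID = 67890                      # hypothetical test-run ID


def report_result(status: str, start: datetime.datetime, end: datetime.datetime) -> None:
    """Post a single "PASS"/"FAIL" automation log for the test run."""
    resp = requests.post(
        f"{QTEST_URL}/api/v3/projects/{PROJECT_ID}/test-runs/{TEST_RUN_ID}/auto-test-logs",
        headers={"Authorization": f"Bearer {QTEST_TOKEN}"},
        json={
            "status": status,
            "exe_start_date": start.isoformat(),
            "exe_end_date": end.isoformat(),
            "note": "Reported from Jenkins after a pytest session",
        },
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    started = datetime.datetime.now(datetime.timezone.utc)
    # ... the Jenkins job would run `pytest --junitxml=results.xml` here ...
    report_result("PASS", started, datetime.datetime.now(datetime.timezone.utc))
```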

In addition, the solution provides our team with clear demarcations for which steps live in JIRA and which steps live in qTest. Using, say, requirements within JIRA to test cases within qTest, there is a distinct difference between those two systems. Being able to build off of the requirements that are automatically imported allows my people to generate test cases faster and in a more organized manner, because they're based on information that's being given to them by project management via the requirements. It makes it clearer where each step lives within the process, and that is an efficiency-increaser.

Finally, since we started using qTest we have seen a decrease in critical defects in releases, although not a lot. We didn't really take on qTest to reduce the number of defects. We took on qTest to be better organized and efficient in our quality assurance processes. I had no expectation that qTest was going to decrease the number of defects we had. It was definitely going to increase the efficiency and the speed at which we were able to do our testing. That does then decrease the number of defects and issues that we run into on a regular basis. Over the first year there was probably a 50 percent decrease and over the second year we've seen about ten to 20 percent. It's not significant but, again, it was never expected to be a significant decrease.

What is most valuable?

Among the most valuable features are 

  • test automation tracking
  • JIRA linking
  • defect tracking
  • reporting.

The test automation tracking is valuable because our automated testing systems are distributed and they did not necessarily have a single point where they would come together and be reported. Having all of them report back to qTest, and having one central place where all of my test executions are tracked and reported on, is incredibly valuable because it saves time. It allows me to just look at and use one place, and one reporting solution, to track all my executions and the defects that are generated from those.

The integration with JIRA allows us to have an integration between both our automation testing systems, such as Jenkins, through qTest, and into JIRA. It allows all that data to be transferred and distributed among all the different stakeholders within the organizations. That way I don't even have to do reporting. They can just look in JIRA and see what the testing results were. It's very simple for me. It makes my life a little easier so I don't have to generate so many reports.

What needs improvement?

I wouldn't say a lot of good things about Insights, but that's primarily because, with so many test cases, it is incredibly slow for us. We generally don't use it because of that. It would be nice to use; it has good features. But as soon as we started using qTest, Insights became unusable. I do know that they're planning on replacing it next month. It's the one bad side of the application and they're replacing it, so at least they're listening to their customers. They know when they've got a problem, so that's a good thing.

In addition, within Insights, the report creation could be more versatile and intuitive. Generally, the reporting tools could be made more streamlined and easier to access by people outside of the organization. If I have one complaint about qTest, it's its reporting. Again, that is something that's being replaced here soon, so it'll be an invalid point within a month.

It has already been fixed in the on-premises version. The hosted version has yet to have the replacement. I don't know what the replacement's going to be like. I haven't used it so I can't really judge it.

For how long have I used the solution?

I've been using qTest for over two years.

What do I think about the stability of the solution?

There are some optimizations that could be applied. There is a bit of lag when you're getting up into the hundreds of thousands and even millions of records, but that is to be expected. 

Stability-wise it has always been available. I actually can't think of a time when it wasn't available when we needed it. The stability itself has been 100 percent. The optimization is an area for improvement.

What do I think about the scalability of the solution?

The scalability has definitely been impressive. We've got a global organization with so many different teams and I don't hear any complaints from any of them. They're all up and running on this product, all around the world. So we've scaled extensively. The different teams don't really affect each other, but we're all using the same system. We don't really notice that there are 30 different product teams using the system. You only see your own.

It's extensively used in the sense that all the QA organizations within the different product teams — we're looking at 15 to 20 different product teams, each with five to ten quality assurance engineers, and some of them with up to 30 or 50 engineers — all of them are using the product at least as their test case management system. Some of them have different implementations when it comes to their automations. Some have different implementations when it comes to their ticketing system integrations. But all of them are equally supported by the product in different project scenarios and product configurations.

It requires zero people for maintenance because it's cloud.

How are customer service and technical support?

Tech support is incredibly responsive and has always come back very quickly and helped us find issues. They have gone out of their way to make sure that we are served as best as we possibly can be. I feel like I'm in really good hands with them. That definitely started from the time at which we took on and transitioned to qTest, in the way that they helped us get up to speed with information and support.

Which solution did I use previously and why did I switch?

We used JIRA and both the Zephyr and the Xray plugins. The scalability of those plug-ins was usually fine. They scaled along with JIRA, and JIRA is endlessly scalable. Reporting is where they would fall down. JIRA doesn't have the greatest reporting and most of the reporting is manual. When you're looking at reporting within qTest, most of it is already built for you. It has canned reports that already exist and which don't require a lot of effort. Mind you, that is where qTest somewhat falls down as well, on the reporting side of things, but it is still head-and-shoulders above the open-source solutions.

The decision to move to qTest was due to the way we had our implementation. We had no central, single, enterprise-class test case management solution available to any of our teams. As they grew and became more extensive, they found that the low-budget solutions they were using, the open-source solutions they were using, or their complete lack of solutions, were simply not adequate. The decision was made at that time by upper management that we needed to find a central, enterprise-class solution for test case management.

How was the initial setup?

The initial setup was very straightforward since Tricentis did most of the work for us. We're using a hosted cloud product so for us it was, "Here's your username and password." 

We received extensive support from QASymphony at the time, and from Tricentis now, in getting up and running, understanding the product, and getting the information that we needed to make the best possible use of the product and to be successful. QASymphony and Tricentis have excelled at making sure that we are successful. I have a regular meeting with my success manager and she's always on call to help us with issues.

Globally, for our organization, it took about six months for complete adoption. That was not Tricentis' fault. That was just how long it took us to get everybody up to speed and onboard. If it came down to how long it took Tricentis to do the deployment, it was probably a day and we were up and running and ready to go. There was not really a lot of configuration required on their side. The effort to get a large, global organization transitioned from one tool to another is not trivial. With Tricentis' help we were able to do it in what I would call an "impressive" six months.

Our implementation strategy was varied. Globally, we have many different projects and project teams and they all were using different tools. Some were simply using spreadsheets, while others were using tools like Zephyr. All of them chose to transition over to the central qTest test case management system. Each team had a very different implementation and that's definitely where Tricentis' support shined.

What was our ROI?

We have definitely seen return on our investment, simply through the efficiencies of the process. It's a tool that everybody knows how to use and it's global, so there's a good support network. And the support network from Tricentis is so extensive and useful to everybody around the world. Simply through the increased efficiencies of our test case management system, we have seen a return on the investment. That's not even taking into account the improvements in quality within our products, which is immeasurable.

What's my experience with pricing, setup cost, and licensing?

There is an upfront, yearly cost for concurrent licenses, meaning we're not limited to a specific number of users, only to a specific number of users online at a certain time. That works really well for us because we're a global organization. We'll have people online in San Diego, and those licenses then can be used later in the day by people online in Tel Aviv. It's been a really great licensing model for us.

I believe that there is a maintenance cost as well. I'm not really involved in the payment of that, so I don't really know what it would be.

Which other solutions did I evaluate?

An evaluation was opened up to search for the proper solution. qTest was the winner. 

What other advice do I have?

The biggest lesson I have learned is that the transitioning process is only difficult if you drag it out. Transitioning over to a new product needs to happen quickly. It needs to be a top-down decision and the information needs to be disseminated to everybody in a quick and efficient manner. We saw that happen easily with the qTest product and that sold me on the lesson that I learned, when it comes to implementing new, global-enterprise software.

qTest is a great solution. It should definitely be at the top of your list when you're looking at test case management solutions. It's really the service and support that comes from Tricentis that sets it apart. In addition to that, its integration with systems that we are already using is the force multiplier that allows qTest to be even more efficient than just another tool that people have to use on a regular basis. It has become the only tool that they have to use on a regular basis.

In our company, executives or business users don't review results provided by qTest because that would be a kind of an Insights thing and we don't really use that. They do see the testing status in their tool of choice because we have the links to JIRA, so that they don't have to review the testing status within qTest. They don't log into qTest at all. They see the information that they want through our links with the ticketing system.

The solution doesn't really help us to quickly solve issues when they occur, but I don't really feel like that's its job. Its job isn't to help me solve issues. Its job is to make sure that I'm aware that there are issues that need solving, and that information is distributed to all the people who need it, right when it happens. There are some things in there that help me figure out what's going on and what I need to do to fix a problem; it depends, of course, on the problem. But I don't feel that qTest's job is to help me solve problems. qTest's job is to make sure that I'm aware of the status of problems, that there are problems, and whether or not they're being worked on.


Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Robina Laughlin
Assistant Vice President, IT Quality Assurance at Guardian Life Insurance
Real User
Top 5
Very much a QA-centric application; using it is pretty seamless if you're a QA engineer

Pros and Cons

  • "Being able to log into Defects, go right into JIRA, add that defect to the user story, right there at that point, means we connect all of that. That is functionality we haven't had in the past. As a communication hub, it works really well. It's pretty much a closed loop; it's all contained right there. There's no delay. You're getting from the defect to the system to JIRA to the developer."
  • "I would really love to find a way to get the results, into qTest Manager, of Jenkins' executing my Selenium scripts, so that when I look at everything I can look at the whole rather than the parts. Right now, I can only see what happens manually. Automation-wise, we track it in bulk, as opposed to the discrete test cases that are performed. So that connection point would be really interesting for me."

What is our primary use case?

It's our primary tool for managing testing across the Guardian enterprise.

How has it helped my organization?

It has helped us by providing a web interface that is intuitive about how testing is done. If you're a tester, it makes a lot of sense. Instead of an application that we try to modify to make use of in QA, this is very much a QA-centric application. How to use it and what it's referring to are pretty seamless if you're a QA engineer. To that end, it has really increased the productivity of my team. In an agile world, being able to create suites of test cases, and copy them from one project to another project, is really important.

The on-demand reporting has also helped. To be able to just look at defect counts, and how much progress was made for the day, or where we stand overall with the project, is really important. All of that has really simplified things for us quite a bit.

On a weekly basis, for reporting it has definitely saved at least 50 percent of our time, if not more.

In terms of it helping to resolve issues when they occur, being able to log into Defects, go right into JIRA, add that defect to the user story, right there at that point, means we connect all of that. That is functionality we haven't had in the past. As a communication hub, it works really well. It's pretty much a closed loop; it's all contained right there. There's no delay. You're getting from the defect to the system to JIRA to the developer.

It very much provides our team with clear demarcations for which steps live in JIRA and which steps live in qTest. That takes away the discussions like, "What are we going to use? How are we going to communicate? Where will the data be?" Any of that preparation time is behind us. It's now the default for how the teams function and what they do. That's a really powerful process. The fact that it's a reusable, repeatable process makes everybody much more comfortable and trusting of the data that they're getting. They can then focus on the issues at hand. qTest really becomes a tool, and the best thing about a tool is not knowing you're using it. To that end, it's doing a really great job.

I can't say that we've seen a decrease in critical defects in releases since we started using qTest, but we have more visibility into our test coverage, blocked test cases, daily activities, etc. But I can't say that it's done anything to necessarily improve the quality of code.

Overall, it has helped to increase testing efficiency by around 30 percent. A lot of that, again, is due to being able to reuse things and being able to get to the metrics quickly. I can't overemphasize how easy it makes things.

What is most valuable?

We are using qTest Manager and qTest Insights, primarily.

We have a global testing team, so we needed a centralized, easy, web-based interface for accessing all of our testing and being able to manage test cases. We ship over 400 projects a year, so we needed something that was going to scale to that. That's what the Manager piece is for.

The Insights piece is for obtaining where we are in terms of status and finding out about metrics, etc. It provides insight into the status of my testing.

Also, we are fully integrated with JIRA; as an agile shop, we use JIRA back and forth. JIRA does all of our user stories, etc., and is the main source for defects. qTest Manager is the testing hub. The integration between the two has been great, pretty much seamless. We did run into one defect with volume, but the 9.7.1 release fixed that.

What needs improvement?

They're coming out with a new feature now, an analytics module. I, personally, slide more toward the metrics/analytics side of things, so anything they can do to come up with a reliable template where I can look at all of my metrics at a project-, quarter-, or enterprise-level, would be fantastic. So that's my "nirvana" goal. I don't want to have to go to Tableau. I have a lot of hopes in their analytics module.

And I would really love to find a way to get the results, into qTest Manager, of Jenkins' executing my Selenium scripts, so that when I look at everything I can look at the whole rather than the parts. Right now, I can only see what happens manually. Automation-wise, we track it in bulk, as opposed to the discrete test cases that are performed. So that connection point would be really interesting for me.
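
The missing connection point this reviewer describes would start with the JUnit-style XML that a Jenkins-driven Selenium job typically emits. Here is a minimal sketch, assuming a standard JUnit report layout (the file path and element names are common conventions, not anything from the review), of turning that report into per-test-case results that could then be handed to a qTest upload step like the one sketched earlier on this page:

```python
# Parse the JUnit-style XML that a Selenium job produces under Jenkins
# and yield per-test-case results. The file path and the JUnit element
# names here follow the common report convention.
import xml.etree.ElementTree as ET
from typing import Iterator, Tuple


def junit_to_results(path: str = "results.xml") -> Iterator[Tuple[str, str]]:
    """Yield (test_name, "PASS"/"FAIL") pairs from a JUnit report."""
    root = ET.parse(path).getroot()
    for case in root.iter("testcase"):
        # A <failure> or <error> child element marks the case as failed.
        failed = case.find("failure") is not None or case.find("error") is not None
        yield case.get("name", "<unnamed>"), "FAIL" if failed else "PASS"


if __name__ == "__main__":
    for name, status in junit_to_results():
        print(f"{name}: {status}")  # or hand these off to a qTest upload step
```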

We have between 150 and 200 users who are all QA. Project managers might cycle in sometimes for metrics, but we publish our metrics. You can embed scripts that come out of Insights, which is a really great feature. It's a feature I would really like to see them work on more, to make sure their APIs are bi-directional or timely. It's a little unclear if they refresh at a certain point in time or when I click it. That is one area that is a little murky.

For how long have I used the solution?

We're just about to start our third year using qTest.

What do I think about the stability of the solution?

We hit a wall before the 9.7.1 upgrade — we waited too long to upgrade. We hit a volume where we started seeing that JIRA and qTest were out of sync a little bit. It seemed to be a timing thing. But once we upgraded, that all went away. 

In terms of stability, I don't think it's ever crashed. Sometimes the Insights module is slow to load up but I think that is a timeout issue.

What do I think about the scalability of the solution?

I have a better feeling about scalability with 9.7.1 than I did prior to that. We should be okay. There will come a time, though, when we're going to have to consider archiving data, and how we want to do that. That would be a great feature for them to have over time, to be able to go back and archive.

We continue to bring a lot of projects into the department, so the volume of projects that qTest will manage for us continues to increase. We're starting to integrate it a lot with other products. An example would be SmartBear — we do a lot of API testing there. Anything that Tricentis would build, API-wise, along those lines would be really helpful.

We use NeoLoad for all our performance testing and that integrates with AppDynamics, so I don't know that we would need to integrate them, but it would be nice if it were an option. We definitely continue to use JIRA. We'll continue to expand on that platform. There's a lot of potential.

How are customer service and technical support?

I speak very highly of the company, especially the QASymphony folks who were merged into Tricentis. There was some merger pain in terms of availability. We found that our calls were cycling. But they recognized that pretty quickly and definitely helped us get on the right path. 

When we were doing the upgrade, we were able to get slots scheduled fairly easily. 

So tech support is as I expect it to be, at this point. 

I have names of people whom I can call. That's always nice. It's not just "1-800-qTest." As a vendor they're attentive. They've been up here a few times and we definitely have a view into their roadmap. I find that as much as you're willing to give, you'll get.

Which solution did I use previously and why did I switch?

We were using the HP suite. We switched because of price point and ease of use. We went into agile quickly, as an enterprise, and HP wasn't at an agile point at that time. We needed to make a switch.

qTest is much more intuitive and straightforward. There's not a lot of complexity to it. HP opened up the world, so there were far more features than we needed. That became a burden over time. HP's integration with JIRA was difficult. It was a thick client and it was very difficult to use the web interface and have good response times. I could go on and on, but you get the gist of it.

How was the initial setup?

We did a prototype two years ago and demo'ed it. It definitely played strong. The price point was right and then we started road-mapping it in 2018. We started implementing in October of 2018. We have a lot to do here so it took us until June of 2019 to get us all to steady-state. But it went without hitches, and that's probably due to a combination of how much planning we put into it and its ease of use.

You need to plan it. You need to know what your JIRA templates look like. You need to know what your JIRA workflow is, and then you need to understand what you want qTest Manager to look like. If you're integrating with JIRA, that will be the defining piece in how all of that structure will look. Once you understand that — and fortunately, I have control over both in my department, so we are really intimate with what our JIRA template looks like — it really maximizes how it integrates and the efficiency of how to get to where we wanted to go. It sounds like it took a long time, but it was really a lot of planning time and then we did the cutover. We also had a lot of training that we put into it. Having done this before, qTest was, by far, one of the easiest ones I've done.

We do internal audits on our own. We look back quarterly and say, "Are we meeting our own processes? Do we have reliable, reputable standards with our projects and the metrics, the way we count things? Are we consistent?" I do think you have to measure yourself, in addition to measuring your projects. That's really helped us significantly.

As for adoption of the tool, it's been really easy. It has simplified a lot of things. Things are right there. You can quickly drill through and it's pretty intuitive to pick up. There's not a lot of complexity around it. There are not a lot of unnecessary fields. The training on it and the adoption of it have been a lot easier than with HP.

What about the implementation team?

The deployment was all my department. We have a third-party vendor, Cognizant, that we work with. We have an 80/20 split: 80 percent of the department is Cognizant, 20 percent is Guardian. This touched everybody in the department, and we're somewhere between 150 and 200 people. But we had a core team of about a dozen people who mapped and planned it all out, and then they touched the rest of the department as their projects migrated over.

I have two to two-and-a-half people maintaining it.

What was our ROI?

I definitely see ROI in that I have testers who are focused more on doing really complex testing, rather than writing test cases. The reusable regression suite is always a good thing; to be able to copy and move test cases from one project to another. I don't want my testers to be rewriting things.

What's my experience with pricing, setup cost, and licensing?

I have not looked at it recently, but our license price point is somewhere between $1,000 and $2,000 a year. It's pretty low when you think about what we used to have. We haven't had any additional costs from Tricentis. We do the hosting on Amazon, so that's our cost.

Which other solutions did I evaluate?

We actually did a bake-off between Tricentis and QASymphony. And then we got the best of both worlds when Tricentis acquired QASymphony.

We looked at Zephyr and Xray but they were really too small-scale for the enterprise that we have. They probably would have saved us a lot of money, but our efficiency would have really fallen off.

What other advice do I have?

What I've learned from using the solution is "don't be afraid of change." HP was the blockbuster of our industry. There are a lot of great options out there. Do your due diligence and be brave.

Also, have a plan. It's not something that you want to go into and figure out as you're going. You need to really sit down and consider where you are, where you want to go, and what variables are going to help you figure out how to implement this. It's just like any other software package. You need to have a plan. You need to have a training plan. You need to make sure your team understands the opportunity and what they're going to get out of it. It can be scary, so you have to manage change as much as you have to manage implementation.

In terms of using qTest to look into failures, we haven't really enabled that part of it, yet. We use Selenium to do all of our automation, and that's a little different than using the Tricentis application. We're a Java shop and .NET shop, so we wanted to go with an open-source tool so we could hire Java developers for automation. We also use Selenium for open-source test automation. We know there are some exploratory options in qTest, and we will start setting the roadmap for 2020 in that direction. We definitely want to expand what we're using within the product, now.

Getting our upgrade to 9.7.1 was really significant for us. This past year has been a migration year. We got to steady-state around June. I wanted to get everyone to steady-state, spend some time with it, get our upgrade behind us, and then start to expand out to use some other pieces of functionality in 2020.

We use qTest to provide results to executives, but it's usually sanitized through my team a little bit, just because we're still getting used to the cycles and execution. We can do multiple runs and we have to get down to what the actual results are, not the overall multiple runs' results. I use qTest for that and that information gets cleaned up before it goes out to executives. The information it provides them is accurate.

qTest is an eight out of ten at this point. For me, the missing two points are the metrics. Every company counts things differently. Understanding their reports, out-of-the-box, and aligning the solution to where I want it to be (do I massage it to maintain the metric that I have, do I wait for a breaking point, or do I redefine my calculation?) is where those two points go. That has taken a little more than I would've expected. All the data is there. It's just a matter of how you're layering it out for your company.

Knowing what our calcs are and knowing what the qTest calcs are, and where they diverge, would've been really great. We were a little more naive than we had planned for.

I hope Tricentis keeps it alive and well. It's a great little product that is going to quickly grow to be something that gets out there with the big boys.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Nancy McClanahan
Quality Assurance Team Lead at Parkview Health
Real User
Top 5
Puts all our test cases in one location where everyone can see them. qTest also allows the segregation of different types of testing.

Pros and Cons

  • "I like the way it structures a project... We're able to put the test cases into qTest or modify something that's already there, so it's a reusable-type of environment. It is very important that we can do that and change our test data as needed..."
  • "Reporting shouldn't be so difficult. I shouldn't have to write so many queries to get the data I'm looking for, for a set of metrics about how many releases we had. I still have to break those spreadsheets out of there to get the data I need."

What is our primary use case?

When I first started here, my goal was to get a test case management tool. The testers were using spreadsheets, so the idea was to set up a strategy and plan to not only create a testing process, but to provide a way to improve that testing process. One of my suggestions was that we get a tool that allows us to be ready for the future, as we get to automation. Now, after all the manual testing, we have moved to cloud application testing for third-party cloud solutions, as well as complete testing for our electronic medical records system and its integrations. It has allowed us to expand the quality assurance of all software used at Parkview Health.

How has it helped my organization?

We can actually track without having to have spreadsheets, which really improved our process by, probably, 180 percent. That was a biggie. We were able to put test cases into one area so that everybody can see them, for every module and every application that we use.

The solution's reporting does a good job of enabling test team members to research errors from the run results. You can query down and make the results smaller. Since we are not a dev shop, our releases are projects that are opening offices, or changing a piece of functionality, or releases we get from our vendor. We're able to see a report of the execution, and that, too, is getting better. It has improved our productivity but has not totally gotten rid of some of the manual work that we do.

qTest has also helped us to quickly solve issues when they occur. I've seen demos of the automation and the like for Tosca. That's going to be interesting and an eye-opener for my company, given that "medical" is very slow at adopting new technology. 

We have also seen a decrease in critical defects in releases. When we started off we probably had close to a 60 percent defect rate, and the last time I did my metrics, a couple of months ago, it had gone down to 36 percent. It's because everything is right there. It's visible. You have to be accountable for what you do and what you mark up. In our company, it's been a huge culture change that they actually have to keep track of what's not working, in one location.

Overall, the solution has increased testing efficiency by a good 95 percent.

What is most valuable?

The most valuable features are the execution side of it and the tracking of defects. I like the way it structures a project. We have a PMO that does project management. That project management then triggers a process that tells us that it's going to be tested at a certain date. We're able to put the test cases into qTest or modify something that's already there, so it's a reusable-type of environment. It is very important that we can do that and change our test data as needed, especially for our EHR system. I also like the flexibility in how it can be used for DevOps or non-DevOps operations of quality assurance and testing.

What needs improvement?

We have used the Insights reporting engine but, within the last six months or so, since Tricentis took it over, they've started to improve that. We had some custom fields to match our process dates, and to track who is the project manager of the release, and who the test coordinator is. That way, we can keep track of what kind of testing is being done for that particular project. The Insights engine would not show us any of the custom fields when we first started using it. I've been working with them to improve that factor for Insights.

The next phase is that by the end of the year, they're supposed to release a new analytical tool within Insights or change Insights to be that analytics tool. I'm looking forward to that because I do all my analytics with exports from qTest and exports from our ITSM/ITIL system, Cherwell. I then make my reports out of them, so it will be very welcome to have that functionality.

I do some reporting for executives and business users from qTest. I go to Insights, do a query on the fields I want them to see, and then export that into Excel. I get the graphs, and then do a screen print, put it into a report, and send it off in a PowerPoint presentation. The quality of that data needs help. I use it fairly regularly for defect reporting because it does show an excellent view of the defects that are associated with the project and whether they're open or closed. I'm looking forward to the new analytics tool that is coming to cloud customers soon.
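
A minimal sketch of automating that export-and-summarize step, assuming a qTest/Insights export opened with pandas; the file name and the "Project" and "Defect Status" column names are hypothetical and would need to match the actual export:

```python
# Summarize a qTest/Insights defect export into an open-vs-closed table
# per project, ready to paste into a deck. The file name and the
# "Project" / "Defect Status" column names are hypothetical.
import pandas as pd


def defect_summary(export_path: str = "qtest_defect_export.xlsx") -> pd.DataFrame:
    df = pd.read_excel(export_path)
    # Count defects per project, split by the values in the status column.
    return df.pivot_table(
        index="Project", columns="Defect Status", aggfunc="size", fill_value=0
    ).sort_index()


if __name__ == "__main__":
    summary = defect_summary()
    print(summary)
    summary.to_excel("defect_summary.xlsx")  # feeds the PowerPoint report
```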

Reporting shouldn't be so difficult. I shouldn't have to write so many queries to get the data I'm looking for, for a set of metrics about how many releases we had. I still have to break those spreadsheets out of there to get the data I need.

Also, qTest doesn't have a general workflow engine. The only thing they have a workflow engine for is defects. I'd like to see more of something of that nature. It might help improve efficiency as we move into the future, especially when automation comes in.

For how long have I used the solution?

We have been using qTest for almost five years. We are able to track all types of testing: application, unit, functional, integrated, or a combination of types, for all kinds of releases.

What do I think about the stability of the solution?

We just had an outage on Monday. They had a data center outage. Outages happen a couple of times a year, so it's not bad. It's up about 97 percent of the time.

What do I think about the scalability of the solution?

I don't think I have any scalability issues with it. Right now, we have concurrent licenses, which seem to be plenty. We've not had a problem with that. It has only happened once or twice where there were 35 licenses used at the same time. The tool tells you, and then you have to wait until somebody signs off. That's easy to manage. We don't have any issues yet. I'm not saying that we won't, once automation is there.

We have 200 users and a total of 35 concurrent licenses. Generally, the users are analysts or Epic analysts, as well as managers, directors, and people involved in network validation. We have a tester lab, project administration, project manager, quality assurance, and quality assurance leads, as well as some people who have report-read-only access. Some of our vendors also have access. They have their user profiles because I limit their access in terms of what they can see and what they can do.

There are two of us involved in deployment, upgrading, and maintenance of qTest. I'm the lead, and I have a test coordinator who helps me.

We plan on increasing usage as we add more systems to it, and once we add automation. I will analyze how many licenses we have versus what we will be running at that time, and will determine if we need any more.

How are customer service and technical support?

We've used Tricentis technical support quite often. The way I understand it, they have tier-one, tier-two, and tier-three. Their development people are probably their third level.

They answer quickly, but sometimes they ask questions that I cannot answer because it's part of their tool. Last week I finally told them, "Just go out and look at our system. Follow my instructions and just go out and look. You have it. It's your cloud."

Which solution did I use previously and why did I switch?

I have used Micro Focus LoadRunner at a couple of locations. The Micro Focus tool is very complex and not as user-friendly as qTest is. I knew that implementing HP would be more difficult, and the price is much higher than qTest's.

So the factors were both price and usability of the tool, because some of the people who do our testing are not IS people. They don't understand the software development lifecycle. You have to make it simple for them to use, and I can do that within qTest.

How was the initial setup?

The initial setup was easy to use, but you have to make sure that you follow the process that is associated with how you manage your testing overall.

Our deployment took about six months. That was because of the data that we had to load. The strategy was to make sure it was working for all of Epic, which is our EHR system. We wanted to get that part done first. We then started adding the third-party applications and got all that data into it. We waited about a year to make sure that the third-party applications had their regression test cases in the system, and we still add new application data as we go forward, separate from the implementation of our EHR. In terms of the adoption of the solution within our company, it's much more user-friendly. It allows everything to be in a central location. The data management becomes more critical because you have everything right there at your fingertips, versus a spreadsheet which could be located anywhere.

What about the implementation team?

I did it myself.

What was our ROI?

We have absolutely seen return on our investment.

What's my experience with pricing, setup cost, and licensing?

For the 35 concurrent licenses, we pay something like $35,000 a year. There are no additional costs to the standard licensing fees, until we get into Tosca. 

We have the Elite version, which allows us to have Insights, Parameters, Explorer Sessions, Pulse, Launch, and Flood.

Which other solutions did I evaluate?

We evaluated five tools, narrowed it down to three, and qTest was ranked as number one. SmartBear, the HP tool, and Tricentis were the top three. Back then, it was QASymphony. It was before Tricentis bought them out.

The solution had to manage test plans, requirements, and test design. We wanted to make sure we could revise test cases as we moved forward with releases. Because we're not centralized as a testing organization — we have other groups that do our testing — it had to be able to get them involved cohesively. It had to track defects. Also, we do not have a project management tool, nor do we do DevOps projects. But we are continually doing different releases of all types of medical software. So we wanted to be able to manage our releases for all of our software. Ninety percent of our software is medical, but we also have things such as supplier management. We wanted to be able to do it all, all of our test cases, in one location. We wanted it to be easy to share. We also wanted it to have a good roadmap toward the future. It needed to be integrative and have the ability for single sign-on.

What other advice do I have?

Make sure you set it up the way you do your business. The process is essential, not just the tool that you're using to manage it. The biggest lesson I have learned from using qTest is that I should have used it years ago. We should have had this a long time ago, not just five years ago.

I send out periodical reports of all the metrics that we do, usually twice a year. We use other tools for keeping track of tasks that have to be done on each one of the projects. We use Microsoft Planner. It makes it easier for people to actually do their assignments and then let us know that the tasks are completed. If we had the JIRA tool or something of that nature, that would help the process. But, at this time, we don't use that functionality.

Which deployment model are you using for this solution?

Hybrid Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Other
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
DF
Senior Director of Quality Engineering at a tech vendor with 1,001-5,000 employees
Real User
Top 10
Gives us more efficiencies and overall improvement in transparency and visibility of the testing progress

Pros and Cons

  • "The main thing that really stuck out when we started using this tool, is the linkability of qTest to JIRA, and the traceability of tying JIRA requirement and defects directly with qTest. So when you're executing test cases, if you go to fail it, it automatically links and opens up a JIRA window. You're able to actually write up a ticket and it automatically ties it to the test case itself."
  • "The Insights reporting engine has a good test-metrics tracking dashboard. The overall intent is good... But the execution is a little bit limited... the results are not consistent. The basic premise and functionality work fine... It is a little clunky with some of the advanced metrics. Some of the colorings are a little unique."

What is our primary use case?

The primary use case is the overall testing process and management of our test cases, as far as the design, creation, review, and archiving of them goes. We use it to manage their overall status.

We are cloud users, so we've got the latest and greatest version. They transparently push updates to us.

How has it helped my organization?

The solution’s reporting enables test team members to research errors from the run results. We do have some metrics and dashboards set up which allow the testers themselves to get good visibility into where things are at, and which allow others to see "pass," "failed," and "blocked."

qTest has been very useful for us. It's helped in productivity. It's helped in automating a lot due to the seamless integration with JIRA. It has taken us to the next level, in a very positive way, in the management of our overall test cases. It has been outstanding.

In comparison to managing test cases in spreadsheets or other tools we've used in the past, qTest is saving us a couple of hours a day.

Investing in Insights, to have one location for a dashboard of all reports and metrics, has allowed us to minimize the number of reports or URLs which other stakeholders have had to go to in order to get status on the testing. There has definitely been an improvement there.

Use of the solution also provides our team with clear demarcations for which steps live in JIRA and which steps live in qTest. Test cases and tickets are assigned to test plans, etc. through the tools within qTest and they are all linked back.

What is most valuable?

The main thing that really stuck out when we started using this tool is the linkability of qTest to JIRA, and the traceability of tying JIRA requirements and defects directly with qTest. So when you're executing test cases, if you go to fail it, it automatically links and opens up a JIRA window. You're able to actually write up a ticket and it automatically ties it to the test case itself.

It has seamless integration with other key defect-tracking or ticket-tracking tools, with overall good API integrations.

What needs improvement?

The Insights reporting engine has a good test-metrics tracking dashboard. The overall intent is good, compared to other test tracking or test management tools. But the execution is a little bit limited. The overall solution is good, but the results are not consistent. The basic premise and functionality work fine. When you try to extend the use of it a little bit more, it struggles. It is a little clunky with some of the advanced metrics. Some of the colorings are a little unique. They are currently working on a new flavor for Insights.

We do have dashboards and links set up that our executive level can access. Overall, the numbers are accurate, based on what we're putting into it, but where we lose integrity, or where we lose the overall perception of things, is when the colors start changing or when red is used to mean good. That's when executives lose respect for it. We've used it as a dashboard during key deployments. And then, as progress is being made and the reports are being updated, colors start to change and that distracts from the overall intent of the reporting progress.

We chose to leverage Insights so that we didn't have to manually create charts via either a Google Sheet or Excel, since we don't have the resources, time, or bandwidth to do that. That is what excited us about Insights. But then, it just didn't meet our expectations.

We have voiced our concerns to Tricentis and they definitely have empathy. We talk about it and they keep us updated. Through an acquisition, they're going to leverage a new analytics tool. We are excited about that, once it launches.

We have also discussed with our account manager a couple of possible enhancements here and there, but nothing that's critical or major. One example is when you're trying to link test cases to requirements; a lot of the time there is duplication between the two. Sometimes you want to tie some of the same test cases to the same requirements. An enhancement would be a quick way to copy those links over directly, without having to manually link every single one again. We have some instances where a large chunk of test cases are tied, re-used, and similar. When you get upwards of 15 or 20, being able to take a copy of the links from one test case and apply them to another would limit some of the tediousness of doing them all manually. It's not of major concern. It would just be nice as a quick way to do it.
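
In principle, that bulk-copy enhancement could be approximated today with a small script against qTest's REST API: read the requirements linked to one test case and re-link them to another. This is only a sketch; both endpoint shapes and the response format below are assumptions about the v3 API and should be confirmed against the official API documentation before use.

```python
# Copy the requirement links from one qTest test case to another, instead
# of re-linking each requirement by hand. Both endpoint shapes and the
# response format are assumptions about qTest's v3 REST API; confirm them
# against the official API documentation before relying on this.
import os

import requests

BASE = os.environ["QTEST_URL"]
HEADERS = {"Authorization": f"Bearer {os.environ['QTEST_TOKEN']}"}
PROJECT_ID = 12345  # hypothetical


def copy_requirement_links(src_tc: int, dst_tc: int) -> None:
    # Fetch the requirements currently linked to the source test case.
    resp = requests.get(
        f"{BASE}/api/v3/projects/{PROJECT_ID}/test-cases/{src_tc}/linked-artifacts",
        headers=HEADERS,
        params={"type": "requirements"},
        timeout=30,
    )
    resp.raise_for_status()
    req_ids = [item["id"] for item in resp.json()]
    # Re-create the same links on the destination test case in one call.
    requests.post(
        f"{BASE}/api/v3/projects/{PROJECT_ID}/test-cases/{dst_tc}/link",
        headers=HEADERS,
        params={"type": "requirements"},
        json=req_ids,
        timeout=30,
    ).raise_for_status()
```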

Another example is that with the charts — and again, great intention — you can put in a date range and apply it. Then you get to another screen and come back. After updating several charts, the date range is gone again. You have to go back in and it's sometimes two to three times before that date range is saved. 

For how long have I used the solution?

It's just about a year since we procured licenses. We've been using it for about 11 months.

What do I think about the stability of the solution?

Stability with qTest is not an issue at all. We've had no downtime and no complaints, along those lines, with anything at all. qTest, by all means, is definitely one of the top test management tools out there.

What do I think about the scalability of the solution?

We're not a big shop so for our situation it's fine. We haven't seen any bandwidth issues with running in the cloud. People are accessing this tool across the globe and we've had no complaints or issues.

We don't plan on rolling it out further until we see the analytics portion of it. Our plan is that we will pick back up again at the start of the calendar year, once we see, at the end of this year, what analytics has to offer and once we get that working. Then we'll go back to the drawing board on how we can use it and then we'll roll it out and provide training.

How are customer service and technical support?

They have been doing okay in terms of the suggestions we make. It depends on the level of severity of what has occurred and what changes are needed. But they're responsive. We do get communications from the support team pretty well and our account manager is pretty good about following up on things.

For the most part, first-tier support has to ask some basic questions, but they're pretty good. There is room for improvement on communication response time from first-tier support. What we do is we wind up copying our account manager on tech support requests so she can assist in following up a little bit quicker. Ideally, we shouldn't have to do that, but we have learned to do that and it does make it a lot faster.

Which solution did I use previously and why did I switch?

We worked with a customized plugin within JIRA, not even a basic, off-the-shelf version. It was an in-house created module that was built to integrate. They couldn't afford to buy a plug-in, so they made one. That was why we started looking for a new solution. It was horrible. I would have preferred Excel.

How was the initial setup?

Because we have used tools like this in the past, we knew what we were getting into and we hit the ground running. So the initial setup was pretty straightforward. Compared to vendors we've worked with in the past, they've been extremely responsive, especially on the client success side of things. We've had that type of support and they have made sure that our needs are met. They have set us up with training and the like and that has been a really good experience.

Our deployment of the solution took a couple of months. Our complexity was that the test cases were being managed as tickets within JIRA and not necessarily using a test management plugin. The conversion of the test cases, and ensuring they were being transferred and translated into a single entity of the test case, was quite a big project.

What we were using before was a JIRA plugin. Given the way it was designed, what we had to do was extract everything into Excel and then import it into qTest. That part of the tool works phenomenally. It's just that we had well over 20,000 test cases to deal with. We wanted to make sure we organized them into libraries. So it took a bit of time to get everything in place, in proper order, to make sure that we didn't just dump everything in there.
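
A minimal sketch of that organize-before-import step, assuming the JIRA dump is an Excel file with hypothetical "Component" and "Name" columns; the idea is to split one big dump into one import file per library so nothing gets dumped into qTest wholesale:

```python
# Split one large Excel dump of test cases (extracted from the old JIRA
# plugin) into one spreadsheet per component, so each qTest library can
# be imported and reviewed separately. "Component" and "Name" are
# hypothetical column names; match them to the real export.
from pathlib import Path

import pandas as pd


def split_into_libraries(dump: str = "jira_testcase_dump.xlsx",
                         out_dir: str = "qtest_imports") -> None:
    df = pd.read_excel(dump)
    Path(out_dir).mkdir(exist_ok=True)
    for component, group in df.groupby("Component"):
        safe = str(component).replace("/", "_")  # keep file names valid
        group.sort_values("Name").to_excel(Path(out_dir) / f"{safe}.xlsx", index=False)


if __name__ == "__main__":
    split_into_libraries()
```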

We had one person doing the initial deployment. On Tricentis' side, there were two people involved in training us as well as our client support person. At this point, there are just two of us who are managing the tool. We tag team, but being that I am the senior director of the organization, I've tried to become the subject matter expert. I didn't really have anybody to delegate it to. That's why it's been a challenge that Insights is not behaving for us.

We've got 50-some licenses, but we probably see peak concurrent usage of no more than 15 to 20. We're a medium-size company with about 1,300 employees. Mostly it's quality engineers who are using it. Developers have access to help with test cases. We're trying to get scrum masters in there to use Insights, but with the challenges we've had with it, we've backed off that roll-out.

qTest is being used quite extensively, but there are just two of us who mostly use Insights. It's good in its ability to correlate all of the results coming from a double-digit number of scrum teams across the globe. We can see the status of that testing.

For our team, the adoption of the solution has been fantastic. It has been well-received. You couldn't ask for a more straightforward, user-friendly, easy-to-use tool on the qTest side, from a user perspective.

What was our ROI?

We have absolutely seen ROI. Before qTest, we didn't have good visibility and transparency; now we do.

Don't get me wrong about Insights. For basic "not run," "pass/fail"-type metrics it is fine. It gives us much more visibility than we had in the past in terms of the ability to collaborate on the design, review, tracking, and archiving of the test cases, and the basic results of some of the sprints.

What's my experience with pricing, setup cost, and licensing?

We're paying a little over $1,000 for a concurrent license. One of the solutions we looked at was about half of that but that one is very much a bare-bones test management tool.

There are no additional costs. We pay a flat yearly rate for each license.

Which other solutions did I evaluate?

We looked into SmartBear and Zephyr. And while we would not have purchased Quality Center, we used it as a benchmark.

The main reason for going with qTest was not only that its test management application is more feature-rich than the others, but also the ability to create a dashboard and report on a ton of metrics. We could have saved a lot of money, but I pushed hard to pay a premium for the Insights dashboard.

What other advice do I have?

The biggest lesson I've learned from using the solution, because of the Insights challenge, is that I would probably do more of a formal trial. They are aware there are issues with it, and they are going to work on it.

Absolutely use it for its test management capabilities, without a doubt, but have an alternative solution for your reporting metrics.

Your testing using the tool is not going to change the result of the testing. It's just that the means are more efficient. Our testing scope has been the same and our processes have all been the same. But we're implementing a tool that's a little more organized. We're not really going to become better testers just because we're tracking things a little bit differently. It gives us more efficiencies and an overall improvement in the transparency and visibility of testing progress and its status. qTest has been pretty rock-solid.

Which deployment model are you using for this solution?

Private Cloud
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Raja-Veeraraghavan
Automation Lead at LogiXML
Real User
Top 5
We're spending less time trying to find defects and doing manual testing

Pros and Cons

  • "The most important feature which I like in qTest manager is the user-friendliness, especially the tabs. Since I'm the admin, I use the configuration field settings and allocate the use cases to the different QA people. It is not difficult, as a QA person, for me to understand what is happening behind the scenes."
  • "As an admin, I'm unable to delete users. I'm only able to make a user inactive. This is a scenario about which I've already made a suggestion to qTest. When people leave the company, I should be able to delete them from qTest. I shouldn't have to have so many users."

What is our primary use case?

We have licenses for qTest Manager and Flood. We use Flood for performance testing. We use the Manager on a day-to-day basis for storing the test cases and linking them with NPM and the Selenium automation test cases, and we schedule runs through qTest.

We also have Jira Cloud, with connectivity through the CI/CD pipeline. We connect qTest with Jira and set up our runtime and regression automation. Manual testing is done using just the Manager, and the automation is done using Selenium and Selenide.

All the user stories are done in JIRA. We take those user stories from JIRA and input them into qTest. From there, people write the test cases related to each and every user story, and these test cases reside in qTest. qTest is then connected to a Linux box running Selenium. Since we have connected the qTest automation, Selenium runs the suite. We create defects in JIRA and connect them to qTest. This is how we link the entire package.
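
To make that linkage concrete, here is a minimal sketch in Python of how an automation run can report a result back to a qTest test run. The instance URL, token, project ID, and test run ID are placeholders, and the endpoint reflects my reading of the public qTest Manager REST API (v3); verify it against your own instance before relying on it.

# Minimal sketch: push one automation test result into qTest Manager.
# QTEST_URL, API_TOKEN, PROJECT_ID, and TEST_RUN_ID are placeholders;
# the auto-test-logs endpoint follows the public qTest Manager REST
# API (v3) -- confirm the path against your instance's documentation.
import datetime
import requests

QTEST_URL = "https://yourcompany.qtestnet.com"  # hypothetical instance URL
API_TOKEN = "your-api-token"                    # generated in qTest
PROJECT_ID = 12345                              # hypothetical project ID
TEST_RUN_ID = 67890                             # hypothetical test run ID

def post_result(status, note):
    """Submit one automation test log (e.g. PASS or FAIL) for a test run."""
    now = datetime.datetime.utcnow().isoformat() + "Z"
    payload = {
        "status": status,          # "PASS" or "FAIL"
        "exe_start_date": now,
        "exe_end_date": now,
        "note": note,
    }
    resp = requests.post(
        f"{QTEST_URL}/api/v3/projects/{PROJECT_ID}"
        f"/test-runs/{TEST_RUN_ID}/auto-test-logs",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()

post_result("PASS", "Selenium regression suite, run from the Linux box")

A scheduled Selenium suite can make a call like this after each test, which is what keeps execution status in qTest current without manual updates.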

How has it helped my organization?

The solution's reporting enables test team members to research errors from the run results.

Our executives have started to review results provided by qTest, but that process is not completely done. We are in the process of implementing it for the higher officials and showing it on their screens. Everything is in the cloud and they can just click on things and it says, "Okay, these passed and these failed."

The speed with which our team understood the tool and started implementing and using it has drastically improved things. I'm sure we will improve our use of it over the next couple of years and use the tool to the maximum.

The solution is helping increase testing efficiency. We spend less time trying to find defects and doing manual testing.

qTest is definitely doing a good job of meeting our requirements and meeting the needs of our higher officials for understanding how the tests are being run.

What is most valuable?

The most important feature which I like in qTest Manager is the user-friendliness, especially the tabs. Since I'm the admin, I use the configuration field settings and allocate the use cases to the different QA people. It is not difficult, as a QA person, to understand what is happening behind the scenes. Looking at the code and at the Node.js or NPM connection, it is easy for anyone to understand the CI/CD pipeline.

The terminology used on the tabs, like "test plan" or "test cases" or "regression" or "requirements," makes it easy for any layman to understand: "Okay, this is the model. This is how QA works. This is how the lifecycle of a product moves." There isn't any tough terminology that might make a new user say, "Okay, what is this? I don't understand."

It also provides the easiest way to set up automation. It is really easy to do compared with other ALM tools.

The integration with JIRA is superb. It was easy for my DevOps manager to go ahead and create integration between JIRA and qTest.

What needs improvement?

As an admin, I'm unable to delete users. I'm only able to make a user inactive. This is a scenario about which I've already made a suggestion to qTest. When people leave the company, I should be able to delete them from qTest. I shouldn't have to have so many users.

There are more improvements that could be made, such as giving users an easier way to access the tool.

For how long have I used the solution?

We have been using qTest for just under a year.

We have not completely implemented everything, all the features. Although we know what qTest has, we have not explored the data and the dashboard and the tabs. So we are just using 60 percent of the tool's assets. We are still waiting for our own stable releases to happen and then we can say, "Okay, automation is done, manual is done."

What do I think about the stability of the solution?

Initially, we had a few issues, but now it's stable. It doesn't give us any problems. For the past six months at least, I haven't had to create support tickets as often as I used to in the six months before.

The tool was new for us and it was pretty difficult for us to understand certain things. But now, we know what it is and how to implement it. We know how to integrate with JIRA, with Selenium, etc. Everything has settled down.

What do I think about the scalability of the solution?

The solution is scalable.

We currently have ten users using licenses out of our total of 12 licenses, and they use it on a daily basis. It's used extensively to create the test cases, run automations, and create defects in JIRA. 

Currently, we don't have any plans to increase our usage. Five staff members are required for the deployment and maintenance. They are the people who schedule the automation runs and who do all the other jobs on a daily basis.

How are customer service and technical support?

Technical support is pretty good. During the first six months I was creating tickets and trying to get answers immediately through email. If I could not understand their answer, they immediately scheduled a meeting. So, at most, my problem would be resolved over the course of a week. The support is really good.

Which solution did I use previously and why did I switch?

Previously, we used Micro Focus ALM. Now, we have divided our products internally: an old product where we use Micro Focus, and a new product for which we wanted a newer tool to be implemented, which is qTest.

How was the initial setup?

The initial setup was a little bit difficult. Once we started with qTest, we had to migrate all our test cases from Micro Focus ALM. That's where we had a few difficulties in implementing this.

We had the help of a migration manager from Tricentis who really helped us out. At that particular stage, I had difficulty setting this up. Once it was done I was so relieved. It did take time. We thought it would take a week's time, but it took a month to finish the entire task. 

The code didn't work as it was supposed to in the wizard for the migration. It's true that our company's repository in Micro Focus ALM was very large, so it was difficult for us to take everything from there. We had to break the repository in half, and we had a lot of issues with IT here, and with Tricentis there. Everything got settled, but it was not quick.

What about the implementation team?

Our experience with the Tricentis consultant was good. 

It's just that our setup took a lot of time. We had a lot of difficulty, initially, in migrating the entire project. We needed to activate the product in ALM, and then deactivate back. It was kind of a mess. But the support engineer would coordinate with me, even outside of office hours. We sat together in meetings and tried to clear things up. He was a pretty good guy who really helped us out to set this up. 

The fact that it is now so user-friendly and so easy to work with is only because he gave us the initial foundation for the product. We're really thankful for that.

What was our ROI?

It's too soon for us to see return on investment.

What's my experience with pricing, setup cost, and licensing?

We signed for a year and I believe we paid $24,000 for Flood, Manager, and qTest Insights. We paid an extra $4,000 for the migration support.

Which other solutions did I evaluate?

Being a lead manager, I shopped around among many ALM tools and tried to understand which would really meet the needs of our newer product. We found qTest to be the most user-friendly, and I could even say the most popular. The cost-effectiveness was also part of it. All of that helped us choose this.

Comparing Micro Focus and qTest, the cost of qTest is far less. Secondly, the cloud base and the fact that I am able to see everything on one screen are helpful. Although Micro Focus keeps updating as time goes by, it's not as easy or as user-friendly as qTest.

There's reporting in both solutions, but qTest Insights has more customization. Although ALM has some customization, it's not so easy to set up; you need to write VBScript code to do more customization. In Insights, it's easier for me to customize my reports.

We use both solutions, but the team that started using qTest is entirely different. The team is new and the product is new, so they didn't find any difficulty in adopting this tool. The other team, which was using Micro Focus ALM, is still using it. We have not changed any team's structure because qTest is used by the newer team and Micro Focus ALM by the older one.

We looked at Inflectra SpiraTest and TestRail. SpiraTest is definitely competitive with qTest; we found everything that was in qTest was in SpiraTest as well. But there were flaws in the terminology used by Inflectra. It would not be easy for any QA person to really understand. That was one of the differences we found. And the initial support I needed from SpiraTest was not what I wanted; I had to email them every day and was not getting immediate answers to my questions.

As for TestRail, its integration with JIRA was not as easy as we thought it would be. That was one of the flaws in TestRail that caused us to give up on it and move to qTest.

What other advice do I have?

Go for it, take a shot at it. Try it out with the 30-day free trial. If you really find it to be a good fit for your company, the productivity and the cost, go ahead and choose it. It's definitely a good tool.

The biggest thing we've learned from this tool is its ease of use. You can manage the entire application lifecycle by moving around the different tabs and options. With everything on one screen, it is easy for a QA person to get into it.

We have not used Insights that much. We have used it to some extent but we haven't gone into the details in the graphics and the reporting. Because our own product is changing so often — the versions and the management and the configuration of the product are changing — we do not have a stable release for our product. So we are not set up completely with Insights. We are in the process of doing so.

About 40 percent of what we do is still manual testing; only 60 percent is automated. The basic aim is for at least 80 percent automation.

Our team working on qTest Manager is located in Ukraine, so a team leader there could provide more elaborate answers than I can; I'm leading it from our head office. The team in Ukraine are the people using it on a day-to-day basis.

I would rate qTest at seven out of ten. To make it a ten, there are a few things here and there that could be easier for the user, like confirmation popups between operations. When I want to delete something it asks me, "Are you sure you want to delete?" But it does not do that everywhere. So there are some small things, here and there, that could really improve the tool. It is supported in Chrome, Firefox, Safari, and IE11. I would like to see more browser compatibility options, like using it in Edge. And when I move between browsers, the formatting of the tool is not consistent.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
JK
Testing Lead Manager at PDC Energy
Real User
Top 10
Helps us resolve issues faster because everyone is working off of the same information in one location

Pros and Cons

  • "qTest helps us compile issues and have one place to look for them. We're not chasing down emails and other sources. So in the grand scheme of things, it does help to resolve issues faster because everyone is working off of the same information in one location."
  • "I really can't stand the Defects module. It's not easy to use. ALM's... Defects Module is really robust. You can actually walk through each defect by just clicking an arrow... But with the qTest Defects module you can't do that. You have to run a query. You're pretty much just querying a database. It's not really a module, or at least a robust module. Everything is very manual."

What is our primary use case?

We are using qTest to store and house test scripts and test design. We're using the test execution to execute and we're using the Defects module to track defects.

How has it helped my organization?

This is an SAP implementation and until we brought in qTest the team had no tool. They were doing everything manually in Excel, all their tests and execution. I came on board to help lead. We've done multiple test cycles. We're in UAT right now. They did one integration test cycle without the tool and we've done two with the tool. It's helped with productivity when you compare it to doing it manually in Excel.

The solution's reporting has enabled test team members to research errors from the run results, for the most part. In terms of its effect on their productivity, they went from a complete manual Excel testing solution to this tool. We're able to execute 550 test scripts within a six-week period. Some of these are simple tests but some of them are robust integration scenarios.

Our company had an assessment done by an outside organization before the tool was in use. When they did their assessment they said they were not seeing good metrics. When they did the second assessment, they asked, "What's changed since our last assessment? We're seeing numbers, we're seeing reports, we're seeing data. Everything looks stable and not all over the place." They definitely noticed the change, and that's an outside organization.

qTest helps us compile issues and have one place to look for them. We're not chasing down emails and other sources. So in the grand scheme of things, it does help to resolve issues faster because everyone's working off of the same information in one location.

Overall, the solution has increased testing efficiency by about 60 percent, compared to what they were doing before.

What is most valuable?

We get the most benefit from the

  • test library
  • test execution

There is also a high level of reporting provided in the tool and you can extract reporting out of the tool. That has been very beneficial.

Ease of use is another helpful aspect. Right now, during the UAT, we have a lot of business users using the tool. They test the UAT scripts. The team quickly adapted to the tool. I was able to roll it out and I was actually commended by several internal people, as well as external consultants who were doing assessments, as to how things had changed. They were shocked that I was able to implement the tool so quickly. I had never used the tool before either. I received a trial copy of the tool and was able to quickly implement it and have it up and running for the team.

What needs improvement?

The Insights reporting engine is a little challenging to use in terms of setting up a report and getting the data. It took me a while to understand how to use the tool. 

I'm mainly extracting the data out of the tool; I'm not necessarily using any of the dashboards in it. There are some fields that I did not make site-specific because I had to get things up and running quickly. The fields are in both the Test Run area and Defects. If you set a project's fields up as project-specific rather than site-specific, you can't get any of those fields out of Insights. That's a limitation they need to figure out. They shouldn't have that limitation on the tool.

In addition, I really can't stand the Defects module. It's not easy to use. Micro Focus ALM used to be called QC, and that solution's Defects module is really robust. For example, let's say you have a defect and you have a query. You can actually walk through each defect by just clicking an arrow. You go through that defect, add your updates, click the "next" arrow, and walk down through them. But with the qTest Defects module you can't do that. You have to run a query. You're pretty much just querying a database. It's not really a module, or at least not a robust one. Everything is very manual. By contrast, qTest's test design and test execution modules are very robust. They just missed the boat on the Defects module. From what I've heard and from what I can understand, other people are using JIRA or something else to do their defect tracking, and we're not. I needed a tool to do everything. That's their weakest link.

For how long have I used the solution?

We have been using the product for a few months.

What do I think about the stability of the solution?

Overall, qTest is stable. Sometimes we see some performance slowdowns, a hiccup or glitch-type of pause. But for the most part, it has been operating. We haven't felt any pain yet.

What do I think about the scalability of the solution?

Right now, it's handling everything we're throwing at it. Since we're in UAT, this will be the highest number of people in the tool and probably the most activity in the tool, and it's been supporting things without any issues.

We went from 30 licenses to 60 licenses during this four-month period of time. I don't think that number will be increased. Once this project is over, the number of consultants will be reduced and the number of people involved will be reduced.

How are customer service and technical support?

Technical support has been fine, acceptable. Their responses have come in an appropriate amount of time, for the most part.

There are just those two limitations that I've uncovered, as compared to other tools that I've used. So a lot of my interactions are like, "Hey, I want to do this," and they say, "Oh, you can't do that," or "the tool doesn't support that." That's the thing I have run into the most. It's not a support issue, it's just a tool issue. Functionality.

Which solution did I use previously and why did I switch?

Cost and time were the main reasons I went with qTest. If I were to have my choice, I probably would have implemented the Micro Focus product because I am familiar with it and know it can do everything I wanted to do. But that would likely have been overkill; way more than this project needed, and it was much more costly. 

I was looking at another tool, the SmartBear QAComplete tool that I had used on a previous project. I didn't necessarily like that tool, but its cost was less than either qTest or HP QC/ALM. But once I got my hands on qTest, I definitely liked it better than the QAComplete product.

The ease of use and the interface helped push me toward qTest. I had also called a friend and he said, "You have to look at QASymphony or Tricentis. This qTest is good." I said, "Are you sure?" He said, "Yes, it's good. Trust me." That helped push me over the top.

How was the initial setup?

The initial setup was challenging. There are certain areas where it's very strict in how you have to set up your project. There are some strict guidelines that you have to follow. You have to have a release and a test plan. You can't do certain things within the test design module or test execution module. There are only certain ways that you can set up a folder structure, whether it's related to a cycle or a test suite or a module. I would prefer fewer restrictions. The restrictions are what made it complicated.

Deployment itself was done over a weekend. This is not on-prem, it's in the cloud. I set up the structure and then had to understand how to load the test scripts. It was very fast.

Our implementation strategy was to get it done as soon as possible. It was very off-the-cuff. There was no time to plan. I landed here right before this test cycle was supposed to start and I knew that if we left it as a manual execution we would fail miserably. For me, the plan was to identify, learn, and implement a tool, all within less than a week. It took me two weeks, including training myself. There was no plan other than "we need a tool."

What about the implementation team?

I worked with Justin and with one of the support people, and I had one meeting with another of their people. I had no more than four hours of support and a couple of emails.

Overall, my experience with their support during deployment was good. I was asking some questions and needed to take an approach that either they didn't agree to or didn't understand why I was doing it that way. The one person I remember talking to was so tied up in the Agile methodology that she couldn't see outside the Agile box, and that's what I needed. We weren't coming from an Agile methodology.

What was our ROI?

Over the four months there has been ROI. If I crunched the numbers I would probably find it has paid for itself already.

What's my experience with pricing, setup cost, and licensing?

The price I was quoted is just under $60,000 for 30 licenses, annually, and that's with a 26.5 percent discount.

Which other solutions did I evaluate?

I've used QAComplete from SmartBear. I've used HP QC, or ALM, from Micro Focus. I also used an old IBM Rational test manager, which I think was called SQA.

I think qTest was really built to support Agile, where the other tools were built to support traditional Waterfall but were easily adaptable to Agile. qTest is probably going to struggle a bit before they can truly support non-Agile implementations.

What other advice do I have?

The biggest lesson I have learned from using qTest is that every tool has limitations and you need to be able to adapt and overcome and not be stuck with one way of doing things. You have to find out where the tool shines and where it doesn't and make sure that the team feels the least amount of pain when you do implement it.

This solution has been implemented for one particular project. We have 60 concurrent licenses available and we have about 120 users who have been given access. Their roles in the project are either business analysts or quality testers. But these people also have their roles within the business. Some are managers within finance, some are directors, some are AP specialists, some are AR specialists. The project is a financial system implementation so we have a sampling of users from all departments executing scripts.

Since implementing the tool, we've seen a decrease in critical defects but I don't know if I can attribute it to the tool. I don't know if that's possible. It might be a stretch. But we definitely have seen a drop in critical defects. Over the last four months we have seen a 40 to 60 percent drop.

For deployment and maintenance of the solution, it's just me. I'm the one who picked the tool, I'm the one who implemented the tool, I'm the main administrator of the tool, and I am leading all of the testing efforts.

Setting up the users is pretty simple. I would recommend it. If you're looking for something quick, easy to use, and robust, it's definitely a very good tool. If I could get them to upgrade the Defects module, I would be very happy.

I do love it. I'm giving it a nine out of ten just because I don't think any tool out there is a ten, but Tricentis is close.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
VSwaminathan
Product QA Manager at Reflexis Systems
Real User
Top 5
Provides us with visibility into test results as well as better accountability within the QA team

Pros and Cons

  • "The integration with Selenium and other tools is one of the valuable features. Importing of test cases is also good."
  • "We feel the integration between JIRA and qTest could be done even better. It's not as user-friendly as qTest's other features. The JIRA integration with qTest needs to mature a lot... We need smarter execution with JIRA in the case of failures, so that the way we pull out the issues again for the next round is easy... Locating JIRA defects corresponding to a trait from the test results is something of a challenge."

What is our primary use case?

We have multiple teams working at Reflexis and test management is a critical aspect. We wanted to be able to maintain tests. We have multiple releases to be sent to customers and to internal teams.

We use JIRA for defect management and for our internal project-tracking purposes, but for test management we primarily use qTest.

How has it helped my organization?

qTest has created a lot of transparency across the different teams. Everyone now has access to the tool, so there is visibility, internally, from one team to another, regarding the results. When the builds are being sent out, people know how stable a build is and what the quality of that release is like. This information is very transparent and available to everyone who has access.

The way it's optimizing things is through the transparency within the teams. For example, we have an engineering QA team and then we need to send the build release to the implementation QA team. They are also able to review things. They get to know what things have passed or failed. And when we need to share with customers or others, they get very good information. They know that these builds have taken care of these things.

With respect to accountability, it provides clear information: This person has worked on this and that defect or these and those test cases and whether they have passed or failed.

As a QA team, there is more accountability. Now we are able to see what the test cases are that are assigned to us for QA, how much has been executed, and what has passed and what has failed. Later, those things can be evaluated, so it improves the accountability of the tester and creates more transparency in the results.

qTest has improved our time to release. With the automated testing which we are able to integrate with qTest, people are able to go through things immediately. We haven't seen a big change in time to release, but there is a gradual change. It has definitely improved release time, but that still needs to improve a lot. Release times have improved by 20 to 25 percent, roughly. We expect that to increase a lot. A few teams have adopted qTest completely, while other teams have started to adopt it in their work. Those things are going on in parallel. As more teams come into qTest, release time should definitely improve, in the longer run.

In addition, the automation integration that we do has been valuable. Because it has APIs, whenever we run an automation test it is automatically updated in qTest. Those efforts have been taken care of, especially with the transparency that it provides when we need to share the results or the release status with other teams. That is certainly a big plus we get from qTest.

What is most valuable?

The integration with Selenium and other tools is one of the valuable features. Importing of test cases is also good. 

The way we structure the test cases and execution cycles, integrate the requirements with the test cases, and then generate reports is all pretty awesome.
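
As a rough illustration of how importing can also be scripted, here is a minimal sketch that creates a test case with steps over the API. The URL, token, and project ID are placeholders, and the endpoint and field names follow my reading of the public qTest Manager REST API (v3), so check them against your instance's documentation.

# Minimal sketch: create (import) a test case with steps in qTest Manager.
# QTEST_URL, API_TOKEN, and PROJECT_ID are placeholders; the endpoint and
# payload field names are assumptions based on the public qTest Manager
# REST API (v3) and may differ by version.
import requests

QTEST_URL = "https://yourcompany.qtestnet.com"  # hypothetical instance URL
API_TOKEN = "your-api-token"
PROJECT_ID = 12345                              # hypothetical project ID

test_case = {
    "name": "Login page accepts valid credentials",
    "test_steps": [
        {"description": "Open the login page", "expected": "Login form is shown"},
        {"description": "Submit valid credentials", "expected": "Dashboard loads"},
    ],
}

resp = requests.post(
    f"{QTEST_URL}/api/v3/projects/{PROJECT_ID}/test-cases",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=test_case,
    timeout=30,
)
resp.raise_for_status()
print("Created test case id:", resp.json()["id"])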

There is a qTest reporting engine and they have Insights which is separate from the standard, conventional reports. Insights is pretty good. Once you get into it and start to understand how it has been designed, you will be able to take advantage of all the features.

The reporting is awesome. The way you get the stats and other critical information from the test reports in qTest is good.

What needs improvement?

We feel the integration between JIRA and qTest could be done even better. It's not as user-friendly as qTest's other features. The JIRA integration with qTest needs to mature a lot. We have some concerns and we have some challenges as we try to work with those features. This is an area where, if we see more improvements, we will be very happy.

We need smarter execution with JIRA in the case of failures, so that the way we pull out the issues again for the next round is easy. Currently, we have some challenges and complexities around that. Locating JIRA defects corresponding to a trait from the test results is something of a challenge. It impacts productivity. The reason is that the team spends more time on mapping it again for new execution failures. If that is taken care of, it will actually save a lot of QA effort.

I'm not sure if someone is working on that. We raised this point during our evaluation, so it was probably discussed at some point in time, but we don't have a clear version in which it will be taken up.

Also, Insights is not that easy to use for someone who has just started working with qTest. You need to know what all the fields are and have some background on Insights. It's not that user-friendly for someone who's just starting to work with it. People should be trained so they know what all the various features are inside it. Then people will be able to appreciate it.

For how long have I used the solution?

We've been using it less than a year.

What do I think about the stability of the solution?

The stability has been good. It's definitely serving our purposes, and that's one of the reasons we went for qTest.

What do I think about the scalability of the solution?

We see it helping us in the long-run as well. qTest seems to be adding more and more new features.

We have about 40 to 50 team members using it right now. We plan to slowly increase the number of users. It's a gradual process. We are planning to scale it. We are not currently reaching the peak of 25 concurrent users, most of the time. It rarely gets to the max. We average 15 to 20 users at any point in time.

There is no immediate plan to increase our licenses. As more teams and more members come into play, and when we hit the peak very frequently, we may increase the number of licenses.

How are customer service and technical support?

We have the option to contact tech support but, so far, except for a couple of times, we haven't had a reason to contact them. Tech support is good. They have set up a good infrastructure and process, so things are getting addressed quickly.

Which solution did I use previously and why did I switch?

Previously we had TestLink, but we found many challenges with it when we had to run automated tests. qTest has good features that help us maintain tests and share them with others with ease. The UIs are good and give the testers a lot of flexibility when working with them. Those are some of the main reasons we chose qTest for our test management.

We did an extensive evaluation of qTest. We had multiple people from Tricentis helping us during our evaluation process. It has been adding value to our organization.

How was the initial setup?

The initial setup was straightforward. I was not involved in that process. It was done by the IT team in discussion with qTest counterparts. But overall, I didn't see any challenges. It was planned for a specific day, and it was completed on that day.

There was one person from our side and one person from Tricentis involved.

The adoption has been good. The organization is impressed with the features and the value that it will add to our QA processes. That's definitely a positive. It's definitely doing what we were expecting. We haven't seen any concerns from the end-users or management.

What was our ROI?

We have definitely seen ROI. One area of return is due to the simplicity of use. It brings a defined process to the team. TestLink, which we used previously, was not very usable for the testers in terms of maintaining the test cases or creating them. It was taking a lot of time. People are able to work with qTest and are able to focus more on the actual testing, rather than maintaining things due to complexities. Those are the areas it has improved.

We haven't seen dollar savings, but it is definitely adding value to the teams.

What's my experience with pricing, setup cost, and licensing?

It is pretty costly, from what I remember. It's quite a few times more costly than other tools on the market. We compared it to the other leading test management tools. We went for it because of the features and the value it could add to our organization.

Which other solutions did I evaluate?

We evaluated several other tools. But the features, especially the integration of requirements with test cases, are pretty awesome. Many tools do not have these features and, even when they do, they are not as simple as they are in qTest. That's one of the primary reasons qTest has been very useful for us.

Open-source solutions don't have as many features and their usability is also not as good.

Multiple people in our company evaluated other solutions and, based on all their input, we finally chose qTest. 

What other advice do I have?

Do a cost-benefit analysis. qTest is more costly than other tools. If you have multiple teams, it's going to be essential, and it's worth buying qTest. Apart from that, if cost is not a factor, there are more benefits from qTest and it's definitely a tool you can go for.

All the features we have used are pretty impressive and good. The JIRA integration is the only exception; if it is very critical for you, you need to plan accordingly.

It's a good investment for the implementation of the QA process. It creates more accountability in the team and also makes a lot of things easy for the managers as well. It simplifies a lot of QA processes. These are the things we've learned from using the solution. As we start having other teams use the tool, they should also be able to see and take advantage of these things.

Not many business users are using qTest. We share reports with them and they use them for management and other purposes. Primarily, qTest is used by the QA team only. But people take the reports as a starting point for discussion for things like product-improvement purposes. The business users rarely go into the tool to get to the various details they need. Mostly the reports are PDFs that we generate. That becomes the source for them instead of them logging into it and getting information.

The IT team maintains it along with all the software that we have installed on our premises. That team is taking care of it, but we hardly have any maintenance requests for qTest. There have been a couple of outages but, apart from that, we have hardly needed anything.

We haven't seen any change in the number of defects. It mainly creates transparency, and accountability has been increased.

It's easily understandable, including the reports. It's pretty comprehensive and provides all the essential details that we need to publish from any of the teams.

I would rate qTest at nine out of ten. It's a perfectly good tool. It definitely serves its purpose and I can definitely recommend it.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
MD
Manager, IT Quality Assurance (EDM/ITSRC/Infrastructure) at a financial services firm with 1,001-5,000 employees
Real User
Top 10
Integration with JIRA makes all test cases available to anybody in the company with JIRA access

Pros and Cons

  • "The solution's real-time integration with JIRA is seamless."
  • "qTest offers a baseline feature where you can only base sort-order for a specific story or requirement on two fields. However, our company has so many criteria and has so many verticals that this baseline feature is not sufficient. We would want another field to be available in the sort order."

What is our primary use case?

qTest is our test case management tool.

How has it helped my organization?

Our company's workflow starts in JIRA. We create epics, stories, bugs, etc. All of those things are integrated within qTest. There was a disconnect before, with the testers working in Quality Center while developers and business analysts were working in JIRA. qTest has eliminated that piece, because there is a specific JIRA integration. All the test cases are available in the links section within JIRA, so they're visible to anybody in the company who has access to JIRA. They can pick up the item, whatever the issue type, look at a story or bug, and see what level of QA testing has been done and whether its status is pass/fail. All of the test statuses are available in the story itself, so there is one place to view things.

We also use that information for release management. Every release will have an associated JIRA tag for release to production. It's easier for the change-management people to look at JIRA itself and see what level of testing has been done, if it's pass/fail, etc.

We use Selenium WebDriver for test automation. We use Python automation scripts which are located in BitBucket, the central location where we keep all our automation scripts. We execute these scripts with Jenkins and then use a qTest plugin to push the results from Jenkins into qTest test results once the executions are over. We can also run the same automation scripts within the qTest Automation Host feature. Through the Launch feature we can kick off automation scripts which are available in BitBucket. So we can use either Jenkins or qTest to run the automation scripts. Because of the reporting mechanism, we are directly passing test results to the execution tab, so senior staff can see how many scripts we ran, how many passed, and how many failed, in a detailed report in Insights. Jenkins has the ability to talk to qTest. Previously, when we used Quality Center, it didn't have any capability to talk with JIRA.
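
For anyone wiring up a similar Jenkins-to-qTest flow, the core of it is translating the build's JUnit report into pass/fail entries. Here is a minimal sketch of just that translation step, assuming a standard JUnit XML report at the hypothetical path report.xml; the actual push into qTest would go through the plugin mentioned above or a REST call.

# Minimal sketch: read a JUnit XML report from a Jenkins build and turn
# each <testcase> into a pass/fail record, the shape a qTest push needs.
# "report.xml" is a hypothetical path; point it at your build's report.
import xml.etree.ElementTree as ET

def junit_to_results(path):
    """Return one {name, status, note} entry per <testcase> element."""
    results = []
    root = ET.parse(path).getroot()
    for case in root.iter("testcase"):  # handles <testsuites> or <testsuite> roots
        problem = case.find("failure")
        if problem is None:
            problem = case.find("error")
        results.append({
            "name": "{}.{}".format(case.get("classname"), case.get("name")),
            "status": "PASS" if problem is None else "FAIL",
            "note": "" if problem is None else (problem.get("message") or ""),
        })
    return results

for result in junit_to_results("report.xml"):
    print(result["status"], result["name"])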

qTest also provides our team with clear demarcations for which steps live in JIRA and which steps live in qTest. It's a positive feature. It improves our understanding of expectations as to which requirements are to be filled in through JIRA versus the test-case management and controls available in qTest. It also separates roles and responsibilities and allows people to work within their boundaries.

What is most valuable?

The solution's real-time integration with JIRA is seamless. With ALM or QC, we had an additional plugin which was a schedule-based integration between the tool and JIRA. Sometimes things errored out and there were too many integration issues. We didn't have an updated sync-up between JIRA and Quality Center. Quality Center was predominantly used by testers and JIRA was being used by other users in our company, including PMs, DSAs, and devs. qTest solved one of those challenges for us.

The reports qTest provides to executives are pretty good because what we do is mimic what we used to do with Quality Center: defect reports, throughput reports, aging reports, and an execution summary report to show how many test cases there have been. The executives are reviewing them and they do not find any difference between what we had in Quality Center versus what we are doing in qTest.

What needs improvement?

We are starting to use qTest Insights a little bit. Right now, on a scale of one to five, I would say the Insights reporting engine is a three because we are facing some performance issues.

For example, qTest offers a baseline feature where you can only base sort-order for a specific story or requirement on two fields. However, our company has so many criteria and has so many verticals that this baseline feature is not sufficient. We would want another field to be available in the sort order. When tickets come over from JIRA, it would be helpful to be able to sort by sprint, to begin with. And within the sprint there are labels or subcategories. Currently, it allows us to only sort on a sprint and then subcategory. We would like to see things bucketed or placed in a folder with a status within the subcategory. We need three fields instead of two. When we raised this item, Tricentis said that it's a feature request. 

Also, the features that are customizable or specific for a team are still not available in Insights reporting. We have submitted approximately 15 or 20 tickets to Tricentis so far to address those features/enhancements/bugs. That's all in the works.

Another important issue is exporting from JIRA. When we export, any attachments in JIRA are part of the export. But although qTest plugs into the test cases, it does not let us export attachments through the integration. Everybody else in the company who operates in JIRA would like to see the attachments that come with the integration links, meaning the test cases. That is actually a feature that has to be provided by Tricentis, not JIRA. Again, that required a new feature request to be submitted.

For how long have I used the solution?

We have been using qTest for around six months.

What do I think about the scalability of the solution?

So far the scalability looks pretty good. I cannot say for sure because in a matter of six months we have 26 projects that are live and functional. So far, so good, but I cannot talk about the scalability yet.

From what I have heard from Tricentis, there is no restriction on data storage. In terms of latency, because the application itself is in the cloud, we shouldn't see any performance issues accessing qTest.

We have about 55 people, contract testers, who have access to the edit, add, and execute features. We have three admins. And we have about 100 people who are view-only users of qTest items. We don't require any people to maintain the solution since it's hosted on the cloud.

We definitely anticipate increasing the number of projects in qTest.

How are customer service and technical support?

Technical support is friendly and quick. Most of the time we get a response the same day. They're located in Vietnam. There is a ticketing process. If we have an issue we open a ticket with them. If we need to, they will schedule a meeting with us to complete the request. They respond on time.

Representatives come over or Skype us to tell us about the next version date and the like. We get the communications from Tricentis indicating the dates of rollout of new versions.

Which solution did I use previously and why did I switch?

We used to have Micro Focus ALM Quality Center as our test management tool and we were nearing our licensing limitation at the time. We evaluated a couple of tools in the market and we picked up qTest because it had a better reporting mechanism and dashboard features, along with a clean integration with JIRA.

How was the initial setup?

The initial setup was straightforward. There were clear project templates and clear user templates available. We were able to add and update roles as needed. The user list was already available. All we had to do was checkmark and save. It was really seamless setting up users within the tool. 

Likewise, we could model it as a waterfall or agile template, and it then gave us the workstreams created in the folder structure mechanism within qTest. These are all good features that allowed us to quickly set things up and keep things moving.

It's hard to say how long it takes to set up qTest because it's handled by Tricentis. All they told us was that they had finished their deployment.

We were given a sandbox and some sample projects to be evaluated and tested. We had a month or so during which all our testers were given access to those sample projects. We tested them and we said we were good to go. The production environment was then available for us to roll out our projects.

Our organization’s adoption of the solution has been pretty positive. Users were looking forward to it. They embraced it pretty quickly and pretty well.

What was our ROI?

It's too early to tell about a return on investment.

What's my experience with pricing, setup cost, and licensing?

I believe we have an annual subscription.

Which other solutions did I evaluate?

We evaluated QASymphony and QMetry. To begin with, we had a list of about ten tools that we researched on the internet and via some phone calls. We narrowed it down to these two and Tricentis.

The main differentiators were the dashboard and reporting mechanism, the artifact reporting mechanism, and the JIRA integration. Those were the reasons we chose Tricentis.

What other advice do I have?

It's a simple tool. The usability is pretty straightforward. For a QA tester who is an active user, the UI is pretty simple, the linkage of requirements to test case is simple, and there is searchability of test case across the project. Overall, it's better than Quality Center in the ways that I have explained.

My suggestion would be to always put your use cases up-front with vendors whose tools you're looking at. Ask for a demo and make sure that your use cases match up to the demo that's being performed. We had some challenges with JIRA and the Quality Center integration, real-time interfaces, the time lag, and visibility for all people into the tool. These were challenges in Quality Center that qTest has overcome.

At this time the report section in qTest is used by QA managers and above. Our QA testers are not looking directly into the reports to gather stats. It's the QA managers, directors, and VP, as well as people in IT, who are starting to look at the metrics within the reports and who are trying to form a consensus on reporting bugs. Our testers just log bugs, and Insights reports are gathered by a QA lead to create a defect summary. Then the QA lead talks to the PM, dev, etc. So the reporting itself is not affecting the productivity of the testers. Overall, qTest is not affecting our testers' productivity.

In terms of executives or business users reviewing results provided by qTest, we are just starting that process. They are reviewing the results but there is no active, standardized communication, back and forth, on the artifacts that they review.

I can't say we have seen a decrease in critical defects in releases. qTest is not even enabling people to put out the right requirements. The defect reduction is in the upstream work: how they write the requirements, how they code, how we test. qTest is just the tool that allows us to do a pass or fail. The tool itself is not an enabler of defect reduction.

It's a flexible tool but, overall, I cannot say that it has increased testing efficiency.

Which deployment model are you using for this solution?

Private Cloud
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.