We use it for QA software that we build.
It boosts productivity because we're able to quickly come up with a test plan, as opposed to building one from scratch each time or relying on something homegrown.
The most valuable feature is reusing test cases. We can put in a set of test cases for an application and, every time we deploy it, we are able to rerun those tests very easily. It saves us time and improves quality as well.
It also helps us to identify defects before we get them into production. And, overall, it has increased testing efficiency by 30 percent in terms of time.
The information that qTest provides to executives could be better. If a test has a lot of steps and someone completes seven out of eight of them, qTest doesn't show the test as complete. So from a metrics perspective, what executives see looks like nothing was done, even though seven of the eight steps were finished.
In addition, you can organize tests using what I believe are called suites and modules. I opened a ticket asking what the difference is, and it seems there's very little. In some places, the documentation says there's no difference; you just use them to organize things however you want. But they're not quite the same, because some options are available under one and not the other. That gets confusing. And since they're so close to the same, people use them differently, which creates a lack of consistency. My preference would be for qTest to establish one way of doing it and require everybody to follow it, so everything is done the same way.

In response to my ticket, they said the two are the same and that you can choose whichever one best fits how you want to organize. The problem is that everybody in the organization makes a different choice. They also sent me a link to the documentation, and some of it does say there are differences. There was one thing, like importing tests, that we could do under one but not under the other. That really made it a mess. It's the only really big concern I've had.
We've been using qTest for between six months and a year.
It seems very stable.
Scalability gets to be a little bit of a mess. I've never seen a performance issue, but as we continue to add projects, it feels a little unorganized, especially for somebody who has access to a lot of the projects or is an administrator with all of them. There's too much stuff. When I create projects, for example, they stay in my dropdown forever, as far as I know, and that creates a huge list of projects. When a project is done, I would like to get it out of my face.
Tech support did answer promptly. My issue was not the fault of tech support; they did fine. The issue I described above is the only time I've contacted them.
In this organization, Tricentis was the first. At my last job we used Micro Focus Quality Center. Both it and qTest are a pain; they're pretty similar.
The initial setup is a little bit wonky. What you need to do to get the job done is not intuitive. It takes more time to train people than if it were a little bit simpler.
Getting all the products set up and getting all the testers assigned took a while.
The adoption of qTest in our organization has been average. People aren't against it; they comply. But again, because we don't have a formal QA team, it's our best option. When we ask people on the business side to use it, they're pretty good about using it, as long as we show them how.
It does what it's supposed to do. I don't know what the organization paid for it, but it is getting the job done that it's supposed to get done.
I would recommend planning how you're going to organize it and having everybody organize it the same way. You see this a lot in software: vendors build in flexibility thinking they're doing you a favor by letting you use the tool the way you want. But if you have ten users, those ten users use it ten different ways. If there's no flexibility at all, the ten users use it the same way. To me, that's almost better. Even if it's not exactly how we want it, at least it's consistent. Uniformity, over being able to choose exactly how I use it, would be my preference.
The biggest lesson I've learned from using qTest is that we need dedicated QA people. What happens is something like the following. I have a developer, Amos, who, thinking he's doing the right thing, goes in and loads up 20 tests, and then gives those to the business to test. And they think, "Hey, the expectation is that I do exactly what this thing says." The problem is that we then only test from the perspective of the developer. We're not actually getting the business to think about what they should look at or, better yet, developing a dedicated QA team that knows to look for defects. It's a myopic perspective on testing, and because of it, we don't find as many defects as we could. That is not a qTest issue, though. If we had a dedicated testing team using qTest, that would be ideal.
We have not seen a decrease in critical defects in releases since we started using it, but I wouldn't blame qTest for that. It's more that we do not have a dedicated QA team. My management team seems to think that qTest is a substitute for a dedicated QA team, so we have the developers and the business side use it to test. But developers and business users are not as good at finding defects as a dedicated QA team is.
In terms of maintenance and administration of the solution, we don't have anybody dedicated to those tasks. People do whatever maintenance is needed to accomplish what they need. It's mostly me who creates projects, adds users, and the like.
We have 56 users, who are primarily developers and on the business side.
Overall, it gets the job done, but it's a struggle to get there. It's not as intuitive to use as it could be.