What is most valuable?
The most valuable aspects for me are its versatility and the power its add-ins bring to so many different platforms.
I love doing database testing with the tool. I also love how UFT can run functional tests on the UI, execute tests against a web or REST service, use data from the database to validate the front end, and finish the run by kicking off performance testing for the same application.
And all of that can be done from the QC/ALM tool so defects can be linked back to requirements and test cycles.
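The data-driven part of that flow, pulling expected values from a database and checking them against a service, can be sketched generically. This is a minimal illustration in Python rather than UFT's own VBScript, and every name in it (the `users` table, the fake service) is hypothetical, not something from UFT itself:

```python
# Generic sketch of the pattern described above: database rows drive
# checks against a service layer. All names here are hypothetical.
import sqlite3

def fetch_expected_users(conn):
    """Pull the expected records from the test database."""
    return dict(conn.execute("SELECT id, email FROM users"))

def run_checks(conn, service):
    """Compare each database record against what the service returns."""
    failures = []
    for user_id, expected_email in fetch_expected_users(conn).items():
        actual = service[user_id]  # stand-in for a REST call like GET /users/<id>
        if actual != expected_email:
            failures.append((user_id, expected_email, actual))
    return failures

# In-memory database and a fake service, just for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "a@example.com"), (2, "b@example.com")])
fake_service = {1: "a@example.com", 2: "b@example.com"}
print(run_checks(conn, fake_service))  # [] when the service matches the DB
```

In UFT the same idea is usually expressed through the Data Table or a database checkpoint, but the shape of the test is the same: the database is the source of truth and the front end or service is what gets verified.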
How has it helped my organization?
We do consulting, training and mentoring with the HP tool set, including UFT, so it is kind of our bread and butter. There are a lot of options with the tool. We just finished implementing an automation framework with over 600 tests using UFT.
Last week, I mentored another customer in how to use the tool with their team so they can start automating their tests.
We use it in a lot of different ways. I used it to build a script that automatically checks me in if I have a flight with Southwest to help me get a better boarding group, so it helps with my travel too.
The product is robust by itself, testing both GUI and backend processes in conjunction with other tools like LoadRunner and ALM. The UFT tool can be a huge boon to a testing organization that commits to its use. Over time, a great deal of testing can be taken off of the manual testers' hands, allowing them to focus on the more complex testing issues.
What needs improvement?
The areas I would have mentioned before are being addressed. HP added the LeanFT functionality in UFT 12.51, so users can build tests using Java, C#, or other programming languages they are comfortable with.
I would, however, like to see the application have fewer issues with crashes.
For how long have I used the solution?
I've used it for over eight years.
What was my experience with deployment of the solution?
That is one of the good things about the UFT tool. It is a mature product from a mature company, so while there are issues from time to time with installations, the tool usually deploys without issue.
What do I think about the stability of the solution?
Stability can be an issue, and the weaker the resources on the machine running UFT, the more likely you are to hit problems.
What do I think about the scalability of the solution?
Scalability is not an issue as long as an organization can afford the licenses.
How are customer service and technical support?
Partners like our company who offer support tend to get high marks for it. HP's own support is notoriously difficult.
Which solution did I use previously and why did I switch?
Over the course of my career, I used Rational Robot back before IBM bought them, and Silk Test as well as Silk Performer when Segue owned them both. All were good tools, but it's not a fair comparison since I used them so long ago. I will say I loved working with Silk Performer.
How was the initial setup?
There is a wizard for the setup, which I have always found to be simple and straightforward. That same wizard can be used to set up the license server, repair installations, install some add-ins, and handle a few other features. It has always seemed pretty intuitive to me in terms of setting up QTP and UFT.
What about the implementation team?
We generally implement in-house, but then again we train and mentor folks on using these products, so that makes a certain amount of sense.
Read the install notes before you start and make sure your target system meets all the requirements. So often folks call for support when really it was a matter of not reading the installation documentation.
What was our ROI?
Well, ROI will be specific to a customer and their needs, but I can give an example.
We built automation for a company whose regression test previously needed 17 people for 12 or more weeks. That same test can be run in a week with the UFT tool and one, or maybe two, people to make sure there are no problems with the test runs. I also built automation that created test sets, executed the tests in those sets, and validated the results for a testing effort that previously took three people two or more weeks.
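The back-of-the-envelope math behind that example, using person-weeks as the unit (the figures come straight from the numbers above; the exact savings will of course vary by organization):

```python
# Rough ROI math for the regression example above, in person-weeks.
manual_effort = 17 * 12      # 17 testers for 12 weeks = 204 person-weeks
automated_effort = 2 * 1     # at most 2 people for 1 week = 2 person-weeks
savings = manual_effort - automated_effort
print(savings)               # 202 person-weeks saved per regression cycle
```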
Given all that, ROI is really what automation is all about.
Which other solutions did I evaluate?
What other advice do I have?
Get training. Being self-taught will leave a lot of frustrating holes that training fills. You can have really bright people but they just won’t know how to use some of the features of the tool because they won’t know those features exist. As a result they can grow frustrated and mistake their lack of knowledge for shortcomings in the product.