We performed a comparison between OpenText Silk Test and OpenText UFT One based on real PeerSpot user reviews.
Find out what your peers are saying about Tricentis, OpenText, Katalon Studio and others in Test Automation Tools.
"Scripting is the most valuable. We are able to record and then go in and modify the script that it creates. It has a lot of generative scripts."
"A good automation tool that supports SAP functional testing."
"The ability to develop scripts in Visual Studio, Visual Studio integration, is the most valuable feature."
"The scalability of the solution is quite good. You can easily expand the product if you need to."
"The feature I like most is the ease of reporting."
"The statistics that are available are very good."
"The major thing it has helped with is to reduce the workload on testing activities."
"Compared to other products, UFT One is better, faster, and more accurate."
"The inside object repository is nice. We can use that and learn it through the ALM connection. That's a good feature. The reporting and smart identification features are also excellent."
"The ease of record and playback as well as descriptive programming are the most valuable features of UFT (QTP)."
"The initial setup is relatively easy."
"Being able to automate different applications makes day-to-day activities a lot easier."
"For traditional automation, approximately half of our tests end up automated. Therefore, we are saving half the testing time by pushing it off to automation. That gives it an intrinsic benefit of more time for manual testers and business testers to work on possibly more important and interesting things. For some of our applications, they don't just have to do happy path testing anymore, they can go more in-depth and breadth into the process."
"Record and Replay to ease onboarding of new users."
"It helps in identifying defects earlier. With manual testing, that 15-day timeline meant there were times when we would find defects on the 11th or 12th day of the cycle, but with automation we are able to run the complete suite within a day and we are able to find the failures. It helps us to provide early feedback."
"Everything is very manual. It's up to us to find out exactly what the issues are."
"We moved to Ranorex because the solution did not easily scale, and we could not find good and short-term third-party help. We needed to have a bigger pool of third-party contractors that we could draw on for specific implementations. Silk didn't have that, and we found what we needed for Ranorex here in the Houston area.

It would be good if there is more community support. I don't know if Silk runs a user conference once a year and how they set up partners. We need to be able to talk to somebody more than just on the phone. It really comes right down to that.

The generated automated script was highly dependent upon screen position and other keys that were not as robust as we wanted. We found the automated script generated by Ranorex and the other key information about a specific data point to be more robust. It handled the transition better when we moved from computer to computer and from one size of the application to the other size. When we restarted Silk, we typically had to recalibrate screen elements within the script. Ranorex also has some of these same issues, but when we restart, it typically is faster, which is important."
"The support for automation with iOS applications can be better."
"They should extend some of the functions that are a bit clunky and improve the integration."
"The pricing is an issue, the program is very expensive. That is something that can improve."
"Could be more user-friendly on the installation and configuration side."
"The solution has a lack of compatibility with newer technologies."
"Needs to improve the integration with the CI/CD pipeline (VSTS and report generation)."
"The solution is expensive."
"We'd like it to have less scripting."
"They should include AI-based testing features."
"The speed could be improved because a large test suite takes some time to execute."
"It doesn't support Telerik UI controls and we are currently looking for a patch for this."
"We used to run it as a test suite. Micro Focus provides that in terms of a test management tool as ALM, but when we think of integrating with a continuous integration system, like Jenkins, there isn't much integration available. That means we need to make use of external solutions to make it work."
"I would like Micro Focus to provide more information on their portal about their newer products. The information about UFT One was outdated. The image recognition features could also be better."
OpenText Silk Test is ranked 24th in Test Automation Tools, while OpenText UFT One is ranked 2nd with 89 reviews. OpenText Silk Test is rated 7.6, while OpenText UFT One is rated 8.0. The top reviewer of OpenText Silk Test writes "Stable, with good statistics and detailed reporting available". On the other hand, the top reviewer of OpenText UFT One writes "With regularly occurring releases, a QA team member can schedule tests, let the tests run unattended, and then examine the results". OpenText Silk Test is most compared with Selenium HQ, Apache JMeter, OpenText UFT Developer, SmartBear TestComplete and froglogic Squish, whereas OpenText UFT One is most compared with Tricentis Tosca, OpenText UFT Developer, Katalon Studio, SmartBear TestComplete and Selenium HQ.
See our list of best Regression Testing Tools vendors, best Test Automation Tools vendors, and best Functional Testing Tools vendors.
We monitor all Test Automation Tools reviews to prevent fraudulent reviews and keep review quality high. We do not post reviews by company employees or direct competitors. We validate each review for authenticity via cross-reference with LinkedIn, and personal follow-up with the reviewer when necessary.