We compared OpenText Silk Test and OpenText UFT Developer based on real PeerSpot user reviews.
Find out in this report how the two Functional Testing Tools solutions compare in terms of features, pricing, service and support, ease of deployment, and ROI.
"A good automation tool that supports SAP functional testing."
"The statistics that are available are very good."
"The feature I like most is the ease of reporting."
"The ability to develop scripts in Visual Studio, Visual Studio integration, is the most valuable feature."
"The scalability of the solution is quite good. You can easily expand the product if you need to."
"The major thing it has helped with is to reduce the workload on testing activities."
"Scripting is the most valuable. We are able to record and then go in and modify the script that it creates. It has a lot of generative scripts."
"The recording feature is quite good as it helps us to find out how things are working."
"One of the important features, which speeds up automation test development with LeanFT, is its object repository functions. Object identification is the most time-consuming aspect of building automation tests. LeanFT gives you that out of the box: it helps you identify the objects, and once you have the objects in place, it's just about building the test scripts. So it reduces your development time significantly."
"The most valuable feature of the solution is the number of plugins for object recognition. The predefined libraries allow us to automate tasks."
"The solution is very scalable."
"It's a complete pursuit and it's a logical pursuit working with HPE."
"The most valuable feature is stability."
"The cost is the most important factor in this tool."
"The most valuable feature is the Object Model, where you can directly pull up the object as a global or a local."
"Everything is very manual. It's up to us to find out exactly what the issues are."
"The solution has a lack of compatibility with newer technologies."
"We moved to Ranorex because the solution did not easily scale, and we could not find good and short term third-party help. We needed to have a bigger pool of third-party contractors that we could draw on for specific implementations. Silk didn't have that, and we found what we needed for Ranorex here in the Houston area. It would be good if there is more community support. I don't know if Silk runs a user conference once a year and how they set up partners. We need to be able to talk to somebody more than just on the phone. It really comes right down to that. The generated automated script was highly dependent upon screen position and other keys that were not as robust as we wanted. We found the automated script generated by Ranorex and the other key information about a specific data point to be more robust. It handled the transition better when we moved from computer to computer and from one size of the application to the other size. When we restarted Silk, we typically had to recalibrate screen elements within the script. Ranorex also has some of these same issues, but when we restart, it typically is faster, which is important."
"Could be more user-friendly on the installation and configuration side."
"The support for automation with iOS applications can be better."
"They should extend some of the functions that are a bit clunky and improve the integration."
"The pricing is an issue, the program is very expensive. That is something that can improve."
"The parallel execution of the tests needs improvement. When we are running tests in LeanFT, there are some limitations in terms of running the same tests simultaneously across different browsers. If I'm running a test, let's say to log in, I should be able to execute it through IE, through Microsoft Edge, through Chrome, through Mozilla, etc. This capability doesn't exist in LeanFT. Parallel execution of the test cases across different browsers needs to be added."
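The cross-browser parallelism the reviewer above is asking for can be sketched in general terms. This is an illustrative pattern only, not a LeanFT API: the browser names and the `run_login_test` function are stand-ins for a real browser-driven test, and the point is simply that one scenario fans out to several browsers at once instead of running them sequentially.

```python
# Illustrative sketch only -- LeanFT does not expose this capability,
# which is the reviewer's complaint. run_login_test is a placeholder
# for a real browser-driven login test.
from concurrent.futures import ThreadPoolExecutor

BROWSERS = ["IE", "Edge", "Chrome", "Firefox"]

def run_login_test(browser):
    # Placeholder: a real implementation would launch `browser`,
    # perform the login scenario, and return a pass/fail verdict.
    return (browser, "passed")

def run_across_browsers(browsers):
    # One worker per browser, so the same scenario executes
    # simultaneously rather than one browser at a time.
    with ThreadPoolExecutor(max_workers=len(browsers)) as pool:
        return dict(pool.map(run_login_test, browsers))

if __name__ == "__main__":
    print(run_across_browsers(BROWSERS))
```

With a real test body, the thread pool (or a process pool, for heavier isolation) gives each browser its own worker and collects the verdicts in a single dictionary.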
"In the next release, I would like to see the connectivity improved to be less complex and more stable."
"The pricing could be improved."
"With Smart Bear products generally, you can have only one instance of the tool running on a machine."
"Integration with other tools can become a costly exercise."
"I have to keep the remote machine open while the tests are running, otherwise, it leads to instability."
"The support from Micro Focus needs a lot of improvement."
"In the next release, I would like to see integration with different cloud-based tools such as Azure."
OpenText Silk Test is ranked 26th in Functional Testing Tools, while OpenText UFT Developer is ranked 16th in Functional Testing Tools with 34 reviews. OpenText Silk Test is rated 7.6, while OpenText UFT Developer is rated 7.4. The top reviewer of OpenText Silk Test writes "Stable, with good statistics and detailed reporting available". On the other hand, the top reviewer of OpenText UFT Developer writes "Integrates well, has LeanFT library, and good object detection". OpenText Silk Test is most compared with OpenText UFT One, Selenium HQ, Apache JMeter, froglogic Squish and Katalon Studio, whereas OpenText UFT Developer is most compared with OpenText UFT One, Tricentis Tosca, Original Software TestDrive, Selenium HQ and froglogic Squish. See our OpenText Silk Test vs. OpenText UFT Developer report.
See our list of best Functional Testing Tools vendors, best Test Automation Tools vendors, and best Regression Testing Tools vendors.
We monitor all Functional Testing Tools reviews to prevent fraudulent reviews and keep review quality high. We do not post reviews by company employees or direct competitors. We validate each review for authenticity via cross-reference with LinkedIn, and personal follow-up with the reviewer when necessary.