We performed a comparison between OpenText Silk Test and Tricentis Tosca based on real PeerSpot user reviews.
Find out in this report how the two Functional Testing Tools solutions compare in terms of features, pricing, service and support, ease of deployment, and ROI.
"The scalability of the solution is quite good. You can easily expand the product if you need to."
"Scripting is the most valuable. We are able to record and then go in and modify the script that it creates. It has a lot of generative scripts."
"A good automation tool that supports SAP functional testing."
"The major thing it has helped with is to reduce the workload on testing activities."
"The feature I like most is the ease of reporting."
"The ability to develop scripts in Visual Studio, Visual Studio integration, is the most valuable feature."
"The statistics that are available are very good."
"The most valuable feature of Tricentis Tosca is the Tosca Commander. Functionality is another thing I find most valuable in the solution."
"The most valuable feature is being able to create a test case by recording some scenarios and then reusing that test case in other scenarios."
"The mainframe testing and UI automation are the most valuable aspects of the solution."
"The solution is script-less, so you don't need IT knowledge to use the solution in an operational way. This is the most valuable feature. It's also only one of two or three tools that can do good automation on SAP, and in my opinion, it's the best of those."
"Image recognition: It has allowed us to automate a GUI section of our product which involves drawing different topologies."
"The item that is different from all the other tools is that it's module based."
"We are satisfied with the support of Tricentis."
"What I find valuable is that Tricentis is always refining the test methodology. They listen to feedback from the analysts about what the testing tool should do, and then Tricentis always implements it. So all the necessary testing functions are already implemented in their tools."
"They should extend some of the functions that are a bit clunky and improve the integration."
"The support for automation with iOS applications can be better."
"The solution has a lack of compatibility with newer technologies."
"We moved to Ranorex because the solution did not easily scale, and we could not find good, short-term third-party help. We needed to have a bigger pool of third-party contractors that we could draw on for specific implementations. Silk didn't have that, and we found what we needed for Ranorex here in the Houston area. It would be good if there were more community support. I don't know if Silk runs a user conference once a year or how they set up partners. We need to be able to talk to somebody more than just on the phone. It really comes right down to that. The generated automated script was highly dependent upon screen position and other keys that were not as robust as we wanted. We found the automated script generated by Ranorex and the other key information about a specific data point to be more robust. It handled the transition better when we moved from computer to computer and from one size of the application to another. When we restarted Silk, we typically had to recalibrate screen elements within the script. Ranorex also has some of these same issues, but when we restart, it typically is faster, which is important."
"Could be more user-friendly on the installation and configuration side."
"Everything is very manual. It's up to us to find out exactly what the issues are."
"The pricing is an issue, the program is very expensive. That is something that can improve."
"The UI does not have the option of automating the scroll bars."
"The main area where there is room for improvement is how they do upgrades. Going through this current upgrade, we were delayed a month because we are using a third-party tool. It's called Tosca Connect by Tasktop. When this latest upgrade broke that relationship between the two, it took Tricentis a month to come back with a workable solution... Their whole upgrade process needs to be better and cleaner, from an end-user standpoint."
"Very difficult to get information about licensing costs."
"Not being able to mask test data in relation to testing data management, in my opinion, is also a limitation."
"It needs better integration with JIRA."
"Setup wasn't that straightforward; it was more complex. It all depends on the environment, because there were a lot of errors on our applications. Therefore, it wasn't an easy setup for us."
"The product is not very stable when used with cloud storage. It is very hard to load the screen, making it difficult to use the tool in cloud storage."
"They can make it more stable. I have used this tool for SAP applications. They have an alliance with SAP, and it mostly worked fine, but there were a few glitches. However, we got the required support from the Tricentis team. They are coming up with their new versions and upgrades with respect to how the Tricentis systems as cloud applications are updated, and it would be good if they have a robust accelerator pack."
OpenText Silk Test is ranked 26th in Functional Testing Tools, while Tricentis Tosca is ranked 1st in Functional Testing Tools with 98 reviews. OpenText Silk Test is rated 7.6, while Tricentis Tosca is rated 8.2. The top reviewer of OpenText Silk Test writes "Stable, with good statistics and detailed reporting available". On the other hand, the top reviewer of Tricentis Tosca writes "Does not require coding experience to use and comes with productivity and time-saving features". OpenText Silk Test is most compared with OpenText UFT One, Selenium HQ, OpenText UFT Developer, Apache JMeter and froglogic Squish, whereas Tricentis Tosca is most compared with Katalon Studio, OpenText UFT One, Worksoft Certify, Postman and Testim. See our OpenText Silk Test vs. Tricentis Tosca report.
See our list of best Functional Testing Tools vendors, best Regression Testing Tools vendors, and best Test Automation Tools vendors.
We monitor all Functional Testing Tools reviews to prevent fraudulent reviews and keep review quality high. We do not post reviews by company employees or direct competitors. We validate each review for authenticity via cross-reference with LinkedIn, and personal follow-up with the reviewer when necessary.