Sauce Labs Competitors and Alternatives

Get our free report covering BrowserStack, Perforce, SmartBear, and other competitors of Sauce Labs. Updated: September 2021.
542,823 professionals have used our research since 2012.

Read reviews of Sauce Labs competitors and alternatives

Chris Trimper
Test Automation Architect at Independent Health
Real User
Top 20
Testers have been able to free up their time: instead of doing mundane, repetitive tasks, they shift them off to automation

Pros and Cons

  • "For traditional automation, approximately half of our tests end up automated. Therefore, we are saving half the testing time by pushing it off to automation. That gives it an intrinsic benefit of more time for manual testers and business testers to work on possibly more important and interesting things. For some of our applications, they don't just have to do happy path testing anymore, they can go more in-depth and breadth into the process."
  • "Sometimes, the results' file size can be intense. I wish it was a little more compact."

What is our primary use case?

We build helper utilities. For example, a particular test might have 30 minutes of setup, but at the end it needs a real human eye because it is brand new stuff and you don't know what to expect. If an automation could build that 30 minutes' worth of stuff over and over again, so you can treat it as your test prerequisite and not worry about it, that is a win, and we have an awful lot of stuff for that.

The really good stuff is that we have full-blown replacements for manual tasks, whether for desktop applications or hybrid web applications. There are a lot of apps out there, especially in the enterprise space, where the app is in a web browser but there is an installer on your computer and the web browser is the view. We have PureWeb, our websites, and others, and we do a lot of mobile testing with UFT One. We do almost all our API testing with it for our web services. We also do a good amount of data testing with it as well.

The use case is really just to add testing efficiencies in any way, shape, or form that we can, whether through a helper for prerequisites, since we do a lot of data builders with it, or through full test case replacements. In fact, that is a project I am working on today: building test data so an actual person doesn't have to sit there and build it, because that is boring and unproductive. We have scripts to do full-blown test case replacements. So, any one of our projects or applications can have anywhere between 20 percent and the mid-sixties in percentage of automation coverage, meaning automated replacement of manual tests.

It is a development IDE. When you're working with a development IDE, you need to prove it out through a bunch of different techniques to make sure that there is no recompiling you need to do. So, we are in the process of getting version 15.0.2, but we are using version 15 across the entire team.

It is all on-premises. UFT can encompass a couple of things. There is UFT One, which is like any automation software that you would use. Technically, the most prevalent setup people see in the marketplace is something like Sauce Labs working with Eclipse. Think of this as Eclipse (or your favorite IDE) and the automation software all bundled into one. It is only applicable on a desktop computer of some sort, whether that is a laptop, desktop, or virtual machine. We use it all on-premises.

Cloud is a little bit iffy for some of the things that we do, being in the healthcare space. We do use some cloud stuff, but for this particular one, I imagine we will use on-prem as long as we can. Now, it is mostly all virtual machines. We have almost no physical desktops left with it, because gone are the days of trying to figure out a problem. Because you have templates to base machines off of, it's like, "Listen, just rebuild my machine. I'll use it tomorrow." We are using it on Windows 10 virtual machines.

Our virtual machines are constantly running. It is not like we turn them down and stand them up. If I decide that a block of them are bad for whatever reason, we can destroy them and get new ones built, but they are all pretty standard. I am actually sitting on one right now, which is a dual-core, two-and-a-half gigahertz machine with 8 GB of memory. This represents the slightly above average laptop that you would buy at a store.

One of the reasons that we shifted to all virtual machines is that when you are doing normal office work, you have to open your chat windows, Outlook, browsers for different things, and maybe Word or Excel. All of that is just stuff that muddies the water for your development environment, regardless of what development you are doing. By using VMs, even for scripting, we have our IDE and the application under test open on that machine, and nothing else. So, that machine gets to do automation stuff and nothing else. It's not interrupted by Outlook things. If you have 15 Chrome browser tabs open where you are researching something, the resource hogging of some of those sites isn't impacting you. You just have the application that you are testing and the IDE open. We have had really good success with this.

The perfect mix for this is what we have: dual-core, 8 GB of memory. That is really good enough. We even have that for the machines with an AI engine on them. At this point, the AI engine is local. So, all the stuff that it does to look at the screen, interpret things, read it, tell you where menus are, etc., is running on that machine. I haven't really seen a blip on it. We tried to run it with 4 GB of memory once, and it was so-so. Let's face it - Windows 10 on 4 GB of memory isn't good anyway.

How has it helped my organization?

UFT One can definitely be a big component for continuous testing across the software lifecycle. We are personally still working on the continuous part of it. For the build to our test environments, we have it nearly all integrated. Unfortunately, on our build servers we don't, only because our build servers don't have touchable code. Think of it like you compiled the website but didn't deploy it to Apache, IIS, etc. So, that part we don't have, and that is a limitation on our end. However, we do have the plugins to integrate with ADO or Jenkins, depending on the team, and even if that didn't work, we could send calls off to it.
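To give a flavor of what "sending calls off to it" can look like, here is a minimal sketch that drives UFT One from plain VBScript through its Automation Object Model. The test path is hypothetical, and the exact run options are worth double-checking against the AOM reference:

    ' Sketch: launch UFT One from a CI step and gate on the result.
    ' "C:\Tests\SmokeTest" is a hypothetical test path.
    Set qtApp = CreateObject("QuickTest.Application")
    qtApp.Launch                              ' start UFT One
    qtApp.Visible = False                     ' no UI needed on a lab VM
    qtApp.Open "C:\Tests\SmokeTest", True     ' open the test read-only
    qtApp.Test.Run                            ' runs synchronously by default
    WScript.Echo qtApp.Test.LastRunResults.Status   ' "Passed" or "Failed" for the CI gate
    qtApp.Test.Close
    qtApp.Quit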

Mobile aside, we have a lab of about 35 or 40 virtual machines. I struggle with the number because, on any given day, a virtual machine just craps out on us because some Windows update made it bad, etc., but we have those readily available. They are all profiled in ALM Octane, saying, "These are the machines that have the web browser plugins. These are the machines that have Outlook configured. These are the machines that have desktop app A, B, or C." At any given point, a person or a non-person (like a CI process) could say, "Run these tests and give me the results," and it works pretty nicely.

We are using the AI piece with all our mobile devices. When the AI capabilities built into UFT One, version 15, first came out, I watched the presentation on it. I was there when they launched it at one of their conferences, and it seemed cool, like the whole Alexa thing: I didn't know what I would use it for, but it was neat. All of a sudden, I was told, "Hey, you're bringing the mobile app in-house for development and testing." I thought, "That doesn't sound like fun at all." However, I remembered that AI stuff, so I thought, "I am going to try it out and see if it makes my life better." It has been an absolute game changer.

It took testing mobile applications from being a headache to being fun. It's cool because you are actually working like a real user. For example, you are working with someone who has never really worked with a particular mobile app. You can click on the menu, then click on claims, and now you can see a list of claims. If you want to see just your medical claims or pharmacy claims, click on the filter. If you click on medical, then it should show you that. It is like talking to a human being. There is less code. When changes are made, unless it is a change to the user interface, where there are new features being added or taken away, you don't have to worry about it anymore. 

It is really awesome. For example, say you want to know your available balance at your bank: you go to your bank's app, click on checking, and look at the available balance. You don't have to know the names of objects anymore. The objects can change a million times, and all I have to say is, "What is the dollar amount next to the label 'Available balance'?" Using AI, OCR, and all the different computer vision things built into the engine, it just works. It just knows about objects. The best part for me is that those objects differ from iOS to Android, and I don't care anymore, because I can write one ubiquitous script that will run on both of them. If the user interface is somewhat similar on the web, I instantly have a test for the web as well.
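As a flavor of what that looks like in script, here is a minimal sketch using UFT One's AI object identification (the AIUtil object) in VBScript. The labels come from the hypothetical banking example above, and the control-type strings and method names are my best understanding of the AI API, so treat the details as assumptions:

    ' Sketch: one visual-identification script for both iOS and Android.
    ' Labels and control types are illustrative assumptions.
    AIUtil.SetContext Device("Device")          ' point the AI engine at the mobile device
    AIUtil("text_box", "Username").SetText "member01"
    AIUtil("button", "Sign in").Click
    AIUtil("button", "Checking").Click
    ' Verify the label the test cares about is on screen.
    AIUtil.FindTextBlock("Available balance").CheckExists True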

The multi-device test automation capabilities have allowed us to get to the coverage that we desired faster. We might have had to make a decision of: 

  • Are we going to take twice as long to automate?
  • Are we going to choose iOS or Android? 

Here, we didn't have to make that choice. We just knew that it would work. 

We did a lot of trial and proof of concept with it. We started to realize that this technology would allow us to instantly have scripts for all the OSs, assuming there was a driver for each. For example, if there were a driver available for the Mozilla OS and our dev team built the app for it, I could have phone OS coverage and scripts for it tomorrow. The scripts that I have today would work the day it comes out, because it is using the interface; it is using the screens and interacting with them. It doesn't care about the native objects that you have to worry about with traditional automation.

It has helped us because, as we are building scripts, we have them all covered. If you want proof, we can run them all. We always do run across a selection. We don't just blindly have faith in it, but we have had it where we build a login script and it works across everything. You build a script to say, "Check a person's deductible balance," and it works across everything. The only time there is any difference whatsoever is if the phone OS has a difference. For example, if somebody wanted to test that the dialer opens when you click on the phone number, that experience is different from iOS to Android. So, that would be a slight deviation. For nearly everything else - I would say 95 percent or more of our actions - one script covers all devices or platforms right away. Unfortunately, our app is not available right now for iPads or Android tablets. When they decide it is available, other than putting a couple of those into the farm as physical devices, our scripts are ready for it.

What is most valuable?

UFT One has the ability to interact with multiple technologies. We work with .NET desktop applications, web browsers, web services, and mobile. Those are the main things that we work with, in terms of technologies. It is nice to have a tool that can speak all of those languages, whereas in other spaces you would have to do a whole bunch of back-end work to make it so you could talk to desktop applications, mobile applications, websites, web services, etc.

We certainly leverage the IDE to build our tests. We make use of the integration with ALM Octane for recording our results. 

The reporting is pretty nice. You have reporting that can either be leveraged for an end-user, which is maybe a normal manual test, or a business user who wants to see some test results. Or, it can get deeper into stack traces, e.g., an automation person might say, "Gee! Why is that failing?" Then, they might get some analysis available to them for that.

We also use their mobile product, which gives us the ability to interact with UFT Mobile. This gives us the ability to interact with a fleet of real mobile devices on our campus. It is like having a remote desktop view into them, whether you are a manual tester who just wants to interface with it or an automation tester who wants to send one of your test scripts against a mobile device. This is a feature we are using more often now as our mobile app is gaining some more ground. In this day and age, a whole bunch of companies, including our own, are recognizing that more people are favoring their devices over their actual computers for getting data, consuming stuff, reading things, etc.

This solution covers multiple enterprise apps, technologies, and environments, and that was a big part of our decision to go with it. If tomorrow somebody says, "We are going to have a new Java app," I can't blindly say that we have the absolute best automation software available in the marketplace for touching that Java app, because that would be borderline foolish to say. However, I can say that I can touch the Java app. That is a piece of cake. They are switching us from web services to REST services, and I have that covered. When mobile came underfoot, I didn't have personal experience with mobile when we started doing a mobile app, but I knew that it could cover it.

I rest easy knowing that anytime there is a new browser available, it is either covered right away or will be covered very soon. When Edge first came out, I don't think it was covered on day one, but it was covered pretty soon after. Just knowing that it will cover pretty much anything we run into is very reassuring.

UFT One gives us integration capabilities with both API and GUI components. I can test either on its own or both in the same test. I can test the .NET desktop application using the UI. I can test any kind of API that I run into; the two most common are a web service using the SOAP protocol and a REST or RESTful service. The cool thing I enjoy is that we not only leverage it for testing the functionality of our services, but we also make sure our tests are as efficient as possible. I am a big proponent of, "Just because you can automate something doesn't mean that you should." For example, in your scenario, you log into your bank website to do a transaction. Normally, in the office, a teller might log into a weighty desktop application to see that your transaction went through. There is absolutely nothing wrong with that. Well, what happens if you had an API to see that same thing? Why should I waste the time of the desktop application when I could just make an API call and have it in the snap of a finger? That has been a major benefit for us: ensuring that we are able to add efficiencies to our tests and do the right thing, as well as verifying that our APIs are working as we would expect.
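To illustrate trading a slow GUI check for an API call, here is a minimal VBScript sketch that could sit inside a UFT test, using the standard MSXML2.XMLHTTP COM object. The endpoint and JSON shape are made up for the example:

    ' Sketch: verify the transaction posted via a (hypothetical) REST endpoint
    ' instead of logging into the heavyweight desktop application.
    Set http = CreateObject("MSXML2.XMLHTTP")
    http.Open "GET", "https://api.example.com/accounts/12345/transactions/latest", False
    http.setRequestHeader "Accept", "application/json"
    http.Send
    If http.Status = 200 And InStr(http.responseText, """posted""") > 0 Then
        Reporter.ReportEvent micPass, "Transaction check", "Posted, per the API"
    Else
        Reporter.ReportEvent micFail, "Transaction check", "HTTP " & http.Status
    End If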

We have had it where testers have been able to free up their time, where they might be doing mundane, repetitive tasks, then shift them off to automation. We have been going through an initiative for the past year or so, going through each of our applications and doing what we have called self-service. That is the notion where a tester has the ability to push a button and have their tests run, then get results. 

Another thing with our self-service is that testers need to provide some input for configuration. They need to give the name of a plan that they want to test, or maybe they need to actually send it some test data to use. We have been working on building all of them as self-service. Instead of testers doing a lot of those things, where maybe in the past they could have only gotten through 10 test cases in a cycle, now they are able to get through 100 because they can just ship them off to automation.

I am not necessarily saying that more is better. It sounds like it's better, but really it's helping us gain more coverage. I am sure you have heard a lot of testers say, "Well, I test based on the time that I have." I get that, but wouldn't it be great if you could just say, "I test based on what I know I should be testing"? Automation has absolutely helped us get to that point.

In terms of key features that are great with UFT One, certainly look at data driving. You are more than likely going to instantly fall in love with how easy it is to data-drive. That is a big one. Everything else will be circumstantial, based on what it is that you want to do.
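For a sense of how little data driving takes, here is a minimal sketch; the Browser/Page/WebEdit names are hypothetical repository objects, and UFT One runs one test iteration per row of the Global data sheet:

    ' Sketch: the Global sheet drives one iteration per row.
    ' Repository object names are hypothetical.
    Browser("Portal").Page("Login").WebEdit("username").Set DataTable("Username", dtGlobalSheet)
    Browser("Portal").Page("Login").WebEdit("password").Set DataTable("Password", dtGlobalSheet)
    Browser("Portal").Page("Login").WebButton("Sign in").Click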

A lot of people can use it. They did a nice job with trying to make a testing tool that wasn't just for diehard developers. It has record and playback. If you want to go in there and hit record to record a website, then do some variable substitution, have at it.

What needs improvement?

The one thing that has been throwing us for a loop is that they have been changing labels, e.g., how marketing people like to flip-flop around five or six terms. There has been a lot of maintenance needed for that. The cool thing is that if the "Available Balance" label changed to some other term, I would just have to go into the script and plunk the new term in there.

Because we are using real devices and apps, AI versus traditional automation can't really make execution faster; a screen loading on a phone is a screen loading on a phone. Unfortunately, I don't know anything that can make that faster. Emulators might, but I am not really sold on emulators. I want to use real devices. For execution, the only thing that we can do is run in parallel, e.g., run one test on multiple phones at the same time, as opposed to phone A, then phone B, and then phone C.

For execution, you are stuck. That is one thing with device testing. With browsers, they had headless browsers, and that made things faster. However, I don't really think you will ever have that with mobile. I could theoretically represent the data bits with API testing, but I still want to be testing the app. Unfortunately, at this point, I don't see how it could ever be faster, shy of using parallel execution.

I used to say, "I would like to see them do something more with innovation," but then they came out with this AI thing. It kind of blew my mind that this technology is available in a tool that most people have written off, one that is not getting the market share it once had because people just won't give it a chance.

I haven't had a chance to tinker with it yet, but I would be intrigued to see its integration with Git.

Sometimes, the results' file size can be intense. I wish it was a little more compact.

There are podcasts out there for everything, and they usually tackle a new topic on a weekly basis. It would just be great to have them do something like that, where you send in a letter, someone picks it up, and they answer it for the community, talking to the people.

For how long have I used the solution?

I have been using it for about nine years.

What do I think about the stability of the solution?

I have had tests run for one minute because that is all they need. I have also had tests run for five hours straight with no issue - not because the tool is slow, but because sometimes the application under test is slow.

I haven't really had any stability issues. There have been times where, if I leave the IDE open for five or six days in a row, it gets cranky. However, I don't know if that is Windows or my IDE.

For the most part, we are in kind of a weird place for maintenance. We just did our first big launch of this newly revised app this past year-end. Unfortunately, they kept making a lot of changes that would have broken any system anywhere out there. Lately, they have made a bunch of changes to the login process, trying to work things out, and our scripts never failed. So, there was zero maintenance required for that.

What do I think about the scalability of the solution?

There are fewer than 10 people doing the scripting for various things. Then, there are probably another 10 to 15 who do execution. They don't actually open UFT One, but they leverage it from the perspective of: I need to run this test; go and give me the results. We even have some business units who use it.

In health care, sometimes it is a lot of work to verify a new plan. For example, there is a new company, and they want a new plan with some specific features. The way you really test out a plan is to file a bunch of claims against that plan in the test region and make sure that they pay out properly, but you might need to test 150 claims. Filling out a claim is dreadful. I wouldn't want to fill out one, let alone 150. So, they fill out a simple spreadsheet, giving a kind of virtual handoff to UFT One, and UFT One does all the work, then delivers the results. We do this for some business units directly and some indirectly, where there is a QA person involved in the middle. There are more than 15 people who do that, and that is really where the floating licenses help us out by opening it up to more people.
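The plumbing behind that handoff can be as simple as importing the spreadsheet into UFT One's data table and looping over the rows. A sketch, with hypothetical file, sheet, column, and function names:

    ' Sketch: import the business unit's spreadsheet and file each claim.
    ' Path, sheet, columns, and FileClaim are hypothetical.
    DataTable.ImportSheet "C:\Handoff\NewPlanClaims.xlsx", "Claims", "Global"
    rows = DataTable.GetSheet("Global").GetRowCount
    For i = 1 To rows
        DataTable.GetSheet("Global").SetCurrentRow i
        FileClaim DataTable("PlanName", dtGlobalSheet), DataTable("ClaimAmount", dtGlobalSheet)
    Next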

Who knows where it will go this year? I don't really know what projects there are, but one of the biggest things I like about any automation product is to add efficiencies, however we can.

How are customer service and technical support?

It is nice to know that we have support available for this solution. So, I don't have to go scouring through forum after forum, praying that somebody has run into the same issue that I am having.

Personally, I have had a chance to chat with some of their R&D folks. They are just absolutely wonderful to work with. 

The technical support is pretty decent. The one thing I wish I could do sometimes is tell them, "Listen, please don't give me Tier 1. Jump me to Tier 3 because I know this is a big deal since it rarely happens." For example, when you call up your cable company, "Did you try turning it on and off?" "Yes, I turned it on and off. That is why I'm calling you, because it didn't work." You have to get past some of those things. 

We do actually work with a partner, Melillo Consulting, who handles all of our Tier 1 stuff, which is common for a lot of people who have this product. They will buy it through a partner, and a lot of times the partner will give Tier 1 support. The partner is really great at escalating where need be. God forbid, if tomorrow it all of a sudden stopped working with Chrome and I needed it fixed now, they would be able to escalate the issue more quickly. Sometimes, I wish they could just take a description, work with it, and go. I get why they don't, because not everything is cookie-cutter and they want to get to the bottom of it. However, being in health care, it's hard to share sometimes. There are certain things that I can't share with them.

I know the support is starting to do more now, but I would like to see them publish more thought-provoking articles, not like, "Hey, we have these new features. Hooray." I would like more, "Hey, today we're going to do an article about how to do a web test that needs to do an API back-end check." While I know how to do that, it would be cool to see them doing more articles like that, really getting out and selling and talking more about the features. You look at Sauce Labs, and they have this wonderful blog where they are constantly promoting new content all the time. I don't think they should be afraid to do that. They should treat themselves like an open source company that is just constantly promoting the use of their products.

Which solution did I use previously and why did I switch?

They did try stuff with Robot Framework before I started. I don't know the history of that, but this was pretty much a relaunch of test automation efforts.

The AI capabilities provide multi-device test abilities without needing platform-development expertise, which is the best part about it. This sounds lazy, but because of what they have done, I don't have to know a thing about the platform. Here's what's cool: it can be a hybrid app or a native app. I don't care. As long as it is built, I can push it to one of the devices and test it. When we first got the app, before we started using the AI stuff, I was using the Appium Object Spy app, and looking at things was neither pleasant nor pretty. I had a laundry list of things that developers were going to have to add for me to even be able to tell the username field from the password field, shy of saying field one or field two. That is a terrible way of doing things.

UFT One saved development time as well as an immense amount of learning time. For example, if I handed somebody web browser testing tomorrow with traditional automation, and they had never seen the internals of a web page, they would stumble left and right, because understanding what is under the covers of what you are testing is normally incredibly important. With this solution, it's actually not. You have to stop thinking like a back-end developer and start thinking like an end-user. This is a wonderful position to put yourself in, because this is really where the focus should be anyway. For me, it is starting to blend traditional functional testing with UX testing, almost like they are blending together because of the techniques that I am able to use.

I've used Selenium on and off throughout my career. I have looked at tools from SmartBear. 

We do integrate with Applitools, which is a supportive thing. We don't consider them a competitor in this space. They are actually complementary.

We have never done anything with Tricentis. 

How was the initial setup?

Upgrades are usually uninstalling and reinstalling, because you never really know how the upgrades are going to take. Lately, I just uninstall and reinstall because I have usually found that if I have an issue, the first thing they say is, "Well, have you tried uninstalling it and installing it fresh?" Kind of like, did you turn it on and off again? So, I usually just do that.

I don't even know if it's feasible, but if there were a magic box that said, "Here are all my machines; push the upgrade to all of them," it would be awesome.

The installation is pretty much a piece of cake. If you don't know what technologies you are testing, I would argue installing it might not be the first thing you should do anyway.

What about the implementation team?

Everyone who does automation has admin rights on their machine, because we don't necessarily know the frequency with which patches may come. All of a sudden, Chrome might change its architecture, and then we need a patch. I could work with the desktop team and get a patch deployed, but that might require a lot of paperwork and time, so we just push it ourselves.

If you don't have admin rights, it's still a pretty easy installation. You can do a silent install and run a really long command that has the answer to every prompt in it, then you can patch it up. You can do that, and it works quite well. I have never actually worked with our desktop team to get it packaged, because they have a six-month backlog right now for getting apps packaged, but I'm sure it could be done: piece of cake.

Once it is installed, it works really well. It is 100 percent up to you whether you integrate with ALM Octane. If you don't integrate with ALM Octane, that is one extra step you don't have to do. So, you pretty much install it and walk away for about 15 minutes, because it has a boatload of DLLs to register, then you come back and it is working.

It is pretty easy; it used to be a lot more complicated, but it seems like it has gotten better. I haven't had a bad installation in a very long time. It works with pretty much everything. The new Chromium Edge is out, and as long as you have 15.0.1 or greater, it just works. We are on the latest Chrome and Firefox browsers, and it works well. We technically have a network issue that is preventing us from talking to our Macs right now, but once that network issue is fixed, we can remotely control Safari on a Mac with UFT One.

What was our ROI?

We have it deployed on many machines. Because we don't have a template built, it did require actually going to each of our 40 machines and installing it. However, once it was installed, as long as we don't have to upgrade it, they just run. Honest to goodness, in terms of scale, the apps that we test are more unstable than the tool is.

We have a suite of 40-plus virtual machines that are either developing in UFT One or running tests on it on any given day. In terms of test execution, I have had tests that start with a .NET application, go to the web, and then go to an API test to do some web service testing of the data that we started with. No issues with that either.

The solution’s AI capabilities cut down test creation time for mobile by at least 60 percent. I am getting to the point where I believe unless the test step is several sentences long, then I can write automation for a test step in 10 minutes or less per step. It is crazy awesome.

The advantage of AI has not removed the need for abstraction and having centralized functions for things, e.g., interacting with a page; a lot of folks would know this by the slang "page object model." We still embrace a model for each screen, web page, or functional area. We have that necessary abstraction, so when a change is made, it is still in a central place and way easier to make. Where a change in the past might have taken us 15 minutes to an hour, those changes should now take three or four minutes max.

For traditional automation, approximately half of our tests end up automated. Therefore, we are saving half the testing time by pushing it off to automation. That gives it an intrinsic benefit of more time for manual testers and business testers to work on possibly more important and interesting things. For some of our applications, they don't just have to do happy path testing anymore, they can go more in-depth and breadth into the process. 

On the AI side, we have suggested that we will have at least 60 percent maintenance cost savings, which is huge. That is calculated from:

  1. Not having to maintain both iOS and Android.
  2. Our estimate that there is not that much that we will have to maintain because it's "just gonna work."

What's my experience with pricing, setup cost, and licensing?

It could be cheaper. I feel like it is a little expensive, but I never honestly understood the enterprise software space. For example, look at the price of Camtasia, and you're like, "That just seems expensive. Why is it so expensive?" As an end-user, you feel like it could be cheaper. I would love to see them do some things to make it a bit more affordable. We have shifted around our licensing techniques because of the price. We started off with all concurrent-user licenses, but those were nearly twice the price of a seat license. So, we just kept a couple of concurrent licenses, because we are only paying maintenance on them now, and shifted to seat licenses to try to save money. We also shifted to a couple of run-time licenses. We now have equal thirds: run-time, seat, and full concurrent licenses. This is because of the costs.

I wish you could look at the technologies and price out each one individually, but I have a feeling it would end up being more costly that way. It feels expensive, as it can be upwards of $3,200 a seat or license, depending on how you license it, but you are getting a lot there. I would love to see if there's anything they can do to reduce the price. We bundle to save, and there is always the ability for them to add discounts. It is like going to the store, where they say, "Hey, this is on sale." However, if you just didn't raise the price in the beginning, you wouldn't need to have it on sale.

The way the pricing model works is that you pay a whole boatload in year one. Then, every year after, it is around half that or less, because instead of paying for the new product, you are just paying for the support and maintenance of it. That is probably one of the biggest things that I hear from most people, even at conferences: "Yeah, I would love to use UFT One, but we don't have a budget for it."

I expected the AI to require an upfront extra cost in addition to the subscription, and it didn't. There was no cloud service required for it, so I didn't have to go through security hoops because it all runs local.

It supports more than 10 technologies. If you are only using two of them, why pay for all 10? I guess we have just gotten so used to it; e.g., with LoadRunner, you pay for the technologies that you are using. I would hate to see what the LoadRunner license would look like if it had the same structure as UFT One.

They are an enterprise product. I get that they are expensive. Somehow, I wish they could be cheaper. I don't know how they could do it. 

If I could pick on them for one thing, their license portal is just abysmal. It is so hard to use. The licenses come in three fashions:

  1. You have a license server with concurrent licenses, where I basically lease a license for the time that my program is open. That one is not too bad and works quite well. You pretty much do a one-time setup of the thing, then you pretty much forget it exists and just go. We have some of these licenses.
  2. We also have seat licenses. This is the kind where once it's installed, it's amazing. However, unless you have a partner that can get it for you, using the portal stinks for getting the actual license. It is a terrible experience. Sometimes, it doesn't even work. When it works, it's great, but it could be so much more user-friendly for getting the actual license.
  3. You can just call your partner or Micro Focus, and they literally mail you a file. That would be easy, but I'm slightly impatient. I want the license and I want it now, so I will go into the portal and get it.

I usually can get into the portal, as long as it is working, but it's not always the most obvious thing to work with. I can see that they're making it better. It's just not the best yet.

Which other solutions did I evaluate?

One of the biggest reasons that UFT One was chosen over some others at the time was that one of the big projects was bringing on a .NET desktop application to replace an old green screen app. So, we knew that we wanted the web, but we had no idea that we were ever going to have a mobile app. I don't think mobile was supported nine years ago, but we knew we wanted it for the desktop app and web. Obviously, if we were only doing web, we could have chosen other, less expensive things, but we really needed it to do that. We evaluated some other products at the time to determine what would interact well with it, and UFT One, which was QTP at the time, won out. The inclusion of its integration with ALM Octane is a big deal for us because we can control a lot of things from there. It just pairs very well.

The results could be a tiny bit better for UFT One. I have gotten used to them, and they're good. However, I am starting to see other tools go further with test results, and there are some tools that have no test results at all, so I probably shouldn't complain. I know that they have an answer for it, and I'm holding off because I feel like it's going to change. The UFT One product by name still uses VBScript, which is tried and true, but a pretty old language. Its API testing counterpart does use C#, which is quite wonderful; when I am ready to make the jump to UFT Developer, I can also use C#. I shouldn't complain. It's just that the AI feature isn't in UFT Developer yet, and I have fallen in love with it. So, I'm not likely to change.

What other advice do I have?

If you are looking to implement any tool, not just UFT One, you should always go into it with some form of use case or expectation of what you want to do. Opening up a tool and just tinkering is never a good idea. If I sit you down in front of Photoshop and just say, "Have fun," I don't know what in the world is going to happen. But if you go into it and say, "Well, I need to be able to touch up these photos. I need to be able to do this," then those are use cases.

Everybody starts with a super-duper happy path. "I want to be able to script logging into my application." That's great. 

"Now, I want to be able to take that and run that cross browser." This is good. 

"Now, I want to take that and I want to run them to multiple machines." That all depends on if you're thinking about execution or script building, which is regardless of what tool you are implementing.

For UFT One, you might need to polish up a little bit on your VBScript. However, with any automation tool, there is the totality of the language, and you probably only need to know 15 percent of it to do that automation. You don't need all those other structures. 

As you are beginning to go down your path:

  1. Have fun. 
  2. Don't forget about the need for abstraction. 

Abstraction is your friend. It can make your future maintenance costs incredibly low. Without abstraction, regardless of the tool you use, you are setting yourself up for a maintenance nightmare. Planning out the actions that you want to take is absolutely key. We started off with the AI bits. We did tinker a bit, but with any tinkering you realize, "Okay, I'm just kind of playing around, not really doing anything productive. I might have accidentally made something, but I didn't purposely do anything." So, we started going through our core reusable pieces and scripting them out.
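Concretely, that abstraction can be as simple as one function library per screen, so scripts never touch raw objects and a UI change gets fixed in one place. A minimal sketch, with hypothetical object and function names:

    ' LoginPage.qfl - sketch of a per-screen function library (names hypothetical).
    ' Scripts call LoginPage_SignIn; only this file knows how the screen is built.
    Public Function LoginPage_SignIn(user, pwd)
        Browser("Portal").Page("Login").WebEdit("username").Set user
        Browser("Portal").Page("Login").WebEdit("password").Set pwd
        Browser("Portal").Page("Login").WebButton("Sign in").Click
    End Function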

Do not forget that UFT One is not just for GUI. API testing comes with the product. You are already paying for it, and it is an absolute dream to work with.

What is cool is even just from 15 to 15.0.1 to 15.0.2, I feel like they're definitely investing a lot. They are continually adding to it and making it better to use.

We can build tests faster, and then we can repeat the testing that we are doing faster. I don't think it will ever decrease the number of defects, but we can test with automation sooner and earlier.

Theoretically, I don't need the application to do the test building. I just need it to proof the test. So, if a UX markup person can give me some screens, like in Photoshop, of what it will be, then we can technically build our automation against that, using just a screen. Or, if a developer can give me a sneak peek and send me some screenshots, we should technically be able to automate and have things built by the time a release is done. Right now, we are just doing so much new feature development that we haven't been able to do that yet. I don't think it will ever reduce the number of defects, but hopefully it will allow us to find them more reliably and earlier.

The one thing I think will help us out quite a bit is data permutations. For example, if you are registering for site A, B, C, or D, there are a lot of permutations of data that you can push through there. For manual testing, you might pick the top 10 out of 50 because you only have so much time. However, we don't have to do that anymore. We can just send them all through with automation. I think it will help us have those scripts earlier and have them be more stable. There is technically nothing preventing the dev team from running tests, so a possibility is that we can convince them to run some more tests before they actually deliver the app to us.

We don't use SAP at all at this time.

I would rate this solution as an eight point five to nine (out of 10). You learn to love it. People are really great at picking on things the moment they start using it. They look for reasons to hate it. That is not the way you should think about things for any tool.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
ITCS user
User
Real User
Top 20
Reliable, integrates well with other platforms, and is less costly than our previous solution

Pros and Cons

  • "Without a doubt, LambdaTest is one of the big reasons behind our faster deployment and better team collaboration."
  • "It would be much easier for us to read the test if they provided dashboard analytics."

What is our primary use case?

Partnerize Partner Management Platform (PMP) is an end-to-end, SaaS-based solution for forming, managing, analyzing, and predicting the future results of partner marketing programs using artificial intelligence. It is a real-time technology, and our platform ships regular updates that need to be tested in all major browsers. This became increasingly important as we built a wide user base across even legacy browsers.

We want our web app to work on every single device, at peak performance, as any bug or loss of data will result in lost time, contrary to the "real-time" promise we give to our clients. So, we have to thoroughly test our whole environment before pushing any changes, across all possible use cases.

How has it helped my organization?

Before LambdaTest, we were using our own infrastructure of VMs. This was not easy to manage and consumed significant resources on maintenance alone. We were a small team when I started working, the DevOps team was already under a constant workload, and a VM infrastructure is very costly, especially for us, as we need to test our product on multiple devices.

In dire need of scale, we realized our existing practices were not going to last for long. After a lot of brainstorming and product trials, we switched partially at first, and then fully, to LambdaTest.

LambdaTest was handy and easy enough to save us a lot of time, effort, and money - especially parallel testing, which was the breather we needed to scale. Without a doubt, LambdaTest is one of the big reasons behind our faster deployment and better team collaboration.

What is most valuable?

The most valuable feature is reliability. My team automates multiple tests in parallel across hundreds of browsers. The automation grid from LambdaTest is robust enough to execute them without any surprises, every time.

Integrations are very helpful, and I have lost count of the integrations that LambdaTest has. The platform is cleverly coupled with all of the other platforms we need on a day-to-day basis for our development needs. Pushing a bug to Slack is a click away, for example, and LambdaTest's Slack app gives results of screenshot testing from a Slack command itself. We in fact integrated it with Trello to mark bugs in our tasks and stories, and we integrated our Jenkins jobs with LambdaTest using its Jenkins plugin.

What needs improvement?

Though the platform does well in almost every aspect for us, there are a couple of things they can do to give it an absolute edge.

  1. Dashboard Analytics: It would be much easier for us to read the test if they provided dashboard analytics. Though I haven't seen it in this category, I feel this feature can take the platform to another level.
  2. Real Devices: LambdaTest has almost all of the emulators and they are good for the most part, but I can see as we grow that we will need real devices. I hope they catch up with that as well.

For how long have I used the solution?

I have been using LambdaTest for the last year and a half.

Which solution did I use previously and why did I switch?

We used Virtual Machines prior to LambdaTest. That infrastructure was extremely costly to scale for our ever-increasing variety of browsers. It was also consuming a lot of our DevOps hours.

Which other solutions did I evaluate?

We evaluated BrowserStack, CrossBrowserTesting, and Sauce Labs.

Disclosure: I am a real user, and this review is based on my own experience and opinions.