CA Runscope Pros and Cons
CA Runscope Pros
Because we have the APIs scheduled, we use the notifications. We know the exact moment something breaks that might impact a customer's system or result in a feature not working. We know "in advance" and can fix it before the customer notices. It helps us be more proactive.
When the problem is on our side, such as the server being down or running out of memory, the first thing to break is the API, because the calls can't be processed. We know at that exact moment because we receive notifications, and we can take action. It doesn't only notify us of the type of problem; it also links it to our servers.
The schedule is definitely the most valuable feature. Being able to check the JSON attribute values is also helpful; we always check the attributes. Finally, the notifications are helpful as well.
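Reviewers above mention checking JSON attribute values on scheduled runs. As a rough illustration of the kind of check such an assertion performs (a generic Python sketch, not Runscope's own assertion engine; the payload and attribute names are made up):

```python
import json

def check_attributes(body, expected):
    """Parse a JSON response body and report which expected
    attribute/value pairs match, mimicking a per-attribute check."""
    data = json.loads(body)
    results = {}
    for attr, want in expected.items():
        results[attr] = (data.get(attr) == want)
    return results

# Hypothetical response body from an API under test.
body = '{"status": "ok", "version": 3, "region": "us-east-1"}'
print(check_attributes(body, {"status": "ok", "version": 3}))
# → {'status': True, 'version': True}
```

A scheduled runner would evaluate checks like these after every request and raise a notification when any value comes back `False`.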
This solution has improved our responsiveness to issues in both production and non-production environments. Through integration with Slack, our development and DevOps teams are quickly aware of issues related to new changes deployed to non-production environments, which tightens our feedback loop during testing. Through integration with PagerDuty, our DevOps resources are able to reduce the adverse impact of a given production issue.
Tests are configurable per step, with scripting capabilities and support for token-based authentication using variables. Configurable test frequency and integration with both Slack and PagerDuty for alerting our DevOps team are also important, as is general use of the dashboard by multiple team members.
We love the fact that we can have our API tests run on a schedule as often as we need. We also take advantage of being able to set up different environment settings, so that we can easily use the same test in our production, integration, and QA environments. The ability to string together a number of API tests into a test suite is very important to us as well.
You can run it from the cloud and against different data centers. For most people, that's one of the plus points of this product compared to some other solutions.
It is a cloud-based environment for building and executing tests, which allows me to focus on building, running, and reviewing test results.
It is a relatively reliable tool for running large volumes of API tests, which you can schedule to run periodically.
It has a relatively simple user experience.
We can receive email and Slack notifications when tests fail or pass.
Valuable features include: being able to add tests as sub-tests within a parent test for regression testing; notifications for tests that pass or fail; being able to customize expected results with assertions; and being able to set tests to run daily.
It helps us verify that production deployments do not break the features our customers use.
We have the ability to populate environment variables using scripts. This allows us to keep the configuration mostly dynamic and avoid a lot of manual updates when something changes.
Environment initial variables allow our team to run tests against multiple environments on the fly.
We utilize modular tests based on environment variables. These have saved QA a lot of time when setting up our back-end automation.
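Several reviewers value populating environment variables from scripts so that one test definition runs against multiple environments. A minimal Python sketch of the idea (the environment names and settings here are hypothetical, and Runscope's actual test scripts are written in JavaScript, not Python):

```python
import time

# Hypothetical static settings per environment.
ENVIRONMENTS = {
    "qa":          {"base_url": "https://qa.example.com"},
    "integration": {"base_url": "https://int.example.com"},
    "production":  {"base_url": "https://api.example.com"},
}

def build_variables(env_name):
    """Merge static per-environment settings with values computed
    at run time, so the same test works in any environment."""
    variables = dict(ENVIRONMENTS[env_name])
    # Dynamic values keep the configuration from needing manual edits.
    variables["run_id"] = f"{env_name}-{int(time.time())}"
    return variables

print(build_variables("qa")["base_url"])  # https://qa.example.com
```

Because every value is derived from the selected environment, switching from QA to production is a one-argument change rather than a manual update of each test.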
The feature of automatically executing tests at a predefined time interval is great, as we can continuously test our endpoints before and after builds are deployed.
CA Runscope Cons
One thing that can be improved is the logging. Sometimes, when we have an error, we don't know how to find more information about the problem that occurred. Sometimes it just returns something like "Bad Gateway." More information would be a way to improve it.
Currently, we can only schedule to the nearest interval: five minutes, one hour, two hours, etc. If we could set the exact time that we want a test to run, that would be good, because sometimes one service's run crosses another's.
We would like the ability to configure Runscope behavior when a given test has its schedule modified. That is, in a non-production environment, we often need to pause a test (either by removing it from the schedule or adjusting its frequency) and this results in the automatic kick-off of this test in all other environments. We would like the ability to simply pause a test for a specific shared environment without causing additional tests to be started.
We definitely make use of historical test results, especially when tracking more difficult issues such as timeouts. Given our test frequency (every 15 minutes), we often need to go back further than the last 100 test runs. We are aware that results further back in time are available through the Runscope API; however, it would be great if the dashboard allowed easy access to a larger amount of historical data.
Navigation can be a little tricky when changing run environments and switching between test run results and editing the tests. It would be nice if you could switch the environment for all tests in a Bucket to run, rather than having to change each test environment.
I would like to see easier integration and display of test results so that the data could be shared via a dashboard within our company. (Some integration is possible, provided you use one of the supported platforms.)
I've noticed, once or twice, that when updates were pushed by Runscope, it didn't refresh the pages. Multiple tests were running at the same time and it didn't update the webpage I was looking at, so I had to refresh it manually. Occasionally, it still happens.
The built-in editor for the scripts could be improved a lot. Instead of such a basic editor, include a richer one so it is easier to inspect things and debug.
File upload is a big part of the products that we test. The lack of file upload in Runscope means we still have to use automated UI (Selenium) tests for these scenarios.
There are not many options for scalability.
Needs the ability to create code which runs before and after API tests.
When triggering a test bucket via the Trigger API, there is no option to see whether the triggered tests passed or failed.
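The Trigger API complaint above can be illustrated with a sketch: the trigger call starts the runs and returns identifiers for them, but no outcome, so pass/fail status has to be fetched separately afterwards. The response shape below is a simplified assumption for illustration, not Runscope's exact schema:

```python
import json

def extract_run_ids(trigger_response_body):
    """Pull test-run identifiers out of a (simplified, assumed)
    trigger response; note there is no pass/fail field to read."""
    payload = json.loads(trigger_response_body)
    return [run["test_run_id"] for run in payload.get("runs", [])]

# Hypothetical trigger response: runs were started, but have no outcome yet.
body = '{"runs": [{"test_run_id": "abc123"}, {"test_run_id": "def456"}]}'
print(extract_run_ids(body))  # ['abc123', 'def456']
```

A caller wanting pass/fail status would need to take these IDs and poll a separate results endpoint until each run completes.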
We would like to be able to select the actual time a test will run automatically. It appears that you can set up an automatic schedule, but the test runs relative to the last time it ran, not at a time you select. Since we want these tests to run in the middle of the night, it would be nice to be able to select that.
It would be nice to be able to generate API documentation.
The initial setup included a lot of repetitive manual work.
The user interface requires a lot of clicking around.
We would like to have an improved ability to share a set of tests across different environments (e.g., dev, stage, and prod).
If you forget to set the default environment during test creation, you have to go through every test and make sure it is set to your preferred environment when using shared environments. If there were an option to force all tests within a bucket into a specific environment with one click, it would save a lot of time.
Reliability. I would like to see more money invested in disaster recovery testing of the application. The product should be hosted across several AWS regions so that if there are issues with an EC2 instance in a particular region, the whole application won't be affected. Once or twice a year there are issues with AWS that cause the application's reliability to suffer.