What needs improvement with Runscope?

Miriam Tover - PeerSpot reviewer

11 Answers

NR
Real User
Aug 22, 2018

I've noticed, once or twice, that when Runscope pushed updates, it didn't refresh the pages. I had multiple tests running at the same time, and the webpage I was looking at didn't update, so I had to refresh it manually. Occasionally, that still happens. Another thing I have found is that the built-in script editor could be improved a lot. Instead of such a basic editor, it could offer a bit more, so it is easier to inspect things and debug. Finally, other than the Runscope documentation, I haven't seen much material elsewhere; I haven't seen a user community.

CA
Real User
Aug 19, 2018

Navigation can be a little tricky when changing run environments and when switching between test run results and editing the tests. It would be nice if you could switch the run environment for all tests in a bucket at once, rather than having to change each test's environment individually. I would also like to see easier integration and display of test results, so that the data can be shared via a dashboard within our company. (Some integration is possible, provided you use one of the supported platforms.)
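
On the dashboard point, a stopgap is to pull results out of the Runscope REST API and feed them into whatever reporting tool the company already uses. A minimal sketch, assuming a personal access token and bucket key (placeholders below); the endpoint paths and field names follow the v1 API and are worth double-checking against the current docs:

```python
import csv
import requests

RUNSCOPE_TOKEN = "YOUR_ACCESS_TOKEN"   # placeholder: personal access token
BUCKET_KEY = "YOUR_BUCKET_KEY"         # placeholder: bucket key from the dashboard URL
HEADERS = {"Authorization": f"Bearer {RUNSCOPE_TOKEN}"}
API = "https://api.runscope.com"

def latest_results():
    """Yield the most recent result of every test in the bucket."""
    tests = requests.get(f"{API}/buckets/{BUCKET_KEY}/tests", headers=HEADERS).json()["data"]
    for test in tests:
        results = requests.get(
            f"{API}/buckets/{BUCKET_KEY}/tests/{test['id']}/results",
            headers=HEADERS, params={"count": 1},
        ).json()["data"]
        if results:
            yield test["name"], results[0]["result"], results[0]["finished_at"]

# Dump to CSV so an internal dashboard (Grafana, Excel, etc.) can pick it up.
with open("runscope_status.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["test", "result", "finished_at"])
    writer.writerows(latest_results())
```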

MY
Real User
Jul 30, 2018

The Runscope documentation needs to be improved. For instance, it lacks examples, which makes it difficult for beginners and intermediate users to get started quickly or to go deeper when creating test cases.

BR
Real User
Jul 25, 2018

File upload is a big part of the products that we test. The lack of file upload support in Runscope means we still have to use automated UI (Selenium) tests for these scenarios. Having Runscope support file uploads is the biggest improvement we would benefit from, and I would be willing to pay more for it.

BM
Real User
Jul 25, 2018

* Ability to configure Runscope's behavior when a given test's schedule is modified. In a non-production environment, we often need to pause a test (either by removing it from the schedule or adjusting its frequency), and this automatically kicks the test off in all other environments. We would like to be able to simply pause a test for a specific shared environment without causing additional test runs to be started.
* We definitely make use of historical test results, especially when tracking down more difficult issues such as timeouts. Given our test frequency (every 15 minutes), we often need to go back further than the last 100 test runs. We are aware that older results are available through the Runscope API (see the sketch below), but it would be great if the dashboard allowed easy access to a larger amount of historical data.
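
For the second point, the run history can be walked programmatically. A rough sketch, assuming placeholder credentials and that the test results endpoint accepts count/before paging parameters with a started_at timestamp on each run (worth verifying against the current API docs):

```python
import requests

RUNSCOPE_TOKEN = "YOUR_ACCESS_TOKEN"   # placeholder: personal access token
BUCKET_KEY = "YOUR_BUCKET_KEY"         # placeholder: bucket key
TEST_ID = "YOUR_TEST_ID"               # placeholder: test ID
HEADERS = {"Authorization": f"Bearer {RUNSCOPE_TOKEN}"}
URL = f"https://api.runscope.com/buckets/{BUCKET_KEY}/tests/{TEST_ID}/results"

def all_results(page_size=50):
    """Walk backwards through a test's run history, one page at a time."""
    before = None  # Unix timestamp of the oldest run seen so far
    while True:
        params = {"count": page_size}
        if before is not None:
            params["before"] = before   # assumed paging parameter
        page = requests.get(URL, headers=HEADERS, params=params).json()["data"]
        if not page:
            break
        yield from page
        oldest = min(r["started_at"] for r in page)
        if oldest == before:            # guard against re-fetching the same page
            break
        before = oldest

# Example: count failed runs across the full history, not just the last 100.
failures = sum(1 for r in all_results() if r.get("result") == "fail")
print(f"failed runs on record: {failures}")
```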

SB
Real User
Jul 25, 2018

We would like an improved ability to share a set of tests across different environments (e.g., dev, stage, and prod). The shared environments feature is great in theory, but in practice, if you have two or three of them using the same set of tests, it gets complicated to figure out which environment is failing. Overall, the user interface (UI) requires a lot of clicking around. Waiting for asynchronous results requires a lot of "Pause" steps, which are quite difficult to get right. It would be great to have some type of "Waiting for Results" step that includes success/failure conditions and an eventual timeout with retries.
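
What such a "Waiting for Results" step would encapsulate is essentially a polling loop with a timeout. A minimal sketch of that pattern, using a hypothetical status endpoint (everything here is a placeholder, not an existing Runscope feature):

```python
import time
import requests

def wait_for_result(status_url, expected="complete", timeout=120, interval=5):
    """Poll status_url until the job reaches the expected state or time runs out.

    status_url is a hypothetical endpoint returning JSON like {"state": "..."}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = requests.get(status_url, timeout=10).json().get("state")
        if state == expected:
            return True            # success condition met
        if state == "failed":
            return False           # explicit failure, stop early
        time.sleep(interval)       # back off before the next retry
    raise TimeoutError(f"gave up waiting on {status_url} after {timeout}s")

# Usage: one bounded wait instead of a chain of fixed "Pause" steps.
# wait_for_result("https://api.example.com/jobs/123/status")
```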

LL
Real User
Jul 25, 2018

We would like to be able to select the actual time of day a test runs automatically. You can set up an automatic schedule, but the test runs relative to the last time it ran, not at a time you select. Since we want these tests to run in the middle of the night, it would be nice to be able to pick that time. Also, it would be nice to be able to generate API documentation.
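
One common workaround for exact-time runs is to switch off the built-in schedule and fire the test's trigger URL from an external scheduler (cron, Jenkins, etc.) at the desired time. A minimal sketch; the trigger URL is a placeholder copied from the test's settings page:

```python
#!/usr/bin/env python3
# Run from cron, e.g. "0 3 * * * /usr/bin/python3 trigger_runscope.py",
# so the test starts at 03:00 instead of drifting with the last run time.
import requests

TRIGGER_URL = "https://api.runscope.com/radar/YOUR_TRIGGER_ID/trigger"  # placeholder

resp = requests.get(TRIGGER_URL, timeout=30)
resp.raise_for_status()
print("triggered:", resp.json())
```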

KF
Real User
Jun 13, 2018

One thing I tend to forget when creating new tests is setting the default environment in the test settings. If you forget to set it during test creation, you have to go through every test and make sure it points to the environment you prefer when using shared environments. An option to force all tests within a bucket into a specific environment with one click would save a lot of time.
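
Until something like that exists, the bulk change can be scripted against the Runscope API. A rough sketch, assuming the test detail endpoint accepts updates to a default_environment_id field (both the PUT support and the field name are assumptions to check against the API docs):

```python
import requests

RUNSCOPE_TOKEN = "YOUR_ACCESS_TOKEN"    # placeholder
BUCKET_KEY = "YOUR_BUCKET_KEY"          # placeholder
ENVIRONMENT_ID = "YOUR_SHARED_ENV_ID"   # placeholder: shared environment to enforce
HEADERS = {"Authorization": f"Bearer {RUNSCOPE_TOKEN}"}
API = "https://api.runscope.com"

# List every test in the bucket, then point each one at the shared environment.
tests = requests.get(f"{API}/buckets/{BUCKET_KEY}/tests", headers=HEADERS).json()["data"]
for test in tests:
    resp = requests.put(
        f"{API}/buckets/{BUCKET_KEY}/tests/{test['id']}",
        headers=HEADERS,
        json={"default_environment_id": ENVIRONMENT_ID},  # assumed field name
    )
    resp.raise_for_status()
    print("updated", test["name"])
```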

RL
Real User
Jun 6, 2018

One thing that can be improved is the logging. Sometimes, when we have an error, we don't know how to find more information about what happened; it just returns something like "Bad Gateway." More detail there would be an improvement.

Also, currently we can only schedule to the nearest interval: five minutes, one hour, two hours, and so on. If we could set the exact time we want a test to run, that would be good, because sometimes one test crosses another. It is rare, but it happens when the services are being loaded at the same time. Our tests were doing creation, addition, and deletion; before we deleted, one test crossed the results of another, and then the deletion broke the other service. It was just a coincidence of running in the same millisecond, so being able to schedule the exact time we want, or to schedule the order of test execution, would be a good improvement.
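
Until exact-time or ordered scheduling exists, one workaround for the ordering problem is to disable the overlapping schedules and drive both tests from a single script, so the second test only starts after the first run has finished. A rough sketch; the test IDs and trigger URLs are placeholders, and the result values ("working", "pass", "fail") are assumptions from the v1 API:

```python
import time
import requests

RUNSCOPE_TOKEN = "YOUR_ACCESS_TOKEN"   # placeholder
BUCKET_KEY = "YOUR_BUCKET_KEY"         # placeholder
HEADERS = {"Authorization": f"Bearer {RUNSCOPE_TOKEN}"}
API = "https://api.runscope.com"

# Placeholder test IDs and trigger URLs copied from each test's settings page.
CREATION_TEST = ("TEST_A_ID", "https://api.runscope.com/radar/TRIGGER_A/trigger")
DELETION_TEST = ("TEST_B_ID", "https://api.runscope.com/radar/TRIGGER_B/trigger")

def run_and_wait(test_id, trigger_url, poll=10, timeout=600):
    """Trigger a test, then poll its latest result until it leaves the 'working' state."""
    requests.get(trigger_url, timeout=30).raise_for_status()
    time.sleep(poll)  # give the new run a moment to show up in the results list
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        latest = requests.get(
            f"{API}/buckets/{BUCKET_KEY}/tests/{test_id}/results",
            headers=HEADERS, params={"count": 1},
        ).json()["data"][0]
        if latest["result"] != "working":   # assumed terminal values: pass / fail
            return latest["result"]
        time.sleep(poll)
    raise TimeoutError(f"test {test_id} did not finish within {timeout}s")

# Run the creation/addition test to completion before the deletion test starts.
print("creation:", run_and_wait(*CREATION_TEST))
print("deletion:", run_and_wait(*DELETION_TEST))
```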

it_user884064 - PeerSpot reviewer
Real User
Jun 6, 2018

* Ability to create code which runs before and after API tests.
* The "initial script" editing area is relatively small and limited compared to proper code editors.
* The editor lacks standard features and validation.
* When triggering a test bucket via the trigger API, there is no option to see whether the triggered tests passed or failed (see the sketch below).
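
A possible workaround for the last point is to capture the run identifiers returned by the trigger call and poll each run's result afterwards. A rough sketch; the bucket trigger URL format and the response fields ("runs", "test_id", "test_run_id", "result") are assumptions to verify against the API docs:

```python
import time
import requests

RUNSCOPE_TOKEN = "YOUR_ACCESS_TOKEN"   # placeholder
BUCKET_KEY = "YOUR_BUCKET_KEY"         # placeholder
BUCKET_TRIGGER_URL = "https://api.runscope.com/radar/bucket/YOUR_TRIGGER_ID/trigger"  # placeholder
HEADERS = {"Authorization": f"Bearer {RUNSCOPE_TOKEN}"}
API = "https://api.runscope.com"

# Kick off every test in the bucket and remember which runs were started.
runs = requests.get(BUCKET_TRIGGER_URL, timeout=30).json()["data"]["runs"]  # assumed schema

# Poll each run until it reports a terminal result.
for run in runs:
    url = f"{API}/buckets/{BUCKET_KEY}/tests/{run['test_id']}/results/{run['test_run_id']}"
    for _ in range(60):                      # poll for up to ~10 minutes per run
        result = requests.get(url, headers=HEADERS).json()["data"]["result"]
        if result != "working":              # assumed terminal values: pass / fail
            print(run["test_id"], "->", result)
            break
        time.sleep(10)
```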

NM
Real User
May 28, 2018

Reliability. I would like to see more money invested in disaster recovery testing of the application. The product should be hosted across several AWS regions so if there are issues with an EC2 instance in a particular region, the whole application won't be affected.
