What is your greatest success or failure using a testing tool? Please share with the community so we can all learn from your experience.
My greatest success was with a horizontally scaled Oracle environment, which was cutting edge at the time. We had a 20% test environment: the same servers as Production, just one-fifth the scale.
The target was 25,000 users in Production, so we aimed for 5,000 in Test. We successfully executed with the data volumes required for 25,000 users, performed a database restore, cleansed the data, and hired a full-size environment from Sun Microsystems.
In our first test at 25,000 users, we came within 2% of our estimates. The results then fed into HyPerformix for a capacity plan, and the client has used my model as a working methodology for large-scale performance testing ever since, and this was back in 2005.
About 70% of that success came down to the pre-test planning.
My greatest success is optimizing LoadRunner with a risk-based approach while incorporating architecture review and performance tuning into a single service. My greatest failure was misconfiguring the Run-time Settings, which gave us a false sense of the application's performance. I also greatly prefer the major commercial testing tools for their immediate analysis and ease of configuration.
Success with a testing tool comes from a combination of factors: human involvement (not just intervention) in defining the right test strategy, plenty of analysis of the product's features, and a product that is at least 60% stable before it is tested.
I have experience using QTP and its first cousin UTP to define a keyword-driven test harness for an ad campaign product (QTP, then a Mercury product, didn't yet have this feature). Testing through deployment was completed successfully after a Six Sigma project established the stability and suitability of a complex UK ad campaign product with about 3,500 web pages.
My greatest success was using LoadRunner to load and stress test company-critical software, testing HTTP(S)/HTML applications with J2EE deep-diagnostics monitoring as well as SAP application chains. It really helped iron out the major performance wrinkles and memory leaks before pushing to production environments. My greatest failure was trying to achieve the same with open source tools like JMeter and OpenSTA. It required too much time to parametrize user interactions and associate performance metrics. The maintenance and startup effort of these open source tools has come down over the past couple of years, but I'd still prefer a major commercial testing tool over open source any time.
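For readers unfamiliar with the JMeter parametrization mentioned above: the usual approach is a CSV Data Set Config element in the test plan, which maps columns of a data file to variables that samplers reference. A minimal sketch of such a fragment from a .jmx file is below; the file name `users.csv` and the variable names are illustrative assumptions, not from the original post.

```xml
<!-- Sketch of a JMeter CSV Data Set Config (.jmx fragment).
     Feeds per-user test data into samplers as ${username} / ${password}.
     "users.csv" is a hypothetical data file with one "username,password" row per line. -->
<CSVDataSet guiclass="TestBeanGUI" testclass="CSVDataSet" testname="User Credentials" enabled="true">
  <stringProp name="filename">users.csv</stringProp>
  <stringProp name="fileEncoding">UTF-8</stringProp>
  <stringProp name="variableNames">username,password</stringProp>
  <boolProp name="ignoreFirstLine">false</boolProp>
  <stringProp name="delimiter">,</stringProp>
  <boolProp name="quotedData">false</boolProp>
  <boolProp name="recycle">true</boolProp>                <!-- loop back to the start of the file when exhausted -->
  <boolProp name="stopThread">false</boolProp>
  <stringProp name="shareMode">shareMode.all</stringProp> <!-- all threads share one cursor into the file -->
</CSVDataSet>
```

Samplers then use `${username}` and `${password}` in place of hard-coded values, which is the per-virtual-user data substitution that commercial tools like LoadRunner automate through their recording and parameter wizards.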
IT Central Station users who are researching performance testing tools often look for comparisons between Eggplant Performance and Micro Focus LoadRunner Professional.
Do you have experience with these tools? If yes, please share some insight about which you prefer and why.