Rudi Hacopian, Quality Assurance lead on one of Shopzilla’s highest scalability challenges yet, “Inventory LIVE!”, has posted an excellent account of his performance testing challenges (and ultimate successes).
What’s the optimal approach for performance testing an application with:
- 200 million domain objects
- 90 JVMs, constituting an Oracle Coherence data-grid, and 12 load-balanced Tomcats
- RESTful web-services, using FastInfoset, JAXB and streaming over HTTP
- A core requirement to deliver hundreds of gigabytes of data within a short time frame
In Rudi’s article, “The Web-Service May Not Be the Bottleneck!”, he discusses the key challenges inherent in performance testing at such scale, and how scaling at the client layer helped achieve higher network utilization and throughput.
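The production service streamed FastInfoset-encoded JAXB objects over HTTP; the details are in Rudi's article. As a minimal sketch of the underlying streaming idea only (all class and element names here are hypothetical, and plain StAX stands in for the JAXB/FastInfoset stack), the server can serialize each domain object to the response stream as it is produced, so memory stays flat no matter how large the snapshot grows:

```java
import java.io.StringWriter;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamWriter;

public class StreamingSnapshotWriter {

    // Stream a snapshot of 'count' mock offers as XML, one element at a
    // time. In a real service the StringWriter would be the HTTP response
    // OutputStream; here it just captures the output for inspection.
    public static String writeSnapshot(int count) {
        try {
            StringWriter out = new StringWriter();
            XMLStreamWriter xml =
                    XMLOutputFactory.newInstance().createXMLStreamWriter(out);
            xml.writeStartDocument();
            xml.writeStartElement("offers");
            for (int i = 0; i < count; i++) {
                xml.writeStartElement("offer");       // hypothetical element name
                xml.writeAttribute("id", Integer.toString(i));
                xml.writeEndElement();
                xml.flush();  // push each element to the wire as it is produced
            }
            xml.writeEndElement();
            xml.writeEndDocument();
            xml.close();
            return out.toString();
        } catch (XMLStreamException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(writeSnapshot(3));
    }
}
```

The per-element `flush()` is the point of the sketch: nothing is buffered into a full document, which is what makes a 200-million-object snapshot feasible over a single HTTP response.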
One core goal of the Inventory LIVE! project was to deliver a ‘snapshot’ of our 200 million domain objects to our clients in a short time period. Once the functional testing effort was out of the way, we focused squarely on the complexity of performance testing. I was personally thrilled and excited about our new and unique approach to shipping such an inordinate amount of data to our clients!
With our Coherence grid fully loaded at 200 million ‘mock’ domain objects (we use a mock data loader), we launched multiple clients against the web-service to test the performance and timeliness of our HTTP streaming approach. An hour into the test, I was puzzled and had A LOT of questions for the team. Why had we streamed only 2MM offers so far? Why did it seem so much slower at scale?
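The client-layer scaling that the article credits for the eventual throughput gains can be made concrete with a toy harness. This is a hypothetical sketch, not Rudi's actual test rig: several client workers drain their streams in parallel and add to a shared counter, which is the aggregate number a load test would divide by elapsed time to get throughput:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ParallelStreamDrain {

    // Run 'clients' concurrent workers, each consuming 'perClient' records,
    // and return the aggregate count. The inner loop stands in for reading
    // one record off an HTTP stream.
    public static long drain(int clients, long perClient) {
        AtomicLong total = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(clients);
        for (int c = 0; c < clients; c++) {
            pool.submit(() -> {
                for (long i = 0; i < perClient; i++) {
                    total.incrementAndGet();  // "one record consumed"
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return total.get();
    }
}
```

If a single client tops out well below the server's capacity (as the 2MM-per-hour observation suggested), adding workers like this is how you find out whether the bottleneck is really the web-service or the consuming side.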