Scaling Performance Tests in the Cloud with JMeter

I have very heterogeneous performance test use cases: from simple performance regression tests executed from a Jenkins node to occasional large stress tests that run at over 100K requests per second with more than 100 load generators. At higher loads, many problems arise, like feeding data to load generators, retrieving results, getting a real-time view, analyzing huge data sets and so on.

JMeter is a great tool, but it has its limitations. In order to scale, I had to work around a few of them and created a test framework to help me execute tests at scale on Amazon's EC2.

Having a central data feeder was a problem. Using JMeter's master node for this is impossible at scale, and a single shared data source might become a bottleneck, so having a way of distributing the data was important. I thought about using a feeder model similar to Twitter's Iago, or a clustered, load-balanced resource, but settled for something simpler. Since most tests only use a limited data set and loop around it, I decided to just bzip2 the files and upload them to each load generator before the test starts. This way I avoided making an extra request to fetch data during execution and requesting the same data multiple times because of the loop. One problem with this approach is that I don't have centralized control over the data set, since each load generator uses the same input. I mitigate that by managing the data locally on each load generator, with a hash function or by introducing random values. I also considered distributing different files to different load generators based on a hash function, but so far there has been no need.
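The hash-based local partitioning idea can be sketched as follows. This is a minimal illustration, not code from the framework: each load generator derives a stable shard index from its own hostname and keeps only its slice of the shared file.

```java
// Sketch: deterministically map a load generator to a slice of the shared
// data set, so each generator works on a distinct shard of the same
// uploaded file. Class and method names are illustrative.
public class DataSharder {

    // Stable, non-negative shard index derived from the generator's hostname.
    public static int shardFor(String hostname, int shardCount) {
        return Math.floorMod(hostname.hashCode(), shardCount);
    }

    // A generator keeps only every Nth line of the shared data file,
    // starting at its own shard index.
    public static boolean ownsLine(String hostname, int shardCount, long lineNumber) {
        return lineNumber % shardCount == shardFor(hostname, shardCount);
    }
}
```

Because the mapping depends only on the hostname, no coordination between generators is needed at test time.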

Retrieving results was tricky too. Again, using JMeter's master node was impossible because of the amount of traffic. I tried having a poller fetch raw results (only timestamp, label, success and response time) in real time, but that affected the results. Downloading all results at the end of the test worked, by checking the status of the test (running or not) every minute and downloading after completion, but I settled on a custom sampler in a tearDown thread group that compresses the results and uploads them to Amazon's S3. This could definitely be a plugin too. It works reasonably well, but I lose the real-time view and have to manually add a file writer and the sampler to each test.
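The compression step of such a tearDown sampler could look like this. This is a sketch using only the JDK; the file names are illustrative and the actual S3 upload is left out:

```java
import java.io.*;
import java.util.zip.GZIPOutputStream;

// Sketch of the work a tearDown sampler would do before the S3 upload:
// gzip the raw results file so only a compressed artifact leaves the
// load generator.
public class ResultCompressor {

    public static File compress(File results) throws IOException {
        File out = new File(results.getPath() + ".gz");
        try (InputStream in = new FileInputStream(results);
             OutputStream gz = new GZIPOutputStream(new FileOutputStream(out))) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                gz.write(buf, 0, n);
            }
        }
        return out; // hand this file to the upload step
    }
}
```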

For the real-time view, I started with the same approach as jmeter-ec2, polling aggregated data (average response time, rps, etc.) from each load generator and printing it, but that proved useless with a large number of load generators. For now, on Java samplers, I'm using Netflix's Servo to publish metrics in real time (averaged over a minute) to our monitoring system. I'm considering writing a listener plugin that could use the same approach to publish data from any sampler. From the monitoring system I can then analyze and plot real-time data with minor delays. Another option I'm considering is the same approach, but with StatsD and Graphite.
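The one-minute averaging behind that kind of publisher can be sketched like this (illustrative names; the actual publishing call to Servo or StatsD is omitted):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: samplers record response times into a rolling one-minute bucket,
// and a scheduled task reads the average and resets the bucket each minute
// before pushing it to the monitoring system.
public class MinuteAverage {
    private final AtomicLong count = new AtomicLong();
    private final AtomicLong totalMillis = new AtomicLong();

    // Called by sampler threads for every completed request.
    public void record(long responseMillis) {
        count.incrementAndGet();
        totalMillis.addAndGet(responseMillis);
    }

    // Called once a minute by the publishing thread.
    public double averageAndReset() {
        long n = count.getAndSet(0);
        long total = totalMillis.getAndSet(0);
        return n == 0 ? 0.0 : (double) total / n;
    }
}
```

Keeping only counters on the hot path means the samplers themselves stay cheap, which matters when a generator is driving thousands of users.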

Analyzing huge result sets was, I believe, the biggest challenge. For that, I've developed a web-based analysis tool. It doesn't store raw results, but mostly time-based, aggregated statistical data from both JMeter and the monitoring systems, allowing some data manipulation for analysis and automatic comparison of result sets. Aggregating and analyzing tests with over 1B samples is a problem even after constant tuning. Loading all data points into memory to sort them and calculate percentiles is practically impossible, simply because the amount of memory needed is impractical, even with small objects. For now, on large tests, I settled on aggregating data while loading results (per-second/per-minute data points) and accepting the statistical problems, like averages of averages. Another option would be to analyze results from each load generator independently and aggregate at the end. In the future, I'm considering keeping results on a Hadoop cluster and using Map/Reduce to get the aggregated statistical data back.
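Aggregate-while-loading can be sketched as follows (illustrative names): each raw sample is folded into a per-minute bucket, so memory grows with test duration rather than sample count. The trade-off mentioned above is visible here: exact percentiles are lost once samples are collapsed into buckets.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch: fold each raw sample into a per-minute bucket while reading the
// results file, keeping only count/sum/min/max per bucket.
public class StreamingAggregator {

    public static class Bucket {
        public long count;
        public long sum;
        public long min = Long.MAX_VALUE;
        public long max = Long.MIN_VALUE;

        public double average() {
            return count == 0 ? 0.0 : (double) sum / count;
        }
    }

    private final Map<Long, Bucket> byMinute = new TreeMap<>();

    public void add(long timestampMillis, long responseMillis) {
        Bucket b = byMinute.computeIfAbsent(timestampMillis / 60000, k -> new Bucket());
        b.count++;
        b.sum += responseMillis;
        b.min = Math.min(b.min, responseMillis);
        b.max = Math.max(b.max, responseMillis);
    }

    public Bucket bucketAt(long timestampMillis) {
        return byMinute.get(timestampMillis / 60000);
    }
}
```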

The framework also helps me automate most of the test process, like creating a new load generator cluster on EC2, copying test artifacts to load generators, executing and monitoring the test while it’s running, collecting results and logs, triggering analysis, tearing down the cluster and cleaning up after the test completes.
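The lifecycle the framework automates can be outlined as a fixed sequence of phases. Everything below is a hypothetical skeleton, not the framework's actual API; the bodies stand in for the real EC2, scp and analysis calls:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical skeleton of the automated test lifecycle. The phase names
// mirror the steps described above; phase() is a placeholder that only
// records that the phase ran, in order.
public class TestRun {
    private final List<String> phases = new ArrayList<>();

    public List<String> execute() {
        phase("provision-cluster");   // create load generators on EC2
        phase("copy-artifacts");      // push test plan and data files
        phase("run-and-monitor");     // start JMeter, watch status
        phase("collect-results");     // pull results and logs (or S3 upload)
        phase("analyze");             // trigger the analysis tool
        phase("teardown");            // terminate instances, clean up
        return phases;
    }

    private void phase(String name) {
        phases.add(name); // placeholder for the real work
    }
}
```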

Most of this was written in Java or Groovy and I hope to open-source the analysis tool in the future.

Data Generator

My colleague and friend Mauricio (@mvgiacomello) shared a useful tip on Data Generation.

There is a web application that can generate test data based on a set of parameters. It can export the data as HTML, Excel, CSV, SQL or XML.

Their website has a demo that can generate up to 200 records. If you need more, you can download the code and deploy it to your own server!

Here is what they say:

Ever needed custom formatted sample / test data, like, bad? Well, that’s the idea of the Data Generator. It’s a free, open source script written in JavaScript, PHP and MySQL that lets you quickly generate large volumes of custom data in a variety of formats for use in testing software, populating databases, and scoring with girls.

You can check it out at:

How MySpace Tested Their Live Site with 1 Million Concurrent Users

In December of 2009 MySpace launched a new wave of streaming music video offerings in New Zealand, building on the previous success of MySpace music. These new features included the ability to watch music videos, search for artist's videos, create lists of favorites, and more. The anticipated load increase from a feature like this on a popular site like MySpace is huge, and they wanted to test these features before making them live.

If you manage the infrastructure that sits behind a high traffic application you don't want any surprises. You want to understand your breaking points, define your capacity thresholds, and know how to react when those thresholds are exceeded. Testing the production infrastructure with actual anticipated load levels is the only way to understand how things will behave when peak traffic arrives.

For MySpace, the goal was to test an additional 1 million concurrent users on their live site stressing the new video features. The key word here is 'concurrent'. Not over the course of an hour or day… 1 million users concurrently active on the site. It should be noted that 1 million virtual users are only a portion of what MySpace typically has on the site during its peaks. They wanted to supplement the live traffic with test traffic to get an idea of the overall performance impact of the new launch on the entire infrastructure. This requires a massive amount of load generation capability, which is where cloud computing comes into play.

To do this testing, MySpace worked with SOASTA to use the cloud as a load generation platform. Here are the details of the load that was generated during testing. All numbers relate to the test traffic from virtual users and do not include the metrics for live users:

  • 1 million concurrent virtual users
  • Test cases split between searching for and watching music videos, rating videos, adding videos to favorites, and viewing artist’s channel pages
  • Transfer rate of 16 gigabits per second
  • 6 terabytes of data transferred per hour
  • Over 77,000 hits per second, not including live traffic
  • 800 Amazon EC2 large instances used to generate load (3200 cloud computing cores)
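As a sanity check, these headline numbers are roughly consistent with each other. A quick back-of-envelope (note the 16 Gbit/s figure implies about 7.2 TB/hour at peak, so the reported 6 TB/hour suggests an average rate somewhat below peak):

```java
// Back-of-envelope check that the reported figures are mutually consistent.
public class LoadMath {
    // 1M virtual users spread across 800 instances.
    static double usersPerInstance() { return 1_000_000.0 / 800; }

    // 16 Gbit/s divided by 77,000 hits/s, in kilobytes per hit.
    static double kbPerHit() { return 16e9 / 8 / 1024 / 77_000; }

    // 16 Gbit/s sustained for an hour, in terabytes (decimal units).
    static double tbPerHour() { return 16.0 / 8 * 3600 / 1000; }
}
```

That works out to 1,250 users per instance (matching the 1,300-1,500 per generator reported below) and roughly 25 KB transferred per hit.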

Test Environment Architecture

SOASTA CloudTest™ manages calling out to cloud providers, in this case Amazon, and provisioning the servers for testing. The process for grabbing 800 EC2 instances took less than 20 minutes. Calls were made to the Amazon EC2 API, requesting servers in chunks of 25. In this case, the team was requesting EC2 Large instances with the following specs to act as load generators and results collectors:

  • 7.5 GB memory
  • 4 EC2 Compute Units (2 virtual CPU cores with 2 EC2 Compute Units each)
  • 850 GB instance storage (2×420 GB plus 10 GB root partition)
  • 64-bit platform
  • Fedora Core 8

In addition, there were 2 EC2 Extra-Large instances to act as the test controller instance and the results database, with the following specs:

  • 15 GB memory
  • 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each)
  • 1,690 GB instance storage (4×420 GB plus 10 GB root partition)
  • 64-bit platform
  • Fedora Core 8
  • PostgreSQL Database

Once CloudTest™ has all of the servers it needs for testing, it begins doing health checks on them to ensure that they are responding and stable. As it finds dead servers it discards them and requests additional servers to fill in the gaps. Provisioning the infrastructure was relatively easy. The diagram (figure 1.) below shows how the test cloud on EC2 was set up to push massive amounts of load into MySpace's datacenters.

While the test is running, batches of load generators report their performance test metrics back to a single analytics service. Each of the analytics services connects to the PostgreSQL database to store the performance data in an aggregated repository. This is part of the way that tests of this magnitude can scale to generate and store so much data: by limiting access to the database to only the metrics aggregators and scaling out horizontally.

Challenges

Because scale tends to break everything, there were a number of challenges encountered throughout the testing exercise.

The test was limited to using 800 EC2 instances

SOASTA is one of the largest consumers of cloud computing resources, routinely using hundreds of servers at a time across multiple cloud providers to conduct these massive load tests. At the time of testing, the team was requesting the maximum number of EC2 instances that it could provision. The limitation in available hardware meant that each server needed to simulate a relatively large number of users. Each load generator was simulating between 1,300 and 1,500 users. This level of load was about 3x what a typical CloudTest™ load generator would drive, and it put new levels of stress on the product that took some creative work by the engineering teams to solve. Some of the tactics used to alleviate the strain on the load generators included:

  • Staggering every virtual user’s requests so that the hits per load generator were not all firing at once
  • Paring down the data being collected to only include what was necessary for performance analysis
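The staggering tactic can be illustrated with a small sketch (the names are illustrative, not CloudTest code): each virtual user gets a fixed, deterministic offset within its think-time window, so hits from one load generator are spread out instead of firing in lockstep.

```java
import java.util.Random;

// Sketch: derive a stable per-user start offset so that requests from a
// single load generator do not all fire at the same instant.
public class Stagger {

    // Deterministic offset in [0, windowMillis), seeded by the user's id.
    public static long offsetFor(int virtualUserId, long windowMillis) {
        return Math.floorMod(new Random(virtualUserId).nextLong(), windowMillis);
    }
}
```

Seeding by user id keeps runs repeatable while still spreading the offsets across the window.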

A large portion of MySpace assets are served from Akamai, and the testing repeatedly maxed out the service capability of parts of the Akamai infrastructure

CDNs typically serve content to site visitors based on their geographic location, from a point of presence closest to them. If you generate all of the test traffic from, say, Amazon's East coast availability zone, then you are likely going to be hitting only one Akamai point of presence.

Under load, the test was generating a significant amount of data transfer and connection traffic towards a handful of Akamai datacenters. This equated to more load on those datacenters than what would probably be generated during typical peaks, but that would not necessarily be unrealistic given that this feature launch was happening for New Zealand traffic only. This stress resulted in new connections being broken or refused by Akamai at certain load levels, generating lots of errors in the test.

This is a common hurdle that needs to be overcome when generating load against production sites. Large-scale production tests need to be designed to take this into account and accurately stress entire production ecosystems. This means generating load from multiple geographic locations so that the traffic is spread out over multiple datacenters. Ultimately, understanding the capacity of geographic POPs was a valuable takeaway from the test.

Because of the impact of the additional load, MySpace had to reposition some of their servers on-the-fly to support the features being tested

During testing, the additional virtual user traffic was stressing some of the MySpace infrastructure pretty heavily. MySpace's operations team was able to grab underutilized servers from other functional clusters and use them to add capacity to the video site cluster in a matter of minutes.

Probably the most amazing thing about this is that MySpace was able to actually do it. They were able to monitor capacity in real time across the whole infrastructure and elastically shrink and expand where needed. People talk about elastic scalability all of the time and it's a beautiful thing to see in practice.

Lessons Learned

  1. For high traffic websites, testing in production is the only way to get an accurate picture of capacity and performance.  For large application infrastructures there are far too many ‘invisible walls’ that can show up if you only test in a lab and then try to extrapolate.
  2. Elastic scalability is becoming an increasingly important part of application architectures.  Applications should be built so that critical business processes can be independently monitored and scaled.  Being able to add capacity relatively quickly is going to be a key architecture theme in the coming year and the big players have known this for a long time.  Facebook, Ebay, Intuit, and many other big web names have evangelized this design principle.  Keeping things loosely coupled has a whole slew of benefits that have been advertised before, but capacity and performance are quickly moving to the front of that list.
  3. Real-time monitoring is critical.  In order to react to capacity or performance problems, you need real-time monitoring in place.  This monitoring should tie in to your key business processes and functional areas, and needs to be as real time as possible.


Configuring WebLogic JMX on LoadRunner Controller

Here is a tip shared by my friend Daisy:

The WebLogic SNMP monitor is enabled in the Controller by default. If you need to monitor WebLogic JMX, you have 2 options:

  1. Use SiteScope as a monitoring tool instead of the Controller;
  2. Enable the WebLogic JMX monitor in the Controller.
    1. To do this you will have to configure the online_resource_graphs.rmd file, located under <LoadRunner DIR>\dat\online_graphs: update the EnableInUi param of the relevant monitor (i.e., for WebLogic (JMX), search for the [WebLogicJMX] section and update its EnableInUi param).
    2. Also, you might need to install the Java JDK on the Controller (I didn't have a chance to test it):

WebLogic Monitor Requirements and Setup


  • JDK 1.4 on controller
  • Permissions on WebLogic MBeans.

Setup on Controller

  • After installing the JDK, copy weblogic.jar from the lib folder of WebLogic Server to <LoadRunner root>\classes.
  • Remove jmxri.jar from <LoadRunner root>\classes.
  • Specify the path in the <LoadRunner root>\dat\monitors\WebLogicMon.ini file. Edit the JVM entry in the [WebLogicMon] section. For example: JVM="E:\Program Files\JavaSoft\JRE\1.4\bin\javaw.exe"
  • Edit the JavaVersion entry in the [WebLogicMon] section.

Set permissions on WebLogic

  • Open the WebLogic console (http://<host:port>/console).
  • In the tree on the left, select Security > Realms > myrealm > Users, and click Configure a new User… in the screen on the right. The Create User: General tab opens.
  • In the Name box, type weblogic.admin.mbean, enter a password, confirm the password, and then click Apply.
  • In the Groups tab, enter the name of any user or group you want to use for monitoring, and then click Apply.

How to Obtain an ICA File Through Citrix Web Interface 4.5 and 4.6

You might be wondering why I'm publishing this here, or why you would need a Citrix ICA file. For now I'll just say that this is an important first step towards performance testing Citrix applications. The full article is not ready yet, but I thought it would be useful to reproduce this information here first, in case the source goes down.


This article describes how to obtain an ICA file through Citrix Web Interface 4.5 and Web Interface 4.6.


In Web Interface versions earlier than 4.5, you can obtain the ICA file contents in Internet Explorer by using the Save Target As… option on a link in the applications page (or a similar operation in other Web browsers). With Web Interface 4.5 or later, this operation no longer results in the ICA file being downloaded.


For Web Interface 4.5:

Complete the following procedure:

When using an unsecured transport mechanism (HTTP):

Change the file type association property. To do so:

  1. Open Windows Explorer.
  2. From the Tools menu, click Folder options…
  3. Select the File Types tab.
  4. Select the ICA / Citrix ICA Client extension.
  5. Click Advanced and select the Confirm open after download check box.
  6. Click OK and then click Close.

Once this is done, each time the application launch is attempted (by clicking the application launch link), a dialog appears asking if you want to open or save the ICA file. Clicking Open launches the application. Clicking Save allows you to save the ICA file to the desired location.

When using a secure transport mechanism (HTTPS) with Internet Explorer:

When using a secure transport mechanism, the ActiveX control (ICO) is used to launch the application (this does not involve saving the ICA file), hence the file cannot be saved. However, changing some settings in Internet Explorer can modify this behaviour so that the ICA file is downloaded. To do this, configure Internet Explorer so that it does not trust the ActiveX control and therefore reverts to downloading the ICA file. Use the following procedure:

  1. If the site is currently in the Trusted sites or Local intranet zone list, remove it (the site should be displayed in the Internet zone).
  2. If you are using an operating system that has the Enhanced Internet Explorer Security Configuration enabled, this feature must be disabled (go to the Control Panel > Add or Remove Programs > Add/Remove Windows Components section).
  3. Adjust the settings so that Internet Explorer can download files using the following procedure:
    1. Go to Tools > Internet Options from the Internet Explorer menu.
    2. Select the Security tab.
    3. Click the Custom level… button to display the Security Settings dialog.
    4. Ensure that File download is set to Enabled in the Downloads section.

Once these changes have been made, follow the same operations as described in the HTTP section above.

For Web Interface 4.6:

  1. Use a Firefox browser to enumerate the application icons. Right-Click and save the file to the desired location.
  2. Edit the saved file in Notepad. Remove the “RemoveICAFile=yes” line so that the ICA file is not deleted from the desired location when the application is accessed.


Alternatively:

  1. Use Web Interface 4.2 or an earlier version to create the desired ICA file.
  2. Edit the saved file in Notepad. Remove the “RemoveICAFile=yes” line so that the ICA file is not deleted from the desired location when the application is accessed.


If the server farm only contains Presentation Server 4.5 servers, use the Presentation Server Console from Presentation Server 4.0 (note: do not use this console to make other changes) and use its built-in feature to create the ICA file by right-clicking the published application. Otherwise, use the Presentation Server Console as normal.


Use Internet Explorer 6 or 7 (if asked to save as .aspx, save it as .ica) and ensure the Web Interface site is accessed via the Internet zone (e.g., by IP address or FQDN).