The Problem

You have a test. You run a few dry runs and everything looks great. Then you scale things up and execute a real-life test. That’s when things start to go wrong: JMeter crashes and you have no idea why. I’ll give you two real-life examples that happened to me not too long ago.

The first one was a really high throughput, low latency scenario. Messages were not too large, but they were big enough for the clients to require a gzip response. I created a quick test plan to simulate that and started the test. The first thing I noticed was that the throughput was way lower than I was expecting. Time to troubleshoot. I checked the service and everything looked good: resource utilization was low, dependency response times were low and in-container time was low too. That’s a bit odd, since it’s just a simple HTTP sampler and the load generator was a relatively large instance on AWS, but let’s check the client.

Bingo! CPU usage was peaking. That’s strange for a 26 ECU instance, especially when the resource utilization on the target instance was significantly lower. I scratched my head a few times, ran a few tests, took a few thread dumps and came to a conclusion: all the CPU time was being spent decompressing something, namely the HTTP response.

That makes sense. I had actually added the following header to the request in order to get gzip responses:

Accept-Encoding: gzip

So JMeter was spending a huge amount of time decompressing that response, a response I didn’t really care about.

I’ll get to the solution later, but first, the second example.

The second example happened to me this week: a slightly more complex scenario with probably a dozen thread groups. Again, everything ran just fine in a lower-load dry run, but increasing the load for the actual test caused JMeter to crash. This one was easier to figure out, since a nice error message was printed to jmeter.log:

2013/11/16 00:15:41 ERROR - jmeter.threads.JMeterThread: Test failed! java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:2271)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
    at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.readResponse(HTTPSamplerBase.java:1658)
    at org.apache.jmeter.protocol.http.sampler.HTTPAbstractImpl.readResponse(HTTPAbstractImpl.java:235)
    at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.sample(HTTPHC4Impl.java:300)
    at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:62)
    at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1088)
    at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1077)
    at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:428)
    at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256)
    at java.lang.Thread.run(Thread.java:722)

That’s strange; a 16GB heap shouldn’t fill up that easily. Anyway, let’s bump it to 28GB. After a few minutes, bam, same thing: OutOfMemory!

I started checking the usual suspects. I had a couple of Groovy and BeanShell scripts that executed a few tasks before and after the test, but they shouldn’t be part of the actual test loop. Either way, I double-checked everything, converted the Groovy scripts to BeanShell (I’ve had my fair share of Groovy-related problems with JMeter) and tested again. Same deal. This time, though, I decided to take heap and thread dumps when things started to go bad.

First, the heap dump. Nothing conclusive, but strangely, 99.8% of all memory was byte[], tracing back to JMeter’s classes.

[Screenshot: heap dump showing byte[] accounting for nearly all retained memory]

I was not expecting JMeter to have a leak like that, so I ruled it inconclusive and went ahead to check the thread dump. The thread dump was even more interesting: pretty much all threads were stuck at:

parking to wait for java.util.concurrent.locks.AbstractQueuedSynchronizer

All of those threads belonged to the same thread group. I had one synchronized timer there, used to generate bursts every 10 minutes. Interesting. I decided to remove that thread group and run the test again.

Surprisingly, with that thread group removed, the whole test ran smoothly. So let’s check what’s inside it. Not much, I’m afraid: a Test Action sampler that paused the execution for 10 minutes, and a Loop Controller that looped a single HTTP Request a couple of times. The HTTP Request had a single header added, the famous:

Accept-Encoding: gzip

That’s interesting. Let’s check the response size: 12.3MB. That’s a lot, but the transfer rate between AWS nodes is quite fast, so not a problem. 12MB times 60 threads isn’t too much either. But wait, that’s the compressed size. Let’s check the actual size. It took me quite a bit of time to download the entire message, but here it is:

[Screenshot: the full, uncompressed response size]

Wow! 625MB! Well, 625MB times 60 threads fired at exactly the same time is roughly 37GB for that thread group alone, excluding all the others. Unacceptable!
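That also explains the stack trace and the heap dump full of byte[]: the stock HTTP sampler reads the decompressed body entirely into memory. As a rough illustration (my own sketch, not JMeter’s actual source), the pattern looks something like this, and it leaves every one of those 60 threads holding the whole uncompressed body on the heap:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustration only, not JMeter's source: buffering a large response stream
// fully into memory. The backing byte[] has to grow repeatedly (Arrays.copyOf),
// and the final toByteArray() makes yet another full copy of the body.
public class BufferWholeResponse {

    static byte[] readAll(InputStream responseStream) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int read;
        while ((read = responseStream.read(chunk)) != -1) {
            buffer.write(chunk, 0, read); // each write can trigger a copy-and-grow
        }
        return buffer.toByteArray(); // the entire (~625MB here) body, per thread
    }
}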

The Solution

You have a service that usually delivers gzip responses to clients over HTTP. You want to simulate that behavior using JMeter, so logically you add a header to the request, the magical:

Accept-Encoding: gzip

That’s when everything goes wrong and you start having problems like the ones I described above. So how do you solve this? I looked for an option to disable response decompression in JMeter, but found none short of changing the sampler code itself, something I wanted to avoid. So I went the easy route and created a simple Java sampler that does the same thing as JMeter’s HTTP Request. I like httpclient4, so let’s use it to get the test going. At the moment I’m using JMeter 2.9, which already ships with Apache’s httpclient 4.2.3. Here is the sampler code I used:

[gist id=7493741]

It’s a bit rough, but as you can see, I just create a simple httpclient and a GET request, and add the necessary header. I check a few things, like the response code and size, and return a SampleResult with that information. Really simple, but the trick is this line:

EntityUtils.consume(entity);

I’m consuming the entity, meaning I download the message but don’t do anything with it afterwards. So no decompression, no huge CPU utilization and no OutOfMemoryError. One thing to keep in mind: since the response is not decompressed, no response body is returned by the sampler, which means no response parsing. That’s not a problem for me, since the only thing I care about is the response code.
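For reference, here is a rough sketch of what such a sampler can look like. This is my own minimal version, not the exact code from the gist above: the class name (GzipAwareGetSampler) and the url parameter are hypothetical, and it assumes JMeter 2.9’s Java sampler API plus the bundled httpclient 4.2.3.

import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;
import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

// Sketch of a GET sampler that requests gzip but never decompresses the body.
public class GzipAwareGetSampler extends AbstractJavaSamplerClient {

    @Override
    public Arguments getDefaultParameters() {
        Arguments args = new Arguments();
        args.addArgument("url", "http://localhost:8080/resource"); // hypothetical default
        return args;
    }

    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        String url = context.getParameter("url");
        SampleResult result = new SampleResult();
        result.setSampleLabel("GET " + url);

        // A plain DefaultHttpClient does no transparent gzip handling, which is
        // exactly what we want here. Creating one per sample keeps the sketch
        // simple; a real sampler would reuse the client across samples.
        HttpClient client = new DefaultHttpClient();
        result.sampleStart();
        try {
            HttpGet get = new HttpGet(url);
            get.addHeader("Accept-Encoding", "gzip"); // ask the server for a compressed body
            HttpResponse response = client.execute(get);
            result.sampleEnd();

            int statusCode = response.getStatusLine().getStatusCode();
            result.setResponseCode(String.valueOf(statusCode));
            result.setSuccessful(statusCode >= 200 && statusCode < 300);

            HttpEntity entity = response.getEntity();
            if (entity != null) {
                long compressedLength = entity.getContentLength(); // -1 if chunked
                if (compressedLength >= 0) {
                    result.setBytes((int) compressedLength);
                }
                // Drain the still-compressed stream and release the connection,
                // without decompressing or keeping the body around.
                EntityUtils.consume(entity);
            }
        } catch (Exception e) {
            if (result.getEndTime() == 0) {
                result.sampleEnd();
            }
            result.setSuccessful(false);
            result.setResponseMessage(e.toString());
        } finally {
            client.getConnectionManager().shutdown();
        }
        return result;
    }
}

Drop the compiled class (or a jar containing it) into JMeter’s lib/ext directory and select it from a Java Request sampler, passing the url as a parameter.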

I just tested it against the same scenario that was running into OutOfMemoryErrors before, and problem solved!

I’m also planning to put this and a few other things I’ve created for JMeter into a plugin, so next time I face the same problem, I can just reuse the same sampler!