A Formal Performance Tuning Methodology: Wait-Based Tuning

Performance tuning enterprise Java applications can be an arduous and sometimes unfruitful task, both because of the complexity of modern applications and because of a lack of formal tuning methodologies. Modern enterprise applications differ significantly from their counterparts of as recently as a decade ago in that they support multiple inputs, multiple outputs, and complex frameworks and business processing engines. Ten years ago, web-based enterprise applications could expect input from a web browser, backend processing through interactions with a database or a legacy system, and output back to a web browser (HTML). Today, input can come from an HTML browser, a thick client, a mobile device, or a web service, and it may pass through servlets running in any of a dozen different architectures or through a portal container, which in turn may call enterprise beans, external web services, or delegate processing to a business rules engine. Each of these components may then interact with a content management system, a caching layer, a plethora of databases, and legacy systems. The output is then usually captured in a presentation-independent form that is translated to HTML, XML, WML, or any other format that client applications expect. Modern applications have more moving parts and more “black boxes” than in the past, which presents significant performance tuning challenges.

In addition to this increase in complexity, performance tuning is still more “art” than “science” with most performance tuning guides focusing on performance metrics that are sometimes cryptic and may or may not impact the end user experience. This article attempts to transition the process of performance tuning into the realm of “science” by presenting a repeatable process that focuses on the end user experience by analyzing an application’s architecture in terms of “wait-points”, or portions of an application that can cause a request to wait. In short, Wait-Based Tuning allows performance engineers to quickly realize measurable performance gains by optimizing the end-user experience.

Performance Tuning Process

Before reviewing the details of Wait-Based Tuning and Wait-Point Analysis, this section presents an overview, or roadmap, of the process of effective performance tuning. Performance tuning can be summarized simply in four steps:

  1. Load Test
  2. Container Tuning
  3. Application Tuning
  4. Iterate

As with most of computer science, performance tuning is an iterative process. It begins with the construction of a proper load test, containing both balanced and representative service requests, which is then followed by a container tuning exercise. As containers are tuned, application bottlenecks will emerge as a result of the increased load. As application bottlenecks are identified and resolved, the application will behave differently, which will require the container to be retuned. This process of alternating between container and application tuning can be repeated until performance is acceptable (or until the project has run out of time and needs to be released).

Load Testing Methodology

A prerequisite to being able to start a performance tuning exercise is the construction of a proper load test suite. A load test must address the following two points:

  • The load must be representative of what end users are doing (or expected to do)
  • The load must be balanced in the same proportion to mimic end user behavior

That is to say that the load must reproduce end user actions in the same proportion that end users are performing them. To illustrate the importance of balancing end user actions, consider the following scenario: in an insurance claims department, employees exhibit the following behavior:

  1. Users login at 8am
  2. On average they process five claims in the morning
  3. About 80% of users forget to log off before leaving for lunch, so their sessions expire
  4. After lunch, users re-login into the application
  5. They process an average of five claims in the afternoon
  6. They generate two reports before leaving
  7. 80% of the users log out from the system before going home

This example is probably an oversimplification of a real-world application, but it suffices to establish the balance between service requests. For each user, this scenario presents the following balance: two logins, ten claims, two reports, and one logout.
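To make this balance concrete, a load generator simply needs to pick each simulated user action in proportion to these numbers. The following is a minimal sketch in Java (the class and action names are illustrative, not part of any particular load testing tool) that selects the next action using the two-logins : ten-claims : two-reports : one-logout weighting:

import java.util.Random;

// Illustrative sketch: choose the next simulated user action according to the
// 2 logins : 10 claims : 2 reports : 1 logout balance described above.
public class BalancedActionChooser {
    private static final String[] ACTIONS = { "login", "processClaim", "generateReport", "logout" };
    private static final int[] WEIGHTS   = { 2, 10, 2, 1 };   // must stay in sync with ACTIONS
    private final Random random = new Random();

    public String nextAction() {
        int total = 0;
        for (int w : WEIGHTS) total += w;          // 15 weighted "slots" in this example
        int slot = random.nextInt(total);          // pick one slot uniformly at random
        for (int i = 0; i < ACTIONS.length; i++) {
            slot -= WEIGHTS[i];
            if (slot < 0) return ACTIONS[i];       // claims account for ~10/15 of all requests
        }
        return ACTIONS[ACTIONS.length - 1];        // unreachable; keeps the compiler happy
    }
}

A real load test script would, of course, also preserve the ordering of a user's session (login before claims, claims before logout), but the weighting is what keeps the generated load representative.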

What would happen if the load generator distributed load equally among the different service requests? In such a scenario, the user login and logout functionality would receive the same amount of load as the claim processing functionality. Considering an expected load of 1000 simultaneous users, the login functionality might quickly fall apart and cause the organization to invest money to build out a login component that can handle load it will never receive. Worse yet, tuning efforts would focus on the login functionality, which presents the greatest bottleneck in this scenario, at the expense of the claim processing functionality. In short, an unbalanced load can result in tuning portions of an application to support load that they will never receive while not tuning other portions of an application to support load that they will receive!

Determining what load is balanced and representative for an application is different when examining an existing application (or a new version of an existing application) than when building a new application.

Existing Applications

An existing application presents a distinct advantage over its new application counterparts: real user behaviors can be observed in a production environment. Depending on the nature of requests and how they are identified by an application, there are two options to identify end user behavior:

  • Access Logs
  • End User Experience Monitor

For most web-based applications, access logs provide enough insight to facilitate the discovery of the nature of service requests as well as their relative balance. Web servers can be configured to capture end user request information and store it in a log file referred to as an “Access Log” (because the file is typically named “access.log”.) The key to being able to use an access log to identify user behavior is that application interactions need to be distinguishable by their URIs. For example, if the actions in the previous example equated to URIs like “/login.do”, “/processClaim.do”, and “/logout.do”, then it would be very simple to find those in the access log file to determine their relative balance. Furthermore, sorting an access log file by the most frequent URIs would quickly identify the top “n” percent of requests – where “n” should be around 80%.
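As a rough illustration of how such an analysis can be automated, the following sketch (assuming a common-log-format file named "access.log" in the working directory) counts how many times each URI appears so that the relative balance of requests can be derived:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: tally URI frequencies from a common-log-format access log
// so the relative balance of service requests can be estimated.
public class AccessLogBalance {
    public static void main(String[] args) throws Exception {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        BufferedReader in = new BufferedReader(new FileReader("access.log"));
        String line;
        while ((line = in.readLine()) != null) {
            String[] quoted = line.split("\"");          // the request line is the quoted field
            if (quoted.length < 2) continue;             // skip malformed lines
            String[] request = quoted[1].split(" ");     // e.g. GET /processClaim.do HTTP/1.1
            if (request.length < 2) continue;
            String uri = request[1];
            Integer n = counts.get(uri);
            counts.put(uri, n == null ? 1 : n + 1);
        }
        in.close();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            System.out.println(e.getValue() + "\t" + e.getKey());
        }
    }
}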

Access logs are text files that can be examined manually (not a very fruitful task), parsed programmatically, or analyzed by a tool. There are many commercial solutions; Quest Software, for example, has a product called Funnel Web Analyzer that was retired some years ago but, due to popular demand, has been re-released as freeware. Funnel Web Analyzer can analyze most access log files and present the information required to construct proper load tests.

Some applications are not quite as simple, and user interactions cannot be identified by a simple URI. For example, consider an application that has a single front-controller servlet that accepts an XML payload – the business operation to perform is identified inside the payload itself. In such a scenario, another tool is needed to inspect the payload and determine the business case being satisfied. This could potentially be built manually using a servlet filter, or it could require a hardware device known as an end user experience monitor.

Irrespective of how user behavior is obtained, it is a core prerequisite before starting any performance tuning exercise.

New Applications

New applications present a unique challenge because there are not any end user behaviors to analyze. There are three steps to consider when identifying user behaviors in a new application, as illustrated in Figure 1.

Figure 1 Estimating End User Behavior for a New Application

The first step is to estimate what end users are expected to do. This step is a formal way of saying “take a guess,” but an educated guess. The estimation should come from a discussion between two parties: the application business owner and the application technical owner. The application business owner, who is typically a product manager, is responsible for detailing how the end user is expected to use the application – for example, he might report that the end user is expected to login, process five claims, timeout, process five more claims, generate two reports, and then logout. The application technical owner, who might be the architect or technical lead, is responsible for translating this abstract list of business interactions into the technical steps needed to generate the load test – for example, he might report that login is accomplished through the “/login.do” URI and that there are five URIs that comprise the steps in processing a claim. Together, these individuals (or groups or committees in some large projects) should provide enough information to construct a baseline load test.

After the load test has been created and used to tune the application and containers and the application has been deployed to a production environment, the tuning work is not complete. The next step in this load testing methodology is to validate the load test suite. This is typically a multi-stage activity:

  • Smoke test validation: validate the estimations against live production end user behavior in the first week or two of operations. This validation step is required to ensure that there were not any gross errors made during the estimation process.
  • Production Validation: some applications require time before users fall into a consistent pattern of usage. This amount of time is application dependent and may take a month or a quarter, but it is important to validate end user behavior against estimations once users settle into using the application.
  • Regression Validation: it is a best practice to validate user behavior periodically throughout an application’s production lifecycle in case user behavior changes or new features or new workflows are introduced that change end user behavior.

The final step, which is typically overlooked, is reflection. It is important to reflect upon the accuracy of estimations against actual end user behavior, because it is only through reflection that user behaviors become better understood and estimations improve in subsequent applications. Without reflection, the same mistakes will be made time after time, which will increase the amount of tuning work in the end.

Wait-Based Tuning

With a load test in hand, it is time to determine where tuning efforts are best spent. Most tuning guides are concerned with “performance ratios” or the relationships between metrics. For example, a tuning guide might emphasize that a cache hit ratio should be 80% or higher, so load test the application while adjusting the cache size until the hit ratio is at 80%. Then move to the next metric in the list, while constantly validating that tuning the new metric does not invalidate the tuning of the previous metrics.

Not only is this a difficult task, but it can also be highly unfruitful. For example, tuning the cache hit ratio to 80% might be a good thing, but there are more important questions, such as:

  • How dependent is the application on the cache (what percentage of requests interact with the cache)?
  • How important are these requests with respect to the other requests in the application?
  • What is the nature of the items being cached? Should they be cached at all?

Wait-based tuning promotes the concept of analyzing the business interactions of an application, the underlying architecture that implements those business interactions, and optimizing the processing of those business interactions. The first step is to analyze the architecture of an application to identify the technologies that are employed in satisfying requests. Each employed technology may present a “wait-point”, or a location in the application in which a request may have to wait for something before it can continue processing. For example, if a request performs a database query then it must obtain a database connection from a connection pool – if the connection pool does not have an available connection then the request must wait for a connection to become available. Likewise, if the request invokes a web service, that web service will have a request queue (with an associated thread pool that processes incoming requests) that can potentially cause the request to wait until a thread becomes available.
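As a concrete illustration of the first wait-point, the sketch below shows the familiar JDBC pattern in which a request borrows a connection from a DataSource-backed pool. The class and method names are hypothetical, and how long getConnection() blocks when the pool is exhausted depends on the pool implementation and its timeout settings:

import java.sql.Connection;
import javax.sql.DataSource;

// Illustrative only: when every pooled connection is in use, the request thread
// blocks inside getConnection() until a connection is returned to the pool
// (or the pool's configured wait timeout expires). The DataSource lookup and
// pool configuration are container specific and are assumed here.
public class ClaimDao {
    private final DataSource dataSource;

    public ClaimDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void processClaim(long claimId) throws Exception {
        Connection connection = dataSource.getConnection();   // potential wait-point
        try {
            // ... execute the claim query using the connection ...
        } finally {
            connection.close();   // returns the connection to the pool for the next request
        }
    }
}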

From this analysis, referred to as Wait-Point Analysis, two categories of wait-points can be identified:

  • Tier-based wait points
  • Technology-based wait points

This section begins by reviewing Wait-Point Architectural Analysis and then surveys the various types of wait-points.

Wait-Point Architectural Analysis

The most important takeaway from this discussion is that performance tuning must be performed in the context of the architecture of the application being tuned. This is one reason why tuning performance ratios can be so ineffective: tuning an arbitrary performance metric to a best practice setting may or may not be good for the application being tuned – and may or may not positively affect the end user experience.

Wait-Point Analysis is the process of dissecting the major request processing paths through an application in order to identify resources that can potentially cause a request to wait. The most effective strategy to employ in a wait-point analysis exercise is to identify the core processing paths in the application and white board those paths. Include all tiers that a request may pass between, all external services that the request may interact with, all objects that are pooled, and all objects that are cached.

Tier-Based Wait Points

Any time a request passes across a physical tier, such as between a web tier and a business tier, or makes a call to an external server, such as when invoking a web service, there is an implicit wait-point associated with that transition. Consider that servers would not be effective if they serviced only a single request at a time, so they implement a multi-threading strategy. Typically a server listens on a socket for incoming requests – when it receives a request, it quickly places that request in a request queue and returns to listening for additional incoming requests. The request queue then has an associated thread pool that removes the request from the queue and processes it. Figure 2 illustrates how this process is performed with three tiers: a web server, a dynamic web tier, and a business tier.

Figure 2 Tier-Based Wait Points

Because the action of a request passing across a tier involves a request queue, which is serviced by an associated thread pool, the thread pool presents a potentially significant wait-point. The size of each thread pool must be tuned with the following considerations:

  • The pool must be large enough so that incoming requests do not need to wait unnecessarily for a thread
  • The pool must not be so large that it saturates the server. Too many threads will cause the server to spend more time switching between the individual thread contexts and less time processing requests. This is typified by high CPU utilization and a decrease in request throughput
  • The pool should be optimally sized so as not to saturate any backend resources that it interacts with. For example, if a database can only support 50 requests from an individual server then that server should not send more than 50 requests to the database.

The optimal size for a server thread pool is the number of threads that generates enough load on the server’s limiting dependencies to maximize their usage, but without causing them to saturate. See the “Tuning Backwards” section below for more on sizing limiting dependency pools.
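The request queue and thread pool pattern itself can be sketched with the java.util.concurrent primitives. The sizes below are purely illustrative and would come out of the “tuning backwards” exercise described later; the class is hypothetical, not part of any particular server:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Minimal sketch of the request-queue/thread-pool pattern: a listener thread
// would accept connections and submit them here; requests wait in the bounded
// queue until a worker thread becomes free (the tier-based wait-point).
public class RequestProcessor {
    private final ThreadPoolExecutor workers = new ThreadPoolExecutor(
            80, 80,                                   // sized to drive, not saturate, the back end
            60, TimeUnit.SECONDS,
            new ArrayBlockingQueue<Runnable>(1000));  // the request queue

    public void submit(final Object request) {
        workers.execute(new Runnable() {
            public void run() {
                handle(request);                      // application processing happens here
            }
        });
    }

    private void handle(Object request) {
        // ... servlet or business-tier processing ...
    }
}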

Technology-Based Wait Points

While tier-based wait points are concerned with moving a request between servers, technology-based wait points are concerned with moving a request efficiently through the inner workings of an individual server. Tier-based tuning, which is somewhat analogous to IBM’s Queue Tuning, is an effective first step in tuning an application, but neglecting to tune the inner workings of an application server can have huge ramifications on the performance of an application. This is analogous to tuning JDBC connection pools to send the most optimal amount of load to the database, but neglecting to review the SQL being executed – if the query is joining ten tables each with a million records then the optimal load may be two connections but if the query is optimized then the database may be able to support two hundred connections.

Looking inside an application server and the potential technologies that an application can utilize yields the following common technology-based wait points:

  • Pooled objects (such as Stateless Session Beans or any business objects that the application pools)
  • Caching infrastructure
  • Persistent storage or external dependency pools
  • Messaging infrastructure
  • Garbage collection

In most cases, Stateless Session Bean pool sizes are optimized by the application server and do not present a significant wait-point, unless the pool size has been manually configured improperly. But there are objects that are pooled in applications that must be manually sized – and these can present valid wait-points. Consider that when an application needs a pooled resource, it must obtain an instance of that resource from the pool, use it, and then return it to the pool. If the pool is sized too small and all object instances are in use, then a request will be forced to wait for an object to become available. Waiting for a pooled resource increases response time (obviously), but it can cause a significant performance degradation if more and more requests continue to back up waiting on the pooled resource. If, on the other hand, the pool is sized too large, then it may consume too much memory and negatively affect the performance of the JVM as a whole. It is a tradeoff, and the optimal size for these pools can only be determined after a thorough analysis of the pool’s usage.
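A minimal sketch of such a manually sized pool, built on a blocking queue, shows exactly where the wait occurs. The class is hypothetical; production pools typically add wait timeouts, validation, and instrumentation:

import java.util.Collection;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative object pool: if all instances are checked out, take() blocks the
// request until another request returns one, which is the wait-point described above.
public class BusinessObjectPool<T> {
    private final BlockingQueue<T> available;

    public BusinessObjectPool(Collection<T> instances) {
        this.available = new ArrayBlockingQueue<T>(instances.size(), false, instances);
    }

    public T borrow() throws InterruptedException {
        return available.take();        // waits here when the pool is sized too small
    }

    public void release(T instance) throws InterruptedException {
        available.put(instance);        // returns the instance for the next request
    }
}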

Pooled objects are stateless, meaning that it does not matter which instance the application obtains from the pool – any instance will suffice. Cached objects, on the other hand, are stateful, meaning that when the application requests an object from the cache, it needs a specific instance. A very crude analogy that illustrates this difference is this: consider two common activities that occur in many people’s day: shopping at a supermarket and then picking up one’s child from school. At the supermarket, any cashier can check out any customer, it does not matter which cashier a customer selects, any cashier will suffice. Therefore cashiers would be pooled. But when picking up a child from school, a parent wants his or her child, another child will not suffice. Therefore children would be cached.

With that said, caches present a unique tuning challenge. The purpose of a cache, from a simplistic perspective, is to store objects locally in memory and make them readily available to the application rather than obtaining them on demand. A properly sized cache can provide a significant performance improvement over making a remote call to load an object. An improperly sized cache, however, can create a significant performance hindrance. Because caches hold stateful objects, it is important for the cache to maintain the most frequently accessed objects in the cache and provide enough additional space in the cache for infrequently accessed objects to pass through. Consider the behavior of a request that accesses a cache that is sized too small:

  1. The request checks the cache for an object, but it is not in the cache
  2. The request then needs to query the external resource for the object’s data and build an object from that data
  3. Because caches are typically meant to maintain the most recently accessed data, the new item needs to be added to the cache (it is being accessed now)
  4. But if the cache is full, an object must be selected from the cache to be removed using an algorithm like the “least recently used” algorithm
  5. If the cached object’s state is not persisted to the external resource then the external resource must be updated before the object is discarded
  6. The new object can now be added to the cache
  7. The new object can finally be returned to the request

This is a cumbersome process and if the majority of requests have to perform each of these steps then the cache will truly hinder performance. The cache must be sized large enough to minimize cache “misses”, where a miss essentially equates to performing each of the seven aforementioned steps, but not so large as to consume too much JVM memory. If the cache needs to be substantially large in order to be effective then it is important to reconsider the nature of the objects being cached and whether they should be cached at all.
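For illustration, a minimal bounded LRU cache can be sketched on top of java.util.LinkedHashMap; steps 4 through 6 above correspond to the eviction that happens when the size limit is exceeded (write-back of dirty state, step 5, is omitted for brevity, and the class name is hypothetical):

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of a bounded LRU cache: when the cache is full, the least
// recently used entry is evicted to make room for the newly loaded object.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true);          // access-order so "recently used" is tracked
        this.maxEntries = maxEntries;
    }

    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;      // evict once the configured size is exceeded
    }
}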

Similar to object pools, external resource pools, such as database connection pools, must be sized large enough so that requests are not forced to wait for a connection to become available in the pool, but not so large that the application saturates the external resource. The “Tuning Backwards” section below discusses how to determine the optimal size for these pools, but in the context of this section, be aware that they present another significant wait-point.

Tuning messaging infrastructures is well beyond the scope of this article, with implementations varying significantly between major products like MSMQ, MQSeries, TIBCO, and so forth, but be aware that, if an application is interacting with a messaging server, it must be properly tuned or it too can present a wait-point.

The final wait-point that can significantly impact the performance of a JVM is garbage collection. It does not fit nicely into the wait-point analysis process described in this article (examining a request with the intention of identifying technologies that can cause a request to wait), but because it can have such a profound impact on performance, it is listed here. Different JVM implementations and different garbage collection strategies affect how garbage collection is performed, but in many cases, a major garbage collection (or a mark-sweep-compact garbage collection) can cause an entire JVM to freeze until the garbage collection is complete. One of the single biggest performance improvements that can be made to a JVM is to optimize its garbage collection behavior. For more information on garbage collection, join the GeekCap discussions on Application Infrastructure Tuning.
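As a purely illustrative example (the right settings are entirely application and JVM specific, and the jar name here is hypothetical), a HotSpot JVM might be started with an explicit heap size, an alternate collector, and verbose garbage collection logging so that its behavior can be observed and tuned:

java -Xms1024m -Xmx1024m \
     -XX:+UseConcMarkSweepGC \
     -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
     -jar myApplication.jar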

Tuning Backwards

Now that all of the tier-based and technology-based wait-points have been called out, the final step is to optimize the configuration of each wait-point. This step is sometimes referred to as “tuning backwards” and is conceptually very simple:

  1. Open all tier-based wait-points and external dependency pools – in other words configure them to allow too much load to pass through the server
  2. Generate balanced and representative service requests against the application
  3. Identify the wait-points that saturate first, which will typically be external dependencies, such as a database
  4. Tighten the configuration of the limiting wait-points to allow enough load to pass to the external dependency without saturating it
  5. Tune all other tier-based wait-points to only send enough load through the server to maximize the limiting wait-points but not cause requests to wait
  6. Allow all other requests to wait at a tier that contains little business logic, such as the web server

The principle in place here is that the application should send only enough load to its external dependencies to maximize their usage without causing saturation – and all other wait-points should be configured to pass only enough load to maximize the limiting wait-points. For example, if a database becomes saturated by 50 connections from each application server, then the database connection pool should be configured to send fewer than 50 requests to the database (for example, configure the pool to contain 40 or 45 connections). Next, if 80 threads generate 40 database connections, then the thread pool for the application server should be configured to 80. Finally, the web server should not send more than 80 requests to each application server at any given time.
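Expressed as a hypothetical summary (these are the illustrative values from the example above, not settings for any particular server), the layered limits look like this:

  • Database saturation point (measured): 50 connections per application server
  • JDBC connection pool maximum: 40–45 connections, kept safely below the saturation point
  • Application server thread pool: 80 threads, enough to keep roughly 40 connections busy
  • Web server connections per application server: 80, so excess requests queue at the web tier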

All technology wait-points, such as object pools, caches, and garbage collection, should be tuned to maximize the throughput of a request so that it can pass through a server, or between tier-based wait points, as quickly as possible.

Summary

Performance tuning was once more “art” than “science”, but after a combination of abstract analysis and trial-and-error, wait-based tuning has proven to make the exercise far more scientific and far more effective. Wait-based tuning begins by performing a wait-point analysis of an application’s architecture in order to identify technologies employed by the architecture that can potentially cause a request to wait. Wait-points come in two flavors: tier-based wait-points, which are indicative of any transition between application tiers, and technology-based wait-points, which are technology features such as caches, pools, and messaging infrastructures that can improve or hinder performance. With a set of wait-points identified, the tuning process is implemented by opening all tier-based wait-points and external dependency pools, generating balanced and representative load against the application, and tuning backwards, or tightening wait-points to maximize the performance of a request’s weakest link, but without saturating it.

Wait-based tuning has proven itself time and time again in real-world production environments to not only be effective, but to allow a performance engineer to realize measurable performance improvements very quickly.

By Steven Haines (Via InfoQ)

Performance-tuning Adobe AIR applications

While searching for answers to a few issues I’m having with an Adobe AIR application, I found this cool article about performance-tuning Adobe AIR applications, written by Oliver Goldman from Adobe.

Application performance is perennial. It’s in its nature. In order for an application to perform well, every part of the application has to perform well. Lapse in one area and it brings your entire application down. It’s difficult to write a large application without letting your guard down once in a while.

Questions about performance often indicate a failure to understand this weakest-link-in-the-chain aspect of the problem. Here are some of my favorite lousy questions about performance and AIR applications:

  • Will my AIR application be fast?
  • Is AIR fast enough to do X?
  • Isn’t AIR too slow to do Y?

(Here’s proof also that no matter what your kindergarten teacher told you, there is such a thing as a lousy question.)

AIR almost never makes it impossible to achieve good performance in your application. On the other hand, AIR can’t do it for you, either. Like I said, it’s the nature of the problem.

Fortunately, standard tuning techniques apply to AIR as much as they’d apply to writing any piece of desktop software.

Asking good questions

Achieving good performance starts, like most engineering problems, with understanding the problem you’re trying to solve. Here are some good questions to ask about your application:

  • Which operations in my application are performance sensitive?
  • What metric can I use to measure this sensitivity?
  • How can I optimize my application to that metric?

Most applications contain a lot of code that runs well enough. Don’t spend your time on that stuff, especially if any gains would be below the threshold at which users could notice them. Make sure you’re focused on things that matter.

Common examples of operations worth optimizing are:

  • Image, sound, and video processing
  • Rendering large data sets or 3D models
  • Searching
  • Responding to user input

Defining metrics

Performance is often equated with speed, but don’t be lulled into thinking that’s the only metric that matters. You may find that you need to tune for memory use or battery life. Applications that minimize use of these may also be considered better performing than those that don’t. Sometimes optimizing for other metrics also speeds things up, but other times trade-offs are required.

Regardless of what you’re measuring, you must have something to measure. If you’re not measuring anything, you can’t tell whether changes improve performance or harm it. Good metrics have these three properties:

  • They’re quantifiable. You can measure them and record them as a number.
  • They’re consistent. You can measure them repeatedly and usefully compare measurements.
  • They’re meaningful. Changes in the measured value correspond to the thing you’re optimizing for.

To make this concrete, suppose you’re writing an application that’s going to perform some image-processing tasks on a large set of images. During the processing, the application needs to display feedback on its progress to the user. It also needs to allow the user to cancel an operation, rather than waiting for it to complete. This is a simple application, but even it has at least three interesting metrics that we can examine.

Example: Throughput

The first and most obvious metric is throughput. It’s meaningful, in this example, because we know we must process a large number of images. The higher the throughput, the faster that processing completes.

Throughput is easily quantified as processing per unit time. Although it could be measured as the number of images processed, measuring the number of bytes can produce a more consistent value when image sizes vary. Throughput for this example is easily measured in bytes per millisecond.

Example: Memory use

A less obvious metric for this application is memory use. Memory use is not as visible a metric to end users as is throughput. Users have to run another application, such as Activity Monitor, in order to monitor memory use. But memory use can be a limiting factor: run out of memory, and your application won’t work.

Memory use is of interest in our image-processing example because the images themselves are large. We’d like to be able to process large images—even those that exceed available RAM—without running out of memory. Memory use is straightforward to measure in bytes.

Example: Response time

The final metric for our sample application is one that’s often overlooked: response time to user input. This metric is immediately visible to all of your users, even if they rarely stop to measure it. It’s also pervasive. Users expect all operations, from resizing windows to canceling an operation to typing text, to respond immediately.

Whereas some metrics are perceived linearly by users, response time has an important threshold. Any lag in response to input over approximately 100 milliseconds is perceptible to users as slow. If your application consistently responds below this threshold, no further optimization is necessary. Clearly, this metric is easily quantified in milliseconds.

Response time is a particular challenge for the image-processing application because processing any individual image will take well over 100 milliseconds. In some programming environments this is addressed by handling user input on a different thread from long-running calculations. Under the covers, this solution depends on the operating system switching thread contexts quickly enough that the user input thread can respond in time. AIR, however, doesn’t offer a threading model for application code, so this interleaving of work and input handling must be done explicitly. This is illustrated in the next section. The following sample demonstrates three different ways of setting up image processing, each optimizing for a different metric:

<?xml version="1.0" encoding="utf-8"?>
<mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml" layout="horizontal" frameRate='45'>
    <mx:Script>
        <![CDATA[
            private static const DATASET_SIZE_MB:int = 100;

            // Process the entire data set in one pass: maximizes throughput, but
            // allocates all of the data up front and blocks the UI until it is done.
            private function doThroughput():void {
                var start:Number = new Date().time;
                var data:ByteArray = new ByteArray();
                data.length = DATASET_SIZE_MB * 1024 * 1024;
                filter( data );
                var end:Number = new Date().time;
                _throughputLabel.text = ( data.length / ( end - start )) + " bytes/msec";
            }

            // Process the data set in 1 MB chunks, reusing one buffer: bounds peak memory use.
            private function doMemory():void {
                var start:Number = new Date().time;
                var data:ByteArray = new ByteArray();
                data.length = 1024 * 1024;
                for( var chunk:int = 0; chunk < DATASET_SIZE_MB; chunk++ ) {
                    filter( data );
                }
                var end:Number = new Date().time;
                _memoryLabel.text = ( DATASET_SIZE_MB * data.length / ( end - start )) + " bytes/msec";
            }

            // Process the data set in 100 KB chunks, yielding to the event loop
            // roughly every 90 msec so the application remains responsive.
            private function doResponse():void {
                _chunkStart = new Date().time;
                _chunkData = new ByteArray();
                _chunkData.length = 100 * 1024;
                _chunksRemaining = DATASET_SIZE_MB * 1024 / 100;
                _chunkTimer = new Timer( 1, 1 );
                _chunkTimer.addEventListener( TimerEvent.TIMER_COMPLETE, doChunk );
                _chunkTimer.start();
            }

            private function doChunk( event:TimerEvent ):void {
                var iterStart:Number = new Date().time;
                while( _chunksRemaining > 0 ) {
                    filter( _chunkData );
                    _chunksRemaining--;
                    var now:Number = new Date().time;
                    if( now - iterStart > 90 ) break;
                }
                if( _chunksRemaining > 0 ) {
                    _chunkTimer.start();
                } else {
                    var end:Number = new Date().time;
                    _responseLabel.text = ( DATASET_SIZE_MB * 1024 * 1024 / ( end - _chunkStart )) + " bytes/msec";
                }
            }

            private var _chunkStart:Number;
            private var _chunkData:ByteArray;
            private var _chunksRemaining:int;
            private var _chunkTimer:Timer;

            // A stand-in for real image processing: touches every byte of the data.
            private function filter( data:ByteArray ):void {
                for( var i:int = 0; i < data.length; i++ ) {
                    data[i] = data[i] * data[i] + 2;
                }
            }

            // Moving the mouse over the canvas moves the label; this should feel
            // immediate even while processing is underway.
            private function onMouseMove( event:MouseEvent ):void {
                var global:Point = new Point( event.stageX, event.stageY );
                var local:Point = _canvas.globalToLocal( global );
                _button.x = local.x;
                _button.y = local.y;
            }
        ]]>
    </mx:Script>
    <mx:HBox width='100%' height='100%'>
        <mx:VBox width='50%' height='100%'>
            <mx:Button label='Measure throughput' click='doThroughput();'/>
            <mx:Label id='_throughputLabel'/>
            <mx:Button label='Reduce memory use' click='doMemory();'/>
            <mx:Label id='_memoryLabel'/>
            <mx:Button label='Maintain responsiveness' click='doResponse();'/>
            <mx:Label id='_responseLabel'/>
        </mx:VBox>
        <mx:Canvas
            width='50%' height='100%'
            id="_canvas"
            horizontalScrollPolicy="off"
            verticalScrollPolicy="off"
            backgroundColor="white"
            mouseMove='onMouseMove( event );'
        >
            <mx:Label text="Move Me" id="_button"/>
        </mx:Canvas>
    </mx:HBox>
</mx:WindowedApplication>

Taking measurements

Once you’ve identified and defined your metrics, but before you can address them, you must be able to measure them. Only by measuring and tracking your metrics before and after a change can you determine its impact. If possible, track all of your metrics together so you can see how changes made to optimize one metric might impact others.

Measuring throughput

Throughput can be conveniently measured programmatically. The basic pattern for measuring throughput is:

start_msec = new Date().time
do_work()
end_msec = new Date().time
rate = bytes_processed / ( end_msec - start_msec )

Measuring memory

Memory is a more complex subject. Most runtime environments, including AIR, don’t provide good APIs for determining an application’s memory use. Memory use is best monitored using an external tool such as Activity Monitor (Mac OS X), Task Manager (Windows), BigTop (Mac OS X), and the like. After selecting a monitoring tool, you need to determine which memory metric you want to track.

Virtual memory is the biggest number reported by tracking tools. As the name suggests, this does not measure the amount of physical RAM the process is using. It’s better thought of as the amount of memory address space the process is using. At any given time, some portion of the memory allocated to the process is typically being stored on disk instead of RAM. The amount of RAM plus space on disk taken together is often thought of as being equivalent to a process’ virtual memory, but it is possible that portions of the address space are in neither place. The details depend on the operating system and how it allocates portions of virtual memory for different purposes.

The absolute size of virtual memory your application is using, given what virtual memory encompasses, is likely not an interesting metric. Virtual memory of your application relative to other, similar applications may be of interest, but is still difficult to usefully compare. The most interesting aspect of virtual memory is its behavior over time: growth without bound generally indicates a memory leak. Memory leaks may not show up in other memory metrics because the leaked memory, if not referenced, gets paged to disk and then simply stays there.

The best memory metric to monitor is private bytes, which measures the amount of RAM your process is using and which is used only by your process. This metric speaks directly to the impact your application has on the overall system, courtesy of its use of a shared resource.

Private bytes will fluctuate as your application allocates and de-allocates memory. It will also fluctuate as your application alternates between active and idle because, when it’s idle, some of its pages may be paged out to disk. To track private bytes, I recommend using a monitoring tool to take periodic samples (for example, one per second) during the operations you’re optimizing.

Other memory metrics you may see in monitoring tools include resident size and shared bytes. Resident size is the total RAM use of your process, made up of private and shared bytes. Shared bytes are sections of RAM that are shared with other processes. Usually these sections contain read-only resources, such as code, from shared libraries or system frameworks. Although you can track these metrics, applications have by far the most control over—and problems with—the private bytes value.

Response time

Response time is best measured with a stopwatch. Start when the user takes an action, for example, clicking a button. Stop when the application responds, typically by changing the displayed user interface. Subtract the two and you have your measurement.

The optimization process

With goals and metrics in place you’re ready to optimize. The process itself is straightforward and should be familiar. Repeat these three steps until done:

  1. Measure
  2. Analyze
  3. Modify

Broadly speaking, analysis can lead you to one of two kinds of changes: design or code.

Design changes

Design changes generally have the largest impact. They can be more difficult to make later in the game, however, so be sure not to wait too long before defining and measuring against your performance goals.

For an example, let’s return to our image-processing application. A naive implementation might load each image in its entirety into memory, process it, and then write the results back to disk. The peak memory use (private bytes) of this application is then primarily a function of the size of the loaded images. If the images exceed available RAM, the application will fail.

Few image-processing operations are global; most can be performed on one portion of an image at a time. By dividing the image into fixed-size chunks and processing them one at a time, you can limit the peak memory use of the application to a number of your choosing. This also enables processing images that are larger than available RAM.

After modifying your design, be sure to re-evaluate all of your metrics. There is always some interplay between them as designs are evolved. Those changes may not always be what you expect. When I prototyped this sample application, processing images in fixed-size chunks did not significantly alter the throughput of the application, despite my expectation that it would be slower.

Code changes

When no further design enhancements present themselves, turn to tuning your code. There are many techniques to experiment with in this arena. Some are unique to ActionScript; some are not.

Be careful not to apply code changes too early. They tend to sacrifice readability and structure in the name of performance. This isn’t necessarily bad, but if applied too early they can reduce your ability to evolve and maintain your application. As Donald Knuth said, “premature optimization is the root of all evil.”

Purpose-built test applications

Real-world applications are often large, complex, and full of code that runs fast enough. To help focus your optimization on key operations, consider creating a test application for just that purpose.

Among other advantages, the test application provides a place to include instrumentation (for example, for measuring throughput) without requiring that you include that code in your final application.

Of course, you need to validate that your optimization results still apply when your improvements are ported back to your application.

Chunking work

As mentioned earlier, the AIR runtime does not provide a mechanism for executing application code on a background thread. This is particularly problematic when attempting to maintain responsiveness during computationally intensive tasks.

Much like chunking in space can be used to optimize memory use, chunking in time can be used to break up computations into short-running segments. You can keep your application responsive by responding to user input between segments.

The following pseudo-code arranges to perform about 90 msec of work at a time before relinquishing control to the main event loop. The main event loop ensures that, for example, mouse-clicks are processed. With this timing, most user input will be processed within 100 msec, keeping the application responsive enough from the user’s point of view.

var timer:Timer = new Timer( 1, 1 );
timer.addEventListener( TimerEvent.TIMER, doChunk );
timer.start();   // kick off the first chunk of work

function doChunk( event:Event ):void {
    var start:Number = new Date().time;
    while( workRemaining ) {
        doWork();
        var now:Number = new Date().time;
        if( now - start > 90 ) {
            // reschedule more work to occur after pending input is processed
            if( workRemaining )
                timer.start();
            break;
        }
    }
}

In this example, it’s important that doWork() runs for significantly less time than the chunk duration in order to maintain responsiveness. To stay under 100 msec in the worst case, it should run for no longer than 10 msec.

Again, re-measure all metrics after adopting an approach like this. In my image-processing application, my throughput dropped by about 10% after adopting this chunking approach. On the other hand, my application was responsive within 100 msec to all user input—instead of only between images. I consider that a reasonable trade-off.

Wrapping up

Creating high-performance applications isn’t easy, but it is a problem that responds to disciplined measurement, analysis, and incremental improvement. AIR applications are not fundamentally different in this regard.

Performance is also an evolving target. Not only does each set of improvements potentially impact your other metrics, but underlying hardware, operating system, and other changes can also shift the balance between what’s fast and what’s slow. Even what you’re optimizing for might change over time.

With good practices in place you’ll be able to create high-performance AIR applications—and keep them that way. Just don’t let your guard down. All it takes is one slow feature to have users asking, “Is your application fast enough to do X?”

Via adobe.com

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License