When stress testing Enterprise software, we might encounter "OutOfMemory" exceptions.
This post talks about:
- Memory leaks and Memory Analyzing tools
- What are some tips for managing memory leaks?
1. Memory leaks and Memory Analyzing tools
Your starting point could be your server logs, which might contain "OutOfMemory" exceptions. You could also use a memory analyzer tool to delve deep into your software and detect memory leaks.
Some widely used memory-analysis tools are Eclipse Memory Analyzer (MAT), VisualVM, JProfiler, and YourKit.
2. What are some tips for managing memory leaks?
For our discussion here, let's consider a J2EE application server (e.g., JBoss, IBM WebSphere, Oracle WebLogic) as an example. Assume the application server is configured in cluster mode.
- Benchmarking: To begin with, we need some reference memory footprints to serve as benchmark data points for our analysis. We can start with at least three use cases at varying stress loads: LIGHT, MODERATE, and HEAVY. Based on these three data points, we can extrapolate and estimate the minimum memory required for various loads.
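The extrapolation step can be sketched in plain Java. This is only an illustration with made-up numbers: a least-squares line is fit through the three (load, heap) benchmark points and used to estimate the heap needed at a heavier load. Real heap growth is rarely perfectly linear, so treat the result as a rough starting point.

```java
// Rough linear extrapolation of heap usage from benchmark data points.
// All load/memory numbers here are hypothetical, not real measurements.
public class HeapExtrapolation {
    // Least-squares fit over (load, memoryMb) points; estimates memory at targetLoad.
    static double estimateMemoryMb(double[] loads, double[] memoryMb, double targetLoad) {
        int n = loads.length;
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; i++) {
            sumX += loads[i];
            sumY += memoryMb[i];
            sumXY += loads[i] * memoryMb[i];
            sumXX += loads[i] * loads[i];
        }
        double slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        double intercept = (sumY - slope * sumX) / n;
        return slope * targetLoad + intercept;
    }

    public static void main(String[] args) {
        double[] loads = {100, 500, 1000};       // LIGHT, MODERATE, HEAVY (requests/sec)
        double[] memoryMb = {512, 1024, 1664};   // observed heap footprints per load
        System.out.printf("Estimated heap at 2000 req/s: %.0f MB%n",
                estimateMemoryMb(loads, memoryMb, 2000));
    }
}
```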
- -Xms and -Xmx:
- The JVM command-line options -Xms and -Xmx used to start the application server instances could be increased. To begin with, -Xmx could be raised to approximately 70% of the maximum memory allowed by the system (this is just a rule of thumb; based on trial and error, the threshold can be tuned for your use case). This should at least delay the occurrence of "OutOfMemory", which is a measurable indicator.
- When a service call is made in a cluster environment with many server instances, we cannot predict how calls are routed. The routing algorithm could be round-robin, least busy, etc. (my understanding is that this algorithm is chosen when the cluster environment is configured). Based on that, the service call is routed to a server instance, which handles the request. When we see "OutOfMemory" issues in the server logs, the first thing to do is correlate the request with a server instance; that leads to the instance that is starving for memory, and more memory needs to be allocated to it. To be consistent, we can set the JVM command-line options -Xms and -Xmx to the same values on every server instance.
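To see how close a given instance is running to the -Xmx ceiling, the JVM's own Runtime API can be queried. This is a minimal sketch; in practice a profiler or JMX gives much richer data, but even this much lets you correlate a benchmark run with the heap settings chosen for each instance.

```java
// Minimal sketch: report JVM heap headroom so benchmark runs can be
// correlated with the -Xms/-Xmx settings of each server instance.
public class HeapStats {
    // Bytes currently in use on the heap.
    static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxMb = rt.maxMemory() / (1024 * 1024); // upper bound set by -Xmx
        long usedMb = usedBytes() / (1024 * 1024);
        System.out.println("max heap: " + maxMb + " MB, used: " + usedMb + " MB");
    }
}
```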
- References to objects that are no longer needed (i.e., objects that have gone out of the scope of your control) could be set to null. This indicates to the garbage collector that the object is no longer needed, so its associated memory can be relinquished back to the heap.
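This mainly matters for long-lived references such as fields, static caches, or array slots; nulling ordinary local variables is rarely useful because they fall out of scope anyway. A minimal sketch, based on the classic stack example (names are illustrative):

```java
import java.util.Arrays;

// Sketch: nulling out references held in a long-lived structure so the
// garbage collector can reclaim the popped objects.
public class SimpleStack {
    private Object[] elements = new Object[16];
    private int size = 0;

    public void push(Object e) {
        if (size == elements.length) elements = Arrays.copyOf(elements, 2 * size);
        elements[size++] = e;
    }

    public Object pop() {
        Object result = elements[--size];
        elements[size] = null; // drop the stale reference; otherwise the array pins the object
        return result;
    }

    Object slot(int i) { return elements[i]; } // visibility hook for testing
}
```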
- Database Tier:
- When querying DB tables, if you are using a ResultSet, note that it has a configurable parameter, fetchSize. Experiment with that value and see if it helps.
- If possible, consider using cursors for querying DB tables. Let's take a typical query, "select * from <Table>", a commonly used full table scan. If ResultSets are being used, this could return a ginormous ResultSet object, which is then transformed into other objects (JAXB objects, etc.) and passed around to the upper tiers, holding up a lot of memory. Using cursors, we can fine-tune this and process rows on demand, thereby consuming less memory.
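Since a live database is not available here, the following pure-Java sketch models the difference: materializing every row up front versus streaming rows one at a time through an iterator, which is the same idea behind JDBC's Statement.setFetchSize and server-side cursors. The row-generating methods are hypothetical stand-ins for real queries.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch: streaming rows on demand instead of materializing the full result.
public class RowStreaming {
    // Eager: builds the entire "result set" in memory at once.
    static List<String> fetchAll(int rowCount) {
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < rowCount; i++) rows.add("row-" + i);
        return rows;
    }

    // Lazy: produces one row at a time, analogous to a cursor or small fetchSize.
    static Iterator<String> streamRows(int rowCount) {
        return new Iterator<String>() {
            int next = 0;
            public boolean hasNext() { return next < rowCount; }
            public String next() { return "row-" + next++; }
        };
    }

    public static void main(String[] args) {
        // The caller only ever holds one row at a time.
        Iterator<String> it = streamRows(1_000_000);
        long processed = 0;
        while (it.hasNext()) { it.next(); processed++; }
        System.out.println("processed " + processed + " rows");
    }
}
```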
- EJB: These tips are specific to EJB technologies.
- The number of pooled instances configured for each EJB could be tuned down or up. Even if this does not have a direct impact, it would not harm.
- Similarly, if JMS queues or topics are used, you can experiment with the flow-control configuration parameters. Tune the pool instance counts down or up and see if there is an impact.
- Suppose your application has a stateless session bean with methods that fetch all rows from a DB table in bulk. If the table is populated with a huge number of rows, trying to fetch them all in one call could lead to an "OutOfMemory" issue. Instead, consider a stateful session bean approach, and use the Iterator pattern to fetch rows from the table in batches. The stateful session bean can be programmed to remember how many rows it has fetched so far.
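A minimal plain-Java sketch of the stateful, batched approach. In a real application this would be a @Stateful session bean pulling from the database; here the class, its names, and the in-memory row source are hypothetical, and only the pattern matters: conversational state (the cursor) is kept between calls so each call returns just one batch.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a stateful pager: remembers the last row fetched and returns
// the next batch on each call, instead of loading the whole table at once.
public class BatchFetcher {
    private final int totalRows;  // stands in for the DB table size
    private final int batchSize;
    private int cursor = 0;       // conversational state a @Stateful bean would keep

    public BatchFetcher(int totalRows, int batchSize) {
        this.totalRows = totalRows;
        this.batchSize = batchSize;
    }

    // Returns the next batch of row ids; an empty list means we are done.
    public List<Integer> nextBatch() {
        List<Integer> batch = new ArrayList<>();
        int end = Math.min(cursor + batchSize, totalRows);
        for (int i = cursor; i < end; i++) batch.add(i);
        cursor = end;
        return batch;
    }

    public boolean hasMore() { return cursor < totalRows; }
}
```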
- XML parsing (DOM, SAX, StAX, TrAX): If possible, pick the optimal XML parser for the application. DOM consumes more memory because it loads the entire document tree into memory; streaming parsers such as SAX and StAX process the document event by event and do not.
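As a small illustration of the streaming style, the StAX API that ships with the JDK (javax.xml.stream) can walk a document one event at a time, so memory use stays flat regardless of document size. The element-counting task here is just a stand-in for real processing.

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

// Sketch: StAX delivers parse events one at a time, unlike DOM, which
// builds the entire tree in memory before you can touch it.
public class StreamingXmlCount {
    static int countElements(String xml) {
        try {
            XMLStreamReader reader =
                    XMLInputFactory.newInstance().createXMLStreamReader(new StringReader(xml));
            int count = 0;
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT) count++;
            }
            reader.close();
            return count;
        } catch (XMLStreamException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<orders><order id=\"1\"/><order id=\"2\"/></orders>";
        System.out.println("elements: " + countElements(xml));
    }
}
```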