Sizing memory pools for Service Manager servlets

  • KM02983782
  • 06-Oct-2017
  • 17-Dec-2018

Summary

This document explains the different memory pools and how they can be sized, the aspects to consider when sizing them, as well as how to monitor memory pool allocation and how to review log files for memory issues.

Question

Sizing the memory pools of Service Manager is an important topic for reliability, capacity management and scalability.

A single Service Manager process (or "servlet") can address up to 4 GB of virtual address space (VAS), which contains different memory pools:

  • The operating system allocates some part of the VAS, although this is insignificant on a 64-bit operating system.
  • Some files, such as the executables, are loaded into the VAS.
  • The shared memory is mapped into the VAS.
  • The Java heap is reserved in the VAS.
  • The rest is referred to as the native heap, and each session requests memory from here.

This document describes how to size the last three of these pools (shared memory, Java heap and native heap) for optimal benefit of the Service Manager system.

Answer

Only the shared memory and the Java heap can be sized by configuration: shared memory via the shared_memory parameter in sm.ini, and the Java heap via JVMOption<n>:-Xmx in sm.ini or sm.cfg. The native heap is the only remaining variable-sized pool, so reducing shared memory or the Java heap increases the available native heap, and vice versa.
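
A minimal configuration sketch (parameter names as referenced above; the values are illustrative only and must be derived from the sizing steps below):

    # sm.ini - shared memory is allocated once per host (value in bytes)
    shared_memory:96000000

    # sm.ini or sm.cfg - Java heap per servlet via a JVM option
    JVMOption1:-Xmx256M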

Whenever a request for memory inside one of these pools cannot be served, the requesting session terminates with a failure. So there are different out-of-memory scenarios, depending on which pool cannot serve the request. As these pools are shared by all sessions running on the same servlet (or, for shared memory, by all sessions running on the same host), a servlet may fail because of the misbehaviour of another session.

 

This is the reliability aspect:

  • None of the pools must be sized too small.
  • Sessions should not request large amounts of memory within a short time, for example oversized XML requests that drive up the Java heap requirements in web services (use web service paging where possible), or memory-unaware JavaScript implementations that cause high native heap allocation.

 

The native heap stores the session-private data, so the capacity management aspect is:

  • How many sessions can be run concurrently on one servlet (parameter: threadsperprocess)?
  • How much memory do the sessions consume on average and in typical scenarios? (A worked example follows this list.)
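
As a rough per-servlet budget (illustrative values only, to show the reasoning):

    Native heap  =  4 GB VAS - 300 MB shared memory - 256 MB Java heap - executables/OS overhead
                 =  roughly 3.4 GB
    If a session needs about 10 MB of private data on average, a servlet with
    threadsperprocess:50 would use around 500 MB of native heap for session data,
    leaving headroom for RAD stacks and peak allocations.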

 

The scalability aspect derives from this:

  • How many servlets do I require to run with the given capacity?
  • How many hosts do I require to run this amount of servlets?
  • How much physical memory do I require on each host?

The fewer SM processes I need to start, the better for the overall memory requirements: while shared memory is allocated only once per host, and native heap allocation depends on the number of sessions started (not on the number of servlets), the Java heap is allocated for each SM process.
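
For example (illustrative numbers only), serving 1000 concurrent sessions on one host with a 256M Java heap per servlet:

    20 servlets x  50 sessions each -> 20 x 256 MB = 5120 MB of Java heap
    10 servlets x 100 sessions each -> 10 x 256 MB = 2560 MB of Java heap

In both cases the shared memory is allocated only once and the native heap consumption follows the 1000 sessions, so the second layout saves roughly 2.5 GB of physical memory.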

For this reason, it is memory-aware configuration to increase threadsperprocess, and to pool multiple background schedulers into a single SM process by defining startup records in the info dbdict instead of starting each scheduler as a separate SM process from sm.cfg. Of course, all these sessions and background processes together must not request more memory than is available in the native heap.
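
A sketch of the difference in sm.cfg (the scheduler names and switches are illustrative and depend on the installation):

    # One SM process per scheduler - each line below reserves its own Java heap
    sm -que:ir
    sm -que:event

    # Pooled alternative - one SM process running the schedulers that are
    # defined as startup records in the info dbdict
    sm system.start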

 

Right-sizing shared memory

 

The initial size required for shared memory can be calculated with this formula:

          48 MB + 1 MB per 10 users + IR cache size
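
For example (illustrative numbers only), a system expected to serve 500 concurrent users with an IR cache of 200 MB would start at roughly:

    48 MB + (500 / 10) x 1 MB + 200 MB = 48 MB + 50 MB + 200 MB = 298 MB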

It is advisable to monitor the shared memory usage at run time: the free memory should always be between 25 and 75 percent of the shared memory size excluding the IR cache size.

Note:

The command sm -reportshm outputs the shared memory size and the free space. The percentage calculated by the report is based on the total shared memory size (including the IR cache) and may therefore falsely indicate that a shared memory increase is required. It is advisable to calculate the 25 and 75 percent alert points beforehand and compare against these values.
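
For example (rounding the illustrative numbers above to 300 MB of shared memory, of which 200 MB is IR cache):

    Size without IR cache:   300 MB - 200 MB = 100 MB
    Free-space alert points: 25% of 100 MB = 25 MB and 75% of 100 MB = 75 MB

Free space reported below 25 MB or above 75 MB would then warrant a review of the shared memory size.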

The IR cache size can be configured with the ir_max_shared parameter, and it is recommended to set this parameter. The larger the portion of the IR index that can be held in the IR cache, the better the IR performance. For that reason, reducing the IR index size by specifying exactly what is indexed by IR and keeping the stop word list up to date is important to keep shared memory small.

https://docs.microfocus.com/SM/9.52/Codeless/Content/performance/shared_memory/shared_memory_sizing.htm

 

Right-sizing Java Heap

The minimum Java heap size for Service Manager servlets is considered to be 96M. This is the default size for servlets running background schedulers and system state reports. For servlets communicating with SM clients or external applications, the default is 256M, as these produce and process XML documents.

In Service Manager 9.x, the reasons to size the Java heap differently from these defaults are rare. It is best practice to size the Java heap per servlet in the sm.cfg file and not as a default in sm.ini.
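
A possible sm.cfg layout following this practice (ports and values are illustrative only):

    # Background servlet - keeps the 96M default, no JVM option needed
    sm system.start

    # Client-facing servlet - larger Java heap for XML processing
    sm -httpPort:13080 -httpsPort:13443 -JVMOption1:-Xmx256M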

  

Native Heap

After shared memory and Java heap are sized, the remaining space in the VAS is available as native heap and is shared by all sessions running on this servlet.

One sizable internal part of the native heap is determined by the number and the size of RAD stacks: each session can start multiple RAD threads (typically, each RAD thread corresponds to one tab in the client).

The number of RAD stacks can be limited with the appthreadsperprocess parameter, while the size of each RAD stack is specified by the agstackl parameter. agstackl specifies the size of a RAD stack as a number of so-called "frames" of 32 bytes each.

Recursive RAD implementations may cause failures because the RAD stack is exceeded.

A RAD stack of agstackl * 32 bytes is created for each application thread. Up to threadsperprocess * appthreadpersession RAD stacks can be generated per servlet.
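
A worked example of the worst case (illustrative values only):

    RAD stack size:     agstackl x 32 bytes, e.g. 20000 x 32 bytes = 640 KB
    Maximum RAD stacks: threadsperprocess x application threads per session,
                        e.g. 50 x 10 = 500 stacks
    Worst case:         500 x 640 KB = 320 MB of native heap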

 

Memory monitoring

Service Manager implements memory monitoring features for the Java heap, native memory and RAD stack, controlled by the sm.ini parameter memorypollinterval. By default, each servlet checks the available space in these pools every 15 seconds. When allocation increases beyond 90 percent, the servlet moves into low memory mode until allocation falls below 70 percent again. In low memory mode, the servlet does not accept new sessions and blocks the opening of more application threads.
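
The polling interval itself is set in sm.ini (assuming the value is in seconds, matching the 15-second default described above):

    # sm.ini - poll memory pools every 15 seconds
    memorypollinterval:15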

https://docs.microfocus.com/SM/9.52/Codeless/Content/serversetup/concepts/monitoring_memory_in_service_manager_processes.htm

 

Shared memory can be analysed by running the system report sm -reportshm. As this starts another process, it should not be executed too frequently (say, no more than once every 5 minutes).
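
If regular reporting is wanted, it can be scheduled externally, for example via cron (the path and log file below are illustrative only):

    # Run the shared memory report every 5 minutes and append it to a file
    */5 * * * * /opt/sm/RUN/sm -reportshm >> /opt/sm/logs/shm_report.log 2>&1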

 

Log review for memory allocation issues

 

Search log files for these strings:

"Process Low on Java Memory"

"JavaMemory"

"NativeMemory"

 Examples:

    JRTE D JavaMemory Max(123928576) Used(1011872) %Used(0.0)

    JRTE D NativeMemory Max(2147352576) Used(358121472) %Used(16.0)

    JRTE W Process Low on Java Memory. Max(123928576) Used(119603800) PercentUsed(96.0)

    JRTE W Send error response: Server is running low on memory try again.

    RTE I Process Java Heap Memory is back to normal range.

These messages refer to memory monitoring; please refer to the documentation link above.
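
A quick way to scan a servlet log for these markers (the log file name and location depend on the installation):

    grep -E "Process Low on Java Memory|JavaMemory|NativeMemory" sm.log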

  

"-Memory"

Example:

    RTE I -Memory : S(20133571) O(6239881) MAX(26373452) - MALLOC's Total(37655779)

These messages are printed to the log file when native heap allocation exceeds specific limits. RTM:2 prints additional "-Memory" messages to the log providing delta information. Not every "-Memory" message is relevant. Look at the MAX() value: if it exceeds 40M, it is worth analysing the root cause of the allocation.

If the same session logs these messages repeatedly with an increasing MAX() value, it is very likely caused by an implementation issue - typically memory-unaware use of JavaScript, such as:

  • Declaring variables for large contents inside a loop. See https://docs.microfocus.com/SM/9.61/Codeless/Content/programming/javascript/concepts/avoiding_variable_declaration_inside_a_loop.htm
  • Extensive string concatenation. Consider using an Array object instead, pushing the data into it (method push()) and then building the string from the Array (method join()), as sketched below.
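
A minimal JavaScript sketch of the concatenation pattern (the records array is a placeholder for whatever data is being assembled):

    // Memory-unaware: repeated += creates many intermediate strings
    var text = "";
    for (var i = 0; i < records.length; i++) {
        text += records[i] + "\n";
    }

    // Memory-aware alternative: collect the parts in an Array and join them once
    var parts = new Array();
    for (var j = 0; j < records.length; j++) {
        parts.push(records[j]);
    }
    var text2 = parts.join("\n");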

 

"RAD Stack"

 Example: 

    RTE I RAD stack is 71% used, please exit out of current application.

This is a memory monitoring message for the RAD stack. As the RAD stack is typically sized sufficiently (agstackl parameter), this message usually indicates a recursive RAD implementation that should be reviewed.