eDirectory Best Practices Guide, memory configuration for eDirectory

  • 3178089
  • 20-Dec-2006
  • 09-Jan-2017

Environment

Novell eDirectory 8.7.3 for All Platforms
Novell SUSE Linux
Red Hat Enterprise Linux
Novell NetWare 6.5
Large eDirectory database (DIB) - typically above 700 MB
Novell eDirectory 8.8 for All Platforms

Situation

eDirectory Best Practices Guide, memory configuration for eDirectory
NDSD process consumes memory beyond the expected limits.
ndstrace shows -150 errors in synchronization

Resolution

NOTE: The following information applies to 32-bit eDirectory 8.7.3. Many of the memory limitations described here are not present in current versions. For 64-bit versions of eDirectory, consult the following sources for tuning guidance:

General Subsystems
- The eDirectory Tuning Guide found in the 8.8 SP8 or 9.0.x documentation, as applicable.

Synchronization Tuning (which can also affect memory use)
- KB 7015361 - Quick Start: New tuning parameters introduced in eDirectory 8.8 SP8 explained

Additional Information

The following is an excerpt from the eDirectory Best Practices guide found in LogicSource:

For information on accessing the complete eDirectory Best Practices guide in LogicSource, see the Novell Technical Subscriptions page.

Memory

When discussing memory usage with eDirectory, two factors need to be considered.

- eDirectory Application memory.
- eDirectory Database Cache.



NetWare

Generally speaking, NetWare has only one virtual memory space in which all processes run. With this in mind, all eDirectory modules (DS.NLM, NLDAP.NLM, EMBOX.NLM, etc.) share that virtual memory space with all other applications, such as NSS, TCPIP, and GroupWise.



Windows
Windows assigns a separate virtual memory space to every process that runs. This means that any resource a process needs to load into memory must reside within the virtual memory space assigned to that process. The maximum amount of virtual memory that can be assigned to each process is 2 GB.

NOTE: With some modifications to the Windows boot configuration (for example, the /3GB boot switch), this maximum can be increased to 3 GB.

The 2 GB limit applies to DHOST (the main eDirectory process) as well. The implication is that any library, thread, module, or cache (such as eDirectory's database cache) that DHOST needs to load into memory must reside within the assigned 2 GB virtual memory space.



Linux and Unix
Like Microsoft Windows, the Linux and Unix operating systems also assign a virtual memory space to the eDirectory process (NDSD). These operating systems impose a 3 GB limit on the NDSD process and its resources, including eDirectory's FLAIM cache.

IMPORTANT: DHOST on Windows and NDSD on Linux and Unix cannot exceed the per-process memory maximums imposed by the operating system. When these maximums are reached, depending on the operations being performed at the time, adverse effects such as non-responsiveness, sluggishness, application time-outs, and possible shutdown of the eDirectory processes can occur.

Novell has made every attempt possible to minimize the impact this has on end users. However, if eDirectory is consistently consuming memory close to the maximum allowed, it is recommended to explore configuration options that decrease the amount of memory consumed.

This document is intended to help administrators understand some of these configuration options.



eDirectory's Database Cache Options

eDirectory's database cache is an important factor in performance. Accessing and updating information in memory is much faster than going to disk, finding the information, loading it into memory, making the changes or reading the information, and then writing it back to disk. It stands to reason that if eDirectory loads as much information into memory as possible, without starving other processes (inside or outside eDirectory's virtual address space) of memory, eDirectory can achieve optimal performance. The challenge for an eDirectory administrator is to determine the maximum amount of memory to allocate to eDirectory's database cache to achieve optimal performance. There are two modes in which eDirectory can determine the maximum amount of memory to use for database cache: dynamic mode and static mode.

The basic difference between the two modes is that in dynamic mode, eDirectory polls the operating system at a specified time interval to determine how much memory is available and requests memory from the operating system accordingly. Dynamic mode also provides configurable parameters that allow the administrator to control how much of the free memory eDirectory may use.
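
To make the polling behavior concrete, the following is a minimal conceptual sketch in Python; it is not eDirectory's actual implementation. It simply illustrates how a cache target could be recomputed each polling interval from the free memory the operating system reports, using an administrator-tunable percentage that defaults to 80% as noted below.

    # Conceptual sketch only -- not eDirectory code. Illustrates how a
    # dynamic cache target could be recomputed each polling interval from
    # the free memory reported by the operating system.

    def dynamic_cache_target(free_memory_bytes, percent=80):
        """Cache size (in bytes) dynamic mode would aim for this interval."""
        return free_memory_bytes * percent // 100

    # Example: with roughly 1 GB reported free and the default 80% setting,
    # the target for this interval is 800,000,000 bytes.
    print(dynamic_cache_target(1_000_000_000))   # 800000000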

Although dynamic mode is very efficient and works well for most eDirectory implementations, it has some limitations. By default, dynamic mode will consume up to 80% of available memory. This can cause several issues:

- The maximum virtual memory space can be reached. If there is a large amount of memory on the system, it is possible to reach the maximum amount of virtual memory space per process defined by the operating system. As stated previously, the adverse effects vary depending on what eDirectory operations are occurring when this state is hit.

- eDirectory can starve other processes on the server of memory. The file system, communication protocols, and other applications require memory as well. If only 20% of memory is left for these applications (which also include the eDirectory application itself), then depending on the amount of memory on the server, the other applications may show performance degradation.

- The eDirectory application can be starved of memory. As just mentioned, if only 20% of available memory is left on the server, the processes that make up the eDirectory application itself could experience slowness. If there is not enough available memory to perform a given operation, information will have to be swapped between memory and disk. Disk I/O is slow and will degrade many eDirectory operations.

IMPORTANT:
When dealing with available memory, most operating systems utilize a swap file. If information has to be swapped between memory and disk, performance will suffer.

- Increased overhead with database functions. It stands to reason that the more database cache is allocated, the more overhead is required to maintain the cache. With this in mind, more memory assigned to database cache is not always better. There is a point at which the overhead required to maintain the database cache costs more than going to disk to access the information in the first place.

- The database is too large to load a reasonable amount of it into cache. eDirectory is a highly scalable identity management system, and it is not atypical to see an eDirectory database that exceeds several GB. In these circumstances, it does not make sense to allocate a large cache that still holds only a small fraction of the data: resources are wasted scanning through memory, only to find that in most cases eDirectory needs to go to disk anyway to access the desired information. Limiting the amount of database cache in these cases decreases the overhead required to scan the cache before determining that a disk read is necessary.

With the above factors in mind, the maximum amount of memory allowed for database cache becomes a crucial factor in performance. As a general rule, the recommendation for most systems is to implement eDirectory with the default cache mode, which is dynamic mode. Make configuration changes only if the performance of eDirectory, or of other applications on the same server, is not acceptable and the problem can be attributed to memory consumption.

In cases where performance is not acceptable, eDirectory provides tools through NDS iMonitor that assist in determining the correct settings. It is recommended that static mode be used in conjunction with the statistics in NDS iMonitor as well as other performance tests.

IMPORTANT: NDS iMonitor provides statistics about how cache is being used. However, it is a mistake to rely only on these statistics, because they cannot identify the adverse effects that could occur if too much memory is allocated to eDirectory's database cache.



Dynamic Mode Versus Static Mode

Although dynamic mode configuration options in NDS iMonitor can be used to control the maximum amount of memory assigned to database cache, it is recommended that static mode be used while determining the optimal settings. When memory is released by a process, it is the responsibility of the operating system to add the released memory back to the free memory pool, and each operating system handles this differently. False assumptions can be made if this is not understood and watched.

If, for instance, the NSS file system is being starved of memory, causing I/O performance issues, a logical step would be to decrease the amount of cache that eDirectory is using, thus freeing memory for NSS to use. An administrator could decrease the amount of cache used by eDirectory's database (with either static or dynamic settings). If NetWare does not immediately add the released memory to the free pool, NSS cannot use the newly freed memory right away. If a new performance test is run before NSS consumes the memory, the result will show no change, when in actuality the change may increase performance once NSS consumes the memory.

Because dynamic mode scans the available memory pool, there are many variables to consider. This can be simplified by setting a static limit while determining the optimal settings, because a static setting is not completely dependent on available memory.

TIP: With both static and dynamic mode, memory is not allocated to eDirectory up front. eDirectory requests memory as needed until the limit is reached.
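
As a concrete, hedged illustration of what a static limit looks like: per Novell's tuning documentation, a hard database cache limit can be expressed in bytes on a cache= line in the _ndsdb.ini file in the DIB directory (the same limit can also be set through NDS iMonitor | Agent Configuration | Database Cache). The small sketch below only converts megabytes to that byte value; verify the file location and exact option syntax against the documentation for the eDirectory version in use.

    # Minimal sketch: build the _ndsdb.ini line for a static (hard) cache
    # limit, assuming the value is specified in bytes as documented for
    # eDirectory 8.7.3/8.8. Always confirm against your version's docs.

    def ndsdb_cache_line(limit_mb):
        """Return a cache= line for a static limit of limit_mb megabytes."""
        return "cache=%d" % (limit_mb * 1024 * 1024)

    print(ndsdb_cache_line(500))    # cache=524288000   (500 MB hard limit)
    print(ndsdb_cache_line(1024))   # cache=1073741824  (1 GB hard limit)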



Determining the Database Cache settings

It is a common misconception that more memory is always better. Because the main bottleneck is the file system, it does make sense to load as much of the directory data as possible into memory. However, allocating too much memory to Novell eDirectory can cause unwanted effects. By default, eDirectory's database cache will consume up to 80% of available RAM. In large environments this is often too much: it becomes very costly for the server to manage a large amount of memory. As items are cached, the cache must be continually scanned for required entries, and if the entries are not in cache, the disk must be accessed to get them.

If, for instance, there is a 4 GB database and the hardware limits database cache to 2 GB, it would be unwise to allocate all of the 2 GB to database cache. The reason is that each entry can potentially be written to cache 3 or more times, meaning eDirectory could need up to 16 GB to cache the entire database. Basic mathematics suggests that eDirectory will be going to disk for entries more often than it finds them in cache. It does not make sense to spend most of the time scanning a large amount of memory and then going to disk anyway.
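
The arithmetic in the example above can be sketched as follows (the figures are the assumed ones from the paragraph, not measurements):

    # Rough sizing illustration using the assumed figures above.
    db_size_gb       = 4    # on-disk DIB size
    copies_per_entry = 4    # each entry may be cached 3 or more times
    cache_limit_gb   = 2    # memory available for database cache

    cache_needed_gb = db_size_gb * copies_per_entry      # up to ~16 GB
    coverage        = cache_limit_gb / cache_needed_gb   # ~12% of that

    print("Caching everything would need ~%d GB; a %d GB cache covers ~%.0f%%"
          % (cache_needed_gb, cache_limit_gb, coverage * 100))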

Novell's testing has found that in large eDirectory implementations (especially those with a lot of LDAP traffic), setting the cache between 250 MB and 1 GB achieves optimal performance. If there are a lot of writes (such as bulkloads), Novell's testing has found that optimal performance is achieved by setting the cache limit between 250 MB and 500 MB, with block cache at 75%.

Novell's testing has found that setting cache limits below 250 MB causes adverse performance effects. Increasing database cache above 1 GB shows minimal, if any, performance increase (in most, but not all, cases) and often shows a significant decrease in performance.

NOTE: As explained previously in this document, increasing cache too far will cause adverse effects. Although Novell's testing has shown 1 GB to be the general upper bound for optimal performance, there may be cases where cache settings greater than 1 GB perform better. Use the NDS iMonitor statistics and performance tests to determine the optimal setting.

It is advisable to use the database cache statistics found in NDS iMonitor | Agent Configuration | Database Cache to help determine optimal performance.

WARNING:
When setting static limits, undesired performance degradation will occur if too little or too much memory is allocated. If too little memory is allocated, eDirectory will not be able to perform even basic functions without going to disk, which will significantly impact performance. In larger systems, do not set the static limit lower than 250 MB.

Similar performance issues can occur indirectly by setting the static limit too high. A good rule of thumb is to not set the static limit above 50-75% of total physical memory, and to avoid allocating more than 1 GB of memory to eDirectory's database cache.
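
To summarize these rules of thumb, here is a small sketch; the thresholds are this guide's recommendations, not product-enforced limits, and the function name is purely illustrative:

    # Clamp a proposed static cache limit to the guide's rules of thumb:
    # no lower than 250 MB, no higher than about 1 GB, and no more than
    # 50-75% of physical memory (75% is used here).

    def suggest_static_cache_mb(physical_ram_mb, requested_mb):
        floor_mb   = 250
        ceiling_mb = 1024
        ram_cap_mb = int(physical_ram_mb * 0.75)
        return max(floor_mb, min(requested_mb, ceiling_mb, ram_cap_mb))

    # Example: with 4 GB of RAM, a proposed 2 GB cache is clamped to 1 GB.
    print(suggest_static_cache_mb(4096, 2048))   # 1024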


Using NDS iMonitor to Fine Tune Database Settings

There are four types of cache measurements shown in NDS iMonitor.

- Cache Hits: How many times an item is found in cache and used.

- Cache Looks: How many links (specifically, links on the collision chain) are followed through cache until the entry is found.

- Faults: How many times an attempt was made to find an entry in cache and the entry was not there.

- Fault Looks: How many links are followed through cache only to find that the entry is not in cache, thus generating a fault.

The formula cache looks / cache hits * 2 gives the average length of the collision chain (the number of links that must be followed to find an object). The factor of 2 comes from the fact that, on average, a hit is found halfway down the collision chain.

The formula fault looks / faults is another way to estimate how long the collision chain is, since every fault results in as many fault looks as there are links in the chain.

The primary metric by which cache performance should be measured is the number of faults per request. Each fault adds expensive overhead and, depending on the operating system, can cost hundreds of CPU instructions.
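
For illustration, the sketch below applies these formulas to hypothetical counter values of the kind shown in NDS iMonitor (the variable names and sample numbers are assumptions, not real statistics):

    # Derive the cache health numbers discussed above from the four
    # iMonitor counters (sample values are made up for illustration).

    cache_hits  = 400_000     # lookups satisfied from cache
    cache_looks = 1_000_000   # collision-chain links followed for those hits
    faults      = 50_000      # lookups not satisfied from cache
    fault_looks = 300_000     # links followed before declaring each miss
    requests    = cache_hits + faults

    avg_chain_from_hits   = cache_looks / cache_hits * 2   # ~5 links
    avg_chain_from_faults = fault_looks / faults           # ~6 links
    faults_per_request    = faults / requests              # ~0.11

    print(avg_chain_from_hits, avg_chain_from_faults, faults_per_request)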

WARNING: Although these statistics identify cache performance, they do not identify overall performance. It is logical that, generally speaking, increasing cache performance will increase overall performance. However, increasing cache performance at the cost of starving other processes of resources could create a degradation in overall performance.

Increasing cache hits by adding more memory (and thus loading more entries into cache) may yield little if any gain if faults, and especially fault looks, are not decreased.

The balance lies in limiting the number of collision chain links by decreasing the amount of memory used, without restricting memory so much that cache hits drop too low.


Recommendation Summary

When it has been determined that performance is unacceptable, set the static cache limit between 500 MB and 1 GB. Take a baseline of the database cache statistics in NDS iMonitor. Adjust the setting up and down, monitoring the results in the statistics, until an optimal balance is found where collision chain links are minimized, cache hits are maximized, and performance test results are acceptable.

Set the eDirectory cache settings according to the DIB size and the actual memory available to the application from the OS.

On Linux, if the DIB is larger than 700 MB, set a static cache limit of no more than 1 GB; the optimal cache setting will most likely be somewhere between 300 MB and 600 MB.

Formerly known as TID# 10094523