HP Operations Agent - Performance Collection Components for AIX Dictionary of Operating System Performance Metrics Print Date 09/2015 HP Operations Agent for AIX Release 12.00 ********************************************************** Legal Notices ============= Warranty -------- The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. The information contained herein is subject to change without notice. Restricted Rights Legend ------------------------ Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. Copyright Notices ----------------- ©Copyright 2015 Hewlett-Packard Development Company, L.P. All rights reserved. ********************************************************** Introduction ============ This dictionary contains definitions of the AIX operating system performance metrics for the Performance Collection Component. Please note that the metric help has been put in a more generic format and references are made to the other platforms that also support each of the metrics. ====== Application ====== APP_TIME The end time of the measurement interval. ====== APP_INTERVAL The amount of time in the interval. ====== APP_PRM_MEM_ENTITLEMENT The PRM MEM entitlement for this PRM Group ID entry as defined in the PRM configuration file. ====== APP_PRM_MEM_UPPERBOUND The PRM MEM upperbound for this PRM Group ID entry as defined in the PRM configuration file. ====== APP_ACTIVE_APP The number of applications that had processes active (consuming cpu resources) during the interval. ====== APP_SAMPLE The number of samples of process data that have been averaged or accumulated during this sample. ====== APP_PRM_MEM_STATE The PRM MEM state on this system: 0 = PRM is not installed or no memory specification 1 = reset (PRM is installed in reset condition or no memory specification) 2 = configured/disabled (The PRM memory scheduler is configured, but the standard HP-UX scheduler is in effect) 3 = enabled (The PRM memory scheduler is configured and in effect) ====== APP_PRM_SUSPENDED_PROC The number of processes within the PRM groups that were suspended during the interval. ====== APP_ACTIVE_APP_PRM The number of PRM groups with at least one process that had activity during the interval. ====== APP_PRM_STATE The PRM CPU state on this system: 0 = PRM is not installed 1 = reset (PRM is configured with only the system group. The standard HP-UX CPU scheduler is in effect) 2 = configured/disabled (the PRM CPU scheduler is configured, but the standard HP-UX scheduler is in effect) 3 = enabled (the PRM CPU scheduler is configured and in effect) ====== APP_PRM_GROUPID The PRM Group ID. The PRM group configuration is kept in the PRM configuration file. ====== APP_NAME_PRM_GROUPNAME The PRM group name. The PRM group configuration is kept in the PRM configuration file. ====== APP_CPU_NNICE_UTIL The percentage of time that processes in this group were using the CPU in user mode at a nice priority calculated from using negative nice values during the interval. 
On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== APP_CPU_NNICE_TIME The time, in seconds, that processes in this group were using the CPU in user mode at a nice priority calculated from using negative nice values during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== APP_PRM_MEM_UTIL The percent of PRM memory used by processes (process private space plus a process' portion of shared memory) within the PRM groups during the interval. PRM available memory is the amount of physical memory less the amount of memory reserved for the kernel and system processes running in the PRM_SYS group 0. PRM available memory is a dynamic value that changes with system usage. 
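The normalization behavior described in the CPU utilization entries above (for example, APP_CPU_NNICE_UTIL and APP_CPU_NNICE_TIME) can be pictured with the following minimal Python sketch. It is illustrative only: the function name, variable names, and sample values are assumptions made for the example and are not agent interfaces.

  # Minimal sketch (not agent code): how a normalized utilization value can be
  # derived from raw CPU seconds accumulated over an interval.
  def normalized_cpu_util(cpu_seconds, interval_seconds, active_cores,
                          hw_threads, ignore_mt=False):
      # With ignore_mt set (true), the divisor is the number of active cores;
      # otherwise it is the number of hardware threads (logical CPUs).
      divisor = active_cores if ignore_mt else hw_threads
      return 100.0 * cpu_seconds / (interval_seconds * divisor)

  # Example: 30 CPU seconds consumed across all logical CPUs during a
  # 60-second interval on a system with 4 cores and 8 hardware threads.
  print(normalized_cpu_util(30, 60, 4, 8, ignore_mt=True))   # 12.5
  print(normalized_cpu_util(30, 60, 4, 8, ignore_mt=False))  # 6.25

The same accumulated CPU time therefore yields a higher normalized utilization when the divisor is the number of active cores (ignore_mt set) than when it is the number of hardware threads.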
====== APP_PRM_MEM_AVAIL PRM available memory is the amount of physical memory less the amount of memory reserved for the kernel and system processes running in the PRM_SYS group 0. PRM available memory is a dynamic value that changes with system usage.
====== APP_PRM_DISK_STATE The PRM DISK state on this system:
0 = PRM is not installed or no disk specification
1 = reset (PRM is installed in reset condition or no disk specification)
2 = configured/disabled (The PRM disk management is configured)
3 = enabled/configured (The PRM disk management is enabled and volume groups are configured)
4 = enabled/unconfigured (The PRM disk management is enabled, however, no volume groups are configured)
====== APP_NUM The sequentially assigned number of this application or, on Solaris, the project ID when application grouping by project is enabled.
====== APP_NAME The name of the application (up to 20 characters). This comes from the parm file where the applications are defined. The application called "other" captures all processes not aggregated into applications specifically defined in the parm file. In other words, if no applications are defined in the parm file, then all process data would be reflected in the "other" application.
====== APP_ALIVE_PROC An alive process is one that exists on the system. APP_ALIVE_PROC is the sum of the alive-process-time/interval-time ratios for every process belonging to a given application. The following diagram of a four second interval showing two processes, A and B, for an application should be used to understand the above definition. Note the difference between active processes, which consume CPU time, and alive processes, which merely exist on the system.

             ----------- Seconds -----------
  Proc   1          2          3          4
  ----   ----       ----       ----       ----
  A      live       live       live       live
  B      live/CPU   live/CPU   live       dead

Process A is alive for the entire four second interval but consumes no CPU. A's contribution to APP_ALIVE_PROC is 4 * 1/4. A contributes 0 * 1/4 to APP_ACTIVE_PROC. B's contribution to APP_ALIVE_PROC is 3 * 1/4. B contributes 2 * 1/4 to APP_ACTIVE_PROC. Thus, for this interval, APP_ACTIVE_PROC equals 0.5 and APP_ALIVE_PROC equals 1.75. Because a process may be alive but not active, APP_ACTIVE_PROC will always be less than or equal to APP_ALIVE_PROC.
On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise.
====== APP_ACTIVE_PROC An active process is one that exists and consumes some CPU time. APP_ACTIVE_PROC is the sum of the alive-process-time/interval-time ratios of every process belonging to an application that is active (uses any CPU time) during an interval. The following diagram of a four second interval showing two processes, A and B, for an application should be used to understand the above definition. Note the difference between active processes, which consume CPU time, and alive processes, which merely exist on the system.

             ----------- Seconds -----------
  Proc   1          2          3          4
  ----   ----       ----       ----       ----
  A      live       live       live       live
  B      live/CPU   live/CPU   live       dead

Process A is alive for the entire four second interval, but consumes no CPU. A's contribution to APP_ALIVE_PROC is 4 * 1/4. A contributes 0 * 1/4 to APP_ACTIVE_PROC. B's contribution to APP_ALIVE_PROC is 3 * 1/4. B contributes 2 * 1/4 to APP_ACTIVE_PROC. Thus, for this interval, APP_ACTIVE_PROC equals 0.5 and APP_ALIVE_PROC equals 1.75. Because a process may be alive but not active, APP_ACTIVE_PROC will always be less than or equal to APP_ALIVE_PROC.
This metric indicates the number of processes in an application group that are competing for the CPU. This metric is useful, along with other metrics, for comparing loads placed on the system by different groups of processes.
On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise.
====== APP_COMPLETED_PROC The number of processes in this group that completed during the interval.
On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise.
====== APP_PRM_CPU_ENTITLEMENT The PRM CPU entitlement for this PRM Group ID entry as defined in the PRM configuration file.
====== APP_PROC_RUN_TIME The average run time for processes in this group that completed during the interval.
On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise.
====== APP_CPU_TOTAL_UTIL The percentage of the total CPU time devoted to processes in this group during the interval. This indicates the relative CPU load placed on the system by processes in this group.
On AIX SPLPAR, this metric indicates the total physical processing units consumed by applications. Hence, the sum of APP_CPU_TOTAL_UTIL for all applications must be compared with GBL_CPU_PHYS_TOTAL_UTIL.
On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available.
Large values for this metric may indicate that this group is causing a CPU bottleneck. This would be normal in a computation-bound workload, but might mean that processes are using excessive CPU time and perhaps looping. If the "other" application shows significant amounts of CPU, you may want to consider tuning your parm file so that process activity is accounted for in known applications.

  APP_CPU_TOTAL_UTIL = APP_CPU_SYS_MODE_UTIL + APP_CPU_USER_MODE_UTIL

NOTE: On Windows, the sum of the APP_CPU_TOTAL_UTIL metrics may not equal GBL_CPU_TOTAL_UTIL.
Microsoft states that "this is expected behavior" because the GBL_CPU_TOTAL_UTIL metric is taken from the NT performance library Processor objects while the APP_CPU_TOTAL_UTIL metrics are taken from the Process objects. Microsoft states that there can be CPU time accounted for in the Processor system objects that may not be seen in the Process objects. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== APP_CPU_SYS_MODE_UTIL The percentage of time during the interval that the CPU was used in system mode for processes in this group. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. High system CPU utilizations are normal for IO intensive groups. Abnormally high system CPU utilization can indicate that a hardware problem is causing a high interrupt rate. It can also indicate programs that are not making efficient system calls. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== APP_CPU_NICE_UTIL The percentage of time that processes in this group were using the CPU in user mode at a nice priority during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. 
Negative nice values range from 0 to 19. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== APP_CPU_REALTIME_UTIL The percentage of time that processes in this group were in user mode at a "realtime" priority during the interval. "Realtime" priority is 0-127. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== APP_CPU_NORMAL_UTIL The percentage of time that processes in this group were in user mode running at normal priority during the interval. Normal priority user mode CPU excludes CPU used at real-time and nice priorities. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). 
To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== APP_CPU_USER_MODE_UTIL The percentage of time that processes in this group were using the CPU in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. High user mode CPU percentages are normal for computation-intensive groups. Low values of user CPU utilization compared to relatively high values for APP_CPU_SYS_MODE_UTIL can indicate a hardware problem or improperly tuned programs in this group. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== APP_CPU_TOTAL_TIME The total CPU time, in seconds, devoted to processes in this group during the interval. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. 
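As a hedged illustration of the relationships stated in the CPU entries above (APP_CPU_TOTAL_UTIL = APP_CPU_SYS_MODE_UTIL + APP_CPU_USER_MODE_UTIL, with user-mode CPU made up of normal, nice, and real-time components), the following Python sketch cross-checks one interval of values. The record layout and the sample numbers are invented for the example.

  # Illustrative consistency check on one logged interval; the numbers are
  # made up and the dictionary layout is an assumption, not the agent's format.
  app = {
      "APP_CPU_SYS_MODE_UTIL":  4.0,
      "APP_CPU_NORMAL_UTIL":    9.5,
      "APP_CPU_NICE_UTIL":      1.0,
      "APP_CPU_REALTIME_UTIL":  0.5,
      "APP_CPU_USER_MODE_UTIL": 11.0,
      "APP_CPU_TOTAL_UTIL":     15.0,
  }

  user = (app["APP_CPU_NORMAL_UTIL"] + app["APP_CPU_NICE_UTIL"]
          + app["APP_CPU_REALTIME_UTIL"])
  total = app["APP_CPU_SYS_MODE_UTIL"] + app["APP_CPU_USER_MODE_UTIL"]

  # Small differences are expected because each metric is rounded independently.
  assert abs(user - app["APP_CPU_USER_MODE_UTIL"]) < 0.1
  assert abs(total - app["APP_CPU_TOTAL_UTIL"]) < 0.1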
====== APP_CPU_SYS_MODE_TIME The time, in seconds, during the interval that the CPU was in system mode for processes in this group. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== APP_CPU_NICE_TIME The time, in seconds, that processes in this group were using the CPU in user mode at a nice priority during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== APP_CPU_REALTIME_TIME The time, in seconds, that the processes in this group were in user mode at a "realtime" priority during the interval. "Realtime" priority is 0-127. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. 
If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== APP_CPU_NORMAL_TIME The time, in seconds, that processes in this group were in user mode at a normal priority during the interval. Normal priority user mode CPU excludes CPU used at real-time and nice priorities. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== APP_CPU_USER_MODE_TIME The time, in seconds, that processes in this group were in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. 
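Each *_TIME metric above accumulates normalized CPU seconds over the interval, so it relates to the corresponding *_UTIL percentage roughly as shown in this small sketch. The values are hypothetical, and independent rounding in the logged data can introduce small differences.

  # Hedged sketch: relating a normalized *_TIME metric to its *_UTIL
  # counterpart for one interval. Names and values are illustrative only.
  interval_seconds = 300.0                 # length of the measurement interval
  app_cpu_total_time = 45.0                # normalized CPU seconds for the group
  app_cpu_total_util = 100.0 * app_cpu_total_time / interval_seconds
  print(app_cpu_total_util)                # 15.0 (percent)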
====== APP_DISK_LOGL_IO_RATE The number of logical IOs per second for processes in this group during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the read and write system calls that are directed to disk devices. Also counted are read and write system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, writev, send, sendto, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled.
====== APP_DISK_LOGL_READ_RATE The number of logical reads per second for processes in this group during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, send, sendto, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled.
====== APP_DISK_LOGL_WRITE_RATE The number of logical writes per second for processes in this group during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, send, sendto, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block.
In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled.
====== APP_DISK_LOGL_READ The number of logical reads for processes in this group during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, send, sendto, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled.
====== APP_DISK_LOGL_WRITE The number of logical writes for processes in this group during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, send, sendto, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled.
====== APP_DISK_BLOCK_IO The number of block IOs to the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache.
Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g., /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs.
====== APP_DISK_BLOCK_IO_RATE The number of block IOs per second to the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g., /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs.
====== APP_DISK_BLOCK_READ_RATE The number of block reads per second from the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs).
On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g., /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs.
====== APP_DISK_BLOCK_WRITE_RATE The number of block writes per second to the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g., /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs.
====== APP_DISK_BLOCK_READ The number of block reads from the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g., /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs.
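One of the divergence cases described in the APP_DISK_LOGL_* entries above is that a single logical write can trigger both a physical read (to fetch the block to be updated) and a physical write (to put it back on disk). The following Python sketch models only that effect with an invented block cache; it is a conceptual illustration, not agent code.

  # Conceptual model: a logical write to a block that is not cached triggers a
  # physical read (fetch the block) plus a later physical write (flush it).
  BLOCK_SIZE = 4096
  cache = set()                       # block numbers currently in the cache
  logical_ios = physical_reads = physical_writes = 0

  def logical_write(offset, length):
      global logical_ios, physical_reads, physical_writes
      logical_ios += 1
      first, last = offset // BLOCK_SIZE, (offset + length - 1) // BLOCK_SIZE
      for block in range(first, last + 1):
          if block not in cache:
              physical_reads += 1     # fetch the block before updating it
              cache.add(block)
          physical_writes += 1        # updated block eventually goes to disk

  logical_write(100, 200)             # 1 logical IO -> 1 read + 1 write
  logical_write(8000, 300)            # spans two blocks -> 2 reads + 2 writes
  print(logical_ios, physical_reads, physical_writes)   # 2 3 3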
====== APP_DISK_BLOCK_WRITE The number of block writes to the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs.
====== APP_DISK_PHYS_IO_RATE The number of physical IOs per second for processes in this group during the interval.
====== APP_DISK_PHYS_READ_RATE The number of physical reads per second for processes in this group during the interval.
====== APP_DISK_PHYS_WRITE_RATE The number of physical writes per second for processes in this group during the interval.
====== APP_DISK_PHYS_READ The number of physical reads for processes in this group during the interval.
====== APP_DISK_PHYS_WRITE The number of physical writes for processes in this group during the interval.
====== APP_IO_BYTE_RATE The number of characters (in KB) per second transferred for processes in this group to all devices during the interval. This includes IO to disk, terminal, tape and printers.
====== APP_IO_BYTE The number of characters (in KB) transferred for processes in this group to all devices during the interval. This includes IO to disk, terminal, tape and printers.
====== APP_DISK_FS_IO_RATE The number of file system disk IOs per second for processes in this group during the interval. Only local disks are counted in this measurement. NFS devices are excluded. These are physical IOs generated by user file system access and do not include virtual memory IOs, system IOs (inode updates), or IOs relating to raw disk access. An exception is user files accessed via the mmap(2) call, which will not show their physical IOs in this category. They appear under virtual memory IOs.
====== APP_DISK_VM_IO_RATE The number of virtual memory IOs per second made on behalf of processes in this group during the interval. IOs to user file data are not included in this metric unless they were done via the mmap(2) system call.
====== APP_MINOR_FAULT The number of minor page faults satisfied in memory (a page was reclaimed from one of the free lists) for processes in this group during the interval.
====== APP_MINOR_FAULT_RATE The number of minor page faults per second satisfied in memory (pages were reclaimed from one of the free lists) for processes in this group during the interval.
====== APP_DISK_SYSTEM_IO_RATE The number of physical IOs per second generated by the kernel for file system management (inode accesses or updates) for processes in this group during the interval.
====== APP_DISK_RAW_IO_RATE The total number of raw IOs per second for processes in this group during the interval. Only accesses to local disk devices are counted.
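Several of the entries above come in count/rate pairs, for example APP_IO_BYTE and APP_IO_BYTE_RATE, or APP_MINOR_FAULT and APP_MINOR_FAULT_RATE. Assuming the rate is simply the interval count divided by the interval length in seconds, a minimal Python sketch with invented values:

  # Hedged sketch: deriving a per-second rate metric from its interval count.
  # The sample values are invented; app_interval stands for the interval length.
  app_interval = 300            # seconds in the measurement interval
  app_io_byte = 4500            # KB transferred during the interval
  app_minor_fault = 1500        # minor page faults during the interval

  app_io_byte_rate = app_io_byte / app_interval           # 15.0 KB per second
  app_minor_fault_rate = app_minor_fault / app_interval   # 5.0 faults per second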
====== APP_MEM_RES On Unix systems, this is the sum of the size (in MB) of resident memory for processes in this group that were alive at the end of the interval. This consists of text, data, stack, and shared memory regions. On HP-UX, since PROC_MEM_RES typically takes shared region references into account, this approximates the total resident (physical) memory consumed by all processes in this group. On all other Unix systems, this is the sum of the resident memory region sizes for all processes in this group. When the resident memory size for processes includes shared regions, such as shared memory and library text and data, the shared regions are counted multiple times in this sum. For example, if the application contains four processes that are attached to a 500MB shared memory region that is all resident in physical memory, then 2000MB is contributed towards the sum in this metric. As such, this metric can overestimate the resident memory being used by processes in this group when they share memory regions. Refer to the help text for PROC_MEM_RES for additional information. On Windows, this is the sum of the size (in MB) of the working sets for processes in this group during the interval. The working set counts memory pages referenced recently by the threads making up this group. Note that the size of the working set is often larger than the amount of pagefile space consumed. ====== APP_MAJOR_FAULT The number of major page faults that required a disk IO for processes in this group during the interval. ====== APP_MAJOR_FAULT_RATE The number of major page faults per second that required a disk IO for processes in this group during the interval. ====== APP_LS_ID APP_LS_ID represents the zone-id of the zone associated with this application. This metric is only available on Solaris 10 and above versions when the zone_app flag in parm file is set. ====== APP_MEM_UTIL On Unix systems, this is the approximate percentage of the system's physical memory used as resident memory by processes in this group that were alive at the end of the interval. This metric summarizes process private and shared memory in each application. On Windows, this is an estimate of the percentage of the system's physical memory allocated for working set memory by processes in this group during the interval. On HP-UX, this consists of text, data, stack, as well the process' portion of shared memory regions (such as, shared libraries, text segments, and shared data). The sum of the shared region pages is typically divided by the number of references. ====== APP_MEM_VIRT On Unix systems, this is the sum (in MB) of virtual memory for processes in this group that were alive at the end of the interval. This consists of text, data, stack, and shared memory regions. On HP-UX, since PROC_MEM_VIRT typically takes shared region references into account, this approximates the total virtual memory consumed by all processes in this group. On all other Unix systems, this is the sum of the virtual memory region sizes for all processes in this group. When the virtual memory size for processes includes shared regions, such as shared memory and library text and data, the shared regions are counted multiple times in this sum. For example, if the application contains four processes that are attached to a 500MB shared memory region, then 2000MB is reported in this metric. As such, this metric can overestimate the virtual memory being used by processes in this group when they share memory regions. 
On Windows, this is the sum (in MB) of paging file space used for all processes in this group during the interval. Groups of processes may have working set sizes (APP_MEM_RES) larger than the size of their pagefile space. ====== APP_DISK_LOGL_IO The number of logical IOs for processes in this group during the interval. ====== APP_SUSPENDED_PROCS The average number of processes in this group which have been either marked as should be suspended (SGETOUT) or have been suspended (SSWAPPED) during the interval. Processes are suspended when the OS detects that memory thrashing is occurring. The scheduler looks for processes that have a high repage rate when compared with the number of major page faults the process has done and suspends these processes. If this metric is not zero, there is a memory bottleneck on the system. ====== APP_DISK_FS_IO The number of file system disk IOs for processes in this group during the interval. ====== APP_DISK_PHYS_IO The number of physical IOs for processes in this group during the interval. ====== APP_DISK_RAW_IO The total number of raw IOs for processes in this group during the interval. Only accesses to local disk devices are counted. ====== APP_DISK_SYSTEM_IO The number of physical IOs generated by the kernel for file system management (inode accesses or updates) for processes in this group during the interval. ====== APP_DISK_VM_IO The number of virtual memory IOs made on behalf of processes in this group during the interval. IOs to user file data are not included in this metric unless they were done via the mmap(2) system call. ====== APP_PRM_CPUCAP_MODE The PRM CPU Cap Mode state on this system: 0 = PRM is not installed or not configured. 1 = CPU Cap Mode is not enabled (PRM CPU entitlements are in effect) 2 = CPU Cap Mode is enabled (The PRM CPU entitlements behave as caps or limits) ====== APP_IPC_SUBSYSTEM_WAIT_PCT The percentage of time processes or kernel threads in this group were blocked on the InterProcess Communication (IPC) subsystems (waiting for their interprocess communication activity to complete) during the interval. This is the sum of processes or kernel threads in the IPC, MSG, SEM, PIPE, SOCKT (that is, sockets) and STRMS (that is, streams IO) wait states. A percentage of time spent in a wait state is calculated as the accumulated time kernel threads belonging to processes in this group spent waiting in this state, divided by accumulated alive time of kernel threads belonging to processes in this group during the interval. For example, assume an application has 20 kernel threads. During the interval, ten kernel threads slept the entire time, while ten kernel threads waited on terminal input. As a result, the application wait percent values would be 50% for SLEEP and 50% for TERM (that is, terminal IO). The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. 
In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== APP_IPC_SUBSYSTEM_QUEUE The average number of processes or kernel threads in this group blocked on the InterProcess Communication (IPC) subsystems (waiting for their interprocess communication activity to complete) during the interval. This is the sum of processes or kernel threads in the IPC, MSG, SEM, PIPE, SOCKT (that is, sockets) and STRMS (that is, streams IO) wait states. This is calculated as the accumulated time that all processes or kernel threads spent blocked on (IPC + MSG + SEM + PIPE + SOCKT + STRMS) divided by the interval time. The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== APP_PRI_WAIT_PCT The percentage of time processes or kernel threads in this group were blocked on PRI (waiting for their priority to become high enough to get the CPU) during the interval. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. A percentage of time spent in a wait state is calculated as the accumulated time kernel threads belonging to processes in this group spent waiting in this state, divided by accumulated alive time of kernel threads belonging to processes in this group during the interval. For example, assume an application has 20 kernel threads. During the interval, ten kernel threads slept the entire time, while ten kernel threads waited on terminal input. As a result, the application wait percent values would be 50% for SLEEP and 50% for TERM (that is, terminal IO). 
The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== APP_DISK_SUBSYSTEM_WAIT_PCT The percentage of time processes or kernel threads in this group were blocked on the disk subsystem (waiting for their file system IOs to complete) during the interval. On HP-UX, this is based on the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing raw IO to a disk are not included in this measurement. On Linux, this is based on the sum of all processes or kernel threads blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. A percentage of time spent in a wait state is calculated as the accumulated time kernel threads belonging to processes in this group spent waiting in this state, divided by accumulated alive time of kernel threads belonging to processes in this group during the interval. For example, assume an application has 20 kernel threads. During the interval, ten kernel threads slept the entire time, while ten kernel threads waited on terminal input. As a result, the application wait percent values would be 50% for SLEEP and 50% for TERM (that is, terminal IO). The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. 
For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== APP_DISK_SUBSYSTEM_QUEUE The average number of processes or kernel threads in this group that were blocked on the disk subsystem (waiting for their file system IOs to complete) during the interval. On HP-UX, this is based on the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing raw IO to a disk are not included in this measurement. On Linux, this is based on the sum of all processes or kernel threads blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== APP_MEM_WAIT_PCT The percentage of time processes or kernel threads in this group were blocked on memory (waiting for virtual memory disk accesses to complete) during the interval. A percentage of time spent in a wait state is calculated as the accumulated time kernel threads belonging to processes in this group spent waiting in this state, divided by accumulated alive time of kernel threads belonging to processes in this group during the interval. For example, assume an application has 20 kernel threads. During the interval, ten kernel threads slept the entire time, while ten kernel threads waited on terminal input. As a result, the application wait percent values would be 50% for SLEEP and 50% for TERM (that is, terminal IO). The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. 
These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== APP_MEM_QUEUE The average number of processes or kernel threads in this group blocked on memory (waiting for virtual memory disk accesses to complete) during the interval. This typically happens when processes or kernel threads are allocating a large amount of memory. It can also happen when processes or kernel threads access memory that has been paged out to disk (deactivated) because of overall memory pressure on the system. Note that large programs can block on VM disk access when they are initializing, bringing their text and data pages into memory. The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== APP_SEM_WAIT_PCT The percentage of time processes or kernel threads in this group were blocked on semaphores (waiting for their semaphore operations to complete) during the interval. A percentage of time spent in a wait state is calculated as the accumulated time kernel threads belonging to processes in this group spent waiting in this state, divided by accumulated alive time of kernel threads belonging to processes in this group during the interval. For example, assume an application has 20 kernel threads. During the interval, ten kernel threads slept the entire time, while ten kernel threads waited on terminal input. As a result, the application wait percent values would be 50% for SLEEP and 50% for TERM (that is, terminal IO). 
The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== APP_SEM_QUEUE The average number of processes or kernel threads in this group that were blocked on semaphores (waiting for their semaphore operations to complete) during the interval. The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== APP_TERM_IO_WAIT_PCT The percentage of time processes or kernel threads in this group were blocked on terminal IO (waiting for terminal IO to complete) during the interval. A percentage of time spent in a wait state is calculated as the accumulated time kernel threads belonging to processes in this group spent waiting in this state, divided by accumulated alive time of kernel threads belonging to processes in this group during the interval. For example, assume an application has 20 kernel threads. During the interval, ten kernel threads slept the entire time, while ten kernel threads waited on terminal input. As a result, the application wait percent values would be 50% for SLEEP and 50% for TERM (that is, terminal IO). 
This metric is available on HP-UX 10.20. ====== APP_TERM_IO_QUEUE The average number of processes or kernel threads in this group that were blocked on terminal IO (waiting for their terminal IO to complete) during the interval. This metric is available on HP-UX 10.20. ====== APP_OTHER_IO_WAIT_PCT The percentage of time processes or kernel threads in this group were blocked on "other IO" during the interval. "Other IO" includes all IO directed at a device (connected to the local computer) which is not a terminal or LAN. Examples of "other IO" devices are local printers, tapes, instruments, and disks. Time waiting for character (raw) IO to disks is included in this measurement. Time waiting for file system buffered IO to disks will typically be seen as IO or CACHE wait. Time waiting for IO to NFS disks is reported as NFS wait. A percentage of time spent in a wait state is calculated as the accumulated time kernel threads belonging to processes in this group spent waiting in this state, divided by accumulated alive time of kernel threads belonging to processes in this group during the interval. For example, assume an application has 20 kernel threads. During the interval, ten kernel threads slept the entire time, while ten kernel threads waited on terminal input. As a result, the application wait percent values would be 50% for SLEEP and 50% for TERM (that is, terminal IO). The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== APP_OTHER_IO_QUEUE The average number of processes or kernel threads in this group that were blocked on "other IO" during the interval. "Other IO" includes all IO directed at a device (connected to the local computer) which is not a terminal or LAN. Examples of "other IO" devices are local printers, tapes, instruments, and disks. Time waiting for character (raw) IO to disks is included in this measurement. Time waiting for file system buffered IO to disks will typically be seen as IO or CACHE wait. Time waiting for IO to NFS disks is reported as NFS wait. This is calculated as the accumulated time that all processes or kernel threads in this group spent blocked on other IO divided by the interval time.
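As a hedged illustration of the queue calculation just described, here is a minimal Python sketch with invented numbers (the 60-second interval and the per-thread blocked times are assumptions, not agent output):

INTERVAL = 60.0          # seconds in the measurement interval (assumed)

# Seconds each kernel thread in the group spent blocked on "other IO".
blocked_seconds = [20.0, 20.0, 20.0, 0.0]

# Accumulated blocked time divided by the interval time gives the average
# number of threads blocked in that state over the interval.
app_other_io_queue = sum(blocked_seconds) / INTERVAL
print(app_other_io_queue)   # 1.0 -> on average, one thread was blocked

The same pattern applies to the other Application QUEUE metrics: accumulated blocked time in the relevant wait state(s), divided by the interval time.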
The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== APP_NETWORK_SUBSYSTEM_WAIT_PCT The percentage of time processes or kernel threads in this group were blocked on the network subsystem (waiting for their network activity to complete) during the interval. This is the sum of processes or kernel threads in the LAN, NFS, and RPC wait states. This does not include processes or kernel threads blocked on SOCKT (that is, socket) waits, as some processes or kernel threads sit idle in SOCKT waits for long periods. This is calculated as the accumulated time that all processes or kernel threads in this group spent blocked on (LAN + NFS + RPC) divided by the interval time. A percentage of time spent in a wait state is calculated as the accumulated time kernel threads belonging to processes in this group spent waiting in this state, divided by accumulated alive time of kernel threads belonging to processes in this group during the interval. For example, assume an application has 20 kernel threads. During the interval, ten kernel threads slept the entire time, while ten kernel threads waited on terminal input. As a result, the application wait percent values would be 50% for SLEEP and 50% for TERM (that is, terminal IO). The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. 
In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== APP_NETWORK_SUBSYSTEM_QUEUE The average number of processes or kernel threads in this group that were blocked on the network subsystem (waiting for their network activity to complete) during the interval. This is the sum of processes or kernel threads in the LAN, NFS, and RPC wait states. This does not include processes or kernel threads blocked on SOCKT (that is, socket) waits, as some processes or kernel threads sit idle in SOCKT waits for long periods. The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== APP_SLEEP_WAIT_PCT The percentage of time processes or kernel threads in this group were blocked on SLEEP (waiting to awaken from sleep system calls) during the interval. A process or kernel thread enters the SLEEP state by putting itself to sleep using system calls such as sleep, wait, pause, sigpause, sigsuspend, poll and select. A percentage of time spent in a wait state is calculated as the accumulated time kernel threads belonging to processes in this group spent waiting in this state, divided by accumulated alive time of kernel threads belonging to processes in this group during the interval. For example, assume an application has 20 kernel threads. During the interval, ten kernel threads slept the entire time, while ten kernel threads waited on terminal input. As a result, the application wait percent values would be 50% for SLEEP and 50% for TERM (that is, terminal IO). The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system.
As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== APP_SLEEP_QUEUE The average number of processes or kernel threads in this group that were blocked on SLEEP (waiting to awaken from sleep system calls) during the interval. A process or kernel thread enters the SLEEP state by putting itself to sleep using system calls such as sleep, wait, pause, sigpause, sigsuspend, poll and select. The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== APP_REVERSE_PRI The average priority of the processes in this group during the interval. Lower values for this metric always imply higher processing priority. The range is from 0 to 127. Since priority ranges can be customized on this OS, this metric provides a standardized way of interpreting priority that is consistent with other versions of Unix. See also the APP_PRI metric. This is derived from the PRI field of the ps command when the -c option is not used. ====== APP_REV_PRI_STD_DEV The standard deviation of priorities of the processes in this group during the interval. Priorities are mapped into a traditional lower value implies higher priority scheme. ====== APP_PRI On Unix systems, this is the average priority of the processes in this group during the interval. On Windows, this is the average base priority of the processes in this group during the interval. ====== APP_PRI_STD_DEV The standard deviation of priorities of the processes in this group during the interval. This metric is available on HP-UX 10.20. ====== Core ====== BYCORE_TOTAL_UTIL The percentage of time that this Core was not Idle ====== BYCORE_TOTAL_TIME Total time consumed by this Core in the current interval. 
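The following is a hedged Python sketch of how the per-core TIME and UTIL metrics that follow appear to relate, assuming each UTIL value is simply the corresponding TIME expressed as a percentage of the interval and that only idle time counts as "Idle" for BYCORE_TOTAL_UTIL. The sample numbers are invented; this is an inference from the definitions, not the collector's internal formula.

INTERVAL = 60.0                          # seconds in the interval (assumed)

bycore_time = {                          # invented per-mode seconds for one core
    "SYS_MODE": 6.0, "USER_MODE": 24.0, "NICE": 3.0,
    "INTERRUPT": 1.5, "WAIT": 1.5, "IDLE": 24.0,
}

# Assumed relationship: UTIL = 100 * TIME / interval.
bycore_util = {mode: 100.0 * t / INTERVAL for mode, t in bycore_time.items()}

# BYCORE_TOTAL_UTIL is the percentage of time the core was not idle, which
# under these assumptions equals 100 minus the idle percentage.
total_util = 100.0 - bycore_util["IDLE"]
print(bycore_util)    # SYS_MODE 10.0, USER_MODE 40.0, ..., IDLE 40.0
print(total_util)     # 60.0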
====== BYCORE_SYS_MODE_UTIL The percentage of time that this Core was in system mode during the interval. ====== BYCORE_SYS_MODE_TIME The time consumed by this Core in system mode during the interval. ====== BYCORE_USER_MODE_UTIL The percentage of time that this Core was in user mode during the interval. ====== BYCORE_USER_MODE_TIME The time consumed by this Core in user mode during the interval. ====== BYCORE_NICE_UTIL The percentage of time that this Core was in nice mode during the interval. ====== BYCORE_NICE_TIME The time consumed by this Core in nice mode during the interval. ====== BYCORE_INTERRUPT_UTIL The percentage of time that this Core was in interrupt mode during the interval. ====== BYCORE_INTERRUPT_TIME The time consumed by this Core in interrupt mode during the interval. ====== BYCORE_IDLE_UTIL The percentage of time that this Core was in idle mode during the interval. ====== BYCORE_IDLE_TIME The time consumed by this Core in idle mode during the interval. ====== BYCORE_WAIT_UTIL The percentage of time that this Core was in wait mode during the interval. ====== BYCORE_WAIT_TIME The time consumed by this Core in wait mode during the interval. ====== BYCORE_CORE_ID The ID number of the core. ====== BYCORE_SOCKET_ID The ID number of the socket. ====== CPU ====== BYCPU_ACTIVE Indicates whether or not this CPU is online. A CPU that is online is considered active. For HP-UX and certain versions of Linux, the sar(1M) command allows you to check the status of the system CPUs. For SUN and DEC, the commands psrinfo(1M) and psradm(1M) allow you to check or change the status of the system CPUs. For AIX, the pstat(1) command allows you to check the status of the system CPUs. ====== BYCPU_ID The ID number of this CPU. On some Unix systems, such as SUN, CPUs are not sequentially numbered. ====== BYCPU_LAST_PROC_ID The process id (pid) of the last process to have used this CPU. ====== BYCPU_CPU_TOTAL_UTIL The percentage of time that this CPU (or logical processor) was not idle during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_RUN_QUEUE_1_MIN This represents the 1 minute load average for this processor. ====== BYCPU_RUN_QUEUE_5_MIN This represents the 5 minute load average for this processor. ====== BYCPU_RUN_QUEUE_15_MIN This represents the 15 minute load average for this processor. ====== BYCPU_STATE A text string indicating the current state of a processor. On HP-UX, this is either "Enabled", "Disabled" or "Unknown". On AIX, this is either "Idle/Offline" or "Online".
On all other systems, this is either "Offline", "Online" or "Unknown". ====== BYCPU_CPU_TYPE The type of processor in the current slot. The Linux kernel currently doesn't provide any metadata information for disabled CPUs. This means that there is no way to find out types, speeds, as well as hardware IDs or any other information that is used to determine the number of cores, the number of threads, the HyperThreading state, etc... If the agent (or Glance) is started while some of the CPUs are disabled, some of these metrics will be "na", some will be based on what is visible at startup time. All information will be updated if/when additional CPUs are enabled and information about them becomes available. The configuration counts will remain at the highest discovered level (i.e. if CPUs are then disabled, the maximum number of CPUs/cores/etc... will remain at the highest observed level). It is recommended that the agent be started with all CPUs enabled. ====== BYCPU_CPU_CLOCK The clock speed of the CPU in the current slot. The clock speed is in MHz for the selected CPU. The Linux kernel currently doesn't provide any metadata information for disabled CPUs. This means that there is no way to find out types, speeds, as well as hardware IDs or any other information that is used to determine the number of cores, the number of threads, the HyperThreading state, etc... If the agent (or Glance) is started while some of the CPUs are disabled, some of these metrics will be "na", some will be based on what is visible at startup time. All information will be updated if/when additional CPUs are enabled and information about them becomes available. The configuration counts will remain at the highest discovered level (i.e. if CPUs are then disabled, the maximum number of CPUs/cores/etc... will remain at the highest observed level). It is recommended that the agent be started with all CPUs enabled. On Linux, this value is always rounded up to the next MHz. Note that Linux supports dynamic frequency scaling and if it is enabled then there can be a change in CPU speed with varying load. ====== BYCPU_CSWITCH The number of context switches for this CPU during the interval. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. ====== BYCPU_CSWITCH_RATE The average number of context switches per second for this CPU during the interval. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. ====== BYCPU_INTERRUPT_RATE The average number of device interrupts per second for this CPU during the interval. On HP-UX, a value of "na" is displayed on a system with multiple CPUs. ====== BYCPU_INTERRUPT The number of device interrupts for this CPU during the interval. On HP-UX, a value of "na" is displayed on a system with multiple CPUs. ====== BYCPU_FORK The number of "fork" or "vfork" system calls for this CPU during the interval. ====== BYCPU_FORK_RATE The average number of "fork" or "vfork" system calls per second for this CPU during the interval. Each of these system calls creates a new process. ====== BYCPU_CPU_TRAP_TIME The time, in seconds, this CPU was in trap handler code during the interval. 
On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_TRAP_UTIL The percentage of time this CPU was in trap handler code during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_USER_MODE_TIME The time, in seconds, during the interval that this CPU (or logical processor) was in user mode. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_USER_MODE_UTIL The percentage of time that this CPU (or logical processor) was in user mode during the interval. 
User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_NICE_TIME The time, in seconds, that this CPU was in user mode at a nice priority during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_NICE_UTIL The percentage of time that this CPU was in user mode at a nice priority during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. 
To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_NNICE_TIME The time, in seconds, that this CPU was in user mode at a nice priority calculated from processes with negative nice values during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_NNICE_UTIL The percentage of time that this CPU was in user mode at a nice priority calculated from processes with negative nice values during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_TOTAL_TIME The total time, in seconds, that this CPU (or logical processor) was not idle during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. 
This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_SYS_MODE_TIME The time, in seconds, that this CPU (or logical processor) was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_SYS_MODE_UTIL The percentage of time that this CPU (or logical processor) was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. 
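To make the core-versus-thread normalization above concrete, here is a minimal Python sketch with invented numbers. It assumes that "normalized against N" means the accumulated busy time is divided by N times the interval length; the exact formula used by the collector is not documented here, so treat this as an approximation.

INTERVAL = 60.0          # seconds in the interval (assumed)
ACTIVE_CORES = 4         # assumed number of active cores
THREADS_PER_CORE = 2     # assumed SMT threads per core
busy_seconds = 96.0      # invented accumulated busy CPU-seconds

# ignore_mt set (true): normalize against the number of active cores.
util_per_core = 100.0 * busy_seconds / (ACTIVE_CORES * INTERVAL)

# ignore_mt not set (false): normalize against the number of threads.
util_per_thread = 100.0 * busy_seconds / (ACTIVE_CORES * THREADS_PER_CORE * INTERVAL)

print(util_per_core)     # 40.0
print(util_per_thread)   # 20.0

The same busy time therefore reports as 40% when normalized against cores and 20% when normalized against threads, which is why the ignore_mt setting matters when comparing utilization values across systems.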
====== BYCPU_CPU_INTERRUPT_TIME The time, in seconds, that this CPU was performing interrupt processing during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_INTERRUPT_UTIL The percentage of time that this CPU was performing interrupt processing during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_CSWITCH_TIME The time, in seconds, that this CPU was performing context switches during the interval. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. 
====== BYCPU_CPU_CSWITCH_UTIL The percentage of time that this CPU was performing context switches during the interval. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_VFAULT_TIME The time, in seconds, this CPU was handling page faults during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_VFAULT_UTIL The percentage of time this CPU was handling page faults during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. 
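The context switch metrics defined earlier in this section (BYCPU_CSWITCH, BYCPU_CSWITCH_RATE, BYCPU_CPU_CSWITCH_TIME and BYCPU_CPU_CSWITCH_UTIL) can be related with a short hedged Python sketch. The numbers are invented, and the formulas are inferred from the metric descriptions (count per second for the rate, time as a percentage of the interval for the utilization, before any normalization), not taken from the collector.

INTERVAL = 60.0              # seconds in the interval (assumed)
bycpu_cswitch = 90000        # invented BYCPU_CSWITCH count for this CPU
bycpu_cswitch_time = 1.2     # invented BYCPU_CPU_CSWITCH_TIME in seconds

# Average number of context switches per second during the interval.
bycpu_cswitch_rate = bycpu_cswitch / INTERVAL

# Percentage of the interval spent performing context switches.
bycpu_cswitch_util = 100.0 * bycpu_cswitch_time / INTERVAL

print(bycpu_cswitch_rate)    # 1500.0 switches per second
print(bycpu_cswitch_util)    # 2.0 percent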
====== BYCPU_CPU_REALTIME_TIME The time, in seconds, that this CPU was running at a realtime priority during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_REALTIME_UTIL The percentage of time that this CPU was running at a realtime priority during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_EXEC_RATE The average number of "exec" system calls per second for this CPU during the interval. Each of these system calls executes a new program. ====== BYCPU_READ_RATE The average number of "read" system calls per second for this CPU during the interval. ====== BYCPU_WRITE_RATE The average number of "write" system calls per second for this CPU during the interval. ====== BYCPU_CPU_NORMAL_TIME The time, in seconds, that this CPU was running in user mode at a normal priority during the interval. Normal priority user mode CPU excludes CPU used at real-time and nice priorities. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode.
To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_NORMAL_UTIL The percentage of time that this CPU was running in user mode at a normal priority during the interval. Normal priority user mode CPU excludes CPU used at real-time and nice priorities. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_SYSCALL_TIME The time, in seconds, that this CPU was running in system mode (not including interrupt, context switch, trap or vfault CPU) during the last interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_CPU_SYSCALL_UTIL The percentage of time that this CPU was running in system mode (not including interrupt, context switch, trap or vfault CPU) during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. 
To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== BYCPU_LAST_THREAD_ID The thread ID (TID) number of the last kernel thread to have used this CPU. ====== BYCPU_LAST_USER_THREAD_ID The user thread ID number of the last user thread to have used this CPU within the context of its associated process. A process may have multiple user threads. This indicates the most recently executed user thread of the process identified in BYCPU_LAST_PROC_ID. ====== BYCPU_INTERRUPT_STATE A text string indicating whether the current processor is "enabled" or "disabled" for servicing IO interrupts. ====== BYCPU_CPU_PHYSC The total processing units of physical CPU consumed by this logical CPU during this interval. ====== BYCPU_CPU_STOLEN_TIME The time, in seconds, that was stolen from this CPU during the interval. Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as '%steal' in 'sar' and 'st' in 'vmstat'. ====== BYCPU_CPU_STOLEN_UTIL The percentage of time that was stolen from this CPU during the interval. Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as '%steal' in 'sar' and 'st' in 'vmstat'. ====== BYCPU_CPU_GUEST_TIME The time, in seconds, that this CPU was servicing guests during the interval. Guest time, on Linux KVM hosts, is the time that is spent servicing guests. Xen hosts, as of this release, do not update these counters, nor do other OSes. ====== BYCPU_CPU_GUEST_UTIL The percentage of time that this CPU was servicing guests during the interval. Guest time, on Linux KVM hosts, is the time that is spent servicing guests. Xen hosts, as of this release, do not update these counters, nor do other OSes. ====== BYCPU_CORE_ID The Core ID of this CPU. ====== Disk ====== BYDSK_TIME The time of day of the interval. ====== BYDSK_INTERVAL The amount of time in the interval. ====== BYDSK_BUS The name of the bus interface used by this disk. ====== BYDSK_PRODUCT_ID The disk product ID. ====== BYDSK_CONTROLLER The disk bus controller name. This information is only available for disks using the hpib or hpfl interfaces. ====== BYDSK_ID The ID of the current disk device. ====== BYDSK_DEVNAME The name of this disk device. On HP-UX, the name identifying the specific disk spindle is the hardware path which specifies the address of the hardware components leading to the disk device. On SUN, these names are the same disk names displayed by "iostat". On AIX, this is the path name string of this disk device. This is the fsname parameter in the mount(1M) command. If more than one file system is contained on a device (that is, the device is partitioned), this is indicated by an asterisk ("*") at the end of the path name. On OSF1, this is the path name string of this disk device. This is the file-system parameter in the mount(1M) command. On Windows, this is the unit number of this disk device.
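Referring back to the BYCPU_CPU_STOLEN_UTIL and BYCPU_CPU_GUEST_UTIL entries above: on Linux, the underlying counters are exposed in the per-CPU lines of /proc/stat. The following sketch (illustrative only, for a recent Linux kernel that reports the guest columns; it is not the agent's implementation) shows how such percentages can be derived from two samples of those counters:

    # Illustrative only: derive steal and guest percentages for cpu0 from
    # two samples of /proc/stat. Column order on recent Linux kernels is:
    # user nice system idle iowait irq softirq steal guest guest_nice
    import time

    def cpu0_counters():
        with open('/proc/stat') as f:
            for line in f:
                if line.startswith('cpu0 '):
                    return [int(v) for v in line.split()[1:]]

    first = cpu0_counters()
    time.sleep(5)                      # stands in for the collection interval
    second = cpu0_counters()
    delta = [b - a for a, b in zip(first, second)]
    total = sum(delta[:8])             # user..steal; guest time is already
                                       # included inside user time
    print('steal %:', 100.0 * delta[7] / total)
    print('guest %:', 100.0 * delta[8] / total)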
====== BYDSK_DIRNAME The name of the file system directory mounted on this disk device. If more than one file system is mounted on this device, "Multiple FS" is seen. ====== BYDSK_DEVNO Major / Minor number of the device. ====== BYDSK_DISKNAME The device special file (DSF) representing this disk. This metric only gives the last component in the DSF path. On HP-UX 11iv1 and 11iv2, the DSF is of the form /dev/dsk/c#t#d#, and hence the value of the DISKNAME metric will be "c#t#d#". On HP-UX 11iv3, this metric gives the path-independent DSF name, so the value of the DISKNAME metric will be "disk#". See intro(7) for more details. ====== BYDSK_PHYS_IO The number of physical IOs for this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw reads. ====== BYDSK_PHYS_READ The number of physical reads for this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On AIX, this is an estimated value based on the ratio of read bytes to total bytes transferred. The actual number of reads is not tracked by the kernel. This is calculated as BYDSK_PHYS_READ = BYDSK_PHYS_IO * (BYDSK_PHYS_READ_BYTE / BYDSK_PHYS_IO_BYTE) ====== BYDSK_PHYS_WRITE The number of physical writes for this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On AIX, this is an estimated value based on the ratio of write bytes to total bytes transferred because the actual number of writes is not tracked by the kernel. This is calculated as BYDSK_PHYS_WRITE = BYDSK_PHYS_IO * (BYDSK_PHYS_WRITE_BYTE / BYDSK_PHYS_IO_BYTE) ====== BYDSK_PHYS_IO_RATE The average number of physical IO requests per second for this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system IO, virtual memory and raw IO. ====== BYDSK_PHYS_READ_RATE The average number of physical reads per second for this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On AIX, this is an estimated value based on the ratio of read bytes to total bytes transferred. The actual number of reads is not tracked by the kernel. This is calculated as BYDSK_PHYS_READ_RATE = BYDSK_PHYS_IO_RATE * (BYDSK_PHYS_READ_BYTE / BYDSK_PHYS_IO_BYTE) ====== BYDSK_PHYS_WRITE_RATE The average number of physical writes per second for this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On AIX, this is an estimated value based on the ratio of write bytes to total bytes transferred. The actual number of writes is not tracked by the kernel. This is calculated as BYDSK_PHYS_WRITE_RATE = BYDSK_PHYS_IO_RATE * (BYDSK_PHYS_WRITE_BYTE / BYDSK_PHYS_IO_BYTE) ====== BYDSK_PHYS_BYTE_RATE The average KBs per second transferred to or from this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IO. ====== BYDSK_PHYS_READ_BYTE_RATE The average KBs per second transferred from this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw IO.
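The AIX read/write estimates documented above (BYDSK_PHYS_READ, BYDSK_PHYS_WRITE and their _RATE forms) can be pictured with a short sketch. This is illustrative only, with made-up numbers, and assumes that the total byte count is the sum of the read and write byte counts:

    # Illustrative only: AIX splits the total physical IO count into
    # estimated reads and writes using the documented byte ratios.
    def estimate_split(phys_io, read_kb, write_kb):
        total_kb = read_kb + write_kb          # stands in for BYDSK_PHYS_IO_BYTE
        est_reads = phys_io * (read_kb / total_kb)
        est_writes = phys_io * (write_kb / total_kb)
        return est_reads, est_writes

    # Example: 1000 IOs that moved 300 KB of reads and 700 KB of writes.
    print(estimate_split(1000, 300, 700))      # (300.0, 700.0)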
====== BYDSK_PHYS_WRITE_BYTE_RATE The average KBs per second transferred to this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system, virtual memory, and raw IO. ====== BYDSK_PHYS_READ_BYTE The KBs transferred from this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw IO. ====== BYDSK_PHYS_WRITE_BYTE The KBs transferred to this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system, virtual memory, and raw IO. ====== BYDSK_LOGL_READ The number of logical reads for this disk device during the interval. On HP-UX, the logical IO rates by disk device cannot be obtained in a multi-disk LVM configuration because there is no reasonable means of tying logical IO transactions to physical spindles spanned on the logical volume. Therefore, if you have a multi-disk LVM configuration, you always see "na" for this metric. ====== BYDSK_LOGL_WRITE The number of logical writes for this disk device during the interval. On HP-UX, the logical IO rates by disk device cannot be obtained in a multi-disk LVM configuration because there is no reasonable means of tying logical IO transactions to physical spindles spanned on the logical volume. Therefore, if you have a multi-disk LVM configuration, you always see "na" for this metric. ====== BYDSK_LOGL_READ_RATE The number of logical reads per second for this disk device during the interval. On HP-UX, the logical IO rates by disk device cannot be obtained in a multi-disk LVM configuration because there is no reasonable means of tying logical IO transactions to physical spindles spanned on the logical volume. Therefore, if you have a multi-disk LVM configuration, you always see "na" for this metric. ====== BYDSK_LOGL_WRITE_RATE The number of logical writes per second for this disk device during the interval. On HP-UX, the logical IO rates by disk device cannot be obtained in a multi-disk LVM configuration because there is no reasonable means of tying logical IO transactions to physical spindles spanned on the logical volume. Therefore, if you have a multi-disk LVM configuration, you always see "na" for this metric. ====== BYDSK_VENDOR_ID The disk vendor ID. This information is only available for disks using the scsi interface. ====== BYDSK_LOGL_IO_RATE The total number of logical IOs per second for this disk device during the interval. On HP-UX, the logical IO rates by disk device cannot be obtained in a multi-disk LVM configuration because there is no reasonable means of tying logical IO transactions to physical spindles spanned on the logical volume. Therefore, if you have a multi-disk LVM configuration, you always see "na" for this metric. ====== BYDSK_LOGL_BYTE_RATE The number of logical read or write KBs per second to this disk device during the interval. On HP-UX, the logical IO rates by disk device cannot be obtained in a multi-disk LVM configuration because there is no reasonable means of tying logical IO transactions to physical spindles spanned on the logical volume. Therefore, if you have a multi-disk LVM configuration, you always see "na" for this metric. ====== BYDSK_LOGL_READ_BYTE_RATE The number of logical read KBs per second from this disk device during the interval. 
On HP-UX, the logical IO rates by disk device cannot be obtained in a multi-disk LVM configuration because there is no reasonable means of tying logical IO transactions to physical spindles spanned on the logical volume. Therefore, if you have a multi-disk LVM configuration, you always see "na" for this metric. ====== BYDSK_LOGL_WRITE_BYTE_RATE The number of logical writes KBs per second to this disk device during the interval. On HP-UX, the logical IO rates by disk device cannot be obtained in a multi-disk LVM configuration because there is no reasonable means of tying logical IO transactions to physical spindles spanned on the logical volume. Therefore, if you have a multi-disk LVM configuration, you always see "na" for this metric. ====== BYDSK_FS_READ The number of physical file system reads from this disk device during the interval. ====== BYDSK_FS_WRITE The number of physical file system writes to this disk device during the interval. ====== BYDSK_FS_READ_RATE The number of physical file system reads per second from this disk device during the interval. ====== BYDSK_FS_WRITE_RATE The number of physical file system writes per second to this disk device during the interval. ====== BYDSK_FS_IO_RATE The number of physical file system reads and writes per second to this disk device during the interval. ====== BYDSK_VM_READ_RATE The number of virtual memory reads per second from this disk device during the interval. ====== BYDSK_VM_WRITE_RATE The number of virtual memory writes per second to this disk device during the interval. ====== BYDSK_VM_IO_RATE The number of virtual memory IOs per second to this disk device during the interval. ====== BYDSK_VM_IO The number of virtual memory IOs to this disk device during the interval. ====== BYDSK_SYSTEM_READ_RATE The number of physical system reads per second from this disk device during the interval. ====== BYDSK_SYSTEM_WRITE_RATE The number of physical system writes per second to this disk device during the interval. ====== BYDSK_SYSTEM_IO_RATE The number of physical system reads or writes per second to this disk device during the interval. ====== BYDSK_SYSTEM_IO The number of physical system reads or writes to this disk device during the interval. ====== BYDSK_RAW_READ The number of physical raw reads made from this disk device during the interval. ====== BYDSK_RAW_WRITE The number of physical raw writes made to this disk device during the interval. ====== BYDSK_RAW_READ_RATE The number of raw reads per second made from this disk device during the interval. ====== BYDSK_RAW_WRITE_RATE The number of raw writes per second made to this disk device during the interval. ====== BYDSK_RAW_IO_RATE The number of raw reads or writes per second made to this disk device during the interval. ====== BYDSK_PHYS_BYTE The number of KBs of physical IOs transferred to or from this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IO. ====== BYDSK_BUSY_TIME The time, in seconds, that this disk device was busy transferring data during the interval. On HP-UX, this is the time, in seconds, during the interval that the disk device had IO in progress from the point of view of the Operating System. In other words, the time, in seconds, the disk was busy servicing requests for this device. ====== BYDSK_UTIL On HP-UX, this is the percentage of the time during the interval that the disk device had IO in progress from the point of view of the Operating System. 
In other words, the utilization or percentage of time busy servicing requests for this device. On the non-HP-UX systems, this is the percentage of the time that this disk device was busy transferring data during the interval. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. This is a measure of the ability of the IO path to meet the transfer demands being placed on it. Slower disk devices may show a higher utilization with lower IO rates than faster disk devices such as disk arrays. A value of greater than 50% utilization over time may indicate that this device or its IO path is a bottleneck, and the access pattern of the workload, database, or files may need reorganizing for better balance of disk IO load. ====== BYDSK_AVG_SERVICE_TIME The average time, in milliseconds, that this disk device spent processing each disk request during the interval. For example, a value of 5.14 would indicate that disk requests during the last interval took on average slightly longer than five one-thousandths of a second to complete for this device. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. This is a measure of the speed of the disk, because slower disk devices typically show a larger average service time. Average service time is also dependent on factors such as the distribution of I/O requests over the interval and their locality. It can also be influenced by disk driver and controller features such as I/O merging and command queueing. Note that this service time is measured from the perspective of the kernel, not the disk device itself. For example, if a disk device can find the requested data in its cache, the average service time could be quicker than the speed of the physical disk hardware. This metric can be used to help determine which disk devices are taking more time than usual to process requests. ====== BYDSK_AVG_WAIT_TIME ====== BYDSK_REQUEST_QUEUE The average number of IO requests that were in the wait queue for this disk device during the interval. These requests are the physical requests (as opposed to logical IO requests). Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. ====== BYDSK_QUEUE_0_UTIL The percentage of intervals during which there were no IO requests pending for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. 
On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this. For example if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 1.5, 0, and 3, then the value for this metric would be 50% since 50% of the intervals had a zero queue length. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. ====== BYDSK_QUEUE_2_UTIL The percentage of intervals during which there were 1 or 2 IO requests pending for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this. For example if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 1, 0, and 2, then the value for this metric would be 50% since 50% of the intervals had a 1-2 queue length. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems. 
Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. ====== BYDSK_QUEUE_4_UTIL The percentage of intervals during which there were 3 or 4 IO requests waiting to use this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this. For example if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 3, 0, and 4, then the value for this metric would be 50% since 50% of the intervals had a 3-4 queue length. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. ====== BYDSK_QUEUE_8_UTIL The percentage of intervals during which there were between 5 and 8 IO requests pending for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. 
On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this. For example if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 8, 0, and 5, then the value for this metric would be 50% since 50% of the intervals had a 5-8 queue length. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. ====== BYDSK_QUEUE_X_UTIL The percentage of intervals during which there were more than 8 IO requests pending for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this. For example if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 9, 0, and 10, then the value for this metric would be 50% since 50% of the intervals had queue length greater than 8. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. ====== BYDSK_AVG_REQUEST_QUEUE The average number of IO requests that were in the wait and service queues for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. 
On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this. For example, if 4 intervals have passed with average queue lengths of 0, 2, 0, and 6, then the average number of IO requests over all intervals would be 2. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. ====== BYDSK_CURR_QUEUE_LENGTH The average number of physical IO requests that were in the wait and service queues for this disk device during the interval. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. ====== BYDSK_AVG_WRITE_SERVICE_TIME The average time, in milliseconds, that this disk device spent processing each disk write request during the interval. For example, a value of 5.14 would indicate that disk write requests during the last interval took on average slightly longer than five one-thousandths of a second to complete for this device. This is a measure of the speed of the disk, because slower disk devices typically show a larger average write service time. Average write service time is also dependent on factors such as the distribution of I/O requests over the interval and their locality. It can also be influenced by disk driver and controller features such as I/O merging and command queueing. Note that this write service time is measured from the perspective of the kernel, not the disk device itself. For example, if a disk device can find the requested data in its cache, the average service time could be quicker than the speed of the physical disk hardware. This metric can be used to help determine which disk devices are taking more time than usual to process write requests. ====== BYDSK_AVG_READ_SERVICE_TIME The average time, in milliseconds, that this disk device spent processing each disk read request during the interval. For example, a value of 5.14 would indicate that disk read requests during the last interval took on average slightly longer than five one-thousandths of a second to complete for this device. This is a measure of the speed of the disk, because slower disk devices typically show a larger average read service time. 
Average read service time is also dependent on factors such as the distribution of I/O requests over the interval and their locality. It can also be influenced by disk driver and controller features such as I/O merging and command queueing. Note that this read service time is measured from the perspective of the kernel, not the disk device itself. For example, if a disk device can find the requested data in its cache, the average service time could be quicker than the speed of the physical disk hardware. This metric can be used to help determine which disk devices are taking more time than usual to process read requests. ====== BYDSK_AVG_QUEUE_TIME The average time, in milliseconds, that a disk request spent waiting in the queue during the interval. For example, a value of 1.14 would indicate that disk requests during the last interval spent on average slightly longer than one one-thousandth of a second waiting in the queue of this device. ====== BYDSK_AVG_READ_QUEUE_TIME The average time, in milliseconds, that a disk read request spent waiting in the queue during the interval. For example, a value of 1.14 would indicate that disk read requests during the last interval spent on average slightly longer than one one-thousandth of a second waiting in the queue of this device. ====== BYDSK_AVG_WRITE_QUEUE_TIME The average time, in milliseconds, that a disk write request spent waiting in the queue during the interval. For example, a value of 1.14 would indicate that disk write requests during the last interval spent on average slightly longer than one one-thousandth of a second waiting in the queue of this device. ====== FileSystem ====== FS_DEVNAME On Unix systems, this is the path name string of the current device. On Windows, this is the disk drive string of the current device. On HP-UX, this is the "fsname" parameter in the mount(1M) command. For NFS devices, this includes the name of the node exporting the file system. It is possible that a process may mount a device using the mount(2) system call. This call does not update "/etc/mnttab", so the device name is blank. This situation is rare, and should be corrected by syncer(1M). Note that once a device is mounted, its entry is displayed, even after the device is unmounted, until the midaemon process terminates. On SUN, this is the path name string of the current device, or "tmpfs" for memory based file systems. See tmpfs(7). ====== FS_DIRNAME On Unix systems, this is the path name of the mount point of the file system. On Windows, this is the drive letter associated with the selected disk partition. On HP-UX, this is the path name of the mount point of the file system if the logical volume has a mounted file system. This is the directory parameter of the mount(1M) command for most entries. Exceptions are: For lvm swap areas, this field contains "lvm swap device". For logical volumes with no mounted file systems, this field contains "Raw Logical Volume" (relevant only to Perf Agent). On HP-UX, the file names are in the same order as shown in the "/usr/sbin/mount -p" command. File systems are not displayed until they exhibit IO activity once the midaemon has been started. Also, once a device is displayed, it continues to be displayed (even after the device is unmounted) until the midaemon process terminates. On SUN, only "UFS", "HSFS" and "TMPFS" file systems are listed. See mount(1M) and mnttab(4). "TMPFS" file systems are memory based filesystems and are listed here for convenience. See tmpfs(7). On AIX, see mount(1M) and filesystems(4).
On OSF1, see mount(2). ====== FS_TYPE A string indicating the file system type. On Unix systems, some of the possible types are: hfs - user file system ufs - user file system ext2 - user file system cdfs - CD-ROM file system vxfs - Veritas (vxfs) file system nfs - network file system nfs3 - network file system Version 3 On Windows, some of the possible types are: NTFS - New Technology File System FAT - 16-bit File Allocation Table FAT32 - 32-bit File Allocation Table FAT uses a 16-bit file allocation table entry (2^16 clusters). FAT32 uses a 32-bit file allocation table entry. However, Windows 2000 reserves the first 4 bits of a FAT32 file allocation table entry, which means FAT32 has a theoretical maximum of 2^28 clusters. NTFS is the native file system of Windows NT and beyond. ====== FS_DEVNO On Unix systems, this is the major and minor number of the file system. On Windows, this is the unit number of the disk device on which the logical disk resides. The scope collector logs the value of this metric in decimal format. ====== FS_FRAG_SIZE The fundamental file system block size, in bytes. A value of "na" may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. On Windows, this is the same as the FS_BLOCK_SIZE metric. ====== FS_BLOCK_SIZE The maximum block size of this file system, in bytes. A value of "na" may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. ====== FS_MAX_SIZE The maximum size, in MB, that this file system could obtain if full. Note that this is the user space capacity - it is the file system space accessible to non-root users. On most Unix systems, the df command shows the total file system capacity which includes the extra file system space accessible to root users only. The equivalent fields to look at are "used" and "avail". For the target file system, to calculate the maximum size in MB, use FS Max Size = (used + avail)/1024. A value of "na" may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. On HP-UX, this metric is updated at 4 minute intervals to minimize collection overhead. ====== FS_MAX_INODES Number of configured file system inodes. A value of "na" may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. ====== FS_SPACE_USED The amount of file system space in MBs that is being used. ====== FS_SPACE_RESERVED The amount of file system space in MBs reserved for superuser allocation. On AIX, this metric is typically zero for local filesystems because by default AIX does not reserve any file system space for the superuser. ====== FS_LOGL_IO_RATE The number of logical IOs per second directed to this file system during the interval. Logical IOs are generated by calling the read() or write() system calls. ====== FS_PHYS_IO_RATE The number of physical IOs per second directed to this file system during the interval. ====== FS_FILE_IO_RATE The number of file system related physical IOs per second directed to this file system during the interval. This value is similar to the values returned by the vmstat -d command except that vmstat reports all IOs and does not break them out by file system. Also, vmstat reports IOs from the kernel's view, which may get broken down by the disk driver into multiple physical IOs.
Since this metric reports values from the disk driver's point of view, it is more accurate than vmstat. ====== FS_VM_IO_RATE The number of virtual memory IOs per second directed to this file system during the interval. ====== FS_SPACE_UTIL Percentage of the file system space in use during the interval. Note that this is the user space capacity - it is the file system space accessible to non-root users. On most Unix systems, the df command shows the total file system capacity which includes the extra file system space accessible to root users only. A value of "na" may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. On HP-UX, this metric is updated at 4 minute intervals to minimize collection overhead. ====== FS_INODE_UTIL Percentage of this file system's inodes in use during the interval. A value of "na" may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. ====== FS_LOGL_READ_RATE The number of logical reads per second directed to this file system during the interval. Logical reads are generated by calling the read() system call. ====== FS_LOGL_WRITE_RATE The number of logical writes per second directed to this file system during the interval. Logical writes are generated by calling the write() system call. ====== FS_LOGL_READ_BYTE_RATE The number of logical read KBs per second from this file system during the interval. ====== FS_LOGL_WRITE_BYTE_RATE The number of logical write KBs per second to this file system during the interval. ====== FS_PHYS_READ_RATE The number of physical reads per second directed to this file system during the interval. On Unix systems, physical reads are generated by user file access, virtual memory access (paging), file system management, or raw device access. ====== FS_PHYS_WRITE_RATE The number of physical writes per second directed to this file system during the interval. ====== FS_PHYS_READ_BYTE_RATE The number of physical KBs per second read from this file system during the interval. ====== FS_PHYS_WRITE_BYTE_RATE The number of physical KBs per second written to this file system during the interval. ====== FS_INTERVAL The amount of time in the interval. ====== FS_IS_LVM Returns true (1) if this file system is a logical volume, or false (0) if it is a hard-partitioned file system. ====== FS_REQUEST_QUEUE The average number of IO requests (both reads and writes) that were queued for the selected file system during the interval. ====== Global ====== GBL_INTERVAL The amount of time in the interval. This measured interval is slightly larger than the desired or configured interval if the collection program is delayed by a higher priority process and cannot sample the data immediately. ====== GBL_BLANK A string of blanks. ====== GBL_PROC_SAMPLE The number of process data samples that have been averaged into global metrics (such as GBL_ACTIVE_PROC) that are based on process samples. ====== GBL_LOST_MI_TRACE_BUFFERS The number of trace buffers lost by the measurement processing daemon. On HP-UX systems, if this value is > 0, the measurement subsystem is not keeping up with the system events that generate traces. For other Unix systems, if this value is > 0, the measurement subsystem is not keeping up with the ARM API calls that generate traces. Note: The value reported for this metric will roll over to 0 once it crosses INTMAX. ====== GBL_SYSTEM_ID The network node hostname of the system.
This is the same as the output from the "uname -n" command. On Windows, this is the name obtained from GetComputerName. ====== GBL_RENICE_PRI_LIMIT User priorities range from -x to +x where the value of x is configurable. This is the configured value x. This defines the range of possible values for altering the priority of processes in the time-sharing class. If the value is x, then the valid range for adjusting priorities is -x to +x. This value is configured in the /etc/conf/cf.d/mtune file as TSMAXUPRI. The default configuration value is 20, which emulates the behavior of the older, less general scheduler interfaces "nice" and "setpriority". Configuring a higher value gives users more control over the priority of their processes. ====== GBL_SYSTEM_TYPE On Unix systems, this is either the model of the system or the instruction set architecture of the system. On Windows, this is the processor architecture of the system. ====== GBL_SERIALNO On HP-UX, this is the ID number of the computer as returned by the command "uname -i". If this value is not available, an empty string is returned. On SUN, this is the ASCII representation of the hardware-specific serial number. This is printed in hexadecimal as presented by the "hostid" command when possible. If that is not possible, the decimal format is provided instead. On AIX, this is the machine ID number as returned by the command "uname -m". This number has the form xxyyyyyymmss. For the RISC System/6000, the "xx" positions are always 00. The "yyyyyy" positions contain the unique ID number for the central processing unit (cpu). The "mm" positions represent the model number, and "ss" is the submodel number (always 00). On Linux, this is the ASCII representation of the hardware-specific serial number, as returned by the command "hostid". ====== GBL_NODENAME On Unix systems, this is the name of the computer as returned by the command "uname -n" (that is, the string returned from the "hostname" program). On Windows, this is the name of the computer as returned by GetComputerName. ====== GBL_MACHINE An ASCII string representing the processor architecture. The machine hardware model is reported by the GBL_MACHINE_MODEL metric. ====== GBL_STATTIME An ASCII string representing the time at the end of the interval, based on local time. ====== GBL_STATDATE The date at the end of the interval, based on local time. ====== GBL_STARTDATE The date that the collector started. ====== GBL_STARTTIME The time of day that the collector started. ====== GBL_OSRELEASE The current release of the operating system. On most Unix systems, this is the same as the output from the "uname -r" command. On AIX, this is the actual patch level of the operating system. This is similar to what is returned by the command "lslpp -l bos.rte" as the most recent level of the COMMITTED Base OS Runtime. For example, "5.2.0". ====== GBL_OSNAME A string representing the name of the operating system.
On Unix systems, this is the same as the output from the "uname -s" command. ====== GBL_OSVERSION A string representing the version of the operating system. This is the same as the output from the "uname -v" command. This string is limited to 20 characters, and as a result, the complete version name might be truncated. On Windows, this is a string representing the service pack installed on the operating system. ====== GBL_NUM_USER The number of users logged in at the time of the interval sample. This is the same as the command "who | wc -l". For Unix systems, the information for this metric comes from the utmp file which is updated by the login command. For more information, read the man page for utmp. Some applications may create users on the system without using login and updating the utmp file. These users are not reflected in this count. This metric can be a general indicator of system usage. In a networked environment, however, users may maintain inactive logins on several systems. On Windows, the information for this metric comes from the Server Sessions counter in the Performance Libraries Server object. It is a count of the number of users using this machine as a file server. ====== GBL_TT_OVERFLOW_COUNT The number of new transactions that could not be measured because the Measurement Processing Daemon's (midaemon) Measurement Performance Database is full. If this happens, the default Measurement Performance Database size is not large enough to hold all of the registered transactions on this system. This can be remedied by stopping and restarting the midaemon process using the -smdvss option to specify a larger Measurement Performance Database size. The current Measurement Performance Database size can be checked using the midaemon -sizes option. ====== GBL_ACTIVE_PROC An active process is one that exists and consumes some CPU time. GBL_ACTIVE_PROC is the sum of the alive-process-time/interval-time ratios of every process that is active (uses any CPU time) during an interval. The following diagram of a four second interval during which two processes exist on the system should be used to understand the above definition. Note the difference between active processes, which consume CPU time, and alive processes which merely exist on the system.
----------- Seconds -----------
            1          2          3          4
Proc     --------   --------   --------   --------
A          live       live       live       live
B        live/CPU   live/CPU     live       dead
Process A is alive for the entire four second interval but consumes no CPU. A's contribution to GBL_ALIVE_PROC is 4 * 1/4. A contributes 0 * 1/4 to GBL_ACTIVE_PROC. B's contribution to GBL_ALIVE_PROC is 3 * 1/4. B contributes 2 * 1/4 to GBL_ACTIVE_PROC. Thus, for this interval, GBL_ACTIVE_PROC equals 0.5 and GBL_ALIVE_PROC equals 1.75. Because a process may be alive but not active, GBL_ACTIVE_PROC will always be less than or equal to GBL_ALIVE_PROC. This metric is a good overall indicator of the workload of the system. An unusually large number of active processes could indicate a CPU bottleneck. To determine if the CPU is a bottleneck, compare this metric with GBL_CPU_TOTAL_UTIL and GBL_RUN_QUEUE. If GBL_CPU_TOTAL_UTIL is near 100 percent and GBL_RUN_QUEUE is greater than one, there is a bottleneck. On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value.
Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise. ====== GBL_COMPLETED_PROC The number of processes that terminated during the interval. On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise. ====== GBL_PROC_RUN_TIME The average run time, in seconds, for processes that terminated during the interval. ====== GBL_NUM_CPU The number of physical CPUs on the system. This includes all CPUs, either online or offline. For HP-UX and certain versions of Linux, the sar(1M) command allows you to check the status of the system CPUs. For SUN and DEC, the commands psrinfo(1M) and psradm(1M) allow you to check or change the status of the system CPUs. For AIX, this metric indicates the maximum number of CPUs the system ever had. On a logical system, this metric indicates the number of virtual CPUs configured. When hardware threads are enabled, this metric indicates the number of logical processors. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. The Linux kernel currently doesn't provide any metadata information for disabled CPUs. This means that there is no way to find out types, speeds, as well as hardware IDs or any other information that is used to determine the number of cores, the number of threads, the HyperThreading state, etc... If the agent (or Glance) is started while some of the CPUs are disabled, some of these metrics will be "na", some will be based on what is visible at startup time. All information will be updated if/when additional CPUs are enabled and information about them becomes available. The configuration counts will remain at the highest discovered level (i.e. if CPUs are then disabled, the maximum number of CPUs/cores/etc... will remain at the highest observed level). It is recommended that the agent be started with all CPUs enabled. ====== GBL_NUM_DISK The number of disks on the system. Only local disk devices are counted in this metric. On HP-UX, this is a count of the number of disks on the system that have ever had activity over the cumulative collection time. On Solaris non-global zones, this metric shows value as 0. On AIX System WPARs, this metric shows value as 0. ====== GBL_NUM_NETWORK The number of network interfaces on the system. This includes the loopback interface. On certain platforms, this also include FDDI, Hyperfabric, ATM, Serial Software interfaces such as SLIP or PPP, and Wide Area Network interfaces (WAN) such as ISDN or X.25. The "netstat -i" command also displays the list of network interfaces on the system. ====== GBL_STARTED_PROC The number of processes that started during the interval. ====== GBL_ACTIVE_CPU The number of CPUs online on the system. For HP-UX and certain versions of Linux, the sar(1M) command allows you to check the status of the system CPUs. For SUN and DEC, the commands psrinfo(1M) and psradm(1M) allow you to check or change the status of the system CPUs. For AIX, the pstat(1) command allows you to check the status of the system CPUs. 
On AIX System WPARs, this metric value is identical to the value on AIX Global Environment if RSET is not configured for the System WPAR. If RSET is configured for the System WPAR, this metric value will report the number of CPUs in the RSET. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. ====== GBL_SAMPLE The number of data samples (intervals) that have occurred over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this. ====== GBL_SYSCALL_RATE The average number of system calls per second during the interval. High system call rates are normal on busy systems, especially with IO intensive applications. Abnormally high system call rates may indicate problems such as a "hung" terminal that is stuck in a loop generating read system calls. On HP-UX, system call rates affect the overhead of the midaemon. Due to the system call instrumentation on HP-UX, the fork and vfork system calls are double counted. In the case of fork and vfork, one process starts the system call, but two processes exit. HP-UX lightweight system calls, such as umask, do not show up in the Glance System Calls display, but will get added to the global system call rates. If a process is being traced (debugged) using standard debugging tools (such as adb or xdb), all system calls used by that process will show up in the System Calls display while being traced. On HP-UX, compare this metric to GBL_DISK_LOGL_IO_RATE to see if high system callrates correspond to high disk IO. GBL_CPU_SYSCALL_UTIL shows the CPU utilization due to processing system calls. ====== GBL_CSWITCH_RATE The average number of context switches per second during the interval. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. On Windows, this includes switches from one thread to another either inside a single process or across processes. A thread switch can be caused either by one thread asking another for information or by a thread being preempted by another higher priority thread becoming ready to run. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. 
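GBL_SYSCALL_RATE and GBL_CSWITCH_RATE above are both expressed as average events per second over the interval. Purely as an illustration of that arithmetic (a sketch with made-up counter values, not a description of the collector's internals), a per-second rate can be formed from two samples of a cumulative counter taken GBL_INTERVAL seconds apart:

    # Illustrative only: average events per second between two samples of a
    # cumulative counter separated by the collection interval, in seconds.
    def per_second_rate(count_prev, count_now, interval_seconds):
        return (count_now - count_prev) / interval_seconds

    # Example: 36,000 additional context switches over a 60-second interval.
    print(per_second_rate(1_204_000, 1_240_000, 60))   # 600.0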
====== GBL_INTERRUPT_RATE The average number of IO interrupts per second during the interval. On HP-UX and SUN, this value includes clock interrupts. To get non-clock device interrupts, subtract clock interrupts from the value. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. ====== GBL_MACHINE_MODEL The machine model. This is similar to the information returned by the GBL_MACHINE metric and the uname command (except for Solaris 10 x86/x86_64). However, this metric returns more information on some processors. On HP-UX, this is the same information returned by the model command. On some platforms, this metric value is returned as NA (not available) for non-root environments. ====== GBL_NUM_LV The sum of configured logical volumes. The number of configured virtual disks. ====== GBL_NUM_VG The number of available volume groups. If the Logical Volume class of metrics is disabled, this value will be reported as NA. ====== GBL_NUM_SWAP The number of configured swap areas. ====== GBL_NUM_APP The number of applications defined in the parm file plus one (for "other"). The application called "other" captures all other processes not defined in the parm file. You can define up to 999 applications. ====== GBL_NUM_APP_PRM The number of PRM groups configured (one per PRM Group ID). HP-UX supports up to 64 unique PRM Groups. ====== GBL_NUM_TT The number of unique Transaction Tracker (TT) transactions that have been registered on this system. ====== GBL_ALIVE_PROC An alive process is one that exists on the system. GBL_ALIVE_PROC is the sum of the alive-process-time/interval-time ratios for every process. The following diagram of a four-second interval during which two processes exist on the system should be used to understand the above definition. Note the difference between active processes, which consume CPU time, and alive processes, which merely exist on the system.

    Proc     ----------- Seconds -----------
                1          2          3          4
    ----      ----       ----       ----       ----
     A        live       live       live       live
     B      live/CPU   live/CPU     live       dead

Process A is alive for the entire four-second interval but consumes no CPU. A's contribution to GBL_ALIVE_PROC is 4 * 1/4; A contributes 0 * 1/4 to GBL_ACTIVE_PROC. B's contribution to GBL_ALIVE_PROC is 3 * 1/4; B contributes 2 * 1/4 to GBL_ACTIVE_PROC. Thus, for this interval, GBL_ACTIVE_PROC equals 0.5 and GBL_ALIVE_PROC equals 1.75. Because a process may be alive but not active, GBL_ACTIVE_PROC will always be less than or equal to GBL_ALIVE_PROC. On non-HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise. ====== GBL_OSKERNELTYPE This indicates the word size of the current kernel on the system. Some hardware can load either the 64-bit kernel or the 32-bit kernel. ====== GBL_OSKERNELTYPE_INT This indicates the word size of the current kernel on the system. Some hardware can load either the 64-bit kernel or the 32-bit kernel. ====== GBL_SYSTEM_UPTIME_HOURS The time, in hours, since the last system reboot. ====== GBL_SYSTEM_UPTIME_SECONDS The time, in seconds, since the last system reboot. ====== GBL_COLLECTOR ASCII field containing the collector name and version. This will be shown as "Nums" followed by the version information.
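The GBL_ALIVE_PROC and GBL_ACTIVE_PROC arithmetic illustrated in the diagram above can also be written as a short calculation. The sketch below simply restates the four-second, two-process example; the per-process figures come from that diagram, not from any agent interface.

    # Restating the GBL_ALIVE_PROC / GBL_ACTIVE_PROC example above.
    INTERVAL = 4.0                                # seconds in the measurement interval
    alive_seconds  = {"A": 4.0, "B": 3.0}         # seconds each process existed
    active_seconds = {"A": 0.0, "B": 2.0}         # seconds each process consumed CPU
    gbl_alive_proc  = sum(t / INTERVAL for t in alive_seconds.values())   # 4/4 + 3/4 = 1.75
    gbl_active_proc = sum(t / INTERVAL for t in active_seconds.values())  # 0/4 + 2/4 = 0.50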
====== GBL_DISK_TIME_PEAK The time, in seconds, during the interval that the busiest disk was performing IO transfers. This is for the busiest disk only, not all disk devices. This counter is based on an end-to-end measurement for each IO transfer updated at queue entry and exit points. Only local disks are counted in this measurement. NFS devices are excluded. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_DISK_UTIL_PEAK The utilization of the busiest disk during the interval. On HP-UX, this is the percentage of time during the interval that the busiest disk device had IO in progress from the point of view of the Operating System. On all other systems, this is the percentage of time during the interval that the busiest disk was performing IO transfers. It is not an average utilization over all the disk devices. Only local disks are counted in this measurement. NFS devices are excluded. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. A peak disk utilization of more than 50 percent often indicates a disk IO subsystem bottleneck situation. A bottleneck may not be in the physical disk drive itself, but elsewhere in the IO path. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_DISK_UTIL_PEAK_VM The VM IO percent of the total utilization percent of the busiest disk during the interval. Utilization is the percentage of time in use versus the time in the measurement interval. Only local disks are counted in this measurement. NFS devices are excluded. ====== GBL_DISK_UTIL_PEAK_OTHERS The non-VM IO percent of the total utilization percent of the busiest disk during the interval. Utilization is the percentage of time in use versus the time in the measurement interval. Only local disks are counted in this measurement. NFS devices are excluded. ====== GBL_FS_SPACE_UTIL_PEAK The percentage of occupied disk space to total disk space for the fullest file system found during the interval. Only locally mounted file systems are counted in this metric. This metric can be used as an indicator that at least one file system on the system is running out of disk space. On Unix systems, CDROM and PC file systems are also excluded. This metric can exceed 100 percent. This is because a portion of the file system space is reserved as a buffer and can only be used by root. If the root user has made the file system grow beyond the reserved buffer, the utilization will be greater than 100 percent. This is a dangerous situation since if the root user totally fills the file system, the system may crash. On Windows, CDROM file systems are also excluded. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_DISK_SPACE_UTIL The average percentage of occupied disk space to total disk space for all file systems during the interval. Only locally mounted file systems are counted in this metric. CDROM and PC file systems are also excluded. ====== GBL_COLLECTION_MODE This metric reports whether the data collection is running as "root" (super-user) or "non-root" (regular user). Running as non-root results in a loss of functionality which varies across Unix platforms. 
Running non-root is not available on HP-UX. The value is always "admin" on Windows. ====== GBL_DISK_UTIL On HP-UX, this is the average percentage of time during the interval that all disks had IO in progress from the point of view of the Operating System. This is the average utilization for all disks. On all other Unix systems, this is the average percentage of disk in use time of the total interval (that is, the average utilization). Only local disks are counted in this measurement. NFS devices are excluded. ====== GBL_DISK_REQUEST_QUEUE The total length of all of the disk queues at the end of the interval. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_DISK_FILE_IO_RATE The number of file IOs per second excluding virtual memory IOs during the interval. This is the sum of block IOs and raw IOs. Only local disks are counted in this measurement. NFS devices are excluded. On SUN, when a file is accessed, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). ====== GBL_DISK_FILE_IO The number of file IOs, excluding virtual memory IOs, during the interval. Only local disks are counted in this measurement. NFS devices are excluded. ====== GBL_DISK_FILE_IO_PCT The percentage of file IOs of the total physical IO during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. ====== GBL_DISK_QUEUE The average number of processes or kernel threads blocked on disk (in a "queue" within the disk drivers waiting for their file system disk IO to complete) during the interval. Processes or kernel threads doing raw IO to a disk are not included in this measurement. As this number rises, it is an indication of a disk bottleneck. This is calculated as the accumulated time that all processes or kernel threads spent blocked on DISK divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. 
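As an illustration of the QUEUE and WAIT PCT calculations described above, the sketch below derives both values for a small, made-up set of processes: the QUEUE value divides accumulated blocked-on-disk time by the interval time, while the WAIT PCT value divides it by the accumulated alive time. All numbers are hypothetical.

    # Hypothetical per-process accounting over a 60-second interval:
    # (seconds blocked on disk, seconds alive) for each process or kernel thread.
    INTERVAL = 60.0
    procs = [(12.0, 60.0), (3.0, 30.0), (0.0, 60.0)]
    blocked_total = sum(blocked for blocked, alive in procs)      # 15.0 seconds blocked in total
    alive_total   = sum(alive for blocked, alive in procs)        # 150.0 seconds alive in total
    disk_queue    = blocked_total / INTERVAL                      # 0.25 processes blocked on average
    disk_wait_pct = 100.0 * blocked_total / alive_total           # 10.0 percent of alive time blocked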
No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_DISK_SUBSYSTEM_QUEUE The average number of processes or kernel threads blocked on the disk subsystem (in a "queue" waiting for their file system disk IO to complete) during the interval. On HP-UX, this is based on the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing raw IO to a disk are not included in this measurement. On Linux, this is based on the sum of all processes or kernel threads blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. This is calculated as the accumulated time mentioned above divided by the interval time. As this number rises, it is an indication of a disk bottleneck. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_DISK_SUBSYSTEM_WAIT_PCT The percentage of time processes or kernel threads were blocked on the disk subsystem (waiting for their file system IOs to complete) during the interval. On HP-UX, this is based on the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing raw IO to a disk are not included in this measurement. On Linux, this is based on the sum of all processes or kernel threads blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. 
This is calculated as the accumulated time mentioned above divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_NETWORK_SUBSYSTEM_QUEUE The average number of processes or kernel threads blocked on the network subsystem (waiting for their network activity to complete) during the interval. This is the sum of processes or kernel threads in the LAN, NFS, and RPC wait states. This does not include processes or kernel threads blocked on SOCKT (that is, sockets) waits, as some processes or kernel threads sit idle in SOCKT waits for long periods. This is calculated as the accumulated time that all processes or kernel threads spent blocked on (LAN + NFS + RPC) divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_NETWORK_SUBSYSTEM_WAIT_PCT The percentage of time processes or kernel threads were blocked on the network subsystem (waiting for their network activity to complete) during the interval. This is the sum of processes or kernel threads in the LAN, NFS, and RPC wait states. This does not include processes or kernel threads blocked on SOCKT (that is, sockets) waits, as some processes or kernel threads sit idle in SOCKT waits for long periods. 
This is calculated as the accumulated time that all processes or kernel threads spent blocked on (LAN + NFS + RPC) divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_IPC_SUBSYSTEM_QUEUE The average number of processes or kernel threads blocked on the InterProcess Communication (IPC) subsystems (waiting for their interprocess communication activity to complete) during the interval. This is the sum of processes or kernel threads in the IPC, MSG, SEM, PIPE, SOCKT (that is, sockets) and STRMS (that is, streams IO) wait states. This is calculated as the accumulated time that all processes or kernel threads spent blocked on (IPC + MSG + SEM + PIPE + SOCKT + STRMS) divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_IPC_SUBSYSTEM_WAIT_PCT The percentage of time processes or kernel threads were blocked on the InterProcess Communication (IPC) subsystems (waiting for their interprocess communication activity to complete) during the interval. This is the sum of processes or kernel threads in the IPC, MSG, SEM, PIPE, SOCKT (that is, sockets) and STRMS (that is, streams IO) wait states. 
This is calculated as the accumulated time that all processes or kernel threads spent blocked on (IPC + MSG + SEM + PIPE + SOCKT + STRMS) divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_NUM_HBA The number of Host Bus Adapters on the system. This metric is supported on HP-UX 11i v3 and above. ====== GBL_NUM_TAPE The number of tape devices attached to the system. This metric is supported on HP-UX 11i v3 and above. ====== GBL_NUM_ONLINE_VCPU The number of virtual processors currently online. This metric is the same as the "Online Virtual CPUs" field of the 'lparstat -i' command. ====== GBL_BOOT_TIME The date and time when the system was last booted. ====== GBL_GMTOFFSET The difference, in minutes, between local time and GMT (Greenwich Mean Time). ====== GBL_STARTED_PROC_RATE The number of processes that started per second during the interval. ====== GBL_SYSCALL The number of system calls during the interval. High system call rates are normal on busy systems, especially with IO-intensive applications. Abnormally high system call rates may indicate problems such as a "hung" terminal that is stuck in a loop generating read system calls. ====== GBL_INTERRUPT The number of IO interrupts during the interval. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. ====== GBL_ZOMBIE_PROC The total number of zombie processes present during the interval. ====== GBL_MACHINE_VENDOR The machine vendor. This metric returns the vendor name for this machine. On some platforms and virtual environments, this metric value is returned as NA (not available). ====== GBL_MACHINE_UUID The System Unique Identifier. This metric returns the System Unique Identifier string. This metric is available only on Linux and Windows platforms. On some platforms, if this metric is not available, the SystemID is returned as the value for this metric. ====== GBL_DISTRIBUTION The software distribution, if available. ====== GBL_CPU_TOTAL_UTIL Percentage of time the CPU was not idle during the interval. This is calculated as: GBL_CPU_TOTAL_UTIL = GBL_CPU_USER_MODE_UTIL + GBL_CPU_SYS_MODE_UTIL. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available.
GBL_CPU_TOTAL_UTIL + GBL_CPU_IDLE_UTIL = 100% This metric varies widely on most systems, depending on the workload. A consistently high CPU utilization can indicate a CPU bottleneck, especially when other indicators such as GBL_RUN_QUEUE and GBL_ACTIVE_PROC are also high. High CPU utilization can also occur on systems that are bottlenecked on memory, because the CPU spends more time paging and swapping. NOTE: On Windows, this metric may not equal the sum of the APP_CPU_TOTAL_UTIL metrics. Microsoft states that "this is expected behavior" because this GBL_CPU_TOTAL_UTIL metric is taken from the performance library Processor objects while the APP_CPU_TOTAL_UTIL metrics are taken from the Process objects. Microsoft states that there can be CPU time accounted for in the Processor system objects that may not be seen in the Process objects. On a logical system, this metric indicates the logical utilization with respect to number of processors available for the logical system (GBL_NUM_CPU). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_TOTAL_TIME The total time, in seconds, that the CPU was not idle in the interval. This is calculated as GBL_CPU_TOTAL_TIME = GBL_CPU_USER_MODE_TIME + GBL_CPU_SYS_MODE_TIME On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On AIX System WPARs, this metric value is calculated against physical cpu time. ====== GBL_TOTAL_DISPATCH_TIME Total lpar dispatch time in seconds during the interval. 
On AIX 5.3 or below, value of this metric will be "na". On AIX System WPARs, this metric is NA. ====== GBL_CPU_SYS_MODE_UTIL Percentage of time the CPU was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. This metric is a subset of the GBL_CPU_TOTAL_UTIL percentage. This is NOT a measure of the amount of time used by system daemon processes, since most system daemons spend part of their time in user mode and part in system calls, like any other process. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. High system mode CPU percentages are normal for IO intensive applications. Abnormally high system mode CPU percentages can indicate that a hardware problem is causing a high interrupt rate. It can also indicate programs that are not calling system calls efficiently. On a logical system, this metric indicates the percentage of time the logical processor was in kernel mode during this interval. On Hyper-V host, this metric indicates the percentage of time spent in Hypervisor code. ====== GBL_CPU_SYS_MODE_TIME The time, in seconds, that the CPU was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). 
To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On AIX System WPARs, this metric value is calculated against physical cpu time. On Hyper-V host, this metric indicates the time spent in Hypervisor code. ====== GBL_CPU_TRAP_TIME The time the CPU was in trap handler code during the interval. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_TRAP_UTIL The percentage of time the CPU was executing trap handler code during the interval. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_USER_MODE_TIME The time, in seconds, that the CPU was in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On a system with multiple CPUs, this metric is normalized. 
That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On AIX System WPARs, this metric value is calculated against physical cpu time. On Hyper-V host, this metric indicates the time spent in guest code. ====== GBL_CPU_USER_MODE_UTIL The percentage of time the CPU was in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. This metric is a subset of the GBL_CPU_TOTAL_UTIL percentage. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. High user mode CPU percentages are normal for computation-intensive applications. Low values of user CPU utilization compared to relatively high values for GBL_CPU_SYS_MODE_UTIL can indicate an application or hardware problem. On a logical system, this metric indicates the percentage of time the logical processor was in user mode during this interval. On Hyper-V host, this metric indicates the percentage of time spent in guest code. ====== GBL_CPU_NICE_UTIL The percentage of time that the CPU was in user mode at a nice priority during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. 
Negative nice values range from 0 to 19. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_NICE_TIME The time, in seconds, that the CPU was in user mode at a nice priority during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_NNICE_UTIL The percentage of time that the CPU was in user mode at a nice priority calculated from processes with negative nice values during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. 
On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_NNICE_TIME The time, in seconds, that the CPU was in user mode at a nice priority calculated from processes with negative nice values during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_REALTIME_UTIL The percentage of time that the CPU was in user mode at a realtime priority during the interval. Running at a realtime priority means that the process or kernel thread was run using the rtprio command or the rtprio system call to alter its priority. Realtime priorities range from zero to 127 and are absolute priorities, meaning the realtime process with the lowest priority runs as long as it wants to. Since this can have a huge impact on the system, the realtime CPU is tracked separately to make visible the effect of using realtime priorities. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. 
If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_REALTIME_TIME The time, in seconds, that the CPU was in user mode at a realtime priority during the interval. Running at a realtime priority means that the process or kernel thread was run using the rtprio command or the rtprio system call to alter its priority. Realtime priorities range from zero to 127 and are absolute priorities, meaning the realtime process with the lowest priority runs as long as it wants to. Since this can have a huge impact on the system, the realtime CPU is tracked separately to make visible the effect of using realtime priorities. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_CSWITCH_UTIL The percentage of time that the CPU spent context switching during the interval. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. 
On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_CSWITCH_TIME The time, in seconds, that the CPU spent context switching during the interval. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_INTERRUPT_UTIL The percentage of time that the CPU spent processing interrupts during the interval. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On Hyper-V host, this metric is NA. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. 
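The core-based versus thread-based normalization described above amounts to choosing the divisor applied to raw CPU consumption. The sketch below illustrates the difference under assumed numbers (8 cores with 2 hardware threads each, 600 CPU-seconds consumed in a 60-second interval); it is an illustration only and does not reflect the agent's internal code.

    # Illustrative comparison of core-based vs. thread-based normalization (assumed numbers).
    cores, threads_per_core = 8, 2
    logical_cpus = cores * threads_per_core            # 16 logical CPUs
    cpu_seconds  = 600.0                               # CPU consumed across all CPUs in the interval
    interval     = 60.0                                # measurement interval in seconds
    util_core_based   = 100.0 * cpu_seconds / (interval * cores)         # ignore_mt set: 125.0
    util_thread_based = 100.0 * cpu_seconds / (interval * logical_cpus)  # ignore_mt not set: 62.5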
====== GBL_CPU_INTERRUPT_TIME The time, in seconds, that the CPU spent processing interrupts during the interval. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On Hyper-V host, this metric is NA. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_VFAULT_TIME The time, in seconds, the CPU was handling page faults during the interval. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_VFAULT_UTIL The percentage of time the CPU was handling page faults during the interval. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). 
To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_IDLE_UTIL The percentage of time that the CPU was idle during the interval. This is the total idle time, including waiting for I/O (and stolen time on Linux). On Unix systems, this is the same as the sum of the "%idle" and "%wio" fields reported by the "sar -u" command. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. On Solaris non-global zones, this metric is N/A. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_IDLE_TIME The time, in seconds, that the CPU was idle during the interval. This is the total idle time, including waiting for I/O (and stolen time on Linux). On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. On AIX System WPARs, this metric value is calculated against physical cpu time. On Solaris non-global zones, this metric is N/A. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_NORMAL_UTIL The percentage of time that the CPU was in user mode at normal priority during the interval. 
Normal priority user mode CPU excludes CPU used at real-time and nice priorities. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_NORMAL_TIME The time, in seconds, that the CPU was in user mode at normal priority during the interval. Normal priority user mode CPU excludes CPU used at real-time and nice priorities. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_SYSCALL_UTIL The percentage of time that the CPU was in system mode (excluding interrupt, context switch, trap, or vfault CPU) during the interval. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). 
To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_SYSCALL_TIME The time, in seconds, that the CPU was in system mode (excluding interrupt, context switch, trap, or vfault CPU) during the interval. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== GBL_CPU_WAIT_UTIL The percentage of time during the interval that the CPU was idle and there were processes waiting for physical IOs to complete. IO wait time is included in idle time on all systems. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On Solaris non-global zones, this metric is N/A. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On Linux, wait time includes CPU steal time. ====== GBL_CPU_WAIT_TIME The time, in seconds, that the CPU was idle and there were processes waiting for physical IOs to complete during the interval. 
IO wait time is included in idle time on all systems. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On AIX System WPARs, this metric value is calculated against physical cpu time. On Solaris non-global zones, this metric is N/A. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On Linux, wait time includes CPU steal time. ====== GBL_CPU_CLOCK The clock speed of the CPUs in MHz if all of the processors have the same clock speed. Otherwise, "na" is shown if the processors have different clock speeds. Note that Linux supports dynamic frequency scaling and if it is enabled then there can be a change in CPU speed with varying load. ====== GBL_CPU_STOLEN_UTIL The percentage of time that was stolen from all CPUs during the interval. Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as '%steal' in 'sar' and 'st' in 'vmstat'. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. ====== GBL_CPU_STOLEN_TIME The time, in seconds, that was stolen from all the CPUs during the interval. Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as '%steal' in 'sar' and 'st' in 'vmstat'. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. ====== GBL_CPU_GUEST_UTIL The percentage of time that the CPUs were used to service guests during the interval. Guest time, on Linux KVM hosts, is the time that is spent servicing guests. Xen hosts, as of this release, do not update these counters, neither do other OSes. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. ====== GBL_CPU_GUEST_TIME The time, in seconds, spent by CPUs to service guests during the interval. 
Guest time, on Linux KVM hosts, is the time that is spent servicing guests. Xen hosts, as of this release, do not update these counters, neither do other OSes. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. ====== GBL_NET_PACKET The total number of successful inbound and outbound packets for all network interfaces during the interval. These are the packets that have been processed without errors or collisions. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. ====== GBL_NET_PACKET_RATE The number of successful packets per second (both inbound and outbound) for all network interfaces during the interval. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_NET_IN_PACKET The number of successful packets received through all network interfaces during the interval. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the "Inbound Unicast Packets" and "Inbound Non-Unicast Packets" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the "Ipkts" column (RX-OK on Linux) from the "netstat -i" command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_NET_IN_PACKET_RATE The number of successful packets per second received through all network interfaces during the interval. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_NET_OUT_PACKET The number of successful packets sent through all network interfaces during the last interval. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the "Outbound Unicast Packets" and "Outbound Non-Unicast Packets" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only. 
For all other Unix systems, this is the same as the sum of the "Opkts" column (TX-OK on Linux) from the "netstat -i" command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_NET_OUT_PACKET_RATE The number of successful packets per second sent through the network interfaces during the interval. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_NET_COLLISION The number of collisions that occurred on all network interfaces during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not include deferred packets. This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the "Single Collision Frames", "Multiple Collision Frames", "Late Collisions", and "Excessive Collisions" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the "Coll" column from the "netstat -i" command ("collisions" from the "netstat -i -e" command on Linux) for a network device. See also netstat(1). AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== GBL_NET_COLLISION_RATE The number of collisions per second on all network interfaces during the interval. This metric does not include deferred packets. This does not include data for loopback interface. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_NET_COLLISION_PCT The percentage of collisions to total outbound packet attempts during the interval. Outbound packet attempts include both successful packets and collisions. This does not include data for loopback interface. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets. AIX does not support the collision count for the ethernet interface. 
The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_NET_IP_FRAGMENTS_RECEIVED The number of valid IPv4 datagram fragments received by the host. ====== GBL_NET_IP_FWD_DATAGRAMS The number of IPv4 datagrams this host has forwarded. In other words, the number of IPv4 datagrams for which this host has been used as a router. ====== GBL_NET_IP_REASSEMBLY_REQUIRED The number of IPv4 datagram fragments sent to this host for local delivery which required reassembly before being given to the Upper Layer Protocol(s). ====== GBL_NET_ERROR The number of errors that occurred on all network interfaces during the interval. This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the "Inbound Errors" and "Outbound Errors" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of "Ierrs" (RX-ERR on Linux) and "Oerrs" (TX-ERR on Linux) from the "netstat -i" command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== GBL_NET_ERROR_RATE The number of errors per second on all network interfaces during the interval. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_NET_IN_ERROR The number of inbound errors that occurred on all network interfaces during the interval. A large number of errors may indicate a hardware problem on the network. This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the "Inbound Errors" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of "Ierrs" (RX-ERR on Linux) and "Oerrs" (TX-ERR on Linux) from the "netstat -i" command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== GBL_NET_IN_ERROR_PCT The percentage of inbound network errors to total inbound packet attempts during the interval. Inbound packet attempts include both packets successfully received and those that encountered errors. This does not include data for loopback interface. A large number of errors may indicate a hardware problem on the network. The percentage of inbound errors to total packets attempted should remain low. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. 
On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_NET_IN_ERROR_RATE The number of inbound errors per second on all network interfaces during the interval. This does not include data for loopback interface. A large number of errors may indicate a hardware problem on the network. The percentage of inbound errors to total packets attempted should remain low. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_NET_OUT_ERROR The number of outbound errors that occurred on all network interfaces during the interval. This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the "Outbound Errors" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of "Oerrs" (TX-ERR on Linux) from the "netstat -i" command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== GBL_NET_OUT_ERROR_PCT The percentage of outbound network errors to total outbound packet attempts during the interval. Outbound packet attempts include both packets successfully sent and those that encountered errors. This does not include data for loopback interface. The percentage of outbound errors to total packets attempted to be transmitted should remain low. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_NET_OUT_ERROR_RATE The number of outbound errors per second on all network interfaces during the interval. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_NET_COLLISION_1_MIN_RATE The number of collisions per minute on all network interfaces during the interval. This metric does not include deferred packets. This does not include data for loopback interface. Collisions occur on any busy network, but abnormal collision rates could indicate a hardware or software problem. AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_NET_ERROR_1_MIN_RATE The number of errors per minute on all network interfaces during the interval. This rate should normally be zero or very small. A large error rate can indicate a hardware or software problem. 
This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== GBL_NET_OUTQUEUE The sum of the outbound queue lengths for all network interfaces (BYNETIF_QUEUE). This metric is derived from the same source as the Outbound Queue Length shown in the lanadmin(1M) program. This does not include data for loopback interface. For most interfaces, the outbound queue is usually zero. When the value is non-zero over a period of time, the network may be experiencing a bottleneck. Determine which network interface has a non-zero queue and compare its traffic levels to normal. Also see if processes are blocking on network wait states. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== GBL_NET_DEFERRED The number of outbound deferred packets due to the network being in use during the interval. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== GBL_NET_DEFERRED_PCT The percentage of deferred packets to total outbound packet attempts during the interval. Outbound packet attempts include both packets successfully transmitted and those that were deferred. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_NET_DEFERRED_RATE The number of deferred packets per second on all network interfaces during the interval. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== GBL_DISK_PHYS_IO_RATE The number of physical IOs per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk IOs are counted, including file system IO, virtual memory IO and raw IO. On HP-UX, this is calculated as: GBL_DISK_PHYS_IO_RATE = GBL_DISK_FS_IO_RATE + GBL_DISK_VM_IO_RATE + GBL_DISK_SYSTEM_IO_RATE + GBL_DISK_RAW_IO_RATE. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_DISK_PHYS_IO The number of physical IOs during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk IOs are counted, including file system IO, virtual memory IO and raw IO. On HP-UX, this is calculated as: GBL_DISK_PHYS_IO = GBL_DISK_FS_IO + GBL_DISK_VM_IO + GBL_DISK_SYSTEM_IO + GBL_DISK_RAW_IO. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_DISK_PHYS_READ_RATE The number of physical reads per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On HP-UX, this is calculated as: GBL_DISK_PHYS_READ_RATE = GBL_DISK_FS_READ_RATE + GBL_DISK_VM_READ_RATE + GBL_DISK_SYSTEM_READ_RATE + GBL_DISK_RAW_READ_RATE. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_DISK_PHYS_READ The number of physical reads during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On HP-UX, there are many reasons why there is not a direct correlation between the number of logical IOs and physical IOs. For example, small sequential logical reads may be satisfied from the buffer cache, resulting in fewer physical IOs than logical IOs. Conversely, large logical IOs or small random IOs may result in more physical than logical IOs. Logical volume mappings, logical disk mirroring, and disk striping also tend to remove any correlation. On HP-UX, this is calculated as: GBL_DISK_PHYS_READ = GBL_DISK_FS_READ + GBL_DISK_VM_READ + GBL_DISK_SYSTEM_READ + GBL_DISK_RAW_READ. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_DISK_PHYS_READ_PCT The percentage of physical reads of total physical IO during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_DISK_REM_RAW_IO_RATE The total number of remote raw IOs per second during the interval. On HP-UX, remote raw disk IO typically occurs when a client accesses a server disk in raw mode. ====== GBL_DISK_REM_RAW_IO The number of remote raw IOs during the interval. On HP-UX, remote raw disk IO typically occurs when a client accesses a server disk in raw mode. ====== GBL_DISK_REM_RAW_IO_PCT The percentage of remote raw IOs to total remote physical disk IOs made during the interval. On HP-UX, remote raw disk IO typically occurs when a client accesses a server disk in raw mode.
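As an informal illustration of the HP-UX decomposition given in the GBL_DISK_PHYS_READ entries above, the following Python sketch sums the read components and derives the corresponding rate. The interval counts are invented sample values, and the variable names simply mirror the metric names; they are not part of any collector API.

    # Hypothetical interval counts for the HP-UX physical read components
    # (sample values only).
    fs_read = 1200       # GBL_DISK_FS_READ
    vm_read = 300        # GBL_DISK_VM_READ
    system_read = 150    # GBL_DISK_SYSTEM_READ
    raw_read = 50        # GBL_DISK_RAW_READ

    # On HP-UX the physical read count is the sum of the four components.
    phys_read = fs_read + vm_read + system_read + raw_read   # GBL_DISK_PHYS_READ

    # Dividing by the interval length gives the corresponding rate.
    interval_seconds = 300
    phys_read_rate = phys_read / interval_seconds             # GBL_DISK_PHYS_READ_RATE

    print(phys_read, round(phys_read_rate, 2))                # 1700 5.67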
====== GBL_DISK_PHYS_WRITE_RATE The number of physical writes per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On HP-UX, this is calculated as: GBL_DISK_PHYS_WRITE_RATE = GBL_DISK_FS_WRITE_RATE + GBL_DISK_VM_WRITE_RATE + GBL_DISK_SYSTEM_WRITE_RATE + GBL_DISK_RAW_WRITE_RATE. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_DISK_PHYS_WRITE The number of physical writes during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On HP-UX, there are many reasons why there is not a direct correlation between logical IOs and physical IOs. For example, small logical writes may end up entirely in the buffer cache, and later generate fewer physical IOs when written to disk due to the larger IO size. Or conversely, small logical writes may require physical prefetching of the corresponding disk blocks before the data is merged and posted to disk. Logical volume mappings, logical disk mirroring, and disk striping also tend to remove any correlation. On HP-UX, this is calculated as: GBL_DISK_PHYS_WRITE = GBL_DISK_FS_WRITE + GBL_DISK_VM_WRITE + GBL_DISK_SYSTEM_WRITE + GBL_DISK_RAW_WRITE. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_DISK_PHYS_WRITE_PCT The percentage of physical writes of total physical IO during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A.
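As a worked example of the read and write percentage metrics defined above (GBL_DISK_PHYS_READ_PCT and GBL_DISK_PHYS_WRITE_PCT), the following Python fragment derives both percentages from interval counts. The input totals are invented for the example and are not taken from any real system.

    # Invented interval totals; in practice these come from the collector.
    phys_read = 1700                    # GBL_DISK_PHYS_READ
    phys_write = 800                    # GBL_DISK_PHYS_WRITE
    phys_io = phys_read + phys_write    # GBL_DISK_PHYS_IO

    # Percentage of reads and of writes of the total physical IO for the interval.
    read_pct = 100.0 * phys_read / phys_io if phys_io else 0.0     # GBL_DISK_PHYS_READ_PCT
    write_pct = 100.0 * phys_write / phys_io if phys_io else 0.0   # GBL_DISK_PHYS_WRITE_PCT

    print(round(read_pct, 1), round(write_pct, 1))                 # 68.0 32.0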
====== GBL_DISK_PHYS_BYTE_RATE The average number of KBs per second at which data was transferred to and from disks during the interval. The bytes for all types of physical IOs are counted. Only local disks are counted in this measurement. NFS devices are excluded. This is a measure of the physical data transfer rate. It is not directly related to the number of IOs, since IO requests can be of differing lengths. This is an indicator of how much data is being transferred to and from disk devices. Large spikes in this metric can indicate a disk bottleneck. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IOs. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_DISK_PHYS_READ_BYTE_RATE The average number of KBs transferred from the disk per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_DISK_PHYS_READ_BYTE The number of KBs physically transferred from the disk during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. ====== GBL_DISK_PHYS_WRITE_BYTE_RATE The average number of KBs transferred to the disk per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_DISK_PHYS_WRITE_BYTE The number of KBs (or MBs if specified) physically transferred to the disk during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. ====== GBL_DISK_PHYS_BYTE The number of KBs transferred to and from disks during the interval. The bytes for all types of physical IOs are counted. Only local disks are counted in this measurement. NFS devices are excluded. It is not directly related to the number of IOs, since IO requests can be of differing lengths. On Unix systems, this includes file system IO, virtual memory IO, and raw IO. On Windows, all types of physical IOs are counted. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_DISK_REM_RAW_BYTE The number of remote KBs (or MBs if specified) transferred to or from a raw disk during the interval. On HP-UX, remote raw disk IO typically occurs when a client accesses a server disk in raw mode. ====== GBL_DISK_LOGL_IO_RATE The number of logical IOs per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the read and write system calls that are directed to disk devices. Also counted are read and write system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, writev, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. ====== GBL_DISK_LOGL_IO The number of logical IOs made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the read and write system calls that are directed to disk devices. Also counted are read and write system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, writev, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. 
Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. ====== GBL_DISK_LOGL_READ_RATE On most systems, this is the average number of logical reads per second made during the interval. On SUN, this is the average number of logical block reads per second made during the interval. On Windows, this includes both buffered (cached) read requests and unbuffered reads. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. ====== GBL_DISK_LOGL_READ On most systems, this is the number of logical reads made during the interval. On SUN, this is the number of logical block reads made during the interval. On Windows, this includes both buffered (cached) read requests and unbuffered reads. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen.
A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. ====== GBL_DISK_LOGL_READ_PCT On most systems, this is the percentage of logical reads of the total logical IO during the interval. On SUN, this is the percentage of logical block reads of the total logical IOs during the interval. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. ====== GBL_DISK_CACHE_READ_RATE The number of cached reads per second made during the interval. ====== GBL_DISK_CACHE_READ The number of cached reads made during the interval. ====== GBL_DISK_LOGL_WRITE_RATE On most systems, this is the average number of logical writes per second made during the interval. On SUN, this is the average number of logical block writes per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. ====== GBL_DISK_LOGL_WRITE On most systems, this is the number of logical writes made during the interval. 
On SUN, this is the number of logical block writes during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. ====== GBL_DISK_LOGL_WRITE_PCT On most systems, this is the percentage of logical writes of the logical IO during the interval. On SUN, this is the percentage of logical block writes of the total logical block IOs during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. ====== GBL_DISK_LOGL_BYTE_RATE The number of KBs transferred per second via disk IO calls during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the read and write system calls that are directed to disk devices. Also counted are read and write system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, writev, send, sento, sendmsg, and ipcsend. ====== GBL_DISK_LOGL_READ_BYTE_RATE The number of KBs transferred per second via logical reads during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. 
Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. ====== GBL_DISK_LOGL_READ_BYTE The number of KBs transferred through logical reads during the last interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. ====== GBL_DISK_LOGL_WRITE_BYTE_RATE The number of KBs per second transferred via logical writes during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. ====== GBL_DISK_LOGL_WRITE_BYTE The number of KBs transferred via logical writes during the last interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. ====== GBL_DISK_FS_READ The number of file system disk reads during the interval. Only local disks are counted in this measurement. NFS devices are excluded. These are physical reads generated by user file system access and do not include virtual memory reads, system reads (inode access), or reads relating to raw disk access. An exception is user files accessed via the mmap(2) call, which does not show their physical reads in this category. They appear under virtual memory reads. ====== GBL_DISK_FS_WRITE The number of file system disk writes during the interval. Only local disks are counted in this measurement. NFS devices are excluded. These are physical writes generated by user file system access and do not include virtual memory writes, system writes (inode updates), or writes relating to raw disk access. An exception is user files accessed via the mmap(2) call, which does not show their physical writes in this category. They appear under virtual memory writes. ====== GBL_DISK_FS_READ_RATE The number of file system disk reads per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. These are physical reads generated by user file system access and do not include virtual memory reads, system reads (inode access), or reads relating to raw disk access. An exception is user files accessed via the mmap(2) call, which does not show their physical reads in this category. They appear under virtual memory reads. ====== GBL_DISK_FS_WRITE_RATE The number of file system disk writes per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. These are physical writes generated by user file system access and do not include virtual memory writes, system writes (inode updates), or writes relating to raw disk access. 
An exception is user files accessed via the mmap(2) call, which does not show their physical writes in this category. They appear under virtual memory writes. ====== GBL_DISK_FS_IO_RATE The total of file system disk physical reads and writes per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. These are physical IOs generated by user file system access and do not include virtual memory IOs, system IOs (inode updates), or IOs relating to raw disk access. An exception is user files accessed via the mmap(2) call, which will not show their physical IOs in this category. They appear under virtual memory IOs. ====== GBL_DISK_FS_IO The total of physical file system disk reads and writes during the interval. Only local disks are counted in this measurement. NFS devices are excluded. These are physical IOs generated by user file system access and do not include virtual memory IOs, system IOs (inode updates), or IOs relating to raw disk access. An exception is user files accessed via the mmap(2) call, which will not show their physical IOs in this category. They appear under virtual memory IOs. ====== GBL_DISK_FS_IO_PCT The percentage of file system generated physical IOs of the total physical IOs during the interval. Only local disks are counted in this measurement. NFS devices are excluded. These are physical IOs generated by user file system access and do not include virtual memory IOs, system IOs (inode updates), or IOs relating to raw disk access. An exception is user files accessed via the mmap(2) call, which will not show their physical IOs in this category. They appear under virtual memory IOs. ====== GBL_DISK_FS_BYTE The number of file system KBs (or MBs if specified) physically transferred to or from the disk during the interval. Only local disks are counted in this measurement. NFS devices are excluded. These are bytes transferred by user file system access and do not include bytes transferred via virtual memory IOs, system IOs (inode updates), or IOs relating to raw disk access. An exception is user files accessed via the mmap(2) call, which will not show their bytes transferred in this category. They appear under virtual memory bytes transferred. ====== GBL_DISK_VM_READ The number of virtual memory reads made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the reads to user file data are not included in this metric unless they were accessed via the mmap(2) system call. On AIX System WPARs, this metric is NA. ====== GBL_DISK_VM_READ_RATE The number of virtual memory reads per second made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the reads to user file data are not included in this metric unless they were accessed via the mmap(2) system call. On AIX System WPARs, this metric is NA. ====== GBL_DISK_VM_WRITE The number of virtual memory writes made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the writes to user file data are not included in this metric unless they were done via the mmap(2) system call. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On AIX System WPARs, this metric is NA. ====== GBL_DISK_VM_WRITE_RATE The number of virtual memory writes per second made during the interval. 
Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the writes to user file data are not included in this metric unless they were done via the mmap(2) system call. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On AIX System WPARs, this metric is NA. ====== GBL_DISK_VM_IO_RATE The number of virtual memory IOs per second made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the IOs to user file data are not included in this metric unless they were done via the mmap(2) system call. On SUN, when a file is accessed, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On SUN, this metric is calculated by subtracting raw and block IOs from physical IOs. Tape drive accesses are included in the raw IOs, but not in the physical IOs. Therefore, when tape drive accesses are occurring on a system, all virtual memory and raw IO is counted as raw IO. For example, you may see heavy raw IO occurring during system backup. Raw IOs for disks are counted in the physical IOs. To determine if the raw IO is tape access versus disk access, compare the global physical disk accesses to the total of raw, block, and VM IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_DISK_VM_IO The total number of virtual memory IOs made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the IOs to user file data are not included in this metric unless they were done via the mmap(2) system call. On SUN, when a file is accessed, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On SUN, this metric is calculated by subtracting raw and block IOs from physical IOs. Tape drive accesses are included in the raw IOs, but not in the physical IOs. Therefore, when tape drive accesses are occurring on a system, all virtual memory and raw IO is counted as raw IO. For example, you may see heavy raw IO occurring during system backup. Raw IOs for disks are counted in the physical IOs. To determine if the raw IO is tape access versus disk access, compare the global physical disk accesses to the total of raw, block, and VM IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. 
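As a rough illustration of the SUN behavior described above, the following sketch applies the raw-versus-tape cross-check: physical IOs count disk devices only, while raw IOs also count tape accesses, so the totals diverge when a tape device is being used. The function name and the sample values are purely illustrative and are not part of the collector.

  # Illustrative sketch (Python, hypothetical sample values): the SUN check
  # for whether raw IO activity went to a disk or to a non-disk device.
  def raw_io_target(phys_io, raw_io, block_io, vm_io):
      """Return where the raw IO activity most likely went."""
      if phys_io == raw_io + block_io + vm_io:
          return "disk, floppy, or CD drive"
      return "non-disk device such as a tape drive"

  # Hypothetical interval totals gathered from the global disk metrics:
  print(raw_io_target(phys_io=1200, raw_io=300, block_io=400, vm_io=500))  # disk
  print(raw_io_target(phys_io=1200, raw_io=650, block_io=400, vm_io=500))  # tape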
====== GBL_DISK_VM_IO_PCT On HP-UX and AIX, this is the percentage of virtual memory IO requests of total physical disk IOs during the interval. On the other Unix systems, this is the percentage of virtual memory IOs of the total number of physical IOs during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the IOs to user file data are not included in this metric unless they were done via the mmap(2) system call. On SUN, when a file is accessed, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On SUN, this metric is calculated by subtracting raw and block IOs from physical IOs. Tape drive accesses are included in the raw IOs, but not in the physical IOs. Therefore, when tape drive accesses are occurring on a system, all virtual memory and raw IO is counted as raw IO. For example, you may see heavy raw IO occurring during system backup. Raw IOs for disks are counted in the physical IOs. To determine if the raw IO is tape access versus disk access, compare the global physical disk accesses to the total of raw, block, and VM IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive.
====== GBL_DISK_VM_BYTE The number of virtual memory KBs (or MBs if specified) transferred to or from the disk during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the user file data transfers are not included in this metric unless they were done via the mmap(2) system call.
====== GBL_DISK_BLOCK_READ The number of block reads during the interval. On SUN, these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical reads generated by file system access and do not include virtual memory reads, or reads relating to raw disk access. These do include the read of the inode (system read) and the file data read. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone.
====== GBL_DISK_BLOCK_WRITE The number of block writes during the interval. On SUN, these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems.
When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical writes generated by file system access and do not include virtual memory writes, or writes relating to raw disk access. These do include the write of the inode (system write) and the file system data write. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. ====== GBL_DISK_SYSTEM_READ Number of physical disk reads generated by the kernel for file system management (inode accesses) during the interval. Only local disks are counted in this measurement. NFS devices are excluded. ====== GBL_DISK_SYSTEM_WRITE Number of physical disk writes generated by the kernel for file system management (inode updates) during the interval. Only local disks are counted in this measurement. NFS devices are excluded. ====== GBL_DISK_SYSTEM_READ_RATE Number of physical disk reads per second generated by the kernel for file system management (inode accesses) during the interval. Only local disks are counted in this measurement. NFS devices are excluded. ====== GBL_DISK_SYSTEM_WRITE_RATE Number of physical disk writes per second generated by the kernel for file system management (inode updates) during the interval. Only local disks are counted in this measurement. NFS devices are excluded. ====== GBL_DISK_SYSTEM_IO_RATE The number of physical disk IOs per second generated by the kernel for file system management (inode accesses or updates) during the interval. Only local disks are counted in this measurement. NFS devices are excluded. ====== GBL_DISK_SYSTEM_IO The number of physical disk IOs generated by the kernel for file system management (inode accesses or updates) during the interval. Only local disks are counted in this measurement. NFS devices are excluded. ====== GBL_DISK_SYSTEM_IO_PCT The percentage of physical disk IOs generated by the kernel for file system management (inode accesses or updates) to the total number of physical disk IOs during the interval. Only local disks are counted in this measurement. NFS devices are excluded. ====== GBL_DISK_SYSTEM_BYTE The number of KBs (or MBs if specified) transferred by the kernel from or to the disk for file system management access or updates during the interval. Only local disks are counted in this measurement. NFS devices are excluded. File system management IOs are the physical accesses required to obtain or update internal information about the file system structure (inode access). Accesses or updates to user data are not included in this metric. ====== GBL_DISK_RAW_READ The number of raw reads during the interval. Only accesses to local disk devices are counted. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. ====== GBL_DISK_RAW_WRITE The number of raw writes during the interval. Only accesses to local disk devices are counted. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. ====== GBL_DISK_RAW_READ_RATE The number of raw reads per second during the interval. Only accesses to local disk devices are counted. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. 
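The GBL_DISK_SYSTEM_* family above fits together in a simple way: the IO count is the sum of the read and write counts, the rate divides by the interval length, and the percentage is taken against all physical disk IOs. The worked sketch below illustrates those relationships with hypothetical numbers; it is not the collector's implementation.

  # Illustrative sketch (Python, hypothetical values): how the file system
  # management (inode) IO metrics relate to one another over one interval.
  interval_seconds = 60
  gbl_disk_system_read = 240        # inode accesses
  gbl_disk_system_write = 120       # inode updates
  gbl_disk_phys_io = 1800           # all physical disk IOs in the interval

  gbl_disk_system_io = gbl_disk_system_read + gbl_disk_system_write
  gbl_disk_system_io_rate = gbl_disk_system_io / interval_seconds
  gbl_disk_system_io_pct = 100.0 * gbl_disk_system_io / gbl_disk_phys_io

  print(gbl_disk_system_io)                # 360 IOs
  print(gbl_disk_system_io_rate)           # 6.0 IOs per second
  print(round(gbl_disk_system_io_pct, 1))  # 20.0 percent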
====== GBL_DISK_RAW_WRITE_RATE The number of raw writes per second during the interval. Only accesses to local disk devices are counted. On Sun, tape drive accesses are included in raw IOs, but not in physical IOs. To determine if raw IO is tape access versus disk access, compare the global physical disk accesses to the total raw, block, and vm IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone.
====== GBL_DISK_RAW_IO_RATE The total number of raw reads and writes per second during the interval. Only accesses to local disk devices are counted. On Sun, tape drive accesses are included in raw IOs, but not in physical IOs. To determine if raw IO is tape access versus disk access, compare the global physical disk accesses to the total raw, block, and vm IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone.
====== GBL_DISK_RAW_IO The total number of raw reads and writes during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Sun, tape drive accesses are included in raw IOs, but not in physical IOs. To determine if raw IO is tape access versus disk access, compare the global physical disk accesses to the total raw, block, and vm IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone.
====== GBL_DISK_RAW_IO_PCT The percentage of raw IOs to total physical IOs made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Sun, tape drive accesses are included in raw IOs, but not in physical IOs. To determine if raw IO is tape access versus disk access, compare the global physical disk accesses to the total raw, block, and vm IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive.
====== GBL_DISK_RAW_BYTE The number of KBs (or MBs if specified) transferred to or from a raw disk during the interval. Only local disks are counted in this measurement. NFS devices are excluded.
====== GBL_DISK_BLOCK_READ_RATE The number of block reads per second during the interval. On SUN, these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical reads generated by file system access and do not include virtual memory reads, or reads relating to raw disk access. These do include the read of the inode (system read) and the file data read. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone.
====== GBL_DISK_BLOCK_WRITE_RATE The number of block writes per second during the interval. On SUN, these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical writes generated by file system access and do not include virtual memory writes, or writes relating to raw disk access. These do include the write of the inode (system write) and the file system data write. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone.
====== GBL_DISK_BLOCK_IO_RATE The total number of block IOs per second during the interval. On SUN, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These do include the IO of the inode (system write) and the file system data IO. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone.
====== GBL_DISK_BLOCK_IO The total number of block IOs during the interval. On SUN, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These do include the IO of the inode (system write) and the file system data IO. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone.
====== GBL_DISK_BLOCK_IO_PCT The percentage of block IOs of the total physical IOs during the interval. On SUN, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boot time, the operating system does not provide performance data for that device. This can be determined by checking the "by-disk" data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On AIX, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These do include the IO of the inode (system write) and the file system data IO.
====== GBL_DISK_REM_PHYS_READ_RATE The number of remote physical reads per second during the interval. This includes all types of physical reads, including VM and raw. This is calculated as: GBL_DISK_REM_PHYS_READ_RATE = GBL_DISK_REM_FS_READ_RATE + GBL_DISK_REM_VM_READ_RATE + GBL_DISK_REM_SYSTEM_READ_RATE + GBL_DISK_REM_RAW_READ_RATE. On HP-UX, if an IO cannot be satisfied in a local client machine's memory buffer, a remote physical IO request is generated. This may or may not require a physical disk IO on the remote system. In either case, the remote IO request is considered a physical request on the local client machine.
====== GBL_DISK_REM_PHYS_READ The number of remote physical reads during the interval. This includes all types of physical reads, including VM and raw. This is calculated as: GBL_DISK_REM_PHYS_READ = GBL_DISK_REM_FS_READ + GBL_DISK_REM_VM_READ + GBL_DISK_REM_SYSTEM_READ + GBL_DISK_REM_RAW_READ. On HP-UX, if an IO cannot be satisfied in a local client machine's memory buffer, a remote physical IO request is generated. This may or may not require a physical disk IO on the remote system. In either case, the remote IO request is considered a physical request on the local client machine.
====== GBL_DISK_REM_PHYS_READ_PCT The percentage of remote physical reads of total remote physical IO during the interval. On HP-UX, if an IO cannot be satisfied in a local client machine's memory buffer, a remote physical IO request is generated. This may or may not require a physical disk IO on the remote system. In either case, the remote IO request is considered a physical request on the local client machine.
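The decomposition stated for GBL_DISK_REM_PHYS_READ, and the way the read percentage is taken against the total remote physical IO, can be written out directly. The sketch below uses hypothetical component values; only the metric names come from this dictionary.

  # Illustrative sketch (Python, hypothetical values): the remote physical read
  # decomposition given above, plus the read percentage of total remote IO.
  gbl_disk_rem_fs_read = 500
  gbl_disk_rem_vm_read = 120
  gbl_disk_rem_system_read = 60
  gbl_disk_rem_raw_read = 20
  gbl_disk_rem_phys_write = 300     # needed only for the percentage

  gbl_disk_rem_phys_read = (gbl_disk_rem_fs_read + gbl_disk_rem_vm_read +
                            gbl_disk_rem_system_read + gbl_disk_rem_raw_read)
  gbl_disk_rem_phys_read_pct = 100.0 * gbl_disk_rem_phys_read / (
      gbl_disk_rem_phys_read + gbl_disk_rem_phys_write)

  print(gbl_disk_rem_phys_read)                # 700
  print(round(gbl_disk_rem_phys_read_pct, 1))  # 70.0 percent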
====== GBL_DISK_REM_PHYS_WRITE_RATE The number of remote physical writes per second during the interval. All types of remote physical writes, including VM and raw, are counted. This is calculated as: GBL_DISK_REM_PHYS_WRITE_RATE = GBL_DISK_REM_FS_WRITE_RATE + GBL_DISK_REM_VM_WRITE_RATE + GBL_DISK_REM_SYSTEM_WRITE_RATE + GBL_DISK_REM_RAW_WRITE_RATE. On HP-UX, if an IO cannot be satisfied in a local client machine's memory buffer, a remote physical IO request is generated. This may or may not require a physical disk IO on the remote system. In either case, the remote IO request is considered a physical request on the local client machine.
====== GBL_DISK_REM_PHYS_WRITE The number of remote physical writes during the interval. All types of remote physical writes, including VM and raw, are counted. This is calculated as: GBL_DISK_REM_PHYS_WRITE = GBL_DISK_REM_FS_WRITE + GBL_DISK_REM_VM_WRITE + GBL_DISK_REM_SYSTEM_WRITE + GBL_DISK_REM_RAW_WRITE. On HP-UX, if an IO cannot be satisfied in a local client machine's memory buffer, a remote physical IO request is generated. This may or may not require a physical disk IO on the remote system. In either case, the remote IO request is considered a physical request on the local client machine.
====== GBL_DISK_REM_PHYS_WRITE_PCT The percentage of remote physical writes of total remote physical IO during the interval. On HP-UX, if an IO cannot be satisfied in a local client machine's memory buffer, a remote physical IO request is generated. This may or may not require a physical disk IO on the remote system. In either case, the remote IO request is considered a physical request on the local client machine.
====== GBL_DISK_REM_PHYS_READ_BYTE The number of remote physical read KBs during the interval. On HP-UX, if an IO cannot be satisfied in a local client machine's memory buffer, a remote physical IO request is generated. This may or may not require a physical disk IO on the remote system. In either case, the remote IO request is considered a physical request on the local client machine.
====== GBL_DISK_REM_PHYS_WRITE_BYTE The number of remote physical write KBs (or MBs if specified) during the interval. On HP-UX, if an IO cannot be satisfied in a local client machine's memory buffer, a remote physical IO request is generated. This may or may not require a physical disk IO on the remote system. In either case, the remote IO request is considered a physical request on the local client machine.
====== GBL_DISK_REM_LOGL_READ_RATE The average number of remote logical reads per second made during the interval. On HP-UX, the remote logical IOs include all IO requests generated on a local client to a remotely mounted file system or disk. If the logical request is satisfied on the local client (that is, the data is in a local memory buffer), a physical request is not generated. Otherwise, a physical IO request is made to the remote machine to read/write the data. Note that, in either case, a logical IO request is made.
====== GBL_DISK_REM_LOGL_READ The number of remote logical reads made during the interval. On HP-UX, the remote logical IOs include all IO requests generated on a local client to a remotely mounted file system or disk. If the logical request is satisfied on the local client (that is, the data is in a local memory buffer), a physical request is not generated. Otherwise, a physical IO request is made to the remote machine to read/write the data. Note that, in either case, a logical IO request is made.
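The relationship between the remote logical and remote physical metrics described above (every request counts as a logical IO, but only requests that miss the local client's memory buffer generate a remote physical IO) can be sketched as follows. The buffer model and sample outcomes are purely illustrative.

  # Illustrative sketch (Python, hypothetical values): counting remote logical
  # versus remote physical reads. A "hit" is satisfied from the local client's
  # memory buffer; a "miss" also sends a physical request to the remote machine.
  requests = ["hit", "miss", "hit", "hit", "miss"]   # hypothetical buffer outcomes

  rem_logl_read = len(requests)                      # every request is a logical IO
  rem_phys_read = sum(1 for r in requests if r == "miss")

  print(rem_logl_read)   # 5 logical reads
  print(rem_phys_read)   # 2 physical reads forwarded to the remote machine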
====== GBL_DISK_REM_LOGL_READ_PCT The percentage of remote logical reads to the total remote logical IO during the interval. On HP-UX, the remote logical IOs include all IO requests generated on a local client to a remotely mounted file system or disk. If the logical request is satisfied on the local client (that is, the data is in a local memory buffer), a physical request is not generated. Otherwise, a physical IO request is made to the remote machine to read/write the data. Note that, in either case, a logical IO request is made. ====== GBL_DISK_REM_LOGL_WRITE_RATE The average number of remote logical writes per second made during the last interval. On HP-UX, the remote logical IOs include all IO requests generated on a local client to a remotely mounted file system or disk. If the logical request is satisfied on the local client (that is, the data is in a local memory buffer), a physical request is not generated. Otherwise, a physical IO request is made to the remote machine to read/write the data. Note that, in either case, a logical IO request is made. ====== GBL_DISK_REM_LOGL_WRITE The number of remote logical writes made during the interval. On HP-UX, the remote logical IOs include all IO requests generated on a local client to a remotely mounted file system or disk. If the logical request is satisfied on the local client (that is, the data is in a local memory buffer), a physical request is not generated. Otherwise, a physical IO request is made to the remote machine to read/write the data. Note that, in either case, a logical IO request is made. ====== GBL_DISK_REM_LOGL_WRITE_PCT The percentage of remote logical writes of the total remote logical IO during the interval. On HP-UX, the remote logical IOs include all IO requests generated on a local client to a remotely mounted file system or disk. If the logical request is satisfied on the local client (that is, the data is in a local memory buffer), a physical request is not generated. Otherwise, a physical IO request is made to the remote machine to read/write the data. Note that, in either case, a logical IO request is made. ====== GBL_DISK_REM_LOGL_READ_BYTE The number of KBs transferred via remote logical reads during the last interval. On HP-UX, the remote logical IOs include all IO requests generated on a local client to a remotely mounted file system or disk. If the logical request is satisfied on the local client (that is, the data is in a local memory buffer), a physical request is not generated. Otherwise, a physical IO request is made to the remote machine to read/write the data. Note that, in either case, a logical IO request is made. ====== GBL_DISK_REM_LOGL_WRITE_BYTE The number of KBs transferred via remote logical writes during the last interval. On HP-UX, the remote logical IOs include all IO requests generated on a local client to a remotely mounted file system or disk. If the logical request is satisfied on the local client (that is, the data is in a local memory buffer), a physical request is not generated. Otherwise, a physical IO request is made to the remote machine to read/write the data. Note that, in either case, a logical IO request is made. ====== GBL_DISK_REM_FS_IO_RATE The total of remote file system physical reads and writes per second during the interval. These are physical IOs generated by user file system access and do not include virtual memory IOs, system IOs (inode updates), or IOs relating to raw disk access. 
An exception is user files accessed via the mmap(2) call, which will not show their physical IOs in this category. They appear under virtual memory IOs. On HP-UX, remote file system IO typically occurs during client file system access of a network file system mounted on the server. A remote file system IO does not necessarily imply that a physical IO occurs on the remote (server) system. ====== GBL_DISK_REM_FS_IO The total of remote physical file system reads and writes during the last interval. These are physical IOs generated by user file system access and do not include virtual memory IOs, system IOs (inode updates), or IOs relating to raw disk access. An exception is user files accessed via the mmap(2) call, which will not show their physical IOs in this category. They appear under virtual memory IOs. On HP-UX, remote file system IO typically occurs during client file system access of a network file system mounted on the server. A remote file system IO does not necessarily imply that a physical IO occurs on the remote (server) system. ====== GBL_DISK_REM_FS_IO_PCT The percentage of remote file system generated physical IOs of the total remote physical IOs during the interval. These are physical IOs generated by user file system access and do not include virtual memory IOs, system IOs (inode updates), or IOs relating to raw disk access. An exception is user files accessed via the mmap(2) call, which will not show their physical IOs in this category. They appear under virtual memory IOs. On HP-UX, remote file system IO typically occurs during client file system access of a network file system mounted on the server. A remote file system IO does not necessarily imply that a physical IO occurs on the remote (server) system. ====== GBL_DISK_REM_FS_BYTE The number of remote file system KBs (or MBs if specified) physically transferred to or from the remote machine during the interval. These are bytes transferred by user file system access and do not include bytes transferred via virtual memory IOs, system IOs (inode updates), or IOs relating to raw disk access. An exception is user files accessed via the mmap(2) call, which will not show their bytes transferred in this category. They appear under virtual memory bytes transferred. On HP-UX, remote file system IO typically occurs during client file system access of a network file system mounted on the server. A remote file system IO does not necessarily imply that a physical IO occurs on the remote (server) system. ====== GBL_DISK_REM_VM_IO_RATE The number of remote virtual memory IOs per second made during the interval. These are physical IOs related to paging, swapping, or memory mapped file allocations. IOs to user file data are not included in this metric unless they were done via the mmap(2) system call. On HP-UX, remote VM IO is typically seen on a client system that is paging in text from or paging out data pages to a server system. Paging in from the server system can occur when the client is loading a program which requires the text pages to be fetched from the server. Paging out occurs when client system data pages are swapped out to a remote swap device on the server system. ====== GBL_DISK_REM_VM_IO The total number of remote virtual memory IOs made during the interval. These are physical IOs related to paging or swapping. IOs to user file data are not included in this metric unless they were done via the mmap(2) system call. 
On HP-UX, remote VM IO is typically seen on a client system that is paging in text from or paging out data pages to a server system. Paging in from the server system can occur when the client is loading a program which requires the text pages to be fetched from the server. Paging out occurs when client system data pages are swapped out to a remote swap device on the server system. ====== GBL_DISK_REM_VM_IO_PCT The percentage of remote virtual memory IO requests of total remote physical IOs during the interval. IOs to user file data are not included in this metric unless they were done via the mmap(2) system call. On HP-UX, remote VM IO is typically seen on a client system that is paging in text from or paging out data pages to a server system. Paging in from the server system can occur when the client is loading a program which requires the text pages to be fetched from the server. Paging out occurs when client system data pages are swapped out to a remote swap device on the server system. ====== GBL_DISK_REM_VM_BYTE The number of remote virtual memory KBs (or MBs if specified) transferred to or from the remote machine during the interval. User file data transfers are not included in this metric unless they were done via the mmap(2) system call. On HP-UX, remote VM IO is typically seen on a client system that is paging in text from or paging out data pages to a server system. Paging in from the server system can occur when the client is loading a program which requires the text pages to be fetched from the server. Paging out occurs when client system data pages are swapped out to a remote swap device on the server system. ====== GBL_DISK_REM_SYSTEM_IO_RATE The number of remote physical IOs per second generated by the kernel for file system management (inode accesses or updates) during the interval. On HP-UX, remote file system IO typically occurs during client file system access of a network file system mounted on the server. A remote file system IO does not necessarily imply that a physical IO occurs on the remote (server) system. ====== GBL_DISK_REM_SYSTEM_IO The number of remote physical IOs generated by the kernel for file system management (inode accesses or updates) during the interval. On HP-UX, remote file system IO typically occurs during client file system access of a network file system mounted on the server. A remote file system IO does not necessarily imply that a physical IO occurs on the remote (server) system. ====== GBL_DISK_REM_SYSTEM_IO_PCT The percentage of remote physical IOs generated by the kernel for file system management (inode accesses or updates) to the total number of remote physical IOs during the interval. On HP-UX, remote file system IO typically occurs during client file system access of a network file system mounted on the server. A remote file system IO does not necessarily imply that a physical IO occurs on the remote (server) system. ====== GBL_DISK_REM_SYSTEM_BYTE The number of remote KBs (or MBs if specified) transferred by the kernel from or to the remote machine for file system management access or updates during the interval. File system management IOs are the physical accesses required to obtain or update internal information about the file system structure (inode access). Accesses or updates to user data are not included in this metric. On HP-UX, remote file system IO typically occurs during client file system access of a network file system mounted on the server. 
A remote file system IO does not necessarily imply that a physical IO occurs on the remote (server) system.
====== GBL_MEM_UTIL The percentage of physical memory in use during the interval. This includes system memory (occupied by the kernel), buffer cache and user memory. On HP-UX 11iv3 and above, this includes file cache. This excludes file cache when the cachemem parameter in the parm file is set to free. On HP-UX, this calculation is done using the byte values for physical memory and used memory, and is therefore more accurate than comparing the reported kilobyte values for physical memory and used memory. On Linux, the value of this metric includes file cache when the cachemem parameter in the parm file is set to user. On SUN, high values for this metric may not indicate a true memory shortage. This metric can be influenced by the VMM (Virtual Memory Management) system. This excludes ZFS ARC cache when the cachemem parameter in the parm file is set to free. On AIX, this excludes file cache when the cachemem parameter in the parm file is set to free. Locality Domain metrics are available on HP-UX 11iv2 and above. GBL_MEM_FREE and LDOM_MEM_FREE, as well as the memory utilization metrics derived from them, may not always fully match. GBL_MEM_FREE represents free memory in the kernel's reservation layer while LDOM_MEM_FREE shows actual free pages. If memory has been reserved but not actually consumed from the Locality Domains, the two values won't match. Because GBL_MEM_FREE includes pre-reserved memory, the GBL_MEM_ metrics are a better indicator of actual memory consumption in most situations.
====== GBL_MEM_SYS_UTIL The percentage of physical memory used by the system during the interval. System memory does not include the buffer cache. On HP-UX and Linux, this does not include the file cache either. On HP-UX 11.0, this metric does not include some kinds of dynamically allocated kernel memory. This has always been reported in the GBL_MEM_USER metrics. On HP-UX 11.11 and beyond, this metric includes some kinds of dynamically allocated kernel memory. On Solaris non-global zones, this metric shows a value of 0.
====== GBL_MEM_CACHE_UTIL The percentage of physical memory used by the buffer cache during the interval. On HP-UX 11i v2 and below, the buffer cache is a memory pool used by the system to stage disk IO data for the driver. On HP-UX 11i v3 and above, this metric value represents the usage of the file system buffer cache which is still being used for file system metadata. On SUN, this percentage is based on calculating the buffer cache size by multiplying the system page size times the number of buffer headers (nbuf). For example, on a SPARCstation 10 the buffer size is usually (200 (page size buffers) x 4096 (bytes/page) = 800 KB). On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The "nbuf" value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers.
On AIX, this value should be minimal since most disk IOs are done through memory mapped files. On Windows, the value reports 'copy read hit %' and 'Pin read hit %'.
====== GBL_MEM_USER_UTIL The percent of physical memory allocated to user code and data at the end of the interval. This metric shows the percent of memory owned by user memory regions such as user code, heap, stack and other data areas including shared memory. This does not include memory for buffer cache. On HP-UX and Linux, this does not include the file cache either. On HP-UX 11.0, this metric includes some kinds of dynamically allocated kernel memory. On HP-UX 11.11 and beyond, this metric does not include some kinds of dynamically allocated kernel memory. This is now reported in the GBL_MEM_SYS metrics. Large fluctuations in this metric can be caused by programs which allocate large amounts of memory and then either release the memory or terminate. A slow continual increase in this metric may indicate a program with a memory leak.
====== GBL_MEM_CACHE_FLUSH_RATE The rate at which the file system cache has flushed its contents to disk as the result of a request to flush or to satisfy a write-through file write request.
====== GBL_MEM_DATAMAP_HIT_PCT The percentage of data maps in the file system cache that could be resolved without having to retrieve a page from the disk, because the page was already in physical memory.
====== GBL_SRV_WRKITM_SHORTAGES The number of times STATUS_DATA_NOT_ACCEPTED was returned at receive indication time. This occurs when no work item is available or can be allocated to service the incoming request.
====== GBL_MEM_SHARES_PRIO The weightage/priority for memory assigned to this logical system. This value influences the share of unutilized physical memory that this logical system can utilize. On a recognized VMware ESX guest, where VMware guest SDK is enabled, this value can range from 0 to 100000. The value will be "na" otherwise.
====== GBL_MEM_PHYS_SWAPPED On a recognized VMware ESX guest, where VMware guest SDK is enabled, this metric indicates the amount of memory that has been reclaimed by ESX Server from this logical system by transparently swapping the logical system's memory to disk. The value is "na" otherwise.
====== GBL_MACHINE_MEM_USED The amount of physical host memory currently consumed for this logical system's physical memory. On a standalone system, the value will be (GBL_MEM_UTIL x GBL_MEM_PHYS) / 100; see the example below.
====== GBL_NET_UTIL_PEAK The utilization of the most heavily used network interface at the end of the interval. Some AIX systems report a speed that is lower than the measured throughput and this can result in BYNETIF_UTIL and GBL_NET_UTIL_PEAK showing more than 100% utilization. On Linux, root permission is required to obtain network interface bandwidth so values will be n/a when running in non-root mode. Also, maximum bandwidth for virtual interfaces (vnetN) may be reported incorrectly on KVM or Xen server so, similarly to AIX, utilization may exceed 100%.
====== GBL_MEM_OVERHEAD The amount of "overhead" memory associated with this logical system that is currently consumed on the host system. On VMware ESX Server console, the value is equivalent to the sum of the current overhead memory for all running virtual machines. On a standalone system, the value will be 0. On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is "na".
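The standalone-system formula for GBL_MACHINE_MEM_USED, and the way an interface utilization figure such as BYNETIF_UTIL or GBL_NET_UTIL_PEAK can exceed 100% when the reported interface speed is lower than the measured throughput, are illustrated below. The sample numbers and the utilization calculation are a simplified, hypothetical model, not the collector's exact code.

  # Illustrative sketch (Python, hypothetical values): the standalone formula
  # for GBL_MACHINE_MEM_USED, and a utilization that exceeds 100% when the
  # interface reports a speed lower than its real throughput.
  gbl_mem_util = 62.5          # percent of physical memory in use
  gbl_mem_phys = 8192.0        # MB of physical memory

  gbl_machine_mem_used = (gbl_mem_util * gbl_mem_phys) / 100.0
  print(gbl_machine_mem_used)  # 5120.0 MB

  reported_speed_mbit = 100.0       # speed reported by the AIX interface
  measured_throughput_mbit = 130.0  # measured traffic during the interval
  bynetif_util = 100.0 * measured_throughput_mbit / reported_speed_mbit
  print(bynetif_util)               # 130.0 -> utilization above 100 percent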
====== GBL_CPU_CYCLE_ENTL_MIN On a recognized VMware ESX guest, where VMware guest SDK is enabled, this value indicates the minimum processor capacity, in MHz, configured for this logical system. On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is "na". On a standalone system, the value is the sum of clock speed of individual CPUs.
====== GBL_CPU_CYCLE_ENTL_MAX On a recognized VMware ESX guest, where VMware guest SDK is enabled, this value indicates the maximum processor capacity, in MHz, configured for this logical system. The value is -3 if entitlement is 'Unlimited' for this logical system. On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is "na". On a standalone system, the value is the sum of clock speed of individual CPUs.
====== GBL_MEM_ENTL_MIN In a virtual environment, this metric indicates the minimum amount of memory configured for this logical system. On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is "na". On a standalone system, this metric is equivalent to GBL_MEM_PHYS.
====== GBL_MEM_ENTL_MAX In a virtual environment, this metric indicates the maximum amount of memory configured for this logical system. The value is -3 if entitlement is 'Unlimited' for this logical system. On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is "na". On Solaris non-global zones, this metric value is equivalent to the 'capped-memory' value from the 'zonecfg -z zonename info' command. On a standalone system, this metric is equivalent to GBL_MEM_PHYS.
====== GBL_MEM_ENTL_UTIL In a virtual environment, this metric indicates the maximum amount of memory utilized against memory configured for this logical system.
====== GBL_MEM_PHYS The amount of physical memory in the system (in MBs unless otherwise specified). On HP-UX, banks with bad memory are not counted. Note that on some machines, the Processor Dependent Code (PDC) uses the upper 1MB of memory and thus reports less than the actual physical memory of the system. Thus, on a system with 256MB of physical memory, this metric and dmesg(1M) might only report 267,386,880 bytes (255MB). This is all the physical memory that software on the machine can access. On Windows, this is the total memory available, which may be slightly less than the total amount of physical memory present in the system. This value is also reported in the Control Panel's About Windows NT help topic. On Linux, this is the amount of memory given by dmesg(1M). If the value is not available in the kernel ring buffer, then the sum of system memory and available memory will be reported as physical memory. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone.
====== GBL_MEM_AVAIL The amount of available physical memory in the system (in MBs unless otherwise specified). On Windows, memory resident operating system code and data is not included as available memory. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone.
====== GBL_MEM_FREE The amount of memory not allocated (in MBs unless otherwise specified). As this value drops, the likelihood increases that swapping or paging out to disk may occur to satisfy new memory requests. On SUN, low values for this metric may not indicate a true memory shortage. This metric can be influenced by the VMM (Virtual Memory Management) system.
On uncapped Solaris zones, the metric indicates the amount of memory across the whole system that is not consumed by the global zone and other non-global zones. On capped Solaris zones, the metric indicates the amount of memory not consumed by this zone against the memory cap that is set. On Linux, this metric is the sum of 'free' and 'cached' memory. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. Locality Domain metrics are available on HP-UX 11iv2 and above. GBL_MEM_FREE and LDOM_MEM_FREE, as well as the memory utilization metrics derived from them, may not always fully match. GBL_MEM_FREE represents free memory in the kernel's reservation layer while LDOM_MEM_FREE shows actual free pages. If memory has been reserved but not actually consumed from the Locality Domains, the two values won't match. Because GBL_MEM_FREE includes pre-reserved memory, the GBL_MEM_ metrics are a better indicator of actual memory consumption in most situations.
====== GBL_MEM_CACHE The amount of physical memory (in MBs unless otherwise specified) used by the buffer cache during the interval. On HP-UX 11i v2 and below, the buffer cache is a memory pool used by the system to stage disk IO data for the driver. On HP-UX 11i v3 and above, this metric value represents the usage of the file system buffer cache which is still being used for file system metadata. On SUN, this value is obtained by multiplying the system page size times the number of buffer headers (nbuf). For example, on a SPARCstation 10 the buffer size is usually (200 (page size buffers) x 4096 (bytes/page) = 800 KB). On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The "nbuf" value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On AIX, this value should be minimal since most disk IOs are done through memory mapped files.
====== GBL_MEM_SYS_AND_CACHE_UTIL The percentage of physical memory used by the system (kernel) and the buffer cache at the end of the interval. On HP-UX 11iv3, this also includes the file cache. On HP-UX 11.0, this metric does not include some kinds of dynamically allocated kernel memory. This has always been reported in the GBL_MEM_USER metrics. On HP-UX 11.11 and beyond, this metric includes some kinds of dynamically allocated kernel memory. On Solaris non-global zones, this metric is N/A.
====== GBL_MEM_FREE_UTIL The percentage of physical memory that was free at the end of the interval. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone.
====== GBL_MEM_VIRT The total private virtual memory (in MBs unless otherwise specified) at the end of the interval. This is the sum of the virtual allocation of private data and stack regions for all processes.
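As a rough illustration of how the free and utilization figures above fit together, and of the Linux 'free' plus 'cached' summation noted for GBL_MEM_FREE, consider the hypothetical sketch below. The exact collection sources differ by platform, and the utilization shown here is a simplified complement of the free percentage rather than the collector's full calculation.

  # Illustrative sketch (Python, hypothetical values): relating GBL_MEM_PHYS,
  # GBL_MEM_FREE and the derived utilization percentages.
  mem_free_mb = 1024.0     # 'free' memory (hypothetical)
  mem_cached_mb = 2048.0   # 'cached' memory (hypothetical)
  gbl_mem_phys = 8192.0    # physical memory in MB

  gbl_mem_free = mem_free_mb + mem_cached_mb              # Linux summation
  gbl_mem_free_util = 100.0 * gbl_mem_free / gbl_mem_phys
  gbl_mem_util = 100.0 - gbl_mem_free_util                # simplified view

  print(gbl_mem_free)                 # 3072.0 MB
  print(round(gbl_mem_free_util, 1))  # 37.5 percent free
  print(round(gbl_mem_util, 1))       # 62.5 percent in use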
====== GBL_MEM_ACTIVE_VIRT The total virtual memory (in MBs unless otherwise specified) allocated for processes that are currently on the run queue or processes that have executed recently. This is the sum of the virtual memory sizes of the data and stack regions for these processes. On HP-UX, this is the sum of the virtual memory of all processes which have had a thread run in the last 20 seconds. On AIX System WPARs, this metric is NA. ====== GBL_MEM_ACTIVE_VIRT_UTIL The percentage of total virtual memory active at the end of the interval. Active virtual memory is the virtual memory associated with processes that are currently on the run queue or processes that have executed recently. This is the sum of the virtual memory sizes of the data and stack regions for these processes. On HP-UX, this is the sum of the virtual memory of all processes which have had a thread run in the last 20 seconds. ====== GBL_MEM_PAGE_FAULT The number of page faults that occurred during the interval. On Linux this metric is available only on 2.6 and above kernel versions. ====== GBL_MEM_PAGE_FAULT_RATE The number of page faults per second during the interval. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_MEM_PAGEIN_RATE The total number of page ins per second from the disk during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On HP-UX and AIX, this is the same as the "pi" value from the vmstat command. On Solaris, this is the same as the sum of the "epi" and "api" values from the "vmstat -p" command, divided by the page size in KB. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_MEM_PAGEOUT_RATE The total number of page outs to the disk per second during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On HP-UX and AIX, this is the same as the "po" value from the vmstat command. On Solaris, this is the same as the sum of the "epo" and "apo" values from the "vmstat -p" command, divided by the page size in KB. On Windows, this counter also includes paging traffic on behalf of the system cache to access file data for applications and so may be high when there is no memory pressure. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_MEM_PAGEIN The total number of page ins from the disk during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On HP-UX, this is the same as the "page ins" value from the "vmstat -s" command. On AIX, this is the same as the "paging space page ins" value. Remember that "vmstat -s" reports cumulative counts. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_MEM_PAGEOUT The total number of page outs to the disk during the interval. 
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On HP-UX, this is the same as the "page outs" value from the "vmstat -s" command. On HP-UX 11iv3 and above, this also includes file cache page outs. On AIX, this is the same as the "paging space page outs" value. Remember that "vmstat -s" reports cumulative counts. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone.
====== GBL_MEM_PAGE_REQUEST The number of page requests to or from the disk during the interval. On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging space and not to the file system. On Windows, this includes pages paged to or from both paging space and the file system. On HP-UX, this is the same as the sum of the "page ins" and "page outs" values from the "vmstat -s" command. On AIX, this is the same as the sum of the "paging space page ins" and "paging space page outs" values. Remember that "vmstat -s" reports cumulative counts. On Windows, this counter also includes paging traffic on behalf of the system cache to access file data for applications and so may be high when there is no memory pressure. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone.
====== GBL_MEM_PAGE_REQUEST_RATE The number of page requests to or from the disk per second during the interval. On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging space and not to or from the file system. On Windows, this includes pages paged to or from both paging space and the file system. On HP-UX and AIX, this is the same as the sum of the "pi" and "po" values from the vmstat command. On Solaris, this is the same as the sum of the "epi", "epo", "api", and "apo" values from the "vmstat -p" command, divided by the page size in KB. Higher than normal rates can indicate either a memory or a disk bottleneck. Compare GBL_DISK_UTIL_PEAK and GBL_MEM_UTIL to determine which resource is more constrained. High rates may also indicate memory thrashing caused by a particular application or set of applications. Look for processes with high major fault rates to identify the culprits. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone.
====== GBL_MEM_PAGEIN_BYTE The number of KBs (or MBs if specified) of page ins during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space.
====== GBL_MEM_PAGEIN_BYTE_RATE The number of KBs per second of page ins during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space.
====== GBL_MEM_PAGEOUT_BYTE The number of KBs (or MBs if specified) of page outs during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space.
On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_MEM_PAGEOUT_BYTE_RATE The number of KBs (or MBs if specified) per second of page outs during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_MEM_SWAP_1_MIN_RATE The number of swap ins and swap outs (or deactivations/reactivations on HP-UX) per minute during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. ====== GBL_MEM_SWAPIN The number of swap ins (or reactivations on HP-UX) during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, this is the same as the "swap ins" value from the "vmstat -s" command. Remember that "vmstat -s" reports cumulative counts. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. 
Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. ====== GBL_MEM_SWAPIN_RATE The number of swap ins (or reactivations on HP-UX) per second during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_MEM_SWAPOUT The number of swap outs (or deactivations on HP-UX) during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, this is the same as the "swap outs" values from the "vmstat -s" command. Remember that "vmstat -s" reports cumulative counts. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. ====== GBL_MEM_SWAPOUT_RATE The number of swap outs (or deactivations on HP-UX) per second during the interval. 
On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_MEM_SWAP The total number of swap ins and swap outs (or deactivations and reactivations on HP-UX) during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. ====== GBL_MEM_SWAP_RATE The total number of swap ins and swap outs (or deactivations and reactivations on HP-UX) per second during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. 
Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. ====== GBL_MEM_SWAPIN_BYTE The number of KBs transferred in from disk due to swap ins (or reactivations on HP-UX) during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_MEM_SWAPIN_BYTE_RATE The number of KBs per second transferred from disk due to swap ins (or reactivations on HP-UX) during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. 
A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_MEM_SWAPOUT_BYTE The number of KBs (or MBs if specified) transferred out to disk due to swap outs (or deactivations on HP-UX) during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_MEM_SWAPOUT_BYTE_RATE The number of KBs (or MBs if specified) per second transferred out to disk due to swap outs (or deactivations on HP-UX) during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. 
Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_MEM_USER The amount of physical memory (in MBs unless otherwise specified) allocated to user code and data at the end of the interval. User memory regions include code, heap, stack, and other data areas including shared memory. This does not include memory for buffer cache. On HP-UX and Linux this does not include filecache also. On HP-UX 11.0, this metric includes some kinds of dynamically allocated kernel memory. On HP-UX 11.11 and beyond, this metric does not include some kinds of dynamically allocated kernel memory. This is now reported in the GBL_MEM_SYS metrics. Large fluctuations in this metric can be caused by programs which allocate large amounts of memory and then either release the memory or terminate. A slow continual increase in this metric may indicate a program with a memory leak. ====== GBL_MEM_SYS The amount of physical memory (in MBs unless otherwise specified) used by the system (kernel) during the interval. System memory does not include the buffer cache. On HP-UX and Linux this does not include filecache also. On HP-UX 11.0, this metric does not include some kinds of dynamically allocated kernel memory. This has always been reported in the GBL_MEM_USER metrics. On HP-UX 11.11 and beyond, this metric includes some kinds of dynamically allocated kernel memory. On Solaris non-global zones, this metric shows value as 0. ====== GBL_MEM_CACHE_HIT On HP-UX, the number of buffer cache reads resolved from the buffer cache (rather than going to disk) during the interval. Buffer cache reads can occur as a result of a logical read (for example, file read system call), a read generated by a client, a read-ahead on behalf of a logical read or a system procedure. On HP-UX, this metric is obtained by measuring the number of buffered read calls that were satisfied by the data that was in the file system buffer cache. Reads that are not in the buffer cache result in disk IO. raw IO and virtual memory IO, are not counted in this metric. On SUN, the number of physical reads resolved from memory (rather than going to disk) during the interval. This includes inode, indirect block and cylinder group related disk reads, plus file reads from files memory mapped by the virtual memory IO system. On AIX, the number of disk reads that were satisfied in the file system buffer cache (rather than going to disk) during the interval. On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. ====== GBL_MEM_CACHE_HIT_PCT On HP-UX, the percentage of buffer cache reads resolved from the buffer cache (rather than going to disk) during the interval. 
Buffer cache reads can occur as a result of a logical read (for example, file read system call), a read generated by a client, a read-ahead on behalf of a logical read or a system procedure. On HP-UX, this metric is obtained by measuring the number of buffered read calls that were satisfied by the data that was in the file system buffer cache. Reads to filesystem file buffers that are not in the buffer cache result in disk IO. Reads to raw IO and virtual memory IO (including memory mapped files), do not go through the filesystem buffer cache, and so are not relevant to this metric. On HP-UX, a low cache hit rate may indicate low efficiency of the buffer cache, either because applications have poor data locality or because the buffer cache is too small. Overly large buffer cache sizes can lead to a memory bottleneck. The buffer cache should be sized small enough so that pageouts do not occur even when the system is busy. However, in the case of VxFS, all memory-mapped IOs show up as page ins/page outs and are not a result of memory pressure. On AIX, the percentage of disk reads that were satisfied in the file system buffer cache (rather than going to disk) during the interval. On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. On the remaining Unix systems, this is the percentage of logical reads satisfied in memory (rather than going to disk) during the interval. This includes inode, indirect block and cylinder group related disk reads, plus file reads from files memory mapped by the virtual memory IO system. On Windows, this is the percentage of buffered reads satisfied in the buffer cache (rather than going to disk) during the interval. This metric is obtained by measuring the number of buffered read calls that were satisfied by the data that was in the system buffer cache. Reads that are not in the buffer cache result in disk IO. Unbuffered IO and virtual memory IO (including memory mapped files), are not counted in this metric. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_MEM_QUEUE The average number of processes or kernel threads blocked on memory (waiting for virtual memory disk accesses to complete) during the interval. This typically happens when processes or kernel threads are allocating a large amount of memory. It can also happen when processes or kernel threads access memory that has been paged out to disk (swap) because of overall memory pressure on the system. Note that large programs can block on VM disk access when they are initializing, bringing their text and data pages into memory. When this metric rises, it can be an indication of a memory bottleneck, especially if overall system memory utilization (GBL_MEM_UTIL) is near 100% and there is also swapout or page out activity. This is calculated as the accumulated time that all processes or kernel threads spent blocked on memory divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. 
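As a rough illustration of the calculation just described (accumulated blocked-on-memory time divided by the interval length), the sketch below uses hypothetical per-process blocked times; it is not how the measurement layer itself gathers the data.

    # Hypothetical time (in seconds) each process spent blocked on virtual
    # memory disk accesses during a 60-second interval.
    blocked_on_mem_seconds = {
        "proc_a": 12.0,
        "proc_b": 30.0,
        "proc_c": 0.0,
    }

    interval = 60.0  # interval length in seconds

    # GBL_MEM_QUEUE-style value: total blocked time across all processes
    # divided by the interval, giving an average blocked-process count.
    mem_queue = sum(blocked_on_mem_seconds.values()) / interval
    print(f"average memory queue length: {mem_queue:.2f}")  # 0.70 here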
The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_SWAP_SPACE_MEM_AVAIL The amount of physical memory available for pseudo swap (in MB). On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. ====== GBL_SWAP_SPACE_MEM_UTIL The percent of physical memory available for pseudo swap currently allocated to running processes. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. ====== GBL_SWAP_SPACE_DEVICE_AVAIL The amount of swap space configured on disk devices exclusively as swap space (in MB). On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. ====== GBL_SWAP_SPACE_USED_UTIL This is the percentage of swap space used. On HP-UX, "Used %" indicates percentage of swap space written to disk (or locked in memory), rather than reserved. This is the same as percentage of ((USED: total - reserve)/total) 100, as reported by the "swapinfo -mt" command. On SUN, "Used %" indicates percentage of swap space written to disk (or locked in memory), rather than reserved. Swap space is reserved (by decrementing a counter) when virtual memory for a program is created. This is the same as percentage of ((bytes allocated)/total) 100, reported by the "swap -s" command. On SUN, global swap space is tracked through the operating system. Device swap space is tracked through the devices. For this reason, the amount of swap space used may differ between the global and by-device metrics. Sometimes pages that are marked to be swapped to disk by the operating system are never swapped. The operating system records this as used swap space, but the devices do not, since no physical IOs occur. (Metrics with the prefix "GBL" are global and metrics with the prefix "BYSWP" are by device.) On Linux, this is same as percentage of ((Swap: used)/total) 100, as reported by the "free -m" command. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. ====== GBL_SWAP_SPACE_AVAIL The total amount of potential swap space, in MB. On HP-UX, this is the sum of the device swap areas enabled by the swapon command, the allocated size of any file system swap areas, and the allocated size of pseudo swap in memory if enabled. Note that this is potential swap space. This is the same as (AVAIL: total) as reported by the "swapinfo -mt" command. 
On SUN, this is the total amount of swap space available from the physical backing store devices (disks) plus the amount currently available from main memory. This is the same as (used + available) /1024, reported by the "swap -s" command. On Linux, this is same as (Swap: total) as reported by the "free -m" command. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_SWAP_SPACE_USED The amount of swap space used, in MB. On HP-UX, "Used" indicates written to disk (or locked in memory), rather than reserved. This is the same as (USED: total - reserve) as reported by the "swapinfo -mt" command. On SUN, "Used" indicates amount written to disk (or locked in memory), rather than reserved. Swap space is reserved (by decrementing a counter) when virtual memory for a program is created. This is the same as (bytes allocated)/1024, reported by the "swap -s" command. On Linux, this is same as (Swap: used) as reported by the "free -m" command. On AIX System WPARs, this metric is NA. On Solaris non-global zones, this metric is N/A. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== GBL_SWAP_SPACE_RESERVED The amount of swap space (in MB) reserved for the swapping and paging of programs currently executing. Process pages swapped include data (heap and stack pages), bss (data uninitialized at the beginning of process execution), and the process user area (uarea). Shared memory regions also require the reservation of swap space. Swap space is reserved (by decrementing a counter) when virtual memory for a program is created, but swap is only used when a page or swap to disk is actually done or the page is locked in memory if swapping to memory is enabled. Virtual memory cannot be created if swap space cannot be reserved. On HP-UX, this is the same as (USED: total) as reported by the "swapinfo -mt" command. On SUN, this is the same as used/1024, reported by the "swap -s" command. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. ====== GBL_SWAP_SPACE_UTIL The percent of available swap space that was being used by running processes in the interval. On Windows, this is the percentage of virtual memory, which is available to user processes, that is in use at the end of the interval. It is not an average over the entire interval. It reflects the ratio of committed memory to the current commit limit. The limit may be increased by the operating system if the paging file is extended. This is the same as (Committed Bytes / Commit Limit) 100 when comparing the results to Performance Monitor. On HP-UX, swap space must be reserved (but not allocated) before virtual memory can be created. If all of available swap is reserved, then no new processes or virtual memory can be created. Swap space locations are actually assigned (used) when a page is actually written to disk or locked in memory (pseudo swap in memory). This is the same as (PCT USED: total) as reported by the "swapinfo -mt" command. On Unix systems, this metric is a measure of capacity rather than performance. As this metric nears 100 percent, processes are not able to allocate any more memory and new processes may not be able to run. 
Very low swap utilization values may indicate that too much area has been allocated to swap, and better use of disk space could be made by reallocating some swap partitions to be user filesystems. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_SWAP_SPACE_DEVICE_UTIL On HP-UX, this is the percentage of device swap space currently in use of the total swap space available. This does not include file system or remote swap space. On HP-UX, note that available swap is only potential swap space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be usable. For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB remains unusable and is considered wasted space. Consequently, 100 percent utilization on a single device is not always obtainable. The wasted swap space, and the remainder of allocated SWCHUNKs that have not been used is what is reported in the hold field of the /usr/sbin/swapinfo command. On HP-UX, when compared to the "swapinfo -mt" command results, this is calculated as: Util = ((USED: dev) sum / (AVAIL: total)) 100 On SUN, this is the percentage of total system device swap space currently in use. This metric only gives the percentage of swap space used from the available physical swap device space, and does not include the memory that can be used for swap. (On SunOS 5.X, the virtual swap swapfs can allocate swap space from memory.) On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. ====== GBL_SWAP_SPACE_FS_UTIL On HP-UX, this is the percentage file system swap space currently in use of the total swap space available. This includes both local and NFS file system swap. Since file system swap is dynamic (it grows in SWCHUNK sizes as needed and is not bounded as device swap is), this number fluctuates as more swap is allocated. When compared to the "swapinfo -mt" command results, this is calculated as: Util = ((USED: fs) sum / (AVAIL: total)) 100 On Sinix, this is the percentage of swap space in use of the total swap space provided on regular files that were configured for swap. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== GBL_SWAP_SPACE_RESERVED_UTIL This is the percentage of available swap space currently reserved for running processes. Reserved utilization = (amount of swap space reserved / amount of swap space available) 100 On HP-UX, swap space must be reserved (but not allocated) before virtual memory can be created. If all of available swap is reserved, then no new processes or virtual memory can be created. Swap space locations are actually assigned (used) when a page is actually written to disk. On HP-UX, note that available swap is only potential swap space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be usable. For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB remains unusable and is considered wasted space. Consequently, 100 percent utilization on a single device is not always obtainable. When compared to the "swapinfo -mt" command results, this is calculated as: Util = ((USED: total) / (AVAIL: total)) 100 On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. 
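To make the utilization formulas above concrete, the sketch below applies them to hypothetical "swapinfo -mt"-style totals (values in MB). The variable names mirror the column names referenced above; the numbers are made up.

    # Hypothetical "swapinfo -mt" style totals, in MB.
    avail_total  = 8192.0   # AVAIL: total
    used_total   = 3072.0   # USED: total (all reserved swap)
    used_dev_sum = 1536.0   # sum of USED: dev rows
    used_fs_sum  = 512.0    # sum of USED: fs rows

    # GBL_SWAP_SPACE_DEVICE_UTIL, GBL_SWAP_SPACE_FS_UTIL and
    # GBL_SWAP_SPACE_RESERVED_UTIL as percentages of available swap.
    device_util   = used_dev_sum / avail_total * 100
    fs_util       = used_fs_sum / avail_total * 100
    reserved_util = used_total / avail_total * 100

    print(f"device {device_util:.1f}%  filesystem {fs_util:.1f}%  "
          f"reserved {reserved_util:.1f}%")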
On Solaris non-global zones, this metric is N/A. ====== GBL_SWAP_RESERVED_ONLY_UTIL The percentage of available swap space reserved (for currently running programs), but not yet used. Swap space must be reserved (but not allocated) before virtual memory can be created. Swap space locations are actually assigned (used) when a page is actually written to disk. On HP-UX, when compared to the "swapinfo -mt" command results, this is calculated as: Util = ((USED: reserve) / (AVAIL: total)) 100 On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. ====== GBL_SWAP_SPACE_AVAIL_KB The total amount of potential swap space, in KB. On HP-UX, this is the sum of the device swap areas enabled by the swapon command, the allocated size of any file system swap areas, and the allocated size of pseudo swap in memory if enabled. Note that this is potential swap space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be usable. For example, on a 61MB disk using 2 MB swap size allocations, 1 MB remains unusable and is considered wasted space. On HP-UX, this is the same as (AVAIL: total) as reported by the "swapinfo -t" command. On SUN, this is the total amount of swap space available from the physical backing store devices (disks) plus the amount currently available from main memory. This is the same as (used + available)/1024, reported by the "swap -s" command. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. ====== GBL_SUSPENDED_PROCS The average number of processes which have been either marked as should be suspended (SGETOUT) or have been suspended (SSWAPPED) during the interval. Processes are suspended when the OS detects that memory thrashing is occurring. The scheduler looks for processes that have a high repage rate compared with the number of major page faults the process has done and suspends these processes. ====== GBL_MEMFS_BLK_CNT The number of system memory blocks used by Memory based FileSystem (MemFS). ====== GBL_MEMFS_SWP_CNT The number of system memory blocks swapped by Memory based FileSystem (MemFS). ====== GBL_NUM_LDOM The number of active Locality Domains in the system. ====== GBL_MEM_DNLC_HIT The number of times a pathname component was found in the directory name lookup cache (rather than requiring a disk read to find a file) during the interval. On HP-UX, the directory name lookup cache is used to minimize sequential searches through directory entries for pathname components during pathname to inode translations. Such translations are done whenever a file is accessed through its filename. The cache holds the inode cache table offset for recently referenced pathname components. Pathname components that exceed 15 characters are not cached. Any HP-UX system call that includes a path parameter can result in directory name lookup cache activity, including but not limited to system calls such as open, stat, exec, lstat, unlink. Each component of a path parameter is parsed and converted to an inode separately, therefore several dnlc hits per path are possible. High directory name cache hit rates on HP-UX will be seen on systems where pathname component requests are frequently repeated. 
For example, when users or applications work in the same directory where they repeatedly list or open the same files, cache hit rates will be high. Unusually low cache hit rates might be seen on HP-UX systems where users or applications access many different directories in no particular pattern. Low cache hit rates can also be an indicator of an underconfigured inode cache. When an inode cache is too small, the kernel will more frequently have to flush older inode cache and their corresponding directory name cache entries in order to make room for new inode cache entries. On HP-UX, the directory name lookup cache is static in size and is allocated in kernel memory. As a result, it is not affected by user memory constraints. The size of the cache is stored in the kernel variable "ncsize" and is not directly tunable by the system administrator; however, it can be changed indirectly by tuning other tables used in the formula to compute the "ncsize". The formula is: ncsize = MAX(((nproc+16+maxusers)+ 32+(2 npty)),ninode) Note that ncsize is always >= ninode which is the default size of the inode cache. This is because the directory name cache contains inode table offsets for each cached pathname component. On SUN, long file names (greater than 30 characters) are not cached and are a type of cache miss. "Enters", or cache data updates, are not included in this data. The DNLC size is: (maxusers 17) + 90 ====== GBL_MEM_DNLC_HIT_PCT The percentage of time a pathname component was found in the directory name lookup cache (rather than requiring a disk read to find a file) during the interval. On HP-UX, the directory name lookup cache is used to minimize sequential searches through directory entries for pathname components during pathname to inode translations. Such translations are done whenever a file is accessed through its filename. The cache holds the inode cache table offset for recently referenced pathname components. Pathname components that exceed 15 characters are not cached. Any HP-UX system call that includes a path parameter can result in directory name lookup cache activity, including but not limited to system calls such as open, stat, exec, lstat, unlink. Each component of a path parameter is parsed and converted to an inode separately, therefore several dnlc hits per path are possible. High directory name cache hit rates on HP-UX will be seen on systems where pathname component requests are frequently repeated. For example, when users or applications work in the same directory where they repeatedly list or open the same files, cache hit rates will be high. Unusually low cache hit rates might be seen on HP-UX systems where users or applications access many different directories in no particular pattern. Low cache hit rates can also be an indicator of an underconfigured inode cache. When an inode cache is too small, the kernel will more frequently have to flush older inode cache and their corresponding directory name cache entries in order to make room for new inode cache entries. On HP-UX, the directory name lookup cache is static in size and is allocated in kernel memory. As a result, it is not affected by user memory constraints. The size of the cache is stored in the kernel variable "ncsize" and is not directly tunable by the system administrator; however, it can be changed indirectly by tuning other tables used in the formula to compute the "ncsize". 
The formula is: ncsize = MAX(((nproc+16+maxusers)+ 32+(2 npty)),ninode) Note that ncsize is always >= ninode which is the default size of the inode cache. This is because the directory name cache contains inode table offsets for each cached pathname component. On SUN, long file names (greater than 30 characters) are not cached and are a type of cache miss. "Enters", or cache data updates, are not included in this data. The DNLC size is: (maxusers 17) + 90 On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_MEM_DNLC_LONGS The number of times a pathname component was too long to be found in the directory name lookup cache during the interval. On HP-UX, the directory name lookup cache is used to minimize sequential searches through directory entries for pathname components during pathname to inode translations. Such translations are done whenever a file is accessed through its filename. The cache holds the inode cache table offset for recently referenced pathname components. Pathname components that exceed 15 characters are not cached. Any HP-UX system call that includes a path parameter can result in directory name lookup cache activity, including but not limited to system calls such as open, stat, exec, lstat, unlink. Each component of a path parameter is parsed and converted to an inode separately, therefore several dnlc hits per path are possible. High directory name cache hit rates on HP-UX will be seen on systems where pathname component requests are frequently repeated. For example, when users or applications work in the same directory where they repeatedly list or open the same files, cache hit rates will be high. Unusually low cache hit rates might be seen on HP-UX systems where users or applications access many different directories in no particular pattern. Low cache hit rates can also be an indicator of an underconfigured inode cache. When an inode cache is too small, the kernel will more frequently have to flush older inode cache and their corresponding directory name cache entries in order to make room for new inode cache entries. On HP-UX, the directory name lookup cache is static in size and is allocated in kernel memory. As a result, it is not affected by user memory constraints. The size of the cache is stored in the kernel variable "ncsize" and is not directly tunable by the system administrator; however, it can be changed indirectly by tuning other tables used in the formula to compute the "ncsize". The formula is: ncsize = MAX(((nproc+16+maxusers)+ 32+(2 npty)),ninode) Note that ncsize is always >= ninode which is the default size of the inode cache. This is because the directory name cache contains inode table offsets for each cached pathname component. On SUN, long file names (greater than 30 characters) are not cached and are a type of cache miss. "Enters", or cache data updates, are not included in this data. The DNLC size is: (maxusers 17) + 90 ====== GBL_MEM_DNLC_LONGS_PCT The percentage of time a pathname component was too long to be found in the directory name lookup cache during the interval. On HP-UX, the directory name lookup cache is used to minimize sequential searches through directory entries for pathname components during pathname to inode translations. Such translations are done whenever a file is accessed through its filename. The cache holds the inode cache table offset for recently referenced pathname components. Pathname components that exceed 15 characters are not cached. 
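As a worked example of the ncsize formula shown above for the DNLC metrics, the sketch below evaluates it for one hypothetical set of tunable values, together with the SunOS DNLC size formula; the input values are illustrative, not system defaults.

    # Hypothetical HP-UX tunable values (illustrative only).
    nproc    = 1024
    maxusers = 256
    npty     = 60
    ninode   = 4096

    # HP-UX: ncsize = MAX((nproc + 16 + maxusers) + 32 + (2 * npty), ninode)
    ncsize = max((nproc + 16 + maxusers) + 32 + (2 * npty), ninode)

    # SunOS: DNLC size = (maxusers * 17) + 90
    sun_dnlc_size = (maxusers * 17) + 90

    print(f"ncsize = {ncsize}, SunOS DNLC size = {sun_dnlc_size}")

With these inputs ninode (4096) dominates, which is consistent with the note above that ncsize is always >= ninode.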
Any HP-UX system call that includes a path parameter can result in directory name lookup cache activity, including but not limited to system calls such as open, stat, exec, lstat, unlink. Each component of a path parameter is parsed and converted to an inode separately, therefore several dnlc hits per path are possible. High directory name cache hit rates on HP-UX will be seen on systems where pathname component requests are frequently repeated. For example, when users or applications work in the same directory where they repeatedly list or open the same files, cache hit rates will be high. Unusually low cache hit rates might be seen on HP-UX systems where users or applications access many different directories in no particular pattern. Low cache hit rates can also be an indicator of an underconfigured inode cache. When an inode cache is too small, the kernel will more frequently have to flush older inode cache and their corresponding directory name cache entries in order to make room for new inode cache entries. On HP-UX, the directory name lookup cache is static in size and is allocated in kernel memory. As a result, it is not affected by user memory constraints. The size of the cache is stored in the kernel variable "ncsize" and is not directly tunable by the system administrator; however, it can be changed indirectly by tuning other tables used in the formula to compute the "ncsize". The formula is: ncsize = MAX(((nproc+16+maxusers)+ 32+(2 npty)),ninode) Note that ncsize is always >= ninode which is the default size of the inode cache. This is because the directory name cache contains inode table offsets for each cached pathname component. On SUN, long file names (greater than 30 characters) are not cached and are a type of cache miss. "Enters", or cache data updates, are not included in this data. The DNLC size is: (maxusers 17) + 90 ====== GBL_MEM_PAGE_SIZE_MAX The maximum page size allowed for a memory region on the system. ====== GBL_MEM_LOCKED The amount of physical memory (in KBs unless otherwise specified) marked as locked memory at the end of the interval. This includes memory locked by processes, kernel and driver code, and cannot exceed available physical memory on the system. This is the total non-paged pool memory usage. This memory is allocated from the system-wide non-paged pool, and is not affected by the pageout process. The kernel and driver code use the non-paged pool for data that should always be in physical memory. The size of the non-paged pool is limited to approximately 128 MB on Windows NT systems and to 256 MB on Windows 2000 systems. A failure to allocate memory from the non-paged pool can cause a system crash. ====== GBL_MEM_LOCKED_UTIL The percentage of physical memory marked as locked memory at the end of the interval. This includes memory locked by processes, kernel and driver code. This is the total non-paged pool memory usage. This memory is allocated from the system-wide non-paged pool, and is not affected by the pageout process. The kernel and driver code use the non-paged pool for data that should always be in physical memory. The size of the non-paged pool is limited to approximately 128 MB on Windows NT systems and to 256 MB on Windows 2000 systems. A failure to allocate memory from the non-paged pool can cause a system crash. ====== GBL_MEM_FILE_PAGE_CACHE The amount of physical memory (in MBs unless otherwise specified) used by the file cache during the interval.
File cache is a memory pool used by the system to stage disk IO data for the driver. This metric is supported on HP-UX 11iv3 and above. The filecache_min and filecache_max tunables control the filecache memory usage on the system. The filecache_min tunable specifies the amount of physical memory that is guaranteed to be available for filecache on the system. The filecache memory usage can grow beyond filecache_min, up to the limit set by the filecache_max tunable. The Virtual Memory (VM) subsystem always pre-reserves 'filecache_min' tunable value worth of pages on the system for filecache, even in the case of filecache under-utilization (actual filecache utilization less than the filecache_min value). This memory pre-reserved by the VM is not available to the user. In this scenario, this metric will show the 'filecache_min' as the filecache value, rather than showing the actual filecache utilization. On Linux, this metric is equal to the 'cached' value of the 'free -m' command output. ====== GBL_MEM_FILE_PAGE_CACHE_UTIL The percentage of physical memory used by the file cache during the interval. File cache is a memory pool used by the system to stage disk IO data for the driver. This metric is supported on HP-UX 11iv3 and above. The filecache_min and filecache_max tunables control the filecache memory usage on the system. The filecache_min tunable specifies the amount of physical memory that is guaranteed to be available for filecache on the system. The filecache memory usage can grow beyond filecache_min, up to the limit set by the filecache_max tunable. The Virtual Memory (VM) subsystem always pre-reserves 'filecache_min' tunable value worth of pages on the system for filecache, even in the case of filecache under-utilization (actual filecache utilization less than the filecache_min value). This memory pre-reserved by the VM is not available to the user. In this scenario, this metric will show the 'filecache_min' as the filecache value, rather than showing the actual filecache utilization. On Linux, this metric is derived from the 'cached' value of the 'free -m' command output. ====== GBL_NFS_SERVER_IO_RATE The number of NFS IOs per second the local machine has completed as an NFS server during the interval. This number represents physical IOs received by the server in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both a NFS server, and as an NFS client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. ====== GBL_NFS_SERVER_IO The number of NFS IOs the local machine has completed as an NFS server during the interval. This number represents physical IOs received by the server in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both a NFS server, and as an NFS client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. ====== GBL_NFS_SERVER_IO_PCT The percentage of NFS IOs the local machine has completed as an NFS server versus total NFS IOs completed during the interval. This number represents physical IOs received by the server in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both a NFS server, and as an NFS client. A percentage greater than 50 indicates that this machine is acting more as a server for others. A percentage less than 50 indicates this machine is acting more as a client.
NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. ====== GBL_NFS_SERVER_BYTE The number of KBs the local machine has processed as a NFS server during the interval. Each computer can operate as both a NFS server, and as an NFS client. ====== GBL_NFS_SERVER_CALL The number of NFS calls the local machine has processed as a NFS server during the interval. Calls are the system calls used to initiate physical NFS operations. These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate. This metric includes both successful and unsuccessful calls. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. ====== GBL_NFS_SERVER_CALL_RATE The number of NFS calls the local machine has processed per second as a NFS server during the interval. Calls are the system calls used to initiate physical NFS operations. These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate. This metric includes both successful and unsuccessful calls. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. ====== GBL_NFS_CLIENT_IO_RATE The number of NFS IOs per second the local machine has completed as an NFS client during the interval. This number represents physical IOs sent by the client in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as a NFS client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. ====== GBL_NFS_CLIENT_IO The number of NFS IOs the local machine has completed as an NFS client during the interval. This number represents physical IOs sent by the client in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as a NFS client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. ====== GBL_NFS_CLIENT_IO_PCT The percentage of NFS IOs the local machine has completed as an NFS client versus total NFS IOs completed during the interval. This number represents physical IOs sent by the client in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as a NFS client. A percentage greater than 50 indicates that this machine is acting more as a client. A percentage less than 50 indicates this machine is acting more as a server for others. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. ====== GBL_NFS_CLIENT_BYTE The total number of KBs the local machine has sent or received as an NFS client during the interval. Each computer can operate as both an NFS server, and as a NFS client. ====== GBL_NFS_CLIENT_CALL The number of NFS calls the local machine has processed as a NFS client during the interval. Calls are the system calls used to initiate physical NFS operations. These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate.
This metric includes both successful and unsuccessful calls. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. ====== GBL_NFS_CLIENT_CALL_RATE The number of NFS calls the local machine has processed as a NFS client per second during the interval. Calls are the system calls used to initiate physical NFS operations. These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate. This metric includes both successful and unsuccessful calls. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. ====== GBL_NFS_CALL The number of NFS calls the local system has made as either a NFS client or server during the interval. This includes both successful and unsuccessful calls. Unsuccessful calls are those that cannot be completed due to resource limitations or LAN packet errors. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. On AIX System WPARs, this metric is NA. ====== GBL_NFS_CALL_RATE The number of NFS calls per second the system made as either a NFS client or NFS server during the interval. Each computer can operate as both a NFS server, and as an NFS client. This metric includes both successful and unsuccessful calls. Unsuccessful calls are those that cannot be completed due to resource limitations or LAN packet errors. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. On AIX System WPARs, this metric is NA. ====== GBL_NFS_SERVER_READ_RATE The number of NFS "read" operations per second the system processed as an NFS server during the interval. NFS Version 2 read operations consist of getattr, lookup, readlink, readdir, null, root, statfs, and read. NFS Version 3 read operations consist of getattr, lookup, access, readlink, read, readdir, readdirplus, fsstat, fsinfo, and null. ====== GBL_NFS_CLIENT_READ_RATE The number of NFS "read" operations per second the system generated as an NFS client during the interval. NFS Version 2 read operations consist of getattr, lookup, readlink, readdir, null, root, statfs, and read. NFS Version 3 read operations consist of getattr, lookup, access, readlink, read, readdir, readdirplus, fsstat, fsinfo, and null. ====== GBL_NFS_SERVER_WRITE_RATE The number of NFS "write" operations per second the system processed as an NFS server during the interval. NFS Version 2 write operations consist of setattr, write, writecache, create, remove, rename, link, symlink, mkdir, and rmdir. NFS Version 3 write operations consist of setattr, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, pathconf, and commit. ====== GBL_NFS_CLIENT_WRITE_RATE The number of NFS "write" operations per second the system generated as an NFS client during the interval. NFS Version 2 write operations consist of setattr, write, writecache, create, remove, rename, link, symlink, mkdir, and rmdir. NFS Version 3 write operations consist of setattr, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, pathconf, and commit.
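The read and write groupings listed above can be applied to raw per-operation counters to produce read and write rates. The sketch below classifies hypothetical NFS Version 3 interval counts using those lists; the counter values and the dictionary layout are assumptions for illustration, not the collector's internal format.

    # NFS Version 3 operation groupings as listed above.
    NFSV3_READ_OPS = {"getattr", "lookup", "access", "readlink", "read",
                      "readdir", "readdirplus", "fsstat", "fsinfo", "null"}
    NFSV3_WRITE_OPS = {"setattr", "write", "create", "mkdir", "symlink", "mknod",
                       "remove", "rmdir", "rename", "link", "pathconf", "commit"}

    # Hypothetical per-operation counts observed during one interval.
    interval_counts = {"read": 1200, "write": 400, "getattr": 900,
                       "lookup": 350, "commit": 80, "access": 150}
    interval = 60.0  # seconds

    read_ops = sum(n for op, n in interval_counts.items() if op in NFSV3_READ_OPS)
    write_ops = sum(n for op, n in interval_counts.items() if op in NFSV3_WRITE_OPS)

    # Per-second figures comparable to the *_READ_RATE / *_WRITE_RATE metrics.
    print(f"read operations/s:  {read_ops / interval:.1f}")
    print(f"write operations/s: {write_ops / interval:.1f}")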
====== GBL_NFS_SERVER_READ_BYTE_RATE The number of KBs per second the system sent as a NFS server responding to NFS read operations from client nodes during the interval. NFS Version 2 read operations consist of getattr, lookup, readlink, readdir, null, root, statfs, and read. NFS Version 3 read operations consist of getattr, lookup, access, readlink, read, readdir, readdirplus, fsstat, fsinfo, and null. ====== GBL_NFS_CLIENT_READ_BYTE_RATE The number of KBs per second the system received as an NFS client doing read operations during the interval. NFS Version 2 read operations consist of getattr, lookup, readlink, readdir, null, root, statfs, and read. NFS Version 3 read operations consist of getattr, lookup, access, readlink, read, readdir, readdirplus, fsstat, fsinfo, and null. ====== GBL_NFS_SERVER_WRITE_BYTE_RATE The number of KBs per second the system received over the network as an NFS server performing write operations for client nodes during the interval. NFS Version 2 write operations consist of setattr, write, writecache, create, remove, rename, link, symlink, mkdir, and rmdir. NFS Version 3 write operations consist of setattr, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, pathconf, and commit. ====== GBL_NFS_CLIENT_WRITE_BYTE_RATE The number of KBs per second the system sent over the network as an NFS client doing write operations during the interval. NFS Version 2 write operations consist of setattr, write, writecache, create, remove, rename, link, symlink, mkdir, and rmdir. NFS Version 3 write operations consist of setattr, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, pathconf, and commit. ====== GBL_NFS_SERVER_SERVICE_TIME The time, in seconds, spent for the NFS server to process the client's operations during the interval. This includes all of the time from the point that the operations are received to the point where a reply is sent back to the client, which includes software overhead and any local disk IOs. This is not an average service time per operation; it is the total service time for all operations processed during the interval. ====== GBL_NFS_CLIENT_SERVICE_TIME The time, in seconds, spent to service all NFS operations as a NFS client during the last interval. This is the time from the point that the client originates the requests to the point replies are received including IO buffering, NFS and network software layer delays, physical network latency, and NFS server service time. It is not a measure of the average response time per NFS request. This can be thought of as the round-trip time for all NFS requests made during the interval. ====== GBL_NFS_CLIENT_SERVICE_QUEUE The number of pending NFS client operations during the interval. This value increases as the service time increases and/or as the rate of client requests increases. ====== GBL_NFS_CLIENT_PHYS_TIME The time, in seconds, spent to service all NFS operations as a NFS client during the last interval. This is measured from the time the operation gets onto the physical network until the time a reply is received from the network. In other words, this is the "service time" less the local machine's software overhead. ====== GBL_NFS_CLIENT_BIOD The current number of biods running (both idle and active) at the end of the interval. ====== GBL_NFS_CLIENT_IDLE_BIOD The current number of biods inactive at the end of the interval. A value of zero indicates a potential bottleneck for the NFS client. 
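Combining two of the client-side indicators described above, a monitoring script might report the client share of NFS IO and flag a possible biod shortage when no idle biods remain. This is only a sketch with hypothetical input values, not product logic.

    # Hypothetical interval values for the metrics described above.
    nfs_client_io = 5400   # GBL_NFS_CLIENT_IO
    nfs_server_io = 1800   # GBL_NFS_SERVER_IO
    idle_biods    = 0      # GBL_NFS_CLIENT_IDLE_BIOD

    total_io = nfs_client_io + nfs_server_io
    client_io_pct = (nfs_client_io / total_io * 100) if total_io else 0.0

    print(f"client share of NFS IO: {client_io_pct:.1f}%")
    if client_io_pct > 50:
        print("this system is acting more as an NFS client than as a server")
    if idle_biods == 0:
        print("no idle biods: potential NFS client bottleneck")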
====== GBL_NFS_SERVER_BAD_CALL The number of failed NFS server calls during the interval. Calls fail due to lack of system resources (lack of virtual memory) as well as network errors. ====== GBL_NFS_CLIENT_BAD_CALL The number of failed NFS client calls during the interval. Calls fail due to lack of system resources (lack of virtual memory) as well as network errors. On Linux and some other UNIX platforms, NFS file systems must be soft-mounted for this metric to be updated. With a hard mount, which is the default on many platforms, the necessary counters are not updated (at least on Linux). ====== GBL_NFS_LOGL_READ The number of logical reads made to NFS disks by the local machine as an NFS client during the interval. Each computer can operate as both an NFS server and an NFS client. For this metric the local machine is acting as an NFS client (that is, the disks are remote) since if it were acting as a server the logical disk requests would be going to local disks. These logical requests do not necessarily result in a physical IO request across the NFS link. ====== GBL_NFS_LOGL_READ_PCT The percentage of logical reads out of the total logical reads and writes to NFS disks by the local machine during the interval. ====== GBL_NFS_LOGL_READ_RATE The number of logical reads per second made to NFS disks by the local machine during the interval. ====== GBL_NFS_LOGL_READ_BYTE The number of KBs transferred through logical reads to NFS disks by the local machine during the interval. Note that these are transfers by read calls, not physical IO. ====== GBL_NFS_LOGL_WRITE The number of logical writes made to NFS disks by the local machine during the interval. Each computer can operate as both an NFS server and an NFS client. For this metric the local machine is acting as an NFS client (the disks are remote) since if it were acting as a server the logical disk requests would be going to local disks. These logical requests do not necessarily result in a physical IO request across the NFS link. ====== GBL_NFS_LOGL_WRITE_PCT The percentage of logical writes out of the total logical reads and writes to NFS disks by the local machine during the interval. ====== GBL_NFS_LOGL_WRITE_RATE The number of logical writes per second made to NFS disks by the local machine during the interval. ====== GBL_NFS_LOGL_WRITE_BYTE The number of KBs transferred through logical writes to NFS disks by the local machine during the interval. Note that these are transfers by write calls, not physical IO. ====== GBL_SYSCALL_READ_RATE The average number of read system calls per second made during the interval. This includes reads to all devices including disks, terminals and tapes. This is the same as "sread/s" reported by the sar -c command. ====== GBL_SYSCALL_READ The number of read system calls made during the interval. This includes reads to all devices including disks, terminals and tapes. ====== GBL_SYSCALL_READ_PCT The percentage of read system calls out of the total read and write system calls made during the interval. ====== GBL_SYSCALL_WRITE_RATE The average number of write system calls per second made during the interval. This includes writes to all devices including disks, terminals and tapes. ====== GBL_SYSCALL_WRITE The number of write system calls made during the interval. ====== GBL_SYSCALL_WRITE_PCT The percentage of write system calls out of the total read and write system calls made during the interval. ====== GBL_SYSCALL_BYTE_RATE The number of KBs transferred per second via read and write system calls during the interval.
This includes reads and writes to all devices including disks, terminals and tapes. ====== GBL_SYSCALL_READ_BYTE_RATE The number of KBs transferred per second via read system calls during the interval. This includes reads to all devices including disks, terminals and tapes. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. ====== GBL_SYSCALL_READ_BYTE The number of KBs transferred through read system calls during the interval. This includes reads to all devices including disks, terminals and tapes. ====== GBL_SYSCALL_WRITE_BYTE_RATE The number of KBs per second transferred via write system calls during the interval. This includes writes to all devices including disks, terminals and tapes. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. ====== GBL_SYSCALL_WRITE_BYTE The number of KBs transferred via write system calls during the interval. This includes writes to all devices including disks, terminals and tapes. ====== GBL_MI_LOST_PROC The number of processes the measurement layer has lost the ability to update during the interval. This is an indication that system activity might require the midaemon to be restarted with a larger process count. See the midaemon man page for additional information on the -pids parameter. ====== GBL_MI_PROC_ENTRIES The number of process entries allocated in the midaemon shared memory area. ====== GBL_MI_THREAD_ENTRIES The number of thread entries allocated in the midaemon shared memory area. ====== GBL_RUN_QUEUE On UNIX systems other than Linux, this is the average number of threads waiting in the runqueue over the interval. The average is computed against the number of times the run queue is found occupied rather than against time. The average is updated by the kernel at a fine-grained interval, only when the run queue is occupied. It is not averaged against the interval and can therefore be misleading for long intervals when the run queue is empty most or part of the time. This value matches runq-sz reported by the "sar -q" command. The GBL_LOADAVG metrics are better indicators of run queue pressure. On Linux and Windows, this is an instantaneous value obtained at the time of logging. On Linux, it shows the number of threads waiting in the runqueue. On Windows, it shows the Processor Queue Length. On UNIX systems, GBL_RUN_QUEUE will typically be a small number. Larger than normal values for this metric indicate CPU contention among threads. This CPU bottleneck is also normally indicated by 100 percent GBL_CPU_TOTAL_UTIL. It may be OK to have GBL_CPU_TOTAL_UTIL be 100 percent if no other threads are waiting for the CPU. However, if GBL_CPU_TOTAL_UTIL is 100 percent and GBL_RUN_QUEUE is greater than the number of processors, it indicates a CPU bottleneck. On Windows, the Processor Queue reflects a count of process threads which are ready to execute. A thread is ready to execute (in the Ready state) when the only resource it is waiting on is the processor. The Windows operating system itself has many system threads which intermittently use small amounts of processor time. Several low-priority threads intermittently wake up and execute for very short intervals. Depending on when the collection process samples this queue, there may be none or several of these low-priority threads trying to execute. Therefore, even on an otherwise quiescent system, the Processor Queue Length can be high.
High values for this metric during intervals where the overall CPU utilization (gbl_cpu_total_util) is low do not indicate a performance bottleneck. Relatively high values for this metric during intervals where the overall CPU utilization is near 100% can indicate a CPU performance bottleneck. HP-UX RUN/PRI/CPU Queue differences for multi-cpu systems: For example, let's assume we're using a system with eight processors. We start eight CPU intensive threads that consume almost all of the CPU resources. The approximate values shown for the CPU related queue metrics would be:
GBL_RUN_QUEUE = 1.0
GBL_PRI_QUEUE = 0.1
GBL_CPU_QUEUE = 1.0
Assume we start an additional eight CPU intensive threads. The approximate values now shown are:
GBL_RUN_QUEUE = 2.0
GBL_PRI_QUEUE = 8.0
GBL_CPU_QUEUE = 16.0
At this point, we have sixteen CPU intensive threads running on the eight processors. Keeping the definitions of the three queue metrics in mind, the run queue is 2 (that is, 16 / 8); the pri queue is 8 (only half of the threads can be active at any given time); and the cpu queue is 16 (half of the threads waiting in the cpu queue that are ready to run, plus one for each active thread). This illustrates that the run queue is the average number of threads waiting in the runqueue across all processors; the pri queue is the number of threads that are blocked on "PRI" (priority); and the cpu queue is the number of threads in the cpu queue that are ready to run, including the threads using the CPU. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_LOADAVG The 1 minute load average of the system obtained at the time of logging. On Windows, this is the load average of the system over the interval. The load average on Windows is the average number of threads that were waiting in the ready state during the interval. It is obtained by sampling the number of threads in the ready state every sub-proc interval, accumulating the samples over the interval, and averaging them over the interval. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_CPU_QUEUE The average number of processes or kernel threads using the CPU plus all of those processes or kernel threads blocked on PRIORITY (waiting for their priority to become high enough to get the CPU) during the interval. This metric is an indicator of CPU demands among the active processes or kernel threads. To determine if the CPU is a bottleneck, compare this metric with GBL_CPU_TOTAL_UTIL. If GBL_CPU_TOTAL_UTIL is near 100 percent and GBL_CPU_QUEUE is greater than four, there is a high probability of a CPU bottleneck. This is calculated as (the CPU time used plus the accumulated time that all processes or kernel threads spent blocked on PRI (that is, priority)) divided by the interval time. The difference between this metric and GBL_PRI_QUEUE is that it includes the processes or kernel threads using the CPU, if any. HP-UX RUN/PRI/CPU Queue differences for multi-cpu systems: For example, let's assume we're using a system with eight processors. We start eight CPU intensive threads that consume almost all of the CPU resources. The approximate values shown for the CPU related queue metrics would be:
GBL_RUN_QUEUE = 1.0
GBL_PRI_QUEUE = 0.1
GBL_CPU_QUEUE = 1.0
Assume we start an additional eight CPU intensive threads. The approximate values now shown are:
GBL_RUN_QUEUE = 2.0
GBL_PRI_QUEUE = 8.0
GBL_CPU_QUEUE = 16.0
At this point, we have sixteen CPU intensive threads running on the eight processors.
Keeping the definitions of the three queue metrics in mind, the run queue is 2 (that is, 16 / 8); the pri queue is 8 (only half of the threads can be active at any given time); and the cpu queue is 16 (half of the threads waiting in the cpu queue that are ready to run, plus one for each active thread). This illustrates that the run queue is the average number of threads waiting in the runqueue across all processors; the pri queue is the number of threads that are blocked on "PRI" (priority); and the cpu queue is the number of threads in the cpu queue that are ready to run, including the threads using the CPU. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. The snapshot of the number of processes using the CPU plus all of those processes blocked on priority (waiting for their priority to become high enough to get the CPU) is taken during the last sub-proc interval. The value represents the queue length during the last sub-proc interval; it is calculated for the last sub-proc interval because the most recent count of processes running and blocked on priority can be obtained only during that interval. To determine if the CPU is a bottleneck, compare this metric with GBL_CPU_TOTAL_UTIL. If GBL_CPU_TOTAL_UTIL is near 100 percent and GBL_CPU_QUEUE is greater than four, there is a high probability of a CPU bottleneck. This metric also accounts for GBL_PRI_QUEUE, so its value is always at least as large as GBL_PRI_QUEUE. ====== GBL_PRI_WAIT_PCT The percentage of time processes or kernel threads were blocked on PRI (waiting for their priority to become high enough to get the CPU) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on PRI divided by the accumulated time that all processes or kernel threads were alive during the interval. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system.
No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_PRI_QUEUE The average number of processes or kernel threads blocked on PRI (waiting for their priority to become high enough to get the CPU) during the interval. To determine if the CPU is a bottleneck, compare this metric with GBL_CPU_TOTAL_UTIL. If GBL_CPU_TOTAL_UTIL is near 100 percent and GBL_PRI_QUEUE is greater than three, there is a high probability of a CPU bottleneck. This is calculated as the accumulated time that all processes or kernel threads spent blocked on PRI divided by the interval time. HP-UX RUN/PRI/CPU Queue differences for multi-cpu systems: For example, let's assume we're using a system with eight processors. We start eight CPU intensive threads that consume almost all of the CPU resources. The approximate values shown for the CPU related queue metrics would be:
GBL_RUN_QUEUE = 1.0
GBL_PRI_QUEUE = 0.1
GBL_CPU_QUEUE = 1.0
Assume we start an additional eight CPU intensive threads. The approximate values now shown are:
GBL_RUN_QUEUE = 2.0
GBL_PRI_QUEUE = 8.0
GBL_CPU_QUEUE = 16.0
At this point, we have sixteen CPU intensive threads running on the eight processors. Keeping the definitions of the three queue metrics in mind, the run queue is 2 (that is, 16 / 8); the pri queue is 8 (only half of the threads can be active at any given time); and the cpu queue is 16 (half of the threads waiting in the cpu queue that are ready to run, plus one for each active thread). This illustrates that the run queue is the average number of threads waiting in the runqueue across all processors; the pri queue is the number of threads that are blocked on "PRI" (priority); and the cpu queue is the number of threads in the cpu queue that are ready to run, including the threads using the CPU. Note that if the value for GBL_PRI_QUEUE greatly exceeds the value for GBL_RUN_QUEUE, this may be a side-effect of the measurement interface having lost trace data. In this case, check the value of the GBL_LOST_MI_TRACE_BUFFERS metric. If there has been buffer loss, you can correct the value of GBL_PRI_QUEUE by restarting the midaemon and the performance tools. You can use the /opt/perf/bin/midaemon -T command to force immediate shutdown of the measurement interface. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system.
No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_PRI_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on PRI (waiting for their priority to become high enough to get the CPU) during the interval. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. ====== GBL_DISK_WAIT_PCT The percentage of time processes or kernel threads were blocked on DISK (waiting in a disk driver for their disk IO to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on DISK divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_DISK_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on DISK (waiting in a disk driver for their disk IO to complete) during the interval. ====== GBL_MEM_WAIT_PCT The percentage of time processes or kernel threads were blocked on VM (waiting for virtual memory resources to become available) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on VM divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. 
The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_DISK_SUBSYSTEM_WAIT_TIME On HP-UX, the accumulated time, in seconds, that all processes or kernel threads were blocked on the disk subsystem (waiting for their file system IOs to complete) during the interval. This is the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. On Linux, the accumulated time, in seconds, that all processes or kernel threads were blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. ====== GBL_MEM_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on VM (waiting for virtual memory resources to become available) during the interval. ====== GBL_TERM_IO_WAIT_PCT The percentage of time processes or kernel threads were blocked on terminal IO (waiting for terminal IO to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on TERM (that is, terminal IO) divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_TERM_IO_QUEUE The average number of processes or kernel threads blocked on terminal IO (waiting for their terminal IO to complete) during the interval. 
This is calculated as the accumulated time that all processes or kernel threads spent blocked on TERM (that is, terminal IO) divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_TERM_IO_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on terminal IO (waiting for their terminal IO to complete) during the interval. ====== GBL_IPC_WAIT_PCT The percentage of time processes or kernel threads were blocked on InterProcess Communication (IPC) (waiting for their interprocess communication calls to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on IPC divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_IPC_QUEUE The average number of processes or kernel threads blocked on InterProcess Communication (IPC) (waiting for their interprocess communication calls to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on IPC divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. 
The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_IPC_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on InterProcess Communication (IPC) (waiting for their interprocess communication calls to complete) during the interval. ====== GBL_SLEEP_WAIT_PCT The percentage of time processes or kernel threads were blocked on SLEEP (waiting to awaken from sleep system calls) during the interval. A process or kernel thread enters the SLEEP state by putting itself to sleep using system calls such as sleep, wait, pause, sigpause, sigsuspend, poll and select. This is calculated as the accumulated time that all processes or kernel threads spent blocked on SLEEP divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_SLEEP_QUEUE The average number of processes or kernel threads blocked on SLEEP (waiting to awaken from sleep system calls) during the interval. A process or kernel thread enters the SLEEP state by putting itself to sleep using system calls such as sleep, wait, pause, sigpause, sigsuspend, poll and select. This is calculated as the accumulated time that all processes or kernel threads spent blocked on SLEEP divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. 
No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_SLEEP_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on SLEEP (waiting to awaken from sleep system calls) during the interval. A process or kernel thread enters the SLEEP state by putting itself to sleep using system calls such as sleep, wait, pause, sigpause, sigsuspend, poll and select. ====== GBL_OTHER_IO_WAIT_PCT The percentage of time processes or kernel threads were blocked on "other IO" during the interval. "Other IO" includes all IO directed at a device (connected to the local computer) which is not a terminal or LAN. Examples of "other IO" devices are local printers, tapes, instruments, and disks. Time waiting for character (raw) IO to disks is included in this measurement. Time waiting for file system buffered IO to disks will typically be seen as IO or CACHE wait. Time waiting for IO to NFS disks is reported as NFS wait. This is calculated as the accumulated time that all processes or kernel threads spent blocked on other IO divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_OTHER_IO_QUEUE The average number of processes or kernel threads blocked on "other IO" during the interval. "Other IO" includes all IO directed at a device (connected to the local computer) which is not a terminal or LAN. Examples of "other IO" devices are local printers, tapes, instruments, and disks. Time waiting for character (raw) IO to disks is included in this measurement. Time waiting for file system buffered IO to disks will typically be seen as IO or CACHE wait. Time waiting for IO to NFS disks is reported as NFS wait.
This is calculated as the accumulated time that all processes or kernel threads spent blocked on other IO divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_OTHER_IO_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on "other IO" during the interval. "Other IO" includes all IO directed at a device (connected to the local computer) which is not a terminal or LAN. Examples of "other IO" devices are local printers, tapes, instruments, and disks. Time waiting for character (raw) IO to disks is included in this measurement. Time waiting for file system buffered IO to disks will typically be seen as IO or CACHE wait. Time waiting for IO to NFS disks is reported as NFS wait. ====== GBL_OTHER_WAIT_PCT The percentage of time processes or kernel threads were blocked on other (unknown) activities during the interval. This includes processes or kernel threads that were started and subsequently suspended before the midaemon was started and have not been resumed, or the block state is unknown. This is calculated as the accumulated time that all processes or kernel threads spent blocked on OTHER divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_OTHER_QUEUE The average number of processes or kernel threads blocked on other (unknown) activities during the interval.
This includes processes or kernel threads that were started and subsequently suspended before the midaemon was started and have not been resumed, or the block state is unknown. This is calculated as the accumulated time that all processes or kernel threads spent blocked on OTHER divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_OTHER_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on other (unknown) activities during the interval. This includes processes or kernel threads that were started and subsequently suspended before the midaemon was started and have not been resumed, or the block state is unknown. ====== GBL_CACHE_WAIT_PCT The percentage of time processes or kernel threads were blocked on cache (waiting for the file system buffer cache to be updated) during the interval. Processes or kernel threads doing raw IO to a disk are not included in this measurement. This is calculated as the accumulated time that all processes or kernel threads spent blocked on CACHE divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_CACHE_QUEUE The average number of processes or kernel threads blocked on CACHE (waiting for the file system buffer cache to be updated) during the interval. 
Processes or kernel threads doing raw IO to a disk are not included in this measurement. As this number rises, it is an indication of a disk or memory bottleneck. This is calculated as the accumulated time that all processes or kernel threads spent blocked on CACHE divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_CACHE_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on CACHE (waiting for the file system buffer cache to be updated) during the interval. Processes or kernel threads doing raw IO to a disk are not included in this measurement. ====== GBL_RPC_WAIT_PCT The percentage of time processes or kernel threads were blocked on RPC (waiting for their remote procedure calls to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on RPC divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_RPC_QUEUE The average number of processes or kernel threads blocked on RPC (waiting for their remote procedure calls to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on RPC divided by the interval time. 
The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_RPC_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on RPC (waiting for their remote procedure calls to complete) during the interval. ====== GBL_LOADAVG5 The 5 minute load average of the system obtained at the time of logging. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_LOADAVG15 The 15 minute load average of the system obtained at the time of logging. ====== GBL_INODE_WAIT_PCT The percentage of time processes or kernel threads were blocked on INODE (waiting for an inode to be updated or to become available) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on INODE divided by the accumulated time that all processes or kernel threads were alive during the interval. Inodes are used to store information about files within the file system. Every file has at least two inodes associated with it (one for the directory and one for the file itself). The information stored in an inode includes the owners, timestamps, size, and an array of indices used to translate logical block numbers to physical sector numbers. There is a separate inode maintained for every view of a file, so if two processes have the same file open, they both use the same directory inode, but separate inodes for the file. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. 
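The QUEUE and WAIT PCT metrics in this dictionary all follow the same two calculations described above: a QUEUE value is the accumulated time all processes or kernel threads spent blocked in the given state divided by the interval time, and the corresponding WAIT PCT value is that same blocked time divided by the accumulated alive time. A minimal Python sketch with hypothetical numbers, using the INODE block state as the example:

    # Hypothetical accumulated times for one 60-second interval.
    interval_time = 60.0    # seconds in the measurement interval
    blocked_time = 30.0     # total seconds all threads spent blocked on INODE
    alive_time = 600.0      # total alive seconds (for example, 10 threads x 60 s)

    # QUEUE: average number of threads blocked in this state.
    inode_queue = blocked_time / interval_time            # 0.5
    # WAIT PCT: share of total alive time spent blocked in this state.
    inode_wait_pct = 100.0 * blocked_time / alive_time    # 5.0 percent

    print(inode_queue, inode_wait_pct)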
====== GBL_INODE_QUEUE The average number of processes or kernel threads blocked on INODE (waiting for an inode to be updated or to become available) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on INODE divided by the interval time. Inodes are used to store information about files within the file system. Every file has at least two inodes associated with it (one for the directory and one for the file itself). The information stored in an inode includes the owners, timestamps, size, and an array of indices used to translate logical block numbers to physical sector numbers. There is a separate inode maintained for every view of a file, so if two processes have the same file open, they both use the same directory inode, but separate inodes for the file. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_INODE_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on INODE (waiting for an inode to be updated or to become available) during the interval. ====== GBL_LAN_WAIT_PCT The percentage of time processes or kernel threads were blocked on LAN (waiting for their IO over the LAN to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on LAN divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. 
In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_LAN_QUEUE The average number of processes or kernel threads blocked on LAN (waiting for their IO over the LAN to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on LAN divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_LAN_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on LAN (waiting for their IO over the LAN to complete) during the interval. ====== GBL_MSG_WAIT_PCT The percentage of time processes or kernel threads were blocked on messages (waiting for their message queue calls to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on MESG (that is, messages) divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_MSG_QUEUE The average number of processes or kernel threads blocked on messages (waiting for their message queue calls to complete) during the interval. 
This is calculated as the accumulated time that all processes or kernel threads spent blocked on MESG (that is, messages) divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_MSG_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on messages (waiting for their message queue calls to complete) during the interval. ====== GBL_PIPE_WAIT_PCT The percentage of time processes or kernel threads were blocked on PIPE (waiting for pipe communication to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on PIPE divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_PIPE_QUEUE The average number of processes or kernel threads blocked on PIPE (waiting for pipe communication to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on PIPE divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. 
No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_PIPE_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on PIPE (waiting for pipe communication to complete) during the interval. ====== GBL_SOCKET_WAIT_PCT The percentage of time processes or kernel threads were blocked on sockets (waiting for their IO to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on SOCKT (that is, sockets) divided by the accumulated time that all processes or threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_SOCKET_QUEUE The average number of processes or kernel threads blocked on sockets (waiting for their IO to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on SOCKT (that is, sockets) divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. 
For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_SOCKET_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on sockets (waiting for their IO to complete) during the interval. ====== GBL_SEM_WAIT_PCT The percentage of time processes or kernel threads were blocked on semaphores (waiting on a semaphore operation) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on SEM (that is, semaphores) divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_SEM_QUEUE The average number of processes or kernel threads blocked on semaphores (waiting for their semaphore operations to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on SEM (that is, semaphores) divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem.
====== GBL_SEM_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on semaphores (waiting for their semaphore operations to complete) during the interval. ====== GBL_SYS_WAIT_PCT The percentage of time processes or kernel threads were blocked on SYSTM (that is, system resources) during the interval. These resources include data structures from the LVM, VFS, UFS, JFS, and Disk Quota subsystems. "SYSTM" is the "catch-all" wait state for blocks on system resources that are not common enough or long enough to warrant their own stop state. This is calculated as the accumulated time that all processes or kernel threads spent blocked on SYSTM divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_SYS_QUEUE The average number of processes or kernel threads blocked on SYSTM (that is, system resources) during the interval. These resources include data structures from the LVM, VFS, UFS, JFS, and Disk Quota subsystems. "SYSTM" is the "catch-all" wait state for blocks on system resources that are not common enough or long enough to warrant their own stop state. This is calculated as the accumulated time that all processes or kernel threads spent blocked on SYSTM divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. 
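To make the relationship between the WAIT_TIME, QUEUE and WAIT PCT forms of these block-state metrics concrete, here is a small sketch. It is only an illustration of the definitions above using made-up numbers, not the agent's actual implementation:

    # Hypothetical 60-second interval with 500 processes alive the whole time.
    interval = 60.0                       # seconds
    alive_time = 500 * interval           # accumulated alive time of all processes

    # Accumulated time all processes spent blocked on one state (e.g. SEM),
    # summed over the interval -- the basis of the *_WAIT_TIME metric.
    sem_wait_time = 90.0                  # seconds

    sem_queue = sem_wait_time / interval                  # *_QUEUE: average number blocked
    sem_wait_pct = 100.0 * sem_wait_time / alive_time     # *_WAIT_PCT

    print(sem_queue)      # 1.5  processes blocked on average
    print(sem_wait_pct)   # 0.3  percent of all alive process time

    # Why a global QUEUE can look low while an application WAIT PCT is high:
    # an application with 4 processes, 3 of which were blocked on disk I/O
    # for the whole interval.
    app_alive_time = 4 * interval
    app_disk_wait_time = 3 * interval
    app_disk_wait_pct = 100.0 * app_disk_wait_time / app_alive_time
    print(app_disk_wait_pct)   # 75.0 percent within the application

The global queue value of 1.5 looks small next to 500 alive processes, while the application percentage is 75%, which is the situation described in the example above.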
====== GBL_SYS_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on SYSTM (that is, system resources) during the interval. These resources include data structures from the LVM, VFS, UFS, JFS, and Disk Quota subsystems. "SYSTM" is the "catch-all" wait state for blocks on system resources that are not common enough or long enough to warrant their own stop state. ====== GBL_CDFS_WAIT_PCT The percentage of time processes or kernel threads were blocked on CDFS (waiting for their Compact Disk file system IO to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on CDFS divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_CDFS_QUEUE The average number of processes or kernel threads blocked on CDFS (waiting for their Compact Disk file system IO to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on CDFS divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_CDFS_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on CDFS (waiting for their Compact Disk file system IO to complete) during the interval. 
====== GBL_GRAPHICS_WAIT_PCT The percentage of time processes or kernel threads were blocked on graphics (waiting for their graphics operations to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on GRAPH (that is, graphics) divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_GRAPHICS_QUEUE The average number of processes or kernel threads blocked on graphics (waiting for their graphics operations to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on GRAPH (that is, graphics) divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_GRAPHICS_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on graphics (waiting for their graphics operations to complete) during the interval. ====== GBL_NFS_WAIT_PCT The percentage of time processes or kernel threads were blocked on NFS (waiting for their network file system IO to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on NFS divided by the accumulated time that all processes or kernel threads were alive during the interval.
The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_NFS_QUEUE The average number of processes or kernel threads blocked on NFS (waiting for their network file system IO to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on NFS divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_NFS_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on NFS (waiting for their network file system IO to complete) during the interval. ====== GBL_JOBCTL_WAIT_PCT The percentage of time processes or kernel threads were blocked on job control (having been stopped with the job control facilities) during the interval. Job control waits include waiting at a debug breakpoint, as well as being blocked attempting to write (from background) to a terminal which has the "stty tostop" option set. This is calculated as the accumulated time that all processes or kernel threads spent blocked on job control divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. 
No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_JOBCTL_QUEUE The average number of processes or kernel threads blocked on job control (having been stopped with the job control facilities) during the interval. Job control waits include waiting at a debug breakpoint, as well as being blocked attempting to write (from background) to a terminal which has the "stty tostop" option set. This is calculated as the accumulated time that all processes or kernel threads spent blocked on job control divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_JOBCTL_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on job control (having been stopped with the job control facilities) during the interval. Job control waits include waiting at a debug breakpoint, as well as being blocked attempting to write (from background) to a terminal which has the "stty tostop" option set. ====== GBL_STREAM_WAIT_PCT The percentage of time processes or kernel threads were blocked on streams IO (waiting for a streams IO operation to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on STRMS (that is, streams IO) divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. 
No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_STREAM_QUEUE The average number of processes or kernel threads blocked on streams IO (waiting for a streams IO operation to complete) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on STRMS (that is, streams IO) divided by the interval time. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. ====== GBL_STREAM_WAIT_TIME The accumulated time, in seconds, that all processes or kernel threads were blocked on streams IO (waiting for a streams IO operation to complete) during the interval. ====== GBL_MEM_SWAP_QUEUE The average number of processes waiting to be swapped in. These processes are inactive because they are waiting for pages to be paged in. This is the same as the "procs b" field reported in vmstat. ====== GBL_BLOCKED_IO_QUEUE The average number of processes blocked on local disk resources (IO, paging). This metric is an indicator of disk contention among active processes. It should normally be a very small number. If GBL_DISK_UTIL_PEAK is near 100 percent and GBL_BLOCKED_IO_QUEUE is greater than 1, a disk bottleneck is probable. On SUN, this is the same as the "procs b" field reported in vmstat. On Solaris non-global zones, this metric shows data from the global zone. ====== GBL_MEM_IOPEND_QUEUE The count of pending IO waits for the VM subsystem. ====== GBL_MEM_FRAME_WAIT_QUEUE The count of free frame waits for the VM subsystems. ====== GBL_MEM_CACHE_WRITE_HIT The number of write cache hits - logical writes that did not result in physical IOs during the interval. A cache write hit occurs when a logical write request is issued to a disk file block that is already mapped in a buffer that is in a delayed write state. 
This metric gives an indication of how many physical IOs are eliminated as a result of buffering logical write requests. Physical IOs are eliminated in environments where asynchronous writes are done (see the O_SYNC flag in open(2)) to the same file blocks before being explicitly written to the disk or flushed to disk by the syncer process. Environments that attempt to minimize the chance of file system data loss by issuing synchronous writes or by using shorter syncer intervals will see fewer cache write hits. During a short interval, the number of physical writes can exceed the number of logical write requests. This would yield a negative number of "write hits". If this occurs in an interval, "na" will be returned. ====== GBL_MEM_CACHE_WRITE_HIT_PCT The percentage of logical disk writes that did not result in physical disk IOs during the interval. A cache write hit occurs when a logical write request is issued to a disk file block that is already mapped in a buffer that is in a delayed write state. This metric gives an indication of how many physical IOs are eliminated as a result of buffering logical write requests. Physical IOs are eliminated in environments where asynchronous writes are done (see the O_SYNC flag in open(2)) to the same file blocks before being explicitly written to the disk or flushed to disk by the syncer process. Environments that attempt to minimize the chance of file system data loss by issuing synchronous writes or by using shorter syncer intervals will see fewer cache write hits. During a short interval, the number of physical writes can exceed the number of logical write requests. This would yield a negative number of "write hits". If this occurs in an interval, "na" will be returned. ====== GBL_MEM_VM_BACKTRACK The number of backtracks done by the Virtual Memory Manager during the interval. ====== GBL_MEM_VM_BACKTRACK_RATE The number of backtracks done per second by the Virtual Memory Manager during the interval. ====== GBL_MEM_PAGE_RECLAIM The number of pages reclaimed by the Virtual Memory Manager during the interval. ====== GBL_MEM_PAGE_RECLAIM_RATE The number of pages reclaimed per second by the Virtual Memory Manager during the interval. ====== GBL_MEM_LCK_MISS The number of locks missed by the Virtual Memory Manager during the interval. ====== GBL_MEM_LCK_MISS_RATE The number of locks missed per second by the Virtual Memory Manager during the interval. ====== GBL_MEM_IO_START The number of IOs started by the Virtual Memory Manager during the interval. ====== GBL_MEM_IO_START_RATE The number of IOs started per second by the Virtual Memory Manager during the interval. ====== GBL_MEM_IO_DONE The number of IOs completed by the Virtual Memory Manager during the interval. ====== GBL_MEM_IO_DONE_RATE The number of IOs completed per second by the Virtual Memory Manager during the interval. ====== GBL_MEM_ZEROFILL_PG The number of pages filled with zero by the Virtual Memory Manager during the interval. ====== GBL_MEM_ZEROFILL_PG_RATE The number of pages filled with zero per second by the Virtual Memory Manager during the interval. ====== GBL_MEM_EXECFILL_PG The number of pages filled at process exec time by the Virtual Memory Manager during the interval. ====== GBL_MEM_EXECFILL_PG_RATE The number of pages filled per second at process exec time by the Virtual Memory Management System during the interval. ====== GBL_MEM_PG_SCAN The number of pages scanned by the pageout daemon (or by the Clock Hand on AIX) during the interval.
The clock hand algorithm is used to control page aging on the system. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_MEM_PG_SCAN_RATE The number of pages scanned per second by the pageout daemon (or by the Clock Hand on AIX, "vmstat -s" pages examined by clock) during the interval. The clock hand algorithm is used to control page aging on the system. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_MEM_CLK_HAND_CYCLE The number of clock hand cycles during the interval. The clock hand algorithm is used to control page aging on the system. ====== GBL_MEM_CLK_HAND_CYCLE_RATE The number of clock hand cycles per second on the system during the interval. The clock hand algorithm is used to control page aging on the system. ====== GBL_MEM_PG_STEAL The number of pages stolen by the Virtual Memory Manager during the interval. ====== GBL_MEM_PG_STEAL_RATE The number of pages stolen per second by the Virtual Memory Manager during the interval. ====== GBL_FREE_FRAME_UPPER_THRESHOLD This metric returns the high threshold for the number of free pages. On AIX, the VMM attempts to keep the free list (number of empty page frames in memory) within a fixed range. The high threshold of this range is computed as 2 frames per megabyte of memory; the low threshold is set at 8 frames below the high threshold. When page faults and/or system demands cause the free list size to fall below the low threshold, the page replacement algorithm is run and enough frames are freed to make the list larger than the high threshold. The size of free list must be kept above the low threshold for several reasons. For example, the operating system algorithm requires up to 8 free frames at a time for each process that is doing sequential reads. Also, the VMM must avoid deadlocks within the operating system itself, which can occur if there is not enough space to read in a page required to free a page frame. ====== GBL_FREE_FRAME_LOWER_THRESHOLD This metric returns the low threshold for the number of free page frames. The Virtual Memory Manager must keep the size of the free list above the low threshold. On AIX, the VMM attempts to keep the free list (number of empty page frames in memory) within a fixed range. The high threshold of this range is computed as 2 frames per megabyte of memory; the low threshold is set at 8 frames below the high threshold. When page faults and/or system demands cause the free list size to fall below the low threshold, the page replacement algorithm is run and enough frames are freed to make the list larger than the high threshold. The size of free list must be kept above the low threshold for several reasons. For example, the operating system algorithm requires up to 8 free frames at a time for each process that is doing sequential reads. Also, the VMM must avoid deadlocks within the operating system itself, which can occur if there is not enough space to read in a page required to free a page frame. ====== GBL_FREE_FRAME_CURR This metric returns the number of page frames in the free list during the interval. This free list must be kept above the low threshold of free page frames. On AIX, the VMM attempts to keep the free list (number of empty page frames in memory) within a fixed range. The high threshold of this range is computed as 2 frames per megabyte of memory; the low threshold is set at 8 frames below the high threshold.
When page faults and/or system demands cause the free list size to fall below the low threshold, the page replacement algorithm is run and enough frames are freed to make the list larger than the high threshold. The size of free list must be kept above the low threshold for several reasons. For example, the operating system algorithm requires up to 8 free frames at a time for each process that is doing sequential reads. Also, the VMM must avoid deadlocks within the operating system itself, which can occur if there is not enough space to read in a page required to free a page frame. ====== GBL_MEM_OVER_COMMIT This is calculated as the number of swap outs during the last interval multiplied by the high memory commitment threshold. The resulting number should be less than the number of page steals, since the number of swap outs is usually much less than the number of page steals. If this metric is larger than the number of page steals, memory is considered overcommitted, and a bottleneck condition exists. ====== GBL_PRM_MEM_UTIL The total percent of memory used by processes within the PRM groups during the interval. This does not include system processes (processes attached to PRM group 0). ====== GBL_MEM_FILE_PAGEIN_RATE The number of page ins from the file system per second during the interval. On Solaris, this is the same as the "fpi" value from the "vmstat -p" command, divided by page size in KB. On Linux, the value is reported in kilobytes and matches the 'io/bi' values from vmstat. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_MEM_FILE_PAGEOUT_RATE The number of page outs to the file system per second during the interval. On Solaris, this is the same as the "fpo" value from the "vmstat -p" command, divided by page size in KB. On Linux, the value is reported in kilobytes and matches the 'io/bo' values from vmstat. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. ====== GBL_LS_PHYS_MEM_TOTAL Total physical memory (in MBs) allotted across all the partitions. This metric is with respect to the partitions which are responding over network. On AIX System WPARs, this metric is NA. ====== GBL_LS_PHYS_MEM_CONSUMED The physical memory (in MBs) that is consumed by partitions. This metric is with respect to the partitions which are responding over network. On AIX System WPARs, this metric is NA. ====== GBL_POOL_CPU_AVAIL The available physical processors in the shared processor pool during the interval. This metric will be "na" if pool_util_authority is not set in HMC. pool_util_authority indicates if pool utilization data is available or not. To set pool_util_authority, select the "Allow shared processor pool utilization authority" check box from HMC. On AIX System WPARs, this metric is NA. ====== GBL_LS_MODE Indicates whether the CPU entitlement for the logical system is Capped or Uncapped. The value "Uncapped" indicates that the logical system can utilize idle cycles from the shared processor pool of CPUs beyond its CPU entitlement. On AIX SPLPAR, this metric is same as "Mode" field of 'lparstat -i' command. On a recognized VMware ESX guest, where VMware guest SDK is enabled, the value is "Uncapped" if maximum CPU entitlement (GBL_CPU_ENTL_MAX) is unlimited. Else, the value is always "Capped". ====== GBL_LS_TYPE The virtualization technology if applicable. 
The value of this metric is "HPVM" on HP-UX host, "LPAR" on AIX LPAR, "Sys WPAR" on system WPAR, "Zone" on Solaris Zones, "VMware" on recognized VMware ESX guest and VMware ESX Server console, "Hyper-V" on Hyper-V host, else "NoVM". In conjunction with GBL_LS_ROLE this metric could be used to identify the environment in which Perf Agent/Glance is running. For example, if GBL_LS_ROLE is "Guest" and GBL_LS_TYPE is "VMware" then Performance Collection Component/Glance is running on a VMware Guest. ====== GBL_LS_SHARED In a virtual environment, this metric indicates whether the physical CPUs are dedicated to this Logical system or shared. On AIX SPLPAR, this metric is equivalent to "Type" field of 'lparstat -i' command. On a recognized VMware ESX guest, where VMware guest SDK is enabled, the value is "Shared". On a standalone system the value of this metric is "Dedicated". On AIX System WPARs, this metric is NA. ====== GBL_CPU_ENTL In a virtual environment, this metric indicates the physical processor units allocated to this Logical system. On AIX SPLPAR, this metric indicates the entitlement allocated by Hypervisor to a logical system at the time of starting. This metric is equivalent to "Entitled Capacity" field of 'lparstat -i' command. On a standalone system the value of this metric is same as GBL_NUM_CPU. ====== GBL_CPU_ENTL_MIN In a virtual environment, this metric indicates the minimum number of processing units configured for this Logical system. On AIX SPLPAR, this metric is equivalent to "Minimum Capacity" field of 'lparstat -i' command. On a recognized VMware ESX guest, where VMware guest SDK is enabled, the value is equivalent to GBL_CPU_CYCLE_ENTL_MIN represented in CPU units. On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is "na". On a standalone system the value is same as GBL_NUM_CPU. ====== GBL_CPU_ENTL_MAX In a virtual environment, this metric indicates the maximum number of processing units configured for this logical system. On AIX SPLPAR, this metric is equivalent to "Maximum Capacity" field of 'lparstat -i' command. On a recognized VMware ESX guest the value is equivalent to GBL_CPU_CYCLE_ENTL_MAX represented in CPU units. On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is "na". On a standalone system the value is same as GBL_NUM_CPU. ====== GBL_POOL_NUM_CPU The number of physical processors in the shared resource pool to which this logical system belongs. On AIX SPLPAR, this metric is equivalent to "Physical CPUs in system" field of 'lparstat -i' command. On a standalone system, the value is "na". On AIX System WPARs, this metric value is not available. ====== GBL_POOL_CPU_ENTL The number of physical processors available in the shared processor pool to which this logical system belongs. On AIX SPLPAR, this metric is equivalent to "Active Physical CPUs in system" field of 'lparstat -i' command. On a standalone system, the value is "na". On AIX System WPARs, this metric is NA. ====== GBL_CPU_ENTL_UTIL Percentage of entitled processing units (guaranteed processing units allocated to this logical system) consumed by the logical system. On an "Uncapped" logical system, this metric can exceed 100% if processing units are available in the shared resource pool and the number of virtual CPUs is sufficient. On a Capped logical system this metric can never go beyond 100%.
On AIX, this metric is calculated as: GBL_CPU_ENTL_UTIL = (GBL_CPU_PHYSC / GBL_CPU_ENTL) * 100 On a recognized VMware ESX guest, where VMware guest SDK is enabled, this metric is calculated as: GBL_CPU_ENTL_UTIL = (GBL_CPU_PHYSC / GBL_CPU_ENTL_MIN) * 100 On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is "na". On a standalone system, the value is same as GBL_CPU_TOTAL_UTIL. ====== GBL_CPU_PHYS_TOTAL_UTIL The percentage of time the available physical CPUs were not idle for this logical system during the interval. On AIX, this metric is calculated as: GBL_CPU_PHYS_TOTAL_UTIL = GBL_CPU_PHYS_USER_MODE_UTIL + GBL_CPU_PHYS_SYS_MODE_UTIL; GBL_CPU_PHYS_TOTAL_UTIL + GBL_CPU_PHYS_WAIT_UTIL + GBL_CPU_PHYS_IDLE_UTIL = 100% On Power5 based systems, traditional sample based calculations cannot be made because the dispatch cycles of the virtual CPUs are not the same. The Power5 processor therefore maintains a per-thread register, the PURR. The PURR of the thread that is dispatching instructions (or that last dispatched an instruction) is incremented at every processor clock cycle, so the cycles are distributed between the two hardware threads. The Power5 processor also maintains two more registers: the timebase, which is incremented at every tick, and the decrementer, which provides periodic interrupts. In a Shared LPAR environment, the PURR is equal to the time that a virtual processor has spent on a physical processor, and the Hypervisor maintains a virtual timebase which is the same as the sum of the two PURRs. On a Capped Shared logical system (partition), GBL_CPU_PHYS_USER_MODE_UTIL is calculated as: (delta PURR in user mode / entitlement) * 100 On an Uncapped Shared logical system (partition): (delta PURR in user mode / entitlement consumed) * 100 The other utilizations, such as GBL_CPU_PHYS_SYS_MODE_UTIL and GBL_CPU_PHYS_WAIT_UTIL, are calculated similarly. On a standalone system, the value will be equivalent to GBL_CPU_TOTAL_UTIL. On AIX System WPARs, this metric value is calculated against physical cpu time. ====== GBL_CPU_PHYS_USER_MODE_UTIL The percentage of time the physical CPU was in user mode for the logical system during the interval. On AIX LPAR, this value is equivalent to "%user" field reported by the "lparstat" command. On AIX System WPARs, this metric value is calculated against physical cpu time. ====== GBL_CPU_PHYS_SYS_MODE_UTIL The percentage of time the physical CPU was in system mode (kernel mode) for the logical system during the interval. On AIX LPAR, this value is equivalent to "%sys" field reported by the "lparstat" command. On AIX System WPARs, this metric value is calculated against physical cpu time. ====== GBL_CPU_PHYS_WAIT_UTIL The percentage of time during the interval that the physical CPU was waiting for the physical IOs to complete. On AIX LPAR, this value is equivalent to "%wait" field reported by the "lparstat" command. ====== GBL_CPU_PHYS_IDLE_UTIL The percentage of time that the physical CPU was idle during the interval. On AIX LPAR, this value is equivalent to "%idle" field reported by the "lparstat" command. ====== GBL_CPU_NUM_THREADS The number of active CPU threads supported by the CPU architecture. The Linux kernel currently doesn't provide any metadata information for disabled CPUs. This means that there is no way to find out types, speeds, as well as hardware IDs or any other information that is used to determine the number of cores, the number of threads, the HyperThreading state, etc...
If the agent (or Glance) is started while some of the CPUs are disabled, some of these metrics will be "na", some will be based on what is visible at startup time. All information will be updated if/when additional CPUs are enabled and information about them becomes available. The configuration counts will remain at the highest discovered level (i.e. if CPUs are then disabled, the maximum number of CPUs/cores/etc... will remain at the highest observed level). It is recommended that the agent be started with all CPUs enabled. On AIX System WPARs, this metric is NA. ====== GBL_POOL_ID In a virtual environment, this metric identifies the shared resource pool to which the logical system belongs. On AIX SPLPAR, this metric is equivalent to "Shared Pool ID" field of 'lparstat -i' command. On a standalone system, the value is "na". On AIX System WPARs, this metric is NA. ====== GBL_POOL_IDLE_TIME The total time, in seconds, that the pool CPU was idle during the interval. This metric will be "na" if pool_util_authority is not set in HMC. pool_util_authority indicates if pool utilization data is available or not. To set pool_util_authority, select the "Allow shared processor pool utilization authority" check box from HMC. ====== GBL_POOL_TOTAL_UTIL Percentage of time the pool CPU was not idle during the interval. This metric will be "na" if pool_util_authority is not set in HMC. pool_util_authority indicates if pool utilization data is available or not. To set pool_util_authority, select the "Allow shared processor pool utilization authority" check box from HMC. On AIX System WPARs, this metric is NA. ====== GBL_LS_ROLE Indicates whether Perf Agent is installed on a logical system, a host, or a standalone system. This metric will be either "GUEST", "HOST" or "STAND". ====== GBL_NUM_LS This indicates the number of LS hosted in a system. If Perf Agent is installed in a guest or in a standalone system this value will be 0. On Solaris non-global zones, this metric shows value as 0. ====== GBL_NUM_ACTIVE_LS This indicates the number of LS hosted in a system that are active. If Perf Agent is installed in a guest or in a standalone system this value will be 0. On Solaris non-global zones, this metric shows value as 0. ====== GBL_NUM_CPU_CORE This metric provides the total number of CPU cores on a physical system. On VMs, this metric shows information according to resources available on that VM. On non-HP-UX systems, this metric is equivalent to active CPU cores. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Windows, this metric will be "na" on Windows Server 2003 Itanium systems. The Linux kernel currently doesn't provide any metadata information for disabled CPUs. This means that there is no way to find out types, speeds, as well as hardware IDs or any other information that is used to determine the number of cores, the number of threads, the HyperThreading state, etc... If the agent (or Glance) is started while some of the CPUs are disabled, some of these metrics will be "na", some will be based on what is visible at startup time. All information will be updated if/when additional CPUs are enabled and information about them becomes available. The configuration counts will remain at the highest discovered level (i.e. if CPUs are then disabled, the maximum number of CPUs/cores/etc... will remain at the highest observed level). It is recommended that the agent be started with all CPUs enabled.
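For reference, the Linux CPU topology information discussed above is exposed through sysfs. The sketch below is only an assumption-based illustration (it presumes the standard /sys/devices/system/cpu layout and is not the agent's code); like the agent, it can only see CPUs that are currently enabled:

    import glob, os

    def online_cpu_count():
        # Logical CPUs that are currently online; offline CPUs report 0 in
        # their 'online' file.
        count = 0
        for cpu_dir in glob.glob("/sys/devices/system/cpu/cpu[0-9]*"):
            online_file = os.path.join(cpu_dir, "online")
            if not os.path.exists(online_file):
                count += 1       # cpu0 often has no 'online' file; it is always online
                continue
            with open(online_file) as f:
                if f.read().strip() == "1":
                    count += 1
        return count

    def online_core_count():
        # Distinct (physical package, core) pairs among the online logical CPUs.
        cores = set()
        for topo in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology"):
            try:
                with open(os.path.join(topo, "physical_package_id")) as f:
                    pkg = f.read().strip()
                with open(os.path.join(topo, "core_id")) as f:
                    core = f.read().strip()
                cores.add((pkg, core))
            except OSError:
                pass             # offline CPUs may not expose topology files
        return len(cores)

    print(online_cpu_count(), online_core_count())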
====== GBL_ACTIVE_CPU_CORE This metric provides the total number of active CPU cores on a physical system. ====== GBL_CPU_PHYSC The number of physical processors utilized by the logical system. On an Uncapped logical system (partition), this value will be equal to the physical processor capacity used by the logical system during the interval. This can be more than the value entitled for a logical system. On a standalone system the value is calculated based on GBL_CPU_TOTAL_UTIL. ====== GBL_LS_ID On AIX LPAR, this metric indicates the partition number and is equivalent to "Partition Number" field of 'lparstat -i' command. On a standalone system the value of this metric is 'na'. On AIX System WPARs, this metric is NA. ====== GBL_CPU_SHARES_PRIO The weightage/priority assigned to an Uncapped logical system. This value determines the minimum share of unutilized processing units that this logical system can utilize. On AIX SPLPAR this value is dependent on the available processing units in the pool and can range from 0 to 255. On a recognized VMware ESX guest, this value can range from 1 to 100000. On a standalone system the value will be "na". ====== GBL_VCSWITCH_RATE The average number of Virtual Context switches per second. On AIX System WPARs, this metric is NA. ====== GBL_CPU_MT_ENABLED On AIX, this metric indicates if this (Logical) System has SMT enabled or not. On other platforms, this metric shows whether HyperThreading (HT) is Enabled or Disabled/Not Supported. On Linux, this state is dynamic: if HyperThreading is enabled but all the CPUs have only one logical processor enabled, this metric will report that HT is disabled. On AIX System WPARs, this metric is NA. On Windows, this metric will be "na" on Windows Server 2003 Itanium systems. ====== GBL_HYP_UTIL The percentage of time spent in Hypervisor by this partition in this interval with respect to system mode utilization. ====== GBL_LS_UUID UUID of this logical system. This ID uniquely identifies this logical system in virtualized environments. On a standalone system the value of this metric is 'na'. ====== GBL_NUM_VSWITCH The number of virtual switches configured on the host system. ====== GBL_NUM_SOCKET The number of physical CPU sockets on the system. On VMs, this metric shows information according to resources available on that VM. On Windows, this metric will be "na" on Windows Server 2003 Itanium systems. ====== GBL_LS_NUM_DEDICATED Number of partitions which have dedicated processors. This metric is with respect to the partitions which are responding over network. On AIX System WPARs, this metric is NA. ====== GBL_LS_NUM_UNCAPPED Number of Uncapped shared partitions. This metric is with respect to the partitions which are responding over network. On AIX System WPARs, this metric is NA. ====== GBL_LS_NUM_CAPPED Number of Capped shared partitions. This metric is with respect to the partitions which are responding over network. On AIX System WPARs, this metric is NA. ====== GBL_LS_CPU_NUM_SHARED Number of processor units in shared partitions. This metric is with respect to the partitions which are responding over network. On AIX System WPARs, this metric is NA. ====== GBL_LS_CPU_NUM_DEDICATED Number of processor units in dedicated partitions. This metric is with respect to the partitions which are responding over network. On AIX System WPARs, this metric is NA. ====== GBL_LS_CPU_SHARED_CONSUMED The sum of processor units consumed of all the shared partitions. This metric is with respect to the partitions which are responding over network.
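As a worked example of the GBL_CPU_ENTL_UTIL formula given earlier, using made-up values rather than output from any real partition:

    # Hypothetical SPLPAR: 2.0 entitled processing units, uncapped.
    entl = 2.0          # GBL_CPU_ENTL (entitled processing units)
    physc = 2.6         # GBL_CPU_PHYSC (physical processors consumed this interval)

    entl_util = (physc / entl) * 100.0   # GBL_CPU_ENTL_UTIL
    print(entl_util)    # 130.0 -- can exceed 100% only on an Uncapped partition
                        # with spare capacity in the shared processor pool;
                        # on a Capped partition physc would not exceed entl.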
====== GBL_LS_CPU_DEDICATED_CONSUMED The sum of processors consumed of all the dedicated partitions. This metric is with respect to the partitions which are responding over network. ====== GBL_LS_NUM_SHARED Number of partitions which share the processors. This metric is with respect to the partitions which are responding over network. On AIX System WPARs, this metric is NA. ====== GBL_WEB_FILES_RECEIVED The number of files received by the HTTP or FTP servers during the interval. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_FILES_RECEIVED_RATE The rate of files/sec received by the HTTP or FTP servers during the interval. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_FILES_SENT The number of files sent by the HTTP, FTP or gopher servers during the interval. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_FILES_SENT_RATE The rate of files/sec sent by the HTTP, FTP or gopher servers during the interval. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_ANONYMOUS_USERS The number of anonymous users currently connected to the HTTP, FTP or gopher servers. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_NONANONYMOUS_USERS The number of non-anonymous users currently connected to the HTTP, FTP or gopher servers. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. 
====== GBL_WEB_LOGON_ATTEMPTS The number of logon attempts that have been made by the HTTP, FTP or gopher servers during the interval. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_LOGON_FAILURES The number of logon failures that have been made by the HTTP, FTP or gopher servers during the interval. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_CACHE_HIT_PCT The ratio of cache hits to all cache requests during the interval. Cache hits occur when a file open, directory listing or service specific object request is found in the cache. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_ALLOWED_ASYNC_IO The number of asynchronous IO requests allowed by the bandwidth throttler. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_BLOCKED_ASYNC_IO The number of asynchronous IO requests blocked by the bandwidth throttler. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_CONNECTION_RATE The sum of the number of simultaneous connections to the HTTP, FTP or gopher servers during the interval. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_MAX_CONNECTIONS The sum of the maximum number of simultaneous connections to the HTTP, FTP or gopher servers. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. 
The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_CGI_REQUEST_RATE The number of CGI requests being processed per second. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_ISAPI_REQUEST_RATE The number of ISAPI requests being processed per second. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_GET_REQUEST_RATE The number of GET requests being processed per second. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_HEAD_REQUEST_RATE The number of HEAD requests being processed per second. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_POST_REQUEST_RATE The number of POST requests being processed per second. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_OTHER_REQUEST_RATE The number of OTHER requests being processed per second. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_NOT_FOUND_ERRORS Number of requests that could not be satisfied by service because requested documents could not be found; typically reported as HTTP 404 error code to client. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. 
The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_HTTP_READ_BYTE_RATE The byte rate in KBs per second that data bytes are received by HTTP servers during the interval. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_HTTP_WRITE_BYTE_RATE The byte rate in KBs per second that data bytes are sent by HTTP servers during the interval. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_FTP_READ_BYTE_RATE The byte rate in KBs per second that data bytes are received by FTP servers during the interval. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_FTP_WRITE_BYTE_RATE The byte rate in KBs per second that data bytes are sent by FTP servers during the interval. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_GOPHER_READ_BYTE_RATE The byte rate in KBs per second that data bytes are received by gopher servers during the interval. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_GOPHER_WRITE_BYTE_RATE The byte rate in KBs per second that data bytes are sent by gopher servers during the interval. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. 
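The GBL_WEB_ byte-rate metrics report throughput in KBs per second. As an illustration only (the collector's internal derivation may differ), such a value can be computed from the bytes transferred during the interval, as in the following Python sketch with hypothetical numbers:

interval_seconds = 300.0                       # hypothetical collection interval
bytes_received = 15360000                      # hypothetical bytes received by the HTTP server
# Assumed relationship, for illustration only: KB/s = bytes / 1024 / interval length
gbl_web_http_read_byte_rate = bytes_received / 1024.0 / interval_seconds
print(gbl_web_http_read_byte_rate)             # 50.0 KB/sec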
====== GBL_WEB_READ_BYTE_RATE The byte rate in KBs per second that data bytes are received by the HTTP, FTP or gopher servers during the interval. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_WEB_WRITE_BYTE_RATE The byte rate in KBs per second that data bytes are sent by the HTTP, FTP or gopher servers during the interval. This metric is available only for Internet Information Server (IIS) 3.0 because IIS 3.0 uses the HTTP object. The GBL_WEB_ metrics are not available for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object. There is a sample Extended Collection Builder policy that uses selected metrics from the Web Service object. This policy is provided with the MeasureWare Agent product. ====== GBL_JAVAARG This boolean value indicates whether the java class overloading mechanism is enabled. This metric will be set when the javaarg flag in the parm file is set. The metric affected by this setting is PROC_PROC_ARGV1. This setting is useful for constructing parm file java application definitions using the argv1= keyword. ====== GBL_IGNORE_MT This boolean value indicates whether the CPU normalization is on or off. If the metric value is "true", CPU related metrics in the global class will report values which are normalized against the number of active cores on the system. If the metric value is "false", CPU related metrics in the global class will report values which are normalized against the number of CPU threads on the system. If CPU MultiThreading is turned off, this configuration option is a no-op and the metric value will be "true". On Linux, this metric will only report "true" if this configuration is on and if the kernel provides enough information to determine whether MultiThreading is turned on. On HP-UX, this metric will report "na" if the processor doesn't support the feature. ====== GBL_THRESHOLD_PROCCPU The process CPU threshold specified in the parm file. ====== GBL_THRESHOLD_PROCMEM The process memory threshold specified in the parm file. ====== GBL_THRESHOLD_PROCDISK The process disk threshold specified in the parm file. ====== GBL_THRESHOLD_PROCIO The process IO threshold specified in the parm file. ====== GBL_NUM_VIRTUAL_TARGETS The number of virtual target devices served by the VIO server. This metric is only valid on AIX VIO servers. ====== GBL_DISK_PATH_COUNT The number of paths available to the disks on the system. This metric is only valid on AIX VIO servers. ====== GBL_MEM_ARC On Solaris, this value indicates the amount of Adaptive Replacement Cache (ARC) used by ZFS. On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. 
The "nbuf" value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. ====== GBL_MEM_ARC_UTIL The percentage of physical memory used by ZFS ARC during the interval. On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The "nbuf" value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. ====== GBL_MEM_ONLINE In a virtual environment, this metric indicates the amount of memory currently online for this logical system. For AIX wpars, this metric will be "na". ====== GBL_MAC_ADDR_LIST This metric indicates the MAC address(es) configured for the system. If there is more than one MAC address, values are seperated by ','. ====== TBL_DNLC_CACHE_AVAIL The configured number of entries in the incore directory name cache. On HP-UX, the directory name lookup cache is used to minimize sequential searches through directory entries for pathname components during pathname to inode translations. Such translations are done whenever a file is accessed through its filename. The cache holds the inode cache table offset for recently referenced pathname components. Pathname components that exceed 15 characters are not cached. Any HP-UX system call that includes a path parameter can result in directory name lookup cache activity, including but not limited to system calls such as open, stat, exec, lstat, unlink. Each component of a path parameter is parsed and converted to an inode separately, therefore several dnlc hits per path are possible. High directory name cache hit rates on HP-UX will be seen on systems where pathname component requests are frequently repeated. For example, when users or applications work in the same directory where they repeatedly list or open the same files, cache hit rates will be high. Unusually low cache hit rates might be seen on HP-UX systems where users or applications access many different directories in no particular pattern. Low cache hit rates can also be an indicator of an underconfigured inode cache. When an inode cache is too small, the kernel will more frequently have to flush older inode cache and their corresponding directory name cache entries in order to make room for new inode cache entries. On HP-UX, the directory name lookup cache is static in size and is allocated in kernel memory. As a result, it is not affected by user memory constraints. The size of the cache is stored in the kernel variable "ncsize" and is not directly tunable by the system administrator; however, it can be changed indirectly by tuning other tables used in the formula to compute the "ncsize". The formula is: ncsize = MAX(((nproc+16+maxusers)+ 32+(2 npty)),ninode) Note that ncsize is always >= ninode which is the default size of the inode cache. 
This is because the directory name cache contains inode table offsets for each cached pathname component. ====== TBL_PROC_TABLE_AVAIL The configured maximum number of the proc table entries used by the kernel to manage processes. This number includes both free and used entries. On HP-UX, this is set by the NPROC value during system generation. AIX has a "dynamic" proc table, which means that AVAIL has been set higher than should ever be needed. On AIX System WPARs, this metric is NA. ====== TBL_PROC_TABLE_USED The number of entries in the proc table currently used by processes. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_PROC_TABLE_UTIL The percentage of proc table entries currently used by processes. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. ====== TBL_CLK_TICK_LENGTH This metric returns the configured number of 10ms time slices. If the number is "n" then: n=0 => time slice length 0ms - 10ms n=1 => time slice length 10ms - 20ms n=2 => time slice length 20ms - 30ms ====== TBL_MAX_USERS The value of the system configurable parameter "maxusers". This value signifies the approximate number of users on a system. Note, changing this value can significantly affect the performance of a system because memory allocation calculations are based on it. This value can be set in the /etc/system file. On Solaris non-global zones, this metric is N/A. ====== TBL_NUM_NFSDS The number of NFS servers configured. This is the value "nservers" passed to nfsd (the NFS daemon) upon startup. If no value is specified, the default is one. This value determines the maximum number of concurrent NFS requests that the server can handle. See man page for "nfsd". ====== TBL_FILE_TABLE_AVAIL The number of entries in the file table. On HP-UX and AIX, this is the configured maximum number of the file table entries used by the kernel to manage open file descriptors. On HP-UX, this is the sum of the "nfile" and "file_pad" values used in kernel generation. On SUN, this is the number of entries in the file cache. This is a size. All entries are not always in use. The cache size is dynamic. Entries in this cache are used to manage open file descriptors. They are reused as files are closed and new ones are opened. The size of the cache will go up or down in chunks as more or less space is required in the cache. On AIX, the file table entries are dynamically allocated by the kernel if there is no entry available. These entries are allocated in chunks. ====== TBL_FILE_TABLE_USED The number of entries in the file table currently used by file descriptors. On SUN, this is the number of file cache entries currently used by file descriptors. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_FILE_TABLE_UTIL The percentage of file table entries currently used by file descriptors. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_BUFFER_HEADER_AVAIL This is the maximum number of headers pointing to buffers in the file system buffer cache. On HP-UX, this is the configured number, not the maximum number. This can be set by the "nbuf" kernel configuration parameter. nbuf is used to determine the maximum total number of buffers on the system. 
On HP-UX, these are used to manage the buffer cache, which is used for all block IO operations. When nbuf is zero, this value depends on the "bufpages" size of memory (see System Administration Tasks manual). A value of "na" indicates either a dynamic buffer cache configuration, or the nbuf kernel parameter has been left unconfigured and allowed to "float" with the bufpages parameter. This is not a maximum available value in a fixed buffer cache configuration. Instead, it is the initial configured value. The actual number of used buffer headers can grow beyond this initial value. On SUN, this value is "nbuf". On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The "nbuf" value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. ====== TBL_BUFFER_HEADER_USED The number of buffer headers currently in use. On HP-UX, this dynamic value will rarely change once the system boots. During the system bootup, the kernel allocates a large number of buffer headers and the count is likely to stay at that value after the bootup completes. If the value increases beyond the initial boot value, it will not decrease. Buffer headers are allocated in kernel memory, not user memory, and therefore, will not decrease. This value can exceed the available or configured number of buffer headers in a fixed buffer cache configuration. On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The "nbuf" value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_BUFFER_HEADER_UTIL The percentage of buffer headers currently used. On HP-UX, a value of "na" indicates either a dynamic buffer cache configuration, or the nbuf kernel parameter has been left unconfigured and allowed to "float" with the bufpages parameter. On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. 
The "nbuf" value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_BUFFER_CACHE_HWM The value of the system configurable parameter "bufhwm". This is the maximum amount of memory that can be allocated to the buffer cache. Unless otherwise set in the /etc/system file, the default is 2 percent of system memory. ====== TBL_SHMEM_TABLE_AVAIL The configured number of shared memory segments that can be allocated on the system. On SUN, the InterProcess Communication facilities are dynamically loadable. If the amount available is zero, this facility was not loaded when data collection began, and its data is not obtainable. The data collector is unable to determine that a facility has been loaded once data collection has started. If you know a new facility has been loaded, restart the data collection, and the data for that facility will be collected. See ipcs(1) to report on interprocess communication resources. ====== TBL_SHMEM_TABLE_USED On HP-UX, this is the number of shared memory segments currently in use. On all other Unix systems, this is the number of shared memory segments that have been built. This includes shared memory segments with no processes attached to them. A shared memory segment is allocated by a program using the shmget(2) call. Also refer to ipcs(1). On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_SHMEM_TABLE_UTIL The percentage of configured shared memory segments currently in use. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_MSG_TABLE_AVAIL The configured maximum number of message queues that can be allocated on the system. A message queue is allocated by a program using the msgget(2) call. Refer to the ipcs(1) man page for more information. On SUN, the InterProcess Communication facilities are dynamically loadable. If the amount available is zero, this facility was not loaded when data collection began, and its data is not obtainable. The data collector is unable to determine that a facility has been loaded once data collection has started. If you know a new facility has been loaded, restart the data collection, and the data for that facility will be collected. See ipcs(1) to report on interprocess communication resources. ====== TBL_MSG_TABLE_USED On HP-UX, this is the number of message queues currently in use. On all other Unix systems, this is the number of message queues that have been built. A message queue is allocated by a program using the msgget(2) call. See ipcs(1) to list the message queues. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_MSG_TABLE_UTIL The percentage of configured message queues currently in use. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_SEM_TABLE_AVAIL The configured number of semaphore identifiers (sets) that can be allocated on the system. On SUN, the InterProcess Communication facilities are dynamically loadable. If the amount available is zero, this facility was not loaded when data collection began, and its data is not obtainable. 
The data collector is unable to determine that a facility has been loaded once data collection has started. If you know a new facility has been loaded, restart the data collection, and the data for that facility will be collected. See ipcs(1) to report on interprocess communication resources. ====== TBL_SEM_TABLE_USED On HP-UX, this is the number of semaphore identifiers currently in use. On all other Unix systems, this is the number of semaphore identifiers that have been built. A semaphore identifier is allocated by a program using the semget(2) call. See ipcs(1) to list semaphores. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_SEM_TABLE_UTIL The percentage of configured semaphore identifiers currently in use. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_FILE_LOCK_AVAIL The configured number of file or record locks that can be allocated on the system. Files and/or records are locked by calls to lockf(2). On Linux kernel versions 2.4 and above, the number of available file or record locks is a dynamic value which can grow up to the maximum unsigned long value. ====== TBL_FILE_LOCK_USED The number of file or record locks currently in use. One file can have multiple locks. Files and/or records are locked by calls to lockf(2). On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. ====== TBL_FILE_LOCK_UTIL The percentage of configured file or record locks currently in use. On Linux 2.4 and above kernel versions, this may not give a correct picture because the number of available file or record locks may change dynamically and can grow up to the maximum unsigned long value. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_PTY_AVAIL The configured number of entries used by the pseudo-teletype driver on the system. This limits the number of pty logins possible. For HP-UX, both telnet and rlogin use streams devices. Note: On Solaris 8, by default, the number of ptys is unlimited but restricted by the size of RAM. If the number of ptys is unlimited, this metric is reported as "na". ====== TBL_PTY_USED The number of pseudo-teletype driver (pty) entries currently in use. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_PTY_UTIL The percentage of configured pseudo-teletype driver (pty) entries currently in use. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_INODE_CACHE_AVAIL On HP-UX, this is the configured total number of entries for the incore inode tables on the system. For HP-UX releases prior to 11.2x, this value reflects only the HFS inode table. For subsequent HP-UX releases, this value is the sum of inode tables for both HFS and VxFS file systems (ninode plus vxfs_ninode). On HP-UX, file system directory activity is done through inodes that are stored on disk. The kernel keeps a memory cache of active and recently accessed inodes to reduce disk IOs. When a file is opened through a pathname, the kernel converts the pathname to an inode number and attempts to obtain the inode information from the cache based on the filesystem type. If the inode entry is not in the cache, the inode is read from disk into the inode cache. On HP-UX, the number of used entries in the inode caches is usually at or near the capacity. 
This does not necessarily indicate that the configured sizes are too small because the tables may contain recently used inodes and inodes referenced by entries in the directory name lookup cache. When a new inode cache entry is required and a free entry does not exist, inactive entries referenced by the directory name cache are used. If freeing inode entries referenced only by the directory name cache does not create enough free space, the message "inode: table is full" may appear on the console. If this occurs, increase the size of the kernel parameter, ninode. Low directory name cache hit ratios may also indicate an underconfigured inode cache. On HP-UX, the default formula for the ninode size is: ninode = ((nproc+16+maxusers)+32+(2*npty)+(4*num_clients)) On all other Unix systems, this is the number of entries in the inode cache. This is a size. All entries are not always in use. The cache size is dynamic. Entries in this cache are reused as files are closed and new ones are opened. The size of the cache will go up or down in chunks as more or less space is required in the cache. Inodes are used to store information about files within the file system. Every file has at least two inodes associated with it (one for the directory and one for the file itself). The information stored in an inode includes the owners, timestamps, size, and an array of indices used to translate logical block numbers to physical sector numbers. There is a separate inode maintained for every view of a file, so if two processes have the same file open, they both use the same directory inode, but separate inodes for the file. ====== TBL_INODE_CACHE_USED The number of inode cache entries currently in use. On HP-UX, this is the number of "non-free" inodes currently used. Since the inode table contains recently closed inodes as well as open inodes, the table often appears to be fully utilized. When a new entry is needed, one can usually be found by reusing one of the recently closed inode entries. On HP-UX, file system directory activity is done through inodes that are stored on disk. The kernel keeps a memory cache of active and recently accessed inodes to reduce disk IOs. When a file is opened through a pathname, the kernel converts the pathname to an inode number and attempts to obtain the inode information from the cache based on the filesystem type. If the inode entry is not in the cache, the inode is read from disk into the inode cache. On HP-UX, the number of used entries in the inode caches is usually at or near the capacity. This does not necessarily indicate that the configured sizes are too small because the tables may contain recently used inodes and inodes referenced by entries in the directory name lookup cache. When a new inode cache entry is required and a free entry does not exist, inactive entries referenced by the directory name cache are used. If freeing inode entries referenced only by the directory name cache does not create enough free space, the message "inode: table is full" may appear on the console. If this occurs, increase the size of the kernel parameter, ninode. Low directory name cache hit ratios may also indicate an underconfigured inode cache. On HP-UX, the default formula for the ninode size is: ninode = ((nproc+16+maxusers)+32+(2*npty)+(4*num_clients)) On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. 
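As a worked example of the default HP-UX sizing formulas quoted above (the kernel parameter values below are purely hypothetical, chosen only to make the arithmetic easy to follow), the following Python sketch evaluates ninode and the related ncsize expression from the TBL_DNLC_CACHE_AVAIL entry:

# Hypothetical kernel parameter values, for illustration only
nproc, maxusers, npty, num_clients = 1024, 64, 60, 0
ninode = (nproc + 16 + maxusers) + 32 + (2 * npty) + (4 * num_clients)   # 1256
ncsize = max((nproc + 16 + maxusers) + 32 + (2 * npty), ninode)          # 1256, always >= ninode
print(ninode, ncsize)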
====== TBL_BUFFER_CACHE_AVAIL The size (in KBs unless otherwise specified) of the file system buffer cache on the system. On HP-UX 11i v2 and below, these buffers are used for all file system IO operations, as well as all other block IO operations in the system (exec, mount, inode reading, and some device drivers). If dynamic buffer cache is enabled, the system allocates a percentage of available memory not less than dbc_min_pct nor more than dbc_max_pct, depending on the system needs at any given time. On systems with a static buffer cache, this value will remain equal to bufpages, or not less than dbc_min_pct nor more than dbc_max_pct. On HP-UX 11i v3 and above, the limits of the file system buffer cache, which is still used for file system metadata, are automatically set to certain percentages of filecache_min and filecache_max. On SUN, this value is obtained by multiplying the system page size times the number of buffer headers (nbuf). For example, on a SPARCstation 10 the buffer size is usually 200 (page-size buffers) * 4096 (bytes/page) = 800 KB. NOTE: (For SUN systems with VERITAS File System installed) Veritas implemented their Direct I/O feature in their file system to provide a mechanism for bypassing the Unix system buffer cache while retaining the on-disk structure of a file system. The way in which Direct I/O works involves the way the system buffer cache is handled by the Unix OS. Once the VERITAS file system returns with the requested block, instead of copying the content to a system buffer page, it copies the block into the application's buffer space. That is why, if you have installed vxfs on your system, the TBL_BUFFER_CACHE_AVAIL can exceed the TBL_BUFFER_CACHE_HWM metric. On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The "nbuf" value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On AIX, this cache is used for all block IO. On AIX System WPARs, this metric is NA. ====== TBL_BUFFER_CACHE_USED The size (in KBs unless otherwise specified) of the sum of the currently used buffers. On HP-UX 11i v2 and below, this is normally greater than the amount requested due to internal fragmentation of the buffer cache. Since this is a cache, it is normal for it to be filled. The buffer cache is used to stage all block IOs to disk. On a dynamic buffer cache configuration, this metric is always equal to TBL_BUFFER_CACHE_AVAIL. With dynamic buffer cache, the system allocates a percentage of available memory not less than dbc_min_pct nor more than dbc_max_pct, depending on the system needs at any given time. On systems with a static buffer cache, this value will remain equal to bufpages, or not less than dbc_min_pct nor more than dbc_max_pct. With a static buffer cache, this metric shows the amount of memory within the configured size that is actually used. 
On HP-UX 11i v3 and above this metric value represents the usage of the file system buffer cache which is still being used for file system metadata. On AIX, this is normally greater than the amount requested due to internal fragmentation of the buffer cache. Since this is a cache, it is normal for it to be filled. The buffer cache is used to stage all block IOs to disk. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_BUFFER_CACHE_MIN On HP-UX 11i v2 and below, this metric represents the minimum size (in KBs unless otherwise specified) of the buffer cache. This corresponds to the kernel configuration parameter "dbc_min_pct". On systems with a dynamic buffer cache, the cache does not shrink below this limit. On systems with a fixed buffer cache, the cache size is equal to the value reported, which is based on the dbc_min_pct or bufpages settings. On HP-UX 11i v3 and above, this metric represents the minimum size (in KBs unless otherwise specified) of the file cache. This corresponds to the kernel configuration parameter "filecache_min". ====== TBL_SHMEM_AVAIL The maximum achievable size (in MB unless otherwise specified) of the shared memory pool on the system. This is a theoretical maximum determined by multiplying the configured maximum number of shared memory entries (shmmni) by the maximum size of each shared memory segment (shmmax). Your system may not have enough virtual memory to actually reach this theoretical limit - one cannot allocate more shared memory than the available reserved space configured for virtual memory. It should be noted that this value does not include any architectural limitations. (For example, on a 32-bit kernel, there is an addressing limit of 1.75 GB.). If the value adds up to a value > 2048TB, "o/f" may be reported on some platforms. On SUN, the InterProcess Communication facilities are dynamically loadable. If the amount available is zero, this facility was not loaded when data collection began, and its data is not obtainable. The data collector is unable to determine that a facility has been loaded once data collection has started. If you know a new facility has been loaded, restart the data collection, and the data for that facility will be collected. See ipcs(1) to report on interprocess communication resources. ====== TBL_SHMEM_REQUESTED The size (in KBs unless otherwise specified) of the sum of the currently requested shared memory segments. This may be more than shared memory used if any segments are swapped out. It also may be less than shared memory used due to internal fragmentation of the shared memory pool. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_BUFFER_CACHE_MAX On HP-UX 11i v2 and below, this metric represents the maximum size (in KBs unless otherwise specified) of the buffer cache. This corresponds to the kernel configuration parameter "dbc_max_pct". On systems with a dynamic buffer cache, the cache does not exceed this limit. On systems with a fixed buffer cache, the cache size is equal to the value reported, which is based on the dbc_max_pct or bufpages settings. On HP-UX 11i v3 and above, this metric represents the maximum size (in KBs unless otherwise specified) of the file cache. This corresponds to the kernel configuration parameter "filecache_max". ====== TBL_SHMEM_USED The size (in KBs unless otherwise specified) of the shared memory segments. 
Additionally, it includes memory segments to which no processes are attached. If a shared memory segment has zero attachments, the space may not always be allocated in memory. See ipcs(1) to list shared memory segments. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_MSG_BUFFER_AVAIL The maximum achievable size (in KBs unless otherwise specified) of the message queue buffer pool on the system. Each message queue can contain many buffers which are created whenever a program issues a msgsnd(2) call. Each of these buffers is allocated from this buffer pool. Refer to the ipcs(1) man page for more information. This value is determined by taking the product of the three kernel configuration variables "msgseg", "msgssz" and "msgmni". If the value adds up to a value > 2048GB, "o/f" may be reported on some platforms. On SUN, the InterProcess Communication facilities are dynamically loadable. If the amount available is zero, this facility was not loaded when data collection began, and its data is not obtainable. The data collector is unable to determine that a facility has been loaded once data collection has started. If you know a new facility has been loaded, restart the data collection, and the data for that facility will be collected. See ipcs(1) to report on interprocess communication resources. ====== TBL_MSG_BUFFER_USED The current total size (in KBs unless otherwise specified) of all IPC message buffers. These buffers are created by msgsnd(2) calls and released by msgrcv(2) calls. On HP-UX and OSF1, this field corresponds to the CBYTES field of the "ipcs -qo" command. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_SHMEM_ACTIVE The size (in KBs unless otherwise specified) of the shared memory segments that have running processes attached to them. This may be less than the amount of shared memory used on the system because a shared memory segment may exist and not have any process attached to it. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_SHMEM_TABLE_ACTIVE The number of shared memory segments that have running processes attached to them. This may be less than the number of shared memory segments that have been allocated. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_MSG_BUFFER_ACTIVE The current active total size (in KBs unless otherwise specified) of all IPC message buffers. These buffers are created by msgsnd(2) calls and released by msgrcv(2) calls. This metric only counts the active message queue buffers, which means that a msgsnd(2) call has been made and the msgrcv(2) has not yet been done on the queue entry or a msgrcv(2) call is waiting on a message queue entry. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_MSG_TABLE_ACTIVE The number of message queues currently active. A message queue is allocated by a program using the msgget(2) call. This metric returns only the entries in the message queue currently active. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_SEM_TABLE_ACTIVE The number of semaphore identifiers currently active. This means that the semaphores are currently locked by processes. Any new process requesting this semaphore is blocked if IPC_NOWAIT flag is not set. 
On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. ====== TBL_DISKIO_HIST_ENABLE This metric returns a string set to "Enable" or "Disable" depending on the system parameter value for Disk IO history enabled or disabled. ====== TBL_LEAST_PRIV_ENABLE This metric returns a string set to "Enable" or "Disable" depending on the system parameter value for least privilege enabled or disabled. ====== TBL_AUTO_START_ENABLE This metric returns a string set to "Enable" or "Disable" depending on the system parameter value for Automatic Start after halt enabled or disabled. ====== TBL_MEM_SCRUB_ENABLE This metric returns a string set to "Enable" or "Disable" depending on the system parameter value for Memory Scrubbing enabled or disabled. ====== TBL_THREAD_TABLE_AVAIL This metric returns the System parameter for the maximum number of system threads configured on the system. ====== TBL_CBLOCK_TABLE_AVAIL This metric returns the System configuration parameter for the maximum number of cblocks in the cblock array. ====== TBL_PROC_MEM_THRESHOLD This metric returns the configured process memory overcommitment threshold. This is the "p" parameter for the Memory Control algorithm. This parameter determines whether a process is eligible for suspension. This is used to set a threshold for the ratio of two measures that are maintained for every process. A process is suspended when memory is overcommitted and the following criterion is met: (r * p) > f, where r = the number of repages that the process has accumulated in the last second, p = the "p" parameter (this metric), and f = the number of page faults that the process has accumulated in the last second. The term "repage" is defined as the number of pages belonging to the process which were written to paging space or file space and are soon after referenced again by the process. The default value of "p" is 4. ====== TBL_WAIT_REACTIVATE_PROCESS This metric returns the configured Wait Time in seconds after thrashing ends before adding suspended processes back into the mix. The "w" parameter controls the number of one-second intervals during which the number of pages written in page space divided by the number of pages stolen must remain below 1/h before suspended processes are reactivated. The default value of one second is close to the minimum value allowed, zero. A value of one second aggressively attempts to reactivate processes as soon as a one-second safe period has occurred. The larger the value of "w", the longer the system must "behave itself" before suspended processes are reactivated. Large values of "w" run the risk of unnecessarily poor response times for suspended processes, while allowing the system to "starve" for lack of active processes to run. ====== TBL_MIN_MULTI_PROGRAM This metric returns the configured minimum degree of multi-programming for the system. This is the "m" parameter for the Memory Control algorithm. The "m" parameter determines a lower limit for the degree of multiprogramming. The degree of multiprogramming is defined as the number of active (not suspended) processes. Excluded from the count are the kernel process and processes that (1) have fixed priorities less than 60, (2) have pinned memory, or (3) are awaiting events. The default value of 2 ensures that at least two user processes are always able to be active. High values of "m" effectively defeat the ability of Memory Load Control to suspend processes. 
====== TBL_ELAPSED_RESUSPEN_PROCESS This metric returns the configured Elapsed Time, in seconds, for resuspension of a process that has recently resumed after suspension. This is the "e" parameter of the Memory Load Control algorithm. The "e" parameter represents a time value. Each time a suspended process is reactivated, it is guaranteed to be exempt from suspension for a period of "e" elapsed seconds. This is to ensure that the high cost (in disk IO) of paging in a suspended process's pages results in a reasonable opportunity for progress. The default value of "e" is 2 seconds. ====== TBL_FORK_RETRY_CLK_TICK This metric returns the configured number of clock ticks to delay before retrying a failed fork call. The system retries up to five times by default. This parameter controls the pacing of process creation. If not enough paging space is available to fulfill the current fork request, the system delays the number of clock ticks specified by this parameter before retrying. ====== GBL_LOST_MI_FTRACE_BUFFERS The number of ftrace buffers lost by the measurement processing daemon. ====== HBA ====== BYHBA_TIME The time of day of the interval. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_INTERVAL The amount of time in the interval. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_ID The instance number of the Host Bus Adaptor. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_NAME The name of the Host Bus Adaptor. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_DEVNAME The hardware path of the Host Bus Adaptor. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_DEVNO Major / Minor number of the device. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_CLASS The class of the Host Bus Adaptor. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_DRIVER Name of driver handling the Host Bus Adaptor. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_STATE The state of the Host Bus Adaptor ("Active"/"Closed"). This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_UTIL Percentage of time the HBA was busy servicing IO requests in this interval. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_THROUGHPUT_UTIL Percentage of IO bandwidth utilized by the Host Bus Adaptor. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_IO Number of IO requests handled by the HBA in this interval. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_READ The number of reads for this IO card during the interval. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_WRITE The number of writes for this IO card during the interval. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_IO_RATE The average number of IO requests per second for this device during the interval. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_READ_RATE The average number of reads per second for this IO card during the interval. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_WRITE_RATE The average number of writes per second for this IO card during the interval. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_BYTE_RATE The average KBs per second transferred to or from this card during the interval. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_READ_BYTE_RATE The average KBs per second read from this card during the interval. This metric is supported on HP-UX 11iv3 and above. 
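As an illustration of how the interval-based HBA metrics above relate to one another (this is an assumption made for clarity, not a statement of the collector's internal derivation), the Python sketch below uses hypothetical values:

byhba_interval = 60.0      # hypothetical interval length in seconds
busy_seconds = 15.0        # hypothetical time the HBA had IO in progress (see BYHBA_BUSY_TIME below)
byhba_io = 12000           # hypothetical number of IO requests handled in the interval
byhba_util = 100.0 * busy_seconds / byhba_interval   # 25.0 percent busy, analogous to BYHBA_UTIL
byhba_io_rate = byhba_io / byhba_interval            # 200.0 IO requests/sec, analogous to BYHBA_IO_RATE
print(byhba_util, byhba_io_rate)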
====== BYHBA_WRITE_BYTE_RATE The average KBs per second written to this card during the interval. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_REQUEST_QUEUE The average number of IO requests that were in the wait queue for this device during the interval. These requests are the physical requests (as opposed to logical IO requests). This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_BUSY_TIME This is the time, in seconds, during the interval that the device had IO in progress from the point of view of the Operating System. In other words, the time, in seconds, the device was busy servicing requests. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_AVG_WAIT_TIME This is the time, in milliseconds, that a request had to wait in the device queue before getting processed. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_AVG_SERVICE_TIME This is the average time, in milliseconds, this device took to service one request. This metric is supported on HP-UX 11iv3 and above. ====== BYHBA_TYPE The type of device. "HBA" for HBA card and "TAPE" for tape drives. This metric is supported on HP-UX 11iv3 and above. ====== Logical ====== BYLS_LS_ID A unique identifier of the logical system. On HPVM, this metric is a numeric id and is equivalent to "VM # " field of 'hpvmstatus' command. On AIX LPAR, this metric indicates the partition number and is equivalent to "Partition Number" field of 'lparstat -i' command. For AIX WPARs, this metric represents the partition number and is equivalent to the output of "uname -W" run inside the WPAR. On Solaris Zones, this metric indicates the zone id and is equivalent to 'ID' field of 'zoneadm list -vc' command. On Hyper-V host, this metric indicates the PID of the process corresponding to this logical system. For Root partition, this metric is NA. On vMA, this metric is a unique identifier for a host, resource pool and a logical system. The value of this metric may change for an instance across collection intervals. ====== BYLS_LS_NAME This is the name of the computer. On HPVM, this metric indicates the Virtual Machine name of the logical system and is equivalent to the "Virtual Machine Name" field of 'hpvmstatus' command. On AIX the value is as returned by the command "uname -n" (that is, the string returned from the "hostname" program). On vMA, this metric is a unique identifier for host, resource pool and a logical system. The value of this metric remains the same, for an instance, across collection intervals. On Solaris Zones, this metric indicates the zone name and is equivalent to 'NAME' field of 'zoneadm list -vc' command. On Hyper-V host, this metric indicates the name of the XML file which has configuration information of the logical system. This file will be present under the logical system's installation directory indicated by BYLS_LS_PATH. For Root partition, the value is always "Root". ====== BYLS_LS_STATE The state of this logical system. On HPVM, the logical systems can have one of the following states: Unknown Other invalid Up Down Boot Crash Shutdown Hung On vMA, this metric can have one of the following states for a host: on off unknown The values for a logical system can be one of the following: on off suspended unknown The value is NA for resource pool. On Solaris Zones, the logical systems can have one of the following states: configured incomplete installed ready running shutting down mounted On AIX lpars, the logical system will always be active. 
On AIX wpars, the logical systems can have one of the following states: Broken Transitional Defined Active Loaded Paused Frozen Error A logical system on a Hyper-V host can have the following states: unknown enabled disabled paused suspended starting snapshtng migrating saving stopping deleted pausing resuming ====== BYLS_LS_OSTYPE The Guest OS this logical system is hosting. On HPVM, the metric can have the following values: HP-UX Linux Windows OpenVMS Other Unknown On Hyper-V host, the metric can have the following values: Windows Other On Hyper-V host, this metric is NA if the logical system is not active or Hyper-V Integration Components are not installed on it. On vMA, the metric can have the following values for host and logical system: ESX/ESXi followed by version or ESX-Serv (applicable only for a host) Linux Windows Solaris Unknown The value is NA for resource pool. ====== BYLS_LS_PROC_ID On HPVM host and Hyper-V host, each VM is manifested as a process. These processes have the executable name hpvmapp for HPVM and vmwp.exe for Hyper-V host. This metric will have the PID of the process corresponding to this logical system. On HPVM, typically hpvmapp has the option -d whose argument is the name of the VM. On Hyper-V host, for Root partition, this metric is NA. ====== BYLS_NUM_CPU The number of virtual CPUs configured for this logical system. This metric is equivalent to GBL_NUM_CPU on the corresponding logical system. On HPVM 3.x, the maximum number of CPUs a logical system can have is 4. On AIX SPLPAR, the number of CPUs can be configured irrespective of the available physical CPUs in the pool this logical system belongs to. For AIX wpars, this metric represents the logical CPUs of the global environment. On vMA, for a host the metric is the number of physical CPU threads on the host. For a logical system, the metric is the number of virtual CPUs configured. For a resource pool, the metric is NA. On Solaris Zones, this metric represents the number of CPUs in the CPU pool this zone is attached to. This metric value is equivalent to GBL_NUM_CPU inside corresponding non-global zone. ====== BYLS_NUM_DISK The number of disks configured for this logical system. Only local disk devices and optical devices present on the system are counted in this metric. On vMA, for a host the metric is the number of disks configured for the host. For a logical system, the metric is the number of logical disk devices present on the logical system. For a resource pool the metric is NA. For AIX wpars, this metric will be "na". On Hyper-V host, this metric value is equivalent to GBL_NUM_DISK inside corresponding Hyper-V guest. On Hyper-V host, this metric is NA if the logical system is not active. ====== BYLS_NUM_NETIF The number of network interfaces configured for this logical system. On LPAR, this metric includes the loopback interface. On Hyper-V host, this metric value is equivalent to GBL_NUM_NETWORK inside corresponding Hyper-V guest. On Solaris Zones, this metric value is equivalent to GBL_NUM_NETWORK inside corresponding non-global zone. On Hyper-V host, this metric is NA if the logical system is not active. On vMA, for a host the metric is the number of network adapters on the host. For a logical system, the metric is the number of network interfaces configured for the logical system. For a resource pool the metric is NA. ====== BYLS_CPU_ENTL_MIN The minimum CPU units configured for this logical system. 
On HP-UX HPVM, this metric indicates the minimum percentage of physical CPU that a virtual CPU of this logical system is guaranteed. On AIX SPLPAR, this metric is equivalent to "Minimum Capacity" field of 'lparstat -i' command. For WPARs, it is the minimum CPU share assigned to a WPAR that is guaranteed. WPAR shares CPU units of its global environment. On Hyper-V host, for Root partition, this metric is NA. On vMA, for a host, the metric is equivalent to the total number of cores on the host. For a resource pool and a logical system, this metric indicates the guaranteed minimum CPU units configured for it. On Solaris Zones, this metric indicates the configured minimum CPU percentage reserved for a logical system. For Solaris Zones, this metric is calculated as: BYLS_CPU_ENTL_MIN = ( BYLS_CPU_SHARES_PRIO / Pool-Cpu-Shares ) where Pool-Cpu-Shares is the total CPU shares available with the CPU pool the zone is associated with. Pool-Cpu-Shares is the sum of the BYLS_CPU_SHARES_PRIO values for all active zones associated with this pool. ====== BYLS_CPU_ENTL_MAX The maximum CPU units configured for a logical system. On HP-UX HPVM, this metric indicates the maximum percentage of physical CPU that a virtual CPU of this logical system can get. On AIX SPLPAR, this metric is equivalent to "Maximum Capacity" field of 'lparstat -i' command. For WPARs, it is the maximum percentage of CPU that a WPAR can have even if there is no contention for CPU. WPAR shares CPU units of its global environment. On Hyper-V host, for Root partition, this metric is NA. On vMA, for a host, the metric is equivalent to the total number of cores on the host. For a resource pool and a logical system, this metric indicates the maximum CPU units configured for it. ====== BYLS_UPTIME_SECONDS The uptime of this logical system in seconds. On AIX LPARs, this metric will be "na". On vMA, for a host and logical system the metric is the uptime in seconds while for a resource pool the metric is NA. ====== BYLS_LS_MODE This metric indicates whether the CPU entitlement for the logical system is Capped or Uncapped. The value "Uncapped" indicates that the logical system can utilize idle cycles from the shared processor pool of CPUs beyond its CPU entitlement. On AIX SPLPAR, this metric is the same as the "Mode" field of 'lparstat -i' command. For WPARs, this metric is always CAPPED. On vMA, the value is Capped for a host and Uncapped for a logical system. For resource pool, the value is Uncapped or Capped depending on whether the reservation is expandable for it. On Solaris Zones, this metric is "Capped" when the zone is assigned CPU shares and is attached to a valid CPU pool. ====== BYLS_LS_SHARED This metric indicates whether the physical CPUs are dedicated to this logical system or shared. On HP-UX HPVM and Hyper-V host, this metric is always "Shared". On vMA, the value is "Dedicated" for host, and "Shared" for logical system and resource pool. On AIX SPLPAR, this metric is equivalent to "Type" field of 'lparstat -i' command. For AIX wpars, this metric will always be "Shared". On Solaris Zones, this metric is "Dedicated" when this zone is attached to a CPU pool not shared by any other zone. ====== BYLS_DISPLAY_NAME On vMA, this metric indicates the name of the host or logical system or resource pool. On HPVM, this metric indicates the Virtual Machine name of the logical system and is equivalent to the "Virtual Machine Name" field of 'hpvmstatus' command. 
On AIX the value is as returned by the command "uname -n" (that is, the string returned from the "hostname" program). On Solaris Zones, this metric indicates the zone name and is equivalent to 'NAME' field of 'zoneadm list -vc' command. On Hyper-V host, this metric indicates the Virtual Machine name of the logical system and is equivalent to the Name displayed in Hyper-V Manager. For Root partition, the value is always "Root". ====== BYLS_MEM_ENTL The entitled memory configured for this logical system (in MB). On Hyper-V host, for Root partition, this metric is NA. On vMA, for host the value is the physical memory available in the system and for logical system this metric indicates the minimum memory configured while for resource pool the value is NA. For an AIX frame, this value is obtained from the command "lshwres -m (frame) -r mem --level sys". ====== BYLS_CPU_SHARES_PRIO This metric indicates the weightage/priority assigned to an Uncapped logical system. This value determines the minimum share of unutilized processing units that this logical system can utilize. The value of this metric will be "-3" in Performance Collection Component and "ul" in other clients if the cpu shares value is 'Unlimited' for a logical system. On AIX SPLPAR, this value is dependent on the available processing units in the pool and can range from 0 to 255. For WPARs, this metric represents how much of a particular resource a WPAR receives relative to the other WPARs. On vMA, for logical system and resource pool this value can range from 1 to 1000000 while for host the value is NA. On Solaris Zones, this metric sets a limit on the number of fair share scheduler (FSS) CPU shares for a zone. On Hyper-V host, this metric specifies the allocation of CPU resources when more than one virtual machine is running and competing for resources. This value can range from 0 to 10000. For Root partition, this metric is NA. ====== BYLS_CPU_PHYS_TOTAL_TIME Total time, in seconds, spent by the logical system on the physical CPUs. On HP-UX, this information is updated internally every 10 seconds so it may take that long for these values to be updated in Performance Collection Component/Glance. On vMA, the value indicates the time, in seconds, spent on the physical CPUs by the logical system, host, or resource pool. On KVM/Xen, this value is core-normalized if GBL_IGNORE_MT is enabled on the server. ====== BYLS_CPU_PHYS_TOTAL_UTIL Percentage of total time the physical CPUs were utilized by this logical system during the interval. On HP-UX, this information is updated internally every 10 seconds so it may take that long for these values to be updated in Performance Collection Component/Glance. On Solaris, this metric is calculated with respect to the available active physical CPUs on the system. On AIX, this metric is equivalent to the sum of BYLS_CPU_PHYS_USER_MODE_UTIL and BYLS_CPU_PHYS_SYS_MODE_UTIL. For AIX lpars, the metric is calculated with respect to the available physical CPUs in the pool to which this LPAR belongs. For AIX WPARs, the metric is calculated with respect to the available physical CPUs in the resource set or Global Environment. On vMA, the value indicates the percentage of total time the physical CPUs were utilized by the logical system, host, or resource pool. On KVM/Xen, this value is core-normalized if GBL_IGNORE_MT is enabled on the server. ====== BYLS_CPU_ENTL_EMIN On vMA, for host, logical system and resource pool the value is "na". ====== BYLS_MEM_ENTL_UTIL The percentage of entitled memory in use during the interval.
On vMA, for a logical system or a host, the value indicates the percentage of entitled memory in use by it during the interval. For an AIX frame, this is calculated using "lshwres -r mempool -m (frame)" from the HMC. Active Memory Sharing has to be turned on for this. On vMA, for a resource pool, this metric is "na". On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na". ====== BYLS_LS_UUID UUID of this logical system. This Id uniquely identifies this logical system across multiple hosts. On Hyper-V host, for Root partition, this metric is NA. On vMA, for a logical system or a host, the value indicates the UUID appended to display_name of the system. For a resource pool the value is the hostname of the host where the resource pool is hosted, followed by the unique id of the resource pool. For an AIX frame, the value is the display name appended with the serial number. For an LPAR, this value is the frame's name appended with the serial number. ====== BYLS_CPU_TOTAL_UTIL Percentage of total time the logical CPUs were not idle during this interval. This metric is calculated against the number of logical CPUs configured for this logical system. For AIX wpars, the metric represents the percentage of time the physical CPUs were not idle during this interval. ====== BYLS_CPU_PHYSC This metric indicates the number of CPU units utilized by the logical system. On an Uncapped logical system, this value will be equal to the CPU units capacity used by the logical system during the interval. This can be more than the value entitled for a logical system. ====== BYLS_CPU_ENTL_UTIL Percentage of entitled processing units (guaranteed processing units allocated to this logical system) consumed by the logical system. On HP-UX HPVM host, the metric indicates the logical system's CPU utilization with respect to minimum CPU entitlement. On HP-UX HPVM host, this metric is calculated as: BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC / ((BYLS_CPU_ENTL_MIN/100) * BYLS_NUM_CPU)) * 100 On AIX, this metric is calculated as: BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC / BYLS_CPU_ENTL) * 100 On WPAR, this metric is calculated as: BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC / BYLS_CPU_ENTL_MAX) * 100 This metric matches the "%Resc" field of the topas command (inside WPAR). On Solaris Zones, the metric indicates the logical system's CPU utilization with respect to minimum CPU entitlement. This metric is calculated as: BYLS_CPU_ENTL_UTIL = (BYLS_CPU_TOTAL_UTIL / BYLS_CPU_SHARES_PRIO) * 100 If a Solaris zone is not assigned a CPU entitlement value then a CPU entitlement value is derived for this zone based on the total CPU entitlement associated with the CPU pool this zone is attached to. On Hyper-V host, for Root partition, this metric is NA. On vMA, for a host the value is the same as BYLS_CPU_PHYS_TOTAL_UTIL while for logical system and resource pool the value is the percentage of processing units consumed with respect to the minimum CPU entitlement. ====== BYLS_RUN_QUEUE The 1-minute load average for processors available for a logical system. On AIX LPAR, the load average is the total number of runnable and running threads summed over all processors during the interval. ====== BYLS_HYPCALL The number of Hypervisor calls made by a logical system during the interval. A higher number of calls will result in higher BYLS_CPU_PHYS_SYS_MODE_UTIL, BYLS_CPU_PHYS_WAIT_MODE_UTIL, GBL_CPU_SYS_MODE_UTIL and GBL_CPU_WAIT_UTIL.
For AIX wpars, the metric will be "na". ====== BYLS_CPU_MT_ENABLED Indicates whether the CPU hardware threads are enabled ("On") or not ("Off") for a logical system. For AIX WPARs, the metric will be "na". On vMA, this metric indicates whether the CPU hardware threads are enabled or not for a host while for a resource pool and a logical system the value is not available ("na"). ====== BYLS_VCSWITCH_RATE Number of virtual context switches per second for a logical system during the interval. For AIX wpars, the metric will be "na". ====== BYLS_CPU_ENTL The entitlement or the CPU units granted to a logical system at startup. On AIX SPLPAR, this metric indicates the cpu units allocated by the Hypervisor to a logical system at startup. This metric is equivalent to the "Entitled Capacity" field of 'lparstat -i' command. For WPARs, it is the maximum units of CPU that a WPAR can have when there is contention for CPU. WPAR shares CPU units of its global environment. ====== BYLS_HYP_UTIL Percentage of time spent in the Hypervisor by a logical system during the interval. Higher utilization of the hypervisor will result in higher BYLS_CPU_PHYS_SYS_MODE_UTIL, BYLS_CPU_PHYS_WAIT_MODE_UTIL, GBL_CPU_SYS_MODE_UTIL and GBL_CPU_WAIT_UTIL. For AIX wpars, the metric will be "na". ====== BYLS_CPU_PHYS_USER_MODE_UTIL The percentage of time the physical CPUs were in user mode for the logical system during the interval. On AIX LPAR, this value is equivalent to the "%user" field reported by the "lparstat" command. On Hyper-V host, this metric indicates the percentage of time spent in guest code. On vMA, the metric indicates the percentage of time the physical CPUs were in user mode during the interval for the host or logical system. On vMA, for a resource pool, this metric is "na". ====== BYLS_CPU_PHYS_SYS_MODE_UTIL The percentage of time the physical CPUs were in system mode (kernel mode) for the logical system during the interval. On AIX LPAR, this value is equivalent to the "%sys" field reported by the "lparstat" command. On Hyper-V host, this metric indicates the percentage of time spent in Hypervisor code. On vMA, the metric indicates the percentage of time the physical CPUs were in system mode during the interval for the host or logical system. On vMA, for a resource pool, this metric is "na". ====== BYLS_IP_ADDRESS This metric indicates the IP address of the particular logical system. On vMA, this metric indicates the IP address for a host and a logical system while for a resource pool the value is NA. ====== BYLS_MEM_ENTL_MIN The minimum amount of memory configured for the logical system, in MB. On AIX LPARs, this metric will be "na". On vMA, this metric indicates the reserved amount of memory configured for a host, resource pool or a logical system. On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na". ====== BYLS_MEM_ENTL_MAX The maximum amount of memory configured for a logical system, in MB. The value of this metric will be "-3" in Performance Collection Component and "ul" in other clients if entitlement is 'Unlimited' for a logical system. On AIX LPARs, this metric will be "na". On vMA, this metric indicates the maximum amount of memory configured for a resource pool or a logical system. For a host, the value is the amount of physical memory available in the system.
On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na". ====== BYLS_MEM_SHARES_PRIO The weightage/priority for memory assigned to this logical system. This value influences the share of unutilized physical memory that this logical system can utilize. The value of this metric will be "-3" in Performance Collection Component and "ul" in other clients if the memory shares value is 'Unlimited' for a logical system. On AIX LPARs, this metric will be "na". On vMA, this metric indicates the share of memory configured to a resource pool and a logical system. For a host the value is NA. ====== BYLS_MEM_OVERHEAD The amount of memory associated with a logical system that is currently consumed on the host system due to virtualization. On vMA, this metric indicates the amount of overhead memory associated with a host, logical system and resource pool. ====== BYLS_MEM_SWAPPED On vMA, for a host, logical system and resource pool, this metric indicates the amount of memory that has been transparently swapped to and from the disk. ====== BYLS_MEM_PHYS_UTIL The percentage of physical memory used during the interval. On vMA and Cluster, the metric indicates the percentage of physical memory used by a host or logical system. On vMA, for a resource pool, this metric is "na". On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na". On KVM/Xen, this is the percentage of the total memory assigned to the VM that is currently used. For Domain-0 or any other instance with unlimited memory entitlement, it is NA. ====== BYLS_LS_PATH This metric indicates the installation path for the logical system. On Hyper-V host, for Root partition, this metric is NA. On vMA, the metric indicates the installation path for host or logical system. On vMA, for a resource pool and a host, this metric is "na". ====== BYLS_LS_TYPE The type of this logical system. On AIX, the logical systems can have one of the following types: lpar, sys wpar, app wpar. On vMA, the value of this metric is "VMware". For an AIX frame, the value of this metric is "FRAME". ====== BYLS_POOL_NAME This metric indicates the name of the cpu pool this zone is attached to. ====== BYLS_SCHEDULING_CLASS This metric indicates the scheduling class for the zone. ====== BYLS_MEM_SWAP This metric indicates the total amount of swap that can be consumed by user process address space mappings and tmpfs mounts for this zone. The metric value is represented in Mbytes. ====== BYLS_MEM_LOCKED This metric indicates the amount of locked physical memory available to a zone. The metric value is represented in Mbytes. ====== BYLS_LS_HOSTNAME This is the DNS registered name of the system. On Hyper-V host, this metric is NA if the logical system is not active or Hyper-V Integration Components are not installed on it. On vMA, for a host and logical system the metric is the Fully Qualified Domain Name, while for resource pool the value is NA. ====== BYLS_CPU_PHYS_WAIT_MODE_UTIL The percentage of time the physical CPUs were in wait mode for the logical system during the interval. On AIX LPAR, this value is equivalent to the "%wait" field reported by the "lparstat" command.
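The entitlement-utilization formulas quoted under BYLS_CPU_ENTL_UTIL earlier in this section reduce to simple arithmetic. The following minimal Python sketch illustrates only the AIX SPLPAR and WPAR cases; the function names and sample figures are illustrative and are not part of the Performance Collection Component.

    # Illustrative sketch of the BYLS_CPU_ENTL_UTIL formulas quoted above.
    # Inputs are the already-collected metric values for one interval.

    def entl_util_splpar(byls_cpu_physc, byls_cpu_entl):
        # AIX SPLPAR: (BYLS_CPU_PHYSC / BYLS_CPU_ENTL) * 100
        return (byls_cpu_physc / byls_cpu_entl) * 100.0

    def entl_util_wpar(byls_cpu_physc, byls_cpu_entl_max):
        # AIX WPAR: (BYLS_CPU_PHYSC / BYLS_CPU_ENTL_MAX) * 100
        return (byls_cpu_physc / byls_cpu_entl_max) * 100.0

    # Example: an SPLPAR entitled to 2.0 processing units that consumed
    # 1.5 units during the interval shows 75% entitlement utilization.
    print(entl_util_splpar(1.5, 2.0))   # 75.0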
====== BYLS_CPU_PHYS_IDLE_MODE_UTIL The percentage of time the physical CPUs were in idle state for the logical system during the interval. On AIX LPAR, this value is equivalent to the "%idle" field reported by the "lparstat" command. ====== BYLS_PHANTOM_INTR It is the number of phantom interrupts that the logical partition received during the interval. A phantom interrupt is an interrupt sent to another logical partition that shares the same CPU Unit. On AIX LPAR, this value is equivalent to the "phint" field reported by the "lparstat" command. For AIX wpars, the metric will be "na". ====== BYLS_MEM_SWAP_USED This metric indicates the amount of swap memory consumed by the zone with respect to total configured swap memory (BYLS_MEM_SWAP). The metric value is represented in Mbytes. ====== BYLS_MEM_LOCKED_USED This metric indicates the amount of locked memory consumed by the zone with respect to total configured locked memory (BYLS_MEM_LOCKED). The metric value is represented in Mbytes. ====== BYLS_MEM_SWAP_UTIL On Solaris, this metric indicates the percentage of swap memory consumed by the zone with respect to total configured swap memory (BYLS_MEM_SWAP). This metric is calculated as: BYLS_MEM_SWAP_UTIL = (BYLS_MEM_SWAP_USED / BYLS_MEM_SWAP) * 100 In case of uncapped zones (swap memory not configured), this is calculated as: BYLS_MEM_SWAP_UTIL = (BYLS_MEM_SWAP_USED) / ( (GBL_SWAP_SPACE_MEM_AVAIL) - (Sum of BYLS_MEM_SWAP of all capped zones) ) * 100 On vMA, for a logical system, it is the percentage of swap memory utilized with respect to the amount of swap memory available for a logical system. For host and resource pool the value is NA. For a logical system this metric is calculated using the formula below: (BYLS_MEM_SWAPPED * 100) / (BYLS_MEM_ENTL - BYLS_MEM_ENTL_MIN) ====== BYLS_MEM_LOCKED_UTIL This metric indicates the percentage of locked memory consumed by the zone with respect to total configured locked memory (BYLS_MEM_LOCKED). In case of uncapped zones (locked memory not configured), this is calculated as: BYLS_MEM_LOCKED_UTIL = (BYLS_MEM_LOCKED_USED) / ( (GBL_MEM_PHYS) - (Sum of BYLS_MEM_LOCKED of all capped zones) ) * 100 ====== BYLS_LS_ROLE On vMA, for a host the metric is HOST. For a logical system the value is GUEST and for a resource pool the value is RESPOOL. For a logical system which is a vMA or VA, the value is PROXY. For datacenter, the value is DATACENTER. For cluster, the value is CLUSTER. For datastore, the value is DATASTORE. For template, the value is TEMPLATE. For an AIX frame, the role is "Host". For an LPAR, the role is "Guest". ====== BYLS_NUM_CPU_CORE On vMA, for a host this metric provides the total number of CPU cores on the system. For a logical system or a resource pool the value is NA. ====== BYLS_NUM_SOCKET On vMA, for a host, this metric indicates the number of physical cpu sockets on the system. For a logical system or a resource pool the value is NA. ====== BYLS_UPTIME_HOURS On vMA, for a host and logical system the metric is the time, in hours, since the last system reboot. For a resource pool the value is NA. ====== BYLS_CPU_CLOCK On vMA, for a host and logical system, it is the clock speed of the CPUs in MHz if all of the processors have the same clock speed. For a resource pool the value is NA. This metric represents the CPU clock speed. For an AIX frame, this metric is available only if the LPAR supports the perfstat_partition_config call from libperfstat.a. This is usually present on AIX 7.1 onwards. For an LPAR, this value will be na.
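The zone swap calculations quoted under BYLS_MEM_SWAP_UTIL above can be expressed as a couple of lines of arithmetic. The Python sketch below is illustrative only, assumes the referenced metric values (in MB) are already available to the caller, and its function names are not part of the Performance Collection Component.

    # Illustrative sketch of the Solaris zone formulas under BYLS_MEM_SWAP_UTIL.

    def swap_util_capped(swap_used_mb, swap_cap_mb):
        # Capped zone: (BYLS_MEM_SWAP_USED / BYLS_MEM_SWAP) * 100
        return (swap_used_mb / swap_cap_mb) * 100.0

    def swap_util_uncapped(swap_used_mb, gbl_swap_avail_mb, capped_zones_swap_mb):
        # Uncapped zone: swap used divided by the swap left over after the
        # capped zones' configured swap is subtracted, times 100.
        return (swap_used_mb / (gbl_swap_avail_mb - capped_zones_swap_mb)) * 100.0

    # Example: a capped zone that has used 256 MB of a 1024 MB swap cap
    # reports 25% swap utilization.
    print(swap_util_capped(256, 1024))   # 25.0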
====== BYLS_MEM_FREE The amount of free memory on the logical system, in MB. On vMA, for a host and logical system, it is the amount of memory not allocated. For a resource pool the value is "na". On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na". ====== BYLS_MEM_FREE_UTIL The percentage of memory that is free at the end of the interval. On vMA, for a resource pool the value is NA. On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na". ====== BYLS_BOOT_TIME On vMA, for a host and logical system the metric is the date and time when the system was last booted. The value is NA for resource pool. Note that this date is obtained from the VMware API as an already formatted string and may not conform to the expected localization. ====== BYLS_MACHINE_MODEL On vMA, for a host, it is the CPU model of the host system. For a logical system and resource pool the value is "na". The machine model of the AIX frame, if present. For an LPAR, this value would be "na". ====== BYLS_MEM_AVAIL On vMA, for a host, the amount of physical available memory in the host system (in MBs unless otherwise specified). For a logical system and resource pool the value is NA. ====== BYLS_MEM_PHYS On vMA, for host the value is the physical memory available in the system and for logical system this metric indicates the minimum memory configured. On vMA, for a resource pool, this metric is "na". On HPVM, this metric matches the data in the "Memory Details" section of "hpvmstatus -V", when the dynamic memory driver is not enabled, and it matches the data in the "Dynamic Memory Information" section when the dynamic memory driver is active. The dynamic memory driver is currently only available on guests running HP-UX 11iv3 or newer versions. ====== BYLS_NUM_LS On vMA, for a host, resource pool, virtual app and datacenter, this indicates the number of logical systems hosted. For all other entities, the value is NA. For an AIX frame, this is the number of LPARs hosted by the frame. For an LPAR, this value will be "na". ====== BYLS_NUM_ACTIVE_LS On vMA, for a host, this indicates the number of logical systems hosted in a system that are active. For a logical system and resource pool the value is NA. For an AIX frame, this is the number of LPARs in "Running" state. For an LPAR, this value will be "na". ====== BYLS_CPU_USER_MODE_UTIL On vMA, for a host and a logical system, this metric indicates the percentage of time the CPU was in user mode during the interval. On vMA, for a resource pool, this metric is "na". ====== BYLS_CPU_SYS_MODE_UTIL On vMA, for a host and a logical system, this metric indicates the percentage of time the CPU was in system mode during the interval. On vMA, for a resource pool, this metric is "na". ====== BYLS_LS_SERIALNO The serial number of the AIX frame. For an LPAR, this value would be "na". ====== BYLS_CPU_FAMILY The family of the processor of the frame. This is available only if the LPAR supports the perfstat_partition_config call from libperfstat.a. This is usually present on AIX 7.1 onwards. ====== BYLS_MGMT_IP_ADDRESS The value is the IP address of the HMC. The entry in the file "/var/opt/perf/hmc" is in the form (username)@(IPaddress).
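For example, assuming a hypothetical HMC user name and management address, a line in "/var/opt/perf/hmc" following the (username)@(IPaddress) format described above would look like:

    hscroot@192.0.2.10

Both the user name and the address here are placeholders; substitute the credentials configured for your own HMC.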
====== BYLS_LS_HOST_HOSTNAME On vMA, for logical system and resource pool, it is the FQDN of the host on which they are hosted. For a host, the value is NA. ====== BYLS_LS_PARENT_UUID On vMA, the metric indicates the UUID appended to display_name of the parent entity. For logical system and resource pool this metric could indicate the UUID appended to display_name of a host or resource pool as they can be created under a host or resource pool. For a host, the value is NA. For an LPAR, if the frame is discovered, the value will be the BYLS_LS_UUID of the frame. ====== BYLS_LS_PARENT_TYPE On vMA, the metric indicates the type of parent entity. The value is HOST if the parent is a host, RESPOOL if the parent is a resource pool. For a host, the value is NA. ====== BYLS_VC_IP_ADDRESS On vMA, for a host, the metric indicates the IP address of the Virtual Centre that the host is managed by. For a resource pool and logical system the value is NA. ====== BYLS_MEM_USED The amount of memory used by the logical system at the end of the interval. On vMA, this applies to hosts, resource pools and logical systems. On vMA, for a resource pool, this metric is "na". On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na". ====== BYLS_CPU_CYCLE_TOTAL_USED On vMA, for host, resource pool and logical system, it is the total time the physical CPUs were utilized during the interval, represented in cpu cycles. On KVM/Xen, this is the number of milliseconds used on all CPUs during the interval. ====== BYLS_MEM_BALLOON_UTIL On vMA, for logical system, it is the amount of memory held by memory control for ballooning. It is represented as a percentage of BYLS_MEM_ENTL. For a host and resource pool the value is NA. On KVM/Xen, this value will be "na" if the version of libvirt doesn't support memory stats. ====== BYLS_MEM_BALLOON_USED On vMA, for logical system and cluster, it is the amount of memory held by memory control for ballooning. The value is represented in KB. For a host and resource pool the value is NA. On KVM/Xen, this value will be "na" if the version of libvirt doesn't support memory stats. ====== BYLS_CPU_UNRESERVED On vMA, for host, it is the number of CPU cycles that are available for creating a new logical system. For a logical system and resource pool the value is NA. ====== BYLS_CPU_PHYS_WAIT_UTIL On vMA, for a logical system it is the percentage of time, during the interval, that the virtual CPU was waiting for the IOs to complete. For a host and resource pool the value is NA. ====== BYLS_CPU_PHYS_READY_UTIL On vMA, for a logical system it is the percentage of time, during the interval, that the CPU was in ready state. For a host and resource pool the value is NA. ====== BYLS_MEM_ACTIVE On vMA, for a logical system it is the amount of memory that is actively used. For a host and resource pool the value is NA. ====== BYLS_MEM_UNRESERVED On vMA, for a host it is the amount of memory that is unreserved. For a logical system and resource pool the value is "na". This is the memory reservation not used by the Service Console, VMkernel, vSphere services, and other powered-on VMs' user-specified memory reservations and overhead memory. ====== BYLS_MEM_HEALTH On vMA, for a host, it is a number that indicates the state of the memory. A low number indicates the system is not under memory pressure. For a logical system and resource pool the value is "na".
On vMA, the values are defined as: 0 - High - indicates free memory is available and no memory pressure. 1 - Soft 2 - Hard 3 - Low - indicates there is pressure for free memory. On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na". For relevant guests, these values represent the level of memory pressure, 0 being none and 3 being very high. ====== BYLS_MEM_SWAPTARGET On vMA, for a logical system the value indicates the amount of memory that can be swapped. For a host and resource pool the value is "na". ====== BYLS_MEM_SWAPIN On vMA, for a logical system the value indicates the amount of memory that is swapped in during the interval. For a host and resource pool the value is NA. On KVM/Xen, this value will be "na" if extended memory statistics are not available. ====== BYLS_MEM_SWAPOUT On vMA, for a logical system the value indicates the amount of memory that is swapped out during the interval. For a host and resource pool the value is NA. On KVM/Xen, this value will be "na" if extended memory statistics are not available. ====== BYLS_NET_BYTE_RATE On vMA, for a host and logical system, it is the sum of data transmitted and received for all the NIC instances of the host and virtual machine. It is represented in KBps. For a resource pool the value is NA. ====== BYLS_NET_IN_BYTE On vMA, for a host and logical system, it is the number of bytes, in MB, received during the interval. For a resource pool the value is NA. ====== BYLS_NET_OUT_BYTE On vMA, for a host and logical system, it is the number of bytes, in MB, transmitted during the interval. For a resource pool the value is NA. ====== BYLS_CLUSTER_NAME On vMA, for a host and resource pool it is the name of the cluster to which the host belongs when it is managed by virtual centre. For a logical system, the value is NA. ====== BYLS_CPU_CYCLE_ENTL_MIN On vMA, for a host, logical system and resource pool this value indicates the minimum processor capacity, in MHz, configured for the entity. On HP-UX, the minimum processor capacity, in MHz, configured for this logical system. ====== BYLS_CPU_CYCLE_ENTL_MAX On vMA, for a host, logical system and resource pool this value indicates the maximum processor capacity, in MHz, configured for the entity. If the maximum processor capacity is not configured for the entity, a value of "-3" will be displayed in Performance Collection Component and "ul" (unlimited) in other clients. On HP-UX, the maximum processor capacity, in MHz, configured for this logical system. ====== BYLS_DISK_PHYS_READ_RATE On vMA, for a host and a logical system, this metric indicates the number of physical reads per second during the interval. On vMA, for a resource pool, this metric is "na". ====== BYLS_DISK_PHYS_READ On vMA, for a host and a logical system this metric indicates the number of physical reads during the interval. On vMA, for a resource pool, this metric is "na". ====== BYLS_DISK_PHYS_WRITE_RATE On vMA, for a host and a logical system, this metric indicates the number of physical writes per second during the interval. On vMA, for a resource pool, this metric is "na". ====== BYLS_DISK_PHYS_WRITE On vMA, for a host and a logical system, this metric indicates the number of physical writes during the interval. On vMA, for a resource pool, this metric is "na".
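Two conventions from the entries above are easy to misread in raw exported data: the "-3" sentinel that the Performance Collection Component uses where other clients show "ul" (unlimited), and the numeric BYLS_MEM_HEALTH states. The following minimal Python sketch merely decodes those two conventions for display; the names used here are illustrative and are not product APIs.

    # Illustrative decoding of the "-3"/"ul" sentinel and BYLS_MEM_HEALTH codes.

    UNLIMITED_SENTINEL = -3
    MEM_HEALTH_STATES = {0: "High", 1: "Soft", 2: "Hard", 3: "Low"}

    def format_entitlement(raw_value):
        # Performance Collection Component exports -3 where other clients show "ul".
        return "ul" if raw_value == UNLIMITED_SENTINEL else str(raw_value)

    def mem_health_label(code):
        return MEM_HEALTH_STATES.get(code, "na")

    print(format_entitlement(-3))   # ul
    print(mem_health_label(3))      # Low - the host is under memory pressure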
====== BYLS_DISK_PHYS_BYTE_RATE On vMA, for a host and a logical system, this metric indicates the average number of KBs per second at which data was transferred to and from disks during the interval. On vMA, for a resource pool, this metric is "na". ====== BYLS_DISK_PHYS_BYTE On vMA, for a host and a logical system, this metric indicates the number of KBs transferred to and from disks during the interval. On vMA, for a resource pool, this metric is "na". ====== BYLS_DISK_PHYS_READ_BYTE_RATE On vMA, for a host and a logical system, this metric indicates the average number of KBs transferred from the disk per second during the interval. On vMA, for a resource pool, this metric is "na". ====== BYLS_DISK_PHYS_WRITE_BYTE_RATE On vMA, for a host and a logical system, this metric indicates the average number of KBs transferred to the disk per second during the interval. On vMA, for a resource pool, this metric is "na". ====== BYLS_DISK_UTIL On vMA, for a host, it is the average percentage of time during the interval (average utilization) that all the disks had IO in progress. For logical system and resource pool the value is NA. ====== BYLS_NET_IN_PACKET_RATE On vMA, for a host and a logical system, this metric indicates the number of successful packets per second received through all network interfaces during the interval. On vMA, for a resource pool, this metric is "na". ====== BYLS_NET_IN_PACKET On vMA, for a host and a logical system, this metric indicates the number of successful packets received through all network interfaces during the interval. On vMA, for a resource pool, this metric is "na". ====== BYLS_NET_OUT_PACKET_RATE On vMA, for a host and a logical system, this metric indicates the number of successful packets per second sent through the network interfaces during the interval. On vMA, for a resource pool, this metric is "na". ====== BYLS_NET_OUT_PACKET On vMA, for a host and a logical system, it is the number of successful packets sent through all network interfaces during the last interval. On vMA, for a resource pool, this metric is "na". ====== BYLS_NET_PACKET_RATE On vMA, for a host and a logical system, it is the number of successful packets per second, both sent and received, for all network interfaces during the interval. On vMA, for a resource pool, this metric is "na". ====== BYLS_MEM_SYS On vMA, for a host, it is the amount of physical memory (in MBs unless otherwise specified) used by the system (kernel) during the interval. For logical system and resource pool the value is NA. ====== BYLS_DISK_UTIL_PEAK On vMA, for a host, it is the utilization of the busiest disk during the interval. For a logical system and resource pool the value is NA. ====== BYLS_DATACENTER_NAME On vMA, for a host it is the name of the datacenter to which the host belongs when it is managed by virtual center. To uniquely identify a datacenter in a virtual center, the datacenter name is appended with the folder names in bottom-up order. For a logical system and resource pool, the value is NA. ====== BYLS_NUM_HOSTS On vMA, for a DataCenter and cluster it is the number of hosts hosted by it. For all other entities, the value is NA.
====== BYLS_SUBTYPE On vMA, for a datastore. ====== BYLS_DISK_CAPACITY On vMA, for a datastore. ====== BYLS_MULTIACC_ENABLED On vMA, for a datastore. ====== BYLS_DISK_IORM_ENABLED On vMA, for a datastore. ====== BYLS_DISK_IORM_THRESHOLD On vMA, for a datastore. ====== BYLS_DISK_FREE_SPACE On vMA, for a datastore. ====== BYLS_DISK_SHARE_PRIORITY On vMA, for a datastore. ====== BYLS_DISK_READ_LATENCY On vMA, for a host. ====== BYLS_DISK_WRITE_LATENCY On vMA, for a host. ====== BYLS_DISK_QUEUE_DEPTH_PEAK On vMA, for a host. ====== BYLS_DISK_COMMAND_ABORT_RATE On vMA, for a host ... issued in that interval. The value is NA for all other entities. ====== BYLS_DISK_THROUGPUT_USAGE On vMA, for a datastore. ====== BYLS_DISK_THROUGHPUT_CONTENTION On vMA, for a datastore. ====== BYLS_NUM_CLONES On vMA, for a cluster. The value is NA for all other entities. ====== BYLS_NUM_CREATE On vMA, for a cluster. The value is NA for all other entities. ====== BYLS_NUM_DEPLOY On vMA, for a cluster. The value is NA for all other entities. ====== BYLS_NUM_DESTROY On vMA, for a cluster. The value is NA for all other entities. ====== BYLS_NUM_RECONFIGURE On vMA, for a cluster. The value is NA for all other entities. ====== BYLS_TOTAL_VM_MOTIONS On vMA, for a cluster ... for that cluster in that interval. The value is NA for all other entities. ====== BYLS_TOTAL_SV_MOTIONS On vMA, for a cluster ... for that cluster in that interval. The value is NA for all other entities. ====== BYLS_CPU_EFFECTIVE_UTIL On vMA, for a cluster. Effective CPU = Aggregate host CPU capacity - (VMkernel CPU + Service Console CPU + other service CPU). The value is NA for all other entities. ====== BYLS_MEM_EFFECTIVE_UTIL On vMA, for a cluster ... that is available for use for virtual machine memory (physical memory for use by the Guest OS) and virtual machine overhead memory. Effective Memory = Aggregate host machine memory - (VMkernel memory + Service Console memory + other service memory). The value is NA for all other entities. ====== BYLS_CPU_FAILOVER On vMA, for a cluster. ====== BYLS_GUEST_TOOLS_STATUS On vMA, for a guest. The value is NA for all other entities. ====== BYLS_LS_CONNECTION_STATE For a host, it can have the values Connected, Disconnected or NotResponding. The value is NA for all other entities. ====== BYLS_LS_NUM_SNAPSHOTS For a guest, the metric is the number of snapshots created for the system. The value is NA for all other entities. ====== BYLS_LS_STATE_CHANGE_TIME For a guest, the metric is the epoch time when the last state change was observed. The value is NA for all other entities. ====== BYLS_LS_MODE_MEM A string representing the type(s) of memory caps associated with a zone. A 'P' indicates that the zone's physical memory is capped. An 'S' indicates that the zone's swap memory is capped. An 'L' indicates that the zone's locked memory is capped. A 'U' indicates that the zone is NOT memory capped. ====== LVolume ====== LV_DIRNAME The path name of this logical volume or volume/disk group. On HP-UX 11i and beyond, data is available from VERITAS Volume Manager (VxVM). LVM (Logical Volume Manager) uses the terminology "volume group" to describe a set of related volumes. VERITAS Volume Manager uses the terminology "disk group" to describe a collection of VM disks. For additional information on VERITAS Volume Manager, see vxintro(1M).
For LVM logical volumes, this is the name used as a parameter to the lvdisplay(1M) command. For volume groups, this is the name used as a parameter to the vgdisplay(1M) command. The entry referred to as the "/dev/vgXX/group" entry shows the internal resources used by the LVM software to manage the logical volumes. The path name of this logical volume or volume/disk group. The absolute path name of this logical volume, volume group, or DiskSuite metadevice name. For example: Volume group: /dev/vx/dsk/(group_name) Logical volume: /dev/vx/dsk/(group_name)/(log_vol) Disk Suite: /dev/md/dsk/(meta_device_name) The device file name of this logical volume or volume group. The device file name of the virtual disk. The name is used as a parameter to the dkconfig(8) command. The example device file name is "/dev/vd/vdisk13". Virtual disks function identically to traditional physical disks, but their relation to physical disks is determined from a mapping of a physical disk (or disks) to a virtual disk (vdisk(1M)). This is done by means of a virtual disk configuration file, /etc/dktab. The device file name of a logical volume. For example: Volume groups: /dev/vol/(group_name) To get a list of all group names from the system use: volprint -G Logical volumes: /dev/vol/(group_name)/(log_vol) To get a list of all logical-volume names from a group: ls -l /dev/vol/(choosen_group_name)/ ====== LV_DEVNO Major / Minor number of this logical volume. On HP-UX 11i and beyond, data is available from VERITAS Volume Manager (VxVM). LVM (Logical Volume Manager) uses the terminology "volume group" to describe a set of related volumes. VERITAS Volume Manager uses the terminology "disk group" to describe a collection of VM disks. For additional information on VERITAS Volume Manager, see vxintro(1M). Disk groups in the VERITAS Volume Manager do not have device files. Therefore, "na" is reported for this metric since it is not applicable. Major / Minor number of this logical volume. Major / Minor number of this logical volume. Volume groups in the Veritas LVM do not have device files, so for this entry, "na" is shown for the major/minor numbers. Major / Minor number of the virtual disk. Major / Minor numbers are encoded in a 32-bit integer with the lower 18 bits representing the minor number and upper (leftmost) 14 bits representing the major number. The integer is displayed as a hexadecimal number. The minor number is the minor number of the "s0" slice (partition) of the spindle. ====== LV_TYPE Either "G" or "V", indicating either a volume/disk group ("G") or a logical volume ("V"). On SUN, it can also be a Disk Suite meta device ("S"). On HP-UX 11i and beyond, data is available from VERITAS Volume Manager (VxVM). LVM (Logical Volume Manager) uses the terminology "volume group" to describe a set of related volumes. VERITAS Volume Manager uses the terminology "disk group" to describe a collection of VM disks. For additional information on VERITAS Volume Manager, see vxintro(1M). ====== LV_GROUP_NAME On HP-UX, this is the name of this volume/disk group associated with a logical volume. On SUN and AIX, this is the name of this volume group associated with a logical volume. On SUN, this metric is applicable only for the Veritas LVM. On HP-UX 11i and beyond, data is available from VERITAS Volume Manager (VxVM). LVM (Logical Volume Manager) uses the terminology "volume group" to describe a set of related volumes. VERITAS Volume Manager uses the terminology "disk group" to describe a collection of VM disks. 
For additional information on VERITAS Volume Manager, see vxintro(1M). ====== LV_STATE_LV On SUN, this is the kernel state of this volume. Enabled means the volume block device can be used. Detached means the volume block device cannot be used, but ioctl's will still be accepted and the plex block devices will still accept reads and writes. Disabled means that the volume or its plexes cannot be used for any operations. DiskSuite metadevices are not supported. This metric is reported as "na" for volume groups since it is not applicable. On AIX, this is the state of this logical volume in the volume group. The normal state of a logical volume should be "open/syncd", which means that the logical volume is open and clean. ====== LV_MIRRORP_LV The mirroring Policy that is configured for use on this logical volume. There are three different mirror policies that the logical volume manager driver can use: Sequential, Parallel and Striped. Logical volume performance depends on these policies, and they should be adjusted properly based on the application. ====== LV_MIRRORCONSIST_LV The mirror write consistency parameter that is configured for this logical volume. If the write consistency flag is set to "Consistency" then every write to the mirrored partition of the logical volume is verified for data integrity. This can reduce performance, but data integrity needs may justify such situations. If the consistency parameter is set to "NoConsistency", then writes to the mirrored partition are not verified. This will improve performance, but may increase the risk to data integrity. ====== LV_OPEN_LV The number of logical volumes currently opened in this volume group (or disk group, if HP-UX). An entry of "na" indicates that there are no logical volumes open in this volume group and there are no active disks in this volume group. On HP-UX, the extra entry (referred to as the "/dev/vgXX/group" entry), shows the internal resources used by the LVM software to manage the logical volumes. On HP-UX 11i and beyond, data is available from VERITAS Volume Manager (VxVM). LVM (Logical Volume Manager) uses the terminology "volume group" to describe a set of related volumes. VERITAS Volume Manager uses the terminology "disk group" to describe a collection of VM disks. For additional information on VERITAS Volume Manager, see vxintro(1M). On SUN, this metric is reported as "na" for logical volumes and metadevices since it is not applicable. ====== LV_CACHE_SIZE The number of entries in this logical volume group's Mirror Write Cache (MWC). The size of this cache is determined by the kernel's logical volume code and is not configurable. The MWC is optional and only used for volume mirroring. The MWC tracks each write of mirrored data to the physical volumes and maintains a record of any mirrored writes not yet successfully completed at the time of a system crash. The MWC is disabled with the lvchange(1M) command ("lvchange -M n..."), which may increase system performance, but slow down recovery in the event of a system failure. This metric is reported as "na" for VERITAS Volume Manager. On HP-UX 11i and beyond, data is available from VERITAS Volume Manager (VxVM). LVM (Logical Volume Manager) uses the terminology "volume group" to describe a set of related volumes. VERITAS Volume Manager uses the terminology "disk group" to describe a collection of VM disks. For additional information on VERITAS Volume Manager, see vxintro(1M). 
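Where the LV_DEVNO entry above describes the device number as a 32-bit integer with the upper (leftmost) 14 bits holding the major number and the lower 18 bits holding the minor number, the split can be recovered with two bit operations. The Python sketch below is illustrative only; the helper name and the sample value are not part of the Performance Collection Component.

    # Illustrative split of the 32-bit LV_DEVNO encoding described above:
    # upper (leftmost) 14 bits = major number, lower 18 bits = minor number.

    def split_devno(devno_hex):
        value = int(devno_hex, 16)
        major = value >> 18               # upper 14 bits
        minor = value & ((1 << 18) - 1)   # lower 18 bits
        return major, minor

    # Example with a made-up device number:
    print(split_devno("0x00740002"))   # (29, 2)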
====== LV_CACHE_QUEUE The number of requests queued to the Mirror Write Cache (MWC) at the end of the interval. The MWC is only used for volume mirroring and its use degrades performance, as extra work is required during disk writes to maintain the Mirror Write Cache. The MWC is disabled with the lvchange(1M) command ("lvchange -M n..."), which may increase system performance, but slow down recovery in the event of a system failure. This metric is reported as "na" for VERITAS Volume Manager. On HP-UX 11i and beyond, data is available from VERITAS Volume Manager (VxVM). LVM (Logical Volume Manager) uses the terminology "volume group" to describe a set of related volumes. VERITAS Volume Manager uses the terminology "disk group" to describe a collection of VM disks. For additional information on VERITAS Volume Manager, see vxintro(1M). ====== LV_CACHE_HIT The number of requests successfully satisfied from the Mirror Write Cache (MWC) during the interval. The Mirror Write Cache tracks each write of mirrored data to the physical volumes and maintains a record of any mirrored writes not yet successfully completed at the time of a system crash. This metric is reported as "na" for VERITAS Volume Manager. On HP-UX 11i and beyond, data is available from VERITAS Volume Manager (VxVM). LVM (Logical Volume Manager) uses the terminology "volume group" to describe a set of related volumes. VERITAS Volume Manager uses the terminology "disk group" to describe a collection of VM disks. For additional information on VERITAS Volume Manager, see vxintro(1M). ====== LV_CACHE_MISS The number of requests that were not satisfied from the Mirror Write Cache (MWC) during the interval. The MWC is disabled with the lvchange(1M) command ("lvchange -M n..."), which may increase system performance, but slow down recovery in the event of a system failure. This metric is reported as "na" for VERITAS Volume Manager. On HP-UX 11i and beyond, data is available from VERITAS Volume Manager (VxVM). LVM (Logical Volume Manager) uses the terminology "volume group" to describe a set of related volumes. VERITAS Volume Manager uses the terminology "disk group" to describe a collection of VM disks. For additional information on VERITAS Volume Manager, see vxintro(1M). ====== LV_WRITEV_LV The write verify parameter that is configured for this logical volume. If the write verify flag is set to "Verify" then every write to the logical volume is verified between logical and physical write for data integrity. This can reduce performance, but data integrity needs may justify such situations. ====== LV_LOGLP_LV On SUN, this is the total number of plexes configured for this logical volume. This metric is reported as "na" for volume groups since it is not applicable. On AIX, this is the total number of logical partitions configured for this logical volume. ====== LV_OPEN_PV The number of physical volumes that are configured for use by this volume group. A logical volume group can spread across multiple physical volumes. ====== LV_INTERVAL The amount of time in the interval. ====== LV_PHYSLV_SIZE On SUN, this is the physical size in MBs of this logical volume or metadevice. This metric is reported as "na" for volume groups since it is not applicable. On AIX, this is the physical size in MBs of this logical volume. ====== LV_TYPE_LV This metric is only applicable for DiskSuite metadevices and it can be one of the following: TRANS RAID MIRROR CONCAT/STRIPE TRANS A metadevice called the trans device manages the UFS log. 
The trans normally has 2 metadevices: MASTER DEVICE, contains the file system that is being logged. Can be used as a block device (up to 2 Gbytes) or a raw device (up to 1 Tbyte). LOGGING DEVICE, contains the log and can be shared by several file systems. The log is a sequence of records, each of which describes a change to a file system. RAID Redundant Array of Inexpensive Disks. A scheme for classifying data distribution and redundancy. MIRROR For high data availability, DiskSuite can write data in metadevices to other metadevices. A mirror is a metadevice made of one or more concatenations or striped metadevices. Concatenation is the combining of two or more physical components into a single metadevice by treating slices (partitions) as a logical device. STRIPE (or Striping) For increased performance, you can create striped metadevices (or "stripes"). Striping is creating a single metadevice by interlacing data on slices across disks. After a striped metadevice is created, read/write requests are spread to multiple disk controllers, increasing performance. The type of the virtual disk. This is also returned from the dkconfig(8) command and can be one of the following: SIMPLE Simple virtual disk which consists of only one piece. CONCATENATED Virtual disk which consists of several concatenated pieces. MIRROR Virtual disk which performs software disk mirroring. STATESAVE Statesave disk for MIRROR virtual disk. STRIPE Virtual disk which consists of several striped pieces. INTERLEAVED Virtual disk which consists of several interleaved pieces. MEMORY Memory based virtual disk. ARRAY Software disk array. RAID Software RAID disk. UNKNOWN Unknown virtual disk type. Virtual disks function identically to traditional physical disks, but their relation to physical disks is determined from a mapping of a physical disk (or disks) to a virtual disk (vdisk(1M)). This is done by means of a virtual disk configuration file, /etc/dktab. ====== LV_READ_RATE The number of physical reads per second for this logical volume during the interval. This may not correspond to the physical read rate from a particular disk drive since a logical volume may be composed of many disk drives or it may be a subset of a disk drive. An individual physical read from one logical volume may span multiple individual disk drives. Since this is a physical read rate, there may not be any correspondence to the logical read rate since many small reads are satisfied in the buffer cache, and large logical read requests must be broken up into physical read requests. The number of physical reads per second for this logical volume during the interval. The number of physical reads per second for this logical volume during the interval. This may not correspond to the physical read rate from a particular disk drive since a logical volume may be composed of many disk drives or it may be a subset of a disk drive. An individual physical read from one logical volume may span multiple individual disk drives. Since this is a physical read rate, there may not be any correspondence to the logical read rate since many small reads are satisfied in the buffer cache, and large logical read requests must be broken up into physical read requests. DiskSuite metadevices are not supported. This metric is reported as "na" for volume groups since it is not applicable. The number of physical reads per second for the current virtual disk during the interval.
Virtual disks function identically to traditional physical disks, but their relation to physical disks is determined from a mapping of a physical disk (or disks) to a virtual disk (vdisk(1M)). This is done by means of a virtual disk configuration file, /etc/dktab. The number of physical reads per second for this logical volume during the interval. This may not correspond to the physical read rate from a particular disk drive since a logical volume may be composed of many disk drives or it may be a subset of a disk drive. An individual physical read from one logical volume may span multiple individual disk drives. Since this is a physical read rate, there may not be any correspondence to the logical read rate since many small reads are satisfied in the buffer cache, and large logical read requests must be broken up into physical read requests. The utility volstat can be used to get the data from the shell. ====== LV_READ_BYTE_RATE The number of physical KBs per second read from this logical volume during the interval. Note that bytes read from the buffer cache are not included in this calculation. The number of physical KBs per second read from this logical volume during the interval. DiskSuite metadevices are not supported. This metric is reported as "na" for volume groups since it is not applicable. ====== LV_WRITE_RATE The number of physical writes per second to this logical volume during the interval. This may not correspond to the physical write rate to a particular disk drive since a logical volume may be composed of many disk drives or it may be a subset of a disk drive. Since this is a physical write rate, there may not be any correspondence to the logical write rate since many small writes are combined in the buffer cache, and many large logical writes must be broken up. The number of physical writes per second to this logical volume during the interval. The number of physical writes per second to this logical volume during the interval. This may not correspond to the physical write rate to a particular disk drive since a logical volume may be composed of many disk drives or it may be a subset of a disk drive. Since this is a physical write rate, there may not be any correspondence to the logical write rate since many small writes are combined in the buffer cache, and many large logical writes must be broken up. DiskSuite metadevices are not supported. This metric is reported as "na" for volume groups since it is not applicable. The number of physical writes per second to the current virtual disk during the interval. Virtual disks function identically to traditional physical disks, but their relation to physical disks is determined from a mapping of a physical disk (or disks) to a virtual disk (vdisk(1M)). This is done by means of a virtual disk configuration file, /etc/dktab. The number of physical writes per second to this logical volume during the interval. This may not correspond to the physical write rate to a particular disk drive since a logical volume may be composed of many disk drives or it may be a subset of a disk drive. Since this is a physical write rate, there may not be any correspondence to the logical write rate since many small writes are combined in the buffer cache, and many large logical writes must be broken up. The utility volstat can be used to get the data from the shell. ====== LV_WRITE_BYTE_RATE The number of KBs per second written to this logical volume during the interval. The number of KBs per second written to this logical volume during the interval. 
DiskSuite metadevices are not supported. This metric is reported as "na" for volume groups since it is not applicable. ====== LV_AVG_READ_SERVICE_TIME The average time, in milliseconds, that this logical volume spent processing each read request during the interval. For example, a value of 5.14 would indicate that read requests during the last interval took on average slightly longer than five one-thousandths of a second to complete for this device. This metric can be used to help determine which logical volumes are taking more time than usual to process requests. This metric is reported as "na" for LVM. On HP-UX 11i and beyond, data is available from VERITAS Volume Manager (VxVM). LVM (Logical Volume Manager) uses the terminology "volume group" to describe a set of related volumes. VERITAS Volume Manager uses the terminology "disk group" to describe a collection of VM disks. For additional information on VERITAS Volume Manager, see vxintro(1M). DiskSuite metadevices are not supported. This metric is reported as "na" for volume groups since it is not applicable. ====== LV_AVG_WRITE_SERVICE_TIME The average time, in milliseconds, that this logical volume spent processing each write request during the interval. For example, a value of 5.14 would indicate that write requests during the last interval took on average slightly longer than five one-thousandths of a second to complete for this device. This metric can be used to help determine which logical volumes are taking more time than usual to process requests. This metric is reported as "na" for LVM. On HP-UX 11i and beyond, data is available from VERITAS Volume Manager (VxVM). LVM (Logical Volume Manager) uses the terminology "volume group" to describe a set of related volumes. VERITAS Volume Manager uses the terminology "disk group" to describe a collection of VM disks. For additional information on VERITAS Volume Manager, see vxintro(1M). DiskSuite metadevices are not supported. This metric is reported as "na" for volume groups since it is not applicable. ====== LV_SPACE_UTIL Percentage of the logical volume file system space in use during the interval. A value of "na" is displayed for volume groups and logical volumes which have no mounted filesystem. ====== NetIF ====== BYNETIF_IN_BYTE The number of KBs received from the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate. If BYNETIF_NET_TYPE is "ESXVLan", then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "na" for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== BYNETIF_OUT_BYTE The number of KBs sent to the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate. 
If BYNETIF_NET_TYPE is "ESXVLan", then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "na" for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== BYNETIF_UTIL The percentage of bandwidth used with respect to the total available bandwidth on a given network interface at the end of the interval. On vMA this value will be N/A for those Lan cards which are of type ESXVLan. Some AIX systems report a speed that is lower than the measured throughput and this can result in BYNETIF_UTIL and GBL_NET_UTIL_PEAK showing more than 100% utilization. On Linux, root permission is required to obtain network interface bandwidth so values will be n/a when running in non-root mode. Also, maximum bandwidth for virtual interfaces (vnetN) may be reported wrongly on KVM or Xen server so, similarly to AIX, utilization may exceed 100%. ====== BYNETIF_NET_TYPE The type of network device the interface communicates through. Lan - local area network card Loop - software loopback interface (not tied to a hardware device) Loop6 - software loopback interface IPv6 (not tied to a hardware device) Serial - serial modem port Vlan - virtual lan Wan - wide area network card Tunnel - tunnel interface Apa - HP LinkAggregate Interface (APA) Other - hardware network interface type is unknown. ESXVLan - The card type belongs to network cards of ESX hosts which are monitored on vMA. ====== BYNETIF_NAME The name of the network interface. For HP-UX 11.0 and beyond, these are the same names that appear in the "Description" field of the "lanadmin" command output. On all other Unix systems, these are the same names that appear in the "Name" column of the "netstat -i" command. Some examples of device names are: lo - loop-back driver ln - Standard Ethernet driver en - Standard Ethernet driver le - Lance Ethernet driver ie - Intel Ethernet driver tr - Token-Ring driver et - Ether Twist driver bf - fiber optic driver All of the device names will have the unit number appended to the name. For example, a loop-back device in unit 0 will be "lo0". On vMA for Lan cards which are of type ESXVLan, this metric contains the vmnic(number) as first half and the second half is the ESX host name. ====== BYNETIF_MAC_ADDR The mac address of the network interface. ====== BYNETIF_PACKET_RATE The number of successful physical packets per second sent and received through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. If BYNETIF_NET_TYPE is "ESXVLan", then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. 
The values returned for the loopback interface will show "na" for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== BYNETIF_IN_PACKET The number of successful physical packets received through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. For HP-UX, this will be the same as the sum of the "Inbound Unicast Packets" and "Inbound Non-Unicast Packets" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the "Ipkts" column (RX-OK on Linux) from the "netstat -i" command for a network device. See also netstat(1). If BYNETIF_NET_TYPE is "ESXVLan", then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "na" for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== BYNETIF_IN_PACKET_RATE The number of successful physical packets per second received through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. If BYNETIF_NET_TYPE is "ESXVLan", then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "na" for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. 
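The interval counts and per-second rates above are conceptually derived from cumulative interface counters such as those shown by "netstat -i" or "lanadmin". The following Python sketch is illustrative only; the function names are hypothetical and the counter-wrap handling is an assumption, not a description of the collector's internals. It shows how a cumulative packet counter sampled at the start and end of an interval yields an interval count and a rate comparable to BYNETIF_IN_PACKET and BYNETIF_IN_PACKET_RATE.

    # Illustrative sketch only: interval count and rate from two samples of a
    # cumulative interface packet counter. Names are hypothetical.
    def interval_packets(prev_count, curr_count, counter_bits=32):
        """Packets seen during the interval, allowing for counter wrap (assumed 32-bit)."""
        delta = curr_count - prev_count
        if delta < 0:                     # cumulative counter wrapped around
            delta += 1 << counter_bits
        return delta

    def packet_rate(prev_count, curr_count, interval_seconds):
        """Packets per second over the interval (compare BYNETIF_IN_PACKET_RATE)."""
        return interval_packets(prev_count, curr_count) / interval_seconds

    # Counters sampled 60 seconds apart:
    print(interval_packets(1200000, 1206000))    # 6000 packets in the interval
    print(packet_rate(1200000, 1206000, 60.0))   # 100.0 packets per second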
====== BYNETIF_OUT_PACKET The number of successful physical packets sent through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. For HP-UX, this will be the same as the sum of the "Outbound Unicast Packets" and "Outbound Non-Unicast Packets" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the "Opkts" column (TX-OK on Linux) from the "netstat -i" command for a network device. See also netstat(1). If BYNETIF_NET_TYPE is "ESXVLan", then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "na" for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== BYNETIF_OUT_PACKET_RATE The number of successful physical packets per second sent through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. If BYNETIF_NET_TYPE is "ESXVLan", then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "na" for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== BYNETIF_COLLISION The number of physical collisions that occurred on the network interface during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets. This data is not collected for non-broadcasting devices, such as loopback (lo), and is always zero. For HP-UX, this will be the same as the sum of the "Single Collision Frames", "Multiple Collision Frames", "Late Collisions", and "Excessive Collisions" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. 
As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only. For most other Unix systems, this is the same as the sum of the "Coll" column from the "netstat -i" command ("collisions" from the "netstat -i -e" command on Linux) for a network device. See also netstat(1). If BYNETIF_NET_TYPE is "ESXVLan", then this metric will be N/A. AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "na" for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. ====== BYNETIF_COLLISION_RATE The number of physical collisions per second on the network interface during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets. This data is not collected for non-broadcasting devices, such as loopback (lo), and is always zero. If BYNETIF_NET_TYPE is "ESXVLan", then this metric will be N/A. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "na" for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. ====== BYNETIF_COLLISION_1_MIN_RATE The number of physical collisions per minute on the network interface during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets. This data is not collected for non-broadcasting devices, such as loopback (lo), and is always zero. If BYNETIF_NET_TYPE is "ESXVLan", then this metric will be N/A. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. 
The values returned for the loopback interface will show "na" for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== BYNETIF_ERROR The number of physical errors that occurred on the network interface during the interval. An increasing number of errors may indicate a hardware problem in the network. On Unix systems, this data is not available for loop-back (lo) devices and is always zero. For HP-UX, this will be the same as the sum of the "Inbound Errors" and "Outbound Errors" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of "Ierrs" (RX-ERR on Linux) and "Oerrs" (TX-ERR on Linux) from the "netstat -i" command for a network device. See also netstat(1). If BYNETIF_NET_TYPE is "ESXVLan", then this metric will be N/A. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "na" for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. ====== BYNETIF_ERROR_RATE The number of physical errors per second on the network interface during the interval. On Unix systems, this data is not available for loop-back (lo) devices and is always zero. If BYNETIF_NET_TYPE is "ESXVLan", then this metric will be N/A. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "na" for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. 
On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. ====== BYNETIF_ERROR_1_MIN_RATE The number of physical errors per minute on the network interface during the interval. On Unix systems, this data is not available for loop-back (lo) devices and is always zero. If BYNETIF_NET_TYPE is "ESXVLan", then this metric will be N/A. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "na" for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== BYNETIF_DEFERRED The number of physical outbound packets that were deferred due to the network being in use during the interval. On Unix systems, this data is not available for loop-back (lo) devices and is always zero. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== BYNETIF_DEFERRED_RATE The number of physical outbound packets per second that were deferred due to the network being in use during the interval. On Unix systems, this data is not available for loop-back (lo) devices and is always zero. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== BYNETIF_QUEUE The length of the outbound queue at the time of the last sample. This metric will be the same as the "Outbound Queue Length" values from the output of "lanadmin" utility. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On HP-UX, this metric is only available for LAN interfaces. For WAN (Wide-Area Network) interfaces such as ATM and X.25, with interface names such as el, cip/ixe, and netisdn, this metric returns "na". ====== BYNETIF_INTERVAL The amount of time in the interval. ====== BYNETIF_NET_SPEED The speed of this interface. This is the bandwidth in Mega bits/sec. For Loopback interface, this metric returns "na". Some AIX systems report a speed that is lower than the measured throughput and this can result in BYNETIF_UTIL and GBL_NET_UTIL_PEAK showing more than 100% utilization. On Linux, root permission is required to obtain network interface bandwidth so values will be n/a when running in non-root mode. Also, maximum bandwidth for virtual interfaces (vnetN) may be reported wrongly on KVM or Xen server so, similarly to AIX, utilization may exceed 100%. ====== BYNETIF_NET_MTU The size of the maximum transfer unit (MTU) for this interface. ====== BYNETIF_IN_BYTE_RATE The number of KBs per second received from the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate. If BYNETIF_NET_TYPE is "ESXVLan", then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. 
The values returned for the loopback interface will show "na" for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== BYNETIF_OUT_BYTE_RATE The number of KBs per second sent to the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate. If BYNETIF_NET_TYPE is "ESXVLan", then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "na" for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. ====== BYNETIF_ID The ID number of the network interface. ====== Process ====== PROC_TIME THREAD_TIME The time the data for the process (or kernel threads, if HP-UX/Linux Kernel 2.6 and above) was collected, in local time. ====== PROC_INTERVAL THREAD_INTERVAL The amount of time in the interval. This is the same value for all processes (and kernel threads, if HP-UX/Linux Kernel 2.6 and above), regardless of whether they were alive for the entire interval. Note, calculations such as utilizations or rates are calculated using this standardized process interval (PROC_INTERVAL), rather than the actual alive time during the interval (PROC_INTERVAL_ALIVE). Thus, if a process was only alive for 1 second and used the CPU during its entire life (1 second), but the process sample interval was 5 seconds, it would be reported as using 1/5 or 20% CPU utilization, rather than 100% CPU utilization. ====== PROC_INTEREST THREAD_INTEREST A string containing the reason(s) why the process or thread is of interest, based on the thresholds specified in the parm file. An 'A' indicates that the process or thread exceeds the process CPU threshold, computed using the actual time the process or thread was alive during the interval. A 'C' indicates that the process or thread exceeds the process CPU threshold, computed using the collection interval. Currently, the same CPU threshold is used for both CPU interest reasons. A 'D' indicates that the process or thread exceeds the process disk IO threshold. An 'I' indicates that the process or thread exceeds the IO threshold. An 'M' indicates that the process exceeds the process memory threshold. This interest reason is only meaningful for processes and therefore not shown for threads. 
New processes or threads are identified with an 'N', terminated processes or threads are identified with a 'K'. Note that the parm file 'nonew', 'nokill' and 'shortlived' settings are logging-only options and therefore ignored in Glance components. ====== PROC_TOP_CPU_INDEX THREAD_TOP_CPU_INDEX The index of the process which consumed the most CPU during the interval. From this index, the process PID, process name, and CPU utilization can be obtained. (Even for kernel threads, if HP-UX/Linux Kernel 2.6 and above, this metric returns the index of the process.) This metric is used by the Performance Tools to index into the Data collection interface's internal table. This is not a metric that will be interesting to Tool users. ====== PROC_TOP_DISK_INDEX THREAD_TOP_DISK_INDEX The index of the process which did the most physical IOs during the last interval. On HP-UX, note that NFS-mounted disks are not considered in this calculation. With this index, the PID, process name, and IOs per second can be obtained. This metric is used by the Performance Tools to index into the Data collection interface's internal table. This is not a metric that will be interesting to Tool users. ====== PROC_APP_ID THREAD_APP_ID The ID number of the application to which the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) belonged during the interval. Application "other" always has an ID of 1. There can be up to 999 user-defined applications, which are defined in the parm file. ====== PROC_GROUP_ID THREAD_GROUP_ID On most systems, this is the real group ID number of the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). On AIX, this is the effective group ID number of the process. On HP-UX, this is the effective group ID number of the process if not in setgid mode. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_STATE_FLAG THREAD_STATE_FLAG The Unix STATE flag of the process (or kernel thread, if Linux Kernel 2.6 and above) during the interval. ====== PROC_RUN_TIME THREAD_RUN_TIME The elapsed time since a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) started, in seconds. This metric is less than the interval time if the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) was not alive during the entire first or last interval. On a threaded operating system such as HP-UX 11.0 and beyond, this metric is available for a process or kernel thread. ====== PROC_STOP_REASON_FLAG THREAD_STOP_REASON_FLAG A numeric value for the stop reason. This is used by scopeux instead of the ASCII string returned by PROC_STOP_REASON in order to conserve space in the log file. On a threaded operating system, such as HP-UX 11.0 and beyond, this metric represents a kernel thread characteristic. If this metric is reported for a process, the value for its last executing kernel thread is given. For example, if a process has multiple kernel threads and kernel thread one is the last to execute during the interval, the metric value for kernel thread one is assigned to the process. ====== PROC_INTERVAL_ALIVE THREAD_INTERVAL_ALIVE The number of seconds that the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) was alive during the interval. This may be less than the time of the interval if the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) was new or died during the interval.
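The following Python sketch is illustrative only; it restates the arithmetic from the PROC_INTERVAL description above, showing why a short-lived process is reported against the standardized interval (PROC_INTERVAL) rather than its alive time (PROC_INTERVAL_ALIVE). The variable names are hypothetical and do not reflect the collector's implementation.

    # Illustrative sketch only: utilization against PROC_INTERVAL vs. PROC_INTERVAL_ALIVE.
    def cpu_util_pct(cpu_seconds, interval_seconds):
        return 100.0 * cpu_seconds / interval_seconds

    cpu_used       = 1.0   # seconds of CPU consumed by a short-lived process
    proc_interval  = 5.0   # PROC_INTERVAL: standardized process sample interval
    interval_alive = 1.0   # PROC_INTERVAL_ALIVE: time the process actually existed

    print(cpu_util_pct(cpu_used, proc_interval))    # 20.0  -> value reported by the agent
    print(cpu_util_pct(cpu_used, interval_alive))   # 100.0 -> utilization over its alive time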
====== PROC_REVERSE_PRI The process priority in a range of 0 to 127, with a lower value interpreted as a higher priority. Since priority ranges can be customized, this metric provides a standardized way of interpreting priority that is consistent with other versions of Unix. This is the same value as reported in the PRI field by the ps command when the -c option is not used. ====== PROC_PRMID THREAD_PRMID The PRM Group ID this process is assigned to. The PRM group configuration is kept in the PRM configuration file. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_SCHEDULER THREAD_SCHEDULER The scheduling policy for this process or kernel thread. On HP-UX, the available scheduling policies are: HPUX - Normal timeshare NOAGE - Timeshare without usage decay RTPRIO - HP-UX Real-time FIFO - Posix First In/First Out RR - Posix Round-Robin RR2 - Posix Round-Robin with a per-priority time slice interval On Linux, they are: TS - Normal timeshare FF - Posix First In/First Out RR - Posix Round-Robin B - Batch ISO - Reserved IDL - Idle On a threaded operating system, such as HP-UX 11.0 and beyond, this metric represents a kernel thread characteristic. If this metric is reported for a process, the value for its last executing kernel thread is given. For example, if a process has multiple kernel threads and kernel thread one is the last to execute during the interval, the metric value for kernel thread one is assigned to the process. ====== PROC_USRPRI THREAD_USRPRI The user priority for the process or kernel thread is set by the kernel during scheduling. This value becomes the actual process or kernel thread priority once it returns to user mode from kernel mode. The calculation of the user priority is based on the process or kernel thread CPU usage and the nice value. On a threaded operating system, such as HP-UX 11.0 and beyond, this metric represents a kernel thread characteristic. If this metric is reported for a process, the value for its last executing kernel thread is given. For example, if a process has multiple kernel threads and kernel thread one is the last to execute during the interval, the metric value for kernel thread one is assigned to the process. ====== PROC_EUID THREAD_EUID The Effective User ID of a process(or kernel thread, if HP-UX/Linux Kernel 2.6 and above). On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_THREAD_ID THREAD_THREAD_ID The thread ID number of this kernel thread, used to uniquely identify it. On Linux systems this metric shall be available from Linux Kernel 2.6 onwards. ====== PROC_PROC_NAME THREAD_PROC_NAME The process(or kernel thread, if HP-UX/Linux Kernel 2.6 and above) program name. It is limited to 16 characters. On Unix systems, this is derived from the 1st parameter to the exec(2) system call. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. On Windows, the "System Idle Process" is not reported by Perf Agent since Idle is a process that runs to occupy the processors when they are not executing other threads. Idle has one thread per processor. ====== PROC_USER_NAME THREAD_USER_NAME On Unix systems, this is real user name of a process or the login account (from /etc/passwd) of a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). 
If more than one account is listed in /etc/passwd with the same user ID (uid) field, the first one is used. If an account cannot be found that matches the uid field, then the uid number is returned. This would occur if the account was removed after a process was started. On Windows, this is the process owner account name, without the domain name this account resides in. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_GROUP_NAME THREAD_GROUP_NAME The group name (from /etc/group) of a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). The group identifier is obtained from searching the /etc/passwd file using the user ID (uid) as a key. Therefore, if more than one account is listed in /etc/passwd with the same user ID (uid) field, the first one is used. If no entry can be found for the user ID in /etc/passwd, the group name is the uid number. If no matching entry in /etc/group can be found, the group ID is returned as the group name. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_APP_NAME THREAD_APP_NAME The application name of a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). Processes (or kernel threads, if HP-UX/Linux Kernel 2.6 and above) are assigned into application groups based upon rules in the parm file. If a process does not fit any rules in this file, it is assigned to the application "other." The rules include decisions based upon pathname, user ID, priority, and so forth. As these values change during the life of a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above), it is re-assigned to another application. This re-evaluation is done every measurement interval. ====== PROC_UID THREAD_UID The real UID (user ID number) of a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). This is the UID returned from the getuid system call. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_TTY THREAD_TTY The controlling terminal for a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). This field is blank if there is no controlling terminal. On HP-UX, Linux, and AIX, this is the same as the "TTY" field of the ps command. On all other Unix systems, the controlling terminal name is found by searching the directories provided in the /etc/ttysrch file. See man page ttysrch(4) for details. The matching criteria field ("M", "F" or "I" values) of the ttysrch file is ignored. If a terminal is not found in one of the ttysrch file directories, the following directories are searched in the order listed here: "/dev", "/dev/pts", "/dev/term" and "/dev/xt". When a match is found in one of the "/dev" subdirectories, "/dev/" is not displayed as part of the terminal name. If no match is found in the directory searches, the major and minor numbers of the controlling terminal are displayed. In most cases, this value is the same as the "TTY" field of the ps command. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_TTY_DEV THREAD_TTY_DEV The device number of the controlling terminal for a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). On HP-UX, this metric is specific to a process.
If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_PROC_ID THREAD_PROC_ID The process ID number (or PID) of this process(or associated process for kernel threads, if HP-UX/Linux Kernel 2.6 and above) that is used by the kernel to uniquely identify the process. Process numbers are reused, so they only identify a process for its lifetime. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_PARENT_PROC_ID THREAD_PARENT_PROC_ID The parent process' PID number. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_STOP_REASON THREAD_STOP_REASON A text string describing what caused the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) to stop executing. For example, if the process is waiting for a CPU while higher priority processes are executing, then its block reason is PRI. A complete list of block reasons follows: String Reason for Process Block ------------------------------------ CACHE Waiting at the buffer cache level trying to lock down a buffer cache structure, or waiting for an IO operation to or from a buffer cache to complete. File system access will block on IO more often than CACHE on HP-UX 11.x. CDFS Waiting for CD-ROM file system node structure allocation or locks while accessing a CD-ROM device through the file system. died Process terminated during the interval. DISK Waiting for an IO operation to complete at the logical device manager or disk driver level. Waits from raw disk IO and diagnostic requests can be seen here. Buffered IO requests can also block on DISK, but will more often be seen waiting on "IO". CDFS access will block on "CDFS". Virtual memory activity will block on "VM". GRAPH Waiting for a graphics card or framebuf semaphore operation. INODE Waiting while accessing an inode structure. This includes inode gets and waiting due to inode locks. IO Waiting for IO to local disks, printers, tapes, or instruments to complete (above the driver, but below the buffer cache). Both file system and raw disk access can block in this state. CDFS access will block on "CDFS". Virtual memory activity will block on "VM". IPC Waiting for a process or kernel thread event (that is, waiting for a child to receive a signal). This includes both inter and intra process or kernel thread operations, such as IPC locks, kernel thread mutexes, and database IPC operations. System V message queue operations will block on "MESG", while semaphore operations will block on "SEM". JOBCL Waiting for tracing resume, debug resume, or job control start. A background process incurs this block when attempting to write to a terminal set with "stty tostop". On HP-UX 11i, scheduler activation threads (user threads) will show this block. LAN Waiting for a network IO completion. This includes waiting on the LAN hardware and low level LAN device driver. It does not include waiting on the higher level network software such as the streams based transport or NFS, which has its own stop state. MESG Waiting for a System V message queue operation such as msgrcv or msgsnd. new Process was created (via the fork/vfork system calls) during the interval. NFS Waiting for a Networked File System request to complete. This includes both NFS V2 and V3 requests. 
This does not include stops where kernel threads or deamons are waiting for a NFS event or request (such as biod or nfsd). These will block on SLEEP to show they are waiting for some activity. NONE Zombie process - waiting to die. OTHER The process was started before the midaemon was started and has not been resumed, or the block state is unknown. PIPE Waiting for operations involving pipes. This includes opening, closing, reading, and writing using pipes. Named pipes will block on PIPE. PRI Waiting because a higher priority process is running, or waiting for a spinlock or alpha semaphore. RPC Waiting for remote procedure call operations to complete. This includes both NFS and DCE RPC requests. SEM Waiting for a System V semaphore operation (such as semop, semget, or semctl) or waiting for a memory mapped file semaphore operation (such as msem_init or msem_lock). SLEEP Waiting because the process put itself to sleep using system calls such as sleep, wait, pause, sigpause, poll, sigsuspend and select. This is the standard stop reason for idle system daemons. SOCKT Waiting for an operation to complete while accessing a device through a socket. This is used primarily in networking code and includes all protocols using sockets (X25, UDP, TCP, and so on). STRMS Waiting for an operation to complete while accessing a "streams" device. This is the normal stop reason for kernel threads and daemons waiting for a streams event. This includes the network transport and pseudo terminal IO requests. For example, waiting for a read on a streams device or waiting for an internal streams synchronization. SYSTM Waiting for access to a system resource or lock. These resources include data structures from the LVM, VFS, UFS, JFS, and Disk Quota subsystems. "SYSTM" is the "catch-all" wait state for blocks on system resources that are not common enough or long enough to warrant their own stop state. TERM Waiting for a non-streams terminal transfer (tty or pty). VM Waiting for a virtual memory operation to complete, or waiting for free memory, or blocked while creating/ accessing a virtual memory structure. For a process or kernel thread currently running, the last reason it was stopped before obtaining the CPU is shown. On HP-UX 11.0 and beyond, mikslp.text (located in /opt/perf/lib) contains the blocking functions and their corresponding block states for use by midaemon. On a threaded operating system, such as HP-UX 11.0 and beyond, this metric represents a kernel thread characteristic. If this metric is reported for a process, the value for its last executing kernel thread is given. For example, if a process has multiple kernel threads and kernel thread one is the last to execute during the interval, the metric value for kernel thread one is assigned to the process. SunOS 5.X String Reason for Process Block ------------------------------------ died Process terminated during the interval. new Process was created (via the exec() system call) during the interval. NONE Process is ready to run. It is not apparent that the process is blocked. OTHER Waiting for a reason not decipherable by the measurement software. PMEM Waiting for more primary memory. PRI Process is on the run queue. SLEEP Waiting for an event to complete. TRACE Received a signal to stop because parent is tracing this process. ZOMB Process has terminated and the parent is not waiting. On SunOS 5.X, instead of putting the scheduler to sleep and waking it up, the kernel just stops and continues the scheduler as needed. 
This is done by changing the state of the scheduler to ws_stop, which is when you see the TRACE state. This is for efficiency and happens every clock tick so the "sched" process will always appear to be in a "TRACE" state. String Reason for Process Block ------------------------------------ died Process terminated during the interval. LOCK Waiting either for serialization or phys lock. new Process was created (via the exec() system call) during the interval. NONE Process is ready to run. It is not apparent that the process is blocked. OTHER Waiting for a reason not decipherable by the measurement software. PRI Process is on the run queue. SLEEP Waiting for an event to complete. TIMER Waiting for the timer. TRACE Received a signal to stop because parent is tracing this process. VM Waiting for a virtual memory operation to complete. ZOMB Process has terminated and the parent is not waiting. String Reason for Process Block ------------------------------------ died Process terminated during the interval. new Process was created (via the exec() system call) during the interval. NONE Process is ready to run. It is not apparent that the process is blocked. OTHER Waiting for a reason not decipherable by the measurement software. PMEM Waiting for more primary memory. PRI Process is on the run queue. SLEEP Waiting for an event to complete. TRACE Received a signal to stop the process for tracing. This will occur when a process is stopped waiting on the tty device after having been backgrounded, or when the process is suspended by a debugger, or when a privileged process is accessing its proc structure to get process information. ZOMB Process has terminated and the parent is not waiting. String Reason for Process Block ------------------------------------ died Process terminated during the interval. new Process was created (via the exec() system call) during the interval. NONE Process is ready to run. It is not apparent that the process is blocked. OTHER Waiting for a reason not decipherable by the measurement software. PRI Process is on the run queue. SLEEP Waiting for an event to complete. TRACE Received a signal to stop because parent is tracing this process. ZOMB Process has terminated and the parent is not waiting. ====== PROC_STATE THREAD_STATE A text string summarizing the current state of a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above), either: new This is the first interval the process has been displayed. active Process is continuing. died Process expired during the interval. ====== PROC_PRI THREAD_PRI On Unix systems, this is the dispatch priority of a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) at the end of the interval. The lower the value, the more likely the process is to be dispatched. On Windows, this is the current base priority of this process. On HP-UX, whenever the priority is changed for the selected process or kernel thread, the new value will not be reflected until the process or kernel thread is reactivated if it is currently idle (for example, SLEEPing). On HP-UX, the lower the value, the more the process or kernel thread is likely to be dispatched. Values between zero and 127 are considered to be "real-time" priorities, which the kernel does not adjust. Values above 127 are normal priorities and are modified by the kernel for load balancing. Some special priorities are used in the HP-UX kernel and subsystems for different activities. These values are described in /usr/include/sys/param.h. 
Priorities less than PZERO (153) are not signalable. Note that on HP-UX, many network-related programs such as inetd, biod, and rlogind run at priority 154, which is PPIPE. Just because they run at this priority does not mean they are using pipes. By examining the open files, you can determine if a process or kernel thread is using pipes. For HP-UX 10.0 and later releases, priorities between -32 and -1 can be seen for processes or kernel threads using the Posix Real-time Schedulers. When specifying a Posix priority, the value entered must be in the range from 0 through 31, which the system then remaps to a negative number in the range of -1 through -32. Refer to the rtsched man pages for more information. On a threaded operating system, such as HP-UX 11.0 and beyond, this metric represents a kernel thread characteristic. If this metric is reported for a process, the value for its last executing kernel thread is given. For example, if a process has multiple kernel threads and kernel thread one is the last to execute during the interval, the metric value for kernel thread one is assigned to the process. On AIX, values for priority range from 0 to 127. Processes running at priorities less than PZERO (40) are not signalable. On Windows, the higher the value, the more likely the process or thread is to be dispatched. Values for priority range from 0 to 31. Values of 16 and above are considered to be "realtime" priorities. Threads within a process can raise and lower their own base priorities relative to the process's base priority. ====== PROC_NICE_PRI THREAD_NICE_PRI The nice priority for the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) when it was last dispatched. The value is a bias used to adjust the priority for the process. On AIX, the nice user value, which makes a process less favored than it otherwise would be, has a range of 0-40 with a default value of 20. The value of PUSER is always added to the value of nice to weight the user process down below the range of priorities expected to be in use by system jobs like the scheduler and special wait queues. On all other Unix systems, the value ranges from 0 to 39. A higher value causes a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) to be dispatched less often. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_STARTTIME THREAD_STARTTIME The creation date and time of the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). ====== PROC_CPU_LAST_USED THREAD_CPU_LAST_USED The ID number of the processor that last ran the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). For uni-processor systems, this value is always zero. On a threaded operating system, such as HP-UX 11.0 and beyond, this metric represents a kernel thread characteristic. If this metric is reported for a process, the value for its last executing kernel thread is given. For example, if a process has multiple kernel threads and kernel thread one is the last to execute during the interval, the metric value for kernel thread one is assigned to the process. ====== PROC_CPU_SWITCHES THREAD_CPU_SWITCHES The number of times the process or kernel thread was switched to another processor during the interval. For uni-processor systems, this value is always zero. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads.
If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_MSG_SENT THREAD_MSG_SENT The number of socket messages sent by a process or kernel thread during the interval. This does not include SYSV messages (msgsnd). ====== PROC_MSG_RECEIVED THREAD_MSG_RECEIVED The number of socket messages received by a process or kernel thread during the interval. This does not include SYSV messages (msgrcv). ====== PROC_PROC_ARGV1 THREAD_PROC_ARGV1 The first argument (argv[1]) of the process argument list or the second word of the command line, if present. (For kernel threads, if HP-UX/Linux Kernel 2.6 and above this metric returns the value of the associated process). The HP Performance Agent logs the first 32 characters of this metric. For releases that support the parm file javaarg flag, this metric may not be the first argument. When javaarg=true, the value of this metric is replaced (for java processes only) by the java class or jar name. This can then be useful to construct parm file java application definitions using the argv1= keyword. ====== PROC_PROC_CMD THREAD_PROC_CMD The full command line with which the process was initiated. (For kernel threads, if HP-UX/Linux Kernel 2.6 and above this metric returns the value of the associated process). On HP-UX, the maximum length returned depends upon the version of the OS, but typically up to 1020 characters are available. On other Unix systems, the maximum length is 4095 characters. On Linux, if the command string exceeds 4096 characters, the kernel instrumentation may not report any value. If the command line contains special characters, such as carriage return and tab, these characters will be converted to \r, \t, and so on. ====== PROC_LS_ID PROC_LS_ID represents the zone-id of the zone, this process is running in. This metric is only available on Solaris 10 and above versions. ====== PROC_OPEN THREAD_OPEN The number of file, socket, or pipe opens made by the process or kernel thread during the interval. This corresponds to the number of open(2) system calls. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_CLOSE THREAD_CLOSE The number of file closes made by the process or kernel thread during the interval. This corresponds to the number of close(2) system calls. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_IOCTL THREAD_IOCTL The number of file ioctls made by the process during the interval. ioctls that result in data read from or written to a device are not counted. These are counted under disk and non-disk read and writes. This metric is no longer collected on HP-UX 11.0 and beyond. ====== PROC_IO_BYTE THREAD_IO_BYTE On HP-UX, this is the total number of physical IO KBs (unless otherwise specified) that was used by this process or kernel thread, either directly or indirectly, during the interval. On all other systems, this is the total number of physical IO KBs (unless otherwise specified) that was used by this process during the interval. IOs include disk, terminal, tape and network IO. 
On HP-UX, indirect IOs include paging and deactivation/reactivation activity done by the kernel on behalf of the process or kernel thread. Direct IOs include disk, terminal, tape, and network IO, but exclude all NFS traffic. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On SUN, counts in the MB ranges in general can be attributed to disk accesses and counts in the KB ranges can be attributed to terminal IO. This is useful when looking for processes with heavy disk IO activity. This may vary depending on the sample interval length. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS metrics will report pages of disk IO specifically. The PROC_IO metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have "na" values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread's IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. ====== PROC_FORK THREAD_FORK The total number of fork and vfork system calls executed by this process during the interval. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_SIGNAL THREAD_SIGNAL Number of signals seen by the current process (or kernel thread, if HP-UX) during the lifetime of the process or kernel thread. ====== PROC_DISPATCH THREAD_DISPATCH The number of times the process or kernel thread was made the executing process on the CPU over the interval. This includes dispatches associated with a context switch because some other process or kernel thread had the CPU, as well as those dispatches caused by the process or kernel thread stopping, then resuming, with no other process or kernel thread running in the meantime. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_SYSCALL The number of system calls this process executed during the interval. 
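The following Python sketch is illustrative only; it models the multi-threaded summation convention described above for metrics such as PROC_IO_BYTE and PROC_DISPATCH, where the process value is the sum over its kernel threads, including threads that died during the interval. The data values and names are hypothetical, not output from the agent.

    # Illustrative sketch only: per-kernel-thread usage within one interval.
    thread_io_kb = {
        "tid 101": 12.0,   # still alive at the end of the interval
        "tid 102": 3.5,    # still alive
        "tid 103": 0.5,    # died during the interval -- still counted
    }

    # Reported for a single kernel thread: that thread's own usage.
    print(thread_io_kb["tid 101"])        # 12.0

    # Reported for the process: the sum over alive and died kernel threads.
    print(sum(thread_io_kb.values()))     # 16.0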
====== PROC_IO_BYTE_RATE THREAD_IO_BYTE_RATE On HP-UX, this is the number of physical IO KBs per second that was used by this process or kernel thread, either directly or indirectly, during the interval. On all other systems, this is the number of physical IO KBs per second that was used by this process during the interval. IOs include disk, terminal, tape and network IO. On HP-UX, indirect IOs include paging and deactivation/reactivation activity done by the kernel on behalf of the process or kernel thread. Direct IOs include disk, terminal, tape, and network IO, but exclude all NFS traffic. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On SUN, counts in the MB ranges in general can be attributed to disk accesses and counts in the KB ranges can be attributed to terminal IO. This is useful when looking for processes with heavy disk IO activity. This may vary depending on the sample interval length. Certain types of disk IOs are not counted by AIX at the process level, so they are excluded from this metric. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS metrics will report pages of disk IO specifically. The PROC_IO metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have "na" values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread's IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. ====== PROC_INTERRUPTS THREAD_INTERRUPTS The number of interrupts during the interval. ====== PROC_CPU_TOTAL_UTIL THREAD_CPU_TOTAL_UTIL The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) as a percentage of the total CPU time available during the interval. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On HP-UX, the total CPU utilization is the sum of the CPU utilization components for a process or kernel thread, including system, user, context switch, interrupts processing, realtime, and nice utilization values. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. 
If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On AIX SPLPAR, this metric indicates the total physical processing units consumed by processes. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_TOTAL_TIME THREAD_CPU_TOTAL_TIME The total CPU time, in seconds, consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) during the interval. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On HP-UX, the total CPU time is the sum of the CPU time components for a process or kernel thread, including system, user, context switch, interrupts processing, realtime, and nice utilization values. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. 
If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_SYS_MODE_UTIL THREAD_CPU_SYS_MODE_UTIL The percentage of time that the CPU was in system mode in the context of the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. High system mode CPU utilizations are normal for IO intensive programs. Abnormally high system CPU utilization can indicate that a hardware problem is causing a high interrupt rate. It can also indicate programs that are not using system calls efficiently. A classic "hung shell" shows up with very high system mode CPU because it gets stuck in a loop doing terminal reads (a system call) to a device that never responds. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. 
To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_SYS_MODE_TIME THREAD_CPU_SYS_MODE_TIME The CPU time in system mode in the context of the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_USER_MODE_UTIL THREAD_CPU_USER_MODE_UTIL The percentage of time the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) was using the CPU in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. 
On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_USER_MODE_TIME THREAD_CPU_USER_MODE_TIME The time, in seconds, the process (or kernel threads, if HP-UX/Linux Kernel 2.6 and above) was using the CPU in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_THREAD_COUNT THREAD_THREAD_COUNT The total number of kernel threads for the current process. On Linux systems with Kernel 2.5 and below, every thread has its own process ID so this metric will always be 1. On Solaris systems, this metric reflects the total number of Light Weight Processes (LWPs) associated with the process. ====== PROC_THREAD_ACTIVE The total number of active kernel threads for the current process. ====== PROC_THREAD_SUSPENDED The total number of suspended kernel threads for the current process. 
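The thread-summation and maximum-percentage rules quoted in the process CPU metric descriptions above can be illustrated with a small worked sketch. The following C fragment is illustrative only and is not part of the collector; the per-thread CPU times, interval length, core count and SMT width are assumed example values.

    #include <stdio.h>

    int main(void)
    {
        /* Assumed example values, not measured data. */
        double thread_cpu_secs[] = { 3.0, 2.0, 1.0 }; /* CPU seconds used by each kernel thread */
        int    nthreads          = 3;                 /* would be reported as PROC_THREAD_COUNT */
        double interval_secs     = 5.0;               /* length of the measurement interval */
        int    cores             = 2;                 /* cores on the system */
        int    smt               = 2;                 /* hardware threads per core when multithreading is on */

        /* Process usage is the sum of the usage of its kernel threads
           (alive or died during the interval). */
        double process_cpu_secs = 0.0;
        for (int i = 0; i < nthreads; i++)
            process_cpu_secs += thread_cpu_secs[i];

        /* Process CPU is not averaged over the number of processors, so a
           multi-threaded process can exceed 100%. */
        printf("PROC_CPU_TOTAL_TIME ~ %.1f seconds\n", process_cpu_secs);
        printf("PROC_CPU_TOTAL_UTIL ~ %.1f %%\n", 100.0 * process_cpu_secs / interval_secs);

        /* Upper bounds stated in the descriptions above. */
        printf("maximum without multithreading = %d %%\n", 100 * cores);
        printf("maximum with multithreading    = %d %%\n", 100 * cores * smt);
        return 0;
    }

With these assumed numbers the process reports 6.0 CPU seconds and 120% utilization over a 5-second interval, within the 200% (no multithreading) or 400% (multithreading) ceilings described above.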
====== PROC_THREAD_TERMINATED The total number of terminated kernel threads for the current process. ====== PROC_USER_THREAD_ID THREAD_USER_THREAD_ID The user thread ID number of the last user thread to execute within the context of this process or kernel thread. User threads IDs are used to identify user-level threads of execution within the context of a process. A process may have one or more user threads even if there is only one kernel thread. ====== PROC_CPU_NICE_UTIL THREAD_CPU_NICE_UTIL The percentage of time that this niced process or kernel thread was in user mode during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_NICE_TIME THREAD_CPU_NICE_TIME The time, in seconds, that this niced process or kernel thread was using the CPU in user mode during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. 
Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_NNICE_UTIL THREAD_CPU_NNICE_UTIL The percentage of time that this negatively niced process or kernel thread was in user mode during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_NNICE_TIME THREAD_CPU_NNICE_TIME The time, in seconds, that this negatively niced process or kernel thread was using the CPU in user mode during the interval. 
On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_ALIVE_TOTAL_UTIL THREAD_CPU_ALIVE_TOTAL_UTIL The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) as a percentage of the time it is alive during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_ALIVE_USER_MODE_UTIL THREAD_CPU_ALIVE_USER_MODE_UTIL The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) in user mode as a percentage of the time it is alive during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. 
On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_ALIVE_SYS_MODE_UTIL THREAD_CPU_ALIVE_SYS_MODE_UTIL The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) in system mode as a percentage of the time it is alive during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_REALTIME_UTIL THREAD_CPU_REALTIME_UTIL The percentage of time that this process or kernel thread was at a realtime priority during the interval. The realtime CPU is separated out to allow users to see the effect of using the realtime facilities to alter priority. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). 
To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_REALTIME_TIME THREAD_CPU_REALTIME_TIME The time, in seconds, that the selected process or kernel thread was in user mode at a realtime priority during the interval. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_CSWITCH_UTIL THREAD_CPU_CSWITCH_UTIL The percentage of time spent in context switching the current process or kernel thread during the interval. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. 
This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_CSWITCH_TIME THREAD_CPU_CSWITCH_TIME The time, in seconds, that the process or kernel thread spent in context switching during the interval. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_INTERRUPT_UTIL THREAD_CPU_INTERRUPT_UTIL The percentage of time that this process or kernel thread was in interrupt mode during the last interval. Interrupt mode means that interrupts were being handled while the process or kernel thread was loaded and running on the CPU. The interrupts may have been generated by any process, not just the running process, but they were handled while the process or kernel thread was running and may have had an impact on the performance of this process or kernel thread. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. 
On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_INTERRUPT_TIME THREAD_CPU_INTERRUPT_TIME The time, in seconds, that the process or kernel thread spent processing interrupts during the interval. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_NORMAL_UTIL THREAD_CPU_NORMAL_UTIL The percentage of time that this process or kernel thread was in user mode at a normal priority during the interval. "At a normal priority" means that neither rtprio nor nice had been used to alter the priority of the process or kernel thread during the interval. Normal priority user mode CPU excludes CPU used at real-time and nice priorities. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads.
Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_NORMAL_TIME THREAD_CPU_NORMAL_TIME The time, in seconds, that the selected process or kernel thread was in user mode at normal priority during the interval. Normal priority user mode CPU excludes CPU used at real-time and nice priorities. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_SYSCALL_UTIL THREAD_CPU_SYSCALL_UTIL The percentage of the total CPU time this process or kernel thread spent in system mode (excluding interrupt, context switch, trap, or vfault CPU) during the interval. 
On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. ====== PROC_CPU_SYSCALL_TIME THREAD_CPU_SYSCALL_TIME The time, in seconds, that this process or kernel thread spent executing system calls in system mode, excluding interrupt or context processing, during the interval. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the "-ignore_mt" option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with "-ignore_mt" by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. 
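Each of the process CPU metrics above is documented as a _TIME/_UTIL pair. As a rough rule of thumb (leaving aside the ignore_mt normalization already described), the _UTIL form is the _TIME form expressed as a percentage of the measurement interval. The short C sketch below illustrates that relationship; the interval length and CPU times are assumed example values only.

    #include <stdio.h>

    /* Rough relationship between a process xxx_TIME metric and its xxx_UTIL
       companion, ignoring the multithreading normalization described above.
       All values are assumed examples, not measured data. */
    static double util_pct(double cpu_secs, double interval_secs)
    {
        return 100.0 * cpu_secs / interval_secs;
    }

    int main(void)
    {
        double interval_secs = 60.0;  /* assumed 60-second logging interval */
        double syscall_secs  = 4.5;   /* assumed PROC_CPU_SYSCALL_TIME value */
        double cswitch_secs  = 0.6;   /* assumed PROC_CPU_CSWITCH_TIME value */

        printf("PROC_CPU_SYSCALL_UTIL ~ %.1f %%\n", util_pct(syscall_secs, interval_secs));
        printf("PROC_CPU_CSWITCH_UTIL ~ %.1f %%\n", util_pct(cswitch_secs, interval_secs));
        return 0;
    }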
====== PROC_CPU_TRAP_COUNT THREAD_CPU_TRAP_COUNT The number of times the CPU was in trap handler code for this process or kernel thread during the interval. On HP-UX, all exceptions (including faults) cause traps. These include pfaults (protection faults), vfaults (virtual faults), time slice expiration (rescheduling), zero divide, illegal or privileged instructions, single-stepping, breakpoints, and so on. The kernel trap handler code will switch trap counters for vfaults and pfaults to fault counters when appropriate. As such, the trap count excludes vfaults and pfaults. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_STREAM_WAIT_PCT THREAD_STREAM_WAIT_PCT The percentage of time the process or kernel thread was blocked on streams IO (waiting for a streams IO operation to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_STREAM_WAIT_TIME THREAD_STREAM_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on streams IO (waiting for a streams IO operation to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. 
Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_CHILD_CPU_USER_MODE_UTIL THREAD_CHILD_CPU_USER_MODE_UTIL The percentage of user time accumulated by this process's children processes during the interval. On Unix systems, when a process terminates, its CPU counters (user and system) are accumulated in the parent's "children times" counters. This occurs when the parent waits for (or reaps) the child. See getrusage(2). If the process is an orphan process, its parent becomes the init(1m) process, and its CPU times will be accumulated to the init process upon termination. The PROC _CHILD_ metrics attempt to report these counters in a meaningful way. If these counters were reported unconditionally as they are incremented, they would be misleading. For example, consider a shell process that forks another process and that process accumulates 100 minutes of CPU time. When that process terminates, the shell would report a huge child time utilization for that interval even though it was generally idle, waiting for that child to terminate. The child process was most likely already reported in previous intervals as it used the CPU time, and therefore it would be confusing to report this time in the parent. If, on the other hand, a process was continuously forking short-lived processes during the interval, it would be useful to report the CPU time used by those children processes. The simple algorithm chosen is to only report children times when their total CPU time is less than the process alive interval, and zero otherwise. It is not fool-proof but it generally yields the right results, i.e., if a process reports high child time utilization for several intervals in a row, it could be a runaway forking process. An example of such a runaway process (or "fork bomb") is:
    while true ; do ps -ef | grep something ; done
Moderate children times are also a useful way to identify daemons that rely on child processes, or, in the case of the init process it may indicate that many short-lived orphan processes are being created. Note that this metric is only valid at the process level. It reports CPU time of processes forked and does not report on threads created by processes. The PROC _CHILD metrics have no meaning at the thread level, therefore the thread metric of the same name, on systems that report per-thread data, will show "na". ====== PROC_CHILD_CPU_SYS_MODE_UTIL THREAD_CHILD_CPU_SYS_MODE_UTIL The percentage of system time accumulated by this process's children processes during the interval. On Unix systems, when a process terminates, its CPU counters (user and system) are accumulated in the parent's "children times" counters. This occurs when the parent waits for (or reaps) the child. See getrusage(2). If the process is an orphan process, its parent becomes the init(1m) process, and its CPU times will be accumulated to the init process upon termination. The PROC _CHILD_ metrics attempt to report these counters in a meaningful way. If these counters were reported unconditionally as they are incremented, they would be misleading. For example, consider a shell process that forks another process and that process accumulates 100 minutes of CPU time. When that process terminates, the shell would report a huge child time utilization for that interval even though it was generally idle, waiting for that child to terminate.
The child process was most likely already reported in previous intervals as it used the CPU time, and therefore it would be confusing to report this time in the parent. If, on the other hand, a process was continuously forking short-lived processes during the interval, it would be useful to report the CPU time used by those children processes. The simple algorithm chosen is to only report children times when their total CPU time is less than the process alive interval, and zero otherwise. It is not fool-proof but it generally yields the right results, i.e., if a process reports high child time utilization for several intervals in a row, it could be a runaway forking process. An example of such a runaway process (or "fork bomb") is:
    while true ; do ps -ef | grep something ; done
Moderate children times are also a useful way to identify daemons that rely on child processes, or, in the case of the init process it may indicate that many short-lived orphan processes are being created. Note that this metric is only valid at the process level. It reports CPU time of processes forked and does not report on threads created by processes. The PROC _CHILD metrics have no meaning at the thread level, therefore the thread metric of the same name, on systems that report per-thread data, will show "na". ====== PROC_CHILD_CPU_TOTAL_UTIL THREAD_CHILD_CPU_TOTAL_UTIL The percentage of system + user time accumulated by this process's children processes during the interval. On Unix systems, when a process terminates, its CPU counters (user and system) are accumulated in the parent's "children times" counters. This occurs when the parent waits for (or reaps) the child. See getrusage(2). If the process is an orphan process, its parent becomes the init(1m) process, and its CPU times will be accumulated to the init process upon termination. The PROC _CHILD_ metrics attempt to report these counters in a meaningful way. If these counters were reported unconditionally as they are incremented, they would be misleading. For example, consider a shell process that forks another process and that process accumulates 100 minutes of CPU time. When that process terminates, the shell would report a huge child time utilization for that interval even though it was generally idle, waiting for that child to terminate. The child process was most likely already reported in previous intervals as it used the CPU time, and therefore it would be confusing to report this time in the parent. If, on the other hand, a process was continuously forking short-lived processes during the interval, it would be useful to report the CPU time used by those children processes. The simple algorithm chosen is to only report children times when their total CPU time is less than the process alive interval, and zero otherwise. It is not fool-proof but it generally yields the right results, i.e., if a process reports high child time utilization for several intervals in a row, it could be a runaway forking process. An example of such a runaway process (or "fork bomb") is:
    while true ; do ps -ef | grep something ; done
Moderate children times are also a useful way to identify daemons that rely on child processes, or, in the case of the init process it may indicate that many short-lived orphan processes are being created. Note that this metric is only valid at the process level. It reports CPU time of processes forked and does not report on threads created by processes.
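A minimal C sketch of the child-time reporting rule described above follows. This is not the collector's actual code; the function and variable names, the interval length, and the CPU figures are assumptions chosen to mirror the shell examples in the text.

    #include <stdio.h>

    /* Sketch of the rule described above: children's CPU time is attributed to
       the parent only when it is less than the time the parent was alive during
       the interval; otherwise zero is reported. */
    static double child_cpu_util(double child_cpu_secs, double parent_alive_secs,
                                 double interval_secs)
    {
        if (child_cpu_secs < parent_alive_secs)
            return 100.0 * child_cpu_secs / interval_secs;
        return 0.0; /* e.g. a long-lived child just reaped: already reported while it ran */
    }

    int main(void)
    {
        double interval = 60.0; /* assumed 60-second interval; parent alive the whole time */

        /* A shell continuously forking short-lived children: the time is reported. */
        printf("forking shell: %.1f %%\n", child_cpu_util(12.0, 60.0, interval));

        /* A shell that just reaped a child with 100 minutes of CPU: suppressed. */
        printf("idle shell   : %.1f %%\n", child_cpu_util(100.0 * 60.0, 60.0, interval));
        return 0;
    }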
The PROC _CHILD metrics have no meaning at the thread level, therefore the thread metric of the same name, on systems that report per-thread data, will show "na". ====== PROC_DISK_PHYS_READ THREAD_DISK_PHYS_READ The number of physical reads made by (or for) a process or kernel thread during the last interval. "Disk" refers to a physical drive (that is, "spindle"), not a partition on a drive (unless the partition occupies the entire physical disk). NFS mounted disks are not included in this list. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS metrics will report pages of disk IO specifically. The PROC_IO metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have "na" values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread's IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. ====== PROC_DISK_PHYS_READ_RATE THREAD_DISK_PHYS_READ_RATE The number of physical reads per second made by (or for) a process or kernel thread during the interval. "Disk" refers to a physical drive (that is, "spindle"), not a partition on a drive (unless the partition occupies the entire physical disk). NFS mounted disks are not included in this list. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS metrics will report pages of disk IO specifically. The PROC_IO metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have "na" values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. 
In that case, patches are available that will allow the process instrumentation to report the sum of all thread's IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. ====== PROC_DISK_REM_LOGL_READ THREAD_DISK_REM_LOGL_READ The number of remote logical reads made by a process or kernel thread during the last interval. On HP-UX, the remote logical IOs include all IO requests generated on a local client to a remotely mounted file system or disk. If the logical request is satisfied on the local client (that is, the data is in a local memory buffer), a physical request is not generated. Otherwise, a physical IO request is made to the remote machine to read/write the data. Note that, in either case, a logical IO request is made. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_REM_LOGL_READ_RATE THREAD_DISK_REM_LOGL_READ_RATE The number of remote logical reads per second made by (or for) a process or kernel thread during the interval. On HP-UX, the remote logical IOs include all IO requests generated on a local client to a remotely mounted file system or disk. If the logical request is satisfied on the local client (that is, the data is in a local memory buffer), a physical request is not generated. Otherwise, a physical IO request is made to the remote machine to read/write the data. Note that, in either case, a logical IO request is made. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_REM_LOGL_WRITE THREAD_DISK_REM_LOGL_WRITE Number of remote logical writes made by a process or kernel thread during the interval. On HP-UX, the remote logical IOs include all IO requests generated on a local client to a remotely mounted file system or disk. If the logical request is satisfied on the local client (that is, the data is in a local memory buffer), a physical request is not generated. Otherwise, a physical IO request is made to the remote machine to read/write the data. Note that, in either case, a logical IO request is made. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. 
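The relationship between the remote logical and remote physical counters described above can be sketched in C as follows. This is illustrative pseudocode, not the actual HP-UX buffer cache or NFS client code; the structure, function names, and the local buffer check are assumptions made for the sketch.

    #include <stdbool.h>
    #include <stdio.h>

    /* Counters corresponding to the metric families described above. */
    struct remote_io_counters {
        long rem_logl_read; /* every logical read issued against a remotely mounted file system */
        long rem_phys_read; /* only those requests that had to go out to the remote machine */
    };

    /* Assumed stand-in for the local memory buffer check described in the text. */
    static bool in_local_buffer(long block)
    {
        (void)block;
        return false;
    }

    static void remote_read(struct remote_io_counters *c, long block)
    {
        c->rem_logl_read++;           /* a logical IO request is made in either case */
        if (!in_local_buffer(block))  /* satisfied locally: no physical request is generated */
            c->rem_phys_read++;       /* otherwise a physical request goes to the remote machine */
    }

    int main(void)
    {
        struct remote_io_counters c = { 0, 0 };
        remote_read(&c, 42);
        printf("remote logical reads = %ld, remote physical reads = %ld\n",
               c.rem_logl_read, c.rem_phys_read);
        return 0;
    }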
====== PROC_DISK_REM_LOGL_WRITE_RATE THREAD_DISK_REM_LOGL_WRITE_RATE The number of remote logical writes per second made by (or for) a process or kernel thread during the interval. On HP-UX, the remote logical IOs include all IO requests generated on a local client to a remotely mounted file system or disk. If the logical request is satisfied on the local client (that is, the data is in a local memory buffer), a physical request is not generated. Otherwise, a physical IO request is made to the remote machine to read/write the data. Note that, in either case, a logical IO request is made. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_REM_PHYS_READ THREAD_DISK_REM_PHYS_READ The number of remote physical reads made by (or for) a process or kernel thread during the interval. On HP-UX, if an IO cannot be satisfied in a local client machine's memory buffer, a remote physical IO request is generated. This may or may not require a physical disk IO on the remote system. In either case, the remote IO request is considered a physical request on the local client machine. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_REM_PHYS_READ_RATE THREAD_DISK_REM_PHYS_READ_RATE The number of remote physical reads per second made by (or for) a process or kernel thread during the interval. On HP-UX, if an IO cannot be satisfied in a local client machine's memory buffer, a remote physical IO request is generated. This may or may not require a physical disk IO on the remote system. In either case, the remote IO request is considered a physical request on the local client machine. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_REM_PHYS_WRITE THREAD_DISK_REM_PHYS_WRITE The number of remote physical writes made by (or for) a process or kernel thread during the interval. On HP-UX, if an IO cannot be satisfied in a local client machine's memory buffer, a remote physical IO request is generated. This may or may not require a physical disk IO on the remote system. In either case, the remote IO request is considered a physical request on the local client machine. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads.
If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_REM_PHYS_WRITE_RATE THREAD_DISK_REM_PHYS_WRITE_RATE The number of remote physical writes per second made by (or for) a process or kernel thread during the interval. On HP-UX, if an IO cannot be satisfied in a local client machine's memory buffer, a remote physical IO request is generated. This may or may not require a physical disk IO on the remote system. In either case, the remote IO request is considered a physical request on the local client machine. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_PHYS_WRITE THREAD_DISK_PHYS_WRITE The number of physical writes made by (or for) a process or kernel thread during the last interval. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS metrics will report pages of disk IO specifically. The PROC_IO metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have "na" values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread's IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. ====== PROC_DISK_PHYS_WRITE_RATE THREAD_DISK_PHYS_WRITE_RATE The number of physical writes per second made by (or for) a process or kernel thread during the interval.
"Disk" refers to a physical drive (that is, "spindle"), not a partition on a drive (unless the partition occupies the entire physical disk). NFS mounted disks are not included in this list. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS metrics will report pages of disk IO specifically. The PROC_IO metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have "na" values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread's IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. ====== PROC_DISK_LOGL_READ THREAD_DISK_LOGL_READ The number of disk logical reads made by a process or kernel thread during the interval. Calls destined for NFS mounted files are not counted. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, send, sendto, sendmsg, and ipcsend. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_LOGL_READ_RATE THREAD_DISK_LOGL_READ_RATE The number of logical reads per second made by (or for) a process or kernel thread during the interval. Calls destined for NFS mounted files are not counted. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, send, sendto, sendmsg, and ipcsend. "Disk" refers to a physical drive (that is, "spindle"), not a partition on a drive (unless the partition occupies the entire physical disk). On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads.
If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_LOGL_WRITE THREAD_DISK_LOGL_WRITE Number of disk logical writes made by a process or kernel thread during the interval. Calls destined for NFS mounted files are not counted. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this. On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, send, sendto, sendmsg, and ipcsend. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_LOGL_WRITE_RATE THREAD_DISK_LOGL_WRITE_RATE The number of logical writes per second made by (or for) a process or kernel thread during the interval. NFS mounted disks are not included in this list. "Disk" refers to a physical drive (that is, "spindle"), not a partition on a drive (unless the partition occupies the entire physical disk). On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, send, sendto, sendmsg, and ipcsend. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread.
If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_LOGL_IO THREAD_DISK_LOGL_IO The number of logical IOs made by (or for) a process or kernel thread during the interval. NFS mounted disks are not included in this list. On many Unix systems, logical disk IOs are measured by counting the read and write system calls that are directed to disk devices. Also counted are read and write system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, writev, send, sendto, sendmsg, and ipcsend. "Disk" refers to a physical drive (that is, "spindle"), not a partition on a drive (unless the partition occupies the entire physical disk). On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_LOGL_IO_RATE THREAD_DISK_LOGL_IO_RATE The number of logical IOs per second made by (or for) a process or kernel thread during the interval. NFS mounted disks are not included in this list. On many Unix systems, logical disk IOs are measured by counting the read and write system calls that are directed to disk devices. Also counted are read and write system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, writev, send, sendto, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. For processes which run for less than the measurement interval, this metric is normalized over the measurement interval. For example, a process ran for 1 second and did 50 IOs during its life. If the measurement interval is 5 seconds, it is reported as having done 10 IOs per second. If the measurement interval is 60 seconds, it is reported as having done 50/60 or 0.83 IOs per second. "Disk" refers to a physical drive (that is, "spindle"), not a partition on a drive (unless the partition occupies the entire physical disk). On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread.
If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_FS_READ THREAD_DISK_FS_READ Number of file system physical disk reads made by a process or kernel thread during the interval. Only local disks are counted in this measurement. NFS devices are excluded. These are physical reads generated by user file system access and do not include virtual memory reads, system reads (inode access), or reads relating to raw disk access. An exception is user files accessed via the mmap(2) call, which does not show their physical reads in this category. They appear under virtual memory reads. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_FS_READ_RATE THREAD_DISK_FS_READ_RATE The number of file system physical disk reads made by a process or kernel thread during the interval. Only local disks are counted in this measurement. NFS devices are excluded. These are physical reads generated by user file system access and do not include virtual memory reads, system reads (inode access), or reads relating to raw disk access. An exception is user files accessed via the mmap(2) call, which does not show their physical reads in this category. They appear under virtual memory reads. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_FS_WRITE THREAD_DISK_FS_WRITE Number of file system physical disk writes made by a process or kernel thread during the interval. Only local disks are counted in this measurement. NFS devices are excluded. These are physical writes generated by user file system access and do not include virtual memory writes, system writes (inode updates), or writes relating to raw disk access. An exception is user files accessed via the mmap(2) call, which does not show their physical writes in this category. They appear under virtual memory writes. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. 
On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_FS_WRITE_RATE THREAD_DISK_FS_WRITE_RATE The number of file system physical disk writes made by a process or kernel thread during the interval. Only local disks are counted in this measurement. NFS devices are excluded. These are physical writes generated by user file system access and do not include virtual memory writes, system writes (inode updates), or writes relating to raw disk access. An exception is user files accessed via the mmap(2) call, which does not show their physical writes in this category. They appear under virtual memory writes. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_VM_READ THREAD_DISK_VM_READ Number of virtual memory reads made for a process or kernel thread during the interval. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_VM_WRITE THREAD_DISK_VM_WRITE Number of virtual memory writes made for a process or kernel thread during the interval. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. 
Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_VM_IO THREAD_DISK_VM_IO The number of virtual memory IOs made for a process or kernel thread during the interval. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_VM_IO_RATE THREAD_DISK_VM_IO_RATE The number of virtual memory IOs per second made for a process or kernel thread during the interval. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_PHYS_IO_RATE THREAD_DISK_PHYS_IO_RATE The average number of physical disk IOs per second made by the process or kernel thread during the interval. For processes which run for less than the measurement interval, this metric is normalized over the measurement interval. For example, a process ran for 1 second and did 50 IOs during its life. If the measurement interval is 5 seconds, it is reported as having done 10 IOs per second. If the measurement interval is 60 seconds, it is reported as having done 50/60 or 0.83 IOs per second. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. 
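The normalization rule described above for short-lived processes can be sketched with a few lines of arithmetic; this minimal sketch simply reproduces the 50-IO example from this entry and is illustrative only.

  # Sketch of the rate normalization described above: counts from a process
  # that was alive for only part of the interval are still divided by the
  # full measurement interval, not by the process's alive time.
  def io_rate(io_count, interval_seconds):
      return io_count / float(interval_seconds)

  # The worked example from this entry: 50 IOs during a 1-second lifetime.
  print(io_rate(50, 5))   # 10.0 IOs per second over a 5-second interval
  print(io_rate(50, 60))  # about 0.83 IOs per second over a 60-second interval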
Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS metrics will report pages of disk IO specifically. The PROC_IO metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have "na" values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread's IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. ====== PROC_DISK_SYSTEM_READ THREAD_DISK_SYSTEM_READ Number of file system management physical disk reads made for a process or kernel thread during the interval. File system management IOs are the physical accesses required to obtain or update internal information about the file system structure (inode access). Accesses or updates to user data are not included in this metric. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_SYSTEM_WRITE THREAD_DISK_SYSTEM_WRITE Number of file system management physical disk writes made for a process or kernel thread during the interval. File system management IOs are the physical accesses required to obtain or update internal information about the file system structure (inode access). Accesses or updates to user data are not included in this metric. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_SYSTEM_IO THREAD_DISK_SYSTEM_IO Number of file system management physical disk IOs made for a process or kernel thread during the interval. 
File system management IOs are the physical accesses required to obtain or update internal information about the file system structure (inode access). Accesses or updates to user data are not included in this metric. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_SYSTEM_IO_RATE THREAD_DISK_SYSTEM_IO_RATE The number of file system management physical disk IOs per second made for a process or kernel thread during the interval. File system management IOs are the physical accesses required to obtain or update internal information about the file system structure (inode access). Accesses or updates to user data are not included in this metric. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_RAW_READ THREAD_DISK_RAW_READ Number of raw reads made for a process or kernel thread during the interval. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_RAW_READ_RATE THREAD_DISK_RAW_READ_RATE Rate of raw reads made for a process or kernel thread during the interval. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. 
NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_RAW_WRITE THREAD_DISK_RAW_WRITE Number of raw writes made for a process or kernel thread during the interval. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_RAW_WRITE_RATE THREAD_DISK_RAW_WRITE_RATE Rate of raw writes made for a process or kernel thread during the interval. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_DISK_BLOCK_READ The number of block reads made by a process during the interval. On Sun 5.X (Solaris 2.X or later), these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs).
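To make the distinction above concrete, the sketch below touches a file once through the ordinary read path and once through a memory mapping; per the Solaris and AIX notes in these entries, accesses through the mapping are charged to the virtual memory IO metrics rather than to block IO. The file path is hypothetical and the sketch is illustrative only.

  # Illustrative only; the file path is hypothetical. Two ways of touching
  # the same data: the conventional read(2) path, and a memory mapping.
  # Per the descriptions above, on Solaris 2.x and later and on AIX ordinary
  # file access is memory mapped by the operating system, so the data
  # accesses are charged to the virtual memory IO metrics, while inode and
  # superblock traffic through the buffer cache is what appears as block IO.
  import mmap

  PATH = "/tmp/example.dat"  # hypothetical test file

  with open(PATH, "wb") as f:
      f.write(b"x" * 4096)

  # 1. Conventional read path.
  with open(PATH, "rb") as f:
      data = f.read()

  # 2. Memory-mapped access: the data is touched through page faults on the
  #    mapping rather than through read(2) calls.
  with open(PATH, "rb") as f:
      m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
      first_byte = m[0]
      m.close()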
Note that, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. ====== PROC_DISK_BLOCK_READ_RATE The number of block reads per second made by (or for) a process during the interval. On Sun 5.X (Solaris 2.X or later), these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note that, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. ====== PROC_DISK_BLOCK_WRITE Number of block writes made by a process during the interval. Calls destined for NFS mounted files are not included. On Sun 5.X (Solaris 2.X or later), these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note that, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. ====== PROC_DISK_BLOCK_WRITE_RATE The number of block writes per second made by (or for) a process during the interval. On Sun 5.X (Solaris 2.X or later), these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note that, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. ====== PROC_DISK_BLOCK_IO The number of block IOs made by (or for) a process during the interval. On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache.
Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, block IOs refer to data transferred between disk and the file system buffer cache in block size chunks. Note that, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. ====== PROC_DISK_BLOCK_IO_RATE The number of block IOs per second made by (or for) a process during the interval. On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, block IOs refer to data transferred between disk and the file system buffer cache in block size chunks. Note that, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. ====== PROC_HANDLE_COUNT The total number of handles currently open by this process. This is the sum of the handles opened by each thread in this process. Included in this count are handles created for all types of object, including threads, semaphores, mutexes, files and file mappings. Object creation results in system nonpaged pool memory utilization, which is reflected in the GBL_MEM_SYS_UTIL metric. ====== PROC_NONDISK_LOGL_READ THREAD_NONDISK_LOGL_READ The number of non-disk logical reads (that is, calls to read(2)) made by a process or kernel thread during the interval. "Non-disk" devices include terminals, tapes, and so forth on the local or remote machine. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_NONDISK_LOGL_WRITE THREAD_NONDISK_LOGL_WRITE The number of non-disk logical writes (that is, calls to write(2)) made by a process or kernel thread during the interval. "Non-disk" devices include terminals, tapes, and so forth on the local or remote machine. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread.
If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_NONDISK_PHYS_READ THREAD_NONDISK_PHYS_READ The number of physical non-disk reads made by a process or kernel thread during the interval to buffered/block devices, such as a tape drive. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_NONDISK_PHYS_WRITE THREAD_NONDISK_PHYS_WRITE The number of local/remote physical non-disk writes made by a process or kernel thread during the interval to buffered/block devices, such as a tape drive. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_MEM_RES THREAD_MEM_RES The size (in KB) of resident memory allocated for the process(or kernel thread, if HP-UX/Linux Kernel 2.6 and above). On HP-UX, the calculation of this metric differs depending on whether this process has used any CPU time since the midaemon process was started. This metric is less accurate and does not include shared memory regions in its calculation when the process has been idle since the midaemon was started. On HP-UX, for processes that use CPU time subsequent to midaemon startup, the resident memory is calculated as RSS = sum of private region pages + (sum of shared region pages / number of references) The number of references is a count of the number of attachments to the memory region. Attachments, for shared regions, may come from several processes sharing the same memory, a single process with multiple attachments, or combinations of these. This value is only updated when a process uses CPU. Thus, under memory pressure, this value may be higher than the actual amount of resident memory for processes which are idle because their memory pages may no longer be resident or the reference count for shared segments may have changed. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. A value of "na" is displayed when this information is unobtainable. This information may not be obtainable for some system (kernel) processes. It may also not be available for defunct processes. On AIX, this is the same as the RSS value shown by "ps v". On Windows, this is the number of KBs in the working set of this process. The working set includes the memory pages touched recently by the threads of the process. If free memory in the system is above a threshold, then pages are left in the working set even if they are not in use. 
When free memory falls below a threshold, pages are trimmed from the working set, but not necessarily paged out to disk from memory. If those pages are subsequently referenced, they will be page faulted back into the working set. Therefore, the working set is a general indicator of the memory resident set size of this process, but it will vary depending on the overall status of memory on the system. Note that the size of the working set is often larger than the amount of pagefile space consumed (PROC_MEM_VIRT). ====== PROC_MEM_PRIVATE_RES THREAD_MEM_PRIVATE_RES The size (in KB) of resident memory of private regions only, such as data and stack, currently consumed by this process. On HP-UX, this metric is initialized only when the menu option "Process Memory Region" is activated for the process. A value of "na" is displayed otherwise. A value of "na" is displayed when this information is unobtainable. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_MEM_SHARED_RES THREAD_MEM_SHARED_RES The size (in KB) of resident memory of shared regions only, such as shared text, shared memory, and shared libraries. On HP-UX, this value is not affected by the reference count. A value of "na" is displayed when this information is unobtainable. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_MEM_DATA_RES The resident set size (in KB) of private data segments only, such as data and stack, currently consumed by this process. ====== PROC_MEM_VIRT THREAD_MEM_VIRT The size (in KB) of virtual memory allocated for the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). On HP-UX, this consists of the sum of the virtual set size of all private memory regions used by this process, plus this process' share of memory regions which are shared by multiple processes. For processes that use CPU time, the value is divided by the reference count for those regions which are shared. On HP-UX, this metric is less accurate and does not reflect the reference count for shared regions for processes that were started prior to the midaemon process and have not used any CPU time since the midaemon was started. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. On all other Unix systems, this consists of private text, private data, private stack and shared memory. The reference count for shared memory is not taken into account, so the value of this metric represents the total virtual size of all regions regardless of the number of processes sharing access. Note also that lazy swap algorithms, sparse address space malloc calls, and memory-mapped file access can result in large VSS values. On systems that provide Glance memory regions detail reports, the drilldown detail per memory region is useful to understand the nature of memory allocations for the process. A value of "na" is displayed when this information is unobtainable. This information may not be obtainable for some system (kernel) processes. It may also not be available for defunct processes. On Windows, this is the number of KBs the process has used in the paging file(s). Paging files are used to store pages of memory used by the process, such as local data, that are not contained in other files.
Examples of memory pages which are contained in other files include pages storing a program's .EXE and .DLL files. These would not be kept in pagefile space. Thus, often programs will have a memory working set size (PROC_MEM_RES) larger than the size of its pagefile space. On Linux this value is rounded to PAGESIZE. ====== PROC_MEM_DATA_VIRT THREAD_MEM_DATA_VIRT On SUN, this is the virtual set size (in KB) of the heap memory for this process. Note that heap can reside partially in BSS and partially in the data segment, so its value will not be the same as PROC_REGION_VIRT of the data segment or PROC_REGION_VIRT_DATA, which is the sum of all data segments for the process. On the other non HP-UX systems, this is the virtual set size (in KB) of the data segment for this process(or kernel thread, if Linux Kernel 2.6 and above). A value of "na" is displayed when this information is unobtainable. On AIX, this is the same as the SIZE value reported by "ps v". On Linux this value is rounded to PAGESIZE. ====== PROC_MEM_TEXT_RES Size (in KB) of the private text for a process currently residing in physical memory. On AIX, this is the same as the TRS field shown by "ps v". ====== PROC_MEM_TEXT_VIRT THREAD_MEM_TEXT_VIRT Size (in KB) of the private text for this process(or kernel thread, if Linux Kernel 2.6 and above). On AIX, this is the same as the TSIZ field shown by "ps v". On Linux this value is rounded to PAGESIZE. ====== PROC_MEM_STACK_VIRT THREAD_MEM_STACK_VIRT Size (in KB) of the stack for this process(or kernel thread, if Linux Kernel 2.6 and above). On SUN, the stack is initialized to 8K bytes. On Linux this value is rounded to PAGESIZE. ====== PROC_PAGEFAULT THREAD_PAGEFAULT The number of page faults that occurred during the interval for the process(or kernel threads, if HP-UX/Linux Kernel 2.6 and above). ====== PROC_PAGEFAULT_RATE THREAD_PAGEFAULT_RATE The number of page faults per second that occurred during the interval for the process(or kernel threads, if HP-UX/Linux Kernel 2.6 and above). ====== PROC_MINOR_FAULT THREAD_MINOR_FAULT Number of minor page faults for this process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) during the interval. On HP-UX, major page faults and minor page faults are a subset of vfaults (virtual faults). Stack and heap accesses can cause vfaults, but do not result in a disk page having to be loaded into memory. ====== PROC_MAJOR_FAULT THREAD_MAJOR_FAULT Number of major page faults for this process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) during the interval. On HP-UX, major page faults and minor page faults are a subset of vfaults (virtual faults). Stack and heap accesses can cause vfaults, but do not result in a disk page having to be loaded into memory. ====== PROC_SWAP THREAD_SWAP The number of times the process or kernel thread was deactivated during the interval. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. 
To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. ====== PROC_MEM_VFAULT_COUNT THREAD_MEM_VFAULT_COUNT The number of times the CPU handled vfaults on behalf of this process or kernel thread during the interval. On HP-UX, major page faults and minor page faults are a subset of vfaults (virtual faults). Stack and heap accesses can cause vfaults, but do not result in a disk page having to be loaded into memory. On HP-UX, all exceptions (including faults) cause traps. These include pfaults (protection faults), vfaults (virtual faults), time slice expiration (rescheduling), zero divide, illegal or privileged instructions, single-stepping, breakpoints, and so on. The kernel trap handler code will switch trap counters for vfaults and pfaults to fault counters when appropriate. As such, the trap count excludes vfaults and pfaults. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. ====== PROC_MEM_LOCKED THREAD_MEM_LOCKED The number of KBs of virtual memory allocated by the process, marked as locked memory. On Windows, this is the non-paged pool memory of the process. This memory is allocated from the system-wide non-paged pool, and is not affected by the pageout process. Device drivers may allocate memory from the non-paged pool, charging quota against the current (caller) thread. The kernel and driver code use the non-paged pool for data that should always be in the physical memory. The size of the non-paged pool is limited to approximately 128 MB on Windows NT systems and to 256 MB on Windows 2000 systems. The failure to allocate memory from the non-paged pool can cause a system crash. ====== PROC_DISK_SUBSYSTEM_WAIT_PCT THREAD_DISK_SUBSYSTEM_WAIT_PCT The percentage of time the process or kernel thread was blocked on the disk subsystem (waiting for its file system IOs to complete) during the interval. On HP-UX, this is based on the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing raw IO to a disk are not included in this measurement. On Linux, this is based on the sum of all processes or kernel threads blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. 
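The PROC and THREAD wait-percentage metrics in this group share one convention, detailed in the paragraphs that follow: the time spent in a wait state is divided by the alive time of the kernel thread (or by the summed alive time of all kernel threads of the process) during the interval. A minimal sketch with hypothetical thread values, reproducing the two-thread example given in these entries:

  # Sketch of the wait-percentage convention used by the *_WAIT_PCT metrics:
  # time waited in a state divided by alive time during the interval. The
  # thread values are hypothetical; the process value divides the summed
  # wait time by the summed alive time of its threads.
  def wait_pct(wait_seconds, alive_seconds):
      return 100.0 * wait_seconds / alive_seconds

  # Two threads over a 60-second interval: one created half-way through and
  # then sleeping, one alive the whole interval and waiting on terminal
  # input for half of it (the second example given in these entries).
  threads = [
      {"wait": {"Sleep": 30.0}, "alive": 30.0},
      {"wait": {"Terminal": 30.0}, "alive": 60.0},
  ]

  for t in threads:
      for state, w in t["wait"].items():
          print(state, wait_pct(w, t["alive"]))  # 100.0 Sleep, 50.0 Terminal

  total_alive = sum(t["alive"] for t in threads)  # 90 seconds of alive time
  for state in ("Sleep", "Terminal"):
      total_wait = sum(t["wait"].get(state, 0.0) for t in threads)
      print("process", state, wait_pct(total_wait, total_alive))  # ~33.3 each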
On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_DISK_SUBSYSTEM_WAIT_TIME THREAD_DISK_SUBSYSTEM_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on the disk subsystem (waiting for its file system IOs to complete) during the interval. On HP-UX, this is based on the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing raw IO to a disk are not included in this measurement. On Linux, this is based on the sum of all processes or kernel threads blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_IPC_SUBSYSTEM_WAIT_PCT THREAD_IPC_SUBSYSTEM_WAIT_PCT The percentage of time the process or kernel thread was blocked on the InterProcess Communication (IPC) subsystems (waiting for its interprocess communication activity to complete) during the interval. This is the sum of processes or kernel threads in the IPC, MSG, SEM, PIPE, SOCKT (that is, sockets) and STRMS (that is, streams IO) wait states. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. 
Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_IPC_SUBSYSTEM_WAIT_TIME THREAD_IPC_SUBSYSTEM_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on the InterProcess Communication (IPC) subsystems (waiting for its interprocess communication activity to complete) during the interval. This is the sum of processes or kernel threads in the IPC, MSG, SEM, PIPE, SOCKT (that is, sockets) and STRMS (that is, streams IO) wait states. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_JOBCTL_WAIT_PCT THREAD_JOBCTL_WAIT_PCT The percentage of time the process or kernel thread was blocked on job control (having been stopped with the job control facilities) during the interval. Job control waits include waiting at a debug breakpoint, as well as being blocked attempting to write (from background) to a terminal which has the "stty tostop" option set. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread.
If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_JOBCTL_WAIT_TIME THREAD_JOBCTL_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on job control (having been stopped with the job control facilities) during the interval. Job control waits include waiting at a debug breakpoint, as well as being blocked attempting to write (from background) to a terminal which has the "stty tostop" option set. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, and process collection time starts from the start time of the process or the measurement start time, whichever is more recent. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_GRAPHICS_WAIT_PCT THREAD_GRAPHICS_WAIT_PCT The percentage of time the process or kernel thread was blocked on graphics (waiting for graphics operations to complete) during the interval.
On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_GRAPHICS_WAIT_TIME THREAD_GRAPHICS_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on graphics (waiting for their graphics operations to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_PRI_WAIT_PCT THREAD_PRI_WAIT_PCT The percentage of time during the interval the process or kernel thread was blocked on priority (waiting for its priority to become high enough to get the CPU). On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. 
If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_PRI_WAIT_TIME THREAD_PRI_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on PRI (waiting for its priority to become high enough to get the CPU) during the interval. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_DISK_WAIT_PCT THREAD_DISK_WAIT_PCT The percentage of time the process or kernel thread was blocked on DISK (waiting in the disk drivers for file system disk IO to complete) during the interval. The time spent waiting in the disk drivers is usually very small. Most of the time, processes doing file system IO are waiting on IO or CACHE. Processes waiting for character (raw) IO to a disk device are usually waiting on IO. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. 
The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_DISK_WAIT_TIME THREAD_DISK_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on DISK (waiting in a disk driver for its disk IO to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_MEM_WAIT_PCT THREAD_MEM_WAIT_PCT The percentage of time the process or kernel thread was blocked on memory (waiting for memory resources to become available) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_MEM_WAIT_TIME THREAD_MEM_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on VM (waiting for virtual memory resources to become available) during the interval. 
On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_TERM_IO_WAIT_PCT THREAD_TERM_IO_WAIT_PCT The percentage of time the process or kernel thread was blocked on terminal IO (waiting for its terminal IO to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_TERM_IO_WAIT_TIME THREAD_TERM_IO_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on terminal IO (waiting for its terminal IO to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_IPC_WAIT_PCT THREAD_IPC_WAIT_PCT The percentage of time the process or kernel thread was blocked on IPC (waiting for interprocess communication calls to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. 
Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_IPC_WAIT_TIME THREAD_IPC_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on InterProcess Communication (IPC) (waiting for its interprocess communication calls to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_SLEEP_WAIT_PCT THREAD_SLEEP_WAIT_PCT The percentage of time the process or kernel thread was blocked on SLEEP (waiting to awaken from sleep system calls) during the interval. A process or kernel thread enters the SLEEP state by putting itself to sleep using system calls such as sleep, wait, pause, sigpause, sigsuspend, poll and select. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. 
For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_TOTAL_WAIT_TIME THREAD_TOTAL_WAIT_TIME The total time, in seconds, that the process or kernel thread spent blocked during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_SLEEP_WAIT_TIME THREAD_SLEEP_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on SLEEP (waiting to awaken from sleep system calls) during the interval. A process or kernel thread enters the SLEEP state by putting itself to sleep using system calls such as sleep, wait, pause, sigpause, sigsuspend, poll and select. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_OTHER_IO_WAIT_PCT THREAD_OTHER_IO_WAIT_PCT The percentage of time the process or kernel thread was blocked on "other IO" during the interval. "Other IO" includes all IO directed at a device (connected to the local computer) which is not a terminal or LAN. Examples of "other IO" devices are local printers, tapes, instruments, and disks. Time waiting for character (raw) IO to disks is included in this measurement. Time waiting for file system buffered IO to disks will typically be seen as IO or CACHE wait. Time waiting for IO to NFS disks is reported as NFS wait. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation.
A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_OTHER_IO_WAIT_TIME THREAD_OTHER_IO_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on other IO during the interval. "Other IO" includes all IO directed at a device (connected to the local computer) which is not a terminal or LAN. Examples of "other IO" devices are local printers, tapes, instruments, and disks. Time waiting for character (raw) IO to disks is included in this measurement. Time waiting for file system buffered IO to disks will typically be seen as IO or CACHE wait. Time waiting for IO to NFS disks is reported as NFS wait. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_OTHER_WAIT_PCT THREAD_OTHER_WAIT_PCT The percentage of time the process or kernel thread was blocked on other (unknown) activities during the interval. This includes processes or kernel threads that were started and subsequently suspended before the midaemon was started and have not been resumed, or the block state is unknown. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread.
If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_OTHER_WAIT_TIME THREAD_OTHER_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on other (unknown) activities during the interval. This includes processes or kernel threads that were started and subsequently suspended before the midaemon was started and have not been resumed, or the block state is unknown. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_CACHE_WAIT_PCT THREAD_CACHE_WAIT_PCT The percentage of time the process or kernel thread was blocked on CACHE (waiting for the file system buffer cache to be updated) during the interval. Processes or kernel threads doing raw IO to a disk are not included in this measurement. Processes and kernel threads doing buffered IO to disks normally spend more time blocked on CACHE and IO than on DISK. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. 
For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_CACHE_WAIT_TIME THREAD_CACHE_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on CACHE (waiting for the file system buffer cache to be updated) during the interval. Processes or kernel threads doing raw IO to a disk are not included in this measurement. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_RPC_WAIT_PCT THREAD_RPC_WAIT_PCT The percentage of time the process or kernel thread was blocked on RPC (waiting for remote procedure calls to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_RPC_WAIT_TIME THREAD_RPC_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on RPC (waiting for its remote procedure calls to complete) during the interval. 
On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_NFS_WAIT_PCT THREAD_NFS_WAIT_PCT The percentage of time the process or kernel thread was blocked on NFS (waiting for network file system IO to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_NFS_WAIT_TIME THREAD_NFS_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on NFS (waiting for its network file system IO to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_INODE_WAIT_PCT THREAD_INODE_WAIT_PCT The percentage of time the process or kernel thread was blocked on INODE (waiting for an inode to be updated or to become available) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. 
Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_INODE_WAIT_TIME THREAD_INODE_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on INODE (waiting for an inode to be updated or to become available) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_LAN_WAIT_PCT THREAD_LAN_WAIT_PCT The percentage of time the process or kernel thread was blocked on LAN (waiting for IO over the LAN to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. 
The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_LAN_WAIT_TIME THREAD_LAN_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on LAN (waiting for IO over the LAN to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_MSG_WAIT_PCT THREAD_MSG_WAIT_PCT The percentage of time the process or kernel thread was blocked on messages (waiting for message queue operations to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_MSG_WAIT_TIME THREAD_MSG_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on messages (waiting for message queue operations to complete) during the interval. 
On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_PIPE_WAIT_PCT THREAD_PIPE_WAIT_PCT The percentage of time the process or kernel thread was blocked on PIPE (waiting for pipe communication to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_PIPE_WAIT_TIME THREAD_PIPE_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on PIPE (waiting for pipe communication to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_SOCKET_WAIT_PCT THREAD_SOCKET_WAIT_PCT The percentage of time the process or kernel thread was blocked on sockets (waiting for their IO to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. 
Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_SOCKET_WAIT_TIME THREAD_SOCKET_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on sockets (waiting for its IO to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_SEM_WAIT_PCT THREAD_SEM_WAIT_PCT The percentage of time the process or kernel thread was blocked on semaphores (waiting on a semaphore operation to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. 
The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_SEM_WAIT_TIME THREAD_SEM_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on semaphores (waiting on a semaphore operation to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_SYS_WAIT_PCT THREAD_SYS_WAIT_PCT The percentage of time the process or kernel thread was blocked on system resources during the interval. These resources include data structures from the LVM, VFS, UFS, JFS, and Disk Quota subsystems. "SYSTM" is the "catch-all" wait state for blocks on system resources that are not common enough or long enough to warrant their own stop state. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. 
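As an illustration of the wait-percent calculation described above, the following C sketch computes a process-level wait percentage from hypothetical per-kernel-thread wait and alive times. The structure, field names, and values are illustrative only and are not the collector's internal representation.

  #include <stdio.h>

  /* Illustrative only: per-kernel-thread accounting for a single interval. */
  typedef struct {
      double wait_time;   /* seconds spent in this wait state during the interval */
      double alive_time;  /* seconds the kernel thread was alive during the interval */
  } kthread_sample;

  /* Process-level wait percent = 100 * sum(wait) / sum(alive), summed over all
     kernel threads that were alive (or died) during the interval. */
  static double proc_wait_pct(const kthread_sample *t, int nthreads)
  {
      double wait = 0.0, alive = 0.0;
      for (int i = 0; i < nthreads; i++) {
          wait  += t[i].wait_time;
          alive += t[i].alive_time;
      }
      return (alive > 0.0) ? (100.0 * wait / alive) : 0.0;
  }

  int main(void)
  {
      /* Second example from the text, assuming a 60-second interval:
         thread 1 was created half-way through and slept for its 30 s of life;
         thread 2 was alive all 60 s and waited on terminal input for 30 s. */
      kthread_sample sleep_wait[] = { { 30.0, 30.0 }, {  0.0, 60.0 } };
      kthread_sample term_wait[]  = { {  0.0, 30.0 }, { 30.0, 60.0 } };

      printf("Sleep    %.0f%%\n", proc_wait_pct(sleep_wait, 2));   /* 33 */
      printf("Terminal %.0f%%\n", proc_wait_pct(term_wait, 2));    /* 33 */
      return 0;
  }

The corresponding per-kernel-thread values for the same interval are 100% on Sleep for the first thread (30/30) and 50% on Terminal for the second (30/60), matching the description above.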
====== PROC_SYS_WAIT_TIME THREAD_SYS_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on SYSTM (that is, system resources) during the interval. These resources include data structures from the LVM, VFS, UFS, JFS, and Disk Quota subsystems. "SYSTM" is the "catch-all" wait state for blocks on system resources that are not common enough or long enough to warrant their own stop state. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== PROC_CDFS_WAIT_PCT THREAD_CDFS_WAIT_PCT The percentage of time the process or kernel thread was blocked on CDFS (waiting for its Compact Disk file system IO to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. ====== PROC_CDFS_WAIT_TIME THREAD_CDFS_WAIT_TIME The time, in seconds, that the process or kernel thread was blocked on CDFS (waiting in the CD-ROM driver for Compact Disc file system IO to complete) during the interval. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. 
For multi-threaded processes, the wait times can exceed the length of the measurement interval. ====== Transaction ====== TT_NAME The registered transaction name for this transaction. ====== TT_COUNT TT_CLIENT_COUNT The number of completed transactions during the last interval for this transaction. ====== TT_WALL_TIME TT_CLIENT_WALL_TIME The total time, in seconds, of all transactions completed during the last interval for this transaction. ====== TT_WALL_TIME_PER_TRAN TT_CLIENT_WALL_TIME_PER_TRAN The average transaction time, in seconds, during the last interval for this transaction. ====== TT_INTERVAL TT_CLIENT_INTERVAL The amount of time in the collection interval. ====== TT_SLO_THRESHOLD The upper range (transaction time) of the Service Level Objective (SLO) threshold value. This value is used to count the number of transactions that exceed this user-supplied transaction time value. ====== TT_SLO_COUNT TT_CLIENT_SLO_COUNT The number of completed transactions that violated the defined Service Level Objective (SLO) by exceeding the SLO threshold time during the interval. ====== TT_ABORT TT_CLIENT_ABORT The number of aborted transactions during the last interval for this transaction. ====== TT_ABORT_WALL_TIME TT_CLIENT_ABORT_WALL_TIME The total time, in seconds, of all aborted transactions during the last interval for this transaction. ====== TT_INFO The registered ARM Transaction Information for this transaction. ====== TT_APP_NAME The registered ARM Application name. ====== TT_MEASUREMENT_COUNT The number of user-defined measurements for this transaction class. ====== TT_UPDATE TT_CLIENT_UPDATE The number of updates during the last interval for this transaction class. This count includes update calls for completed and in-progress transactions. ====== TT_INPROGRESS_COUNT The number of transactions in progress (started, but not stopped) at the end of the interval for this transaction class. ====== TT_FAILED TT_CLIENT_FAILED The number of failed transactions during the last interval for this transaction name. ====== TT_FAILED_WALL_TIME TT_CLIENT_FAILED_WALL_TIME The total time, in seconds, of all failed transactions during the last interval for this transaction name. ====== TT_UNAME The registered ARM Transaction User Name for this transaction. If the arm_init function is passed NULL for the appl_user_id field, the user name is blank. If " " is passed instead, the user name is displayed. For example, to show the user name for the armsample1 program, use:
  appl_id = arm_init("armsample1", " ", 0, 0, 0);
To ignore the user name for the armsample1 program, use:
  appl_id = arm_init("armsample1", NULL, 0, 0, 0);
====== TT_APPNO The registered ARM Application/User ID for this transaction class. ====== TT_TRAN_ID The registered ARM Transaction ID for this transaction class as returned by arm_getid(). A unique transaction id is returned for a unique combination of application id (returned by arm_init), transaction name, and meta data buffer contents. ====== TT_UID The registered ARM Transaction User ID for this transaction name. ====== TT_SLO_PERCENT The percentage of transactions that violate service level objectives. ====== TT_TRAN_1_MIN_RATE For this transaction name, the number of completed transactions expressed as a 1-minute rate. For example, if five of these transactions completed in a 5-minute window, the rate is one transaction per minute. ====== TT_CPU_TOTAL_TIME_PER_TRAN The average total CPU time, in seconds, consumed by each completed instance of the transaction during the interval.
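The TT_* metrics above, and the per-transaction (TT_*_PER_TRAN) resource metrics that follow, are reported for applications instrumented with the ARM API. The sketch below shows where those counts come from; it assumes the standard ARM 2.0 interfaces already referenced above (arm_init, arm_getid, arm_start, arm_stop) and uses a hypothetical transaction name.

  /* A minimal ARM instrumentation sketch, assuming the standard ARM 2.0
     <arm.h> interfaces referenced in the TT_UNAME and TT_TRAN_ID entries
     above; the transaction name "order_entry" is hypothetical and
     return-code checking is omitted. */
  #include <arm.h>

  int main(void)
  {
      arm_int32_t appl_id, tran_id, handle;

      /* Register the application.  Passing " " records the user name
         (TT_UNAME); passing NULL leaves it blank. */
      appl_id = arm_init("armsample1", " ", 0, 0, 0);

      /* Register the transaction class; the id returned here is TT_TRAN_ID. */
      tran_id = arm_getid(appl_id, "order_entry", "illustrative detail", 0, 0, 0);

      handle = arm_start(tran_id, 0, 0, 0);    /* one instance begins */
      /* ... transaction work; any arm_update() calls add to TT_UPDATE ... */
      arm_stop(handle, ARM_GOOD, 0, 0, 0);     /* completed instance -> TT_COUNT;
                                                  ARM_ABORT -> TT_ABORT,
                                                  ARM_FAILED -> TT_FAILED */
      arm_end(appl_id, 0, 0, 0);
      return 0;
  }

For the per-transaction resource metrics, the resources attributed to a completed instance are those consumed by the process between its arm_start and arm_stop calls.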
Total CPU time is the sum of the CPU time components for a process or kernel thread, including system, user, context switch, interrupt processing, realtime, and nice utilization values. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_CPU_SYS_MODE_TIME_PER_TRAN The average CPU time in system mode in the context of each completed instance of the transaction during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_CPU_USER_MODE_TIME_PER_TRAN The average time, in seconds, each completed instance of the transaction was using the CPU in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. 
Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_CPU_NICE_TIME_PER_TRAN The average time, in seconds, that each niced instance of the transaction was using the CPU in user mode during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_CPU_NNICE_TIME_PER_TRAN The average time, in seconds, that each negatively niced instance of the transaction was using the CPU in user mode during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. 
If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_CPU_REALTIME_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was in user mode at a realtime priority during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_CPU_CSWITCH_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction spent in context switching during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. 
Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_CPU_INTERRUPT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction spent processing interrupts during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_CPU_NORMAL_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was in user mode at normal priority during the interval. Normal priority user mode CPU excludes CPU used at real-time and nice priorities. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. 
To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_CPU_SYSCALL_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was in system mode, excluding interrupt or context processing, during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_DISK_PHYS_READ_PER_TRAN The average number of physical reads made by (or for) each completed instance of the transaction during the last interval. "Disk" refers to a physical drive (that is, "spindle"), not a partition on a drive (unless the partition occupies the entire physical disk). NFS mounted disks are not included in this list. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_DISK_PHYS_WRITE_PER_TRAN The average number of physical writes made by (or for) each completed instance of the transaction during the last interval. 
"Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_DISK_LOGL_READ_PER_TRAN The average number of disk logical reads made by each completed instance of the transaction during the interval. Calls destined for NFS mounted files are not counted. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). 
====== TT_DISK_LOGL_WRITE_PER_TRAN The average number of disk logical writes made by each completed instance of the transaction during the interval. Calls destined for NFS mounted files are not counted. On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, send, sendto, sendmsg, and ipcsend. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_DISK_LOGL_IO_PER_TRAN The average number of logical IOs made by (or for) each completed instance of the transaction during the interval. NFS mounted disks are not included in this list. "Disk" refers to a physical drive (that is, "spindle"), not a partition on a drive (unless the partition occupies the entire physical disk). On many Unix systems, logical disk IOs are measured by counting the read and write system calls that are directed to disk devices. Also counted are read and write system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, writev, send, sendto, sendmsg, and ipcsend. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval.
To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_DISK_FS_READ_PER_TRAN The average number of file system physical disk reads made by each completed instance of the transaction during the interval. Only local disks are counted in this measurement. NFS devices are excluded. These are physical reads generated by user file system access and do not include virtual memory reads, system reads (inode access), or reads relating to raw disk access. An exception is user files accessed via the mmap(2) call, which does not show their physical reads in this category. They appear under virtual memory reads. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_DISK_FS_WRITE_PER_TRAN The average number of file system physical disk writes made by each completed instance of the transaction during the interval. Only local disks are counted in this measurement. NFS devices are excluded. These are physical writes generated by user file system access and do not include virtual memory writes, system writes (inode updates), or writes relating to raw disk access. An exception is user files accessed via the mmap(2) call, which does not show their physical writes in this category. They appear under virtual memory writes. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. 
Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_DISK_VM_READ_PER_TRAN The average number of virtual memory reads made for each completed instance of the transaction during the interval. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_DISK_VM_WRITE_PER_TRAN The average number of virtual memory writes made for each completed instance of the transaction during the interval. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. 
Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_DISK_PHYS_IO_PER_TRAN The average number of physical disk IOs per second made by each completed instance of the transaction during the interval. For transactions which run for less than the measurement interval, this metric is normalized over the measurement interval. For example, a transaction ran for 1 second and did 50 IOs during its life. If the measurement interval is 5 seconds, it is reported as having done 10 IOs per second. If the measurement interval is 60 seconds, it is reported as having done 50/60 or 0.83 IOs per second. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_DISK_SYSTEM_READ_PER_TRAN The average number of file system management physical disk reads made for each completed instance of the transaction during the interval. File system management IOs are the physical accesses required to obtain or update internal information about the file system structure (inode access). 
Accesses or updates to user data are not included in this metric. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_DISK_SYSTEM_WRITE_PER_TRAN The average number of file system management physical disk writes made for each completed instance of the transaction during the interval. File system management IOs are the physical accesses required to obtain or update internal information about the file system structure (inode access). Accesses or updates to user data are not included in this metric. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. 
To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_DISK_RAW_READ_PER_TRAN The average number of raw reads made for each completed instance of the transaction during the interval. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_DISK_RAW_WRITE_PER_TRAN The average number of raw writes made for each completed instance of the transaction during the interval. "Disk" in this instance refers to any locally attached physical disk drives (that is, "spindles") that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. 
If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_JOBCTL_WAIT_TIME_PER_TRAN The average time that each completed instance of the transaction was blocked on job control (having been stopped with the job control facilities) during the interval. Job control waits include waiting at a debug breakpoint, as well as being blocked attempting to write (from background) to a terminal which has the "stty tostop" option set. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_GRAPHICS_WAIT_TIME_PER_TRAN The average time that each completed instance of the transaction was blocked on graphics (waiting for their graphics operations to complete) during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. 
To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_PRI_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on priority during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_DISK_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on DISK during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_MEM_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on memory during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. 
If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_TERM_IO_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on terminal IO (waiting for its terminal IO to complete) during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_IPC_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on IPC during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). 
If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_TOTAL_WAIT_TIME_PER_TRAN The average total time that each completed instance of the transaction spent blocked during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_SLEEP_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on SLEEP (waiting to awaken from sleep system calls) during the interval. A transaction enters the SLEEP state by putting itself to sleep using system calls such as sleep, wait, pause, sigpause, sigsuspend, poll and select. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. 
If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_OTHER_IO_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on "other IO" during the interval. "Other IO" includes all IO directed at a device (connected to the local computer) which is not a terminal or LAN. Examples of "other IO" devices are local printers, tapes, instruments, and disks. Time waiting for character (raw) IO to disks is included in this measurement. Time waiting for file system buffered IO to disks will typically be seen as IO or CACHE wait. Time waiting for IO to NFS disks is reported as NFS wait. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_OTHER_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on other (unknown) activities during the interval. This includes transactions that were started and subsequently suspended before the midaemon was started and have not been resumed, or whose block state is unknown. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time.
If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_CACHE_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on CACHE during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_RPC_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on RPC (waiting for its remote procedure calls to complete) during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). 
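Note: the "multiply by TT_COUNT" calculation described in the entries above can be illustrated with a short, hypothetical C sketch. The metric names and values below are illustrative placeholders for data exported from the Performance Collection Component, not output of any HP tool:

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical exported values for one collection interval */
        double tt_disk_wait_time_per_tran = 0.25;  /* average seconds blocked on DISK per completed instance */
        double tt_count                   = 40.0;  /* completed transaction instances in the interval */

        /* Total accumulated DISK wait for all completed instances in the interval */
        double total_disk_wait = tt_disk_wait_time_per_tran * tt_count;

        printf("Total DISK wait this interval: %.2f seconds\n", total_disk_wait);
        return 0;
    }

The same multiplication applies to any of the *_WAIT_TIME_PER_TRAN metrics in this section.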
====== TT_NFS_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on NFS (waiting for its network file system IO to complete) during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_INODE_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on INODE (waiting for an inode to be updated or to become available) during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_LAN_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on LAN (waiting for IO over the LAN to complete) during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. 
Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_MSG_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on messages (waiting for message queue operations to complete) during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_PIPE_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on PIPE (waiting for pipe communication to complete) during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. 
If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_SOCKET_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on sockets (waiting for its IO to complete) during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_SEM_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on semaphores (waiting on a semaphore operation to complete) during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). 
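Note: the per-transaction metrics in this section are driven by ARM instrumentation (the arm_start and arm_stop calls referred to above) in the measured application. The following minimal C sketch shows how one transaction instance is delimited; the application and transaction names are hypothetical, the prototypes are assumed to follow the ARM 2.0 C binding, and the authoritative declarations and status constants are those in the arm.h header shipped with the agent:

    #include <stddef.h>
    #include <arm.h>   /* ARM instrumentation header; prototypes assumed per the ARM 2.0 C binding */

    /* Everything the process does between arm_start() and arm_stop() is attributed
       to this transaction instance, and the instance is reported as completed in
       the collection interval in which arm_stop() is called. */
    void run_one_transaction(void)
    {
        int appl_id, tran_id, start_handle;

        appl_id      = arm_init("demo_app", NULL, 0, NULL, 0);             /* register the application */
        tran_id      = arm_getid(appl_id, "demo_tran", NULL, 0, NULL, 0);  /* register the transaction name */

        start_handle = arm_start(tran_id, 0, NULL, 0);  /* instance begins; resources accrue from here */
        /* ... the work being measured ... */
        arm_stop(start_handle, 0, 0, NULL, 0);          /* status 0 (good); instance completes and is counted */

        arm_end(appl_id, 0, NULL, 0);                   /* application finished with ARM */
    }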
====== TT_SYS_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on SYSTM (that is, system resources) during the interval. These resources include data structures from the LVM, VFS, UFS, JFS, and Disk Quota subsystems. "SYSTM" is the "catch-all" wait state for blocks on system resources that are not common enough or long enough to warrant their own stop state. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_CDFS_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on CDFS (waiting in the CD-ROM driver for Compact Disc file system IO to complete) during the interval. Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_STREAM_WAIT_TIME_PER_TRAN The average time, in seconds, that each completed instance of the transaction was blocked on streams IO (waiting for a streams IO operation to complete) during the interval.
Per-transaction performance resource metrics represent an average for all completed instances of the given transaction during the interval. If there are no completed transaction instances during an interval, then there are no resources accounted, even though there may be in-progress transactions using resources which have not completed. Resource metrics for in-progress transactions will be shown in the interval after they complete (that is, after the process has called arm_stop). If there is only one completed transaction instance during an interval, then the resources attributed to the transaction will represent the resources used by the process between its call to arm_start and arm_stop, even if arm_start was called before the current interval. Thus, the resource usage time or wall time per transaction can exceed the current collection interval time. If there are several completed transaction instances during an interval for a given transaction, then the resources attributed to the transaction will represent an average for all completed instances during the interval. To obtain the total accumulated resource consumption for all completed transactions during an interval, multiply the resource metric by the number of completed transaction instances during the interval (TT_COUNT). ====== TT_GOLDENRESOURCE_INTERVAL The amount of time in the collection interval. ====== TT_RESOURCE_INTERVAL The amount of time in the collection interval.
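Note: a small worked example may make the interval semantics above concrete. Assume a 300-second collection interval (so TT_RESOURCE_INTERVAL is 300 seconds) and a transaction whose arm_stop follows its arm_start by 450 seconds; the numbers are hypothetical:

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical values: interval length and one transaction's wall time */
        double interval_seconds  = 300.0;  /* TT_RESOURCE_INTERVAL for this interval */
        double tran_wall_seconds = 450.0;  /* elapsed time between arm_start and arm_stop */

        /* The transaction is reported only in the interval in which arm_stop was
           called, so its reported wall time can exceed the interval length. */
        printf("Interval: %.0f s, wall time per transaction: %.0f s (%s the interval)\n",
               interval_seconds, tran_wall_seconds,
               tran_wall_seconds > interval_seconds ? "exceeds" : "is within");
        return 0;
    }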