HP Operations Agent - Performance Collection Component for AIX
Dictionary of Operating System Performance Metrics

Print Date: 12/2012
HP Operations Agent for AIX Release 11.11

*************************************************************

Legal Notices
=============

Warranty
--------
The only warranties for HP products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein
should be construed as constituting an additional warranty. HP shall not be
liable for technical or editorial errors or omissions contained herein.

The information contained herein is subject to change without notice.

Restricted Rights Legend
------------------------
Confidential computer software. Valid license from HP required for possession,
use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer
Software, Computer Software Documentation, and Technical Data for Commercial
Items are licensed to the U.S. Government under vendor's standard commercial
license.

Copyright Notices
-----------------
© Copyright 2010-2012 Hewlett-Packard Development Company, L.P. All rights
reserved.

*****************************************************************

Introduction
============

This dictionary contains definitions of the AIX operating system performance
metrics for the Performance Collection Component.

This document is divided into the following sections:

* "Metric Names by Data Class," which lists the metrics alphabetically by data
  class. Use these metric names when exporting data with the extract utility.
  You can also use these metric names to define alarm conditions in your
  alarmdef file.

* "Metric Definitions," which describes each metric in alphabetical order.

Note that the metric help text is written in a generic format and refers to
the other platforms that also support each metric.
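For example, the alarmdef file refers to these metric names directly in alarm
conditions. The following is a minimal, illustrative alarm definition; the
metric, threshold, and message text are examples only, and the alarmdef
documentation describes the complete syntax:

    ALARM GBL_CPU_TOTAL_UTIL > 90 FOR 5 MINUTES
      START RED ALERT "CPU utilization has exceeded 90% for 5 minutes"
      END RESET ALERT "End of high CPU utilization condition"

Similarly, an export template file used with the extract utility typically
lists the desired metric names by data class (for example, GBL_CPU_TOTAL_UTIL
and GBL_RUN_QUEUE under the global class).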
Metric Names by Data Class ========================== AIX Global Metrics ---------------------------------- BLANK DATE DATE_SECONDS DAY INTERVAL RECORD_TYPE TIME YEAR GBL_ACTIVE_CPU GBL_ACTIVE_CPU_CORE GBL_ACTIVE_PROC GBL_ALIVE_PROC GBL_BLOCKED_IO_QUEUE GBL_COMPLETED_PROC GBL_CPU_CLOCK GBL_CPU_ENTL GBL_CPU_ENTL_UTIL GBL_CPU_HISTOGRAM GBL_CPU_IDLE_TIME GBL_CPU_IDLE_UTIL GBL_CPU_MT_ENABLED GBL_CPU_NUM_THREADS GBL_CPU_PHYSC GBL_CPU_PHYS_SYS_MODE_UTIL GBL_CPU_PHYS_TOTAL_UTIL GBL_CPU_PHYS_USER_MODE_UTIL GBL_CPU_QUEUE GBL_CPU_SYS_MODE_TIME GBL_CPU_SYS_MODE_UTIL GBL_CPU_TOTAL_TIME GBL_CPU_TOTAL_UTIL GBL_CPU_USER_MODE_TIME GBL_CPU_USER_MODE_UTIL GBL_CPU_WAIT_TIME GBL_CPU_WAIT_UTIL GBL_CSWITCH_RATE GBL_DISK_BLOCK_IO GBL_DISK_BLOCK_IO_RATE GBL_DISK_BLOCK_READ GBL_DISK_BLOCK_READ_RATE GBL_DISK_BLOCK_WRITE GBL_DISK_BLOCK_WRITE_RATE GBL_DISK_HISTOGRAM GBL_DISK_PATH_COUNT GBL_DISK_PHYS_BYTE GBL_DISK_PHYS_BYTE_RATE GBL_DISK_PHYS_IO GBL_DISK_PHYS_IO_RATE GBL_DISK_PHYS_READ GBL_DISK_PHYS_READ_BYTE_RATE GBL_DISK_PHYS_READ_PCT GBL_DISK_PHYS_READ_RATE GBL_DISK_PHYS_WRITE GBL_DISK_PHYS_WRITE_BYTE_RATE GBL_DISK_PHYS_WRITE_RATE GBL_DISK_RAW_IO GBL_DISK_RAW_IO_RATE GBL_DISK_RAW_READ GBL_DISK_RAW_READ_RATE GBL_DISK_RAW_WRITE GBL_DISK_RAW_WRITE_RATE GBL_DISK_REQUEST_QUEUE GBL_DISK_TIME_PEAK GBL_DISK_UTIL_PEAK GBL_DISK_VM_IO GBL_DISK_VM_IO_RATE GBL_DISK_VM_READ GBL_DISK_VM_READ_RATE GBL_DISK_VM_WRITE GBL_DISK_VM_WRITE_RATE GBL_FS_SPACE_UTIL_PEAK GBL_HYP_UTIL GBL_INTERRUPT GBL_INTERRUPT_RATE GBL_INTERVAL GBL_LOADAVG GBL_LOADAVG15 GBL_LOADAVG5 GBL_LOST_MI_TRACE_BUFFERS GBL_LS_CPU_NUM_DEDICATED GBL_LS_CPU_NUM_SHARED GBL_LS_NUM_CAPPED GBL_LS_NUM_DEDICATED GBL_LS_NUM_SHARED GBL_LS_NUM_UNCAPPED GBL_LS_PHYS_MEM_CONSUMED GBL_LS_PHYS_MEM_TOTAL GBL_MEM_ACTIVE_VIRT GBL_MEM_CACHE_HIT_PCT GBL_MEM_CACHE_UTIL GBL_MEM_ENTL_UTIL GBL_MEM_FILE_PAGEIN_RATE GBL_MEM_FILE_PAGEOUT_RATE GBL_MEM_FREE GBL_MEM_FREE_UTIL GBL_MEM_PAGEIN GBL_MEM_PAGEIN_BYTE GBL_MEM_PAGEIN_BYTE_RATE GBL_MEM_PAGEIN_RATE GBL_MEM_PAGEOUT GBL_MEM_PAGEOUT_BYTE GBL_MEM_PAGEOUT_BYTE_RATE GBL_MEM_PAGEOUT_RATE GBL_MEM_PAGE_FAULT GBL_MEM_PAGE_FAULT_RATE GBL_MEM_PAGE_REQUEST GBL_MEM_PAGE_REQUEST_RATE GBL_MEM_PG_SCAN GBL_MEM_PG_SCAN_RATE GBL_MEM_PG_STEAL_RATE GBL_MEM_SWAPIN_BYTE GBL_MEM_SWAPIN_BYTE_RATE GBL_MEM_SWAPIN_RATE GBL_MEM_SWAPOUT_BYTE GBL_MEM_SWAPOUT_BYTE_RATE GBL_MEM_SWAPOUT_RATE GBL_MEM_SWAP_QUEUE GBL_MEM_SYS_AND_CACHE_UTIL GBL_MEM_SYS_UTIL GBL_MEM_USER_UTIL GBL_MEM_UTIL GBL_NET_COLLISION GBL_NET_COLLISION_1_MIN_RATE GBL_NET_COLLISION_PCT GBL_NET_COLLISION_RATE GBL_NET_DEFERRED_PCT GBL_NET_ERROR GBL_NET_ERROR_1_MIN_RATE GBL_NET_ERROR_RATE GBL_NET_IN_ERROR_PCT GBL_NET_IN_ERROR_RATE GBL_NET_IN_PACKET GBL_NET_IN_PACKET_RATE GBL_NET_OUT_ERROR_PCT GBL_NET_OUT_ERROR_RATE GBL_NET_OUT_PACKET GBL_NET_OUT_PACKET_RATE GBL_NET_PACKET_RATE GBL_NET_UTIL_PEAK GBL_NFS_CALL GBL_NFS_CALL_RATE GBL_NUM_NETWORK GBL_NUM_ONLINE_VCPU GBL_NUM_USER GBL_NUM_VIRTUAL_TARGETS GBL_OTHER_QUEUE GBL_POOL_CPU_AVAIL GBL_POOL_TOTAL_UTIL GBL_PROC_RUN_TIME GBL_PROC_SAMPLE GBL_RUN_QUEUE GBL_STARTED_PROC GBL_STARTED_PROC_RATE GBL_STATTIME GBL_SUSPENDED_PROCS GBL_SWAP_SPACE_USED GBL_SWAP_SPACE_USED_UTIL GBL_SWAP_SPACE_UTIL GBL_SYSCALL GBL_SYSCALL_RATE GBL_SYSCALL_READ_BYTE_RATE GBL_SYSCALL_WRITE_BYTE_RATE GBL_SYSTEM_UPTIME_HOURS GBL_SYSTEM_UPTIME_SECONDS GBL_TOTAL_DISPATCH_TIME GBL_TT_OVERFLOW_COUNT GBL_VCSWITCH_RATE STATDATE STATTIME TBL_FILE_TABLE_USED TBL_MSG_TABLE_USED TBL_PROC_TABLE_USED TBL_SEM_TABLE_USED TBL_SHMEM_ACTIVE TBL_SHMEM_TABLE_USED TBL_SHMEM_USED AIX Application Metrics 
---------------------------------- BLANK DATE DATE_SECONDS DAY INTERVAL RECORD_TYPE TIME YEAR APP_ACTIVE_PROC APP_ALIVE_PROC APP_COMPLETED_PROC APP_CPU_SYS_MODE_TIME APP_CPU_SYS_MODE_UTIL APP_CPU_TOTAL_TIME APP_CPU_TOTAL_UTIL APP_CPU_USER_MODE_TIME APP_CPU_USER_MODE_UTIL APP_DISK_BLOCK_IO APP_DISK_BLOCK_IO_RATE APP_DISK_BLOCK_READ APP_DISK_BLOCK_READ_RATE APP_DISK_BLOCK_WRITE APP_DISK_BLOCK_WRITE_RATE APP_DISK_PHYS_IO APP_DISK_PHYS_IO_RATE APP_IO_BYTE APP_IO_BYTE_RATE APP_MAJOR_FAULT_RATE APP_MEM_RES APP_MEM_UTIL APP_MEM_VIRT APP_MINOR_FAULT_RATE APP_NAME APP_NUM APP_PRI APP_PRI_STD_DEV APP_PROC_RUN_TIME APP_SAMPLE APP_SUSPENDED_PROCS AIX Process Metrics ---------------------------------- BLANK DATE DATE_SECONDS DAY INTERVAL RECORD_TYPE TIME YEAR PROC_APP_ID PROC_CPU_ALIVE_SYS_MODE_UTIL PROC_CPU_ALIVE_TOTAL_UTIL PROC_CPU_ALIVE_USER_MODE_UTIL PROC_CPU_SYS_MODE_TIME PROC_CPU_SYS_MODE_UTIL PROC_CPU_TOTAL_TIME PROC_CPU_TOTAL_TIME_CUM PROC_CPU_TOTAL_UTIL PROC_CPU_TOTAL_UTIL_CUM PROC_CPU_USER_MODE_TIME PROC_CPU_USER_MODE_UTIL PROC_DISK_BLOCK_IO PROC_DISK_BLOCK_IO_CUM PROC_DISK_BLOCK_IO_RATE PROC_DISK_BLOCK_IO_RATE_CUM PROC_DISK_BLOCK_READ PROC_DISK_BLOCK_READ_RATE PROC_DISK_BLOCK_WRITE PROC_DISK_BLOCK_WRITE_RATE PROC_FORCED_CSWITCH PROC_GROUP_ID PROC_INTEREST PROC_INTERVAL_ALIVE PROC_IO_BYTE PROC_IO_BYTE_CUM PROC_IO_BYTE_RATE PROC_IO_BYTE_RATE_CUM PROC_MAJOR_FAULT PROC_MEM_RES PROC_MEM_VIRT PROC_MINOR_FAULT PROC_PAGEFAULT PROC_PAGEFAULT_RATE PROC_PARENT_PROC_ID PROC_PRI PROC_PROC_ARGV1 PROC_PROC_CMD PROC_PROC_ID PROC_PROC_NAME PROC_RUN_TIME PROC_STARTTIME PROC_STOP_REASON PROC_THREAD_COUNT PROC_TTY PROC_USER_NAME PROC_VOLUNTARY_CSWITCH AIX Transaction Metrics ---------------------------------- BLANK DATE DATE_SECONDS DAY INTERVAL RECORD_TYPE TIME YEAR TTBIN_TRANS_COUNT_1 TTBIN_TRANS_COUNT_10 TTBIN_TRANS_COUNT_2 TTBIN_TRANS_COUNT_3 TTBIN_TRANS_COUNT_4 TTBIN_TRANS_COUNT_5 TTBIN_TRANS_COUNT_6 TTBIN_TRANS_COUNT_7 TTBIN_TRANS_COUNT_8 TTBIN_TRANS_COUNT_9 TTBIN_UPPER_RANGE_1 TTBIN_UPPER_RANGE_10 TTBIN_UPPER_RANGE_2 TTBIN_UPPER_RANGE_3 TTBIN_UPPER_RANGE_4 TTBIN_UPPER_RANGE_5 TTBIN_UPPER_RANGE_6 TTBIN_UPPER_RANGE_7 TTBIN_UPPER_RANGE_8 TTBIN_UPPER_RANGE_9 TT_ABORT TT_ABORT_WALL_TIME_PER_TRAN TT_APP_NAME TT_APP_TRAN_NAME TT_CLIENT_ADDRESS TT_CLIENT_ADDRESS_FORMAT TT_CLIENT_TRAN_ID TT_COUNT TT_FAILED TT_INFO TT_NAME TT_NUM_BINS TT_SLO_COUNT TT_SLO_PERCENT TT_SLO_THRESHOLD TT_TERM_TRAN_1_HR_RATE TT_TRAN_1_MIN_RATE TT_TRAN_ID TT_UNAME TT_USER_MEASUREMENT_AVG TT_USER_MEASUREMENT_AVG_2 TT_USER_MEASUREMENT_AVG_3 TT_USER_MEASUREMENT_AVG_4 TT_USER_MEASUREMENT_AVG_5 TT_USER_MEASUREMENT_AVG_6 TT_USER_MEASUREMENT_MAX TT_USER_MEASUREMENT_MAX_2 TT_USER_MEASUREMENT_MAX_3 TT_USER_MEASUREMENT_MAX_4 TT_USER_MEASUREMENT_MAX_5 TT_USER_MEASUREMENT_MAX_6 TT_USER_MEASUREMENT_MIN TT_USER_MEASUREMENT_MIN_2 TT_USER_MEASUREMENT_MIN_3 TT_USER_MEASUREMENT_MIN_4 TT_USER_MEASUREMENT_MIN_5 TT_USER_MEASUREMENT_MIN_6 TT_USER_MEASUREMENT_NAME TT_USER_MEASUREMENT_NAME_2 TT_USER_MEASUREMENT_NAME_3 TT_USER_MEASUREMENT_NAME_4 TT_USER_MEASUREMENT_NAME_5 TT_USER_MEASUREMENT_NAME_6 TT_WALL_TIME_PER_TRAN AIX Disk Metrics ---------------------------------- BLANK DATE DATE_SECONDS DAY INTERVAL RECORD_TYPE TIME YEAR BYDSK_AVG_SERVICE_TIME BYDSK_BUSY_TIME BYDSK_DEVNAME BYDSK_HISTOGRAM BYDSK_ID BYDSK_PHYS_BYTE BYDSK_PHYS_BYTE_RATE BYDSK_PHYS_IO BYDSK_PHYS_IO_RATE BYDSK_PHYS_READ BYDSK_PHYS_READ_BYTE BYDSK_PHYS_READ_BYTE_RATE BYDSK_PHYS_READ_RATE BYDSK_PHYS_WRITE BYDSK_PHYS_WRITE_BYTE BYDSK_PHYS_WRITE_BYTE_RATE BYDSK_PHYS_WRITE_RATE BYDSK_REQUEST_QUEUE 
BYDSK_UTIL AIX Network Interface Metrics ---------------------------------- BLANK DATE DATE_SECONDS DAY INTERVAL RECORD_TYPE TIME YEAR BYNETIF_COLLISION BYNETIF_COLLISION_RATE BYNETIF_ERROR BYNETIF_ERROR_RATE BYNETIF_ID BYNETIF_IN_BYTE BYNETIF_IN_BYTE_RATE BYNETIF_IN_PACKET BYNETIF_IN_PACKET_RATE BYNETIF_NAME BYNETIF_NET_SPEED BYNETIF_NET_TYPE BYNETIF_OUT_BYTE BYNETIF_OUT_BYTE_RATE BYNETIF_OUT_PACKET BYNETIF_OUT_PACKET_RATE BYNETIF_PACKET_RATE BYNETIF_UTIL AIX CPU Metrics ---------------------------------- BLANK DATE DATE_SECONDS DAY INTERVAL RECORD_TYPE TIME YEAR BYCPU_CPU_CLOCK BYCPU_CPU_PHYSC BYCPU_CPU_SYS_MODE_TIME BYCPU_CPU_SYS_MODE_UTIL BYCPU_CPU_TOTAL_TIME BYCPU_CPU_TOTAL_UTIL BYCPU_CPU_USER_MODE_TIME BYCPU_CPU_USER_MODE_UTIL BYCPU_CSWITCH_RATE BYCPU_ID BYCPU_STATE AIX Filesystem Metrics ---------------------------------- BLANK DATE DATE_SECONDS DAY INTERVAL RECORD_TYPE TIME YEAR FS_BLOCK_SIZE FS_DEVNAME FS_DEVNO FS_DIRNAME FS_FRAG_SIZE FS_INODE_UTIL FS_MAX_INODES FS_MAX_SIZE FS_SPACE_RESERVED FS_SPACE_USED FS_SPACE_UTIL FS_TYPE AIX Configuration Metrics ---------------------------------- BLANK DATE DATE_SECONDS DAY INTERVAL RECORD_TYPE TIME YEAR GBL_APP_THRESHOLD GBL_BOOT_TIME GBL_BYCPU_THRESHOLD GBL_BYDSK_THRESHOLD GBL_BYFS_THRESHOLD GBL_BYNETIF_THRESHOLD GBL_COLLECTOR GBL_COLLECT_INTERVAL GBL_COLLECT_INTERVAL_PROC GBL_CPU_ENTL_MAX GBL_CPU_ENTL_MIN GBL_CPU_SHARES_PRIO GBL_FLUSH GBL_GMTOFFSET GBL_IGNORE_MT GBL_JAVAARG GBL_LOGFILE_VERSION GBL_LOGGING_TYPES GBL_LS_ID GBL_LS_MODE GBL_LS_ROLE GBL_LS_SHARED GBL_LS_TYPE GBL_MACHINE GBL_MACHINE_MODEL GBL_MEM_AVAIL GBL_MEM_ENTL_MAX GBL_MEM_ENTL_MIN GBL_MEM_ONLINE GBL_MEM_PHYS GBL_NUM_CPU GBL_NUM_CPU_CORE GBL_NUM_DISK GBL_OSNAME GBL_OSRELEASE GBL_OSVERSION GBL_POOL_CPU_ENTL GBL_POOL_ID GBL_POOL_NUM_CPU GBL_SUBPROCSAMPLEINTERVAL GBL_SWAP_SPACE_AVAIL GBL_SWAP_SPACE_AVAIL_KB GBL_SYSTEM_ID GBL_THRESHOLD_CPU GBL_THRESHOLD_DISK GBL_THRESHOLD_NOKILLED GBL_THRESHOLD_NONEW GBL_THRESHOLD_PROCMEM TBL_BUFFER_CACHE_AVAIL TBL_PROC_TABLE_AVAIL AIX Logical System Metrics ---------------------------------- BLANK DATE DATE_SECONDS DAY INTERVAL RECORD_TYPE TIME YEAR BYLS_CPU_CLOCK BYLS_CPU_ENTL BYLS_CPU_ENTL_MAX BYLS_CPU_ENTL_MIN BYLS_CPU_ENTL_UTIL BYLS_CPU_FAMILY BYLS_CPU_MT_ENABLED BYLS_CPU_PHYSC BYLS_CPU_PHYS_IDLE_MODE_UTIL BYLS_CPU_PHYS_SYS_MODE_UTIL BYLS_CPU_PHYS_TOTAL_UTIL BYLS_CPU_PHYS_USER_MODE_UTIL BYLS_CPU_PHYS_WAIT_MODE_UTIL BYLS_CPU_SHARES_PRIO BYLS_CPU_TOTAL_UTIL BYLS_DISPLAY_NAME BYLS_HYPCALL BYLS_HYP_UTIL BYLS_IP_ADDRESS BYLS_LS_HOSTNAME BYLS_LS_ID BYLS_LS_MODE BYLS_LS_NAME BYLS_LS_PARENT_UUID BYLS_LS_ROLE BYLS_LS_SERIALNO BYLS_LS_SHARED BYLS_LS_STATE BYLS_LS_TYPE BYLS_LS_UUID BYLS_MACHINE_MODEL BYLS_MEM_ENTL BYLS_MEM_ENTL_MAX BYLS_MEM_ENTL_MIN BYLS_MEM_ENTL_UTIL BYLS_MEM_SHARES_PRIO BYLS_MGMT_IP_ADDRESS BYLS_NUM_ACTIVE_LS BYLS_NUM_CPU BYLS_NUM_DISK BYLS_NUM_LS BYLS_NUM_NETIF BYLS_PHANTOM_INTR BYLS_RUN_QUEUE BYLS_UPTIME_SECONDS BYLS_VCSWITCH_RATE Metric Definitions ================== APP_ACTIVE_PROC ---------------------------------- An active process is one that exists and consumes some CPU time. APP_ACTIVE_PROC is the sum of the alive-process-time/interval-time ratios of every process belonging to an application that is active (uses any CPU time) during an interval. The following diagram of a four second interval showing two processes, A and B, for an application should be used to understand the above definition. Note the difference between active processes, which consume CPU time, and alive processes which merely exist on the system. 
                  ----------- Seconds -----------
                     1          2          3          4
  Proc
  ----            ----       ----       ----       ----
  A               live       live       live       live
  B               live/CPU   live/CPU   live       dead

Process A is alive for the entire four second interval, but consumes no CPU.
A’s contribution to APP_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to
APP_ACTIVE_PROC. B’s contribution to APP_ALIVE_PROC is 3*1/4. B contributes
2*1/4 to APP_ACTIVE_PROC. Thus, for this interval, APP_ACTIVE_PROC equals 0.5
and APP_ALIVE_PROC equals 1.75.

Because a process may be alive but not active, APP_ACTIVE_PROC will always be
less than or equal to APP_ALIVE_PROC.

This metric indicates the number of processes in an application group that are
competing for the CPU. This metric is useful, along with other metrics, for
comparing loads placed on the system by different groups of processes.

On non HP-UX systems, this metric is derived from sampled process data. Since
the data for a process is not available after the process has died on this
operating system, a process whose life is shorter than the sampling interval
may not be seen when the samples are taken. Thus this metric may be slightly
less than the actual value. Increasing the sampling frequency captures a more
accurate count, but the overhead of collection may also rise.

APP_ALIVE_PROC
----------------------------------
An alive process is one that exists on the system. APP_ALIVE_PROC is the sum
of the alive-process-time/interval-time ratios for every process belonging to
a given application.

The following diagram of a four second interval showing two processes, A and
B, for an application should be used to understand the above definition. Note
the difference between active processes, which consume CPU time, and alive
processes which merely exist on the system.

                  ----------- Seconds -----------
                     1          2          3          4
  Proc
  ----            ----       ----       ----       ----
  A               live       live       live       live
  B               live/CPU   live/CPU   live       dead

Process A is alive for the entire four second interval but consumes no CPU.
A’s contribution to APP_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to
APP_ACTIVE_PROC. B’s contribution to APP_ALIVE_PROC is 3*1/4. B contributes
2*1/4 to APP_ACTIVE_PROC. Thus, for this interval, APP_ACTIVE_PROC equals 0.5
and APP_ALIVE_PROC equals 1.75.

Because a process may be alive but not active, APP_ACTIVE_PROC will always be
less than or equal to APP_ALIVE_PROC.

On non HP-UX systems, this metric is derived from sampled process data. Since
the data for a process is not available after the process has died on this
operating system, a process whose life is shorter than the sampling interval
may not be seen when the samples are taken. Thus this metric may be slightly
less than the actual value. Increasing the sampling frequency captures a more
accurate count, but the overhead of collection may also rise.

APP_COMPLETED_PROC
----------------------------------
The number of processes in this group that completed during the interval.

On non HP-UX systems, this metric is derived from sampled process data. Since
the data for a process is not available after the process has died on this
operating system, a process whose life is shorter than the sampling interval
may not be seen when the samples are taken. Thus this metric may be slightly
less than the actual value. Increasing the sampling frequency captures a more
accurate count, but the overhead of collection may also rise.

APP_CPU_SYS_MODE_TIME
----------------------------------
The time, in seconds, during the interval that the CPU was in system mode for
processes in this group.
A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. APP_CPU_SYS_MODE_UTIL ---------------------------------- The percentage of time during the interval that the CPU was used in system mode for processes in this group. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. High system CPU utilizations are normal for IO intensive groups. Abnormally high system CPU utilization can indicate that a hardware problem is causing a high interrupt rate. It can also indicate programs that are not making efficient system calls. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. APP_CPU_TOTAL_TIME ---------------------------------- The total CPU time, in seconds, devoted to processes in this group during the interval. 
On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. APP_CPU_TOTAL_UTIL ---------------------------------- The percentage of the total CPU time devoted to processes in this group during the interval. This indicates the relative CPU load placed on the system by processes in this group. On AIX SPLPAR, this metric indicates the total physical processing units consumed by applications. Hence sum of the APP_CPU_TOTAL_UTIL for all applications must be compared with GBL_CPU_PHYS_TOTAL_UTIL. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. Large values for this metric may indicate that this group is causing a CPU bottleneck. This would be normal in a computation-bound workload, but might mean that processes are using excessive CPU time and perhaps looping. If the “other” application shows significant amounts of CPU, you may want to consider tuning your parm file so that process activity is accounted for in known applications. APP_CPU_TOTAL_UTIL = APP_CPU_SYS_MODE_UTIL + APP_CPU_USER_MODE_UTIL NOTE: On Windows, the sum of the APP_CPU_TOTAL_UTIL metrics may not equal GBL_CPU_TOTAL_UTIL. Microsoft states that “this is expected behavior” because the GBL_CPU_TOTAL_UTIL metric is taken from the NT performance library Processor objects while the APP_CPU_TOTAL_UTIL metrics are taken from the Process objects. Microsoft states that there can be CPU time accounted for in the Processor system objects that may not be seen in the Process objects. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. 
Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. APP_CPU_USER_MODE_TIME ---------------------------------- The time, in seconds, that processes in this group were in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. APP_CPU_USER_MODE_UTIL ---------------------------------- The percentage of time that processes in this group were using the CPU in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. High user mode CPU percentages are normal for computation-intensive groups. Low values of user CPU utilization compared to relatively high values for APP_CPU_SYS_MODE_UTIL can indicate a hardware problem or improperly tuned programs in this group. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. 
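To illustrate the normalization described for the APP_CPU metrics above,
consider an assumed example: a 4-processor system, a 60-second collection
interval, and an application group that accumulates 6 CPU-seconds in system
mode and 24 CPU-seconds in user mode across all processors (these numbers are
invented for illustration):

    Capacity in the interval = 60 seconds * 4 processors = 240 CPU-seconds

    APP_CPU_SYS_MODE_UTIL  = ( 6 / 240) * 100 =  2.5%
    APP_CPU_USER_MODE_UTIL = (24 / 240) * 100 = 10.0%
    APP_CPU_TOTAL_UTIL     = 2.5% + 10.0%     = 12.5%

This is consistent with the identity APP_CPU_TOTAL_UTIL =
APP_CPU_SYS_MODE_UTIL + APP_CPU_USER_MODE_UTIL given under APP_CPU_TOTAL_UTIL.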
APP_DISK_BLOCK_IO ---------------------------------- The number of block IOs to the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. APP_DISK_BLOCK_IO_RATE ---------------------------------- The number of block IOs per second to the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. APP_DISK_BLOCK_READ ---------------------------------- The number of block reads from the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. 
When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. APP_DISK_BLOCK_READ_RATE ---------------------------------- The number of block reads per second from the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. APP_DISK_BLOCK_WRITE ---------------------------------- The number of block writes to the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. 
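As an illustration of the AIX behavior noted above: reading a disk as a block
device goes through the file system buffer cache and is therefore counted in
the APP_DISK_BLOCK metrics, while reading a regular file is served through the
virtual memory system instead. For example (the device name follows the
example above; the transfer sizes are arbitrary):

    dd if=/dev/hdisk1 of=/dev/null bs=4k count=256

A command like this run by a process in the group contributes block IOs,
whereas the same dd invocation against an ordinary JFS2 file would show up as
virtual memory IO rather than block IO.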
APP_DISK_BLOCK_WRITE_RATE ---------------------------------- The number of block writes per second from the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. APP_DISK_PHYS_IO ---------------------------------- The number of physical IOs for processes in this group during the interval. On SUN systems, this metric is only available on Sun 5.X or later. APP_DISK_PHYS_IO_RATE ---------------------------------- The number of physical IOs per second for processes in this group during the interval. APP_IO_BYTE ---------------------------------- The number of characters (in KB) transferred for processes in this group to all devices during the interval. This includes IO to disk, terminal, tape and printers. APP_IO_BYTE_RATE ---------------------------------- The number of characters (in KB) per second transferred for processes in this group to all devices during the interval. This includes IO to disk, terminal, tape and printers. APP_MAJOR_FAULT_RATE ---------------------------------- The number of major page faults per second that required a disk IO for processes in this group during the interval. APP_MEM_RES ---------------------------------- On Unix systems, this is the sum of the size (in MB) of resident memory for processes in this group that were alive at the end of the interval. This consists of text, data, stack, and shared memory regions. On HP-UX, since PROC_MEM_RES typically takes shared region references into account, this approximates the total resident (physical) memory consumed by all processes in this group. On all other Unix systems, this is the sum of the resident memory region sizes for all processes in this group. When the resident memory size for processes includes shared regions, such as shared memory and library text and data, the shared regions are counted multiple times in this sum. For example, if the application contains four processes that are attached to a 500MB shared memory region that is all resident in physical memory, then 2000MB is contributed towards the sum in this metric. As such, this metric can overestimate the resident memory being used by processes in this group when they share memory regions. 
Refer to the help text for PROC_MEM_RES for additional information.

On Windows, this is the sum of the size (in MB) of the working sets for
processes in this group during the interval. The working set counts memory
pages referenced recently by the threads making up this group. Note that the
size of the working set is often larger than the amount of pagefile space
consumed.

APP_MEM_UTIL
----------------------------------
On Unix systems, this is the approximate percentage of the system’s physical
memory used as resident memory by processes in this group that were alive at
the end of the interval. This metric summarizes process private and shared
memory in each application.

On Windows, this is an estimate of the percentage of the system’s physical
memory allocated for working set memory by processes in this group during the
interval.

On HP-UX, this consists of text, data, and stack, as well as the process’s
portion of shared memory regions (such as shared libraries, text segments, and
shared data). The sum of the shared region pages is typically divided by the
number of references.

APP_MEM_VIRT
----------------------------------
On Unix systems, this is the sum (in MB) of virtual memory for processes in
this group that were alive at the end of the interval. This consists of text,
data, stack, and shared memory regions.

On HP-UX, since PROC_MEM_VIRT typically takes shared region references into
account, this approximates the total virtual memory consumed by all processes
in this group.

On all other Unix systems, this is the sum of the virtual memory region sizes
for all processes in this group. When the virtual memory size for processes
includes shared regions, such as shared memory and library text and data, the
shared regions are counted multiple times in this sum. For example, if the
application contains four processes that are attached to a 500MB shared memory
region, then 2000MB is reported in this metric. As such, this metric can
overestimate the virtual memory being used by processes in this group when
they share memory regions.

On Windows, this is the sum (in MB) of paging file space used for all
processes in this group during the interval. Groups of processes may have
working set sizes (APP_MEM_RES) larger than the size of their pagefile space.

APP_MINOR_FAULT_RATE
----------------------------------
The number of minor page faults per second satisfied in memory (pages were
reclaimed from one of the free lists) for processes in this group during the
interval.

APP_NAME
----------------------------------
The name of the application (up to 20 characters). This comes from the parm
file, where the applications are defined (see the example parm file entry
below). The application called “other” captures all processes not aggregated
into applications specifically defined in the parm file. In other words, if no
applications are defined in the parm file, then all process data would be
reflected in the “other” application.

APP_NUM
----------------------------------
The sequentially assigned number of this application or, on Solaris, the
project ID when application grouping by project is enabled.

APP_PRI
----------------------------------
On Unix systems, this is the average priority of the processes in this group
during the interval.

On Windows, this is the average base priority of the processes in this group
during the interval.

APP_PRI_STD_DEV
----------------------------------
The standard deviation of priorities of the processes in this group during the
interval.

This metric is available on HP-UX 10.20.
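As noted under APP_NAME above, applications are groups of processes defined in
the parm file; any process not matched by a definition is reported under the
“other” application. The following is a minimal, illustrative parm file entry.
The application and program names are invented for this example, and the parm
file documentation describes the full syntax and additional qualifiers:

    application = payroll_batch
    file = payrun,paycalc,payrpt

    application = compilers
    file = cc,xlc,make,ld

Keeping heavily used programs out of “other” in this way makes the APP_*
metrics in this section easier to interpret.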
APP_PROC_RUN_TIME ---------------------------------- The average run time for processes in this group that completed during the interval. On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise. APP_SAMPLE ---------------------------------- The number of samples of process data that have been averaged or accumulated during this sample. APP_SUSPENDED_PROCS ---------------------------------- The average number of processes in this group which have been either marked as should be suspended (SGETOUT) or have been suspended (SSWAPPED) during the interval. Processes are suspended when the OS detects that memory thrashing is occurring. The scheduler looks for processes that have a high repage rate when compared with the number of major page faults the process has done and suspends these processes. If this metric is not zero, there is a memory bottleneck on the system. BLANK ---------------------------------- An empty field used for spacing reports. For example, this field can be used to create a blank column in a spreadsheet that may be used to sum several items. BYCPU_CPU_CLOCK ---------------------------------- The clock speed of the CPU in the current slot. The clock speed is in MHz for the selected CPU. The Linux kernel currently doesn’t provide any metadata information for disabled CPUs. This means that there is no way to find out types, speeds, as well as hardware IDs or any other information that is used to determine the number of cores, the number of threads, the HyperThreading state, etc... If the agent (or Glance) is started while some of the CPUs are disabled, some of these metrics will be “na”, some will be based on what is visible at startup time. All information will be updated if/when additional CPUs are enabled and information about them becomes available. The configuration counts will remain at the highest discovered level (i.e. if CPUs are then disabled, the maximum number of CPUs/cores/etc... will remain at the highest observed level). It is recommended that the agent be started with all CPUs enabled. On Linux, this value is always rounded up to the next MHz. BYCPU_CPU_PHYSC ---------------------------------- The total processing units of physical CPU consumed by this logical CPU during this interval. BYCPU_CPU_SYS_MODE_TIME ---------------------------------- The time, in seconds, that this CPU (or logical processor) was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). 
To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_SYS_MODE_UTIL ---------------------------------- The percentage of time that this CPU (or logical processor) was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_TOTAL_TIME ---------------------------------- The total time, in seconds, that this CPU (or logical processor) was not idle during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_TOTAL_UTIL ---------------------------------- The percentage of time that this CPU (or logical processor) was not idle during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. 
This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_USER_MODE_TIME ---------------------------------- The time, in seconds, during the interval that this CPU (or logical processor) was in user mode. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_USER_MODE_UTIL ---------------------------------- The percentage of time that this CPU (or logical processor) was in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CSWITCH_RATE ---------------------------------- The average number of context switches per second for this CPU during the interval. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. 
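To summarize the ignore_mt discussion that accompanies the BYCPU CPU metrics
above, consider an assumed system with 2 active cores and 2 hardware threads
per core (4 logical CPUs) running a workload that keeps the equivalent of one
logical CPU fully busy for a 60-second interval. The parm file lines shown are
illustrative; see the agent documentation for the exact placement of the flag:

    ignore_mt=true     CPU metrics normalized against active cores (2 here)
    ignore_mt=false    CPU metrics normalized against hardware threads (4 here)

    Normalized against 4 threads: (60 CPU-seconds / (60 s * 4)) * 100 = 25%
    Normalized against 2 cores:   (60 CPU-seconds / (60 s * 2)) * 100 = 50%

The same amount of work is therefore reported as a higher utilization when
normalization is core-based.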
BYCPU_ID ---------------------------------- The ID number of this CPU. On some Unix systems, such as SUN, CPUs are not sequentially numbered. BYCPU_STATE ---------------------------------- A text string indicating the current state of a processor. On HP-UX, this is either “Enabled”, “Disabled” or “Unknown”. On AIX, this is either “Idle/Offline” or “Online”. On all other systems, this is either “Offline”, “Online” or “Unknown”. BYDSK_AVG_SERVICE_TIME ---------------------------------- The average time, in milliseconds, that this disk device spent processing each disk request during the interval. For example, a value of 5.14 would indicate that disk requests during the last interval took on average slightly longer than five one-thousandths of a second to complete for this device. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. This is a measure of the speed of the disk, because slower disk devices typically show a larger average service time. Average service time is also dependent on factors such as the distribution of I/O requests over the interval and their locality. It can also be influenced by disk driver and controller features such as I/O merging and command queueing. Note that this service time is measured from the perspective of the kernel, not the disk device itself. For example, if a disk device can find the requested data in its cache, the average service time could be quicker than the speed of the physical disk hardware. This metric can be used to help determine which disk devices are taking more time than usual to process requests. BYDSK_BUSY_TIME ---------------------------------- The time, in seconds, that this disk device was busy transferring data during the interval. On HP-UX, this is the time, in seconds, during the interval that the disk device had IO in progress from the point of view of the Operating System. In other words, the time, in seconds, the disk was busy servicing requests for this device. BYDSK_DEVNAME ---------------------------------- The name of this disk device. On HP-UX, the name identifying the specific disk spindle is the hardware path which specifies the address of the hardware components leading to the disk device. On SUN, these names are the same disk names displayed by “iostat”. On AIX, this is the path name string of this disk device. This is the fsname parameter in the mount(1M) command. If more than one file system is contained on a device (that is, the device is partitioned), this is indicated by an asterisk (“*”) at the end of the path name. On OSF1, this is the path name string of this disk device. This is the file- system parameter in the mount(1M) command. On Windows, this is the unit number of this disk device. BYDSK_HISTOGRAM ---------------------------------- A bar chart of the disk IO. Shows a breakout of the disk IO. Disk IO Rate = BYDSK_PHYS_READ_RATE + BYDSK_PHYS_WRITE_RATE ASCII and binary files contain a line of ASCII characters that make up one row of a printed histogram. This can be a quick way to get a graphical view of Disk IO on a character mode terminal display. BYDSK_ID ---------------------------------- The ID of the current disk device. 
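A worked example using assumed values for one disk over a 60-second interval,
showing how the BYDSK metrics defined in this area fit together (BYDSK_UTIL is
defined later in this section):

    BYDSK_PHYS_READ_RATE  = 120 IOs per second
    BYDSK_PHYS_WRITE_RATE =  80 IOs per second
    Disk IO Rate in BYDSK_HISTOGRAM = 120 + 80 = 200 IOs per second

    BYDSK_BUSY_TIME = 15 seconds busy out of the 60-second interval
    BYDSK_UTIL      = (15 / 60) * 100 = 25% (non-HP-UX definition)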
BYDSK_PHYS_BYTE ---------------------------------- The number of KBs of physical IOs transferred to or from this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_BYTE_RATE ---------------------------------- The average KBs per second transferred to or from this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_IO ---------------------------------- The number of physical IOs for this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw reads. BYDSK_PHYS_IO_RATE ---------------------------------- The average number of physical IO requests per second for this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system IO, virtual memory and raw IO. BYDSK_PHYS_READ ---------------------------------- The number of physical reads for this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On AIX, this is an estimated value based on the ratio of read bytes to total bytes transferred. The actual number of reads is not tracked by the kernel. This is calculated as BYDSK_PHYS_READ = BYDSK_PHYS_IO * (BYDSK_PHYS_READ_BYTE / BYDSK_PHYS_IO_BYTE) BYDSK_PHYS_READ_BYTE ---------------------------------- The KBs transferred from this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_READ_BYTE_RATE ---------------------------------- The average KBs per second transferred from this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_READ_RATE ---------------------------------- The average number of physical reads per second for this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On AIX, this is an estimated value based on the ratio of read bytes to total bytes transferred. The actual number of reads is not tracked by the kernel. This is calculated as BYDSK_PHYS_READ_RATE = BYDSK_PHYS_IO_RATE * (BYDSK_PHYS_READ_BYTE / BYDSK_PHYS_IO_BYTE) BYDSK_PHYS_WRITE ---------------------------------- The number of physical writes for this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On AIX, this is an estimated value based on the ratio of write bytes to total bytes transferred because the actual number of writes is not tracked by the kernel. This is calculated as BYDSK_PHYS_WRITE = BYDSK_PHYS_IO * (BYDSK_PHYS_WRITE_BYTE / BYDSK_PHYS_IO_BYTE) BYDSK_PHYS_WRITE_BYTE ---------------------------------- The KBs transferred to this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_WRITE_BYTE_RATE ---------------------------------- The average KBs per second transferred to this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system, virtual memory, and raw IO. 
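A worked example of the AIX read/write estimation described above, using
assumed counts for one interval:

    BYDSK_PHYS_IO        = 1000 physical IOs
    BYDSK_PHYS_READ_BYTE = 2000 KB transferred from the disk
    Total KB transferred = 8000 KB (so 6000 KB were written)

    BYDSK_PHYS_READ  = 1000 * (2000 / 8000) = 250 estimated reads
    BYDSK_PHYS_WRITE = 1000 * (6000 / 8000) = 750 estimated writes

The rate forms (BYDSK_PHYS_READ_RATE and BYDSK_PHYS_WRITE_RATE) apply the same
ratios to BYDSK_PHYS_IO_RATE.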
BYDSK_PHYS_WRITE_RATE ---------------------------------- The average number of physical writes per second for this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On AIX, this is an estimated value based on the ratio of write bytes to total bytes transferred. The actual number of writes is not tracked by the kernel. This is calculated as BYDSK_PHYS_WRITE_RATE = BYDSK_PHYS_IO_RATE * (BYDSK_PHYS_WRITE_BYTE / BYDSK_PHYS_IO_BYTE) BYDSK_REQUEST_QUEUE ---------------------------------- The average number of IO requests that were in the wait queue for this disk device during the interval. These requests are the physical requests (as opposed to logical IO requests). Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_UTIL ---------------------------------- On HP-UX, this is the percentage of the time during the interval that the disk device had IO in progress from the point of view of the Operating System. In other words, the utilization or percentage of time busy servicing requests for this device. On the non-HP-UX systems, this is the percentage of the time that this disk device was busy transferring data during the interval. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. This is a measure of the ability of the IO path to meet the transfer demands being placed on it. Slower disk devices may show a higher utilization with lower IO rates than faster disk devices such as disk arrays. A value of greater than 50% utilization over time may indicate that this device or its IO path is a bottleneck, and the access pattern of the workload, database, or files may need reorganizing for better balance of disk IO load. BYLS_CPU_CLOCK ---------------------------------- On vMA, for a host and logical system, it is the clock speed of the CPUs in MHz if all of the processors have the same clock speed. For a resource pool the value is NA. This metric represents the CPU clock speed. For an AIX frame, this metric is available only if the LPAR supports perfstat_partition_config call from libperfstat.a. This is usually present on AIX 7.1 onwards. For an LPAR, this value will be na. BYLS_CPU_ENTL ---------------------------------- The entitlement or the CPU units granted to a logical system at startup. On AIX SPLPAR, this metric indicates the cpu units allocated by Hypervisor to a logical system at the time of starting. This metric is equivalent to “Entitled Capacity” field of ‘lparstat -i’ command. For WPARs, it is the maximum units of CPU that a WPAR can have when there is a contention for CPU. WPAR shares CPU units of its global environment. BYLS_CPU_ENTL_MAX ---------------------------------- The maximum CPU units configured for a logical system. On HP-UX HPVM, this metric indicates the maximum percentage of physical CPU that a virtual CPU of this logical system can get. 
On AIX SPLPAR, this metric is equivalent to “Maximum Capacity” field of ‘lparstat -i’ command. For WPARs, it is the maximum percentage of CPU that a WPAR can have even if there is no contention for CPU. WPAR shares CPU units of its global environment. On Hyper-V host, for Root partition, this metric is NA. On vMA, for a host, the metric is equivalent to the total number of cores on the host. For a resource pool and a logical system, this metric indicates the maximum CPU units configured for it. BYLS_CPU_ENTL_MIN ---------------------------------- The minimum CPU units configured for this logical system. On HP-UX HPVM, this metric indicates the minimum percentage of physical CPU that a virtual CPU of this logical system is guaranteed. On AIX SPLPAR, this metric is equivalent to “Minimum Capacity” field of ‘lparstat -i’ command. For WPARs, it is the minimum CPU share assigned to a WPAR that is guaranteed. WPAR shares CPU units of its global environment. On Hyper-V host, for Root partition, this metric is NA. On vMA, for a host, the metric is equivalent to the total number of cores on the host. For a resource pool and a logical system, this metric indicates the guaranteed minimum CPU units configured for it. On Solaris Zones, this metric indicates the configured minimum CPU percentage reserved for a logical system. For Solaris Zones, this metric is calculated as:
BYLS_CPU_ENTL_MIN = ( BYLS_CPU_SHARES_PRIO / Pool-Cpu-Shares )
where Pool-Cpu-Shares is the total CPU shares available in the CPU pool the zone is associated with. Pool-Cpu-Shares is the sum of the BYLS_CPU_SHARES_PRIO values for all active zones associated with this pool. BYLS_CPU_ENTL_UTIL ---------------------------------- Percentage of entitled processing units (guaranteed processing units allocated to this logical system) consumed by the logical system. On an HP-UX HPVM host, the metric indicates the logical system’s CPU utilization with respect to minimum CPU entitlement. On HP-UX HPVM host, this metric is calculated as:
BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC / (BYLS_CPU_ENTL_MIN * BYLS_NUM_CPU)) * 100
On AIX, this metric is calculated as:
BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC / BYLS_CPU_ENTL) * 100
On WPAR, this metric is calculated as:
BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC / BYLS_CPU_ENTL_MAX) * 100
This metric matches the “%Resc” field of the topas command (inside the WPAR). On Solaris Zones, the metric indicates the logical system’s CPU utilization with respect to minimum CPU entitlement. This metric is calculated as:
BYLS_CPU_ENTL_UTIL = (BYLS_CPU_TOTAL_UTIL / BYLS_CPU_SHARES_PRIO) * 100
If a Solaris zone is not assigned a CPU entitlement value, then a CPU entitlement value is derived for this zone based on the total CPU entitlement associated with the CPU pool this zone is attached to. On Hyper-V host, for Root partition, this metric is NA. On vMA, for a host the value is the same as BYLS_CPU_PHYS_TOTAL_UTIL, while for a logical system and resource pool the value is the percentage of processing units consumed with respect to the minimum CPU entitlement. BYLS_CPU_FAMILY ---------------------------------- The family of the processor of the frame. This is available only if the LPAR supports perfstat_partition_config call from libperfstat.a. This is usually present on AIX 7.1 onwards. BYLS_CPU_MT_ENABLED ---------------------------------- Indicates whether the CPU hardware threads are enabled (“On”) or not (“Off”) for a logical system. For AIX WPARs, the metric will be “na”.
On vMA, this metric indicates whether the CPU hardware threads are enabled or not for a host, while for a resource pool and a logical system the value is not available (“na”). BYLS_CPU_PHYSC ---------------------------------- This metric indicates the number of CPU units utilized by the logical system. On an Uncapped logical system, this value will be equal to the CPU units capacity used by the logical system during the interval. This can be more than the value entitled for a logical system. BYLS_CPU_PHYS_IDLE_MODE_UTIL ---------------------------------- The percentage of time the physical CPUs were in idle state for the logical system during the interval. On AIX LPAR, this value is equivalent to “%idle” field reported by the “lparstat” command. BYLS_CPU_PHYS_SYS_MODE_UTIL ---------------------------------- The percentage of time the physical CPUs were in system mode (kernel mode) for the logical system during the interval. On AIX LPAR, this value is equivalent to “%sys” field reported by the “lparstat” command. On Hyper-V host, this metric indicates the percentage of time spent in Hypervisor code. On vMA, the metric indicates the percentage of time the physical CPUs were in system mode during the interval for the host or logical system. On vMA, for a resource pool, this metric is “na”. BYLS_CPU_PHYS_TOTAL_UTIL ---------------------------------- Percentage of total time the physical CPUs were utilized by this logical system during the interval. On HPUX, this information is updated internally every 10 seconds so it may take that long for these values to be updated in PA/Glance. On Solaris, this metric is calculated with respect to the available active physical CPUs on the system. On AIX, this metric is equivalent to the sum of BYLS_CPU_PHYS_USER_MODE_UTIL and BYLS_CPU_PHYS_SYS_MODE_UTIL. For AIX lpars, the metric is calculated with respect to the available physical CPUs in the pool to which this LPAR belongs. For AIX WPARs, the metric is calculated with respect to the available physical CPUs in the resource set or Global Environment. On vMA, the value indicates the percentage of total time the physical CPUs were utilized by the logical system, host or resource pool. On KVM/Xen, this value is core-normalized if GBL_IGNORE_MT is enabled on the server. BYLS_CPU_PHYS_USER_MODE_UTIL ---------------------------------- The percentage of time the physical CPUs were in user mode for the logical system during the interval. On AIX LPAR, this value is equivalent to “%user” field reported by the “lparstat” command. On Hyper-V host, this metric indicates the percentage of time spent in guest code. On vMA, the metric indicates the percentage of time the physical CPUs were in user mode during the interval for the host or logical system. On vMA, for a resource pool, this metric is “na”. BYLS_CPU_PHYS_WAIT_MODE_UTIL ---------------------------------- The percentage of time the physical CPUs were in wait mode for the logical system during the interval. On AIX LPAR, this value is equivalent to “%wait” field reported by the “lparstat” command. BYLS_CPU_SHARES_PRIO ---------------------------------- This metric indicates the weightage/priority assigned to an Uncapped logical system. This value determines the minimum share of unutilized processing units that this logical system can utilize. The value of this metric will be “-3” in PA and “ul” in other clients if the cpu shares value is ‘Unlimited’ for a logical system.
On AIX SPLPAR this value is dependent on the available processing units in the pool and can range from 0 to 255. For WPARs, this metric represents how much of a particular resource a WPAR receives relative to the other WPARs. On vMA, for a logical system and resource pool this value can range from 1 to 1000000, while for a host the value is NA. On Solaris Zones, this metric sets a limit on the number of fair share scheduler (FSS) CPU shares for a zone. On Hyper-V host, this metric specifies allocation of CPU resources when more than one virtual machine is running and competing for resources. This value can range from 0 to 10000. For Root partition, this metric is NA. BYLS_CPU_TOTAL_UTIL ---------------------------------- Percentage of total time the logical CPUs were not idle during this interval. This metric is calculated against the number of logical CPUs configured for this logical system. For AIX wpars, the metric represents the percentage of time the physical CPUs were not idle during this interval. BYLS_DISPLAY_NAME ---------------------------------- On vMA, this metric indicates the name of the host or logical system or resource pool. On HPVM, this metric indicates the Virtual Machine name of the logical system and is equivalent to “Virtual Machine Name” field of ‘hpvmstatus’ command. On AIX the value is as returned by the command “uname -n” (that is, the string returned from the “hostname” program). On Solaris Zones, this metric indicates the zone name and is equivalent to ‘NAME’ field of ‘zoneadm list -vc’ command. On Hyper-V host, this metric indicates the Virtual Machine name of the logical system and is equivalent to the Name displayed in Hyper-V Manager. For Root partition, the value is always “Root”. BYLS_HYPCALL ---------------------------------- The number of Hypervisor calls made by a logical system during the interval. A higher number of calls will result in higher BYLS_CPU_PHYS_SYS_MODE_UTIL, BYLS_CPU_PHYS_WAIT_MODE_UTIL, GBL_CPU_SYS_MODE_UTIL and GBL_CPU_WAIT_UTIL. For AIX wpars, the metric will be “na”. BYLS_HYP_UTIL ---------------------------------- Percentage of time spent in Hypervisor by a logical system during the interval. Higher utilization of the hypervisor will result in higher BYLS_CPU_PHYS_SYS_MODE_UTIL, BYLS_CPU_PHYS_WAIT_MODE_UTIL, GBL_CPU_SYS_MODE_UTIL and GBL_CPU_WAIT_UTIL. For AIX wpars, the metric will be “na”. BYLS_IP_ADDRESS ---------------------------------- This metric indicates the IP Address of the particular logical system. On vMA, this metric indicates the IP Address for a host and a logical system, while for a resource pool the value is NA. BYLS_LS_HOSTNAME ---------------------------------- This is the DNS registered name of the system. On Hyper-V host, this metric is NA if the logical system is not active or Hyper-V Integration Components are not installed on it. On vMA, for a host and logical system the metric is the Fully Qualified Domain Name, while for a resource pool the value is NA. BYLS_LS_ID ---------------------------------- A unique identifier of the logical system. On HPVM, this metric is a numeric id and is equivalent to “VM # “ field of ‘hpvmstatus’ command. On AIX LPAR, this metric indicates the partition number and is equivalent to “Partition Number” field of ‘lparstat -i’ command. For AIX wpar, this metric represents the partition number and is equivalent to “uname -W” from inside the wpar. On Solaris Zones, this metric indicates the zone id and is equivalent to ‘ID’ field of ‘zoneadm list -vc’ command.
On Hyper-V host, this metric indicates the PID of the process corresponding to this logical system. For Root partition, this metric is NA. On vMA, this metric is a unique identifier for a host, resource pool and a logical system. The value of this metric may change for an instance across collection intervals. BYLS_LS_MODE ---------------------------------- This metric indicates whether the CPU entitlement for the logical system is Capped or Uncapped. The value “Uncapped” indicates that the logical system can utilize idle cycles from the shared processor pool of CPUs beyond its CPU entitlement. On AIX SPLPAR, this metric is the same as the “Mode” field of ‘lparstat -i’ command. For WPARs, this metric is always CAPPED. On vMA, the value is Capped for a host and Uncapped for a logical system. For a resource pool, the value is Uncapped or Capped depending on whether its reservation is expandable or not. On Solaris Zones, this metric is “Capped” when the zone is assigned CPU shares and is attached to a valid CPU pool. BYLS_LS_NAME ---------------------------------- This is the name of the computer. On HPVM, this metric indicates the Virtual Machine name of the logical system and is equivalent to “Virtual Machine Name” field of ‘hpvmstatus’ command. On AIX the value is as returned by the command “uname -n” (that is, the string returned from the “hostname” program). On vMA, this metric is a unique identifier for a host, resource pool and a logical system. The value of this metric remains the same, for an instance, across collection intervals. On Solaris Zones, this metric indicates the zone name and is equivalent to ‘NAME’ field of ‘zoneadm list -vc’ command. On Hyper-V host, this metric indicates the name of the XML file which has the configuration information of the logical system. This file will be present under the logical system’s installation directory indicated by BYLS_LS_PATH. For Root partition, the value is always “Root”. BYLS_LS_PARENT_UUID ---------------------------------- On vMA, the metric indicates the UUID appended to the display_name of the parent entity. For a logical system and resource pool this metric could indicate the UUID appended to the display_name of a host or resource pool, as they can be created under a host or resource pool. For a host, the value is NA. For an LPAR, if the frame is discovered, the value will be the BYLS_LS_UUID of the frame. BYLS_LS_ROLE ---------------------------------- On vMA, for a host the metric is HOST. For a logical system the value is GUEST and for a resource pool the value is RESPOOL. For a logical system which is a vMA, the value is PROXY. For a datacenter, the value is DATACENTER. For a cluster, the value is CLUSTER. For a datastore, the value is DATASTORE. For an AIX frame, the role is “Host”. For an LPAR, the role is “Guest”. BYLS_LS_SERIALNO ---------------------------------- The serial number of the AIX frame. For an LPAR, this value would be “na”. BYLS_LS_SHARED ---------------------------------- This metric indicates whether the physical CPUs are dedicated to this logical system or shared. On HPUX HPVM and Hyper-V host, this metric is always “Shared”. On vMA, the value is “Dedicated” for a host, and “Shared” for a logical system and resource pool. On AIX SPLPAR, this metric is equivalent to “Type” field of ‘lparstat -i’ command. For AIX wpars, this metric will always be “Shared”. On Solaris Zones, this metric is “Dedicated” when this zone is attached to a CPU pool not shared by any other zone.
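To show how the BYLS CPU metrics above fit together, the following Python sketch applies the AIX formula BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC / BYLS_CPU_ENTL) * 100 to two hypothetical partitions. As BYLS_LS_MODE and BYLS_CPU_PHYSC describe, only an Uncapped partition can consume more than its entitlement, so only there can the utilization exceed 100 percent. The partition names and numbers are illustrative only, not output of the product:

  partitions = [
      {"name": "lpar01", "mode": "Capped",   "physc": 0.45, "entl": 0.50},
      {"name": "lpar02", "mode": "Uncapped", "physc": 1.30, "entl": 0.75},
  ]

  for ls in partitions:
      entl_util = (ls["physc"] / ls["entl"]) * 100.0   # BYLS_CPU_ENTL_UTIL on AIX
      note = ""
      if ls["mode"] == "Uncapped" and entl_util > 100.0:
          note = "  <- using idle cycles from the shared pool"
      print("%-7s %-8s physc=%.2f entl=%.2f entl_util=%6.1f%%%s"
            % (ls["name"], ls["mode"], ls["physc"], ls["entl"], entl_util, note))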
BYLS_LS_STATE ---------------------------------- The state of this logical system. On HPVM, the logical systems can have one of the following states:
Unknown
Other
invalid
Up
Down
Boot
Crash
Shutdown
Hung
On vMA, this metric can have one of the following states for a host:
on
off
unknown
The values for a logical system can be one of the following:
on
off
suspended
unknown
The value is NA for a resource pool. On Solaris Zones, the logical systems can have one of the following states:
configured
incomplete
installed
ready
running
shutting down
mounted
On AIX lpars, the logical system will always be active. On AIX wpars, the logical systems can have one of the following states:
Broken
Transitional
Defined
Active
Loaded
Paused
Frozen
Error
A logical system on a Hyper-V host can have the following states:
unknown
enabled
disabled
paused
suspended
starting
snapshotting
migrating
saving
stopping
deleted
pausing
resuming
BYLS_LS_TYPE ---------------------------------- The type of this logical system. On AIX, the logical systems can have one of the following types:
lpar
sys wpar
app wpar
On vMA, the value of this metric is “VMware”. For an AIX frame, the value of this metric is “FRAME”. BYLS_LS_UUID ---------------------------------- UUID of this logical system. This Id uniquely identifies this logical system across multiple hosts. On Hyper-V host, for Root partition, this metric is NA. On vMA, for a logical system or a host, the value indicates the UUID appended to the display_name of the system. For a resource pool, the value is the hostname of the host where the resource pool is hosted, followed by the unique id of the resource pool. For an AIX frame, the value is the display name appended with the serial number. For an LPAR, this value is the frame’s name appended with the serial number. BYLS_MACHINE_MODEL ---------------------------------- On vMA, for a host, it is the CPU model of the host system. For a logical system and resource pool the value is “na”. The machine model of the AIX Frame, if present. For an LPAR, this value would be “na”. BYLS_MEM_ENTL ---------------------------------- The entitled memory configured for this logical system (in MB). On Hyper-V host, for Root partition, this metric is NA. On vMA, for a host the value is the physical memory available in the system, and for a logical system this metric indicates the minimum memory configured, while for a resource pool the value is NA. For an AIX frame, this value is obtained from the command “lshwres -m -r mem --level sys “. BYLS_MEM_ENTL_MAX ---------------------------------- The maximum amount of memory configured for a logical system, in MB. The value of this metric will be “-3” in PA and “ul” in other clients if entitlement is ‘Unlimited’ for a logical system. On AIX LPARs, this metric will be “na”. On vMA, this metric indicates the maximum amount of memory configured for a resource pool or a logical system. For a host, the value is the amount of physical memory available in the system. On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running “hpvmstatus -V” will indicate whether the driver is active. For all other guests, the value is “na”. BYLS_MEM_ENTL_MIN ---------------------------------- The minimum amount of memory configured for the logical system, in MB. On AIX LPARs, this metric will be “na”. On vMA, this metric indicates the reserved amount of memory configured for a host, resource pool or a logical system.
On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running “hpvmstatus -V” will indicate whether the driver is active. For all other guests, the value is “na”. BYLS_MEM_ENTL_UTIL ---------------------------------- The percentage of entitled memory in use during the interval. On vMA, for a logical system or a host, the value indicates percentage of entitled memory in use during the interval by it. For an AIX frame, this is calculated using “lshwres -r mempool -m “ from HMC. Active Memory Sharing has to be turned on for this. On vMA, for a resource pool, this metric is “na”. On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running “hpvmstatus -V” will indicate whether the driver is active. For all other guests, the value is “na”. BYLS_MEM_SHARES_PRIO ---------------------------------- The weightage/priority for memory assigned to this logical system. This value influences the share of unutilized physical Memory that this logical system can utilize. The value of this metric will be “-3” in PA and “ul” in other clients if memory shares value is ‘Unlimited’ for a logical system. On AIX LPARs, this metric will be “na”. On vMA, this metric indicates the share of memory configured to a resource pool and a logical system. For a host the value is NA. BYLS_MGMT_IP_ADDRESS ---------------------------------- The value is the IP address of the HMC. This entry format will be in the form @ in the file “/var/opt/perf/hmc”. BYLS_NUM_ACTIVE_LS ---------------------------------- On vMA, for a host, this indicates the number of logical systems hosted in a system that are active. For a logical system and resource pool the value is NA. For an AIX frame, this is the number of LPARs in “Running” state. For an LPAR, this value will be “na”. BYLS_NUM_CPU ---------------------------------- The number of virtual CPUs configured for this logical system. This metric is equivalent to GBL_NUM_CPU on the corresponding logical system. On HPVM, the maximum CPUs a logical system can have is 4 with respect to HPVM 3.x. On AIX SPLPAR, the number of CPUs can be configured irrespective of the available physical CPUs in the pool this logical system belongs to. For AIX wpars, this metric represents the logical CPUs of the global environment. On vMA, for a host the metric is the number of physical CPU threads on the host. For a logical system, the metric is the number of virtual cpus configured.For a resource pool the metric is NA. On Solaris Zones, this metric represents number of CPUs in the CPU pool this zone is attached to. This metric value is equivalent to GBL_NUM_CPU inside corresponding non-global zone. BYLS_NUM_DISK ---------------------------------- The number of disks configured for this logical system. Only local disk devices and optical devices present on the system are counted in this metric. On vMA, for a host the metric is the number of disks configured for the host . For a logical system, the metric is the number of logical disk devices present on the logical system. For a resource pool the metric is NA. For AIX wpars, this metric will be “na”. On Hyper-V host, this metric value is equivalent to GBL_NUM_DISK inside corresponding Hyper-V guest. On Hyper-V host, this metric is NA if the logical system is not active. BYLS_NUM_LS ---------------------------------- On vMA, for a host, resource pool, virtual app and datacenter,this indicates the number of logical systems hosted. 
For all other entities, the value is NA. For an AIX frame, this is the number of LPARs hosted by frame. For an LPAR, this value will be “na”. BYLS_NUM_NETIF ---------------------------------- The number of network interfaces configured for this logical system. On LPAR, this metric includes the loopback interface. On Hyper-V host, this metric value is equivalent to GBL_NUM_NETWORK inside corresponding Hyper-V guest. On Solaris Zones, this metric value is equivalent to GBL_NUM_NETWORK inside corresponding non-global zone. On Hyper-V host, this metric is NA if the logical system is not active. On vMA, for a host the metric is the number of network adapters on the host. For a logical system, the metric is the number of network interfaces configured for the logical system. For a resource pool the metric is NA. BYLS_PHANTOM_INTR ---------------------------------- It is the number of phantom interrupts that the logical partition received during the interval. A phantom interrupt is an interrupt sent to another logical partition that shares the same CPU Unit. On AIX LPAR, this value is equivalent to “phint” field reported by the “lparstat” command. For AIX wpars, the metric will be “na”. BYLS_RUN_QUEUE ---------------------------------- The 1-minute load average for processors available for a logical system. On AIX LPAR, the load average is the total number of runnable and running threads summed over all processors during the interval. BYLS_UPTIME_SECONDS ---------------------------------- The uptime of this logical system in seconds. On AIX LPARs, this metric will be “na”. On vMA, for a host and logical system the metric is the uptime in seconds while for a resource pool the metric is NA. BYLS_VCSWITCH_RATE ---------------------------------- Number of virtual context switches per second for a logical system during the interval. For AIX wpars, the metric will be “na”. BYNETIF_COLLISION ---------------------------------- The number of physical collisions that occurred on the network interface during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets. This data is not collected for non-broadcasting devices, such as loopback (lo), and is always zero. For HP-UX, this will be the same as the sum of the “Single Collision Frames”, “Multiple Collision Frames”, “Late Collisions”, and “Excessive Collisions” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For most other Unix systems, this is the same as the sum of the “Coll” column from the “netstat -i” command (“collisions” from the “netstat -i -e” command on Linux) for a network device. See also netstat(1). If BYNETIF_NET_TYPE is “ESXVLan”, then this metric will be N/A. AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. 
Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. BYNETIF_COLLISION_RATE ---------------------------------- The number of physical collisions per second on the network interface during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets. This data is not collected for non-broadcasting devices, such as loopback (lo), and is always zero. If BYNETIF_NET_TYPE is “ESXVLan”, then this metric will be N/A. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. BYNETIF_ERROR ---------------------------------- The number of physical errors that occurred on the network interface during the interval. An increasing number of errors may indicate a hardware problem in the network. On Unix systems, this data is not available for loop-back (lo) devices and is always zero. For HP-UX, this will be the same as the sum of the “Inbound Errors” and “Outbound Errors” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of “Ierrs” (RX-ERR on Linux) and “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a network device. See also netstat(1). If BYNETIF_NET_TYPE is “ESXVLan”, then this metric will be N/A. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). 
Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. BYNETIF_ERROR_RATE ---------------------------------- The number of physical errors per second on the network interface during the interval. On Unix systems, this data is not available for loop-back (lo) devices and is always zero. If BYNETIF_NET_TYPE is “ESXVLan”, then this metric will be N/A. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. BYNETIF_ID ---------------------------------- The ID number of the network interface. BYNETIF_IN_BYTE ---------------------------------- The number of KBs received from the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate. If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_IN_BYTE_RATE ---------------------------------- The number of KBs per second received from the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate. If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. 
Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_IN_PACKET ---------------------------------- The number of successful physical packets received through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. For HP-UX, this will be the same as the sum of the “Inbound Unicast Packets” and “Inbound Non-Unicast Packets” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Ipkts” column (RX-OK on Linux) from the “netstat -i” command for a network device. See also netstat(1). If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_IN_PACKET_RATE ---------------------------------- The number of successful physical packets per second received through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_NAME ---------------------------------- The name of the network interface. For HP-UX 11.0 and beyond, these are the same names that appear in the “Description” field of the “lanadmin” command output. 
On all other Unix systems, these are the same names that appear in the “Name” column of the “netstat -i” command. Some examples of device names are: lo - loop-back driver ln - Standard Ethernet driver en - Standard Ethernet driver le - Lance Ethernet driver ie - Intel Ethernet driver tr - Token-Ring driver et - Ether Twist driver bf - fiber optic driver All of the device names will have the unit number appended to the name. For example, a loop-back device in unit 0 will be “lo0”. On vMA for Lan cards which are of type ESXVLan, this metric contains the vmnic as first half and the second half is the ESX host name. BYNETIF_NET_SPEED ---------------------------------- The speed of this interface. This is the bandwidth in Mega bits/sec. BYNETIF_NET_TYPE ---------------------------------- The type of network device the interface communicates through. Lan - local area network card Loop - software loopback interface (not tied to a hardware device) Loop6 - software loopback interface IPv6 (not tied to a hardware device) Serial - serial modem port Vlan - virtual lan Wan - wide area network card Tunnel - tunnel interface Apa - HP LinkAggregate Interface (APA) Other - hardware network interface type is unknown. ESXVLan - The card type belongs to network cards of ESX hosts which are monitored on vMA. BYNETIF_OUT_BYTE ---------------------------------- The number of KBs sent to the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate. If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_OUT_BYTE_RATE ---------------------------------- The number of KBs per second sent to the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate. If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. 
This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_OUT_PACKET ---------------------------------- The number of successful physical packets sent through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. For HP-UX, this will be the same as the sum of the “Outbound Unicast Packets” and “Outbound Non-Unicast Packets” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Opkts” column (TX-OK on Linux) from the “netstat -i” command for a network device. See also netstat(1). If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_OUT_PACKET_RATE ---------------------------------- The number of successful physical packets per second sent through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_PACKET_RATE ---------------------------------- The number of successful physical packets per second sent and received through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. 
The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_UTIL ---------------------------------- The percentage of bandwidth used with respect to the total available bandwidth on a given network interface at the end of the interval. On vMA this value will be N/A for those Lan cards which are of type ESXVLan. DATE ---------------------------------- The date the information in this record was captured, based on local time. The date is an ASCII field in mm/dd/yyyy format unless localized. If localized, the separators may be different and the subfield may be in a different sequence. In ASCII files this field will always contain 10 characters. Each subfield (mm, dd, yyyy) will contain a leading zero if the value is less than 10. This metric is extracted from GBL_STATTIME, which is obtained using the time() system call at the time of data collection. This field responds to language localization. For example, in Italy the field would appear as dd/mm/yyyy and in Japan it would be yyyy/mm/dd. In binary files this field is in MPE CALENDAR format in the least significant 16 bits of the field. The most significant 16 bits should all be zero. Dividing the field by 512 will isolate the year (that is, 94). This field MOD 512 will isolate the day of the year. DATE_SECONDS ---------------------------------- The time that the data in this record was captured, expressed in seconds since January 1, 1970, based on local time. This is related to the standard time- stamp returned by the unix system call time(), but has had the local time zone correction applied. DAY ---------------------------------- The julian day of the year that the data in this record was captured. This metric is extracted from GBL_STATTIME. FS_BLOCK_SIZE ---------------------------------- The maximum block size of this file system, in bytes. A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. FS_DEVNAME ---------------------------------- On Unix systems, this is the path name string of the current device. On Windows, this is the disk drive string of the current device. On HP-UX, this is the “fsname” parameter in the mount(1M) command. For NFS devices, this includes the name of the node exporting the file system. It is possible that a process may mount a device using the mount(2) system call. This call does not update the “/etc/mnttab” and its name is blank. This situation is rare, and should be corrected by syncer(1M). Note that once a device is mounted, its entry is displayed, even after the device is unmounted, until the midaemon process terminates. On SUN, this is the path name string of the current device, or “tmpfs” for memory based file systems. See tmpfs(7). FS_DEVNO ---------------------------------- On Unix systems, this is the major and minor number of the file system. 
On Windows, this is the unit number of the disk device on which the logical disk resides. The scope collector logs the value of this metric in decimal format. FS_DIRNAME ---------------------------------- On Unix systems, this is the path name of the mount point of the file system. On Windows, this is the drive letter associated with the selected disk partition. On HP-UX, this is the path name of the mount point of the file system if the logical volume has a mounted file system. This is the directory parameter of the mount(1M) command for most entries. Exceptions are: * For lvm swap areas, this field contains “lvm swap device”. * For logical volumes with no mounted file systems, this field contains “Raw Logical Volume” (relevant only to Perf Agent). On HP-UX, the file names are in the same order as shown in the “/usr/sbin/mount -p” command. File systems are not displayed until they exhibit IO activity once the midaemon has been started. Also, once a device is displayed, it continues to be displayed (even after the device is unmounted) until the midaemon process terminates. On SUN, only “UFS”, “HSFS” and “TMPFS” file systems are listed. See mount(1M) and mnttab(4). “TMPFS” file systems are memory based filesystems and are listed here for convenience. See tmpfs(7). On AIX, see mount(1M) and filesystems(4). On OSF1, see mount(2). FS_FRAG_SIZE ---------------------------------- The fundamental file system block size, in bytes. A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. FS_INODE_UTIL ---------------------------------- Percentage of this file system’s inodes in use during the interval. A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. FS_MAX_INODES ---------------------------------- Number of configured file system inodes. A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. FS_MAX_SIZE ---------------------------------- Maximum number that this file system could obtain if full, in MB. Note that this is the user space capacity - it is the file system space accessible to non root users. On most Unix systems, the df command shows the total file system capacity which includes the extra file system space accessible to root users only. The equivalent fields to look at are “used” and “avail”. For the target file system, to calculate the maximum size in MB, use FS Max Size = (used + avail)/1024 A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. On HP-UX, this metric is updated at 4 minute intervals to minimize collection overhead. FS_SPACE_RESERVED ---------------------------------- The amount of file system space in MBs reserved for superuser allocation. On AIX, this metric is typically zero for local filesystems because by default AIX does not reserve any file system space for the superuser. FS_SPACE_USED ---------------------------------- The amount of file system space in MBs that is being used. FS_SPACE_UTIL ---------------------------------- Percentage of the file system space in use during the interval. Note that this is the user space capacity - it is the file system space accessible to non root users. 
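As a worked example of the FS_MAX_SIZE calculation given above (and of the “user space capacity” note that also applies to FS_SPACE_UTIL), the following Python sketch uses hypothetical 1 KB block counts in place of the df “used” and “avail” fields; the utilization line is one plausible reading of the user-space percentage, not the collector’s own calculation:

  used_kb, avail_kb = 1843200, 204800                      # hypothetical df -k style values

  fs_max_size_mb = (used_kb + avail_kb) / 1024.0            # FS Max Size = (used + avail)/1024
  user_space_util = 100.0 * used_kb / (used_kb + avail_kb)  # percentage of user space in use

  print("FS_MAX_SIZE = %.0f MB, space used = %.1f%% of user space"
        % (fs_max_size_mb, user_space_util))                # 2000 MB, 90.0%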
On most Unix systems, the df command shows the total file system capacity which includes the extra file system space accessible to root users only. A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. On HP-UX, this metric is updated at 4 minute intervals to minimize collection overhead. FS_TYPE ---------------------------------- A string indicating the file system type. On Unix systems, some of the possible types are:
hfs - user file system
ufs - user file system
ext2 - user file system
cdfs - CD-ROM file system
vxfs - Veritas (vxfs) file system
nfs - network file system
nfs3 - network file system Version 3
On Windows, some of the possible types are:
NTFS - New Technology File System
FAT - 16-bit File Allocation Table
FAT32 - 32-bit File Allocation Table
FAT uses a 16-bit file allocation table entry (2^16 clusters). FAT32 uses a 32-bit file allocation table entry. However, Windows 2000 reserves the first 4 bits of a FAT32 file allocation table entry, which means FAT32 has a theoretical maximum of 2^28 clusters. NTFS is the native file system of Windows NT and beyond. GBL_ACTIVE_CPU ---------------------------------- The number of CPUs online on the system. For HP-UX and certain versions of Linux, the sar(1M) command allows you to check the status of the system CPUs. For SUN and DEC, the commands psrinfo(1M) and psradm(1M) allow you to check or change the status of the system CPUs. For AIX, the pstat(1) command allows you to check the status of the system CPUs. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment if RSET is not configured for the System WPAR. If RSET is configured for the System WPAR, this metric value will report the number of CPUs in the RSET. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_ACTIVE_CPU_CORE ---------------------------------- This metric provides the total number of active CPU cores on a physical system. GBL_ACTIVE_PROC ---------------------------------- An active process is one that exists and consumes some CPU time. GBL_ACTIVE_PROC is the sum of the alive-process-time/interval-time ratios of every process that is active (uses any CPU time) during an interval. The following diagram of a four second interval during which two processes exist on the system should be used to understand the above definition. Note the difference between active processes, which consume CPU time, and alive processes which merely exist on the system.

          ----------- Seconds -----------
Proc        1         2         3        4
----      ----      ----      ----     ----
A         live      live      live     live
B         live/CPU  live/CPU  live     dead

Process A is alive for the entire four second interval but consumes no CPU. A’s contribution to GBL_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to GBL_ACTIVE_PROC. B’s contribution to GBL_ALIVE_PROC is 3*1/4. B contributes 2*1/4 to GBL_ACTIVE_PROC. Thus, for this interval, GBL_ACTIVE_PROC equals 0.5 and GBL_ALIVE_PROC equals 1.75. Because a process may be alive but not active, GBL_ACTIVE_PROC will always be less than or equal to GBL_ALIVE_PROC. This metric is a good overall indicator of the workload of the system. An unusually large number of active processes could indicate a CPU bottleneck. To determine if the CPU is a bottleneck, compare this metric with GBL_CPU_TOTAL_UTIL and GBL_RUN_QUEUE. If GBL_CPU_TOTAL_UTIL is near 100 percent and GBL_RUN_QUEUE is greater than one, there is a bottleneck.
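The worked example above can be reproduced with the following Python sketch; the per-process alive and CPU seconds are the hypothetical A and B values from the diagram, and the active contribution follows the worked figures (seconds with CPU time divided by the interval):

  interval = 4.0                                  # seconds, as in the diagram
  processes = {"A": (4.0, 0.0),                   # (alive seconds, CPU seconds)
               "B": (3.0, 2.0)}

  gbl_alive_proc = sum(alive / interval for alive, cpu in processes.values())
  gbl_active_proc = sum(cpu / interval for alive, cpu in processes.values())

  print(gbl_alive_proc, gbl_active_proc)          # 1.75 and 0.5, matching the text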
On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise. GBL_ALIVE_PROC ---------------------------------- An alive process is one that exists on the system. GBL_ALIVE_PROC is the sum of the alive-process-time/interval-time ratios for every process. The following diagram of a four second interval during which two processes exist on the system should be used to understand the above definition. Note the difference between active processes, which consume CPU time, and alive processes which merely exist on the system.

          ----------- Seconds -----------
Proc        1         2         3        4
----      ----      ----      ----     ----
A         live      live      live     live
B         live/CPU  live/CPU  live     dead

Process A is alive for the entire four second interval but consumes no CPU. A’s contribution to GBL_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to GBL_ACTIVE_PROC. B’s contribution to GBL_ALIVE_PROC is 3*1/4. B contributes 2*1/4 to GBL_ACTIVE_PROC. Thus, for this interval, GBL_ACTIVE_PROC equals 0.5 and GBL_ALIVE_PROC equals 1.75. Because a process may be alive but not active, GBL_ACTIVE_PROC will always be less than or equal to GBL_ALIVE_PROC. On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise. GBL_APP_THRESHOLD ---------------------------------- appthreshold specifies the thresholds for APPLICATION class. This is the percentage of cpu being utilized by an application (APP_CPU_TOTAL_UTIL) during the interval. This threshold value is supplied by the parm file. An application must exceed this threshold value in any given interval before it will be considered interesting to be logged. GBL_BLOCKED_IO_QUEUE ---------------------------------- The average number of processes blocked on local disk resources (IO, paging). This metric is an indicator of disk contention among active processes. It should normally be a very small number. If GBL_DISK_UTIL_PEAK is near 100 percent and GBL_BLOCKED_IO_QUEUE is greater than 1, a disk bottleneck is probable. On SUN, this is the same as the “procs b” field reported in vmstat. On Solaris non-global zones, this metric shows data from the global zone. GBL_BOOT_TIME ---------------------------------- The date and time when the system was last booted. GBL_BYCPU_THRESHOLD ---------------------------------- bycputhreshold specifies the thresholds for CPU class. This is the percentage of time a cpu was busy (BYCPU_CPU_TOTAL_UTIL) during the interval. This threshold value is supplied by the parm file. A cpu must exceed this threshold value in any given interval before it will be considered interesting to be logged. GBL_BYDSK_THRESHOLD ---------------------------------- diskthreshold specifies the threshold for DISK class. This is the percentage of time that a disk was busy performing IO (BYDSK_UTIL) during the interval. This threshold value is supplied by the parm file.
A disk must exceed this threshold value in any given interval before it will be considered interesting enough to be logged.

GBL_BYFS_THRESHOLD
----------------------------------
fsthreshold specifies the threshold for the FILESYSTEM class. This is the percentage of file system space used (FS_SPACE_UTIL). This threshold value is supplied by the parm file. A file system must exceed this threshold value in any given interval before it will be considered interesting enough to be logged.

GBL_BYNETIF_THRESHOLD
----------------------------------
bynetifthreshold specifies the threshold for the NETIF class. This is the number of packets transferred per second during the interval (BYNETIF_PACKET_RATE). This threshold value is supplied by the parm file. A network interface must exceed this threshold value in any given interval before it will be considered interesting enough to be logged.

GBL_COLLECTOR
----------------------------------
An ASCII field containing the collector name and version. The collector name will appear as either “SCOPE/xx V.UU.FF.LF” or “Coda RV.UU.FF.LF”. xx identifies the platform; V = version, UU = update level, FF = fix level, and LF = lab fix id. For example, SCOPE/UX C.04.00.00 or Coda A.07.10.04.

GBL_COLLECT_INTERVAL
----------------------------------
The interval, in seconds, at which non-process metrics are collected. Collection intervals are set in the parm file.

GBL_COLLECT_INTERVAL_PROC
----------------------------------
The interval, in seconds, at which process metrics are collected. Collection intervals are set in the parm file.

GBL_COMPLETED_PROC
----------------------------------
The number of processes that terminated during the interval.
On non-HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus, this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise.

GBL_CPU_CLOCK
----------------------------------
The clock speed of the CPUs in MHz if all of the processors have the same clock speed. If the processors have different clock speeds, “na” is shown. Note that Linux supports dynamic frequency scaling; if it is enabled, the CPU speed can change with varying load.

GBL_CPU_ENTL
----------------------------------
In a virtual environment, this metric indicates the physical processor units allocated to this logical system. On AIX SPLPAR, this metric indicates the entitlement allocated by the Hypervisor to a logical system at the time of starting. This metric is equivalent to the “Entitled Capacity” field of the ‘lparstat -i’ command.
On a standalone system, the value of this metric is the same as GBL_NUM_CPU.

GBL_CPU_ENTL_MAX
----------------------------------
In a virtual environment, this metric indicates the maximum number of processing units configured for this logical system. On AIX SPLPAR, this metric is equivalent to the “Maximum Capacity” field of the ‘lparstat -i’ command.
On a recognized VMware ESX guest, the value is equivalent to GBL_CPU_CYCLE_ENTL_MAX represented in CPU units. On a recognized VMware ESX guest, where the VMware guest SDK is disabled, the value is “na”.
On a standalone system, the value is the same as GBL_NUM_CPU.
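These entitlement metrics feed the utilization formula given in the GBL_CPU_ENTL_UTIL entry below. A minimal numeric sketch (illustration only; the variable names are hypothetical, not agent interfaces):

  # Sketch of the AIX formula GBL_CPU_ENTL_UTIL = (GBL_CPU_PHYSC / GBL_CPU_ENTL) * 100
  # using hypothetical sample values.
  gbl_cpu_entl = 2.0     # entitled processing units ("Entitled Capacity" in lparstat -i)
  gbl_cpu_physc = 2.6    # physical processor units consumed during the interval

  gbl_cpu_entl_util = (gbl_cpu_physc / gbl_cpu_entl) * 100
  print(gbl_cpu_entl_util)   # 130.0 -- exceeding 100% is possible only on an Uncapped partition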
GBL_CPU_ENTL_MIN ---------------------------------- In a virtual environment, this metric indicates the minimum number of processing units configured for this Logical system. On AIX SPLPAR, this metric is equivalent to “Minimum Capacity” field of ‘lparstat -i’ command. On a recognized VMware ESX guest, where VMware guest SDK is enabled, the value is equivalent to GBL_CPU_CYCLE_ENTL_MIN represented in CPU units. On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is “na”. On a standalone system the value is same as GBL_NUM_CPU. GBL_CPU_ENTL_UTIL ---------------------------------- Percentage of entitled processing units (guaranteed processing units allocated to this logical system) consumed by the logical system. On an “Uncapped” logical system, this metric can exceed 100% if the processing units are available in the shared resource pool and the number of virtual CPUs are satisfied. On a Capped logical system this metric can never go beyond 100%. On AIX, this metric is calculated as: GBL_CPU_ENTL_UTIL = (GBL_CPU_PHYSC / GBL_CPU_ENTL) * 100 On a recognized VMware ESX guest, where VMware guest SDK is enabled, this metric is calculated as: GBL_CPU_ENTL_UTIL = (GBL_CPU_PHYSC / GBL_CPU_ENTL_MIN) * 100 On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is “na”. On a standalone system, the value is same as GBL_CPU_TOTAL_UTIL. GBL_CPU_HISTOGRAM ---------------------------------- Histogram of CPU utilization components. Shows breakout: GBL_CPU_TOTAL_UTIL = GBL_CPU_SYS_MODE_UTIL + GBL_CPU_USER_MODE_UTIL ASCII and BINARY files contain a line of ASCII characters that make up one row of a printed histogram. This can be a quick way to get a graphical view of CPU usage on a character-mode terminal display. GBL_CPU_IDLE_TIME ---------------------------------- The time, in seconds, that the CPU was idle during the interval. This is the total idle time, including waiting for I/O. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. On AIX System WPARs, this metric value is calculated against physical cpu time. On Solaris non-global zones, this metric is N/A. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_IDLE_UTIL ---------------------------------- The percentage of time that the CPU was idle during the interval. This is the total idle time, including waiting for I/O. On Unix systems, this is the same as the sum of the “%idle” and “%wio” fields reported by the “sar -u” command. On a system with multiple CPUs, this metric is normalized. 
That is, the CPU used over all processors is divided by the number of processors online. On Solaris non-global zones, this metric is N/A. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_MT_ENABLED ---------------------------------- On AIX, this metric indicates if this (Logical) System has SMT enabled or not. Other platforms, this metric shows either HyperThreading(HT) is Enabled or Disabled/Not Supported. On Linux, this state is dynamic: if HyperThreading is enabled but all the CPUs have only one logical processor enabled, this metric will report that HT is disabled. On AIX System WPARs, this metric is NA. On Windows, this metric will be “na” on Windows Server 2003 Itanium systems. GBL_CPU_NUM_THREADS ---------------------------------- The number of active CPU threads supported by the CPU architecture. The Linux kernel currently doesn’t provide any metadata information for disabled CPUs. This means that there is no way to find out types, speeds, as well as hardware IDs or any other information that is used to determine the number of cores, the number of threads, the HyperThreading state, etc... If the agent (or Glance) is started while some of the CPUs are disabled, some of these metrics will be “na”, some will be based on what is visible at startup time. All information will be updated if/when additional CPUs are enabled and information about them becomes available. The configuration counts will remain at the highest discovered level (i.e. if CPUs are then disabled, the maximum number of CPUs/cores/etc... will remain at the highest observed level). It is recommended that the agent be started with all CPUs enabled. On AIX System WPARs, this metric is NA. GBL_CPU_PHYSC ---------------------------------- The number of physical processors utilized by the logical system. On an Uncapped logical system (partition), this value will be equal to the physical processor capacity used by the logical system during the interval. This can be more than the value entitled for a logical system. On a standalone system the value is calculated based on GBL_CPU_TOTAL_UTIL GBL_CPU_PHYS_SYS_MODE_UTIL ---------------------------------- The percentage of time the physical CPU was in system mode (kernel mode) for the logical system during the interval. On AIX LPAR, this value is equivalent to “%sys” field reported by the “lparstat” command. On AIX System WPARs, this metric value is calculated against physical cpu time. GBL_CPU_PHYS_TOTAL_UTIL ---------------------------------- The percentage of time the available physical CPUs were not idle for this logical system during the interval. 
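A minimal numeric sketch of the AIX relationships described just below (illustration only; the variable names and values are hypothetical):

  # GBL_CPU_PHYS_TOTAL_UTIL = GBL_CPU_PHYS_USER_MODE_UTIL + GBL_CPU_PHYS_SYS_MODE_UTIL,
  # and the physical components sum to 100% together with wait and idle.
  phys_user_mode_util = 46.0
  phys_sys_mode_util = 14.0
  phys_wait_util = 5.0

  phys_total_util = phys_user_mode_util + phys_sys_mode_util     # 60.0
  phys_idle_util = 100.0 - phys_total_util - phys_wait_util      # 35.0
  print(phys_total_util, phys_idle_util)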
On AIX, this metric is calculated as:
GBL_CPU_PHYS_TOTAL_UTIL = GBL_CPU_PHYS_USER_MODE_UTIL + GBL_CPU_PHYS_SYS_MODE_UTIL
GBL_CPU_PHYS_TOTAL_UTIL + GBL_CPU_PHYS_WAIT_UTIL + GBL_CPU_PHYS_IDLE_UTIL = 100%
On Power5-based systems, traditional sample-based calculations cannot be made because the dispatch cycle of each virtual CPU is not the same. The Power5 processor therefore maintains a per-thread register, PURR. At every processor clock cycle, the PURR of the thread that is dispatching instructions (or of the thread that last dispatched an instruction) is incremented, so the count is distributed between the two threads. The Power5 processor also maintains two additional registers: the timebase, which is incremented at every tick, and the decrementer, which provides periodic interrupts. In a shared LPAR environment, the PURR is equal to the time that a virtual processor has spent on a physical processor. The Hypervisor maintains a virtual timebase, which is the sum of the two PURRs.
On a Capped shared logical system (partition), the metric GBL_CPU_PHYS_USER_MODE_UTIL is calculated as follows:
(delta PURR in user mode / entitlement) * 100
On an Uncapped shared logical system (partition):
(delta PURR in user mode / entitlement consumed) * 100
The other utilizations, such as GBL_CPU_PHYS_SYS_MODE_UTIL and GBL_CPU_PHYS_WAIT_UTIL, are calculated similarly.
On a standalone system, the value will be equivalent to GBL_CPU_TOTAL_UTIL.
On AIX System WPARs, this metric value is calculated against physical CPU time.

GBL_CPU_PHYS_USER_MODE_UTIL
----------------------------------
The percentage of time the physical CPU was in user mode for the logical system during the interval. On AIX LPAR, this value is equivalent to the “%user” field reported by the “lparstat” command.
On AIX System WPARs, this metric value is calculated against physical CPU time.

GBL_CPU_QUEUE
----------------------------------
A snapshot of the number of processes using the CPU plus all processes blocked on priority (waiting for their priority to become high enough to get the CPU) during the last sub-procinterval. The value represents the queue length during the last sub-procinterval. It is calculated for the last sub-procinterval because the most recent count of processes running and blocked on priority can be obtained only during the last sub-procinterval.
To determine if the CPU is a bottleneck, compare this metric with GBL_CPU_TOTAL_UTIL. If GBL_CPU_TOTAL_UTIL is near 100 percent and GBL_CPU_QUEUE is greater than four, there is a high probability of a CPU bottleneck.
This metric also accounts for GBL_PRI_QUEUE, and its value is always greater than GBL_PRI_QUEUE.

GBL_CPU_SHARES_PRIO
----------------------------------
The weight (priority) assigned to an Uncapped logical system. This value determines the minimum share of unutilized processing units that this logical system can utilize.
On AIX SPLPAR, this value depends on the available processing units in the pool and can range from 0 to 255.
On a recognized VMware ESX guest, this value can range from 1 to 100000.
On a standalone system, the value will be “na”.

GBL_CPU_SYS_MODE_TIME
----------------------------------
The time, in seconds, that the CPU was in system mode during the interval.
A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode.
When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On AIX System WPARs, this metric value is calculated against physical cpu time. On Hyper-V host, this metric indicates the time spent in Hypervisor code. GBL_CPU_SYS_MODE_UTIL ---------------------------------- Percentage of time the CPU was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. This metric is a subset of the GBL_CPU_TOTAL_UTIL percentage. This is NOT a measure of the amount of time used by system daemon processes, since most system daemons spend part of their time in user mode and part in system calls, like any other process. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. High system mode CPU percentages are normal for IO intensive applications. Abnormally high system mode CPU percentages can indicate that a hardware problem is causing a high interrupt rate. 
It can also indicate programs that are not calling system calls efficiently. On a logical system, this metric indicates the percentage of time the logical processor was in kernel mode during this interval. On Hyper-V host, this metric indicates the percentage of time spent in Hypervisor code. GBL_CPU_TOTAL_TIME ---------------------------------- The total time, in seconds, that the CPU was not idle in the interval. This is calculated as GBL_CPU_TOTAL_TIME = GBL_CPU_USER_MODE_TIME + GBL_CPU_SYS_MODE_TIME On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On AIX System WPARs, this metric value is calculated against physical cpu time. GBL_CPU_TOTAL_UTIL ---------------------------------- Percentage of time the CPU was not idle during the interval. This is calculated as GBL_CPU_TOTAL_UTIL = GBL_CPU_USER_MODE_UTIL + GBL_CPU_SYS_MODE_UTIL On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_TOTAL_UTIL + GBL_CPU_IDLE_UTIL = 100% This metric varies widely on most systems, depending on the workload. A consistently high CPU utilization can indicate a CPU bottleneck, especially when other indicators such as GBL_RUN_QUEUE and GBL_ACTIVE_PROC are also high. High CPU utilization can also occur on systems that are bottlenecked on memory, because the CPU spends more time paging and swapping. NOTE: On Windows, this metric may not equal the sum of the APP_CPU_TOTAL_UTIL metrics. Microsoft states that “this is expected behavior” because this GBL_CPU_TOTAL_UTIL metric is taken from the performance library Processor objects while the APP_CPU_TOTAL_UTIL metrics are taken from the Process objects. Microsoft states that there can be CPU time accounted for in the Processor system objects that may not be seen in the Process objects. On a logical system, this metric indicates the logical utilization with respect to number of processors available for the logical system (GBL_NUM_CPU). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. 
This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_USER_MODE_TIME ---------------------------------- The time, in seconds, that the CPU was in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On AIX System WPARs, this metric value is calculated against physical cpu time. On Hyper-V host, this metric indicates the time spent in guest code. GBL_CPU_USER_MODE_UTIL ---------------------------------- The percentage of time the CPU was in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. This metric is a subset of the GBL_CPU_TOTAL_UTIL percentage. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. 
To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. High user mode CPU percentages are normal for computation-intensive applications. Low values of user CPU utilization compared to relatively high values for GBL_CPU_SYS_MODE_UTIL can indicate an application or hardware problem. On a logical system, this metric indicates the percentage of time the logical processor was in user mode during this interval. On Hyper-V host, this metric indicates the percentage of time spent in guest code. GBL_CPU_WAIT_TIME ---------------------------------- The time, in seconds, that the CPU was idle and there were processes waiting for physical IOs to complete during the interval. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On AIX System WPARs, this metric value is calculated against physical cpu time. On Solaris non-global zones, this metric is N/A. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On Linux, wait time includes CPU steal time. Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’. GBL_CPU_WAIT_UTIL ---------------------------------- The percentage of time during the interval that the CPU was idle and there were processes waiting for physical IOs to complete. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On Solaris non-global zones, this metric is N/A. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). 
To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On Linux, wait time includes CPU steal time. Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’. GBL_CSWITCH_RATE ---------------------------------- The average number of context switches per second during the interval. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. On Windows, this includes switches from one thread to another either inside a single process or across processes. A thread switch can be caused either by one thread asking another for information or by a thread being preempted by another higher priority thread becoming ready to run. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_DISK_BLOCK_IO ---------------------------------- The total number of block IOs during the interval. On SUN, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These do include the IO of the inode (system write) and the file system data IO. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_DISK_BLOCK_IO_RATE ---------------------------------- The total number of block IOs per second during the interval. On SUN, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). 
On AIX, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These do include the IO of the inode (system write) and the file system data IO. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_DISK_BLOCK_READ ---------------------------------- The number of block reads during the interval. On SUN, these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical reads generated by file system access and do not include virtual memory reads, or reads relating to raw disk access. These do include the read of the inode (system read) and the file data read. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_DISK_BLOCK_READ_RATE ---------------------------------- The number of block reads per second during the interval. On SUN, these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical reads generated by file system access and do not include virtual memory reads, or reads relating to raw disk access. These do include the read of the inode (system read) and the file data read. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_DISK_BLOCK_WRITE ---------------------------------- The number of block writes during the interval. On SUN, these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). 
On AIX, these are physical writes generated by file system access and do not include virtual memory writes, or writes relating to raw disk access. These do include the write of the inode (system write) and the file system data write. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_DISK_BLOCK_WRITE_RATE ---------------------------------- The number of block writes per second during the interval. On SUN, these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical writes generated by file system access and do not include virtual memory writes, or writes relating to raw disk access. These do include the write of the inode (system write) and the file system data write. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_DISK_HISTOGRAM ---------------------------------- Histogram of physical Disk IO rate components. On HP-UX, this shows a breakout of: GBL_DISK_PHYS_IO_RATE = GBL_DISK_VM_IO_RATE + GBL_DISK_SYSTEM_IO_RATE + GBL_DISK_FS_IO_RATE + GBL_DISK_RAW_IO_RATE On SUN systems, this shows a breakout of: GBL_DISK_PHYS_IO_RATE = GBL_DISK_BLOCK_READ_RATE + GBL_DISK_BLOCK_WRITE_RATE + GBL_DISK_RAW_READ_RATE + GBL_DISK_RAW_WRITE_RATE + GBL_DISK_VM_IO_RATE On the remaining Unix systems, this shows a breakout of: GBL_DISK_PHYS_IO_RATE = GBL_DISK_BLOCK_IO_RATE + GBL_DISK_VM_IO_RATE + GBL_DISK_RAW_IO_RATE On Windows, this shows a breakout of: GBL_DISK_PHYS_IO_RATE = GBL_DISK_PHYS_READ_RATE + GBL_DISK_PHYS_WRITE_RATE ASCII and BINARY files contain a line of ASCII characters that make up one row of a printed histogram. This can be a quick way to get a graphical view of Disk usage on a character-mode terminal display. GBL_DISK_PATH_COUNT ---------------------------------- The number of paths available to the disks on the system. This metric is only valid on aix VIO servers. GBL_DISK_PHYS_BYTE ---------------------------------- The number of KBs transferred to and from disks during the interval. The bytes for all types of physical IOs are counted. Only local disks are counted in this measurement. NFS devices are excluded. It is not directly related to the number of IOs, since IO requests can be of differing lengths. On Unix systems, this includes file system IO, virtual memory IO, and raw IO. On Windows, all types of physical IOs are counted. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. 
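To relate this interval total to the per-second rate described in the next entry (GBL_DISK_PHYS_BYTE_RATE), here is a rough sketch with hypothetical values, assuming the rate is simply the interval total divided by the interval length in seconds:

  # Illustration only, with hypothetical values.
  gbl_interval = 300            # seconds in the collection interval (see GBL_INTERVAL)
  gbl_disk_phys_byte = 46080    # KB transferred to and from local disks in that interval

  gbl_disk_phys_byte_rate = gbl_disk_phys_byte / gbl_interval
  print(gbl_disk_phys_byte_rate)   # 153.6 KB per second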
GBL_DISK_PHYS_BYTE_RATE ---------------------------------- The average number of KBs per second at which data was transferred to and from disks during the interval. The bytes for all types physical IOs are counted. Only local disks are counted in this measurement. NFS devices are excluded. This is a measure of the physical data transfer rate. It is not directly related to the number of IOs, since IO requests can be of differing lengths. This is an indicator of how much data is being transferred to and from disk devices. Large spikes in this metric can indicate a disk bottleneck. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw reads. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_IO ---------------------------------- The number of physical IOs during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk IOs are counted, including file system IO, virtual memory IO and raw IO. On HP-UX, this is calculated as GBL_DISK_PHYS_IO = GBL_DISK_FS_IO + GBL_DISK_VM_IO + GBL_DISK_SYSTEM_IO + GBL_DISK_RAW_IO On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_IO_RATE ---------------------------------- The number of physical IOs per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk IOs are counted, including file system IO, virtual memory IO and raw IO. On HP-UX, this is calculated as GBL_DISK_PHYS_IO_RATE = GBL_DISK_FS_IO_RATE + GBL_DISK_VM_IO_RATE + GBL_DISK_SYSTEM_IO_RATE + GBL_DISK_RAW_IO_RATE On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_READ ---------------------------------- The number of physical reads during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On HP-UX, there are many reasons why there is not a direct correlation between the number of logical IOs and physical IOs. For example, small sequential logical reads may be satisfied from the buffer cache, resulting in fewer physical IOs than logical IOs. 
Conversely, large logical IOs or small random IOs may result in more physical than logical IOs. Logical volume mappings, logical disk mirroring, and disk striping also tend to remove any correlation. On HP-UX, this is calculated as GBL_DISK_PHYS_READ = GBL_DISK_FS_READ + GBL_DISK_VM_READ + GBL_DISK_SYSTEM_READ + GBL_DISK_RAW_READ On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_READ_BYTE_RATE ---------------------------------- The average number of KBs transferred from the disk per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_READ_PCT ---------------------------------- The percentage of physical reads of total physical IO during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_READ_RATE ---------------------------------- The number of physical reads per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On HP-UX, this is calculated as GBL_DISK_PHYS_READ_RATE = GBL_DISK_FS_READ_RATE + GBL_DISK_VM_READ_RATE + GBL_DISK_SYSTEM_READ_RATE + GBL_DISK_RAW_READ_RATE On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_WRITE ---------------------------------- The number of physical writes during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. 
On HP-UX, there are many reasons why there is not a direct correlation between logical IOs and physical IOs. For example, small logical writes may end up entirely in the buffer cache, and later generate fewer physical IOs when written to disk due to the larger IO size. Or conversely, small logical writes may require physical prefetching of the corresponding disk blocks before the data is merged and posted to disk. Logical volume mappings, logical disk mirroring, and disk striping also tend to remove any correlation. On HP-UX, this is calculated as GBL_DISK_PHYS_WRITE = GBL_DISK_FS_WRITE + GBL_DISK_VM_WRITE + GBL_DISK_SYSTEM_WRITE + GBL_DISK_RAW_WRITE On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_WRITE_BYTE_RATE ---------------------------------- The average number of KBs transferred to the disk per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_WRITE_RATE ---------------------------------- The number of physical writes per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On HP-UX, this is calculated as GBL_DISK_PHYS_WRITE_RATE = GBL_DISK_FS_WRITE_RATE + GBL_DISK_VM_WRITE_RATE + GBL_DISK_SYSTEM_WRITE_RATE + GBL_DISK_RAW_WRITE_RATE On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_RAW_IO ---------------------------------- The total number of raw reads and writes during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Sun, tape drive accesses are included in raw IOs, but not in physical IOs. To determine if raw IO is tape access versus disk access, compare the global physical disk accesses to the total raw, block, and vm IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. 
Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_DISK_RAW_IO_RATE ---------------------------------- The total number of raw reads and writes per second during the interval. Only accesses to local disk devices are counted. On Sun, tape drive accesses are included in raw IOs, but not in physical IOs. To determine if raw IO is tape access versus disk access, compare the global physical disk accesses to the total raw, block, and vm IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_DISK_RAW_READ ---------------------------------- The number of raw reads during the interval. Only accesses to local disk devices are counted. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_DISK_RAW_READ_RATE ---------------------------------- The number of raw reads per second during the interval. Only accesses to local disk devices are counted. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_DISK_RAW_WRITE ---------------------------------- The number of raw writes during the interval. Only accesses to local disk devices are counted. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_DISK_RAW_WRITE_RATE ---------------------------------- The number of raw writes per second during the interval. Only accesses to local disk devices are counted. On Sun, tape drive accesses are included in raw IOs, but not in physical IOs. To determine if raw IO is tape access versus disk access, compare the global physical disk accesses to the total raw, block, and vm IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_DISK_REQUEST_QUEUE ---------------------------------- The total length of all of the disk queues at the end of the interval. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_TIME_PEAK ---------------------------------- The time, in seconds, during the interval that the busiest disk was performing IO transfers. 
This is for the busiest disk only, not all disk devices. This counter is based on an end-to-end measurement for each IO transfer updated at queue entry and exit points. Only local disks are counted in this measurement. NFS devices are excluded. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_UTIL_PEAK ---------------------------------- The utilization of the busiest disk during the interval. On HP-UX, this is the percentage of time during the interval that the busiest disk device had IO in progress from the point of view of the Operating System. On all other systems, this is the percentage of time during the interval that the busiest disk was performing IO transfers. It is not an average utilization over all the disk devices. Only local disks are counted in this measurement. NFS devices are excluded. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. A peak disk utilization of more than 50 percent often indicates a disk IO subsystem bottleneck situation. A bottleneck may not be in the physical disk drive itself, but elsewhere in the IO path. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_VM_IO ---------------------------------- The total number of virtual memory IOs made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the IOs to user file data are not included in this metric unless they were done via the mmap(2) system call. On SUN, when a file is accessed, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On SUN, this metric is calculated by subtracting raw and block IOs from physical IOs. Tape drive accesses are included in the raw IOs, but not in the physical IOs. Therefore, when tape drive accesses are occurring on a system, all virtual memory and raw IO is counted as raw IO. For example, you may see heavy raw IO occurring during system backup. Raw IOs for disks are counted in the physical IOs. To determine if the raw IO is tape access versus disk access, compare the global physical disk accesses to the total of raw, block, and VM IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_VM_IO_RATE ---------------------------------- The number of virtual memory IOs per second made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the IOs to user file data are not included in this metric unless they were done via the mmap(2) system call. On SUN, when a file is accessed, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. 
File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On SUN, this metric is calculated by subtracting raw and block IOs from physical IOs. Tape drive accesses are included in the raw IOs, but not in the physical IOs. Therefore, when tape drive accesses are occurring on a system, all virtual memory and raw IO is counted as raw IO. For example, you may see heavy raw IO occurring during system backup. Raw IOs for disks are counted in the physical IOs. To determine if the raw IO is tape access versus disk access, compare the global physical disk accesses to the total of raw, block, and VM IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_VM_READ ---------------------------------- The number of virtual memory reads made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the reads to user file data are not included in this metric unless they were accessed via the mmap(2) system call. On AIX System WPARs, this metric is NA. GBL_DISK_VM_READ_RATE ---------------------------------- The number of virtual memory reads per second made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the reads to user file data are not included in this metric unless they were accessed via the mmap(2) system call. On AIX System WPARs, this metric is NA. GBL_DISK_VM_WRITE ---------------------------------- The number of virtual memory writes made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the writes to user file data are not included in this metric unless they were done via the mmap(2) system call. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On AIX System WPARs, this metric is NA. GBL_DISK_VM_WRITE_RATE ---------------------------------- The number of virtual memory writes per second made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the writes to user file data are not included in this metric unless they were done via the mmap(2) system call. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On AIX System WPARs, this metric is NA. GBL_FLUSH ---------------------------------- Flush specifies the interval, in seconds, at which scope logs the application and device data classes even though the data does not meet the threshold conditions being set. Flush parameter is set in parm file. GBL_FS_SPACE_UTIL_PEAK ---------------------------------- The percentage of occupied disk space to total disk space for the fullest file system found during the interval. Only locally mounted file systems are counted in this metric. This metric can be used as an indicator that at least one file system on the system is running out of disk space. On Unix systems, CDROM and PC file systems are also excluded. This metric can exceed 100 percent. 
This is because a portion of the file system space is reserved as a buffer and can only be used by root. If the root user has made the file system grow beyond the reserved buffer, the utilization will be greater than 100 percent. This is a dangerous situation because, if the root user completely fills the file system, the system may crash. On Windows, CDROM file systems are also excluded. On Solaris non-global zones, this metric shows data from the global zone. GBL_GMTOFFSET ---------------------------------- The difference, in minutes, between local time and GMT (Greenwich Mean Time). GBL_HYP_UTIL ---------------------------------- The percentage of time this partition spent in the Hypervisor during the interval, with respect to system mode utilization. GBL_IGNORE_MT ---------------------------------- This boolean value indicates whether the CPU normalization is on or off. If the metric value is “true”, CPU related metrics in the global class will report values which are normalized against the number of active cores on the system. If the metric value is “false”, CPU related metrics in the global class will report values which are normalized against the number of CPU threads on the system. If CPU MultiThreading is turned off, this configuration option is a no-op and the metric value will be “true”. On Linux, this metric will only report “true” if this configuration is on and if the kernel provides enough information to determine whether MultiThreading is turned on. On HP-UX, this metric will report “na” if the processor doesn’t support the feature. GBL_INTERRUPT ---------------------------------- The number of IO interrupts during the interval. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_INTERRUPT_RATE ---------------------------------- The average number of IO interrupts per second during the interval. On HP-UX and SUN, this value includes clock interrupts. To get non-clock device interrupts, subtract clock interrupts from the value. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_INTERVAL ---------------------------------- The amount of time in the interval. This measured interval is slightly larger than the desired or configured interval if the collection program is delayed by a higher priority process and cannot sample the data immediately. GBL_JAVAARG ---------------------------------- This boolean value indicates whether the java class overloading mechanism is enabled. This metric will be set when the javaarg flag in the parm file is set. The metric affected by this setting is PROC_PROC_ARGV1. This setting is useful for constructing parm file java application definitions using the argv1= keyword. GBL_LOADAVG ---------------------------------- The 1 minute load average of the system obtained at the time of logging. On Windows, this is the load average of the system over the interval. The load average on Windows is the average number of threads that have been waiting in the ready state during the interval. This is obtained by checking the number of threads in the ready state at every sub-proc interval, accumulating them over the interval, and averaging over the interval. On Solaris non-global zones, this metric shows data from the global zone. GBL_LOADAVG15 ---------------------------------- The 15 minute load average of the system obtained at the time of logging. GBL_LOADAVG5 ---------------------------------- The 5 minute load average of the system obtained at the time of logging.
On Solaris non-global zones, this metric shows data from the global zone. GBL_LOGFILE_VERSION ---------------------------------- A three-byte ASCII field containing the log file version number. The log file version is assigned by scopeux and is incremented when changes to the log file cause the layout to be different from previous versions. The current version is “ D”. Every effort is made to protect the information investment maintained in historical log files by providing forward compatibility and/or conversion utilities when log files change. GBL_LOGGING_TYPES ---------------------------------- A 13-byte field indicating the types of data logged by the collector. This is controlled by the LOG statement in the parm file. Each position will contain either a space or the characters as shown below. Note that positions two (all applications) and four (all processes) were implemented for HP internal use only and are not normally used outside of HP. An @ in position two indicates that all applications are logged at each five-minute interval even if they had no activity during the interval. An @ in position four indicates that all processes, not just the interesting ones, are logged at each one-minute interval. This can result in very large log files. An @ in position 6 indicates that all devices (File System Device, Disk, CPU, LAN, Logical Volume) are logged.
Position  Char   Meaning
1         G      Global data
2         @      All applications
3         A      Applications
4         @      All processes
5         P      Interesting processes
6         @      All Devices
7         F      File System Device
8         D      Disk
9         C      CPU
10        L      LAN
11        V      Logical Volume
12        T      Transaction data
13        space  Not used
By default, global, interesting process, and LAN data are logged, in which case this field would be “ G P L”. (A minimal decoding sketch appears after GBL_LS_NUM_CAPPED below.) GBL_LOST_MI_TRACE_BUFFERS ---------------------------------- The number of trace buffers lost by the measurement processing daemon. On HP-UX systems, if this value is > 0, the measurement subsystem is not keeping up with the system events that generate traces. For other Unix systems, if this value is > 0, the measurement subsystem is not keeping up with the ARM API calls that generate traces. Note: The value reported for this metric will roll over to 0 once it crosses INTMAX. GBL_LS_CPU_NUM_DEDICATED ---------------------------------- Number of processor units in dedicated partitions. This metric is with respect to the partitions which are responding over the network. On AIX System WPARs, this metric is NA. GBL_LS_CPU_NUM_SHARED ---------------------------------- Number of processor units in shared partitions. This metric is with respect to the partitions which are responding over the network. On AIX System WPARs, this metric is NA. GBL_LS_ID ---------------------------------- On AIX LPAR, this metric indicates the partition number and is equivalent to the “Partition Number” field of the ‘lparstat -i’ command. On a standalone system, the value of this metric is ‘na’. On AIX System WPARs, this metric is NA. GBL_LS_MODE ---------------------------------- Indicates whether the CPU entitlement for the logical system is Capped or Uncapped. The value “Uncapped” indicates that the logical system can utilize idle cycles from the shared processor pool of CPUs beyond its CPU entitlement. On AIX SPLPAR, this metric is the same as the “Mode” field of the ‘lparstat -i’ command. GBL_LS_NUM_CAPPED ---------------------------------- Number of Capped shared partitions. This metric is with respect to the partitions which are responding over the network. On AIX System WPARs, this metric is NA.
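The following minimal Python sketch illustrates how the GBL_LOGGING_TYPES field described above can be decoded. It is only an illustration: the function name and the sample value are hypothetical, the position meanings are taken from the table above, and the sketch assumes one character per position (the default value shown in this document may have had its spacing collapsed).
    # Illustrative only: decode a GBL_LOGGING_TYPES value into the data types
    # being logged. Position meanings follow the table in the metric definition.
    POSITION_MEANINGS = {
        1: "Global data",
        2: "All applications",
        3: "Applications",
        4: "All processes",
        5: "Interesting processes",
        6: "All Devices",
        7: "File System Device",
        8: "Disk",
        9: "CPU",
        10: "LAN",
        11: "Logical Volume",
        12: "Transaction data",
    }

    def decode_logging_types(field):
        """Return the data types indicated by the non-blank positions of the field."""
        field = field.ljust(13)  # pad so missing trailing blanks are harmless
        return [meaning for pos, meaning in POSITION_MEANINGS.items()
                if field[pos - 1] != " "]

    # Hypothetical default: global data (position 1), interesting processes (5), LAN (10).
    print(decode_logging_types("G   P    L"))
    # -> ['Global data', 'Interesting processes', 'LAN']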
GBL_LS_NUM_DEDICATED ---------------------------------- Number of partitions which have dedicated processors. This metric is with respect to the partitions which are responding over the network. On AIX System WPARs, this metric is NA. GBL_LS_NUM_SHARED ---------------------------------- Number of partitions which share the processors. This metric is with respect to the partitions which are responding over the network. On AIX System WPARs, this metric is NA. GBL_LS_NUM_UNCAPPED ---------------------------------- Number of Uncapped shared partitions. This metric is with respect to the partitions which are responding over the network. On AIX System WPARs, this metric is NA. GBL_LS_PHYS_MEM_CONSUMED ---------------------------------- The physical memory (in MBs) that is consumed by partitions. This metric is with respect to the partitions which are responding over the network. On AIX System WPARs, this metric is NA. GBL_LS_PHYS_MEM_TOTAL ---------------------------------- Total physical memory (in MBs) allotted across all the partitions. This metric is with respect to the partitions which are responding over the network. On AIX System WPARs, this metric is NA. GBL_LS_ROLE ---------------------------------- Indicates whether Perf Agent is installed on a logical system, a host, or a standalone system. This metric will be either “GUEST”, “HOST” or “STAND”. GBL_LS_SHARED ---------------------------------- In a virtual environment, this metric indicates whether the physical CPUs are dedicated to this Logical system or shared. On AIX SPLPAR, this metric is equivalent to the “Type” field of the ‘lparstat -i’ command. On a recognized VMware ESX guest, where VMware guest SDK is enabled, the value is “Shared”. On a standalone system, the value of this metric is “Dedicated”. On AIX System WPARs, this metric is NA. GBL_LS_TYPE ---------------------------------- The virtualization technology, if applicable. The value of this metric is “HPVM” on HP-UX host, “LPAR” on AIX LPAR, “Sys WPAR” on system WPAR, “Zone” on Solaris Zones, “VMware” on recognized VMware ESX guest and VMware ESX Server console, “Hyper-V” on Hyper-V host, else “NoVM”. In conjunction with GBL_LS_ROLE, this metric could be used to identify the environment in which Perf Agent/Glance is running. For example, if GBL_LS_ROLE is “Guest” and GBL_LS_TYPE is “VMware” then PA/Glance is running on a VMware Guest. GBL_MACHINE ---------------------------------- An ASCII string representing the processor architecture. The machine hardware model is represented by the GBL_MACHINE_MODEL metric. GBL_MACHINE_MODEL ---------------------------------- The CPU model. This is similar to the information returned by the GBL_MACHINE metric and the uname command (except for Solaris 10 x86/x86_64). However, this metric returns more information on some processors. On HP-UX, this is the same information returned by the model command. GBL_MEM_ACTIVE_VIRT ---------------------------------- The total virtual memory (in MBs unless otherwise specified) allocated for processes that are currently on the run queue or processes that have executed recently. This is the sum of the virtual memory sizes of the data and stack regions for these processes. On HP-UX, this is the sum of the virtual memory of all processes which have had a thread run in the last 20 seconds. On AIX System WPARs, this metric is NA. GBL_MEM_AVAIL ---------------------------------- The amount of available physical memory in the system (in MBs unless otherwise specified).
On Windows, memory resident operating system code and data is not included as available memory. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_CACHE_HIT_PCT ---------------------------------- On HP-UX, the percentage of buffer cache reads resolved from the buffer cache (rather than going to disk) during the interval. Buffer cache reads can occur as a result of a logical read (for example, file read system call), a read generated by a client, a read-ahead on behalf of a logical read or a system procedure. On HP-UX, this metric is obtained by measuring the number of buffered read calls that were satisfied by the data that was in the file system buffer cache. Reads to filesystem file buffers that are not in the buffer cache result in disk IO. Reads to raw IO and virtual memory IO (including memory mapped files), do not go through the filesystem buffer cache, and so are not relevant to this metric. On HP-UX, a low cache hit rate may indicate low efficiency of the buffer cache, either because applications have poor data locality or because the buffer cache is too small. Overly large buffer cache sizes can lead to a memory bottleneck. The buffer cache should be sized small enough so that pageouts do not occur even when the system is busy. However, in the case of VxFS, all memory-mapped IOs show up as page ins/page outs and are not a result of memory pressure. On AIX, the percentage of disk reads that were satisfied in the file system buffer cache (rather than going to disk) during the interval. On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. On the remaining Unix systems, this is the percentage of logical reads satisfied in memory (rather than going to disk) during the interval. This includes inode, indirect block and cylinder group related disk reads, plus file reads from files memory mapped by the virtual memory IO system. On Windows, this is the percentage of buffered reads satisfied in the buffer cache (rather than going to disk) during the interval. This metric is obtained by measuring the number of buffered read calls that were satisfied by the data that was in the system buffer cache. Reads that are not in the buffer cache result in disk IO. Unbuffered IO and virtual memory IO (including memory mapped files), are not counted in this metric. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_MEM_CACHE_UTIL ---------------------------------- The percentage of physical memory used by the buffer cache during the interval. On HP-UX 11i v2 and below, the buffer cache is a memory pool used by the system to stage disk IO data for the driver. On HP-UX 11i v3 and above this metric value represents the usage of the file system buffer cache which is still being used for file system metadata. On SUN, this percentage is based on calculating the buffer cache size by multiplying the system page size times the number of buffer headers (nbuf). For example, on a SPARCstation 10 the buffer size is usually (200 (page size buffers) * 4096 (bytes/page) = 800 KB). 
On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The “nbuf” value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On AIX, this value should be minimal since most disk IOs are done through memory mapped files. On Windows, the value reports ‘copy read hit %’ and ‘Pin read hit %’. GBL_MEM_ENTL_MAX ---------------------------------- In a virtual environment, this metric indicates the maximum amount of memory configured for this logical system. The value is -3 if entitlement is ‘Unlimited’ for this logical system. On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is “na”. On Solaris non-global zones, this metric value is equivalent to the ‘capped-memory’ value from the ‘zonecfg -z zonename info’ command. On a standalone system, this metric is equivalent to GBL_MEM_PHYS. GBL_MEM_ENTL_MIN ---------------------------------- In a virtual environment, this metric indicates the minimum amount of memory configured for this logical system. On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is “na”. On a standalone system, this metric is equivalent to GBL_MEM_PHYS. GBL_MEM_ENTL_UTIL ---------------------------------- In a virtual environment, this metric indicates the maximum amount of memory utilized against the memory configured for this logical system. GBL_MEM_FILE_PAGEIN_RATE ---------------------------------- The number of page ins from the file system per second during the interval. On Solaris, this is the same as the “fpi” value from the “vmstat -p” command, divided by page size in KB. On Linux, the value is reported in kilobytes and matches the ‘io/bi’ values from vmstat. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_FILE_PAGEOUT_RATE ---------------------------------- The number of page outs to the file system per second during the interval. On Solaris, this is the same as the “fpo” value from the “vmstat -p” command, divided by page size in KB. On Linux, the value is reported in kilobytes and matches the ‘io/bo’ values from vmstat. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_FREE ---------------------------------- The amount of memory not allocated (in MBs unless otherwise specified). As this value drops, the likelihood increases that swapping or paging out to disk may occur to satisfy new memory requests. On SUN, low values for this metric may not indicate a true memory shortage. This metric can be influenced by the VMM (Virtual Memory Management) system. On uncapped Solaris zones, the metric indicates the amount of memory that is available across the whole system that is not consumed by the global zone and other non-global zones. In the case of capped Solaris zones, the metric indicates the amount of memory that is not consumed by this zone against the memory cap set.
On Linux, this metric is the sum of ‘free’ and ‘cached’ memory. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. Locality Domain metrics are available on HP-UX 11iv2 and above. GBL_MEM_FREE and LDOM_MEM_FREE, as well as the memory utilization metrics derived from them, may not always fully match. GBL_MEM_FREE represents free memory in the kernel’s reservation layer while LDOM_MEM_FREE shows actual free pages. If memory has been reserved but not actually consumed from the Locality Domains, the two values won’t match. Because GBL_MEM_FREE includes pre-reserved memory, the GBL_MEM_* metrics are a better indicator of actual memory consumption in most situations. GBL_MEM_FREE_UTIL ---------------------------------- The percentage of physical memory that was free at the end of the interval. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_ONLINE ---------------------------------- In a virtual environment, this metric indicates the amount of memory currently online for this logical system. For AIX WPARs, this metric will be “na”. GBL_MEM_PAGEIN ---------------------------------- The total number of page ins from the disk during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On HP-UX, this is the same as the “page ins” value from the “vmstat -s” command. On AIX, this is the same as the “paging space page ins” value. Remember that “vmstat -s” reports cumulative counts. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PAGEIN_BYTE ---------------------------------- The number of KBs (or MBs if specified) of page ins during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_BYTE_RATE ---------------------------------- The number of KBs per second of page ins during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_RATE ---------------------------------- The total number of page ins per second from the disk during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On HP-UX and AIX, this is the same as the “pi” value from the vmstat command. On Solaris, this is the same as the sum of the “epi” and “api” values from the “vmstat -p” command, divided by the page size in KB. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PAGEOUT ---------------------------------- The total number of page outs to the disk during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems.
On Windows, this includes paging activity for both file systems and paging space. On HP-UX, this is the same as the “page outs” value from the “vmstat -s” command. On HP-UX 11iv3 and above, this also includes file cache page outs. On AIX, this is the same as the “paging space page outs” value. Remember that “vmstat -s” reports cumulative counts. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PAGEOUT_BYTE ---------------------------------- The number of KBs (or MBs if specified) of page outs during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PAGEOUT_BYTE_RATE ---------------------------------- The number of KBs (or MBs if specified) per second of page outs during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PAGEOUT_RATE ---------------------------------- The total number of page outs to the disk per second during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On HP-UX and AIX, this is the same as the “po” value from the vmstat command. On Solaris, this is the same as the sum of the “epo” and “apo” values from the “vmstat -p” command, divided by the page size in KB. On Windows, this counter also includes paging traffic on behalf of the system cache to access file data for applications and so may be high when there is no memory pressure. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PAGE_FAULT ---------------------------------- The number of page faults that occurred during the interval. On Linux, this metric is available only on kernel versions 2.6 and above. GBL_MEM_PAGE_FAULT_RATE ---------------------------------- The number of page faults per second during the interval. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PAGE_REQUEST ---------------------------------- The number of page requests to or from the disk during the interval. On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging space and not to the file system. On Windows, this includes pages paged to or from both paging space and the file system. On HP-UX, this is the same as the sum of the “page ins” and “page outs” values from the “vmstat -s” command. On AIX, this is the same as the sum of the “paging space page ins” and “paging space page outs” values. Remember that “vmstat -s” reports cumulative counts. On Windows, this counter also includes paging traffic on behalf of the system cache to access file data for applications and so may be high when there is no memory pressure.
On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PAGE_REQUEST_RATE ---------------------------------- The number of page requests to or from the disk per second during the interval. On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging space and not to or from the file system. On Windows, this includes pages paged to or from both paging space and the file system. On HP-UX and AIX, this is the same as the sum of the “pi” and “po” values from the vmstat command. On Solaris, this is the same as the sum of the “epi”, “epo”, “api”, and “apo” values from the “vmstat -p” command, divided by the page size in KB. Higher than normal rates can indicate either a memory or a disk bottleneck. Compare GBL_DISK_UTIL_PEAK and GBL_MEM_UTIL to determine which resource is more constrained. High rates may also indicate memory thrashing caused by a particular application or set of applications. Look for processes with high major fault rates to identify the culprits. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PG_SCAN ---------------------------------- The number of pages scanned by the pageout daemon (or by the Clock Hand on AIX) during the interval. The clock hand algorithm is used to control page aging on the system. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PG_SCAN_RATE ---------------------------------- The number of pages scanned per second by the pageout daemon (or by the Clock Hand on AIX, “vmstat -s” pages examined by clock) during the interval. The clock hand algorithm is used to control page aging on the system. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PG_STEAL_RATE ---------------------------------- The number of pages stolen per second by the Virtual Memory Manager during the interval. GBL_MEM_PHYS ---------------------------------- The amount of physical memory in the system (in MBs unless otherwise specified). On HP-UX, banks with bad memory are not counted. Note that on some machines, the Processor Dependent Code (PDC) code uses the upper 1MB of memory and thus reports less than the actual physical memory of the system. Thus, on a system with 256MB of physical memory, this metric and dmesg(1M) might only report 267,386,880 bytes (255MB). This is all the physical memory that software on the machine can access. On Windows, this is the total memory available, which may be slightly less than the total amount of physical memory present in the system. This value is also reported in the Control Panel’s About Windows NT help topic. On Linux, this is the amount of memory given by dmesg(1M). If the value is not available in kernel ring buffer, then the sum of system memory and available memory will be reported as physical memory. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_SWAPIN_BYTE ---------------------------------- The number of KBs transferred in from disk due to swap ins (or reactivations on HP-UX) during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. 
The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_SWAPIN_BYTE_RATE ---------------------------------- The number of KBs per second transferred from disk due to swap ins (or reactivations on HP-UX) during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_SWAPIN_RATE ---------------------------------- The number of swap ins (or reactivations on HP-UX) per second during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated.
Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_SWAPOUT_BYTE ---------------------------------- The number of KBs (or MBs if specified) transferred out to disk due to swap outs (or deactivations on HP-UX) during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_SWAPOUT_BYTE_RATE ---------------------------------- The number of KBs (or MBs if specified) per second transferred out to disk due to swap outs (or deactivations on HP-UX) during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs.
To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_SWAPOUT_RATE ---------------------------------- The number of swap outs (or deactivations on HP-UX) per second during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_SWAP_QUEUE ---------------------------------- The average number of processes waiting to be swapped in. These processes are inactive because they are waiting for pages to be paged in. This is the same as the “procs b” field reported in vmstat. GBL_MEM_SYS_AND_CACHE_UTIL ---------------------------------- The percentage of physical memory used by the system (kernel) and the buffer cache at the end of the interval. On HP-UX 11iv3, this also includes the file cache. On HP-UX 11.0, this metric does not include some kinds of dynamically allocated kernel memory. This has always been reported in the GBL_MEM_USER* metrics. On HP-UX 11.11 and beyond, this metric includes some kinds of dynamically allocated kernel memory. On Solaris non-global zones, this metric is N/A. GBL_MEM_SYS_UTIL ---------------------------------- The percentage of physical memory used by the system during the interval. System memory does not include the buffer cache. On HP-UX and Linux, this does not include the file cache either. On HP-UX 11.0, this metric does not include some kinds of dynamically allocated kernel memory. This has always been reported in the GBL_MEM_USER* metrics.
On HP-UX 11.11 and beyond, this metric includes some kinds of dynamically allocated kernel memory. On Solaris non-global zones, this metric shows value as 0. GBL_MEM_USER_UTIL ---------------------------------- The percent of physical memory allocated to user code and data at the end of the interval. This metric shows the percent of memory owned by user memory regions such as user code, heap, stack and other data areas including shared memory. This does not include memory for buffer cache. On HP-UX and Linux, this does not include the file cache either. On HP-UX 11.0, this metric includes some kinds of dynamically allocated kernel memory. On HP-UX 11.11 and beyond, this metric does not include some kinds of dynamically allocated kernel memory. This is now reported in the GBL_MEM_SYS* metrics. Large fluctuations in this metric can be caused by programs which allocate large amounts of memory and then either release the memory or terminate. A slow continual increase in this metric may indicate a program with a memory leak. GBL_MEM_UTIL ---------------------------------- The percentage of physical memory in use during the interval. This includes system memory (occupied by the kernel), buffer cache and user memory. On HP-UX 11iv3 and above, this includes file cache. This excludes file cache when the cachemem parameter in the parm file is set to free. On HP-UX, this calculation is done using the byte values for physical memory and used memory, and is therefore more accurate than comparing the reported kilobyte values for physical memory and used memory. On Linux, the value of this metric includes file cache when the cachemem parameter in the parm file is set to user. On SUN, high values for this metric may not indicate a true memory shortage. This metric can be influenced by the VMM (Virtual Memory Management) system. This excludes ZFS ARC cache when the cachemem parameter in the parm file is set to free. On AIX, this excludes file cache when the cachemem parameter in the parm file is set to free. Locality Domain metrics are available on HP-UX 11iv2 and above. GBL_MEM_FREE and LDOM_MEM_FREE, as well as the memory utilization metrics derived from them, may not always fully match. GBL_MEM_FREE represents free memory in the kernel’s reservation layer while LDOM_MEM_FREE shows actual free pages. If memory has been reserved but not actually consumed from the Locality Domains, the two values won’t match. Because GBL_MEM_FREE includes pre-reserved memory, the GBL_MEM_* metrics are a better indicator of actual memory consumption in most situations. GBL_NET_COLLISION ---------------------------------- The number of collisions that occurred on all network interfaces during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not include deferred packets. This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the “Single Collision Frames”, “Multiple Collision Frames”, “Late Collisions”, and “Excessive Collisions” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Coll” column from the “netstat -i” command (“collisions” from the “netstat -i -e” command on Linux) for a network device. See also netstat(1).
AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_COLLISION_1_MIN_RATE ---------------------------------- The number of collisions per minute on all network interfaces during the interval. This metric does not include deferred packets. This does not include data for loopback interface. Collisions occur on any busy network, but abnormal collision rates could indicate a hardware or software problem. AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_COLLISION_PCT ---------------------------------- The percentage of collisions to total outbound packet attempts during the interval. Outbound packet attempts include both successful packets and collisions. This does not include data for loopback interface. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets. AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_COLLISION_RATE ---------------------------------- The number of collisions per second on all network interfaces during the interval. This metric does not include deferred packets. This does not include data for loopback interface. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_DEFERRED_PCT ---------------------------------- The percentage of deferred packets to total outbound packet attempts during the interval. Outbound packet attempts include both packets successfully transmitted and those that were deferred. This does not include data for loopback interface. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_ERROR ---------------------------------- The number of errors that occurred on all network interfaces during the interval. 
This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the “Inbound Errors” and “Outbound Errors” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of “Ierrs” (RX-ERR on Linux) and “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_ERROR_1_MIN_RATE ---------------------------------- The number of errors per minute on all network interfaces during the interval. This rate should normally be zero or very small. A large error rate can indicate a hardware or software problem. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_ERROR_RATE ---------------------------------- The number of errors per second on all network interfaces during the interval. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_IN_ERROR_PCT ---------------------------------- The percentage of inbound network errors to total inbound packet attempts during the interval. Inbound packet attempts include both packets successfully received and those that encountered errors. This does not include data for loopback interface. A large number of errors may indicate a hardware problem on the network. The percentage of inbound errors to total packets attempted should remain low. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_IN_ERROR_RATE ---------------------------------- The number of inbound errors per second on all network interfaces during the interval. This does not include data for loopback interface. A large number of errors may indicate a hardware problem on the network. The percentage of inbound errors to total packets attempted should remain low. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_IN_PACKET ---------------------------------- The number of successful packets received through all network interfaces during the interval. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the “Inbound Unicast Packets” and “Inbound Non-Unicast Packets” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. 
For all other Unix systems, this is the same as the sum of the “Ipkts” column (RX-OK on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_IN_PACKET_RATE ---------------------------------- The number of successful packets per second received through all network interfaces during the interval. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_OUT_ERROR_PCT ---------------------------------- The percentage of outbound network errors to total outbound packet attempts during the interval. Outbound packet attempts include both packets successfully sent and those that encountered errors. This does not include data for loopback interface. The percentage of outbound errors to total packets attempted to be transmitted should remain low. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_OUT_ERROR_RATE ---------------------------------- The number of outbound errors per second on all network interfaces during the interval. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_OUT_PACKET ---------------------------------- The number of successful packets sent through all network interfaces during the last interval. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the “Outbound Unicast Packets” and “Outbound Non-Unicast Packets” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Opkts” column (TX-OK on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_OUT_PACKET_RATE ---------------------------------- The number of successful packets per second sent through the network interfaces during the interval. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. 
This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_PACKET_RATE ---------------------------------- The number of successful packets per second (both inbound and outbound) for all network interfaces during the interval. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_UTIL_PEAK ---------------------------------- The utilization of the most heavily used network interface at the end of the interval. GBL_NFS_CALL ---------------------------------- The number of NFS calls the local system has made as either an NFS client or server during the interval. This includes both successful and unsuccessful calls. Unsuccessful calls are those that cannot be completed due to resource limitations or LAN packet errors. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. On AIX System WPARs, this metric is NA. GBL_NFS_CALL_RATE ---------------------------------- The number of NFS calls per second the system made as either an NFS client or NFS server during the interval. Each computer can operate as both an NFS server and an NFS client. This metric includes both successful and unsuccessful calls. Unsuccessful calls are those that cannot be completed due to resource limitations or LAN packet errors. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. On AIX System WPARs, this metric is NA. GBL_NUM_CPU ---------------------------------- The number of physical CPUs on the system. This includes all CPUs, either online or offline. For HP-UX and certain versions of Linux, the sar(1M) command allows you to check the status of the system CPUs. For SUN and DEC, the commands psrinfo(1M) and psradm(1M) allow you to check or change the status of the system CPUs. For AIX, this metric indicates the maximum number of CPUs the system ever had. On a logical system, this metric indicates the number of virtual CPUs configured. When hardware threads are enabled, this metric indicates the number of logical processors. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. The Linux kernel currently doesn’t provide any metadata information for disabled CPUs. This means that there is no way to find out the types, speeds, hardware IDs, or any other information that is used to determine the number of cores, the number of threads, the HyperThreading state, etc... If the agent (or Glance) is started while some of the CPUs are disabled, some of these metrics will be “na”, some will be based on what is visible at startup time. All information will be updated if/when additional CPUs are enabled and information about them becomes available.
The configuration counts will remain at the highest discovered level (i.e. if CPUs are then disabled, the maximum number of CPUs/cores/etc... will remain at the highest observed level). It is recommended that the agent be started with all CPUs enabled. GBL_NUM_CPU_CORE ---------------------------------- This metric provides the total number of CPU cores on a physical system. On VMs, this metric shows information according to the resources available on that VM. On non-HP-UX systems, this metric is equivalent to the number of active CPU cores. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Windows, this metric will be “na” on Windows Server 2003 Itanium systems. The Linux kernel currently doesn’t provide any metadata information for disabled CPUs. This means that there is no way to find out the types, speeds, hardware IDs, or any other information that is used to determine the number of cores, the number of threads, the HyperThreading state, etc... If the agent (or Glance) is started while some of the CPUs are disabled, some of these metrics will be “na”, some will be based on what is visible at startup time. All information will be updated if/when additional CPUs are enabled and information about them becomes available. The configuration counts will remain at the highest discovered level (i.e. if CPUs are then disabled, the maximum number of CPUs/cores/etc... will remain at the highest observed level). It is recommended that the agent be started with all CPUs enabled. GBL_NUM_DISK ---------------------------------- The number of disks on the system. Only local disk devices are counted in this metric. On HP-UX, this is a count of the number of disks on the system that have ever had activity over the cumulative collection time. On Solaris non-global zones, this metric shows value as 0. On AIX System WPARs, this metric shows value as 0. GBL_NUM_NETWORK ---------------------------------- The number of network interfaces on the system. This includes the loopback interface. On certain platforms, this also includes FDDI, Hyperfabric, ATM, Serial Software interfaces such as SLIP or PPP, and Wide Area Network interfaces (WAN) such as ISDN or X.25. The “netstat -i” command also displays the list of network interfaces on the system. GBL_NUM_ONLINE_VCPU ---------------------------------- The number of virtual processors currently online. This metric is the same as the “Online Virtual CPUs” field of the ‘lparstat -i’ command. GBL_NUM_USER ---------------------------------- The number of users logged in at the time of the interval sample. This is the same as the command “who | wc -l”. (A minimal counting sketch appears after GBL_NUM_VIRTUAL_TARGETS below.) For Unix systems, the information for this metric comes from the utmp file which is updated by the login command. For more information, read the man page for utmp. Some applications may create users on the system without using login and updating the utmp file. These users are not reflected in this count. This metric can be a general indicator of system usage. In a networked environment, however, users may maintain inactive logins on several systems. On Windows, the information for this metric comes from the Server Sessions counter in the Performance Libraries Server object. It is a count of the number of users using this machine as a file server. GBL_NUM_VIRTUAL_TARGETS ---------------------------------- The number of virtual target devices served by the VIO server. This metric is only valid on AIX VIO servers.
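As a point of comparison for GBL_NUM_USER above, the following minimal Python sketch reproduces the “who | wc -l” count that the metric description refers to. The helper name is hypothetical and the sketch assumes a Unix system with the who command in the PATH; it is not part of the product.
    # Illustrative only: count login sessions the same way "who | wc -l" would,
    # for comparison against the GBL_NUM_USER value logged by the collector.
    import subprocess

    def count_login_sessions():
        """Return the number of lines printed by the who command."""
        result = subprocess.run(["who"], capture_output=True, text=True, check=True)
        return len(result.stdout.splitlines())

    print(count_login_sessions())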
GBL_OSNAME
----------------------------------
A string representing the name of the operating system. On Unix systems, this is the same as the output from the “uname -s” command.

GBL_OSRELEASE
----------------------------------
The current release of the operating system. On most Unix systems, this is the same as the output from the “uname -r” command. On AIX, this is the actual patch level of the operating system. This is similar to what is returned by the command “lslpp -l bos.rte” as the most recent level of the COMMITTED Base OS Runtime. For example, “5.2.0”.

GBL_OSVERSION
----------------------------------
A string representing the version of the operating system. This is the same as the output from the “uname -v” command. This string is limited to 20 characters, and as a result, the complete version name might be truncated. On Windows, this is a string representing the service pack installed on the operating system.

GBL_OTHER_QUEUE
----------------------------------
The average number of processes blocked on other (unknown) activities during the interval.

GBL_POOL_CPU_AVAIL
----------------------------------
The available physical processors in the shared processor pool during the interval. This metric will be “na” if pool_util_authority is not set in the HMC. pool_util_authority indicates whether pool utilization data is available. To set pool_util_authority, select the “Allow shared processor pool utilization authority” check box in the HMC. On AIX System WPARs, this metric is NA.

GBL_POOL_CPU_ENTL
----------------------------------
The number of physical processors available in the shared processor pool to which this logical system belongs. On AIX SPLPAR, this metric is equivalent to the “Active Physical CPUs in system” field of the ‘lparstat -i’ command. On a standalone system, the value is “na”. On AIX System WPARs, this metric is NA.

GBL_POOL_ID
----------------------------------
In a virtual environment, this metric identifies the shared resource pool to which the logical system belongs. On AIX SPLPAR, this metric is equivalent to the “Shared Pool ID” field of the ‘lparstat -i’ command. On a standalone system, the value is “na”. On AIX System WPARs, this metric is NA.

GBL_POOL_NUM_CPU
----------------------------------
The number of physical processors in the shared resource pool to which this logical system belongs. On AIX SPLPAR, this metric is equivalent to the “Physical CPUs in system” field of the ‘lparstat -i’ command. On a standalone system, the value is “na”. On AIX System WPARs, this metric value is not available.

GBL_POOL_TOTAL_UTIL
----------------------------------
The percentage of time the pool CPU was not idle during the interval. This metric will be “na” if pool_util_authority is not set in the HMC. pool_util_authority indicates whether pool utilization data is available. To set pool_util_authority, select the “Allow shared processor pool utilization authority” check box in the HMC. On AIX System WPARs, this metric is NA.

GBL_PROC_RUN_TIME
----------------------------------
The average run time, in seconds, for processes that terminated during the interval.

GBL_PROC_SAMPLE
----------------------------------
The number of process data samples that have been averaged into global metrics (such as GBL_ACTIVE_PROC) that are based on process samples.

GBL_RUN_QUEUE
----------------------------------
On UNIX systems except Linux, this is the average number of threads waiting in the run queue over the interval. The average is computed against the number of times the run queue is occupied, not against time.
The average is updated by the kernel at a fine-grained interval, only when the run queue is occupied. It is not averaged against the interval and can therefore be misleading for long intervals when the run queue is empty most or part of the time. This value matches the runq-sz value reported by the “sar -q” command (an illustrative cross-check is shown below, after the GBL_STARTED_PROC_RATE entry). The GBL_LOADAVG* metrics are better indicators of run queue pressure.

On Linux and Windows, this is an instantaneous value obtained at the time of logging. On Linux, it shows the number of threads waiting in the run queue. On Windows, it shows the Processor Queue Length.

On Unix systems, GBL_RUN_QUEUE will typically be a small number. Larger than normal values for this metric indicate CPU contention among threads. This CPU bottleneck is also normally indicated by 100 percent GBL_CPU_TOTAL_UTIL. It may be acceptable for GBL_CPU_TOTAL_UTIL to be 100 percent if no other threads are waiting for the CPU. However, if GBL_CPU_TOTAL_UTIL is 100 percent and GBL_RUN_QUEUE is greater than the number of processors, it indicates a CPU bottleneck.

On Windows, the Processor Queue reflects a count of process threads which are ready to execute. A thread is ready to execute (in the Ready state) when the only resource it is waiting on is the processor. The Windows operating system itself has many system threads which intermittently use small amounts of processor time. Several low-priority threads intermittently wake up and execute for very short intervals. Depending on when the collection process samples this queue, there may be none or several of these low-priority threads trying to execute. Therefore, even on an otherwise quiescent system, the Processor Queue Length can be high. High values for this metric during intervals where the overall CPU utilization (gbl_cpu_total_util) is low do not indicate a performance bottleneck. Relatively high values for this metric during intervals where the overall CPU utilization is near 100% can indicate a CPU performance bottleneck.

HP-UX RUN/PRI/CPU Queue differences for multi-CPU systems: For example, let’s assume we’re using a system with eight processors. We start eight CPU intensive threads that consume almost all of the CPU resources. The approximate values shown for the CPU related queue metrics would be:

GBL_RUN_QUEUE = 1.0
GBL_PRI_QUEUE = 0.1
GBL_CPU_QUEUE = 1.0

Assume we start an additional eight CPU intensive threads. The approximate values now shown are:

GBL_RUN_QUEUE = 2.0
GBL_PRI_QUEUE = 8.0
GBL_CPU_QUEUE = 16.0

At this point, we have sixteen CPU intensive threads running on the eight processors. Keeping the definitions of the three queue metrics in mind, the run queue is 2 (that is, 16 / 8); the pri queue is 8 (only half of the threads can be active at any given time); and the cpu queue is 16 (the eight threads waiting in the cpu queue that are ready to run, plus one for each active thread). This illustrates that the run queue is the average number of threads waiting in the run queue across all processors; the pri queue is the number of threads that are blocked on “PRI” (priority); and the cpu queue is the number of threads in the cpu queue that are ready to run, including the threads using the CPU.

On Solaris non-global zones, this metric shows data from the global zone.

GBL_STARTED_PROC
----------------------------------
The number of processes that started during the interval.

GBL_STARTED_PROC_RATE
----------------------------------
The number of processes that started per second during the interval.
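The following is an illustrative “sar -q” cross-check for the GBL_RUN_QUEUE description above. The column layout is typical of sar on UNIX systems; the timestamps and values are examples only and depend entirely on the load of the system being measured:

# sar -q 60 2
19:10:01 runq-sz %runocc swpq-sz %swpocc
19:11:01     2.0      45     0.0       0
19:12:01     1.3      30     0.0       0

Note that runq-sz, like GBL_RUN_QUEUE on UNIX systems other than Linux, is averaged only over the times the run queue was occupied (the %runocc column), not over the whole interval.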
GBL_STATTIME
----------------------------------
An ASCII string representing the time at the end of the interval, based on local time.

GBL_SUBPROCSAMPLEINTERVAL
----------------------------------
The SubProcSampleInterval parameter sets the internal sampling interval of process data. This option changes only how often the operating system process table is scanned to accumulate process statistics during a log interval; it does not change the logging interval for process data. If, for example, the CPU utilization is higher than expected (possibly due to a large operating system process table), you can decrease the utilization by increasing the sampling interval.

Note: Increasing the SUBPROC sample interval parameter (SUBPROC can be used interchangeably with SUBPROCSAMPLEINTERVAL) may decrease the accuracy of application data and process data, since short-lived processes (those completing within a sample interval) cannot be captured and hence logged by scopeux.

To set process subintervals to 5 (default), 10, 15, 20, 30, or 60 seconds (these are the only values allowed), enter the SUBPROC or SUBPROCSAMPLEINTERVAL parameter in your parm file. You cannot specify a value lower than 5. For example, to set the interval to 15 seconds, add one of the following lines to your parm file:

SUBPROC=15
or
SUBPROCSAMPLEINTERVAL=15

Changes made to the parm file are logged every time the Performance Agent is restarted. To check changes made to the SUBPROC sample interval parameter in your parm file, you can use the following command:

# utility -xs -D | grep -i sub
04/23/99 13:04 Process Collection Sample SubInterval 5 seconds -> 5 seconds
04/23/99 14:31 Process Collection Sample SubInterval 5 seconds -> 15 seconds
04/23/99 14:43 Process Collection Sample SubInterval 15 seconds -> 30 seconds

Specify the full pathname of the performance tool bin directory as needed. You can also export the GBL_SUBPROCSAMPLEINTERVAL metric from the Configuration data.

GBL_SUSPENDED_PROCS
----------------------------------
The average number of processes that have either been marked to be suspended (SGETOUT) or have been suspended (SSWAPPED) during the interval. Processes are suspended when the OS detects that memory thrashing is occurring. The scheduler looks for processes that have a high repage rate compared with the number of major page faults the process has done and suspends these processes.

GBL_SWAP_SPACE_AVAIL
----------------------------------
The total amount of potential swap space, in MB.

On HP-UX, this is the sum of the device swap areas enabled by the swapon command, the allocated size of any file system swap areas, and the allocated size of pseudo swap in memory if enabled. Note that this is potential swap space. This is the same as (AVAIL: total) as reported by the “swapinfo -mt” command.

On SUN, this is the total amount of swap space available from the physical backing store devices (disks) plus the amount currently available from main memory. This is the same as (used + available)/1024, reported by the “swap -s” command.

On Linux, this is the same as (Swap: total) as reported by the “free -m” command.

On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA.

GBL_SWAP_SPACE_AVAIL_KB
----------------------------------
The total amount of potential swap space, in KB.
On HP-UX, this is the sum of the device swap areas enabled by the swapon command, the allocated size of any file system swap areas, and the allocated size of pseudo swap in memory if enabled. Note that this is potential swap space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be usable. For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB remains unusable and is considered wasted space. On HP-UX, this is the same as (AVAIL: total) as reported by the “swapinfo -t” command.

On SUN, this is the total amount of swap space available from the physical backing store devices (disks) plus the amount currently available from main memory. This is the same as (used + available)/1024, reported by the “swap -s” command.

On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA.

GBL_SWAP_SPACE_USED
----------------------------------
The amount of swap space used, in MB.

On HP-UX, “Used” indicates the amount written to disk (or locked in memory), rather than reserved. This is the same as (USED: total - reserve) as reported by the “swapinfo -mt” command.

On SUN, “Used” indicates the amount written to disk (or locked in memory), rather than reserved. Swap space is reserved (by decrementing a counter) when virtual memory for a program is created. This is the same as (bytes allocated)/1024, reported by the “swap -s” command.

On Linux, this is the same as (Swap: used) as reported by the “free -m” command.

On AIX System WPARs, this metric is NA. On Solaris non-global zones, this metric is N/A. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater.

GBL_SWAP_SPACE_USED_UTIL
----------------------------------
This is the percentage of swap space used.

On HP-UX, “Used %” indicates the percentage of swap space written to disk (or locked in memory), rather than reserved. This is the same as ((USED: total - reserve)/total)*100, computed from the values reported by the “swapinfo -mt” command.

On SUN, “Used %” indicates the percentage of swap space written to disk (or locked in memory), rather than reserved. Swap space is reserved (by decrementing a counter) when virtual memory for a program is created. This is the same as ((bytes allocated)/total)*100, computed from the values reported by the “swap -s” command. On SUN, global swap space is tracked through the operating system. Device swap space is tracked through the devices. For this reason, the amount of swap space used may differ between the global and by-device metrics. Sometimes pages that are marked to be swapped to disk by the operating system are never swapped. The operating system records this as used swap space, but the devices do not, since no physical IOs occur. (Metrics with the prefix “GBL” are global and metrics with the prefix “BYSWP” are by device.)

On Linux, this is the same as ((Swap: used)/total)*100, computed from the values reported by the “free -m” command.

On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A.

GBL_SWAP_SPACE_UTIL
----------------------------------
The percent of available swap space that was being used by running processes in the interval.

On Windows, this is the percentage of virtual memory, which is available to user processes, that is in use at the end of the interval. It is not an average over the entire interval.
It reflects the ratio of committed memory to the current commit limit. The limit may be increased by the operating system if the paging file is extended. This is the same as (Committed Bytes / Commit Limit) * 100 when comparing the results to Performance Monitor.

On HP-UX, swap space must be reserved (but not allocated) before virtual memory can be created. If all of the available swap is reserved, then no new processes or virtual memory can be created. Swap space locations are actually assigned (used) when a page is actually written to disk or locked in memory (pseudo swap in memory). This is the same as (PCT USED: total) as reported by the “swapinfo -mt” command.

On Unix systems, this metric is a measure of capacity rather than performance. As this metric nears 100 percent, processes are not able to allocate any more memory and new processes may not be able to run. Very low swap utilization values may indicate that too much area has been allocated to swap, and better use of disk space could be made by reallocating some swap partitions to be user filesystems.

On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA.

GBL_SYSCALL
----------------------------------
The number of system calls during the interval.

High system call rates are normal on busy systems, especially with IO intensive applications. Abnormally high system call rates may indicate problems such as a “hung” terminal that is stuck in a loop generating read system calls.

GBL_SYSCALL_RATE
----------------------------------
The average number of system calls per second during the interval.

High system call rates are normal on busy systems, especially with IO intensive applications. Abnormally high system call rates may indicate problems such as a “hung” terminal that is stuck in a loop generating read system calls.

On HP-UX, system call rates affect the overhead of the midaemon. Due to the system call instrumentation on HP-UX, the fork and vfork system calls are double counted. In the case of fork and vfork, one process starts the system call, but two processes exit. HP-UX lightweight system calls, such as umask, do not show up in the Glance System Calls display, but will get added to the global system call rates. If a process is being traced (debugged) using standard debugging tools (such as adb or xdb), all system calls used by that process will show up in the System Calls display while being traced.

On HP-UX, compare this metric to GBL_DISK_LOGL_IO_RATE to see if high system call rates correspond to high disk IO. GBL_CPU_SYSCALL_UTIL shows the CPU utilization due to processing system calls.

GBL_SYSCALL_READ_BYTE_RATE
----------------------------------
The number of KBs transferred per second via read system calls during the interval. This includes reads to all devices, including disks, terminals and tapes. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone.

GBL_SYSCALL_WRITE_BYTE_RATE
----------------------------------
The number of KBs per second transferred via write system calls during the interval. This includes writes to all devices, including disks, terminals and tapes. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone.

GBL_SYSTEM_ID
----------------------------------
The network node hostname of the system. This is the same as the output from the “uname -n” command.
On Windows, this is the name obtained from GetComputerName.

GBL_SYSTEM_UPTIME_HOURS
----------------------------------
The time, in hours, since the last system reboot.

GBL_SYSTEM_UPTIME_SECONDS
----------------------------------
The time, in seconds, since the last system reboot.

GBL_THRESHOLD_CPU
----------------------------------
The percent of CPU that a process must use to become interesting during an interval. The default for this threshold is “5.0”, which means a process must have a PROC_CPU_TOTAL_UTIL value of at least 5.0% to exceed this threshold.

All threshold values are supplied by the parm file. A process must exceed at least one threshold value in any given interval before it will be considered interesting and be logged.

GBL_THRESHOLD_DISK
----------------------------------
On HP-UX, this is the rate (IOs/sec) of physical disk IOs that a process must generate to become interesting during an interval. On Linux, this is the KB rate of physical disk IOs that the system must generate to become interesting during an interval. On the other Unix systems, this is the rate of either block disk IOs or major faults that a process must generate to become interesting during an interval.

The default values and corresponding metric for this threshold are noted below. In order to exceed this threshold, the metric noted must match or exceed the value shown.

HP-UX   5.0 for PROC_DISK_PHYS_IO_RATE for the given process
SUN     5.0 for PROC_DISK_BLOCK_IO_RATE for the given process
AIX     5.0 for PROC_DISK_BLOCK_IO_RATE for the given process
OSF1    2.0 for PROC_IO_BYTE_RATE for the given process
Linux   15.0 for GBL_DISK_PHYS_BYTE_RATE

All threshold values are supplied by the parm file. A process must exceed at least one threshold value in any given interval before it will be considered interesting and be logged.

GBL_THRESHOLD_NOKILLED
----------------------------------
This is a flag specifying that terminating processes are not interesting. The flag is set by the THRESHOLD NOKILLED statement in the parm file. If this flag is set, then the process will be logged only if it exceeds at least one of the thresholds. The default (blank) is for the flag to be turned off, which means a terminating process will be logged in the interval it exits even if it did not exceed any thresholds during that interval. This is so that the death of a process is recorded even if it does not exceed any of the thresholds.

On HP-UX, an exception to this is short-lived processes that are alive for less than one second. By default, short-lived processes are not considered interesting. However, there is a flag (THRESHOLD_SHORTLIVED) to turn on the logging of short-lived processes.

GBL_THRESHOLD_NONEW
----------------------------------
This is a flag specifying that newly created processes are not interesting. The flag is set by the THRESHOLD NONEW statement in the parm file. If this flag is set, then the process will be logged only if it exceeds at least one of the thresholds. The default (blank) is for the flag to be turned off, which means a new process will be logged in the interval it was created even if it did not exceed any thresholds during that interval. This is so that the existence of a process is recorded even if it does not exceed any of the thresholds.

On HP-UX, an exception to this is short-lived processes that are alive for less than one second. By default, short-lived processes are not considered interesting. However, there is a flag (THRESHOLD_SHORTLIVED) to turn on the logging of short-lived processes.
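As a sketch of how the GBL_THRESHOLD_* values above are supplied, a parm file threshold statement might look like the following. The keywords and values shown here are illustrative assumptions; verify the exact syntax and defaults against the parm file documentation for your agent version:

threshold cpu=5.0, disk=5.0, nonew, nokilled

With such a statement in place, a process is logged in an interval only if it crosses one of the numeric thresholds; the nonew and nokilled options additionally suppress the logging of new and terminating processes that do not cross any threshold, as described above.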
GBL_THRESHOLD_PROCMEM
----------------------------------
The process memory threshold specified in the parm file.

GBL_TOTAL_DISPATCH_TIME
----------------------------------
The total LPAR dispatch time, in seconds, during the interval. On AIX 5.3 or below, the value of this metric will be “na”. On AIX System WPARs, this metric is NA.

GBL_TT_OVERFLOW_COUNT
----------------------------------
The number of new transactions that could not be measured because the Measurement Processing Daemon’s (midaemon) Measurement Performance Database is full. If this happens, the default Measurement Performance Database size is not large enough to hold all of the registered transactions on this system. This can be remedied by stopping and restarting the midaemon process using the -smdvss option to specify a larger Measurement Performance Database size. The current Measurement Performance Database size can be checked using the midaemon -sizes option.

GBL_VCSWITCH_RATE
----------------------------------
The average number of virtual context switches per second. On AIX System WPARs, this metric is NA.

INTERVAL
----------------------------------
The number of seconds in the measurement interval. For the process data class, this is the number of seconds the process was alive during the interval.

PROC_APP_ID
----------------------------------
The ID number of the application to which the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) belonged during the interval. Application “other” always has an ID of 1. There can be up to 999 user-defined applications, which are defined in the parm file.

PROC_CPU_ALIVE_SYS_MODE_UTIL
----------------------------------
The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) in system mode as a percentage of the time it is alive during the interval.

On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set (false) in the parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off.

On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics.

PROC_CPU_ALIVE_TOTAL_UTIL
----------------------------------
The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) as a percentage of the time it is alive during the interval.

On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set (false) in the parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off.

On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m).
To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_ALIVE_USER_MODE_UTIL ---------------------------------- The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) in user mode as a percentage of the time it is alive during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_SYS_MODE_TIME ---------------------------------- The CPU time in system mode in the context of the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. 
Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_SYS_MODE_UTIL ---------------------------------- The percentage of time that the CPU was in system mode in the context of the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. High system mode CPU utilizations are normal for IO intensive programs. Abnormally high system CPU utilization can indicate that a hardware problem is causing a high interrupt rate. It can also indicate programs that are not using system calls efficiently. A classic “hung shell” shows up with very high system mode CPU because it gets stuck in a loop doing terminal reads (a system call) to a device that never responds. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor HP-UX systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. The maximum percentage is 100% times the number of CPUs online. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_TOTAL_TIME ---------------------------------- The total CPU time, in seconds, consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) during the interval. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. 
On HP-UX, the total CPU time is the sum of the CPU time components for a process or kernel thread, including system, user, context switch, interrupts processing, realtime, and nice utilization values. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor HP-UX systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. The maximum percentage is 100% times the number of CPUs online. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_TOTAL_TIME_CUM ---------------------------------- The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) over the cumulative collection time. CPU time is in seconds unless otherwise specified. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. 
This is calculated as PROC_CPU_TOTAL_TIME_CUM = PROC_CPU_SYS_MODE_TIME_CUM + PROC_CPU_USER_MODE_TIME_CUM On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_TOTAL_UTIL ---------------------------------- The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) as a percentage of the total CPU time available during the interval. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On HP-UX, the total CPU utilization is the sum of the CPU utilization components for a process or kernel thread, including system, user, context switch, interrupts processing, realtime, and nice utilization values. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor HP-UX systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. The maximum percentage is 100% times the number of CPUs online. On AIX SPLPAR, this metric indicates the total physical processing units consumed by processes. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). 
To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_TOTAL_UTIL_CUM ---------------------------------- The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) as a percentage of the total CPU time available over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On HP-UX, the total CPU utilization is the sum of the CPU utilization components for a process or kernel thread, including system, user, context switch, interrupts processing, realtime, and nice utilization values. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor HP-UX systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. The maximum percentage is 100% times the number of CPUs online. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. 
This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_USER_MODE_TIME ---------------------------------- The time, in seconds, the process (or kernel threads, if HP-UX/Linux Kernel 2.6 and above) was using the CPU in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_USER_MODE_UTIL ---------------------------------- The percentage of time the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) was using the CPU in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. 
Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor HP-UX systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. The maximum percentage is 100% times the number of CPUs online. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_DISK_BLOCK_IO ---------------------------------- The number of block IOs made by (or for) a process during the interval. On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, block IOs refer to data transferred between disk and the file system buffer cache in block size chunks. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_IO_CUM ---------------------------------- The number of block IOs made by (or for) a process during its lifetime or over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, block IOs refer to data transferred between disk and the file system buffer cache in block size chunks. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_IO_RATE ---------------------------------- The number of block IOs per second made by (or for) a process during the interval. On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, block IOs refer to data transferred between disk and the file system buffer cache in block size chunks. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_IO_RATE_CUM ---------------------------------- The average number of block IOs per second made by (or for) a process during its lifetime or over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. 
Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, block IOs refer to data transferred between disk and the file system buffer cache in block size chunks. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_READ ---------------------------------- The number of block reads made by a process during the interval. On Sun 5.X (Solaris 2.X or later), these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_READ_RATE ---------------------------------- The number of block reads per second made by (or for) a process during the interval. On Sun 5.X (Solaris 2.X or later), these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. 
File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_WRITE ---------------------------------- Number of block writes made by a process during the interval. Calls destined for NFS mounted files are not included. On Sun 5.X (Solaris 2.X or later), these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_WRITE_RATE ---------------------------------- The number of block writes per second made by (or for) a process during the interval. On Sun 5.X (Solaris 2.X or later), these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file’s inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_FORCED_CSWITCH ---------------------------------- The number of times that the process (or kernel thread, if HP-UX) was preempted by an external event and another process (or kernel thread, if HP- UX) was allowed to execute during the interval. Examples of reasons for a forced switch include expiration of a time slice or returning from a system call with a higher priority process (or kernel thread, if HP-UX) ready to run. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. 
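As a rough, illustrative cross-check of PROC_FORCED_CSWITCH on Linux kernels that include CFS (2.6.23 and later), the per-process context switch counters exposed under /proc can be inspected. The process ID 1234 and the counter values below are placeholders:

# grep ctxt_switches /proc/1234/status
voluntary_ctxt_switches:        152
nonvoluntary_ctxt_switches:     37

The nonvoluntary count roughly corresponds to the preemptions counted by PROC_FORCED_CSWITCH, while voluntary switches occur when the process blocks of its own accord.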
PROC_GROUP_ID
----------------------------------
On most systems, this is the real group ID number of the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). On AIX, this is the effective group ID number of the process. On HP-UX, this is the effective group ID number of the process if not in setgid mode.
On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given.
PROC_INTEREST
----------------------------------
A string containing the reason(s) why the process or thread is of interest, based on the thresholds specified in the parm file.
An ‘A’ indicates that the process or thread exceeds the process CPU threshold, computed using the actual time the process or thread was alive during the interval. A ‘C’ indicates that the process or thread exceeds the process CPU threshold, computed using the collection interval. Currently, the same CPU threshold is used for both CPU interest reasons. A ‘D’ indicates that the process or thread exceeds the process disk IO threshold. An ‘I’ indicates that the process or thread exceeds the IO threshold. An ‘M’ indicates that the process exceeds the process memory threshold. This interest reason is only meaningful for processes and therefore not shown for threads. New processes or threads are identified with an ‘N’, terminated processes or threads are identified with a ‘K’. (A short parsing sketch of these flag characters appears below.)
Note that the parm file ‘nonew’, ‘nokill’ and ‘shortlived’ settings are logging-only options and therefore ignored in Glance components.
Character positions 4 through 12 of the string are used as follows:

Position  Character  Meaning
4         D          Disk IOs exceeded threshold
5         blank      Not Used
6         blank      Not Used
7         blank      Not Used
8         blank      Not Used
9         blank      Not Used
10        blank      Not Used
11        blank      Not Used
12        blank      Special purpose field

PROC_INTERVAL_ALIVE
----------------------------------
The number of seconds that the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) was alive during the interval. This may be less than the time of the interval if the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) was new or died during the interval.
PROC_IO_BYTE
----------------------------------
On HP-UX, this is the total number of physical IO KBs (unless otherwise specified) that was used by this process or kernel thread, either directly or indirectly, during the interval. On all other systems, this is the total number of physical IO KBs (unless otherwise specified) that was used by this process during the interval. IOs include disk, terminal, tape and network IO.
On HP-UX, indirect IOs include paging and deactivation/reactivation activity done by the kernel on behalf of the process or kernel thread. Direct IOs include disk, terminal, tape, and network IO, but exclude all NFS traffic.
On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation.
On SUN, counts in the MB ranges in general can be attributed to disk accesses and counts in the KB ranges can be attributed to terminal IO. This is useful when looking for processes with heavy disk IO activity. This may vary depending on the sample interval length.
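Returning briefly to the PROC_INTEREST metric above: its flag string can be scanned character by character. The sketch below is illustrative only; the helper function and the sample value are invented and are not part of the product.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical helper: report why a process or thread was flagged as
       interesting, based on the PROC_INTEREST flag characters described
       in this dictionary.                                                */
    static void explain_interest(const char *interest)
    {
        if (strchr(interest, 'A')) puts("CPU threshold exceeded (alive-time basis)");
        if (strchr(interest, 'C')) puts("CPU threshold exceeded (collection-interval basis)");
        if (strchr(interest, 'D')) puts("disk IO threshold exceeded");
        if (strchr(interest, 'I')) puts("IO threshold exceeded");
        if (strchr(interest, 'M')) puts("memory threshold exceeded");
        if (strchr(interest, 'N')) puts("new process or thread");
        if (strchr(interest, 'K')) puts("process or thread terminated");
    }

    int main(void)
    {
        explain_interest("A  D");   /* invented sample value */
        return 0;
    }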
Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. PROC_IO_BYTE_CUM ---------------------------------- On HP-UX, this is the total number of physical IO KBs (unless otherwise specified) that was used by this process or kernel thread, either directly or indirectly, over the cumulative collection time. On all other systems, this is the total number of physical IO KBs (unless otherwise specified) that was used by this process over the cumulative collection time. IOs include disk, terminal, tape and network IO. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, indirect IOs include paging and deactivation/reactivation activity done by the kernel on behalf of the process or kernel thread. Direct IOs include disk, terminal, tape, and network IO, but exclude all NFS traffic. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. 
Alive kernel threads and kernel threads that have died during the interval are included in the summation. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. PROC_IO_BYTE_RATE ---------------------------------- On HP-UX, this is the number of physical IO KBs per second that was used by this process or kernel thread, either directly or indirectly, during the interval. On all other systems, this is the number of physical IO KBs per second that was used by this process during the interval. IOs include disk, terminal, tape and network IO. On HP-UX, indirect IOs include paging and deactivation/reactivation activity done by the kernel on behalf of the process or kernel thread. Direct IOs include disk, terminal, tape, and network IO, but exclude all NFS traffic. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On SUN, counts in the MB ranges in general can be attributed to disk accesses and counts in the KB ranges can be attributed to terminal IO. This is useful when looking for processes with heavy disk IO activity. This may vary depending on the sample interval length. Certain types of disk IOs are not counted by AIX at the process level, so they are excluded from this metric. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. 
Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. PROC_IO_BYTE_RATE_CUM ---------------------------------- On HP-UX, this is the average number of physical IO KBs per second that was used by this process or kernel thread, either directly or indirectly, over the cumulative collection time. On all other systems, this is the average number of physical IO KBs per second that was used by this process over the cumulative collection time. IOs include disk, terminal, tape and network IO. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, indirect IOs include paging and deactivation/reactivation activity done by the kernel on behalf of the process or kernel thread. Direct IOs include disk, terminal, tape, and network IO, but exclude all NFS traffic. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On SUN, counts in the MB ranges in general can be attributed to disk accesses and counts in the KB ranges can be attributed to terminal IO. This is useful when looking for processes with heavy disk IO activity. This may vary depending on the sample interval length. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. 
For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. PROC_MAJOR_FAULT ---------------------------------- Number of major page faults for this process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) during the interval. On HP-UX, major page faults and minor page faults are a subset of vfaults (virtual faults). Stack and heap accesses can cause vfaults, but do not result in a disk page having to be loaded into memory. PROC_MEM_RES ---------------------------------- The size (in KB) of resident memory allocated for the process(or kernel thread, if HP-UX/Linux Kernel 2.6 and above). On HP-UX, the calculation of this metric differs depending on whether this process has used any CPU time since the midaemon process was started. This metric is less accurate and does not include shared memory regions in its calculation when the process has been idle since the midaemon was started. On HP-UX, for processes that use CPU time subsequent to midaemon startup, the resident memory is calculated as RSS = sum of private region pages + (sum of shared region pages / number of references) The number of references is a count of the number of attachments to the memory region. Attachments, for shared regions, may come from several processes sharing the same memory, a single process with multiple attachments, or combinations of these. This value is only updated when a process uses CPU. Thus, under memory pressure, this value may be higher than the actual amount of resident memory for processes which are idle because their memory pages may no longer be resident or the reference count for shared segments may have changed. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. A value of “na” is displayed when this information is unobtainable. This information may not be obtainable for some system (kernel) processes. It may also not be available for processes. On AIX, this is the same as the RSS value shown by “ps v”. On Windows, this is the number of KBs in the working set of this process. The working set includes the memory pages touched recently by the threads of the process. If free memory in the system is above a threshold, then pages are left in the working set even if they are not in use. When free memory falls below a threshold, pages are trimmed from the working set, but not necessarily paged out to disk from memory. If those pages are subsequently referenced, they will be page faulted back into the working set. Therefore, the working set is a general indicator of the memory resident set size of this process, but it will vary depending on the overall status of memory on the system. Note that the size of the working set is often larger than the amount of pagefile space consumed (PROC_MEM_VIRT). PROC_MEM_VIRT ---------------------------------- The size (in KB) of virtual memory allocated for the process(or kernel thread, if HP-UX/Linux Kernel 2.6 and above). 
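Returning to PROC_MEM_RES above: to make the HP-UX resident-memory formula concrete, here is a small worked sketch. The page counts, reference count and page size are invented for illustration.

    #include <stdio.h>

    int main(void)
    {
        /* Invented example: 1500 private pages, 900 shared pages, and the
           shared regions have 3 attachments (references).                */
        long private_pages = 1500;
        long shared_pages  = 900;
        long references    = 3;
        long page_kb       = 4;     /* assume 4 KB pages for illustration */

        /* RSS = sum of private region pages
                 + (sum of shared region pages / number of references)    */
        long rss_pages = private_pages + shared_pages / references;

        printf("RSS = %ld pages = %ld KB\n", rss_pages, rss_pages * page_kb);
        return 0;
    }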
On HP-UX, this consists of the sum of the virtual set size of all private memory regions used by this process, plus this process’ share of memory regions which are shared by multiple processes. For processes that use CPU time, the value is divided by the reference count for those regions which are shared. On HP-UX, this metric is less accurate and does not reflect the reference count for shared regions for processes that were started prior to the midaemon process and have not used any CPU time since the midaemon was started. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. On all other Unix systems, this consists of private text, private data, private stack and shared memory. The reference count for shared memory is not taken into account, so the value of this metric represents the total virtual size of all regions regardless of the number of processes sharing access. Note also that lazy swap algorithms, sparse address space malloc calls, and memory-mapped file access can result in large VSS values. On systems that provide Glance memory regions detail reports, the drilldown detail per memory region is useful to understand the nature of memory allocations for the process. A value of “na” is displayed when this information is unobtainable. This information may not be obtainable for some system (kernel) processes. It may also not be available for processes. On Windows, this is the number of KBs the process has used in the paging file(s). Paging files are used to store pages of memory used by the process, such as local data, that are not contained in other files. Examples of memory pages which are contained in other files include pages storing a program’s .EXE and .DLL files. These would not be kept in pagefile space. Thus, often programs will have a memory working set size (PROC_MEM_RES) larger than the size of its pagefile space. On Linux this value is rounded to PAGESIZE. PROC_MINOR_FAULT ---------------------------------- Number of minor page faults for this process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) during the interval. On HP-UX, major page faults and minor page faults are a subset of vfaults (virtual faults). Stack and heap accesses can cause vfaults, but do not result in a disk page having to be loaded into memory. PROC_PAGEFAULT ---------------------------------- The number of page faults that occurred during the interval for the process(or kernel threads, if HP-UX/Linux Kernel 2.6 and above). PROC_PAGEFAULT_RATE ---------------------------------- The number of page faults per second that occurred during the interval for the process(or kernel threads, if HP-UX/Linux Kernel 2.6 and above). PROC_PARENT_PROC_ID ---------------------------------- The parent process’ PID number. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_PRI ---------------------------------- On Unix systems, this is the dispatch priority of a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) at the end of the interval. The lower the value, the more likely the process is to be dispatched. On Windows, this is the current base priority of this process. On HP-UX, whenever the priority is changed for the selected process or kernel thread, the new value will not be reflected until the process or kernel thread is reactivated if it is currently idle (for example, SLEEPing). 
On HP-UX, the lower the value, the more the process or kernel thread is likely to be dispatched. Values between zero and 127 are considered to be “real-time” priorities, which the kernel does not adjust. Values above 127 are normal priorities and are modified by the kernel for load balancing. Some special priorities are used in the HP-UX kernel and subsystems for different activities. These values are described in /usr/include/sys/param.h. Priorities less than PZERO (153) are not signalable. Note that on HP-UX, many network-related programs such as inetd, biod, and rlogind run at priority 154, which is PPIPE. Just because they run at this priority does not mean they are using pipes. By examining the open files, you can determine if a process or kernel thread is using pipes.
For HP-UX 10.0 and later releases, priorities between -32 and -1 can be seen for processes or kernel threads using the Posix Real-time Schedulers. When specifying a Posix priority, the value entered must be in the range from 0 through 31, which the system then remaps to a negative number in the range of -1 through -32. Refer to the rtsched man pages for more information.
On a threaded operating system, such as HP-UX 11.0 and beyond, this metric represents a kernel thread characteristic. If this metric is reported for a process, the value for its last executing kernel thread is given. For example, if a process has multiple kernel threads and kernel thread one is the last to execute during the interval, the metric value for kernel thread one is assigned to the process.
On AIX, values for priority range from 0 to 127. Processes running at priorities less than PZERO (40) are not signalable.
On Windows, the higher the value, the more likely the process or thread is to be dispatched. Values for priority range from 0 to 31. Values of 16 and above are considered to be “realtime” priorities. Threads within a process can raise and lower their own base priorities relative to the process’s base priority.
PROC_PROC_ARGV1
----------------------------------
The first argument (argv[1]) of the process argument list or the second word of the command line, if present. (For kernel threads, if HP-UX/Linux Kernel 2.6 and above this metric returns the value of the associated process). The HP Performance Agent logs the first 32 characters of this metric.
For releases that support the parm file javaarg flag, this metric may not be the first argument. When javaarg=true, the value of this metric is replaced (for java processes only) by the java class or jar name. This can then be useful to construct parm file java application definitions using the argv1= keyword.
PROC_PROC_CMD
----------------------------------
The full command line with which the process was initiated. (For kernel threads, if HP-UX/Linux Kernel 2.6 and above this metric returns the value of the associated process). On HP-UX, the maximum length returned depends upon the version of the OS, but typically up to 1020 characters are available. On other Unix systems, the maximum length is 4095 characters. On Linux, if the command string exceeds 4096 characters, the kernel instrumentation may not report any value. If the command line contains special characters, such as carriage return and tab, these characters will be converted to printable representations.
PROC_PROC_ID
----------------------------------
The process ID number (or PID) of this process (or associated process for kernel threads, if HP-UX/Linux Kernel 2.6 and above) that is used by the kernel to uniquely identify the process.
Process numbers are reused, so they only identify a process for its lifetime.
On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given.
PROC_PROC_NAME
----------------------------------
The process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) program name. It is limited to 16 characters. On Unix systems, this is derived from the first parameter to the exec(2) system call.
On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given.
On Windows, the “System Idle Process” is not reported by Perf Agent since Idle is a process that runs to occupy the processors when they are not executing other threads. Idle has one thread per processor.
PROC_RUN_TIME
----------------------------------
The elapsed time since a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) started, in seconds. This metric is less than the interval time if the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) was not alive during the entire first or last interval.
On a threaded operating system such as HP-UX 11.0 and beyond, this metric is available for a process or kernel thread.
PROC_STARTTIME
----------------------------------
The creation date and time of the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above).
PROC_STOP_REASON
----------------------------------
A text string describing what caused the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) to stop executing. For example, if the process is waiting for a CPU while higher priority processes are executing, then its block reason is PRI. A complete list of block reasons follows:

String   Reason for Process Block
------------------------------------
died     Process terminated during the interval.
LOCK     Waiting either for serialization or phys lock.
new      Process was created (via the exec() system call) during the interval.
NONE     Process is ready to run. It is not apparent that the process is blocked.
OTHER    Waiting for a reason not decipherable by the measurement software.
PRI      Process is on the run queue.
SLEEP    Waiting for an event to complete.
TIMER    Waiting for the timer.
TRACE    Received a signal to stop because parent is tracing this process.
VM       Waiting for a virtual memory operation to complete.
ZOMB     Process has terminated and the parent is not waiting.

PROC_THREAD_COUNT
----------------------------------
The total number of kernel threads for the current process. On Linux systems with Kernel 2.5 and below, every thread has its own process ID so this metric will always be 1. On Solaris systems, this metric reflects the total number of Light Weight Processes (LWPs) associated with the process.
PROC_TTY
----------------------------------
The controlling terminal for a process (or kernel threads, if HP-UX/Linux Kernel 2.6 and above). This field is blank if there is no controlling terminal.
On HP-UX, Linux, and AIX, this is the same as the “TTY” field of the ps command.
On all other Unix systems, the controlling terminal name is found by searching the directories provided in the /etc/ttysrch file. See man page ttysrch(4) for details. The matching criteria field (“M”, “F” or “I” values) of the ttysrch file is ignored. If a terminal is not found in one of the ttysrch file directories, the following directories are searched in the order listed here: “/dev”, “/dev/pts”, “/dev/term” and “/dev/xt”.
When a match is found in one of the “/dev” subdirectories, “/dev/” is not displayed as part of the terminal name. If no match is found in the directory searches, the major and minor numbers of the controlling terminal are displayed. In most cases, this value is the same as the “TTY” field of the ps command.
On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given.
PROC_USER_NAME
----------------------------------
On Unix systems, this is the real user name of a process or the login account (from /etc/passwd) of a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). If more than one account is listed in /etc/passwd with the same user ID (uid) field, the first one is used. If an account cannot be found that matches the uid field, then the uid number is returned. This would occur if the account was removed after a process was started.
On Windows, this is the process owner account name, without the domain name this account resides in.
On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given.
PROC_VOLUNTARY_CSWITCH
----------------------------------
The number of times a process (or kernel thread, if HP-UX) has given up the CPU before an external event preempted it during the interval. Examples of voluntary switches include calls to sleep(2) and select(2).
On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation.
On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler.
RECORD_TYPE
----------------------------------
ASCII string that identifies the record. Possibilities include:
GLOB for global 5 minute detail
GSUM for global hourly summary
APPL for application 5 minute detail
ASUM for application hourly summary
CONF for configuration
TRAN for transaction tracker detail
TSUM for transaction tracker summary
Except for Windows Desktop, this also includes:
PROC for process 1 minute detail
DISK for disk device 5 minute detail
DSUM for disk device summary
On HP-UX, this also includes:
VOLS for logical volume disk detail
VSUM for logical volume disk summary
STATDATE
----------------------------------
The end date timestamp of the interval for which the information in this record was captured, based on local time. The date is an ASCII field in mm/dd/yyyy format unless localized. If localized, the separators may be different and the subfields may be in a different sequence. In ASCII files this field will always contain 10 characters. Each subfield (mm, dd, yyyy) will contain a leading zero if the value is less than 10.
This metric is extracted from GBL_STATTIME, which is obtained using the time() system call at the time of data collection.
This field responds to language localization. For example, in Italy the field would appear as dd/mm/yyyy and in Japan it would be yyyy/mm/dd.
In binary files this field is in MPE CALENDAR format in the least significant 16 bits of the field. The most significant 16 bits should all be zero. Dividing the field by 512 will isolate the year (that is, 94). This field MOD 512 will isolate the day of the year.
STATTIME
----------------------------------
The local time of day for the end of the interval. The time is an ASCII field in hh:mm:ss 24-hour format. This field will always contain 8 characters in ASCII files. The three subfields (hh, mm, ss) will contain a leading zero if the value is less than 10.
This metric is extracted from GBL_STATTIME, which is obtained using the time() system call at the end of the interval.
This field responds to language localization.
In binary files this field contains four byte size subfields. The most significant byte contains the hour, the next most significant byte contains the minute, then the seconds and finally the tenths of a second. The left two bytes can be isolated by dividing by 65536. HHMM = TIME/65536. Then HOUR = HHMM/256 and MINUTE = HHMM mod 256. SSTS = TIME mod 65536. Then SECOND = SSTS/256.
TBL_BUFFER_CACHE_AVAIL
----------------------------------
The size (in KBs unless otherwise specified) of the file system buffer cache on the system.
On HP-UX 11i v2 and below, these buffers are used for all file system IO operations, as well as all other block IO operations in the system (exec, mount, inode reading, and some device drivers). If dynamic buffer cache is enabled, the system allocates a percentage of available memory not less than dbc_min_pct nor more than dbc_max_pct, depending on the system needs at any given time. On systems with a static buffer cache, this value will remain equal to bufpages, or not less than dbc_min_pct nor more than dbc_max_pct.
On HP-UX 11i v3 and above, the limits of the file system buffer cache, which is still being used for file system metadata, are automatically set to certain percentages of filecache_min and filecache_max.
On SUN, this value is obtained by multiplying the system page size by the number of buffer headers (nbuf). For example, on a SPARCstation 10 the buffer size is usually (200 page-size buffers * 4096 bytes/page = 800 KB).
NOTE: (For SUN systems with VERITAS File System installed) Veritas implemented their Direct I/O feature in their file system to provide a mechanism for bypassing the Unix system buffer cache while retaining the on-disk structure of a file system. The way in which Direct I/O works involves the way the system buffer cache is handled by the Unix OS. Once the VERITAS file system returns with the requested block, instead of copying the content to a system buffer page, it copies the block into the application’s buffer space. That is why, if you have installed vxfs on your system, TBL_BUFFER_CACHE_AVAIL can exceed the TBL_BUFFER_CACHE_HWM metric.
On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The “nbuf” value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching.
This cache is more heavily utilized on NFS file servers. On AIX, this cache is used for all block IO. On AIX System WPARs, this metric is NA. TBL_FILE_TABLE_USED ---------------------------------- The number of entries in the file table currently used by file descriptors. On SUN, this is the number of file cache entries currently used by file descriptors. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_MSG_TABLE_USED ---------------------------------- On HP-UX, this is the number of message queues currently in use. On all other Unix systems, this is the number of message queues that have been built. A message queue is allocated by a program using the msgget(2) call. See ipcs(1) to list the message queues. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_PROC_TABLE_AVAIL ---------------------------------- The configured maximum number of the proc table entries used by the kernel to manage processes. This number includes both free and used entries. On HP-UX, this is set by the NPROC value during system generation. AIX has a “dynamic” proc table, which means that AVAIL has been set higher than should ever be needed. On AIX System WPARs, this metric is NA. TBL_PROC_TABLE_USED ---------------------------------- The number of entries in the proc table currently used by processes. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SEM_TABLE_USED ---------------------------------- On HP-UX, this is the number of semaphore identifiers currently in use. On all other Unix systems, this is the number of semaphore identifiers that have been built. A semaphore identifier is allocated by a program using the semget(2) call. See ipcs(1) to list semaphores. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_ACTIVE ---------------------------------- The size (in KBs unless otherwise specified) of the shared memory segments that have running processes attached to them. This may be less than the amount of shared memory used on the system because a shared memory segment may exist and not have any process attached to it. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_TABLE_USED ---------------------------------- On HP-UX, this is the number of shared memory segments currently in use. On all other Unix systems, this is the number of shared memory segments that have been built. This includes shared memory segments with no processes attached to them. A shared memory segment is allocated by a program using the shmget(2) call. Also refer to ipcs(1). On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_USED ---------------------------------- The size (in KBs unless otherwise specified) of the shared memory segments. Additionally, it includes memory segments to which no processes are attached. If a shared memory segment has zero attachments, the space may not always be allocated in memory. See ipcs(1) to list shared memory segments. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TIME ---------------------------------- The local time of day for the start of the interval. The time is an ASCII field in hh:mm:ss 24-hour format. This field will always contain 8 characters in ASCII files. 
The three subfields (hh, mm, ss) will contain a leading zero if the value is less than 10. This metric is extracted from GBL_STATTIME, which is obtained using the time() system call at the start of the interval. This field responds to language localization. In binary files this field contains four byte size subfields. The most significant byte contains the hour, the next most significant byte contains the minute, then the seconds and finally the tenths of a second. The left two bytes can be isolated by dividing by 65536. HHMM = TIME/65536. Then HOUR = HHMM/256 and MINUTE = HHMM mod 256. SSTS = TIME mod 65536. Then SECOND = SSTS/256. TTBIN_TRANS_COUNT_1 ---------------------------------- The number of completed transactions in this range during the last interval. On SUN systems, this metric is only available on 5.X or later. TTBIN_TRANS_COUNT_10 ---------------------------------- The number of completed transactions in this range during the last interval. On SUN systems, this metric is only available on 5.X or later. TTBIN_TRANS_COUNT_2 ---------------------------------- The number of completed transactions in this range during the last interval. On SUN systems, this metric is only available on 5.X or later. TTBIN_TRANS_COUNT_3 ---------------------------------- The number of completed transactions in this range during the last interval. On SUN systems, this metric is only available on 5.X or later. TTBIN_TRANS_COUNT_4 ---------------------------------- The number of completed transactions in this range during the last interval. On SUN systems, this metric is only available on 5.X or later. TTBIN_TRANS_COUNT_5 ---------------------------------- The number of completed transactions in this range during the last interval. On SUN systems, this metric is only available on 5.X or later. TTBIN_TRANS_COUNT_6 ---------------------------------- The number of completed transactions in this range during the last interval. On SUN systems, this metric is only available on 5.X or later. TTBIN_TRANS_COUNT_7 ---------------------------------- The number of completed transactions in this range during the last interval. On SUN systems, this metric is only available on 5.X or later. TTBIN_TRANS_COUNT_8 ---------------------------------- The number of completed transactions in this range during the last interval. On SUN systems, this metric is only available on 5.X or later. TTBIN_TRANS_COUNT_9 ---------------------------------- The number of completed transactions in this range during the last interval. On SUN systems, this metric is only available on 5.X or later. TTBIN_UPPER_RANGE_1 ---------------------------------- The upper range (transaction time) for this bin. There are a maximum of nine user-defined transaction response time bins (TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is the overflow bin and will always have a value of -2 (overflow). Note that the values specified in the transaction configuration file cannot exceed 2147483.6, which is the number of seconds in 24.85 days. If the user specifies any values greater than 2147483.6, the numbers reported for those bins or Service Level Objectives (SLO) will be -2. On SUN systems, this metric is only available on 5.X or later. TTBIN_UPPER_RANGE_10 ---------------------------------- The upper range (transaction time) for this bin. There are a maximum of nine user-defined transaction response time bins (TTBIN_UPPER_RANGE). 
The last bin, which is not specified in the transaction configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is the overflow bin and will always have a value of -2 (overflow). Note that the values specified in the transaction configuration file cannot exceed 2147483.6, which is the number of seconds in 24.85 days. If the user specifies any values greater than 2147483.6, the numbers reported for those bins or Service Level Objectives (SLO) will be -2. On SUN systems, this metric is only available on 5.X or later. TTBIN_UPPER_RANGE_2 ---------------------------------- The upper range (transaction time) for this bin. There are a maximum of nine user-defined transaction response time bins (TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is the overflow bin and will always have a value of -2 (overflow). Note that the values specified in the transaction configuration file cannot exceed 2147483.6, which is the number of seconds in 24.85 days. If the user specifies any values greater than 2147483.6, the numbers reported for those bins or Service Level Objectives (SLO) will be -2. On SUN systems, this metric is only available on 5.X or later. TTBIN_UPPER_RANGE_3 ---------------------------------- The upper range (transaction time) for this bin. There are a maximum of nine user-defined transaction response time bins (TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is the overflow bin and will always have a value of -2 (overflow). Note that the values specified in the transaction configuration file cannot exceed 2147483.6, which is the number of seconds in 24.85 days. If the user specifies any values greater than 2147483.6, the numbers reported for those bins or Service Level Objectives (SLO) will be -2. On SUN systems, this metric is only available on 5.X or later. TTBIN_UPPER_RANGE_4 ---------------------------------- The upper range (transaction time) for this bin. There are a maximum of nine user-defined transaction response time bins (TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is the overflow bin and will always have a value of -2 (overflow). Note that the values specified in the transaction configuration file cannot exceed 2147483.6, which is the number of seconds in 24.85 days. If the user specifies any values greater than 2147483.6, the numbers reported for those bins or Service Level Objectives (SLO) will be -2. On SUN systems, this metric is only available on 5.X or later. TTBIN_UPPER_RANGE_5 ---------------------------------- The upper range (transaction time) for this bin. There are a maximum of nine user-defined transaction response time bins (TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is the overflow bin and will always have a value of -2 (overflow). Note that the values specified in the transaction configuration file cannot exceed 2147483.6, which is the number of seconds in 24.85 days. If the user specifies any values greater than 2147483.6, the numbers reported for those bins or Service Level Objectives (SLO) will be -2. On SUN systems, this metric is only available on 5.X or later. 
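The way a completed transaction is counted against the TTBIN_UPPER_RANGE/TTBIN_TRANS_COUNT pairs described in these entries can be sketched as follows. The upper ranges used here are invented, and the routine is only an illustration of the bin semantics: a transaction falls into the first bin whose upper range it does not exceed, otherwise into the overflow bin.

    #include <stdio.h>

    /* Return the 1-based bin number for a completed transaction time, or
       nbins + 1 for the overflow bin (the bin whose upper range is
       reported as -2).                                                   */
    static int bin_for(double seconds, const double upper[], int nbins)
    {
        int i;
        for (i = 0; i < nbins; i++)
            if (seconds <= upper[i])
                return i + 1;
        return nbins + 1;
    }

    int main(void)
    {
        /* Invented upper ranges, in seconds */
        double upper[] = { 0.5, 1.0, 2.0, 5.0, 10.0 };
        int nbins = (int)(sizeof(upper) / sizeof(upper[0]));

        printf("3.2 second transaction  -> bin %d\n", bin_for(3.2, upper, nbins));
        printf("42.0 second transaction -> overflow bin %d\n", bin_for(42.0, upper, nbins));
        return 0;
    }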
TTBIN_UPPER_RANGE_6 ---------------------------------- The upper range (transaction time) for this bin. There are a maximum of nine user-defined transaction response time bins (TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is the overflow bin and will always have a value of -2 (overflow). Note that the values specified in the transaction configuration file cannot exceed 2147483.6, which is the number of seconds in 24.85 days. If the user specifies any values greater than 2147483.6, the numbers reported for those bins or Service Level Objectives (SLO) will be -2. On SUN systems, this metric is only available on 5.X or later. TTBIN_UPPER_RANGE_7 ---------------------------------- The upper range (transaction time) for this bin. There are a maximum of nine user-defined transaction response time bins (TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is the overflow bin and will always have a value of -2 (overflow). Note that the values specified in the transaction configuration file cannot exceed 2147483.6, which is the number of seconds in 24.85 days. If the user specifies any values greater than 2147483.6, the numbers reported for those bins or Service Level Objectives (SLO) will be -2. On SUN systems, this metric is only available on 5.X or later. TTBIN_UPPER_RANGE_8 ---------------------------------- The upper range (transaction time) for this bin. There are a maximum of nine user-defined transaction response time bins (TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is the overflow bin and will always have a value of -2 (overflow). Note that the values specified in the transaction configuration file cannot exceed 2147483.6, which is the number of seconds in 24.85 days. If the user specifies any values greater than 2147483.6, the numbers reported for those bins or Service Level Objectives (SLO) will be -2. On SUN systems, this metric is only available on 5.X or later. TTBIN_UPPER_RANGE_9 ---------------------------------- The upper range (transaction time) for this bin. There are a maximum of nine user-defined transaction response time bins (TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is the overflow bin and will always have a value of -2 (overflow). Note that the values specified in the transaction configuration file cannot exceed 2147483.6, which is the number of seconds in 24.85 days. If the user specifies any values greater than 2147483.6, the numbers reported for those bins or Service Level Objectives (SLO) will be -2. On SUN systems, this metric is only available on 5.X or later. TT_ABORT ---------------------------------- The number of aborted transactions during the last interval for this transaction. TT_ABORT_WALL_TIME_PER_TRAN ---------------------------------- The average time, in seconds, per aborted transaction during the last interval. On SUN systems, this metric is only available on 5.X or later. TT_APP_NAME ---------------------------------- The registered ARM Application name. TT_APP_TRAN_NAME ---------------------------------- A concatenation of TT_APP_NAME and TT_NAME. This provides a way to uniquely identify a specific transaction. 
The field is limited to 60 characters. TT_CLIENT_ADDRESS ---------------------------------- The correlator address. This is the address where the child transaction originated. TT_CLIENT_ADDRESS_FORMAT ---------------------------------- The correlator address format. This shows the protocol family for the client network address. Refer to the ARM API Guide for the list and description of supported address formats. TT_CLIENT_TRAN_ID ---------------------------------- A numerical ID that uniquely identifies the transaction class in this correlator. TT_COUNT ---------------------------------- The number of completed transactions during the last interval for this transaction. TT_FAILED ---------------------------------- The number of Failed transactions during the last interval for this transaction name. TT_INFO ---------------------------------- The registered ARM Transaction Information for this transaction. TT_NAME ---------------------------------- The registered transaction name for this transaction. TT_NUM_BINS ---------------------------------- The number of distribution ranges. On SUN systems, this metric is only available on 5.X or later. TT_SLO_COUNT ---------------------------------- The number of completed transactions that violated the defined Service Level Objective (SLO) by exceeding the SLO threshold time during the interval. TT_SLO_PERCENT ---------------------------------- The percentage of transactions which violate service level objectives. TT_SLO_THRESHOLD ---------------------------------- The upper range (transaction time) of the Service Level Objective (SLO) threshold value. This value is used to count the number of transactions that exceed this user-supplied transaction time value. TT_TERM_TRAN_1_HR_RATE ---------------------------------- For this transaction name, the number of completed transactions calculated to a 1 hour rate. For example, if you completed five of these transactions in a 5 minute window, the rate is 60 transactions per hour. On SUN systems, this metric is only available on 5.X or later. TT_TRAN_1_MIN_RATE ---------------------------------- For this transaction name, the number of completed transactions calculated to a 1 minute rate. For example, if you completed five of these transactions in a 5 minute window, the rate is one transaction per minute. TT_TRAN_ID ---------------------------------- The registered ARM Transaction ID for this transaction class as returned by arm_getid(). A unique transaction id is returned for a unique application id (returned by arm_init), tran name, and meta data buffer contents. TT_UNAME ---------------------------------- The registered ARM Transaction User Name for this transaction. If the arm_init function has NULL for the appl_user_id field, then the user name is blank. Otherwise, if “*” was specified, then the user name is displayed. For example, to show the user name for the armsample1 program, use: appl_id = arm_init(“armsample1”,”*”,0,0,0); To ignore the user name for the armsample1 program, use: appl_id = arm_init(“armsample1”,NULL,0,0,0); TT_USER_MEASUREMENT_AVG ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the average counter differences of the transaction or transaction instance during the last interval. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. 
If the measurement type is a gauge, this returns the average of the values passed on any ARM call for the transaction or transaction instance during the last interval. TT_USER_MEASUREMENT_AVG_2 ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the average counter differences of the transaction or transaction instance during the last interval. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this returns the average of the values passed on any ARM call for the transaction or transaction instance during the last interval. TT_USER_MEASUREMENT_AVG_3 ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the average counter differences of the transaction or transaction instance during the last interval. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this returns the average of the values passed on any ARM call for the transaction or transaction instance during the last interval. TT_USER_MEASUREMENT_AVG_4 ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the average counter differences of the transaction or transaction instance during the last interval. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this returns the average of the values passed on any ARM call for the transaction or transaction instance during the last interval. TT_USER_MEASUREMENT_AVG_5 ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the average counter differences of the transaction or transaction instance during the last interval. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this returns the average of the values passed on any ARM call for the transaction or transaction instance during the last interval. TT_USER_MEASUREMENT_AVG_6 ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the average counter differences of the transaction or transaction instance during the last interval. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this returns the average of the values passed on any ARM call for the transaction or transaction instance during the last interval. TT_USER_MEASUREMENT_MAX ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the highest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. 
If the measurement type is a gauge, this metric returns the highest value passed on any ARM call over the life of the transaction or transaction instance. TT_USER_MEASUREMENT_MAX_2 ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the highest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this metric returns the highest value passed on any ARM call over the life of the transaction or transaction instance. TT_USER_MEASUREMENT_MAX_3 ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the highest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this metric returns the highest value passed on any ARM call over the life of the transaction or transaction instance. TT_USER_MEASUREMENT_MAX_4 ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the highest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this metric returns the highest value passed on any ARM call over the life of the transaction or transaction instance. TT_USER_MEASUREMENT_MAX_5 ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the highest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this metric returns the highest value passed on any ARM call over the life of the transaction or transaction instance. TT_USER_MEASUREMENT_MAX_6 ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the highest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this metric returns the highest value passed on any ARM call over the life of the transaction or transaction instance. TT_USER_MEASUREMENT_MIN ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the lowest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. 
TT_USER_MEASUREMENT_MIN
----------------------------------
If the measurement type is a numeric or a string, this metric returns "na".
If the measurement type is a counter, this metric returns the lowest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction.
If the measurement type is a gauge, this metric returns the lowest value passed on any ARM call over the life of the transaction or transaction instance.

TT_USER_MEASUREMENT_MIN_2
----------------------------------
If the measurement type is a numeric or a string, this metric returns "na".
If the measurement type is a counter, this metric returns the lowest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction.
If the measurement type is a gauge, this metric returns the lowest value passed on any ARM call over the life of the transaction or transaction instance.

TT_USER_MEASUREMENT_MIN_3
----------------------------------
If the measurement type is a numeric or a string, this metric returns "na".
If the measurement type is a counter, this metric returns the lowest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction.
If the measurement type is a gauge, this metric returns the lowest value passed on any ARM call over the life of the transaction or transaction instance.

TT_USER_MEASUREMENT_MIN_4
----------------------------------
If the measurement type is a numeric or a string, this metric returns "na".
If the measurement type is a counter, this metric returns the lowest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction.
If the measurement type is a gauge, this metric returns the lowest value passed on any ARM call over the life of the transaction or transaction instance.

TT_USER_MEASUREMENT_MIN_5
----------------------------------
If the measurement type is a numeric or a string, this metric returns "na".
If the measurement type is a counter, this metric returns the lowest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction.
If the measurement type is a gauge, this metric returns the lowest value passed on any ARM call over the life of the transaction or transaction instance.

TT_USER_MEASUREMENT_MIN_6
----------------------------------
If the measurement type is a numeric or a string, this metric returns "na".
If the measurement type is a counter, this metric returns the lowest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction.
If the measurement type is a gauge, this metric returns the lowest value passed on any ARM call over the life of the transaction or transaction instance.
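The values behind the TT_USER_MEASUREMENT_* metrics are supplied by the instrumented application itself. The following is a minimal sketch, assuming a generic ARM 2.0 C binding (arm.h and the standard arm_* calls); the application and transaction names are hypothetical, and the optional data buffer that actually carries user-defined measurements under the ARM 2.0 format is deliberately omitted (NULL/0 is passed) rather than reproduced here. The sketch only marks the call points at which a counter or gauge value would be reported.

    /* Sketch of an ARM 2.0 instrumented transaction; buffer layout for
     * user-defined measurements is omitted (assumption: values would be
     * carried in the data buffer of arm_start/arm_update/arm_stop). */
    #include <arm.h>

    void instrumented_transaction(void)
    {
        long appl_id, tran_id, handle;   /* actual typedefs come from arm.h */

        appl_id = arm_init("sample_appl", "*", 0, (char *)0, 0);
        tran_id = arm_getid(appl_id, "sample_tran", "detail", 0, (char *)0, 0);

        /* Start of the transaction: a counter's start value or an initial
         * gauge value would be passed in the data buffer here. */
        handle = arm_start(tran_id, 0, (char *)0, 0);

        /* ... transaction work; intermediate gauge or counter values could
         * be reported with arm_update() ... */
        arm_update(handle, 0, (char *)0, 0);

        /* Stop: the counter's stop value (used to form the difference) or
         * the final gauge value would be passed here, with the status. */
        arm_stop(handle, ARM_GOOD, 0, (char *)0, 0);

        arm_end(appl_id, 0, (char *)0, 0);
    }
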
TT_USER_MEASUREMENT_NAME
----------------------------------
The name of the user-defined transactional measurement. The length of the string complies with the ARM 2.0 standard, which is 44 characters (43 usable characters, since the string is NULL-terminated).

TT_USER_MEASUREMENT_NAME_2
----------------------------------
The name of the user-defined transactional measurement. The length of the string complies with the ARM 2.0 standard, which is 44 characters (43 usable characters, since the string is NULL-terminated).

TT_USER_MEASUREMENT_NAME_3
----------------------------------
The name of the user-defined transactional measurement. The length of the string complies with the ARM 2.0 standard, which is 44 characters (43 usable characters, since the string is NULL-terminated).

TT_USER_MEASUREMENT_NAME_4
----------------------------------
The name of the user-defined transactional measurement. The length of the string complies with the ARM 2.0 standard, which is 44 characters (43 usable characters, since the string is NULL-terminated).

TT_USER_MEASUREMENT_NAME_5
----------------------------------
The name of the user-defined transactional measurement. The length of the string complies with the ARM 2.0 standard, which is 44 characters (43 usable characters, since the string is NULL-terminated).

TT_USER_MEASUREMENT_NAME_6
----------------------------------
The name of the user-defined transactional measurement. The length of the string complies with the ARM 2.0 standard, which is 44 characters (43 usable characters, since the string is NULL-terminated).

TT_WALL_TIME_PER_TRAN
----------------------------------
The average transaction time, in seconds, during the last interval for this transaction.

YEAR
----------------------------------
The year, including the century, in which the data in this record was captured. This metric contains 4 digits, such as 2002.
----------------------------------