HP GlancePlus for Linux Dictionary of Operating System Performance Metrics Print Date 05/2013 GlancePlus for Linux Release 11.12 ************************************************************* Legal Notices ============= Warranty -------- The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. The information contained herein is subject to change without notice. Restricted Rights Legend ------------------------ Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. Copyright Notices ----------------- ©Copyright 2013 Hewlett-Packard Development Company, L.P. All rights reserved. ************************************************************* Introduction ============ This dictionary contains definitions of the Linux operating system performance metrics for HP GlancePlus. This document is divided into the following sections: * "Metric Names by Data Class," which lists the metrics alphabetically by data class. * "Metric Definitions," which describes each metric in alphabetical order. * "Glossary," which provides a glossary of performance metric terms. Global Metrics ---------------------------------- GBL_ACTIVE_CPU GBL_ACTIVE_CPU_CORE GBL_ACTIVE_PROC GBL_ALIVE_PROC GBL_BLANK GBL_BOOT_TIME GBL_COLLECTION_MODE GBL_COLLECTOR GBL_COMPLETED_PROC GBL_CPU_CLOCK GBL_CPU_CYCLE_ENTL_MAX GBL_CPU_CYCLE_ENTL_MIN GBL_CPU_ENTL_MAX GBL_CPU_ENTL_MIN GBL_CPU_ENTL_UTIL GBL_CPU_GUEST_TIME GBL_CPU_GUEST_TIME_CUM GBL_CPU_GUEST_UTIL GBL_CPU_GUEST_UTIL_CUM GBL_CPU_GUEST_UTIL_HIGH GBL_CPU_IDLE_TIME GBL_CPU_IDLE_TIME_CUM GBL_CPU_IDLE_UTIL GBL_CPU_IDLE_UTIL_CUM GBL_CPU_IDLE_UTIL_HIGH GBL_CPU_INTERRUPT_TIME GBL_CPU_INTERRUPT_TIME_CUM GBL_CPU_INTERRUPT_UTIL GBL_CPU_INTERRUPT_UTIL_CUM GBL_CPU_INTERRUPT_UTIL_HIGH GBL_CPU_MT_ENABLED GBL_CPU_NICE_TIME GBL_CPU_NICE_TIME_CUM GBL_CPU_NICE_UTIL GBL_CPU_NICE_UTIL_CUM GBL_CPU_NICE_UTIL_HIGH GBL_CPU_NUM_THREADS GBL_CPU_PHYSC GBL_CPU_PHYS_TOTAL_UTIL GBL_CPU_SHARES_PRIO GBL_CPU_STOLEN_TIME GBL_CPU_STOLEN_TIME_CUM GBL_CPU_STOLEN_UTIL GBL_CPU_STOLEN_UTIL_CUM GBL_CPU_STOLEN_UTIL_HIGH GBL_CPU_SYS_MODE_TIME GBL_CPU_SYS_MODE_TIME_CUM GBL_CPU_SYS_MODE_UTIL GBL_CPU_SYS_MODE_UTIL_CUM GBL_CPU_SYS_MODE_UTIL_HIGH GBL_CPU_TOTAL_TIME GBL_CPU_TOTAL_TIME_CUM GBL_CPU_TOTAL_UTIL GBL_CPU_TOTAL_UTIL_CUM GBL_CPU_TOTAL_UTIL_HIGH GBL_CPU_USER_MODE_TIME GBL_CPU_USER_MODE_TIME_CUM GBL_CPU_USER_MODE_UTIL GBL_CPU_USER_MODE_UTIL_CUM GBL_CPU_USER_MODE_UTIL_HIGH GBL_CPU_WAIT_TIME GBL_CPU_WAIT_TIME_CUM GBL_CPU_WAIT_UTIL GBL_CPU_WAIT_UTIL_CUM GBL_CPU_WAIT_UTIL_HIGH GBL_CSWITCH_RATE GBL_CSWITCH_RATE_CUM GBL_CSWITCH_RATE_HIGH GBL_DISK_PHYS_BYTE GBL_DISK_PHYS_BYTE_RATE GBL_DISK_PHYS_IO GBL_DISK_PHYS_IO_CUM GBL_DISK_PHYS_IO_RATE GBL_DISK_PHYS_IO_RATE_CUM GBL_DISK_PHYS_READ GBL_DISK_PHYS_READ_BYTE GBL_DISK_PHYS_READ_BYTE_CUM GBL_DISK_PHYS_READ_BYTE_RATE GBL_DISK_PHYS_READ_CUM GBL_DISK_PHYS_READ_PCT GBL_DISK_PHYS_READ_PCT_CUM GBL_DISK_PHYS_READ_RATE GBL_DISK_PHYS_READ_RATE_CUM GBL_DISK_PHYS_WRITE GBL_DISK_PHYS_WRITE_BYTE GBL_DISK_PHYS_WRITE_BYTE_CUM GBL_DISK_PHYS_WRITE_BYTE_RATE GBL_DISK_PHYS_WRITE_CUM GBL_DISK_PHYS_WRITE_PCT 
GBL_DISK_PHYS_WRITE_PCT_CUM GBL_DISK_PHYS_WRITE_RATE GBL_DISK_PHYS_WRITE_RATE_CUM GBL_DISK_REQUEST_QUEUE GBL_DISK_SUBSYSTEM_QUEUE GBL_DISK_SUBSYSTEM_WAIT_PCT GBL_DISK_SUBSYSTEM_WAIT_TIME GBL_DISK_TIME_PEAK GBL_DISK_UTIL GBL_DISK_UTIL_PEAK GBL_DISK_UTIL_PEAK_CUM GBL_DISK_UTIL_PEAK_HIGH GBL_DISTRIBUTION GBL_FS_SPACE_UTIL_PEAK GBL_GMTOFFSET GBL_IGNORE_MT GBL_INTERRUPT GBL_INTERRUPT_RATE GBL_INTERRUPT_RATE_CUM GBL_INTERRUPT_RATE_HIGH GBL_INTERVAL GBL_INTERVAL_CUM GBL_JAVAARG GBL_LOADAVG GBL_LOADAVG15 GBL_LOADAVG5 GBL_LOADAVG_CUM GBL_LOADAVG_HIGH GBL_LOST_MI_TRACE_BUFFERS GBL_LS_MODE GBL_LS_ROLE GBL_LS_SHARED GBL_LS_TYPE GBL_MACHINE GBL_MACHINE_MEM_USED GBL_MACHINE_MODEL GBL_MEM_AVAIL GBL_MEM_CACHE GBL_MEM_CACHE_UTIL GBL_MEM_ENTL_MAX GBL_MEM_ENTL_MIN GBL_MEM_FILE_PAGEIN_RATE GBL_MEM_FILE_PAGEOUT_RATE GBL_MEM_FILE_PAGE_CACHE GBL_MEM_FILE_PAGE_CACHE_UTIL GBL_MEM_FREE GBL_MEM_FREE_UTIL GBL_MEM_OVERHEAD GBL_MEM_PAGEIN GBL_MEM_PAGEIN_BYTE GBL_MEM_PAGEIN_BYTE_CUM GBL_MEM_PAGEIN_BYTE_RATE GBL_MEM_PAGEIN_BYTE_RATE_CUM GBL_MEM_PAGEIN_BYTE_RATE_HIGH GBL_MEM_PAGEIN_CUM GBL_MEM_PAGEIN_RATE GBL_MEM_PAGEIN_RATE_CUM GBL_MEM_PAGEIN_RATE_HIGH GBL_MEM_PAGEOUT GBL_MEM_PAGEOUT_BYTE GBL_MEM_PAGEOUT_BYTE_CUM GBL_MEM_PAGEOUT_BYTE_RATE GBL_MEM_PAGEOUT_BYTE_RATE_CUM GBL_MEM_PAGEOUT_BYTE_RATE_HIGH GBL_MEM_PAGEOUT_CUM GBL_MEM_PAGEOUT_RATE GBL_MEM_PAGEOUT_RATE_CUM GBL_MEM_PAGEOUT_RATE_HIGH GBL_MEM_PAGE_FAULT GBL_MEM_PAGE_FAULT_CUM GBL_MEM_PAGE_FAULT_RATE GBL_MEM_PAGE_FAULT_RATE_CUM GBL_MEM_PAGE_FAULT_RATE_HIGH GBL_MEM_PAGE_REQUEST GBL_MEM_PAGE_REQUEST_CUM GBL_MEM_PAGE_REQUEST_RATE GBL_MEM_PAGE_REQUEST_RATE_CUM GBL_MEM_PAGE_REQUEST_RATE_HIGH GBL_MEM_PHYS GBL_MEM_PHYS_SWAPPED GBL_MEM_SHARES_PRIO GBL_MEM_SWAPIN_BYTE GBL_MEM_SWAPIN_BYTE_CUM GBL_MEM_SWAPIN_BYTE_RATE GBL_MEM_SWAPIN_BYTE_RATE_CUM GBL_MEM_SWAPIN_BYTE_RATE_HIGH GBL_MEM_SWAPOUT_BYTE GBL_MEM_SWAPOUT_BYTE_CUM GBL_MEM_SWAPOUT_BYTE_RATE GBL_MEM_SWAPOUT_BYTE_RATE_CUM GBL_MEM_SWAPOUT_BYTE_RATE_HIGH GBL_MEM_SYS GBL_MEM_SYS_UTIL GBL_MEM_USER GBL_MEM_USER_UTIL GBL_MEM_UTIL GBL_MEM_UTIL_CUM GBL_MEM_UTIL_HIGH GBL_NET_COLLISION GBL_NET_COLLISION_1_MIN_RATE GBL_NET_COLLISION_CUM GBL_NET_COLLISION_PCT GBL_NET_COLLISION_PCT_CUM GBL_NET_COLLISION_RATE GBL_NET_ERROR GBL_NET_ERROR_1_MIN_RATE GBL_NET_ERROR_CUM GBL_NET_ERROR_RATE GBL_NET_IN_ERROR GBL_NET_IN_ERROR_CUM GBL_NET_IN_ERROR_PCT GBL_NET_IN_ERROR_PCT_CUM GBL_NET_IN_ERROR_RATE GBL_NET_IN_ERROR_RATE_CUM GBL_NET_IN_PACKET GBL_NET_IN_PACKET_CUM GBL_NET_IN_PACKET_RATE GBL_NET_OUT_ERROR GBL_NET_OUT_ERROR_CUM GBL_NET_OUT_ERROR_PCT GBL_NET_OUT_ERROR_PCT_CUM GBL_NET_OUT_ERROR_RATE GBL_NET_OUT_ERROR_RATE_CUM GBL_NET_OUT_PACKET GBL_NET_OUT_PACKET_CUM GBL_NET_OUT_PACKET_RATE GBL_NET_PACKET GBL_NET_PACKET_RATE GBL_NET_UTIL_PEAK GBL_NFS_CALL GBL_NFS_CALL_RATE GBL_NFS_CLIENT_BAD_CALL GBL_NFS_CLIENT_BAD_CALL_CUM GBL_NFS_CLIENT_CALL GBL_NFS_CLIENT_CALL_CUM GBL_NFS_CLIENT_CALL_RATE GBL_NFS_CLIENT_IO GBL_NFS_CLIENT_IO_CUM GBL_NFS_CLIENT_IO_PCT GBL_NFS_CLIENT_IO_PCT_CUM GBL_NFS_CLIENT_IO_RATE GBL_NFS_CLIENT_IO_RATE_CUM GBL_NFS_CLIENT_READ_RATE GBL_NFS_CLIENT_READ_RATE_CUM GBL_NFS_CLIENT_WRITE_RATE GBL_NFS_CLIENT_WRITE_RATE_CUM GBL_NFS_SERVER_BAD_CALL GBL_NFS_SERVER_BAD_CALL_CUM GBL_NFS_SERVER_CALL GBL_NFS_SERVER_CALL_CUM GBL_NFS_SERVER_CALL_RATE GBL_NFS_SERVER_IO GBL_NFS_SERVER_IO_CUM GBL_NFS_SERVER_IO_PCT GBL_NFS_SERVER_IO_PCT_CUM GBL_NFS_SERVER_IO_RATE GBL_NFS_SERVER_IO_RATE_CUM GBL_NFS_SERVER_READ_RATE GBL_NFS_SERVER_READ_RATE_CUM GBL_NFS_SERVER_WRITE_RATE GBL_NFS_SERVER_WRITE_RATE_CUM GBL_NODENAME GBL_NUM_ACTIVE_LS GBL_NUM_APP GBL_NUM_CPU 
GBL_NUM_CPU_CORE GBL_NUM_DISK GBL_NUM_LS GBL_NUM_NETWORK GBL_NUM_SOCKET GBL_NUM_SWAP GBL_NUM_TT GBL_NUM_USER GBL_OSKERNELTYPE GBL_OSKERNELTYPE_INT GBL_OSNAME GBL_OSRELEASE GBL_OSVERSION GBL_PRI_QUEUE GBL_PRI_WAIT_PCT GBL_PRI_WAIT_TIME GBL_PROC_SAMPLE GBL_RUN_QUEUE GBL_RUN_QUEUE_CUM GBL_RUN_QUEUE_HIGH GBL_SAMPLE GBL_SERIALNO GBL_STARTDATE GBL_STARTED_PROC GBL_STARTED_PROC_RATE GBL_STARTTIME GBL_STATDATE GBL_STATTIME GBL_SWAP_SPACE_AVAIL GBL_SWAP_SPACE_AVAIL_KB GBL_SWAP_SPACE_DEVICE_AVAIL GBL_SWAP_SPACE_DEVICE_UTIL GBL_SWAP_SPACE_USED GBL_SWAP_SPACE_USED_UTIL GBL_SWAP_SPACE_UTIL GBL_SWAP_SPACE_UTIL_CUM GBL_SWAP_SPACE_UTIL_HIGH GBL_SYSTEM_ID GBL_SYSTEM_TYPE GBL_SYSTEM_UPTIME_HOURS GBL_SYSTEM_UPTIME_SECONDS GBL_THRESHOLD_PROCCPU GBL_THRESHOLD_PROCDISK GBL_THRESHOLD_PROCIO GBL_THRESHOLD_PROCMEM GBL_TT_OVERFLOW_COUNT Table Metrics ---------------------------------- TBL_BUFFER_HEADER_AVAIL TBL_BUFFER_HEADER_USED TBL_BUFFER_HEADER_USED_HIGH TBL_BUFFER_HEADER_UTIL TBL_BUFFER_HEADER_UTIL_HIGH TBL_FILE_LOCK_AVAIL TBL_FILE_LOCK_USED TBL_FILE_LOCK_USED_HIGH TBL_FILE_LOCK_UTIL TBL_FILE_LOCK_UTIL_HIGH TBL_FILE_TABLE_AVAIL TBL_FILE_TABLE_USED TBL_FILE_TABLE_USED_HIGH TBL_FILE_TABLE_UTIL TBL_FILE_TABLE_UTIL_HIGH TBL_INODE_CACHE_AVAIL TBL_INODE_CACHE_HIGH TBL_INODE_CACHE_USED TBL_MSG_BUFFER_ACTIVE TBL_MSG_BUFFER_AVAIL TBL_MSG_BUFFER_HIGH TBL_MSG_BUFFER_USED TBL_MSG_TABLE_ACTIVE TBL_MSG_TABLE_AVAIL TBL_MSG_TABLE_USED TBL_MSG_TABLE_UTIL TBL_MSG_TABLE_UTIL_HIGH TBL_NUM_NFSDS TBL_SEM_TABLE_ACTIVE TBL_SEM_TABLE_AVAIL TBL_SEM_TABLE_USED TBL_SEM_TABLE_UTIL TBL_SEM_TABLE_UTIL_HIGH TBL_SHMEM_ACTIVE TBL_SHMEM_AVAIL TBL_SHMEM_HIGH TBL_SHMEM_TABLE_ACTIVE TBL_SHMEM_TABLE_AVAIL TBL_SHMEM_TABLE_USED TBL_SHMEM_TABLE_UTIL TBL_SHMEM_TABLE_UTIL_HIGH TBL_SHMEM_USED Process Metrics ---------------------------------- PROC_APP_ID PROC_APP_NAME PROC_CHILD_CPU_SYS_MODE_UTIL PROC_CHILD_CPU_TOTAL_UTIL PROC_CHILD_CPU_USER_MODE_UTIL PROC_CPU_ALIVE_SYS_MODE_UTIL PROC_CPU_ALIVE_TOTAL_UTIL PROC_CPU_ALIVE_USER_MODE_UTIL PROC_CPU_LAST_USED PROC_CPU_SYS_MODE_TIME PROC_CPU_SYS_MODE_TIME_CUM PROC_CPU_SYS_MODE_UTIL PROC_CPU_SYS_MODE_UTIL_CUM PROC_CPU_TOTAL_TIME PROC_CPU_TOTAL_TIME_CUM PROC_CPU_TOTAL_UTIL PROC_CPU_TOTAL_UTIL_CUM PROC_CPU_USER_MODE_TIME PROC_CPU_USER_MODE_TIME_CUM PROC_CPU_USER_MODE_UTIL PROC_CPU_USER_MODE_UTIL_CUM PROC_DISK_PHYS_IO_RATE PROC_DISK_PHYS_IO_RATE_CUM PROC_DISK_PHYS_READ PROC_DISK_PHYS_READ_CUM PROC_DISK_PHYS_READ_RATE PROC_DISK_PHYS_WRITE PROC_DISK_PHYS_WRITE_CUM PROC_DISK_PHYS_WRITE_RATE PROC_DISK_SUBSYSTEM_WAIT_PCT PROC_DISK_SUBSYSTEM_WAIT_PCT_CUM PROC_DISK_SUBSYSTEM_WAIT_TIME PROC_DISK_SUBSYSTEM_WAIT_TIME_CUM PROC_EUID PROC_FORCED_CSWITCH PROC_FORCED_CSWITCH_CUM PROC_GROUP_ID PROC_GROUP_NAME PROC_INTEREST PROC_INTERVAL PROC_INTERVAL_ALIVE PROC_INTERVAL_CUM PROC_IO_BYTE PROC_IO_BYTE_CUM PROC_IO_BYTE_RATE PROC_IO_BYTE_RATE_CUM PROC_MAJOR_FAULT PROC_MAJOR_FAULT_CUM PROC_MEM_DATA_VIRT PROC_MEM_LOCKED PROC_MEM_RES PROC_MEM_RES_HIGH PROC_MEM_SHARED_RES PROC_MEM_STACK_VIRT PROC_MEM_TEXT_VIRT PROC_MEM_VIRT PROC_MINOR_FAULT PROC_MINOR_FAULT_CUM PROC_NICE_PRI PROC_PAGEFAULT PROC_PAGEFAULT_RATE PROC_PAGEFAULT_RATE_CUM PROC_PARENT_PROC_ID PROC_PRI PROC_PRI_WAIT_PCT PROC_PRI_WAIT_PCT_CUM PROC_PRI_WAIT_TIME PROC_PRI_WAIT_TIME_CUM PROC_PROC_ARGV1 PROC_PROC_CMD PROC_PROC_ID PROC_PROC_NAME PROC_RUN_TIME PROC_SCHEDULER PROC_STARTTIME PROC_STATE PROC_STATE_FLAG PROC_STOP_REASON PROC_STOP_REASON_FLAG PROC_THREAD_COUNT PROC_THREAD_ID PROC_TIME PROC_TOP_CPU_INDEX PROC_TOP_DISK_INDEX PROC_TTY PROC_TTY_DEV PROC_UID PROC_USER_NAME 
PROC_VOLUNTARY_CSWITCH PROC_VOLUNTARY_CSWITCH_CUM Application Metrics ---------------------------------- APP_ACTIVE_APP APP_ACTIVE_PROC APP_ALIVE_PROC APP_COMPLETED_PROC APP_CPU_SYS_MODE_TIME APP_CPU_SYS_MODE_UTIL APP_CPU_TOTAL_TIME APP_CPU_TOTAL_UTIL APP_CPU_TOTAL_UTIL_CUM APP_CPU_USER_MODE_TIME APP_CPU_USER_MODE_UTIL APP_DISK_PHYS_IO_RATE APP_DISK_PHYS_READ APP_DISK_PHYS_READ_RATE APP_DISK_PHYS_WRITE APP_DISK_PHYS_WRITE_RATE APP_DISK_SUBSYSTEM_QUEUE APP_DISK_SUBSYSTEM_WAIT_PCT APP_INTERVAL APP_INTERVAL_CUM APP_IO_BYTE APP_IO_BYTE_RATE APP_MAJOR_FAULT APP_MAJOR_FAULT_RATE APP_MEM_RES APP_MEM_UTIL APP_MEM_VIRT APP_MINOR_FAULT APP_MINOR_FAULT_RATE APP_NAME APP_NUM APP_PRI APP_PRI_QUEUE APP_PRI_WAIT_PCT APP_PROC_RUN_TIME APP_SAMPLE APP_TIME Process By File Metrics ---------------------------------- PROC_FILE_MODE PROC_FILE_NAME PROC_FILE_NUMBER PROC_FILE_OPEN PROC_FILE_TYPE By Disk Metrics ---------------------------------- BYDSK_AVG_REQUEST_QUEUE BYDSK_AVG_SERVICE_TIME BYDSK_BUSY_TIME BYDSK_DEVNAME BYDSK_DEVNO BYDSK_DIRNAME BYDSK_ID BYDSK_INTERVAL BYDSK_INTERVAL_CUM BYDSK_PHYS_BYTE BYDSK_PHYS_BYTE_RATE BYDSK_PHYS_BYTE_RATE_CUM BYDSK_PHYS_IO BYDSK_PHYS_IO_RATE BYDSK_PHYS_IO_RATE_CUM BYDSK_PHYS_READ BYDSK_PHYS_READ_BYTE BYDSK_PHYS_READ_BYTE_RATE BYDSK_PHYS_READ_BYTE_RATE_CUM BYDSK_PHYS_READ_RATE BYDSK_PHYS_READ_RATE_CUM BYDSK_PHYS_WRITE BYDSK_PHYS_WRITE_BYTE BYDSK_PHYS_WRITE_BYTE_RATE BYDSK_PHYS_WRITE_BYTE_RATE_CUM BYDSK_PHYS_WRITE_RATE BYDSK_PHYS_WRITE_RATE_CUM BYDSK_QUEUE_0_UTIL BYDSK_QUEUE_2_UTIL BYDSK_QUEUE_4_UTIL BYDSK_QUEUE_8_UTIL BYDSK_QUEUE_X_UTIL BYDSK_REQUEST_QUEUE BYDSK_TIME BYDSK_UTIL File System Metrics ---------------------------------- FS_BLOCK_SIZE FS_DEVNAME FS_DEVNO FS_DIRNAME FS_FRAG_SIZE FS_INODE_UTIL FS_MAX_INODES FS_MAX_SIZE FS_PHYS_IO_RATE FS_PHYS_IO_RATE_CUM FS_PHYS_READ_BYTE_RATE FS_PHYS_READ_BYTE_RATE_CUM FS_PHYS_READ_RATE FS_PHYS_READ_RATE_CUM FS_PHYS_WRITE_BYTE_RATE FS_PHYS_WRITE_BYTE_RATE_CUM FS_PHYS_WRITE_RATE FS_PHYS_WRITE_RATE_CUM FS_SPACE_RESERVED FS_SPACE_USED FS_SPACE_UTIL FS_TYPE By Network Interface Metrics ---------------------------------- BYNETIF_COLLISION BYNETIF_COLLISION_1_MIN_RATE BYNETIF_COLLISION_RATE BYNETIF_COLLISION_RATE_CUM BYNETIF_ERROR BYNETIF_ERROR_1_MIN_RATE BYNETIF_ERROR_RATE BYNETIF_ERROR_RATE_CUM BYNETIF_ID BYNETIF_IN_BYTE BYNETIF_IN_BYTE_RATE BYNETIF_IN_BYTE_RATE_CUM BYNETIF_IN_PACKET BYNETIF_IN_PACKET_RATE BYNETIF_IN_PACKET_RATE_CUM BYNETIF_NAME BYNETIF_NET_SPEED BYNETIF_NET_TYPE BYNETIF_OUT_BYTE BYNETIF_OUT_BYTE_RATE BYNETIF_OUT_BYTE_RATE_CUM BYNETIF_OUT_PACKET BYNETIF_OUT_PACKET_RATE BYNETIF_OUT_PACKET_RATE_CUM BYNETIF_PACKET_RATE BYNETIF_UTIL By Swap Metrics ---------------------------------- BYSWP_SWAP_PRI BYSWP_SWAP_SPACE_AVAIL BYSWP_SWAP_SPACE_NAME BYSWP_SWAP_SPACE_USED BYSWP_SWAP_TYPE By CPU Metrics ---------------------------------- BYCPU_ACTIVE BYCPU_CPU_CLOCK BYCPU_CPU_GUEST_TIME BYCPU_CPU_GUEST_TIME_CUM BYCPU_CPU_GUEST_UTIL BYCPU_CPU_GUEST_UTIL_CUM BYCPU_CPU_INTERRUPT_TIME BYCPU_CPU_INTERRUPT_TIME_CUM BYCPU_CPU_INTERRUPT_UTIL BYCPU_CPU_INTERRUPT_UTIL_CUM BYCPU_CPU_NICE_TIME BYCPU_CPU_NICE_TIME_CUM BYCPU_CPU_NICE_UTIL BYCPU_CPU_NICE_UTIL_CUM BYCPU_CPU_STOLEN_TIME BYCPU_CPU_STOLEN_TIME_CUM BYCPU_CPU_STOLEN_UTIL BYCPU_CPU_STOLEN_UTIL_CUM BYCPU_CPU_SYS_MODE_TIME BYCPU_CPU_SYS_MODE_TIME_CUM BYCPU_CPU_SYS_MODE_UTIL BYCPU_CPU_SYS_MODE_UTIL_CUM BYCPU_CPU_TOTAL_TIME BYCPU_CPU_TOTAL_TIME_CUM BYCPU_CPU_TOTAL_UTIL BYCPU_CPU_TOTAL_UTIL_CUM BYCPU_CPU_TYPE BYCPU_CPU_USER_MODE_TIME BYCPU_CPU_USER_MODE_TIME_CUM BYCPU_CPU_USER_MODE_UTIL 
BYCPU_CPU_USER_MODE_UTIL_CUM BYCPU_ID BYCPU_INTERRUPT BYCPU_INTERRUPT_RATE BYCPU_STATE Process By Memory Region Metrics ---------------------------------- PROC_REGION_FILENAME PROC_REGION_PRIVATE_SHARED_FLAG PROC_REGION_PROT_FLAG PROC_REGION_TYPE PROC_REGION_VIRT PROC_REGION_VIRT_ADDRS PROC_REGION_VIRT_DATA PROC_REGION_VIRT_OTHER PROC_REGION_VIRT_SHMEM PROC_REGION_VIRT_STACK PROC_REGION_VIRT_TEXT By Operation Metrics ---------------------------------- BYOP_CLIENT_COUNT BYOP_CLIENT_COUNT_CUM BYOP_INTERVAL BYOP_INTERVAL_CUM BYOP_NAME BYOP_SERVER_COUNT BYOP_SERVER_COUNT_CUM Transaction Metrics ---------------------------------- TT_ABORT TT_ABORT_CUM TT_ABORT_WALL_TIME TT_ABORT_WALL_TIME_CUM TT_APPNO TT_APP_NAME TT_CLIENT_CORRELATOR_COUNT TT_COUNT TT_COUNT_CUM TT_FAILED TT_FAILED_CUM TT_FAILED_WALL_TIME TT_FAILED_WALL_TIME_CUM TT_INFO TT_INPROGRESS_COUNT TT_INTERVAL TT_INTERVAL_CUM TT_MEASUREMENT_COUNT TT_NAME TT_SLO_COUNT TT_SLO_COUNT_CUM TT_SLO_PERCENT TT_SLO_THRESHOLD TT_TRAN_1_MIN_RATE TT_TRAN_ID TT_UID TT_UNAME TT_UPDATE TT_UPDATE_CUM TT_WALL_TIME TT_WALL_TIME_CUM TT_WALL_TIME_PER_TRAN TT_WALL_TIME_PER_TRAN_CUM Transaction Measurement Section Metrics ---------------------------------- TTBIN_TRANS_COUNT TTBIN_TRANS_COUNT_CUM TTBIN_UPPER_RANGE Transaction Client Metrics ---------------------------------- TT_CLIENT_ABORT TT_CLIENT_ABORT_CUM TT_CLIENT_ABORT_WALL_TIME TT_CLIENT_ABORT_WALL_TIME_CUM TT_CLIENT_ADDRESS TT_CLIENT_ADDRESS_FORMAT TT_CLIENT_TRAN_ID TT_CLIENT_COUNT TT_CLIENT_COUNT_CUM TT_CLIENT_FAILED TT_CLIENT_FAILED_CUM TT_CLIENT_FAILED_WALL_TIME TT_CLIENT_FAILED_WALL_TIME_CUM TT_CLIENT_INTERVAL TT_CLIENT_INTERVAL_CUM TT_CLIENT_SLO_COUNT TT_CLIENT_SLO_COUNT_CUM TT_CLIENT_UPDATE TT_CLIENT_UPDATE_CUM TT_CLIENT_WALL_TIME TT_CLIENT_WALL_TIME_CUM TT_CLIENT_WALL_TIME_PER_TRAN TT_CLIENT_WALL_TIME_PER_TRAN_CUM Transaction Instance Metrics ---------------------------------- TT_INSTANCE_ID TT_INSTANCE_PROC_ID TT_INSTANCE_START_TIME TT_INSTANCE_STOP_TIME TT_INSTANCE_THREAD_ID TT_INSTANCE_UPDATE_COUNT TT_INSTANCE_UPDATE_TIME TT_INSTANCE_WALL_TIME Transaction User Defined Measurement Metrics ---------------------------------- TT_USER_MEASUREMENT_AVG TT_USER_MEASUREMENT_MAX TT_USER_MEASUREMENT_MIN TT_USER_MEASUREMENT_NAME TT_USER_MEASUREMENT_STRING1024_VALUE TT_USER_MEASUREMENT_STRING32_VALUE TT_USER_MEASUREMENT_TYPE TT_USER_MEASUREMENT_VALUE Transaction Client User Defined Measurement Metrics ---------------------------------- TT_CLIENT_USER_MEASUREMENT_AVG TT_CLIENT_USER_MEASUREMENT_MAX TT_CLIENT_USER_MEASUREMENT_MIN TT_CLIENT_USER_MEASUREMENT_NAME TT_CLIENT_USER_MEASUREMENT_STRING1024_VALUE TT_CLIENT_USER_MEASUREMENT_STRING32_VALUE TT_CLIENT_USER_MEASUREMENT_TYPE TT_CLIENT_USER_MEASUREMENT_VALUE Transaction Instance User Defined Measurement Metrics ---------------------------------- TT_INSTANCE_USER_MEASUREMENT_AVG TT_INSTANCE_USER_MEASUREMENT_MAX TT_INSTANCE_USER_MEASUREMENT_MIN TT_INSTANCE_USER_MEASUREMENT_NAME TT_INSTANCE_USER_MEASUREMENT_STRING1024_VALUE TT_INSTANCE_USER_MEASUREMENT_STRING32_VALUE TT_INSTANCE_USER_MEASUREMENT_TYPE TT_INSTANCE_USER_MEASUREMENT_VALUE By Logical System Metrics ---------------------------------- BYLS_BOOT_TIME BYLS_CLUSTER_NAME BYLS_CPU_CLOCK BYLS_CPU_CYCLE_ENTL_MAX BYLS_CPU_CYCLE_ENTL_MIN BYLS_CPU_CYCLE_TOTAL_USED BYLS_CPU_EFFECTIVE_UTIL BYLS_CPU_ENTL_EMIN BYLS_CPU_ENTL_MAX BYLS_CPU_ENTL_MIN BYLS_CPU_ENTL_UTIL BYLS_CPU_FAILOVER BYLS_CPU_MT_ENABLED BYLS_CPU_PHYSC BYLS_CPU_PHYS_READY_UTIL BYLS_CPU_PHYS_SYS_MODE_UTIL BYLS_CPU_PHYS_TOTAL_TIME 
BYLS_CPU_PHYS_TOTAL_UTIL BYLS_CPU_PHYS_USER_MODE_UTIL BYLS_CPU_PHYS_WAIT_UTIL BYLS_CPU_SHARES_PRIO BYLS_CPU_SYS_MODE_UTIL BYLS_CPU_TOTAL_UTIL BYLS_CPU_UNRESERVED BYLS_CPU_USER_MODE_UTIL BYLS_DATACENTER_NAME BYLS_DISK_CAPACITY BYLS_DISK_COMMAND_ABORT_RATE BYLS_DISK_FREE_SPACE BYLS_DISK_IORM_ENABLED BYLS_DISK_IORM_THRESHOLD BYLS_DISK_PHYS_BYTE BYLS_DISK_PHYS_BYTE_RATE BYLS_DISK_PHYS_READ BYLS_DISK_PHYS_READ_BYTE_RATE BYLS_DISK_PHYS_READ_RATE BYLS_DISK_PHYS_WRITE BYLS_DISK_PHYS_WRITE_BYTE_RATE BYLS_DISK_PHYS_WRITE_RATE BYLS_DISK_READ_LATENCY BYLS_DISK_SHARE_PRIORITY BYLS_DISK_THROUGHPUT_CONTENTION BYLS_DISK_THROUGPUT_USAGE BYLS_DISK_UTIL BYLS_DISK_UTIL_PEAK BYLS_DISPLAY_NAME BYLS_GUEST_TOOLS_STATUS BYLS_IP_ADDRESS BYLS_LS_CONNECTION_STATE BYLS_LS_HOSTNAME BYLS_LS_HOST_HOSTNAME BYLS_LS_ID BYLS_LS_MODE BYLS_LS_NAME BYLS_LS_NUM_SNAPSHOTS BYLS_LS_OSTYPE BYLS_LS_PARENT_TYPE BYLS_LS_PARENT_UUID BYLS_LS_PATH BYLS_LS_ROLE BYLS_LS_SHARED BYLS_LS_STATE BYLS_LS_STATE_CHANGE_TIME BYLS_LS_TYPE BYLS_LS_UUID BYLS_MACHINE_MODEL BYLS_MEM_ACTIVE BYLS_MEM_AVAIL BYLS_MEM_BALLOON_USED BYLS_MEM_BALLOON_UTIL BYLS_MEM_EFFECTIVE_UTIL BYLS_MEM_ENTL BYLS_MEM_ENTL_MAX BYLS_MEM_ENTL_MIN BYLS_MEM_ENTL_UTIL BYLS_MEM_FREE BYLS_MEM_FREE_UTIL BYLS_MEM_HEALTH BYLS_MEM_OVERHEAD BYLS_MEM_PHYS BYLS_MEM_PHYS_UTIL BYLS_MEM_SHARES_PRIO BYLS_MEM_SWAPIN BYLS_MEM_SWAPOUT BYLS_MEM_SWAPPED BYLS_MEM_SWAPTARGET BYLS_MEM_SWAP_UTIL BYLS_MEM_SYS BYLS_MEM_UNRESERVED BYLS_MEM_USED BYLS_MULTIACC_ENABLED BYLS_NET_BYTE_RATE BYLS_NET_IN_BYTE BYLS_NET_IN_PACKET BYLS_NET_IN_PACKET_RATE BYLS_NET_OUT_BYTE BYLS_NET_OUT_PACKET BYLS_NET_OUT_PACKET_RATE BYLS_NET_PACKET_RATE BYLS_NUM_ACTIVE_LS BYLS_NUM_CLONES BYLS_NUM_CPU BYLS_NUM_CPU_CORE BYLS_NUM_CREATE BYLS_NUM_DEPLOY BYLS_NUM_DESTROY BYLS_NUM_DISK BYLS_NUM_HOSTS BYLS_NUM_LS BYLS_NUM_NETIF BYLS_NUM_RECONFIGURE BYLS_NUM_SOCKET BYLS_SCHEDULING_CLASS BYLS_SUBTYPE BYLS_TOTAL_SV_MOTIONS BYLS_TOTAL_VM_MOTIONS BYLS_UPTIME_HOURS BYLS_UPTIME_SECONDS BYLS_VC_IP_ADDRESS APP_ACTIVE_APP ---------------------------------- The number of applications that had processes active (consuming cpu resources) during the interval. APP_ACTIVE_PROC ---------------------------------- An active process is one that exists and consumes some CPU time. APP_ACTIVE_PROC is the sum of the alive-process-time/interval-time ratios of every process belonging to an application that is active (uses any CPU time) during an interval. The following diagram of a four second interval showing two processes, A and B, for an application should be used to understand the above definition. Note the difference between active processes, which consume CPU time, and alive processes which merely exist on the system. ----------- Seconds ----------- 1 2 3 4 Proc ---- ---- ---- ---- ---- A live live live live B live/CPU live/CPU live dead Process A is alive for the entire four second interval, but consumes no CPU. A’s contribution to APP_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to APP_ACTIVE_PROC. B’s contribution to APP_ALIVE_PROC is 3*1/4. B contributes 2*1/4 to APP_ACTIVE_PROC. Thus, for this interval, APP_ACTIVE_PROC equals 0.5 and APP_ALIVE_PROC equals 1.75. Because a process may be alive but not active, APP_ACTIVE_PROC will always be less than or equal to APP_ALIVE_PROC. This metric indicates the number of processes in an application group that are competing for the CPU. This metric is useful, along with other metrics, for comparing loads placed on the system by different groups of processes. 
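The ratios in the four-second example above can be reproduced with a few lines of arithmetic. The sketch below is illustrative only; the per-process alive and CPU-active seconds are taken from the example, and this is not how the collector itself derives the metrics.

    # Illustrative arithmetic only: it reproduces the four-second example
    # above; it is not how the collector derives the metrics.
    interval = 4.0                       # seconds in the interval

    # (alive_seconds, cpu_active_seconds) for each process, from the example
    processes = {
        "A": (4.0, 0.0),                 # alive all 4 seconds, no CPU used
        "B": (3.0, 2.0),                 # alive 3 seconds, on the CPU for 2
    }

    app_alive_proc  = sum(alive  / interval for alive, _ in processes.values())
    app_active_proc = sum(active / interval for _, active in processes.values())

    print(app_alive_proc)                # 1.75
    print(app_active_proc)               # 0.5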
On non-HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus, this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise.

APP_ALIVE_PROC
----------------------------------
An alive process is one that exists on the system. APP_ALIVE_PROC is the sum of the alive-process-time/interval-time ratios for every process belonging to a given application.

The following diagram of a four-second interval, showing two processes, A and B, for an application, should be used to understand the above definition. Note the difference between active processes, which consume CPU time, and alive processes, which merely exist on the system.

             ----------- Seconds -----------
                1          2          3          4
   Proc       ----       ----       ----       ----
   ----
    A         live       live       live       live
    B       live/CPU   live/CPU     live       dead

Process A is alive for the entire four-second interval but consumes no CPU. A’s contribution to APP_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to APP_ACTIVE_PROC. B’s contribution to APP_ALIVE_PROC is 3*1/4. B contributes 2*1/4 to APP_ACTIVE_PROC. Thus, for this interval, APP_ACTIVE_PROC equals 0.5 and APP_ALIVE_PROC equals 1.75.

Because a process may be alive but not active, APP_ACTIVE_PROC will always be less than or equal to APP_ALIVE_PROC.

On non-HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus, this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise.

APP_COMPLETED_PROC
----------------------------------
The number of processes in this group that completed during the interval.

On non-HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus, this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise.

APP_CPU_SYS_MODE_TIME
----------------------------------
The time, in seconds, during the interval that the CPU was in system mode for processes in this group.

A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode.

On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available.

On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set (false) in the parm file, this metric will report values normalized against the number of threads in the system.
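As a rough illustration of the normalization described above (CPU time used over all processors divided by the number of processors online), the sketch below aggregates assumed per-process system-mode CPU seconds into an application total and expresses it as a percentage of total capacity. The process list, interval length, and CPU count are hypothetical; this is not the collector's implementation.

    # Hypothetical figures; illustrates the normalization described above,
    # not the collector's implementation.
    interval_seconds = 60.0
    online_cpus      = 4                      # assumed processors online

    # assumed system-mode CPU seconds for each process in the application
    proc_sys_cpu_seconds = [12.0, 7.5, 0.5]

    app_cpu_sys_mode_time = sum(proc_sys_cpu_seconds)               # 20.0 s
    app_cpu_sys_mode_util = 100.0 * app_cpu_sys_mode_time \
                            / (interval_seconds * online_cpus)      # ~8.3 %

    print(app_cpu_sys_mode_time, round(app_cpu_sys_mode_util, 1))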
This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. APP_CPU_SYS_MODE_UTIL ---------------------------------- The percentage of time during the interval that the CPU was used in system mode for processes in this group. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. High system CPU utilizations are normal for IO intensive groups. Abnormally high system CPU utilization can indicate that a hardware problem is causing a high interrupt rate. It can also indicate programs that are not making efficient system calls. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. APP_CPU_TOTAL_TIME ---------------------------------- The total CPU time, in seconds, devoted to processes in this group during the interval. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). 
To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. APP_CPU_TOTAL_UTIL ---------------------------------- The percentage of the total CPU time devoted to processes in this group during the interval. This indicates the relative CPU load placed on the system by processes in this group. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. Large values for this metric may indicate that this group is causing a CPU bottleneck. This would be normal in a computation-bound workload, but might mean that processes are using excessive CPU time and perhaps looping. If the “other” application shows significant amounts of CPU, you may want to consider tuning your parm file so that process activity is accounted for in known applications. APP_CPU_TOTAL_UTIL = APP_CPU_SYS_MODE_UTIL + APP_CPU_USER_MODE_UTIL NOTE: On Windows, the sum of the APP_CPU_TOTAL_UTIL metrics may not equal GBL_CPU_TOTAL_UTIL. Microsoft states that “this is expected behavior” because the GBL_CPU_TOTAL_UTIL metric is taken from the NT performance library Processor objects while the APP_CPU_TOTAL_UTIL metrics are taken from the Process objects. Microsoft states that there can be CPU time accounted for in the Processor system objects that may not be seen in the Process objects. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. APP_CPU_TOTAL_UTIL_CUM ---------------------------------- The average CPU time per interval for processes in this group over the cumulative collection time, or since the last PRM configuration change on HP- UX. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. 
On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. APP_CPU_USER_MODE_TIME ---------------------------------- The time, in seconds, that processes in this group were in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. APP_CPU_USER_MODE_UTIL ---------------------------------- The percentage of time that processes in this group were using the CPU in user mode during the interval. 
User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. High user mode CPU percentages are normal for computation-intensive groups. Low values of user CPU utilization compared to relatively high values for APP_CPU_SYS_MODE_UTIL can indicate a hardware problem or improperly tuned programs in this group. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. APP_DISK_PHYS_IO_RATE ---------------------------------- The number of physical IOs per second for processes in this group during the interval. APP_DISK_PHYS_READ ---------------------------------- The number of physical reads for processes in this group during the interval. APP_DISK_PHYS_READ_RATE ---------------------------------- The number of physical reads per second for processes in this group during the interval. APP_DISK_PHYS_WRITE ---------------------------------- The number of physical writes for processes in this group during the interval. APP_DISK_PHYS_WRITE_RATE ---------------------------------- The number of physical writes per second for processes in this group during the interval. APP_DISK_SUBSYSTEM_QUEUE ---------------------------------- The average number of processes or kernel threads in this group that were blocked on the disk subsystem (waiting for their file system IOs to complete) during the interval. On HP-UX, this is based on the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing raw IO to a disk are not included in this measurement. On Linux, this is based on the sum of all processes or kernel threads blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. 
No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. APP_DISK_SUBSYSTEM_WAIT_PCT ---------------------------------- The percentage of time processes or kernel threads in this group were blocked on the disk subsystem (waiting for their file system IOs to complete) during the interval. On HP-UX, this is based on the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing raw IO to a disk are not included in this measurement. On Linux, this is based on the sum of all processes or kernel threads blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. A percentage of time spent in a wait state is calculated as the accumulated time kernel threads belonging to processes in this group spent waiting in this state, divided by accumulated alive time of kernel threads belonging to processes in this group during the interval. For example, assume an application has 20 kernel threads. During the interval, ten kernel threads slept the entire time, while ten kernel threads waited on terminal input. As a result, the application wait percent values would be 50% for SLEEP and 50% for TERM (that is, terminal IO). The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. APP_INTERVAL ---------------------------------- The amount of time in the interval. 
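To make the distinction between the application QUEUE and WAIT PCT metrics above concrete, the following sketch applies the two formulas stated in the definitions (accumulated blocked time divided by the interval time, versus accumulated blocked time divided by accumulated alive time) to invented per-thread numbers. The thread times are assumptions chosen only for illustration.

    # Invented numbers; shows the two formulas side by side.
    interval = 60.0                           # interval length in seconds

    # (alive_seconds, blocked_on_disk_seconds) per kernel thread in the app
    threads = [(60.0, 30.0), (60.0, 0.0), (20.0, 20.0)]

    total_alive   = sum(alive   for alive, _   in threads)   # 140 thread-seconds
    total_blocked = sum(blocked for _, blocked in threads)   #  50 thread-seconds

    # QUEUE: accumulated blocked time divided by the interval time
    app_disk_subsystem_queue    = total_blocked / interval              # ~0.83
    # WAIT PCT: accumulated blocked time divided by accumulated alive time
    app_disk_subsystem_wait_pct = 100.0 * total_blocked / total_alive   # ~35.7

    print(round(app_disk_subsystem_queue, 2),
          round(app_disk_subsystem_wait_pct, 1))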
APP_INTERVAL_CUM ---------------------------------- The amount of time over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. APP_IO_BYTE ---------------------------------- The number of characters (in KB) transferred for processes in this group to all devices during the interval. This includes IO to disk, terminal, tape and printers. APP_IO_BYTE_RATE ---------------------------------- The number of characters (in KB) per second transferred for processes in this group to all devices during the interval. This includes IO to disk, terminal, tape and printers. APP_MAJOR_FAULT ---------------------------------- The number of major page faults that required a disk IO for processes in this group during the interval. APP_MAJOR_FAULT_RATE ---------------------------------- The number of major page faults per second that required a disk IO for processes in this group during the interval. APP_MEM_RES ---------------------------------- On Unix systems, this is the sum of the size (in MB) of resident memory for processes in this group that were alive at the end of the interval. This consists of text, data, stack, and shared memory regions. On HP-UX, since PROC_MEM_RES typically takes shared region references into account, this approximates the total resident (physical) memory consumed by all processes in this group. On all other Unix systems, this is the sum of the resident memory region sizes for all processes in this group. When the resident memory size for processes includes shared regions, such as shared memory and library text and data, the shared regions are counted multiple times in this sum. For example, if the application contains four processes that are attached to a 500MB shared memory region that is all resident in physical memory, then 2000MB is contributed towards the sum in this metric. As such, this metric can overestimate the resident memory being used by processes in this group when they share memory regions. Refer to the help text for PROC_MEM_RES for additional information. On Windows, this is the sum of the size (in MB) of the working sets for processes in this group during the interval. The working set counts memory pages referenced recently by the threads making up this group. 
Note that the size of the working set is often larger than the amount of pagefile space consumed.

APP_MEM_UTIL
----------------------------------
On Unix systems, this is the approximate percentage of the system’s physical memory used as resident memory by processes in this group that were alive at the end of the interval. This metric summarizes process private and shared memory in each application.

On Windows, this is an estimate of the percentage of the system’s physical memory allocated for working set memory by processes in this group during the interval.

On HP-UX, this consists of text, data, stack, as well as the process’s portion of shared memory regions (such as shared libraries, text segments, and shared data). The sum of the shared region pages is typically divided by the number of references.

APP_MEM_VIRT
----------------------------------
On Unix systems, this is the sum (in MB) of virtual memory for processes in this group that were alive at the end of the interval. This consists of text, data, stack, and shared memory regions.

On HP-UX, since PROC_MEM_VIRT typically takes shared region references into account, this approximates the total virtual memory consumed by all processes in this group.

On all other Unix systems, this is the sum of the virtual memory region sizes for all processes in this group. When the virtual memory size for processes includes shared regions, such as shared memory and library text and data, the shared regions are counted multiple times in this sum. For example, if the application contains four processes that are attached to a 500MB shared memory region, then 2000MB is reported in this metric. As such, this metric can overestimate the virtual memory being used by processes in this group when they share memory regions.

On Windows, this is the sum (in MB) of paging file space used for all processes in this group during the interval. Groups of processes may have working set sizes (APP_MEM_RES) larger than the size of their pagefile space.

APP_MINOR_FAULT
----------------------------------
The number of minor page faults satisfied in memory (a page was reclaimed from one of the free lists) for processes in this group during the interval.

APP_MINOR_FAULT_RATE
----------------------------------
The number of minor page faults per second satisfied in memory (pages were reclaimed from one of the free lists) for processes in this group during the interval.

APP_NAME
----------------------------------
The name of the application (up to 20 characters). This comes from the parm file where the applications are defined.

The application called “other” captures all processes not aggregated into applications specifically defined in the parm file. In other words, if no applications are defined in the parm file, then all process data would be reflected in the “other” application.

APP_NUM
----------------------------------
The sequentially assigned number of this application or, on Solaris, the project ID when application grouping by project is enabled.

APP_PRI
----------------------------------
On Unix systems, this is the average priority of the processes in this group during the interval.

On Windows, this is the average base priority of the processes in this group during the interval.

APP_PRI_QUEUE
----------------------------------
The average number of processes or kernel threads in this group blocked on PRI (waiting for their priority to become high enough to get the CPU) during the interval.
This is calculated as the accumulated time that all processes or kernel threads in this group spent blocked on PRI divided by the interval time. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. APP_PRI_WAIT_PCT ---------------------------------- The percentage of time processes or kernel threads in this group were blocked on PRI (waiting for their priority to become high enough to get the CPU) during the interval. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. A percentage of time spent in a wait state is calculated as the accumulated time kernel threads belonging to processes in this group spent waiting in this state, divided by accumulated alive time of kernel threads belonging to processes in this group during the interval. For example, assume an application has 20 kernel threads. During the interval, ten kernel threads slept the entire time, while ten kernel threads waited on terminal input. As a result, the application wait percent values would be 50% for SLEEP and 50% for TERM (that is, terminal IO). The Application QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues, within the context of a specific application. The Application WAIT PCT metrics, which are also based on block states, represent the percentage of processes or kernel threads that were alive on the system within the context of a specific application. These values will vary greatly depending on the application. No direct comparison is reasonable with the Global Queue metrics since they represent the average number of all processes or kernel threads that were alive on the system. As such, the Application WAIT PCT metrics cannot be summed or compared with global values easily. 
In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application.

For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but only a very small number of processes in the specific application being examined, and a high percentage of those few processes are blocked on the disk I/O subsystem.

APP_PROC_RUN_TIME
----------------------------------
The average run time for processes in this group that completed during the interval.

On non-HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus, this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise.

APP_SAMPLE
----------------------------------
The number of samples of process data that have been averaged or accumulated during this sample.

APP_TIME
----------------------------------
The end time of the measurement interval.

BYCPU_ACTIVE
----------------------------------
Indicates whether or not this CPU is online. A CPU that is online is considered active.

For HP-UX and certain versions of Linux, the sar(1M) command allows you to check the status of the system CPUs. For SUN and DEC, the commands psrinfo(1M) and psradm(1M) allow you to check or change the status of the system CPUs. For AIX, the pstat(1) command allows you to check the status of the system CPUs.

BYCPU_CPU_CLOCK
----------------------------------
The clock speed of the CPU in the current slot. The clock speed is in MHz for the selected CPU.

The Linux kernel currently does not provide any metadata for disabled CPUs, so there is no way to find out their types, speeds, hardware IDs, or any of the other information used to determine the number of cores, the number of threads, the HyperThreading state, and so on. If the agent (or Glance) is started while some of the CPUs are disabled, some of these metrics will be “na” and some will be based on what is visible at startup time. All information is updated if and when additional CPUs are enabled and information about them becomes available. The configuration counts remain at the highest discovered level (that is, if CPUs are later disabled, the maximum number of CPUs, cores, and so on remains at the highest observed level). It is recommended that the agent be started with all CPUs enabled.

On Linux, this value is always rounded up to the next MHz.

BYCPU_CPU_GUEST_TIME
----------------------------------
The time, in seconds, that this CPU was servicing guests during the interval.

Guest time, on Linux KVM hosts, is the time spent servicing guests. Xen hosts, as of this release, do not update these counters, nor do other operating systems.

BYCPU_CPU_GUEST_TIME_CUM
----------------------------------
The time, in seconds, that this CPU was servicing guests over the cumulative collection time.

Guest time, on Linux KVM hosts, is the time spent servicing guests. Xen hosts, as of this release, do not update these counters, nor do other operating systems.
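On a Linux KVM host, the per-CPU guest time underlying these metrics is visible in the raw kernel counters in /proc/stat, where it is the ninth value after the cpuN label (kernels 2.6.24 and later). The sketch below simply prints those cumulative counters converted to seconds; it is a simplified illustration of the data source, not the collector's code, and it does not compute an interval or cumulative-collection value.

    # Simplified illustration, not the collector's code: print the raw
    # cumulative per-CPU guest time from /proc/stat (field 9 after the
    # cpuN label on kernels 2.6.24 and later; absent or 0 elsewhere).
    import os

    ticks_per_second = os.sysconf("SC_CLK_TCK")      # usually 100 (USER_HZ)

    with open("/proc/stat") as f:
        for line in f:
            fields = line.split()
            if fields[0].startswith("cpu") and fields[0] != "cpu":
                guest_ticks = int(fields[9]) if len(fields) > 9 else 0
                print(fields[0], guest_ticks / ticks_per_second, "seconds")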
The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYCPU_CPU_GUEST_UTIL ---------------------------------- The percentage of time that this CPU was servicing guests during the interval. Guest time, on Linux KVM hosts, is the time that is spent servicing guests. Xen hosts, as of this release, do not update these counters, neither do other OSes. BYCPU_CPU_GUEST_UTIL_CUM ---------------------------------- The percentage of time that this CPU was servicing guests over the cumulative collection time. Guest time, on Linux KVM hosts, is the time that is spent servicing guests. Xen hosts, as of this release, do not update these counters, neither do other OSes. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYCPU_CPU_INTERRUPT_TIME ---------------------------------- The time, in seconds, that this CPU was performing interrupt processing during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. 
If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_INTERRUPT_TIME_CUM ---------------------------------- The time, in seconds, that this CPU was performing interrupt processing over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_INTERRUPT_UTIL ---------------------------------- The percentage of time that this CPU was performing interrupt processing during the interval. 
On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_INTERRUPT_UTIL_CUM ---------------------------------- The percentage of time that this CPU was performing interrupt processing over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. 
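The practical effect of the ignore_mt setting described above is the choice of denominator used when busy time is converted to a utilization percentage. The sketch below is a simplified illustration of that normalization, not the agent's actual computation; the function and variable names are invented for the example.

  # Simplified illustration: core-based vs. logical-CPU-based normalization.
  # busy_seconds, interval_seconds, num_cores and num_logical_cpus are
  # hypothetical inputs supplied by the caller.
  def cpu_util_pct(busy_seconds, interval_seconds, num_cores, num_logical_cpus, ignore_mt=False):
      # ignore_mt True  -> normalize against active cores
      # ignore_mt False -> normalize against logical CPUs (hardware threads)
      denominator = num_cores if ignore_mt else num_logical_cpus
      return 100.0 * busy_seconds / (interval_seconds * denominator)

  # 30 busy CPU-seconds in a 60-second interval on 2 cores exposing 4 threads:
  print(cpu_util_pct(30, 60, 2, 4, ignore_mt=True))    # 25.0
  print(cpu_util_pct(30, 60, 2, 4, ignore_mt=False))   # 12.5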
BYCPU_CPU_NICE_TIME ---------------------------------- The time, in seconds, that this CPU was in user mode at a nice priority during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_NICE_TIME_CUM ---------------------------------- The time, in seconds, that this CPU was in user mode at a nice priority over the cumulative collection time. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). 
To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_NICE_UTIL ---------------------------------- The percentage of time that this CPU was in user mode at a nice priority during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_NICE_UTIL_CUM ---------------------------------- The average percentage of time that this CPU was in user mode at a nice priority over the cumulative collection time. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. 
On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_STOLEN_TIME ---------------------------------- The time, in seconds, that was stolen from this CPU during the interval. Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’. BYCPU_CPU_STOLEN_TIME_CUM ---------------------------------- The time, in seconds, that was stolen from this CPU over the cumulative collection time. Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYCPU_CPU_STOLEN_UTIL ---------------------------------- The percentage of time that was stolen from this CPU during the interval. Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’.
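On Linux, the steal counter behind the stolen-time metrics is the 8th counter on each cpuN line of /proc/stat; it is the same counter that sar reports as %steal and vmstat as st. The sketch below is illustrative only and is not GlancePlus code; the column layout is an assumption about the running kernel, and the percentage is simply the steal delta over the total delta between two samples.

  # Illustrative sketch (not GlancePlus code): approximate %steal per CPU
  # over a short interval by sampling /proc/stat twice.
  import time

  def read_cpu_counters():
      counters = {}
      with open("/proc/stat") as f:
          for line in f:
              fields = line.split()
              if fields and fields[0].startswith("cpu") and fields[0] != "cpu":
                  counters[fields[0]] = [int(v) for v in fields[1:]]
      return counters

  before = read_cpu_counters()
  time.sleep(5)
  after = read_cpu_counters()

  for cpu, prev in before.items():
      cur = after.get(cpu, prev)
      total_delta = sum(cur) - sum(prev)
      steal_delta = cur[7] - prev[7]          # 8th counter is "steal"
      if total_delta > 0:
          print(cpu, "%.1f%% steal" % (100.0 * steal_delta / total_delta))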
BYCPU_CPU_STOLEN_UTIL_CUM ---------------------------------- The average percentage of time that was stolen from this CPU over the cumulative collection time. Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYCPU_CPU_SYS_MODE_TIME ---------------------------------- The time, in seconds, that this CPU (or logical processor) was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_SYS_MODE_TIME_CUM ---------------------------------- The time, in seconds, that this CPU (or logical processor) was in system mode over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. 
When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_SYS_MODE_UTIL ---------------------------------- The percentage of time that this CPU (or logical processor) was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. 
To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_SYS_MODE_UTIL_CUM ---------------------------------- The percentage of time that this CPU (or logical processor) was in system mode over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_TOTAL_TIME ---------------------------------- The total time, in seconds, that this CPU (or logical processor) was not idle during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. 
This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_TOTAL_TIME_CUM ---------------------------------- The total time, in seconds, that this CPU (or logical processor) was not idle over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_TOTAL_UTIL ---------------------------------- The percentage of time that this CPU (or logical processor) was not idle during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. 
If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_TOTAL_UTIL_CUM ---------------------------------- The average percentage of time that this CPU (or logical processor) was not idle over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_TYPE ---------------------------------- The type of processor in the current slot. The Linux kernel currently doesn’t provide any metadata information for disabled CPUs. 
This means that there is no way to find out types, speeds, as well as hardware IDs or any other information that is used to determine the number of cores, the number of threads, the HyperThreading state, etc... If the agent (or Glance) is started while some of the CPUs are disabled, some of these metrics will be “na”, some will be based on what is visible at startup time. All information will be updated if/when additional CPUs are enabled and information about them becomes available. The configuration counts will remain at the highest discovered level (i.e. if CPUs are then disabled, the maximum number of CPUs/cores/etc... will remain at the highest observed level). It is recommended that the agent be started with all CPUs enabled. BYCPU_CPU_USER_MODE_TIME ---------------------------------- The time, in seconds, during the interval that this CPU (or logical processor) was in user mode. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_USER_MODE_TIME_CUM ---------------------------------- The time, in seconds, that this CPU (or logical processor) was in user mode over the cumulative collection time. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. 
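The 497-day limit quoted above is consistent with a 32-bit counter of 10-millisecond clock ticks wrapping around; the 100 Hz tick size used here is only an assumption for this back-of-the-envelope check, not a statement about the agent's internal representation: 2^32 ticks x 0.01 seconds per tick is about 42,949,673 seconds, which divided by 86,400 seconds per day is roughly 497 days.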
On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_USER_MODE_UTIL ---------------------------------- The percentage of time that this CPU (or logical processor) was in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_USER_MODE_UTIL_CUM ---------------------------------- The average percentage of time that this CPU (or logical processor) was in user mode over the cumulative collection time. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. 
Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_ID ---------------------------------- The ID number of this CPU. On some Unix systems, such as SUN, CPUs are not sequentially numbered. BYCPU_INTERRUPT ---------------------------------- The number of device interrupts for this CPU during the interval. On HP-UX, a value of “na” is displayed on a system with multiple CPUs. BYCPU_INTERRUPT_RATE ---------------------------------- The average number of device interrupts per second for this CPU during the interval. On HP-UX, a value of “na” is displayed on a system with multiple CPUs. BYCPU_STATE ---------------------------------- A text string indicating the current state of a processor. On HP-UX, this is either “Enabled”, “Disabled” or “Unknown”. On AIX, this is either “Idle/Offline” or “Online”. On all other systems, this is either “Offline”, “Online” or “Unknown”. BYDSK_AVG_REQUEST_QUEUE ---------------------------------- The average number of IO requests that were in the wait and service queues for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. For example, if 4 intervals have passed with average queue lengths of 0, 2, 0, and 6, then the average number of IO requests over all intervals would be 2. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_AVG_SERVICE_TIME ---------------------------------- The average time, in milliseconds, that this disk device spent processing each disk request during the interval. For example, a value of 5.14 would indicate that disk requests during the last interval took on average slightly longer than five one-thousandths of a second to complete for this device. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. This is a measure of the speed of the disk, because slower disk devices typically show a larger average service time. Average service time is also dependent on factors such as the distribution of I/O requests over the interval and their locality. It can also be influenced by disk driver and controller features such as I/O merging and command queueing. Note that this service time is measured from the perspective of the kernel, not the disk device itself. For example, if a disk device can find the requested data in its cache, the average service time could be quicker than the speed of the physical disk hardware. This metric can be used to help determine which disk devices are taking more time than usual to process requests. BYDSK_BUSY_TIME ---------------------------------- The time, in seconds, that this disk device was busy transferring data during the interval. On HP-UX, this is the time, in seconds, during the interval that the disk device had IO in progress from the point of view of the Operating System. In other words, the time, in seconds, the disk was busy servicing requests for this device. BYDSK_DEVNAME ---------------------------------- The name of this disk device. On HP-UX, the name identifying the specific disk spindle is the hardware path which specifies the address of the hardware components leading to the disk device. On SUN, these names are the same disk names displayed by “iostat”. On AIX, this is the path name string of this disk device. This is the fsname parameter in the mount(1M) command. If more than one file system is contained on a device (that is, the device is partitioned), this is indicated by an asterisk (“*”) at the end of the path name. On OSF1, this is the path name string of this disk device. 
This is the file system parameter in the mount(1M) command. On Windows, this is the unit number of this disk device. BYDSK_DEVNO ---------------------------------- Major / Minor number of the device. BYDSK_DIRNAME ---------------------------------- The name of the file system directory mounted on this disk device. If more than one file system is mounted on this device, “Multiple FS” is seen. BYDSK_ID ---------------------------------- The ID of the current disk device. BYDSK_INTERVAL ---------------------------------- The amount of time in the interval. BYDSK_INTERVAL_CUM ---------------------------------- The amount of time over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYDSK_PHYS_BYTE ---------------------------------- The number of KBs of physical IOs transferred to or from this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_BYTE_RATE ---------------------------------- The average KBs per second transferred to or from this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_BYTE_RATE_CUM ---------------------------------- The average number of KBs per second of physical reads and writes to or from this disk device over the cumulative collection time. On Unix systems, this includes all types of physical disk IOs including file system, virtual memory, and raw IOs. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYDSK_PHYS_IO ---------------------------------- The number of physical IOs for this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw reads. BYDSK_PHYS_IO_RATE ---------------------------------- The average number of physical IO requests per second for this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system IO, virtual memory and raw IO. BYDSK_PHYS_IO_RATE_CUM ---------------------------------- The average number of physical reads and writes per second for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYDSK_PHYS_READ ---------------------------------- The number of physical reads for this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On AIX, this is an estimated value based on the ratio of read bytes to total bytes transferred. The actual number of reads is not tracked by the kernel. This is calculated as BYDSK_PHYS_READ = BYDSK_PHYS_IO * (BYDSK_PHYS_READ_BYTE / BYDSK_PHYS_IO_BYTE) BYDSK_PHYS_READ_BYTE ---------------------------------- The KBs transferred from this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_READ_BYTE_RATE ---------------------------------- The average KBs per second transferred from this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw IO. 
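On Linux, per-device transfer volumes of this kind can be derived from /proc/diskstats deltas. The sketch below is illustrative only and is not how GlancePlus collects its data; the field layout (sectors read and written in the 6th and 10th whitespace-separated fields, counting from 1) and the 512-byte sector unit are assumptions about the running kernel.

  # Illustrative sketch (not GlancePlus code): approximate per-device read and
  # write KB/s over a short interval from /proc/diskstats.
  import time

  INTERVAL = 5.0

  def read_sector_counts():
      stats = {}
      with open("/proc/diskstats") as f:
          for line in f:
              fields = line.split()
              if len(fields) >= 10:
                  # fields: major minor name reads rd_merged rd_sectors rd_ms
                  #         writes wr_merged wr_sectors wr_ms ...
                  stats[fields[2]] = (int(fields[5]), int(fields[9]))
      return stats

  before = read_sector_counts()
  time.sleep(INTERVAL)
  after = read_sector_counts()

  for name, (rd0, wr0) in sorted(before.items()):
      rd1, wr1 = after.get(name, (rd0, wr0))
      read_kb_s = (rd1 - rd0) * 512 / 1024 / INTERVAL
      write_kb_s = (wr1 - wr0) * 512 / 1024 / INTERVAL
      print("%-12s read %8.1f KB/s  write %8.1f KB/s" % (name, read_kb_s, write_kb_s))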
BYDSK_PHYS_READ_BYTE_RATE_CUM ---------------------------------- The average number of KBs per second of physical reads from this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYDSK_PHYS_READ_RATE ---------------------------------- The average number of physical reads per second for this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On AIX, this is an estimated value based on the ratio of read bytes to total bytes transferred. The actual number of reads is not tracked by the kernel. This is calculated as BYDSK_PHYS_READ_RATE = BYDSK_PHYS_IO_RATE * (BYDSK_PHYS_READ_BYTE / BYDSK_PHYS_IO_BYTE) BYDSK_PHYS_READ_RATE_CUM ---------------------------------- The average number of physical reads per second for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. 
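As a concrete illustration of the AIX estimate used for BYDSK_PHYS_READ and BYDSK_PHYS_READ_RATE above (the numbers are invented for the example): if BYDSK_PHYS_IO_RATE is 200 IOs per second and 3,000 KB of the 5,000 KB transferred during the interval were reads, the estimated read rate is 200 * (3,000 / 5,000) = 120 reads per second. The write metrics that follow apply the same method using the write byte ratio.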
BYDSK_PHYS_WRITE
----------------------------------
The number of physical writes for this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes.

On AIX, this is an estimated value based on the ratio of write bytes to total bytes transferred, because the actual number of writes is not tracked by the kernel. It is calculated as:

  BYDSK_PHYS_WRITE = BYDSK_PHYS_IO * (BYDSK_PHYS_WRITE_BYTE / BYDSK_PHYS_IO_BYTE)

BYDSK_PHYS_WRITE_BYTE
----------------------------------
The number of KBs transferred to this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system, virtual memory, and raw IO.

BYDSK_PHYS_WRITE_BYTE_RATE
----------------------------------
The average number of KBs per second transferred to this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system, virtual memory, and raw IO.

BYDSK_PHYS_WRITE_BYTE_RATE_CUM
----------------------------------
The average number of KBs per second of physical writes to this disk device over the cumulative collection time.

The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last.

On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, and process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days, and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this.

BYDSK_PHYS_WRITE_RATE
----------------------------------
The average number of physical writes per second for this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes.

On AIX, this is an estimated value based on the ratio of write bytes to total bytes transferred, because the actual number of writes is not tracked by the kernel. It is calculated as:

  BYDSK_PHYS_WRITE_RATE = BYDSK_PHYS_IO_RATE * (BYDSK_PHYS_WRITE_BYTE / BYDSK_PHYS_IO_BYTE)

BYDSK_PHYS_WRITE_RATE_CUM
----------------------------------
The average number of physical writes per second for this disk device over the cumulative collection time.

The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last.
On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, and process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days, and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this.

BYDSK_QUEUE_0_UTIL
----------------------------------
The percentage of intervals during which there were no IO requests pending for this disk device over the cumulative collection time.

The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last.

On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, and process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days, and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this.

For example, if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 1.5, 0, and 3, then the value for this metric would be 50% since 50% of the intervals had a zero queue length.

Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems.

Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0.

BYDSK_QUEUE_2_UTIL
----------------------------------
The percentage of intervals during which there were 1 or 2 IO requests pending for this disk device over the cumulative collection time.

The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last.
On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, and process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days, and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this.

For example, if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 1, 0, and 2, then the value for this metric would be 50% since 50% of the intervals had a 1-2 queue length.

Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems.

Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0.

BYDSK_QUEUE_4_UTIL
----------------------------------
The percentage of intervals during which there were 3 or 4 IO requests waiting to use this disk device over the cumulative collection time.

The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last.

On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, and process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days, and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this.

For example, if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 3, 0, and 4, then the value for this metric would be 50% since 50% of the intervals had a 3-4 queue length.

Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems.
Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0.

BYDSK_QUEUE_8_UTIL
----------------------------------
The percentage of intervals during which there were between 5 and 8 IO requests pending for this disk device over the cumulative collection time.

The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last.

On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, and process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days, and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this.

For example, if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 8, 0, and 5, then the value for this metric would be 50% since 50% of the intervals had a 5-8 queue length.

Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems.

Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0.

BYDSK_QUEUE_X_UTIL
----------------------------------
The percentage of intervals during which there were more than 8 IO requests pending for this disk device over the cumulative collection time.

The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last.

On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, and process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f" (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days, and the cumulative metrics will fail to report accurate data after 497 days.
On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won't include times accumulated prior to the performance tool's start and a message will be logged to indicate this.

For example, if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 9, 0, and 10, then the value for this metric would be 50% since 50% of the intervals had a queue length greater than 8.

Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems.

Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0.

BYDSK_REQUEST_QUEUE
----------------------------------
The average number of IO requests that were in the wait queue for this disk device during the interval. These requests are the physical requests (as opposed to logical IO requests).

Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems.

Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0.

BYDSK_TIME
----------------------------------
The time of day of the interval.

BYDSK_UTIL
----------------------------------
On HP-UX, this is the percentage of the time during the interval that the disk device had IO in progress from the point of view of the Operating System. In other words, it is the utilization, or percentage of time busy servicing requests, for this device.

On non-HP-UX systems, this is the percentage of the time that this disk device was busy transferring data during the interval.

Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "na" on the affected kernels. The "sar -d" command will also not be present on these systems.

Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0.

This is a measure of the ability of the IO path to meet the transfer demands being placed on it. Slower disk devices may show a higher utilization with lower IO rates than faster disk devices such as disk arrays. A value of greater than 50% utilization over time may indicate that this device or its IO path is a bottleneck, and the access pattern of the workload, database, or files may need reorganizing for better balance of disk IO load.

BYLS_BOOT_TIME
----------------------------------
On vMA, for a host and logical system, the metric is the date and time when the system was last booted. The value is NA for a resource pool. Note that this date is obtained from the VMware API as an already formatted string and may not conform to the expected localization.

BYLS_CLUSTER_NAME
----------------------------------
On vMA, for a host and resource pool, it is the name of the cluster to which the host belongs when it is managed by virtual center. For a logical system, the value is NA.

BYLS_CPU_CLOCK
----------------------------------
On vMA, for a host and logical system, it is the clock speed of the CPUs in MHz if all of the processors have the same clock speed. For a resource pool the value is NA. This metric represents the CPU clock speed.
For an AIX frame, this metric is available only if the LPAR supports the perfstat_partition_config call from libperfstat.a, which is usually present on AIX 7.1 onwards. For an LPAR, this value will be "na".

BYLS_CPU_CYCLE_ENTL_MAX
----------------------------------
On vMA, for a host, logical system and resource pool, this value indicates the maximum processor capacity, in MHz, configured for the entity. If the maximum processor capacity is not configured for the entity, a value of "-3" will be displayed in PA and "ul" (unlimited) in other clients.

On HPUX, this is the maximum processor capacity, in MHz, configured for this logical system.

BYLS_CPU_CYCLE_ENTL_MIN
----------------------------------
On vMA, for a host, logical system and resource pool, this value indicates the minimum processor capacity, in MHz, configured for the entity.

On HPUX, this is the minimum processor capacity, in MHz, configured for this logical system.

BYLS_CPU_CYCLE_TOTAL_USED
----------------------------------
On vMA, for a host, resource pool and logical system, it is the total time the physical CPUs were utilized during the interval, represented in CPU cycles. On KVM/Xen, this is the number of milliseconds used on all CPUs during the interval.

BYLS_CPU_EFFECTIVE_UTIL
----------------------------------
On vMA, for a cluster, the metric is the utilization of the total available CPU resources of all hosts within that cluster. Effective CPU = Aggregate host CPU capacity - (VMkernel CPU + Service Console CPU + other service CPU). The value is NA for all other entities.

BYLS_CPU_ENTL_EMIN
----------------------------------
On vMA, for a host, logical system and resource pool, the value is "na".

BYLS_CPU_ENTL_MAX
----------------------------------
The maximum CPU units configured for a logical system.

On HP-UX HPVM, this metric indicates the maximum percentage of physical CPU that a virtual CPU of this logical system can get.

On AIX SPLPAR, this metric is equivalent to the "Maximum Capacity" field of the 'lparstat -i' command. For WPARs, it is the maximum percentage of CPU that a WPAR can have even if there is no contention for CPU. A WPAR shares the CPU units of its global environment.

On Hyper-V host, for the Root partition, this metric is NA.

On vMA, for a host, the metric is equivalent to the total number of cores on the host. For a resource pool and a logical system, this metric indicates the maximum CPU units configured for it.

BYLS_CPU_ENTL_MIN
----------------------------------
The minimum CPU units configured for this logical system.

On HP-UX HPVM, this metric indicates the minimum percentage of physical CPU that a virtual CPU of this logical system is guaranteed.

On AIX SPLPAR, this metric is equivalent to the "Minimum Capacity" field of the 'lparstat -i' command. For WPARs, it is the minimum CPU share assigned to a WPAR that is guaranteed. A WPAR shares the CPU units of its global environment.

On Hyper-V host, for the Root partition, this metric is NA.

On vMA, for a host, the metric is equivalent to the total number of cores on the host. For a resource pool and a logical system, this metric indicates the guaranteed minimum CPU units configured for it.

On Solaris Zones, this metric indicates the configured minimum CPU percentage reserved for a logical system. For Solaris Zones, this metric is calculated as:

  BYLS_CPU_ENTL_MIN = (BYLS_CPU_SHARES_PRIO / Pool-Cpu-Shares)

where Pool-Cpu-Shares is the total CPU shares available in the CPU pool the zone is associated with; it is the sum of the BYLS_CPU_SHARES_PRIO values for all active zones associated with this pool.
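The Solaris Zones calculation above can be sketched as follows. This is only an illustration of the documented formula; the zone names and share values are hypothetical:

  # Illustrative: derive each zone's minimum CPU entitlement from its FSS shares,
  # per the BYLS_CPU_ENTL_MIN formula for Solaris Zones.
  zone_shares = {"zoneA": 20, "zoneB": 30, "zoneC": 50}   # BYLS_CPU_SHARES_PRIO per active zone
  pool_cpu_shares = sum(zone_shares.values())              # total shares in the zones' CPU pool
  entl_min = {zone: shares / pool_cpu_shares for zone, shares in zone_shares.items()}
  print(entl_min["zoneA"])   # 0.2 -> zoneA is guaranteed 20% of its CPU pool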
BYLS_CPU_ENTL_UTIL
----------------------------------
Percentage of entitled processing units (guaranteed processing units allocated to this logical system) consumed by the logical system.

On an HP-UX HPVM host, the metric indicates the logical system's CPU utilization with respect to its minimum CPU entitlement. On an HP-UX HPVM host, this metric is calculated as:

  BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC / (BYLS_CPU_ENTL_MIN * BYLS_NUM_CPU)) * 100

On AIX, this metric is calculated as:

  BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC / BYLS_CPU_ENTL) * 100

On WPAR, this metric is calculated as:

  BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC / BYLS_CPU_ENTL_MAX) * 100

This metric matches the "%Resc" field of the topas command (run inside the WPAR).

On Solaris Zones, the metric indicates the logical system's CPU utilization with respect to its minimum CPU entitlement. This metric is calculated as:

  BYLS_CPU_ENTL_UTIL = (BYLS_CPU_TOTAL_UTIL / BYLS_CPU_SHARES_PRIO) * 100

If a Solaris zone is not assigned a CPU entitlement value, then a CPU entitlement value is derived for this zone based on the total CPU entitlement associated with the CPU pool this zone is attached to.

On Hyper-V host, for the Root partition, this metric is NA.

On vMA, for a host the value is the same as BYLS_CPU_PHYS_TOTAL_UTIL, while for a logical system and resource pool the value is the percentage of processing units consumed with respect to the minimum CPU entitlement.

BYLS_CPU_FAILOVER
----------------------------------
On vMA, for a cluster, the metric is the VMware HA number of failures that can be tolerated. The value is NA for all other entities.

BYLS_CPU_MT_ENABLED
----------------------------------
Indicates whether the CPU hardware threads are enabled ("On") or not ("Off") for a logical system. For AIX WPARs, the metric will be "na". On vMA, this metric indicates whether the CPU hardware threads are enabled or not for a host, while for a resource pool and a logical system the value is not available ("na").

BYLS_CPU_PHYSC
----------------------------------
This metric indicates the number of CPU units utilized by the logical system. On an Uncapped logical system, this value will be equal to the CPU units capacity used by the logical system during the interval. This can be more than the value entitled for a logical system.

BYLS_CPU_PHYS_READY_UTIL
----------------------------------
On vMA, for a logical system, it is the percentage of time, during the interval, that the CPU was in ready state. For a host and resource pool the value is NA.

BYLS_CPU_PHYS_SYS_MODE_UTIL
----------------------------------
The percentage of time the physical CPUs were in system mode (kernel mode) for the logical system during the interval. On AIX LPAR, this value is equivalent to the "%sys" field reported by the "lparstat" command. On Hyper-V host, this metric indicates the percentage of time spent in Hypervisor code. On vMA, the metric indicates the percentage of time the physical CPUs were in system mode during the interval for the host or logical system. On vMA, for a resource pool, this metric is "na".

BYLS_CPU_PHYS_TOTAL_TIME
----------------------------------
Total time, in seconds, spent by the logical system on the physical CPUs. On HPUX, this information is updated internally every 10 seconds, so it may take that long for these values to be updated in PA/Glance. On vMA, the value indicates the time, in seconds, spent on the physical CPUs by the logical system, host or resource pool. On KVM/Xen, this value is core-normalized if GBL_IGNORE_MT is enabled on the server.
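The platform-specific BYLS_CPU_ENTL_UTIL formulas above all follow the same pattern: physical CPU consumption divided by an entitlement figure, expressed as a percentage. A short sketch of those calculations (illustrative only; the input values are hypothetical sample readings, not real agent data):

  # Illustrative: the BYLS_CPU_ENTL_UTIL calculations described above.
  def entl_util_hpvm(physc, entl_min, num_cpu):
      return physc / (entl_min * num_cpu) * 100

  def entl_util_aix_lpar(physc, entl):
      return physc / entl * 100

  def entl_util_solaris_zone(cpu_total_util, cpu_shares_prio):
      return cpu_total_util / cpu_shares_prio * 100

  # Example: an AIX LPAR consuming 1.5 physical CPUs against an entitlement of 2.0
  print(entl_util_aix_lpar(1.5, 2.0))   # 75.0 -> 75% of the entitlement is in use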
BYLS_CPU_PHYS_TOTAL_UTIL
----------------------------------
Percentage of total time the physical CPUs were utilized by this logical system during the interval.

On HPUX, this information is updated internally every 10 seconds, so it may take that long for these values to be updated in PA/Glance.

On Solaris, this metric is calculated with respect to the available active physical CPUs on the system.

On AIX, this metric is equivalent to the sum of BYLS_CPU_PHYS_USER_MODE_UTIL and BYLS_CPU_PHYS_SYS_MODE_UTIL. For AIX LPARs, the metric is calculated with respect to the available physical CPUs in the pool to which this LPAR belongs. For AIX WPARs, the metric is calculated with respect to the available physical CPUs in the resource set or Global Environment.

On vMA, the value indicates the percentage of total time the physical CPUs were utilized by the logical system, host or resource pool. On KVM/Xen, this value is core-normalized if GBL_IGNORE_MT is enabled on the server.

BYLS_CPU_PHYS_USER_MODE_UTIL
----------------------------------
The percentage of time the physical CPUs were in user mode for the logical system during the interval. On AIX LPAR, this value is equivalent to the "%user" field reported by the "lparstat" command. On Hyper-V host, this metric indicates the percentage of time spent in guest code. On vMA, the metric indicates the percentage of time the physical CPUs were in user mode during the interval for the host or logical system. On vMA, for a resource pool, this metric is "na".

BYLS_CPU_PHYS_WAIT_UTIL
----------------------------------
On vMA, for a logical system, it is the percentage of time, during the interval, that the virtual CPU was waiting for IOs to complete. For a host and resource pool the value is NA.

BYLS_CPU_SHARES_PRIO
----------------------------------
This metric indicates the weight/priority assigned to an Uncapped logical system. This value determines the minimum share of unutilized processing units that this logical system can utilize. The value of this metric will be "-3" in PA and "ul" in other clients if the CPU shares value is 'Unlimited' for a logical system.

On AIX SPLPAR, this value is dependent on the available processing units in the pool and can range from 0 to 255. For WPARs, this metric represents how much of a particular resource a WPAR receives relative to the other WPARs.

On vMA, for a logical system and resource pool this value can range from 1 to 1000000, while for a host the value is NA.

On Solaris Zones, this metric sets a limit on the number of fair share scheduler (FSS) CPU shares for a zone.

On Hyper-V host, this metric specifies the allocation of CPU resources when more than one virtual machine is running and competing for resources. This value can range from 0 to 10000. For the Root partition, this metric is NA.

BYLS_CPU_SYS_MODE_UTIL
----------------------------------
On vMA, for a host and a logical system, this metric indicates the percentage of time the CPU was in system mode during the interval. On vMA, for a resource pool, this metric is "na".

BYLS_CPU_TOTAL_UTIL
----------------------------------
Percentage of total time the logical CPUs were not idle during this interval. This metric is calculated against the number of logical CPUs configured for this logical system. For AIX WPARs, the metric represents the percentage of time the physical CPUs were not idle during this interval.

BYLS_CPU_UNRESERVED
----------------------------------
On vMA, for a host, it is the number of CPU cycles that are available for creating a new logical system.
For a logical system and resource pool the value is NA.

BYLS_CPU_USER_MODE_UTIL
----------------------------------
On vMA, for a host and a logical system, this metric indicates the percentage of time the CPU was in user mode during the interval. On vMA, for a resource pool, this metric is "na".

BYLS_DATACENTER_NAME
----------------------------------
On vMA, for a host, it is the name of the datacenter to which the host belongs when it is managed by virtual center. To uniquely identify a datacenter in a virtual center, the datacenter name is appended with the folder names in bottom-up order. For a logical system and resource pool, the value is NA.

BYLS_DISK_CAPACITY
----------------------------------
On vMA, for a datastore, the metric is the capacity of the datastore (in MB). The value is NA for all other entities.

BYLS_DISK_COMMAND_ABORT_RATE
----------------------------------
On vMA, for a host, the metric is a measure of the disk command abort rate on the host. It is calculated by dividing the total commands aborted in the interval by the total commands issued in that interval. The value is NA for all other entities.

BYLS_DISK_FREE_SPACE
----------------------------------
On vMA, for a datastore, the metric is the amount of free space (in MB) available in the datastore. The value is NA for all other entities.

BYLS_DISK_IORM_ENABLED
----------------------------------
On vMA, for a datastore, the metric indicates whether IORM is enabled for the datastore. The value is NA for all other entities.

BYLS_DISK_IORM_THRESHOLD
----------------------------------
On vMA, for a datastore, the metric is the IORM threshold value of the datastore. The value is NA for all other entities.

BYLS_DISK_PHYS_BYTE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of KBs transferred to and from disks during the interval. On vMA, for a resource pool, this metric is "na".

BYLS_DISK_PHYS_BYTE_RATE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the average number of KBs per second at which data was transferred to and from disks during the interval. On vMA, for a resource pool, this metric is "na".

BYLS_DISK_PHYS_READ
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of physical reads during the interval. On vMA, for a resource pool, this metric is "na".

BYLS_DISK_PHYS_READ_BYTE_RATE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the average number of KBs transferred from the disk per second during the interval. On vMA, for a resource pool, this metric is "na".

BYLS_DISK_PHYS_READ_RATE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of physical reads per second during the interval. On vMA, for a resource pool, this metric is "na".

BYLS_DISK_PHYS_WRITE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of physical writes during the interval. On vMA, for a resource pool, this metric is "na".

BYLS_DISK_PHYS_WRITE_BYTE_RATE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the average number of KBs transferred to the disk per second during the interval. On vMA, for a resource pool, this metric is "na".
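The abort-rate calculation described for BYLS_DISK_COMMAND_ABORT_RATE above is a simple ratio over one interval. A minimal sketch with made-up counter values:

  # Illustrative: disk command abort rate for one interval,
  # per the BYLS_DISK_COMMAND_ABORT_RATE description (aborted / issued).
  commands_issued = 5000
  commands_aborted = 25
  abort_rate = commands_aborted / commands_issued if commands_issued else 0.0
  print(abort_rate)   # 0.005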
BYLS_DISK_PHYS_WRITE_RATE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of physical writes per second during the interval. On vMA, for a resource pool, this metric is "na".

BYLS_DISK_READ_LATENCY
----------------------------------
On vMA, for a host and guest, the metric is the total read latency experienced on the entity. The value is NA for all other entities.

BYLS_DISK_WRITE_LATENCY
----------------------------------
On vMA, for a host and guest, the metric is the total write latency experienced on the entity. The value is NA for all other entities.

BYLS_DISK_QUEUE_DEPTH_PEAK
----------------------------------
On vMA, for a host, the metric is a measure of the wait queue depth experienced on the host. The value is NA for all other entities.

BYLS_DISK_SHARE_PRIORITY
----------------------------------
On vMA, for a datastore, the metric is a measure of the shares priority of the datastore. The value is NA for all other entities.

BYLS_DISK_THROUGHPUT_CONTENTION
----------------------------------
On vMA, for a datastore, the metric is the disk throughput contention in that interval. The value is NA for all other entities.

BYLS_DISK_THROUGPUT_USAGE
----------------------------------
On vMA, for a datastore, the metric is the disk throughput usage. The value is NA for all other entities.

BYLS_DISK_UTIL
----------------------------------
On vMA, for a host, it is the average percentage of time during the interval (average utilization) that all the disks had IO in progress. For a logical system and resource pool the value is NA.

BYLS_DISK_UTIL_PEAK
----------------------------------
On vMA, for a host, it is the utilization of the busiest disk during the interval. For a logical system and resource pool the value is NA.

BYLS_DISPLAY_NAME
----------------------------------
On vMA, this metric indicates the name of the host, logical system or resource pool.

On HPVM, this metric indicates the Virtual Machine name of the logical system and is equivalent to the "Virtual Machine Name" field of the 'hpvmstatus' command.

On AIX, the value is as returned by the command "uname -n" (that is, the string returned from the "hostname" program).

On Solaris Zones, this metric indicates the zone name and is equivalent to the 'NAME' field of the 'zoneadm list -vc' command.

On Hyper-V host, this metric indicates the Virtual Machine name of the logical system and is equivalent to the Name displayed in Hyper-V Manager. For the Root partition, the value is always "Root".

BYLS_GUEST_TOOLS_STATUS
----------------------------------
On vMA, for a guest, the metric is the current status of Guest Integration Tools in the guest operating system, if known. The value is NA for all other entities.

BYLS_IP_ADDRESS
----------------------------------
This metric indicates the IP Address of the particular logical system. On vMA, this metric indicates the IP Address for a host and a logical system, while for a resource pool the value is NA.
BYLS_LS_CONNECTION_STATE
----------------------------------
For a host, this metric is the current status of the connection. For logical systems, it indicates whether or not the entity is available for management. It can have the values Connected, Disconnected or NotResponding. The value is NA for all other entities.

BYLS_LS_HOSTNAME
----------------------------------
This is the DNS registered name of the system. On Hyper-V host, this metric is NA if the logical system is not active or Hyper-V Integration Components are not installed on it. On vMA, for a host and logical system the metric is the Fully Qualified Domain Name, while for a resource pool the value is NA.

BYLS_LS_HOST_HOSTNAME
----------------------------------
On vMA, for a logical system and resource pool, it is the FQDN of the host on which they are hosted. For a host, the value is NA.

BYLS_LS_ID
----------------------------------
A unique identifier of the logical system.

On HPVM, this metric is a numeric id and is equivalent to the "VM # " field of the 'hpvmstatus' command.

On AIX LPAR, this metric indicates the partition number and is equivalent to the "Partition Number" field of the 'lparstat -i' command. For an AIX WPAR, this metric represents the partition number and is equivalent to "uname -W" run inside the WPAR.

On Solaris Zones, this metric indicates the zone id and is equivalent to the 'ID' field of the 'zoneadm list -vc' command.

On Hyper-V host, this metric indicates the PID of the process corresponding to this logical system. For the Root partition, this metric is NA.

On vMA, this metric is a unique identifier for a host, resource pool and a logical system. The value of this metric may change for an instance across collection intervals.

BYLS_LS_MODE
----------------------------------
This metric indicates whether the CPU entitlement for the logical system is Capped or Uncapped.

On AIX SPLPAR, this metric is the same as the "Mode" field of the 'lparstat -i' command. For WPARs, this metric is always CAPPED.

On vMA, the value is Capped for a host and Uncapped for a logical system. For a resource pool, the value is Uncapped or Capped depending on whether or not its reservation is expandable.

On Solaris Zones, this metric is "Capped" when the zone is assigned CPU shares and is attached to a valid CPU pool.

BYLS_LS_NAME
----------------------------------
This is the name of the computer.

On HPVM, this metric indicates the Virtual Machine name of the logical system and is equivalent to the "Virtual Machine Name" field of the 'hpvmstatus' command.

On AIX, the value is as returned by the command "uname -n" (that is, the string returned from the "hostname" program).

On vMA, this metric is a unique identifier for a host, resource pool and a logical system. The value of this metric remains the same, for an instance, across collection intervals.

On Solaris Zones, this metric indicates the zone name and is equivalent to the 'NAME' field of the 'zoneadm list -vc' command.

On Hyper-V host, this metric indicates the name of the XML file which has the configuration information of the logical system. This file is present under the logical system's installation directory indicated by BYLS_LS_PATH. For the Root partition, the value is always "Root".

BYLS_LS_NUM_SNAPSHOTS
----------------------------------
For a guest, the metric is the number of snapshots created for the system. The value is NA for all other entities.

BYLS_LS_OSTYPE
----------------------------------
The Guest OS this logical system is hosting.
On HPVM, the metric can have the following values:
  HP-UX
  Linux
  Windows
  OpenVMS
  Other
  Unknown

On Hyper-V host, the metric can have the following values:
  Windows
  Other
On Hyper-V host, this metric is NA if the logical system is not active or Hyper-V Integration Components are not installed on it.

On vMA, the metric can have the following values for a host and logical system:
  ESX/ESXi followed by version, or ESX-Serv (applicable only for a host)
  Linux
  Windows
  Solaris
  Unknown
The value is NA for a resource pool.

BYLS_LS_PARENT_TYPE
----------------------------------
On vMA, the metric indicates the type of the parent entity. The value is HOST if the parent is a host, and RESPOOL if the parent is a resource pool. For a host, the value is NA.

BYLS_LS_PARENT_UUID
----------------------------------
On vMA, the metric indicates the UUID appended to the display_name of the parent entity. For a logical system and resource pool, this metric could indicate the UUID appended to the display_name of a host or resource pool, as they can be created under a host or a resource pool. For a host, the value is NA. For an LPAR, if the frame is discovered, the value will be the BYLS_LS_UUID of the frame.

BYLS_LS_PATH
----------------------------------
This metric indicates the installation path for the logical system. On Hyper-V host, for the Root partition, this metric is NA. On vMA, the metric indicates the installation path for a logical system; for a resource pool and a host, this metric is "na".

BYLS_LS_ROLE
----------------------------------
On vMA, for a host the metric is HOST. For a logical system the value is GUEST and for a resource pool the value is RESPOOL. For a logical system which is a vMA or VA, the value is PROXY. For a datacenter, the value is DATACENTER. For a cluster, the value is CLUSTER. For a datastore, the value is DATASTORE. For a template, the value is TEMPLATE. For an AIX frame, the role is "Host". For an LPAR, the role is "Guest".

BYLS_LS_SHARED
----------------------------------
This metric indicates whether the physical CPUs are dedicated to this logical system or shared.

On HPUX HPVM and Hyper-V hosts, this metric is always "Shared".

On vMA, the value is "Dedicated" for a host, and "Shared" for a logical system and resource pool.

On AIX SPLPAR, this metric is equivalent to the "Type" field of the 'lparstat -i' command. For AIX WPARs, this metric will always be "Shared".

On Solaris Zones, this metric is "Dedicated" when this zone is attached to a CPU pool not shared by any other zone.

BYLS_LS_STATE
----------------------------------
The state of this logical system.

On HPVM, the logical systems can have one of the following states:
  Unknown
  Other
  invalid
  Up
  Down
  Boot
  Crash
  Shutdown
  Hung

On vMA, this metric can have one of the following states for a host:
  on
  off
  unknown
The values for a logical system can be one of the following:
  on
  off
  suspended
  unknown
The value is NA for a resource pool.

On Solaris Zones, the logical systems can have one of the following states:
  configured
  incomplete
  installed
  ready
  running
  shutting down
  mounted

On AIX LPARs, the logical system will always be active. On AIX WPARs, the logical systems can have one of the following states:
  Broken
  Transitional
  Defined
  Active
  Loaded
  Paused
  Frozen
  Error

A logical system on a Hyper-V host can have the following states:
  unknown
  enabled
  disabled
  paused
  suspended
  starting
  snapshtng
  migrating
  saving
  stopping
  deleted
  pausing
  resuming

BYLS_LS_STATE_CHANGE_TIME
----------------------------------
For a guest, the metric is the epoch time when the last state change was observed.
The value is NA for all other entities.

BYLS_LS_TYPE
----------------------------------
The type of this logical system. On AIX, the logical systems can have one of the following types:
  lpar
  sys wpar
  app wpar
On vMA, the value of this metric is "VMware". For an AIX frame, the value of this metric is "FRAME".

BYLS_LS_UUID
----------------------------------
UUID of this logical system. This Id uniquely identifies this logical system across multiple hosts.

On Hyper-V host, for the Root partition, this metric is NA.

On vMA, for a logical system or a host, the value indicates the UUID appended to the display_name of the system. For a resource pool, the value is the hostname of the host where the resource pool is hosted, followed by the unique id of the resource pool.

For an AIX frame, the value is the display name appended with the serial number. For an LPAR, this value is the frame's name appended with the serial number.

BYLS_MACHINE_MODEL
----------------------------------
On vMA, for a host, it is the CPU model of the host system. For a logical system and resource pool the value is "na". On AIX, this is the machine model of the frame, if present; for an LPAR, this value would be "na".

BYLS_MEM_ACTIVE
----------------------------------
On vMA, for a logical system, it is the amount of memory that is actively used. For a host and resource pool the value is NA.

BYLS_MEM_AVAIL
----------------------------------
On vMA, for a host, the amount of physical available memory in the host system (in MBs unless otherwise specified). For a logical system and resource pool the value is NA.

BYLS_MEM_BALLOON_USED
----------------------------------
On vMA, for a logical system and cluster, it is the amount of memory held by memory control for ballooning. The value is represented in KB. For a host and resource pool the value is NA. On KVM/Xen, this value will be "na" if the version of libvirt doesn't support memory stats.

BYLS_MEM_BALLOON_UTIL
----------------------------------
On vMA, for a logical system, it is the amount of memory held by memory control for ballooning, represented as a percentage of BYLS_MEM_ENTL. For a host and resource pool the value is NA. On KVM/Xen, this value will be "na" if the version of libvirt doesn't support memory stats.

BYLS_MEM_EFFECTIVE_UTIL
----------------------------------
On vMA, for a cluster, the metric is the utilization of the total amount of machine memory of all hosts in the cluster that is available for virtual machine memory (physical memory for use by the Guest OS) and virtual machine overhead memory. Effective Memory = Aggregate host machine memory - (VMkernel memory + Service Console memory + other service memory). The value is NA for all other entities.

BYLS_MEM_ENTL
----------------------------------
The entitled memory configured for this logical system (in MB). On Hyper-V host, for the Root partition, this metric is NA. On vMA, for a host the value is the physical memory available in the system, for a logical system this metric indicates the minimum memory configured, and for a resource pool the value is NA. For an AIX frame, this value is obtained from the command "lshwres -m -r mem --level sys".

BYLS_MEM_ENTL_MAX
----------------------------------
The maximum amount of memory configured for a logical system, in MB. The value of this metric will be "-3" in PA and "ul" in other clients if the entitlement is 'Unlimited' for a logical system. On AIX LPARs, this metric will be "na". On vMA, this metric indicates the maximum amount of memory configured for a resource pool or a logical system.
For a host, the value is the amount of physical memory available in the system.

On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na".

BYLS_MEM_ENTL_MIN
----------------------------------
The minimum amount of memory configured for the logical system, in MB. On AIX LPARs, this metric will be "na". On vMA, this metric indicates the reserved amount of memory configured for a host, resource pool or a logical system.

On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na".

BYLS_MEM_ENTL_UTIL
----------------------------------
The percentage of entitled memory in use during the interval. On vMA, for a logical system or a host, the value indicates the percentage of entitled memory in use by it during the interval. For an AIX frame, this is calculated using "lshwres -r mempool -m" from the HMC; Active Memory Sharing has to be turned on for this. On vMA, for a resource pool, this metric is "na".

On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na".

BYLS_MEM_FREE
----------------------------------
The amount of free memory on the logical system, in MB. On vMA, for a host and logical system, it is the amount of memory not allocated. For a resource pool the value is "na".

On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na".

BYLS_MEM_FREE_UTIL
----------------------------------
The percentage of memory that is free at the end of the interval. On vMA, for a resource pool the value is NA.

On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na".

BYLS_MEM_HEALTH
----------------------------------
On vMA, for a host, it is a number that indicates the state of the memory. A low number indicates that the system is not under memory pressure. For a logical system and resource pool the value is "na". On vMA, the values are defined as:
  0 - High - indicates free memory is available and no memory pressure
  1 - Soft
  2 - Hard
  3 - Low - indicates there is pressure for free memory

On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na". For relevant guests, these values represent the level of memory pressure, 0 being none and 3 being very high.

BYLS_MEM_OVERHEAD
----------------------------------
The amount of memory associated with a logical system that is currently consumed on the host system due to virtualization. On vMA, this metric indicates the amount of overhead memory associated with a host, logical system and resource pool.
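The entitlement-based memory percentages above (for example BYLS_MEM_ENTL_UTIL) follow directly from their definitions: memory in use expressed against the entitled amount. The dictionary does not spell out the exact computation, so the sketch below is only an illustration consistent with the definition, using made-up numbers:

  # Illustrative only: percentage of entitled memory in use, consistent with the
  # BYLS_MEM_ENTL_UTIL definition above. Values are hypothetical, in MB.
  mem_entl_mb = 4096.0   # entitled memory configured for a logical system
  mem_used_mb = 3072.0   # memory in use at the end of the interval
  entl_util_pct = mem_used_mb / mem_entl_mb * 100
  print(entl_util_pct)   # 75.0 -> 75% of the entitlement is in use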
BYLS_MEM_PHYS
----------------------------------
On vMA, for a host the value is the physical memory available in the system, and for a logical system this metric indicates the minimum memory configured. On vMA, for a resource pool, this metric is "na".

On HPVM, this metric matches the data in the "Memory Details" section of "hpvmstatus -V" when the dynamic memory driver is not enabled, and it matches the data in the "Dynamic Memory Information" section when the dynamic memory driver is active. The dynamic memory driver is currently only available on guests running HPUX 11iv3 or newer versions.

BYLS_MEM_PHYS_UTIL
----------------------------------
The percentage of physical memory used during the interval. On vMA, the metric indicates the percentage of physical memory used by a host, a logical system or a cluster. On vMA, for a resource pool, this metric is "na".

On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na".

On KVM/Xen, this is the percentage of the total memory assigned to the VM that is currently used. For Domain-0, or any other instance with an unlimited memory entitlement, it is NA.

BYLS_MEM_SHARES_PRIO
----------------------------------
The weight/priority for memory assigned to this logical system. This value influences the share of unutilized physical memory that this logical system can utilize. The value of this metric will be "-3" in PA and "ul" in other clients if the memory shares value is 'Unlimited' for a logical system.

On AIX LPARs, this metric will be "na". On vMA, this metric indicates the share of memory configured for a resource pool and a logical system. For a host the value is NA.

BYLS_MEM_SWAPIN
----------------------------------
On vMA, for a logical system, the value indicates the amount of memory that is swapped in during the interval. For a host and resource pool the value is NA. On KVM/Xen, this value will be "na" if extended memory statistics are not available.

BYLS_MEM_SWAPOUT
----------------------------------
On vMA, for a logical system, the value indicates the amount of memory that is swapped out during the interval. For a host and resource pool the value is NA. On KVM/Xen, this value will be "na" if extended memory statistics are not available.

BYLS_MEM_SWAPPED
----------------------------------
On vMA, for a host, logical system and resource pool, this metric indicates the amount of memory that has been transparently swapped to and from the disk.

BYLS_MEM_SWAPTARGET
----------------------------------
On vMA, for a logical system, the value indicates the amount of memory that can be swapped. For a host and resource pool the value is "na".

BYLS_MEM_SWAP_UTIL
----------------------------------
On Solaris, this metric indicates the percentage of swap memory consumed by the zone with respect to the total configured swap memory (BYLS_MEM_SWAP). This metric is calculated as:

  BYLS_MEM_SWAP_UTIL = (BYLS_MEM_SWAP_USED / BYLS_MEM_SWAP) * 100

On vMA, for a logical system, it is the percentage of swap memory utilized with respect to the amount of swap memory available for the logical system. For a host and resource pool the value is NA.
For a logical system, this metric is calculated using the formula below:

  BYLS_MEM_SWAP_UTIL = (BYLS_MEM_SWAPPED * 100) / (BYLS_MEM_ENTL - BYLS_MEM_ENTL_MIN)

BYLS_MEM_SYS
----------------------------------
On vMA, for a host, it is the amount of physical memory (in MBs unless otherwise specified) used by the system (kernel) during the interval. For a logical system and resource pool the value is NA.

BYLS_MEM_UNRESERVED
----------------------------------
On vMA, for a host, it is the amount of memory that is unreserved. For a logical system and resource pool the value is "na". Unreserved memory is the memory reservation not used by the Service Console, the VMkernel, vSphere services, and the user-specified memory reservations and overhead memory of other powered-on VMs.

BYLS_MEM_USED
----------------------------------
The amount of memory used by the logical system at the end of the interval. On vMA, this applies to hosts and logical systems; for a resource pool, this metric is "na".

On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running "hpvmstatus -V" will indicate whether the driver is active. For all other guests, the value is "na".

BYLS_MULTIACC_ENABLED
----------------------------------
On vMA, for a datastore, the metric indicates whether multi access has been enabled for the datastore. The value is NA for all other entities.

BYLS_NET_BYTE_RATE
----------------------------------
On vMA, for a host and logical system, it is the sum of data transmitted and received for all the NIC instances of the host and virtual machine, represented in KBps. For a resource pool the value is NA.

BYLS_NET_IN_BYTE
----------------------------------
On vMA, for a host and logical system, it is the amount of data, in MB, received during the interval. For a resource pool the value is NA.

BYLS_NET_IN_PACKET
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of successful packets received through all network interfaces during the interval. On vMA, for a resource pool, this metric is "na".

BYLS_NET_IN_PACKET_RATE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of successful packets per second received through all network interfaces during the interval. On vMA, for a resource pool, this metric is "na".

BYLS_NET_OUT_BYTE
----------------------------------
On vMA, for a host and logical system, it is the amount of data, in MB, transmitted during the interval. For a resource pool the value is NA.

BYLS_NET_OUT_PACKET
----------------------------------
On vMA, for a host and a logical system, it is the number of successful packets sent through all network interfaces during the last interval. On vMA, for a resource pool, this metric is "na".

BYLS_NET_OUT_PACKET_RATE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of successful packets per second sent through the network interfaces during the interval. On vMA, for a resource pool, this metric is "na".

BYLS_NET_PACKET_RATE
----------------------------------
On vMA, for a host and a logical system, it is the number of successful packets per second, both sent and received, for all network interfaces during the interval. On vMA, for a resource pool, this metric is "na".

BYLS_NUM_ACTIVE_LS
----------------------------------
On vMA, for a host, this indicates the number of logical systems hosted in a system that are active.
For a logical system and resource pool the value is NA. For an AIX frame, this is the number of LPARs in the "Running" state. For an LPAR, this value will be "na".

BYLS_NUM_CLONES
----------------------------------
On vMA, for a cluster, the metric is the number of virtual machine clone operations for that cluster in that interval. The value is NA for all other entities.

BYLS_NUM_CPU
----------------------------------
The number of virtual CPUs configured for this logical system. This metric is equivalent to GBL_NUM_CPU on the corresponding logical system.

On HPVM, the maximum number of CPUs a logical system can have is 4, with respect to HPVM 3.x.

On AIX SPLPAR, the number of CPUs can be configured irrespective of the available physical CPUs in the pool this logical system belongs to. For AIX WPARs, this metric represents the logical CPUs of the global environment.

On vMA, for a host the metric is the number of physical CPU threads on the host. For a logical system, the metric is the number of virtual CPUs configured. For a resource pool the metric is NA.

On Solaris Zones, this metric represents the number of CPUs in the CPU pool this zone is attached to. This metric value is equivalent to GBL_NUM_CPU inside the corresponding non-global zone.

BYLS_NUM_CPU_CORE
----------------------------------
On vMA, for a host this metric provides the total number of CPU cores on the system. For a logical system or a resource pool the value is NA.

BYLS_NUM_CREATE
----------------------------------
On vMA, for a cluster, the metric is the number of virtual machine create operations for that cluster in that interval. The value is NA for all other entities.

BYLS_NUM_DEPLOY
----------------------------------
On vMA, for a cluster, the metric is the number of virtual machine template deploy operations for that cluster in that interval. The value is NA for all other entities.

BYLS_NUM_DESTROY
----------------------------------
On vMA, for a cluster, the metric is the number of virtual machine delete operations for that cluster in that interval. The value is NA for all other entities.

BYLS_NUM_DISK
----------------------------------
The number of disks configured for this logical system. Only local disk devices and optical devices present on the system are counted in this metric.

On vMA, for a host the metric is the number of disks configured for the host. For a logical system, the metric is the number of logical disk devices present on the logical system. For a resource pool the metric is NA.

For AIX WPARs, this metric will be "na".

On Hyper-V host, this metric value is equivalent to GBL_NUM_DISK inside the corresponding Hyper-V guest. On Hyper-V host, this metric is NA if the logical system is not active.

BYLS_NUM_HOSTS
----------------------------------
On vMA, for a DataCenter

as first half and the second half is the ESX host name.

BYNETIF_NET_SPEED
----------------------------------
The speed of this interface. This is the bandwidth in Mega bits/sec. Some AIX systems report a speed that is lower than the measured throughput and this can result in BYNETIF_UTIL and GBL_NET_UTIL_PEAK showing more than 100% utilization. On Linux, root permission is required to obtain network interface bandwidth so values will be n/a when running in non-root mode. Also, maximum bandwidth for virtual interfaces (vnetN) may be reported wrongly on KVM or Xen server so, similarly to AIX, utilization may exceed 100%.

BYNETIF_NET_TYPE
----------------------------------
The type of network device the interface communicates through.

Lan     - local area network card
Loop    - software loopback interface (not tied to a hardware device)
Loop6   - software loopback interface IPv6 (not tied to a hardware device)
Serial  - serial modem port
Vlan    - virtual lan
Wan     - wide area network card
Tunnel  - tunnel interface
Apa     - HP LinkAggregate Interface (APA)
Other   - hardware network interface type is unknown
ESXVLan - the card type belongs to network cards of ESX hosts which are monitored on vMA

BYNETIF_OUT_BYTE
----------------------------------
The number of KBs sent to the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate. If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system.

BYNETIF_OUT_BYTE_RATE
----------------------------------
The number of KBs per second sent to the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate. If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system.
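For illustration only, the following Python sketch (not part of GlancePlus, and not how the collector itself gathers data) approximates an outbound KB-per-second rate in the spirit of BYNETIF_OUT_BYTE_RATE by sampling the Linux per-interface driver counters under /sys/class/net/<interface>/statistics/. The interface name "eth0" and the 5-second interval are assumptions; note also that the raw kernel counter includes protocol headers, whereas the metric above counts only bytes in packets that carry data.

#!/usr/bin/env python3
# Rough sketch: outbound KB/s for one interface from Linux driver counters.
import time

def read_tx_bytes(interface):
    # Cumulative bytes transmitted by the driver since boot (physical statistics).
    with open(f"/sys/class/net/{interface}/statistics/tx_bytes") as f:
        return int(f.read())

def out_byte_rate_kb(interface, interval=5.0):
    before = read_tx_bytes(interface)
    time.sleep(interval)
    after = read_tx_bytes(interface)
    return (after - before) / 1024.0 / interval   # KB per second over the interval

if __name__ == "__main__":
    print("approx out KB/s on eth0:", out_byte_rate_kb("eth0"))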
BYNETIF_OUT_BYTE_RATE_CUM ---------------------------------- The average number of KBs per second sent to the network via this interface over the cumulative collection time. Only the bytes in packets that carry data are included in this rate. If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_OUT_PACKET ---------------------------------- The number of successful physical packets sent through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. For HP-UX, this will be the same as the sum of the “Outbound Unicast Packets” and “Outbound Non-Unicast Packets” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Opkts” column (TX-OK on Linux) from the “netstat -i” command for a network device. See also netstat(1). If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. 
The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_OUT_PACKET_RATE ---------------------------------- The number of successful physical packets per second sent through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_OUT_PACKET_RATE_CUM ---------------------------------- The average number of successful physical packets per second sent through the network interface over the cumulative collection time. If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. Physical statistics are packets recorded by the network drivers. 
These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_PACKET_RATE ---------------------------------- The number of successful physical packets per second sent and received through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the Lan card in the host. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_UTIL ---------------------------------- The percentage of bandwidth used with respect to the total available bandwidth on a given network interface at the end of the interval. On vMA this value will be N/A for those Lan cards which are of type ESXVLan. Some AIX systems report a speed that is lower than the measured throughput and this can result in BYNETIF_UTIL and GBL_NET_UTIL_PEAK showing more than 100% utilization. On Linux, root permission is required to obtain network interface bandwidth so values will be n/a when running in non-root mode. Also, maximum bandwidth for virtual interfaces (vnetN) may be reported wrongly on KVM or Xen server so, similarly to AIX, utilization may exceed 100%. BYOP_CLIENT_COUNT ---------------------------------- The number of current NFS operations that the local machine has processed as a NFS client during the interval. A host on the network can act both as a client, or as a server at the same time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. 
Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYOP_CLIENT_COUNT_CUM ---------------------------------- The number of current NFS operations that the local machine has processed as a NFS client over the cumulative collection time. A host on the network can act both as a client, or as a server at the same time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYOP_INTERVAL ---------------------------------- The amount of time in the interval. BYOP_INTERVAL_CUM ---------------------------------- The amount of time over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. 
On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this.

BYOP_NAME
----------------------------------
String mnemonic for the NFS operation. One of the following:

For NFS Version 2

Name         Operation/Action
------------------------------------
getattr      Return the current attributes of a file.
setattr      Set the attributes of a file and returns the new attributes.
lookup       Return the attributes of a file.
readlink     Return the string in the symbolic link of a file.
read         Return data from a file.
write        Put data into a file.
create       Create a file.
remove       Remove a file.
rename       Give a file a new name.
link         Create a hard link to a file.
symlink      Create a symbolic link to a file.
mkdir        Create a directory.
rmdir        Remove a directory.
readdir      Read a directory entry.
statfs       Return mounted file system information.
null         Verify NFS service connections and timing. On HP-UX, no actual work done.
writecache   Flush the server write cache if a special write cache exists. Most systems use the file buffer cache and not a special server cache. Not used on HP-UX.
root         Find root file system handle (probably obsolete). Not used on HP-UX.

For NFS Version 3

Name         Operation/Action
------------------------------------
getattr      Return the current attributes of a file.
setattr      Set the attributes of a file and returns the new attributes.
lookup       Return the attributes of a file.
access       Check access permissions of a user.
readlink     Return the string in the symbolic link of a file.
read         Return data from a file.
write        Put data into a file.
create       Create a file.
mkdir        Make a directory.
symlink      Create a symbolic link to a file.
mknod        Create a special device.
remove       Remove a file.
rmdir        Remove a directory.
rename       Give a file a new name.
link         Create a hard link to a file.
readdir      Read a directory entry.
readdirplus  Extended read of a directory entry.
fsstat       Get dynamic file system information.
fsinfo       Get static file system information.
pathconf     Retrieve POSIX information.
commit       Commit cached data on server to stable storage.
null         Verify NFS services. No actual work done.

BYOP_SERVER_COUNT
----------------------------------
The number of current NFS operations that the local machine has processed as an NFS server during the interval. A host on the network can act both as a client and as a server at the same time.

BYOP_SERVER_COUNT_CUM
----------------------------------
The number of current NFS operations that the local machine has processed as an NFS server over the cumulative collection time. A host on the network can act both as a client and as a server at the same time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYSWP_SWAP_PRI ---------------------------------- The priority of this swap device. This value is set by either the swapon(1M) command, or by the “pri=“ field in /etc/fstab. On HP-UX, swap space is used by the lower value priorities first. Since device swap is faster than file system swap, it is advisable to have lower values for device swap. The legal values for priority range from 0 to 10. On HP-UX, the “memory” swap area has no priority and will be shown as -1. This indicates that using memory as a swap area is only done after all other swap resources have been exhausted. This is true in extreme cases of memory pressure forcing the kernel to swap the entire process to disk. In cases of process deactivation, the memory pseudo swap actually has the highest priority - deactivated pages are not moved - they are simply marked as deactivated and the space they occupy is considered pseudo swap. On Linux, swap space is used by the higher value priorities first. The legal values for priority range from 0 to 32767. The system assigns negative priority values if no priority is specified during the creation of swap area. See swapon(8) for details. BYSWP_SWAP_SPACE_AVAIL ---------------------------------- The capacity (in MB) for swapping in this swap area. On HP-UX, for “device” type swap, this value is constant. However, for “filesys” swap this value grows as needed. File system swap grows in units of “SWCHUNKS” x DEV_BSIZE bytes, which is typically 2MB. This metric is similar to the “AVAIL” parameters returned from /usr/sbin/swapinfo. For “memory” type swap, this value also grows as needed or as possible, given that any memory reserved for swap cannot be used for normal virtual memory. Note that this is potential swap space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be usable. For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB remains unusable and is considered wasted space. On SUN, this is the same as (blocks * .5)/1024, reported by the “swap -l” command. On AIX, this metric is set to “na” for inactive swap devices. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. BYSWP_SWAP_SPACE_NAME ---------------------------------- On Unix systems, this is the name of the device file or file system where the swap space is located. On HP-UX, part of the system’s physical memory may be allocated as a pseudo- swap device. It is enabled by setting the “SWAPMEM_ON” kernel parameter to 1. On SunOS 5.X, part of the system’s physical memory may be allocated as a pseudo-swap device. Also note, “/tmp” is usually configured as a memory based file system and is not used for swap space. Therefore, it will not be listed with the swap devices. This is noted because “df” uses the label “swap” for the “/tmp” file system which may be confusing. See tmpfs(7). 
BYSWP_SWAP_SPACE_USED ---------------------------------- The amount of swap space (in MB) used in this area. On HP-UX, this value is similar to the “USED” column returned by the /usr/sbin/swapinfo command. On SUN, “Used” indicates amount written to disk (or locked in memory), rather than reserved. Swap space is reserved (by decrementing a counter) when virtual memory for a program is created. This is the same as (blocks - free) * .5/1024, reported by the “swap -l” command. On SUN, global swap space is tracked through the operating system. Device swap space is tracked through the devices. For this reason, the amount of swap space used may differ between the global and by-device metrics. Sometimes pages that are marked to be swapped to disk by the operating system are never swapped. The operating system records this as used swap space, but the devices do not, since no physical IOs occur. (Metrics with the prefix “GBL” are global and metrics with the prefix “BYSWP” are by device.) On AIX, this metric is set to “na” for inactive swap devices. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. BYSWP_SWAP_TYPE ---------------------------------- The type of swap space allocated on the system. On HP-UX and SUN, types of swap space are device, file system (“filesys”), or memory. “Device” swap is accessed directly without going through the file system, and is therefore faster than “filesys” swap. “Filesys” swap can be to a local or NFS mounted swap file. “Memory” swap is space in the system’s physical memory reserved for pseudo-swap for running processes. Using pseudo-swap means the pages are simply locked in memory rather than copied to a swap area. On SUN, note that “/tmp” is usually configured as a memory based file system and is not used for swap space. Therefore, it will not be listed with the swap devices, and “swap” or “tmpfs” will not be swap types. This is noted because “df” uses the label “swap” for the “/tmp” file system which may be confusing. See tmpfs(7). On AIX, “Device” swap is accessed directly without going through the file system. For “Device” swap, the device is specially allocated for swapping purpose only. The device can be logical volume, “lv” or remote file system, “remote fs”. The swap is often referred as paging to paging space. FS_BLOCK_SIZE ---------------------------------- The maximum block size of this file system, in bytes. A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. FS_DEVNAME ---------------------------------- On Unix systems, this is the path name string of the current device. On Windows, this is the disk drive string of the current device. On HP-UX, this is the “fsname” parameter in the mount(1M) command. For NFS devices, this includes the name of the node exporting the file system. It is possible that a process may mount a device using the mount(2) system call. This call does not update the “/etc/mnttab” and its name is blank. This situation is rare, and should be corrected by syncer(1M). Note that once a device is mounted, its entry is displayed, even after the device is unmounted, until the midaemon process terminates. On SUN, this is the path name string of the current device, or “tmpfs” for memory based file systems. See tmpfs(7). FS_DEVNO ---------------------------------- On Unix systems, this is the major and minor number of the file system. 
On Windows, this is the unit number of the disk device on which the logical disk resides. The scope collector logs the value of this metric in decimal format. FS_DIRNAME ---------------------------------- On Unix systems, this is the path name of the mount point of the file system. On Windows, this is the drive letter associated with the selected disk partition. On HP-UX, this is the path name of the mount point of the file system if the logical volume has a mounted file system. This is the directory parameter of the mount(1M) command for most entries. Exceptions are: * For lvm swap areas, this field contains “lvm swap device”. * For logical volumes with no mounted file systems, this field contains “Raw Logical Volume” (relevant only to Perf Agent). On HP-UX, the file names are in the same order as shown in the “/usr/sbin/mount -p” command. File systems are not displayed until they exhibit IO activity once the midaemon has been started. Also, once a device is displayed, it continues to be displayed (even after the device is unmounted) until the midaemon process terminates. On SUN, only “UFS”, “HSFS” and “TMPFS” file systems are listed. See mount(1M) and mnttab(4). “TMPFS” file systems are memory based filesystems and are listed here for convenience. See tmpfs(7). On AIX, see mount(1M) and filesystems(4). On OSF1, see mount(2). FS_FRAG_SIZE ---------------------------------- The fundamental file system block size, in bytes. A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. FS_INODE_UTIL ---------------------------------- Percentage of this file system’s inodes in use during the interval. A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. FS_MAX_INODES ---------------------------------- Number of configured file system inodes. A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. FS_MAX_SIZE ---------------------------------- Maximum number that this file system could obtain if full, in MB. Note that this is the user space capacity - it is the file system space accessible to non root users. On most Unix systems, the df command shows the total file system capacity which includes the extra file system space accessible to root users only. The equivalent fields to look at are “used” and “avail”. For the target file system, to calculate the maximum size in MB, use FS Max Size = (used + avail)/1024 A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. On HP-UX, this metric is updated at 4 minute intervals to minimize collection overhead. FS_PHYS_IO_RATE ---------------------------------- The number of physical IOs per second directed to this file system during the interval. FS_PHYS_IO_RATE_CUM ---------------------------------- The average number of physical IOs per second directed to this file system over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. 
On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. FS_PHYS_READ_BYTE_RATE ---------------------------------- The number of physical KBs per second read from this file system during the interval. FS_PHYS_READ_BYTE_RATE_CUM ---------------------------------- The average number of KBs per second of physical reads from this file system over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. FS_PHYS_READ_RATE ---------------------------------- The number of physical reads per second directed to this file system during the interval. On Unix systems, physical reads are generated by user file access, virtual memory access (paging), file system management, or raw device access. FS_PHYS_READ_RATE_CUM ---------------------------------- The average number of physical reads per second directed to this file system over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. 
On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. FS_PHYS_WRITE_BYTE_RATE ---------------------------------- The number of physical KBs per second written to this file system during the interval. FS_PHYS_WRITE_BYTE_RATE_CUM ---------------------------------- The average number of KBs per second of physical writes to this file system over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. FS_PHYS_WRITE_RATE ---------------------------------- The number of physical writes per second directed to this file system during the interval. FS_PHYS_WRITE_RATE_CUM ---------------------------------- The average number of physical writes per second directed to this file system over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. 
Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this.

FS_SPACE_RESERVED
----------------------------------
The amount of file system space in MBs reserved for superuser allocation. On AIX, this metric is typically zero for local filesystems because by default AIX does not reserve any file system space for the superuser.

FS_SPACE_USED
----------------------------------
The amount of file system space in MBs that is being used.

FS_SPACE_UTIL
----------------------------------
Percentage of the file system space in use during the interval. Note that this is the user space capacity - it is the file system space accessible to non-root users. On most Unix systems, the df command shows the total file system capacity, which includes the extra file system space accessible to root users only. A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. On HP-UX, this metric is updated at 4 minute intervals to minimize collection overhead.

FS_TYPE
----------------------------------
A string indicating the file system type.

On Unix systems, some of the possible types are:
hfs   - user file system
ufs   - user file system
ext2  - user file system
cdfs  - CD-ROM file system
vxfs  - Veritas (vxfs) file system
nfs   - network file system
nfs3  - network file system Version 3

On Windows, some of the possible types are:
NTFS  - New Technology File System
FAT   - 16-bit File Allocation Table
FAT32 - 32-bit File Allocation Table

FAT uses a 16-bit file allocation table entry (2^16 clusters). FAT32 uses a 32-bit file allocation table entry. However, Windows 2000 reserves the first 4 bits of a FAT32 file allocation table entry, which means FAT32 has a theoretical maximum of 2^28 clusters. NTFS is the native file system of Windows NT and beyond.

GBL_ACTIVE_CPU
----------------------------------
The number of CPUs online on the system. For HP-UX and certain versions of Linux, the sar(1M) command allows you to check the status of the system CPUs. For SUN and DEC, the commands psrinfo(1M) and psradm(1M) allow you to check or change the status of the system CPUs. For AIX, the pstat(1) command allows you to check the status of the system CPUs. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment if RSET is not configured for the System WPAR. If RSET is configured for the System WPAR, this metric value will report the number of CPUs in the RSET. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone.

GBL_ACTIVE_CPU_CORE
----------------------------------
This metric provides the total number of active CPU cores on a physical system.

GBL_ACTIVE_PROC
----------------------------------
An active process is one that exists and consumes some CPU time.
GBL_ACTIVE_PROC is the sum of the alive-process-time/interval-time ratios of every process that is active (uses any CPU time) during an interval. The following diagram of a four-second interval during which two processes exist on the system should be used to understand the above definition. Note the difference between active processes, which consume CPU time, and alive processes, which merely exist on the system.

        ----------- Seconds -----------
          1         2         3         4
Proc
----    ----      ----      ----      ----
 A      live      live      live      live
 B      live/CPU  live/CPU  live      dead

Process A is alive for the entire four-second interval but consumes no CPU. A’s contribution to GBL_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to GBL_ACTIVE_PROC. B’s contribution to GBL_ALIVE_PROC is 3*1/4. B contributes 2*1/4 to GBL_ACTIVE_PROC. Thus, for this interval, GBL_ACTIVE_PROC equals 0.5 and GBL_ALIVE_PROC equals 1.75. Because a process may be alive but not active, GBL_ACTIVE_PROC will always be less than or equal to GBL_ALIVE_PROC. This metric is a good overall indicator of the workload of the system. An unusually large number of active processes could indicate a CPU bottleneck. To determine if the CPU is a bottleneck, compare this metric with GBL_CPU_TOTAL_UTIL and GBL_RUN_QUEUE. If GBL_CPU_TOTAL_UTIL is near 100 percent and GBL_RUN_QUEUE is greater than one, there is a bottleneck. On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise.

GBL_ALIVE_PROC
----------------------------------
An alive process is one that exists on the system. GBL_ALIVE_PROC is the sum of the alive-process-time/interval-time ratios for every process. The following diagram of a four-second interval during which two processes exist on the system should be used to understand the above definition. Note the difference between active processes, which consume CPU time, and alive processes, which merely exist on the system.

        ----------- Seconds -----------
          1         2         3         4
Proc
----    ----      ----      ----      ----
 A      live      live      live      live
 B      live/CPU  live/CPU  live      dead

Process A is alive for the entire four-second interval but consumes no CPU. A’s contribution to GBL_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to GBL_ACTIVE_PROC. B’s contribution to GBL_ALIVE_PROC is 3*1/4. B contributes 2*1/4 to GBL_ACTIVE_PROC. Thus, for this interval, GBL_ACTIVE_PROC equals 0.5 and GBL_ALIVE_PROC equals 1.75. Because a process may be alive but not active, GBL_ACTIVE_PROC will always be less than or equal to GBL_ALIVE_PROC. On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise.

GBL_BLANK
----------------------------------
A string of blanks.

GBL_BOOT_TIME
----------------------------------
The date and time when the system was last booted.
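For illustration only, the following Python sketch (not part of GlancePlus) shows one way to obtain the kind of value GBL_BOOT_TIME reports on Linux: /proc/stat contains a "btime" line holding the boot time in seconds since the epoch. Other platforms obtain this value differently.

#!/usr/bin/env python3
# Rough sketch: read the Linux boot time from the "btime" line of /proc/stat.
from datetime import datetime

def boot_time():
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("btime"):
                return datetime.fromtimestamp(int(line.split()[1]))
    raise RuntimeError("btime not found in /proc/stat")

if __name__ == "__main__":
    print("system last booted at:", boot_time())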
GBL_COLLECTION_MODE
----------------------------------
This metric reports whether the data collection is running as “root” (super-user) or “non-root” (regular user). Running as non-root results in a loss of functionality which varies across Unix platforms. Running non-root is not available on HP-UX. The value is always “admin” on Windows.

GBL_COLLECTOR
----------------------------------
ASCII field containing the collector name and version. The collector name will appear as either “SCOPE/xx V.UU.FF.LF” or “Coda RV.UU.FF.LF”. xx identifies the platform; V = version, UU = update level, FF = fix level, and LF = lab fix id. For example, SCOPE/UX C.04.00.00; or Coda A.07.10.04.

GBL_COMPLETED_PROC
----------------------------------
The number of processes that terminated during the interval. On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise.

GBL_CPU_CLOCK
----------------------------------
The clock speed of the CPUs in MHz if all of the processors have the same clock speed. Otherwise, “na” is shown if the processors have different clock speeds. Note that Linux supports dynamic frequency scaling and if it is enabled then there can be a change in CPU speed with varying load. (An illustrative sketch of this check appears after GBL_CPU_ENTL_MIN below.)

GBL_CPU_CYCLE_ENTL_MAX
----------------------------------
On a recognized VMware ESX guest, where VMware guest SDK is enabled, this value indicates the maximum processor capacity, in MHz, configured for this logical system. The value is -3 if entitlement is ‘Unlimited’ for this logical system. On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is “na”. On a standalone system, the value is the sum of the clock speeds of the individual CPUs.

GBL_CPU_CYCLE_ENTL_MIN
----------------------------------
On a recognized VMware ESX guest, where VMware guest SDK is enabled, this value indicates the minimum processor capacity, in MHz, configured for this logical system. On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is “na”. On a standalone system, the value is the sum of the clock speeds of the individual CPUs.

GBL_CPU_ENTL_MAX
----------------------------------
In a virtual environment, this metric indicates the maximum number of processing units configured for this logical system. On AIX SPLPAR, this metric is equivalent to the “Maximum Capacity” field of the ‘lparstat -i’ command. On a recognized VMware ESX guest, the value is equivalent to GBL_CPU_CYCLE_ENTL_MAX represented in CPU units. On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is “na”. On a standalone system, the value is the same as GBL_NUM_CPU.

GBL_CPU_ENTL_MIN
----------------------------------
In a virtual environment, this metric indicates the minimum number of processing units configured for this logical system. On AIX SPLPAR, this metric is equivalent to the “Minimum Capacity” field of the ‘lparstat -i’ command. On a recognized VMware ESX guest, where VMware guest SDK is enabled, the value is equivalent to GBL_CPU_CYCLE_ENTL_MIN represented in CPU units. On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is “na”. On a standalone system, the value is the same as GBL_NUM_CPU.
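Referring back to GBL_CPU_CLOCK above, the following Python sketch (for illustration only, not part of GlancePlus) mimics its convention of reporting a single MHz value only when every processor reports the same clock speed, and “na” otherwise. It assumes an x86 Linux /proc/cpuinfo with “cpu MHz” lines; with dynamic frequency scaling enabled the per-core values can differ and change between reads.

#!/usr/bin/env python3
# Rough sketch: a GBL_CPU_CLOCK-style value from /proc/cpuinfo on x86 Linux.

def cpu_clock():
    speeds = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("cpu MHz"):
                speeds.add(round(float(line.split(":")[1])))
    if not speeds:
        return "na"                  # field not present on this architecture
    return speeds.pop() if len(speeds) == 1 else "na"

if __name__ == "__main__":
    print("GBL_CPU_CLOCK-style value:", cpu_clock())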
GBL_CPU_ENTL_UTIL ---------------------------------- Percentage of entitled processing units (guaranteed processing units allocated to this logical system) consumed by the logical system. On AIX, this metric is calculated as: GBL_CPU_ENTL_UTIL = (GBL_CPU_PHYSC / GBL_CPU_ENTL) * 100 On a recognized VMware ESX guest, where VMware guest SDK is enabled, this metric is calculated as: GBL_CPU_ENTL_UTIL = (GBL_CPU_PHYSC / GBL_CPU_ENTL_MIN) * 100 On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is “na”. On a standalone system, the value is same as GBL_CPU_TOTAL_UTIL. GBL_CPU_GUEST_TIME ---------------------------------- The time, in seconds, spent by CPUs to service guests during the interval. Guest time, on Linux KVM hosts, is the time that is spent servicing guests. Xen hosts, as of this release, do not update these counters, neither do other OSes. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_GUEST_TIME_CUM ---------------------------------- The time, in seconds, spent by CPUs to service guests over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. Guest time, on Linux KVM hosts, is the time that is spent servicing guests. Xen hosts, as of this release, do not update these counters, neither do other OSes. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_GUEST_UTIL ---------------------------------- The percentage of time that the CPUs were used to service guests during the interval. Guest time, on Linux KVM hosts, is the time that is spent servicing guests. Xen hosts, as of this release, do not update these counters, neither do other OSes. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. 
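For illustration only, the following Python sketch (not part of GlancePlus, and not the collector's actual method) approximates a guest CPU utilization percentage on a Linux KVM host, in the spirit of GBL_CPU_GUEST_UTIL, from the aggregate "cpu" line of /proc/stat; because that line already aggregates all processors, the result is a percentage of total processing capacity. The 5-second interval is an assumption, and the sketch assumes Linux also accounts guest ticks under user time, so the total sums only the first eight fields to avoid double counting.

#!/usr/bin/env python3
# Rough sketch: approximate guest CPU % on a Linux KVM host from /proc/stat deltas.
import time

def cpu_ticks():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]      # aggregate "cpu" line, label dropped
    values = [int(v) for v in fields]
    total = sum(values[:8])                    # user, nice, system, idle, iowait, irq, softirq, steal
    guest = sum(values[8:10])                  # guest, guest_nice (zero on non-KVM hosts)
    return total, guest

def guest_util(interval=5.0):
    t1, g1 = cpu_ticks()
    time.sleep(interval)
    t2, g2 = cpu_ticks()
    return 100.0 * (g2 - g1) / (t2 - t1) if t2 > t1 else 0.0

if __name__ == "__main__":
    print("approx guest CPU %:", guest_util())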
GBL_CPU_GUEST_UTIL_CUM ---------------------------------- The percentage of time that the CPUs were used to service guests over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. Guest time, on Linux KVM hosts, is the time that is spent servicing guests. Xen hosts, as of this release, do not update these counters, neither do other OSes. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_GUEST_UTIL_HIGH ---------------------------------- The highest percentage of guest CPU time during any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. 
On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_IDLE_TIME ---------------------------------- The time, in seconds, that the CPU was idle during the interval. This is the total idle time, including waiting for I/O (and stolen time on Linux). On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. On AIX System WPARs, this metric value is calculated against physical cpu time. On Solaris non-global zones, this metric is N/A. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_IDLE_TIME_CUM ---------------------------------- The time, in seconds, that the CPU was idle over the cumulative collection time. This is the total idle time, including waiting for I/O (and stolen time on Linux). The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_IDLE_UTIL ---------------------------------- The percentage of time that the CPU was idle during the interval. This is the total idle time, including waiting for I/O (and stolen time on Linux). On Unix systems, this is the same as the sum of the “%idle” and “%wio” fields reported by the “sar -u” command. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. On Solaris non-global zones, this metric is N/A. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_IDLE_UTIL_CUM ---------------------------------- The percentage of time that the CPU was idle over the cumulative collection time. This is the total idle time, including waiting for I/O (and stolen time on Linux). 
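To make the interval idle calculation described above concrete, here is a small sketch that computes an idle percentage (idle plus I/O wait plus steal) from two /proc/stat samples, in the spirit of GBL_CPU_IDLE_UTIL. It is not the agent's code; the field positions assume the standard Linux layout and the sample window is arbitrary.

    import time

    FIELDS = ("user", "nice", "system", "idle", "iowait", "irq", "softirq", "steal")

    def sample():
        with open("/proc/stat") as f:
            vals = [int(v) for v in f.readline().split()[1:9]]
        return dict(zip(FIELDS, vals))

    def idle_util(interval=5.0):
        s1 = sample()
        time.sleep(interval)
        s2 = sample()
        d = {k: s2[k] - s1[k] for k in FIELDS}
        total = sum(d.values())
        # "Total idle" here is idle + iowait + steal, mirroring the description
        # (roughly the %idle + %wio columns of "sar -u", plus %steal).
        idle = d["idle"] + d["iowait"] + d["steal"]
        return 100.0 * idle / total if total else 0.0

    if __name__ == "__main__":
        u = idle_util()
        print("idle: %.1f%%  busy: %.1f%%" % (u, 100.0 - u))  # busy + idle = 100%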
The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_IDLE_UTIL_HIGH ---------------------------------- The highest percentage of time that the CPU was idle during any one interval over the cumulative collection time. This is the total idle time, including waiting for I/O (and stolen time on Linux). The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_INTERRUPT_TIME ---------------------------------- The time, in seconds, that the CPU spent processing interrupts during the interval. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On Hyper-V host, this metric is NA. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_INTERRUPT_TIME_CUM ---------------------------------- The time, in seconds, that the CPU spent processing interrupts over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. 
On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_INTERRUPT_UTIL ---------------------------------- The percentage of time that the CPU spent processing interrupts during the interval. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On Hyper-V host, this metric is NA. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. 
Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_INTERRUPT_UTIL_CUM ---------------------------------- The percentage of time that the CPU spent processing interrupts over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_INTERRUPT_UTIL_HIGH ---------------------------------- The highest percentage of time that the CPU spent processing interrupts during any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. 
On other Unix systems, non-process collection time starts from the start of the performance tool; process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days, and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start, and a message will be logged to indicate this.

On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available.

On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set (false) in the parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off.

On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics.

GBL_CPU_MT_ENABLED
----------------------------------
On AIX, this metric indicates whether SMT is enabled on this (logical) system. On other platforms, this metric shows whether HyperThreading (HT) is enabled, disabled, or not supported. On Linux, this state is dynamic: if HyperThreading is enabled but all the CPUs have only one logical processor enabled, this metric will report that HT is disabled.

On AIX System WPARs, this metric is NA.

On Windows, this metric will be “na” on Windows Server 2003 Itanium systems.

GBL_CPU_NICE_TIME
----------------------------------
The time, in seconds, that the CPU was in user mode at a nice priority during the interval.

On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU time is broken out into the NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19.

On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available.

On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set (false) in the parm file, this metric will report values normalized against the number of threads in the system.
This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_NICE_TIME_CUM ---------------------------------- The time, in seconds, that the CPU was in user mode at a nice priority over the cumulative collection time. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. 
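The effect of the ignore_mt choice described in the preceding paragraphs can be shown with a toy calculation. The numbers below (a 2-core, 4-thread system that accumulated 6 busy CPU-seconds in a 10-second interval) are invented purely for illustration and are not taken from any GlancePlus data.

    # Invented figures: 2 cores with Hyper-Threading (4 logical CPUs),
    # 6 busy CPU-seconds accumulated during a 10-second interval.
    BUSY_CPU_SECONDS = 6.0
    INTERVAL_SECONDS = 10.0
    ACTIVE_CORES     = 2    # denominator when ignore_mt is set (true)
    LOGICAL_CPUS     = 4    # denominator when ignore_mt is not set (false)

    def utilization(capacity_units):
        # Busy time divided by (interval length * number of capacity units).
        return 100.0 * BUSY_CPU_SECONDS / (INTERVAL_SECONDS * capacity_units)

    print("normalized against cores   (ignore_mt set):   %.0f%%" % utilization(ACTIVE_CORES))
    print("normalized against threads (ignore_mt unset): %.0f%%" % utilization(LOGICAL_CPUS))
    # Prints 30% versus 15%: the same workload, with different denominators.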
GBL_CPU_NICE_UTIL ---------------------------------- The percentage of time that the CPU was in user mode at a nice priority during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_NICE_UTIL_CUM ---------------------------------- The percentage of time that the CPU was in user mode at a nice priority over the cumulative collection time. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. 
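As a rough illustration of the "nice" accounting bucket that the NICE metrics report on Linux, the following throwaway script runs a short CPU-bound loop at nice 19 and shows the kernel's nice tick counter advancing. The child command and the 5-second duration are arbitrary choices; this is not how GlancePlus measures the metric.

    import subprocess

    def nice_ticks():
        # The second value after the "cpu" label in /proc/stat is time spent
        # in user mode at reduced (nice) priority, in clock ticks.
        with open("/proc/stat") as f:
            return int(f.readline().split()[2])

    before = nice_ticks()
    # Run a CPU-bound loop for about 5 seconds at nice 19.
    child = subprocess.Popen(
        ["nice", "-n", "19", "python3", "-c",
         "import time; t=time.time()\nwhile time.time()-t < 5: pass"])
    child.wait()
    print("nice ticks accumulated:", nice_ticks() - before)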
On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_NICE_UTIL_HIGH ---------------------------------- The highest percentage of time during any one interval that the CPU was in user mode at a nice priority over the cumulative collection time. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). 
To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics.

GBL_CPU_NUM_THREADS
----------------------------------
The number of active CPU threads supported by the CPU architecture.

The Linux kernel currently does not provide any metadata for disabled CPUs, so there is no way to determine their types, speeds, hardware IDs, or any other information used to derive the number of cores, the number of threads, the HyperThreading state, and so on. If the agent (or Glance) is started while some of the CPUs are disabled, some of these metrics will be “na” and some will be based on what is visible at startup time. All information is updated if and when additional CPUs are enabled and information about them becomes available. The configuration counts remain at the highest discovered level (that is, if CPUs are later disabled, the maximum number of CPUs, cores, and so on stays at the highest observed level). It is recommended that the agent be started with all CPUs enabled.

On AIX System WPARs, this metric is NA.

GBL_CPU_PHYSC
----------------------------------
The number of physical processors utilized by the logical system.

On an Uncapped logical system (partition), this value will be equal to the physical processor capacity used by the logical system during the interval. This can be more than the value entitled for the logical system.

On a standalone system, the value is calculated based on GBL_CPU_TOTAL_UTIL.

GBL_CPU_PHYS_TOTAL_UTIL
----------------------------------
The percentage of time the available physical CPUs were not idle for this logical system during the interval.

On AIX, this metric is calculated as:

GBL_CPU_PHYS_TOTAL_UTIL = GBL_CPU_PHYS_USER_MODE_UTIL + GBL_CPU_PHYS_SYS_MODE_UTIL

GBL_CPU_PHYS_TOTAL_UTIL + GBL_CPU_PHYS_WAIT_UTIL + GBL_CPU_PHYS_IDLE_UTIL = 100%

On POWER5-based systems, traditional sample-based calculations cannot be made because the dispatch cycle of each virtual CPU is not the same. The POWER5 processor therefore maintains a per-thread register, the PURR: at every processor clock cycle, the PURR of the thread that is dispatching instructions (or of the thread that last dispatched an instruction) is incremented, so the cycles are apportioned between the two hardware threads. The POWER5 processor also maintains two further registers: the timebase, which is incremented on every tick, and the decrementer, which provides periodic interrupts. In a Shared LPAR environment, the PURR equals the time that a virtual processor has spent on a physical processor, and the hypervisor maintains a virtual timebase that is the sum of the two PURRs.

On a Capped Shared logical system (partition), the metric GBL_CPU_PHYS_USER_MODE_UTIL is calculated as follows:

(delta PURR in user mode / entitlement) * 100

On an Uncapped Shared logical system (partition):

(delta PURR in user mode / entitlement consumed) * 100

The other physical utilizations, such as GBL_CPU_PHYS_SYS_MODE_UTIL and GBL_CPU_PHYS_WAIT_UTIL, are calculated in the same way.

On a standalone system, the value will be equivalent to GBL_CPU_TOTAL_UTIL.
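To make the capped and uncapped formulas above concrete, here is a worked example with invented numbers. On a real shared-processor LPAR the PURR deltas and entitlement figures would come from the firmware and the partition configuration, not from a script.

    # Hypothetical inputs for one interval, expressed in physical-processor units.
    delta_purr_user  = 0.30   # user-mode physical processor time consumed per second of interval
    entitlement      = 0.50   # entitled processing units (capped partition)
    entitlement_used = 0.75   # processing units actually consumed (uncapped partition)

    # Capped shared partition:   (delta PURR in user mode / entitlement) * 100
    capped_user_util = (delta_purr_user / entitlement) * 100
    # Uncapped shared partition: (delta PURR in user mode / entitlement consumed) * 100
    uncapped_user_util = (delta_purr_user / entitlement_used) * 100

    print("capped   GBL_CPU_PHYS_USER_MODE_UTIL ~ %.0f%%" % capped_user_util)    # 60%
    print("uncapped GBL_CPU_PHYS_USER_MODE_UTIL ~ %.0f%%" % uncapped_user_util)  # 40%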
On AIX System WPARs, this metric value is calculated against physical cpu time.

GBL_CPU_SHARES_PRIO
----------------------------------
The weight (priority) assigned to an Uncapped logical system. This value determines the minimum share of unutilized processing units that this logical system can utilize.

On AIX SPLPAR, this value depends on the available processing units in the pool and can range from 0 to 255.

On a recognized VMware ESX guest, this value can range from 1 to 100000.

On a standalone system, the value will be “na”.

GBL_CPU_STOLEN_TIME
----------------------------------
The time, in seconds, that was stolen from all the CPUs during the interval.

Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’.

On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available.

GBL_CPU_STOLEN_TIME_CUM
----------------------------------
The time, in seconds, that was stolen from all the CPUs over the cumulative collection time.

Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’.

The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last.

On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool; process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this.

On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available.

GBL_CPU_STOLEN_UTIL
----------------------------------
The percentage of time that was stolen from all CPUs during the interval.

Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’.
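As a rough, illustrative counterpart to the ‘%steal’ figure mentioned above, the following snippet reads the steal column of /proc/stat over a short interval. It assumes the standard Linux field order and is not the GlancePlus implementation.

    import time

    def cpu_fields():
        # user nice system idle iowait irq softirq steal
        with open("/proc/stat") as f:
            return [int(v) for v in f.readline().split()[1:9]]

    a = cpu_fields()
    time.sleep(5)
    b = cpu_fields()
    delta = [y - x for x, y in zip(a, b)]
    total = sum(delta)
    steal = delta[7]   # eighth field is stolen time
    print("%%steal over interval: %.1f%%" % (100.0 * steal / total if total else 0.0))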
On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_STOLEN_UTIL_CUM ---------------------------------- The percentage of time that was stolen from all CPUs over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_STOLEN_UTIL_HIGH ---------------------------------- The highest percentage of stolen CPU time during any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. 
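The dictionary does not state where the 497-day limit comes from, but it is consistent with an unsigned 32-bit counter of 10-millisecond clock ticks wrapping around; treating that as an assumption rather than a documented fact, the arithmetic is:

    # Assumption, not taken from the dictionary: an unsigned 32-bit counter of
    # 10 ms clock ticks wraps after roughly 497 days.
    ticks = 2 ** 32           # counter capacity
    tick_seconds = 0.01       # 100 Hz clock tick
    print(ticks * tick_seconds / 86400)   # about 497.1 days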
On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_SYS_MODE_TIME ---------------------------------- The time, in seconds, that the CPU was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On AIX System WPARs, this metric value is calculated against physical cpu time. On Hyper-V host, this metric indicates the time spent in Hypervisor code. GBL_CPU_SYS_MODE_TIME_CUM ---------------------------------- The time, in seconds, that the CPU was in system mode over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. 
When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On AIX System WPARs, this metric value is calculated against physical cpu time. GBL_CPU_SYS_MODE_UTIL ---------------------------------- Percentage of time the CPU was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. This metric is a subset of the GBL_CPU_TOTAL_UTIL percentage. This is NOT a measure of the amount of time used by system daemon processes, since most system daemons spend part of their time in user mode and part in system calls, like any other process. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. 
This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. High system mode CPU percentages are normal for IO intensive applications. Abnormally high system mode CPU percentages can indicate that a hardware problem is causing a high interrupt rate. It can also indicate programs that are not calling system calls efficiently. On a logical system, this metric indicates the percentage of time the logical processor was in kernel mode during this interval. On Hyper-V host, this metric indicates the percentage of time spent in Hypervisor code. GBL_CPU_SYS_MODE_UTIL_CUM ---------------------------------- The percentage of time that the CPU was in system mode over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. 
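A brief demonstration of the user-mode versus system-mode split discussed above: pure computation accrues user time, while a loop of read() system calls accrues mostly system time. The snippet uses Python's os.times() for the current process and is only a teaching aid, unrelated to how the agent gathers the metric.

    import os

    # Burn some user-mode CPU with pure computation.
    x = 0
    for i in range(2_000_000):
        x += i * i

    # Burn some system-mode CPU with repeated system calls.
    with open("/dev/zero", "rb") as f:
        for _ in range(20_000):
            f.read(4096)

    t = os.times()
    print("user-mode seconds:   %.2f" % t.user)
    print("system-mode seconds: %.2f" % t.system)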
On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_SYS_MODE_UTIL_HIGH ---------------------------------- The highest percentage of time during any one interval that the CPU was in system mode over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). 
To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_TOTAL_TIME ---------------------------------- The total time, in seconds, that the CPU was not idle in the interval. This is calculated as GBL_CPU_TOTAL_TIME = GBL_CPU_USER_MODE_TIME + GBL_CPU_SYS_MODE_TIME On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On AIX System WPARs, this metric value is calculated against physical cpu time. GBL_CPU_TOTAL_TIME_CUM ---------------------------------- The total time that the CPU was not idle over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. 
That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On AIX System WPARs, this metric value is calculated against physical cpu time. GBL_CPU_TOTAL_UTIL ---------------------------------- Percentage of time the CPU was not idle during the interval. This is calculated as GBL_CPU_TOTAL_UTIL = GBL_CPU_USER_MODE_UTIL + GBL_CPU_SYS_MODE_UTIL On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_TOTAL_UTIL + GBL_CPU_IDLE_UTIL = 100% This metric varies widely on most systems, depending on the workload. A consistently high CPU utilization can indicate a CPU bottleneck, especially when other indicators such as GBL_RUN_QUEUE and GBL_ACTIVE_PROC are also high. High CPU utilization can also occur on systems that are bottlenecked on memory, because the CPU spends more time paging and swapping. NOTE: On Windows, this metric may not equal the sum of the APP_CPU_TOTAL_UTIL metrics. Microsoft states that “this is expected behavior” because this GBL_CPU_TOTAL_UTIL metric is taken from the performance library Processor objects while the APP_CPU_TOTAL_UTIL metrics are taken from the Process objects. Microsoft states that there can be CPU time accounted for in the Processor system objects that may not be seen in the Process objects. On a logical system, this metric indicates the logical utilization with respect to number of processors available for the logical system (GBL_NUM_CPU). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. 
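For illustration, on Linux the user, system, and idle tick counts that underlie a GBL_CPU_TOTAL_UTIL-style calculation are visible in the first line of /proc/stat. The following sketch is illustrative only; it is not the GlancePlus collector's implementation, and the 5-second interval is arbitrary:

    # Illustrative only: derive an interval CPU utilization in the spirit of
    #   GBL_CPU_TOTAL_UTIL = GBL_CPU_USER_MODE_UTIL + GBL_CPU_SYS_MODE_UTIL
    # from two /proc/stat samples. The collector's own accounting may differ.
    import time

    def cpu_ticks():
        # Aggregate line: "cpu user nice system idle iowait irq softirq steal ..."
        with open("/proc/stat") as f:
            fields = [int(v) for v in f.readline().split()[1:]]
        busy = sum(fields[:3])            # user + nice + system ticks
        total = sum(fields[:8])           # busy + idle + iowait + irq + softirq + steal
        return busy, total

    b1, t1 = cpu_ticks()
    time.sleep(5)                         # arbitrary measurement interval
    b2, t2 = cpu_ticks()
    print("GBL_CPU_TOTAL_UTIL-style value: %.1f%%" % (100.0 * (b2 - b1) / (t2 - t1)))

Because the kernel's aggregate cpu line already sums all processors, dividing by the total ticks yields a value normalized across the online processors, in the sense described above.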
Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_TOTAL_UTIL_CUM ---------------------------------- The percentage of total CPU time that the processor was not idle over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_TOTAL_UTIL_HIGH ---------------------------------- The highest percentage of total CPU time during any one interval that the processor was not idle over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. 
On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_USER_MODE_TIME ---------------------------------- The time, in seconds, that the CPU was in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. 
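The distinction between active cores and threads that the ignore_mt flag controls can be illustrated on Linux by comparing the logical CPU count with the distinct (package, core) pairs reported under sysfs. The sketch below is illustrative only; the sysfs topology files are standard Linux interfaces, but the agent's own core detection may differ:

    # Illustrative only: count logical CPUs vs. physical cores on Linux.
    # With ignore_mt set (true), utilization is normalized against cores;
    # otherwise it is normalized against logical CPUs (threads).
    import glob, os

    logical = len(glob.glob("/sys/devices/system/cpu/cpu[0-9]*"))

    cores = set()
    for topo in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology"):
        with open(os.path.join(topo, "core_id")) as f:
            core = f.read().strip()
        with open(os.path.join(topo, "physical_package_id")) as f:
            pkg = f.read().strip()
        cores.add((pkg, core))            # a core is unique per (package, core_id)

    print("logical CPUs (threads):", logical)
    print("physical cores        :", len(cores))

The difference between dividing busy time by the logical CPU count and dividing it by the core count is exactly the difference the ignore_mt setting makes to the normalization denominator.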
Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On AIX System WPARs, this metric value is calculated against physical cpu time. On Hyper-V host, this metric indicates the time spent in guest code. GBL_CPU_USER_MODE_TIME_CUM ---------------------------------- The time, in seconds, that the CPU was in user mode over the cumulative collection time. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On AIX System WPARs, this metric value is calculated against physical cpu time. GBL_CPU_USER_MODE_UTIL ---------------------------------- The percentage of time the CPU was in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. This metric is a subset of the GBL_CPU_TOTAL_UTIL percentage. On a system with multiple CPUs, this metric is normalized. 
That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set (false) in the parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. High user mode CPU percentages are normal for computation-intensive applications. Low values of user CPU utilization compared to relatively high values for GBL_CPU_SYS_MODE_UTIL can indicate an application or hardware problem. On a logical system, this metric indicates the percentage of time the logical processor was in user mode during this interval. On a Hyper-V host, this metric indicates the percentage of time spent in guest code. GBL_CPU_USER_MODE_UTIL_CUM ---------------------------------- The percentage of time that the CPU was in user mode over the cumulative collection time. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool; process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm file, this metric will report values normalized against the number of active cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_USER_MODE_UTIL_HIGH ---------------------------------- The highest percentage of time during any one interval that the CPU was in user mode over the cumulative collection time. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool; process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set (false) in the parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup.
Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. GBL_CPU_WAIT_TIME ---------------------------------- The time, in seconds, that the CPU was idle and there were processes waiting for physical IOs to complete during the interval. IO wait time is included in idle time on all systems. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On AIX System WPARs, this metric value is calculated against physical cpu time. On Solaris non-global zones, this metric is N/A. On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set (false) in the parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On Linux, wait time includes CPU steal time. GBL_CPU_WAIT_TIME_CUM ---------------------------------- The total time since the beginning of measurement, in seconds, that the CPU was idle and there were processes waiting for physical IOs to complete. IO wait time is included in idle time on all systems. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool; process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available.
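For illustration, on Linux the wait component described above corresponds to the kernel's iowait accounting, the fifth value on the aggregate cpu line of /proc/stat. A minimal sketch of an interval wait percentage follows (illustrative only; the GlancePlus convention of folding steal time into wait on Linux is not reproduced here, and the 5-second interval is arbitrary):

    # Illustrative only: percentage of an interval spent in iowait,
    # the basis of a GBL_CPU_WAIT_UTIL-style figure on Linux.
    import time

    def stat_fields():
        with open("/proc/stat") as f:
            return [int(v) for v in f.readline().split()[1:]]

    a = stat_fields()
    time.sleep(5)                      # arbitrary measurement interval
    b = stat_fields()

    delta = [y - x for x, y in zip(a, b)]
    iowait = delta[4]                  # user nice system idle IOWAIT irq softirq steal
    total = sum(delta[:8])
    print("CPU wait (iowait) util: %.1f%%" % (100.0 * iowait / total))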
On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On Linux, wait time includes CPU steal time. GBL_CPU_WAIT_UTIL ---------------------------------- The percentage of time during the interval that the CPU was idle and there were processes waiting for physical IOs to complete. IO wait time is included in idle time on all systems. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On Solaris non-global zones, this metric is N/A. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On Linux, wait time includes CPU steal time. GBL_CPU_WAIT_UTIL_CUM ---------------------------------- The percentage of time since the beginning of measurement that the CPU was idle and there were processes waiting for physical IOs to complete. IO wait time is included in idle time on all systems. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On Linux, wait time includes CPU steal time. GBL_CPU_WAIT_UTIL_HIGH ---------------------------------- The highest percentage of CPU wait time during any one interval over the cumulative collection time. IO wait time is included in idle time on all systems. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. 
On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. On Linux, wait time includes CPU steal time. GBL_CSWITCH_RATE ---------------------------------- The average number of context switches per second during the interval. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. On Windows, this includes switches from one thread to another either inside a single process or across processes. A thread switch can be caused either by one thread asking another for information or by a thread being preempted by another higher priority thread becoming ready to run. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_CSWITCH_RATE_CUM ---------------------------------- The average number of context switches per second over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. GBL_CSWITCH_RATE_HIGH ---------------------------------- The highest number of context switches per second during any interval over the cumulative collection time. 
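For illustration, the system-wide counter behind a GBL_CSWITCH_RATE-style value on Linux is the ctxt line of /proc/stat. A minimal sketch of a context-switch rate over an interval (illustrative only, not the collector's implementation; the 5-second interval is arbitrary):

    # Illustrative only: system-wide context switches per second on Linux.
    import time

    def ctxt():
        with open("/proc/stat") as f:
            for line in f:
                if line.startswith("ctxt "):
                    return int(line.split()[1])
        raise RuntimeError("no ctxt line in /proc/stat")

    c1 = ctxt()
    time.sleep(5)                      # arbitrary measurement interval
    c2 = ctxt()
    print("context switches/sec: %.0f" % ((c2 - c1) / 5.0))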
The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool; process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. GBL_DISK_PHYS_BYTE ---------------------------------- The number of KBs transferred to and from disks during the interval. The bytes for all types of physical IOs are counted. Only local disks are counted in this measurement. NFS devices are excluded. It is not directly related to the number of IOs, since IO requests can be of differing lengths. On Unix systems, this includes file system IO, virtual memory IO, and raw IO. On Windows, all types of physical IOs are counted. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_BYTE_RATE ---------------------------------- The average number of KBs per second at which data was transferred to and from disks during the interval. The bytes for all types of physical IOs are counted. Only local disks are counted in this measurement. NFS devices are excluded. This is a measure of the physical data transfer rate. It is not directly related to the number of IOs, since IO requests can be of differing lengths. This is an indicator of how much data is being transferred to and from disk devices. Large spikes in this metric can indicate a disk bottleneck. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IO. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_IO ---------------------------------- The number of physical IOs during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk IOs are counted, including file system IO, virtual memory IO and raw IO. On HP-UX, this is calculated as GBL_DISK_PHYS_IO = GBL_DISK_FS_IO + GBL_DISK_VM_IO + GBL_DISK_SYSTEM_IO + GBL_DISK_RAW_IO On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_IO_CUM ---------------------------------- The total number of physical IOs over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_IO_RATE ---------------------------------- The number of physical IOs per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk IOs are counted, including file system IO, virtual memory IO and raw IO. On HP-UX, this is calculated as GBL_DISK_PHYS_IO_RATE = GBL_DISK_FS_IO_RATE + GBL_DISK_VM_IO_RATE + GBL_DISK_SYSTEM_IO_RATE + GBL_DISK_RAW_IO_RATE On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. 
If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_IO_RATE_CUM ---------------------------------- The number of physical IOs per second over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_READ ---------------------------------- The number of physical reads during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On HP-UX, there are many reasons why there is not a direct correlation between the number of logical IOs and physical IOs. For example, small sequential logical reads may be satisfied from the buffer cache, resulting in fewer physical IOs than logical IOs. Conversely, large logical IOs or small random IOs may result in more physical than logical IOs. Logical volume mappings, logical disk mirroring, and disk striping also tend to remove any correlation. On HP-UX, this is calculated as GBL_DISK_PHYS_READ = GBL_DISK_FS_READ + GBL_DISK_VM_READ + GBL_DISK_SYSTEM_READ + GBL_DISK_RAW_READ On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_READ_BYTE ---------------------------------- The number of KBs physically transferred from the disk during the interval. 
Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_READ_BYTE_CUM ---------------------------------- The number of KBs (or MBs if specified) physically transferred from the disk over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_READ_BYTE_RATE ---------------------------------- The average number of KBs transferred from the disk per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_READ_CUM ---------------------------------- The total number of physical reads over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. 
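For illustration, on Linux the per-device counters comparable to the physical read metrics above are exposed in /proc/diskstats (reads completed and sectors read, with 512-byte sectors). The sketch below computes an interval read rate and read KB rate; the whole-disk name filter (sd*, vd*, nvme*n*) is an assumption made for the example, and the agent's own device selection, which excludes NFS and other non-local devices, may differ:

    # Illustrative only: per-interval physical read rate and read byte rate
    # from /proc/diskstats (major minor name reads_completed reads_merged
    # sectors_read ...); sectors are 512 bytes.
    import re, time

    DISK = re.compile(r"^(sd[a-z]+|vd[a-z]+|nvme\d+n\d+)$")   # assumed whole-disk names

    def read_counters():
        reads, sectors = 0, 0
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if DISK.match(fields[2]):
                    reads += int(fields[3])       # reads completed
                    sectors += int(fields[5])     # sectors read
        return reads, sectors

    r1, s1 = read_counters()
    time.sleep(5)                                 # arbitrary measurement interval
    r2, s2 = read_counters()
    print("physical reads/sec   : %.1f" % ((r2 - r1) / 5.0))
    print("physical read KB/sec : %.1f" % ((s2 - s1) * 512 / 1024.0 / 5.0))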
The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_READ_PCT ---------------------------------- The percentage of physical reads of total physical IO during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_READ_PCT_CUM ---------------------------------- The percentage of physical reads of total physical IO over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_READ_RATE ---------------------------------- The number of physical reads per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On HP-UX, this is calculated as GBL_DISK_PHYS_READ_RATE = GBL_DISK_FS_READ_RATE + GBL_DISK_VM_READ_RATE + GBL_DISK_SYSTEM_READ_RATE + GBL_DISK_RAW_READ_RATE On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_READ_RATE_CUM ---------------------------------- The average number of physical reads per second over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. 
This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_WRITE ---------------------------------- The number of physical writes during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On HP-UX, there are many reasons why there is not a direct correlation between logical IOs and physical IOs. For example, small logical writes may end up entirely in the buffer cache, and later generate fewer physical IOs when written to disk due to the larger IO size. Or conversely, small logical writes may require physical prefetching of the corresponding disk blocks before the data is merged and posted to disk. Logical volume mappings, logical disk mirroring, and disk striping also tend to remove any correlation. On HP-UX, this is calculated as GBL_DISK_PHYS_WRITE = GBL_DISK_FS_WRITE + GBL_DISK_VM_WRITE + GBL_DISK_SYSTEM_WRITE + GBL_DISK_RAW_WRITE On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_WRITE_BYTE ---------------------------------- The number of KBs (or MBs if specified) physically transferred to the disk during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_WRITE_BYTE_CUM ---------------------------------- The number of KBs (or MBs if specified) physically transferred to the disk over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_WRITE_BYTE_RATE ---------------------------------- The average number of KBs transferred to the disk per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_WRITE_CUM ---------------------------------- The total number of physical writes over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. 
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_WRITE_PCT ---------------------------------- The percentage of physical writes of total physical IO during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. GBL_DISK_PHYS_WRITE_PCT_CUM ---------------------------------- The percentage of physical writes of total physical IO over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_WRITE_RATE ---------------------------------- The number of physical writes per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. 
On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On HP-UX, this is calculated as GBL_DISK_PHYS_WRITE_RATE = GBL_DISK_FS_WRITE_RATE + GBL_DISK_VM_WRITE_RATE + GBL_DISK_SYSTEM_WRITE_RATE + GBL_DISK_RAW_WRITE_RATE On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_PHYS_WRITE_RATE_CUM ---------------------------------- The number of physical writes per second over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_REQUEST_QUEUE ---------------------------------- The total length of all of the disk queues at the end of the interval. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. 
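On Linux kernels that do provide this instrumentation, a rough analogue of the metric can be read directly from /proc/diskstats, whose ninth statistics column is the number of I/Os currently in progress for each device. The following sketch is for illustration only and is not how the GlancePlus collector obtains the value; the set of disk names passed in (for example "sda") is an assumption the caller must supply.

    # Sketch (not the GlancePlus instrumentation): approximate the total
    # outstanding disk request count on Linux by summing the
    # "I/Os currently in progress" column of /proc/diskstats for a
    # caller-supplied set of physical disks, e.g. {"sda", "sdb"}.
    def total_inflight_requests(disks):
        total = 0
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] in disks:
                    # Field index 11 of the split line is "I/Os currently in progress".
                    total += int(fields[11])
        return total

    print(total_inflight_requests({"sda"}))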
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_SUBSYSTEM_QUEUE ---------------------------------- The average number of processes or kernel threads blocked on the disk subsystem (in a “queue” waiting for their file system disk IO to complete) during the interval. On HP-UX, this is based on the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing raw IO to a disk are not included in this measurement. On Linux, this is based on the sum of all processes or kernel threads blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. This is calculated as the accumulated time mentioned above divided by the interval time. As this number rises, it is an indication of a disk bottleneck. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. GBL_DISK_SUBSYSTEM_WAIT_PCT ---------------------------------- The percentage of time processes or kernel threads were blocked on the disk subsystem (waiting for their file system IOs to complete) during the interval. On HP-UX, this is based on the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing raw IO to a disk are not included in this measurement. On Linux, this is based on the sum of all processes or kernel threads blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. This is calculated as the accumulated time mentioned above divided by the accumulated time that all processes or kernel threads were alive during the interval. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. 
The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. GBL_DISK_SUBSYSTEM_WAIT_TIME ---------------------------------- On HP-UX, the accumulated time, in seconds, that all processes or kernel threads were blocked on the disk subsystem (waiting for their file system IOs to complete) during the interval. This is the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. On Linux, the accumulated time, in seconds, that all processes or kernel threads were blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. GBL_DISK_TIME_PEAK ---------------------------------- The time, in seconds, during the interval that the busiest disk was performing IO transfers. This is for the busiest disk only, not all disk devices. This counter is based on an end-to-end measurement for each IO transfer updated at queue entry and exit points. Only local disks are counted in this measurement. NFS devices are excluded. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_UTIL ---------------------------------- On HP-UX, this is the average percentage of time during the interval that all disks had IO in progress from the point of view of the Operating System. This is the average utilization for all disks. On all other Unix systems, this is the average percentage of disk in use time of the total interval (that is, the average utilization). Only local disks are counted in this measurement. NFS devices are excluded. GBL_DISK_UTIL_PEAK ---------------------------------- The utilization of the busiest disk during the interval. On HP-UX, this is the percentage of time during the interval that the busiest disk device had IO in progress from the point of view of the Operating System. On all other systems, this is the percentage of time during the interval that the busiest disk was performing IO transfers. It is not an average utilization over all the disk devices. Only local disks are counted in this measurement. NFS devices are excluded. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. A peak disk utilization of more than 50 percent often indicates a disk IO subsystem bottleneck situation. 
A bottleneck may not be in the physical disk drive itself, but elsewhere in the IO path. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_DISK_UTIL_PEAK_CUM ---------------------------------- The average utilization of the busiest disk in each interval over the cumulative collection time. Utilization is the percentage of time in use versus the time in the measurement interval. For each interval a different disk may be the busiest. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. GBL_DISK_UTIL_PEAK_HIGH ---------------------------------- The highest utilization of any disk during any interval over the cumulative collection time. Utilization is the percentage of time in use versus the time in the measurement interval. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. 
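For illustration, an approximation of peak disk utilization can be derived on Linux by sampling the "time spent doing I/Os" column of /proc/diskstats at the start and end of an interval and taking the busiest device. This sketch is not the GlancePlus measurement path, and the disk names are caller-supplied assumptions.

    import time

    # Sketch: approximate per-disk utilization over an interval from the
    # cumulative "time spent doing I/Os" (milliseconds) column of
    # /proc/diskstats, then report the busiest disk, similar in spirit to
    # GBL_DISK_UTIL_PEAK. Illustrative only.
    def read_io_ms(disks):
        ms = {}
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] in disks:
                    ms[fields[2]] = int(fields[12])   # ms spent doing I/Os
        return ms

    def peak_disk_util(disks, interval=5.0):
        before = read_io_ms(disks)
        time.sleep(interval)
        after = read_io_ms(disks)
        utils = {d: 100.0 * (after[d] - before[d]) / (interval * 1000.0)
                 for d in disks}
        busiest = max(utils, key=utils.get)
        return busiest, utils[busiest]

    print(peak_disk_util({"sda", "sdb"}))

Because the counter is a cumulative busy time, a value approaching 100 percent for the busiest device over the interval corresponds to the bottleneck situation described above.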
GBL_DISTRIBUTION ---------------------------------- The software distribution, if available. GBL_FS_SPACE_UTIL_PEAK ---------------------------------- The percentage of occupied disk space to total disk space for the fullest file system found during the interval. Only locally mounted file systems are counted in this metric. This metric can be used as an indicator that at least one file system on the system is running out of disk space. On Unix systems, CDROM and PC file systems are also excluded. This metric can exceed 100 percent. This is because a portion of the file system space is reserved as a buffer and can only be used by root. If the root user has made the file system grow beyond the reserved buffer, the utilization will be greater than 100 percent. This is a dangerous situation since if the root user totally fills the file system, the system may crash. On Windows, CDROM file systems are also excluded. On Solaris non-global zones, this metric shows data from the global zone. GBL_GMTOFFSET ---------------------------------- The difference, in minutes, between local time and GMT (Greenwich Mean Time). GBL_IGNORE_MT ---------------------------------- This boolean value indicates whether the CPU normalization is on or off. If the metric value is “true”, CPU related metrics in the global class will report values which are normalized against the number of active cores on the system. If the metric value is “false”, CPU related metrics in the global class will report values which are normalized against the number of CPU threads on the system. If CPU MultiThreading is turned off this configuration option is a no-op and the metric value will be “true”. On Linux, this metric will only report “true” if this configuration is on and if the kernel provides enough information to determine whether MultiThreading is turned on. On HPUX, this metric will report “na” if the processor doesn’t support the feature. GBL_INTERRUPT ---------------------------------- The number of IO interrupts during the interval. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_INTERRUPT_RATE ---------------------------------- The average number of IO interrupts per second during the interval. On HPUX and SUN this value includes clock interrupts. To get non-clock device interrupts, subtract clock interrupts from the value. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. GBL_INTERRUPT_RATE_CUM ---------------------------------- The average number of IO interrupts per second over the cumulative collection time. On HPUX and SUN this value includes clock interrupts. To get non-clock device interrupts, subtract clock interrupts from the value. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. GBL_INTERRUPT_RATE_HIGH ---------------------------------- The highest number of IO interrupts per second during any one interval over the cumulative collection time. On HPUX and SUN this value includes clock interrupts. To get non-clock device interrupts, subtract clock interrupts from the value. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. GBL_INTERVAL ---------------------------------- The amount of time in the interval. This measured interval is slightly larger than the desired or configured interval if the collection program is delayed by a higher priority process and cannot sample the data immediately. GBL_INTERVAL_CUM ---------------------------------- The amount of time over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. GBL_JAVAARG ---------------------------------- This boolean value indicates whether the java class overloading mechanism is enabled or not. This metric will be set when the javaarg flag in the parm file is set. The metric affected by this setting is PROC_PROC_ARGV1. This setting is useful to construct parm file java application definitions using the argv1= keyword. GBL_LOADAVG ---------------------------------- The 1 minute load average of the system obtained at the time of logging. On windows this is the load average of the system over the interval. Load average on windows is the average number of threads that have been waiting in ready state during the interval. This is obtained by checking the number of threads in ready state every sub proc interval, accumulating them over the interval and averaging over the interval. On Solaris non-global zones, this metric shows data from the global zone. GBL_LOADAVG15 ---------------------------------- The 15 minute load average of the system obtained at the time of logging. GBL_LOADAVG5 ---------------------------------- The 5 minute load average of the system obtained at the time of logging. On Solaris non-global zones, this metric shows data from the global zone. GBL_LOADAVG_CUM ---------------------------------- The average load average of the system over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. GBL_LOADAVG_HIGH ---------------------------------- The highest value of the load average during any interval over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool; process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. GBL_LOST_MI_TRACE_BUFFERS ---------------------------------- The number of trace buffers lost by the measurement processing daemon. On HP-UX systems, if this value is > 0, the measurement subsystem is not keeping up with the system events that generate traces. For other Unix systems, if this value is > 0, the measurement subsystem is not keeping up with the ARM API calls that generate traces. Note: The value reported for this metric will roll over to 0 once it crosses INTMAX. GBL_LS_MODE ---------------------------------- Indicates whether the CPU entitlement for the logical system is Capped or Uncapped. On a recognized VMware ESX guest where the VMware guest SDK is enabled, the value is “Uncapped” if the maximum CPU entitlement (GBL_CPU_ENTL_MAX) is unlimited. Otherwise, the value is always “Capped”. GBL_LS_ROLE ---------------------------------- Indicates whether Perf Agent is installed on a logical system, a host, or a standalone system. This metric will be either “GUEST”, “HOST” or “STAND”. GBL_LS_SHARED ---------------------------------- In a virtual environment, this metric indicates whether the physical CPUs are dedicated to this logical system or shared. On AIX SPLPAR, this metric is equivalent to the “Type” field of the ‘lparstat -i’ command. On a recognized VMware ESX guest where the VMware guest SDK is enabled, the value is “Shared”. On a standalone system, the value of this metric is “Dedicated”. On AIX System WPARs, this metric is NA. GBL_LS_TYPE ---------------------------------- The virtualization technology, if applicable. The value of this metric is “HPVM” on an HP-UX host, “LPAR” on an AIX LPAR, “Sys WPAR” on a system WPAR, “Zone” on Solaris Zones, “VMware” on a recognized VMware ESX guest and the VMware ESX Server console, “Hyper-V” on a Hyper-V host, and “NoVM” otherwise. In conjunction with GBL_LS_ROLE, this metric can be used to identify the environment in which Perf Agent/Glance is running. For example, if GBL_LS_ROLE is “Guest” and GBL_LS_TYPE is “VMware”, then PA/Glance is running on a VMware guest. GBL_MACHINE ---------------------------------- An ASCII string representing the processor architecture. The machine hardware model is reported by the GBL_MACHINE_MODEL metric.
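As an informal illustration of that distinction on Linux, the architecture string can be taken from uname while a more descriptive model string is usually available in /proc/cpuinfo. This sketch assumes an x86-style /proc/cpuinfo containing a "model name" line and is not the interface GlancePlus itself uses.

    import platform

    # Sketch: the architecture string (comparable in spirit to GBL_MACHINE)
    # versus a more detailed CPU model string (comparable to GBL_MACHINE_MODEL).
    # Assumes an x86-style Linux /proc/cpuinfo with a "model name" line;
    # GlancePlus may obtain these values differently.
    def cpu_model():
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("model name"):
                    return line.split(":", 1)[1].strip()
        return "unknown"

    print("architecture:", platform.machine())   # e.g. x86_64
    print("model       :", cpu_model())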
GBL_MACHINE_MEM_USED ---------------------------------- The amount of physical host memory currently consumed for this logical system’s physical memory. On a standalone system, the value will be (GBL_MEM_UTIL * GBL_MEM_PHYS) / 100 GBL_MACHINE_MODEL ---------------------------------- The CPU model. This is similar to the information returned by the GBL_MACHINE metric and the uname command(except for Solaris 10 x86/x86_64). However, this metric returns more information on some processors. On HP-UX, this is the same information returned by the model command. GBL_MEM_AVAIL ---------------------------------- The amount of physical available memory in the system (in MBs unless otherwise specified). On Windows, memory resident operating system code and data is not included as available memory. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_CACHE ---------------------------------- The amount of physical memory (in MBs unless otherwise specified) used by the buffer cache during the interval. On HP-UX 11i v2 and below, the buffer cache is a memory pool used by the system to stage disk IO data for the driver. On HP-UX 11i v3 and above this metric value represents the usage of the file system buffer cache which is still being used for file system metadata. On SUN, this value is obtained by multiplying the system page size times the number of buffer headers (nbuf). For example, on a SPARCstation 10 the buffer size is usually (200 (page size buffers) * 4096 (bytes/page) = 800 KB). On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The “nbuf” value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On AIX, this value should be minimal since most disk IOs are done through memory mapped files. GBL_MEM_CACHE_UTIL ---------------------------------- The percentage of physical memory used by the buffer cache during the interval. On HP-UX 11i v2 and below, the buffer cache is a memory pool used by the system to stage disk IO data for the driver. On HP-UX 11i v3 and above this metric value represents the usage of the file system buffer cache which is still being used for file system metadata. On SUN, this percentage is based on calculating the buffer cache size by multiplying the system page size times the number of buffer headers (nbuf). For example, on a SPARCstation 10 the buffer size is usually (200 (page size buffers) * 4096 (bytes/page) = 800 KB). On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. 
The “nbuf” value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On AIX, this value should be minimal since most disk IOs are done through memory mapped files. On Windows, the value reports ‘copy read hit %’ and ‘Pin read hit %’. GBL_MEM_ENTL_MAX ---------------------------------- In a virtual environment, this metric indicates the maximum amount of memory configured for this logical system. The value is -3 if entitlement is ‘Unlimited’ for this logical system. On a recognized VMware ESX guest where the VMware guest SDK is disabled, the value is “na”. On Solaris non-global zones, this metric value is equivalent to the ‘capped-memory’ value from the ‘zonecfg -z zonename info’ command. On a standalone system, this metric is equivalent to GBL_MEM_PHYS. GBL_MEM_ENTL_MIN ---------------------------------- In a virtual environment, this metric indicates the minimum amount of memory configured for this logical system. On a recognized VMware ESX guest where the VMware guest SDK is disabled, the value is “na”. On a standalone system, this metric is equivalent to GBL_MEM_PHYS. GBL_MEM_FILE_PAGEIN_RATE ---------------------------------- The number of page ins from the file system per second during the interval. On Solaris, this is the same as the “fpi” value from the “vmstat -p” command, divided by the page size in KB. On Linux, the value is reported in kilobytes and matches the ‘io/bi’ values from vmstat. On Solaris non-global zones with Uncapped Memory scenario, this metric value is the same as seen in the global zone. GBL_MEM_FILE_PAGEOUT_RATE ---------------------------------- The number of page outs to the file system per second during the interval. On Solaris, this is the same as the “fpo” value from the “vmstat -p” command, divided by the page size in KB. On Linux, the value is reported in kilobytes and matches the ‘io/bo’ values from vmstat. On Solaris non-global zones with Uncapped Memory scenario, this metric value is the same as seen in the global zone. GBL_MEM_FILE_PAGE_CACHE ---------------------------------- The amount of physical memory (in MBs unless otherwise specified) used by the file cache during the interval. File cache is a memory pool used by the system to stage disk IO data for the driver. This metric is supported on HP-UX 11iv3 and above. The filecache_min and filecache_max tunables control the filecache memory usage on the system. The filecache_min tunable specifies the amount of physical memory that is guaranteed to be available for the filecache on the system. The filecache memory usage can grow beyond filecache_min, up to the limit set by the filecache_max tunable. The Virtual Memory (VM) subsystem always pre-reserves pages equal to the ‘filecache_min’ tunable value for the filecache, even when the filecache is underutilized (actual filecache utilization < filecache_min value). This memory reserved by the VM is not available to the user. In this scenario, this metric will show ‘filecache_min’ as the filecache value, rather than the actual filecache utilization. On Linux, this metric is equal to the ‘cached’ value of the ‘free -m’ command output. GBL_MEM_FILE_PAGE_CACHE_UTIL ---------------------------------- The percentage of physical memory used by the file cache during the interval. File cache is a memory pool used by the system to stage disk IO data for the driver.
This metric is supported on HP-UX 11iv3 and above. The filecache_min and filecache_max tunables control the filecache memory usage on the system. The filecache_min tunable specifies the amount of physical memory that is guaranteed to be available for filecache on the system. The filecache memory usage can grow beyond filecache_min, up to the limit set by the filecache_max tunable. The Virtual Memory(VM) subsystem always pre reserves ‘filecache_min’ tunable value worth of pages on the system for filecache, even in the case of filecache under utilization (actual filecache utilization < filecache_min value). This preserved memory by the VM is not available for the user. In this scenario, this metric will show the ‘filecache_min’ as the filecache value, rather than showing the actual filecache utilization. On Linux, this metric is derived from ‘cached’ value of ‘free -m’ command output. GBL_MEM_FREE ---------------------------------- The amount of memory not allocated (in MBs unless otherwise specified). As this value drops, the likelihood increases that swapping or paging out to disk may occur to satisfy new memory requests. On SUN, low values for this metric may not indicate a true memory shortage. This metric can be influenced by the VMM (Virtual Memory Management) system. On uncapped solaris zones, the metric indicates the amount of memory that is available across the whole system that is not consumed by the global zone and other non-global zones. In case of capped solaris zones, the metric indicates the amount of memory that is not consumed by this zone against the memory cap set. On Linux, this metric is sum of ‘free’ and ‘cached’ memory. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. Locality Domain metrics are available on HP-UX 11iv2 and above. GBL_MEM_FREE and LDOM_MEM_FREE, as well as the memory utilization metrics derived from them, may not always fully match. GBL_MEM_FREE represents free memory in the kernel’s reservation layer while LDOM_MEM_FREE shows actual free pages. If memory has been reserved but not actually consumed from the Locality Domains, the two values won’t match. Because GBL_MEM_FREE includes pre-reserved memory, the GBL_MEM_* metrics are a better indicator of actual memory consumption in most situations. GBL_MEM_FREE_UTIL ---------------------------------- The percentage of physical memory that was free at the end of the interval. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_OVERHEAD ---------------------------------- The amount of “overhead” memory associated with this logical system that is currently consumed on the host system. On VMware ESX Server console, the value is equivalent to sum of the current overhead memory for all running virtual machines On a standalone system, the value will be 0. On a recognized VMware ESX guest, where VMware guest SDK is disabled, the value is “na”. GBL_MEM_PAGEIN ---------------------------------- The total number of page ins from the disk during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On HP-UX, this is the same as the “page ins” value from the “vmstat -s” command. On AIX, this is the same as the “paging space page ins” value. Remember that “vmstat -s” reports cumulative counts. 
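As an aside for Linux, the distinction between paging-space activity and file system paging is visible in /proc/vmstat, which reports swap traffic (pswpin/pswpout, counted in pages) separately from block-device paging (pgpgin/pgpgout, counted in KB). The sketch below simply reads those cumulative counters; it is illustrative only and is not the GlancePlus instrumentation.

    # Sketch (not the GlancePlus collector): read the cumulative paging
    # counters from /proc/vmstat. The paging-space numbers (pswpin/pswpout)
    # correspond to the kind of activity GBL_MEM_PAGEIN counts on Linux,
    # while pgpgin/pgpgout also cover file system paging.
    def vmstat_counters(keys=("pswpin", "pswpout", "pgpgin", "pgpgout")):
        counters = {}
        with open("/proc/vmstat") as f:
            for line in f:
                name, value = line.split()
                if name in keys:
                    counters[name] = int(value)
        return counters

    print(vmstat_counters())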
On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PAGEIN_BYTE ---------------------------------- The number of KBs (or MBs if specified) of page ins during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_BYTE_CUM ---------------------------------- The number of KBs (or MBs if specified) of page ins over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_BYTE_RATE ---------------------------------- The number of KBs per second of page ins during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_BYTE_RATE_CUM ---------------------------------- The average number of KBs per second of page ins over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_BYTE_RATE_HIGH ---------------------------------- The highest number of KBs per second of page ins during any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_CUM ---------------------------------- The total number of page ins from the disk over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_RATE ---------------------------------- The total number of page ins per second from the disk during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On HP-UX and AIX, this is the same as the “pi” value from the vmstat command. On Solaris, this is the same as the sum of the “epi” and “api” values from the “vmstat -p” command, divided by the page size in KB. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PAGEIN_RATE_CUM ---------------------------------- The average number of page ins per second over the cumulative collection time. This includes pages paged in from paging space and, except for AIX, from the file system. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_RATE_HIGH ---------------------------------- The highest number of page ins per second from disk during any interval over the cumulative collection time. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. 
It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEOUT ---------------------------------- The total number of page outs to the disk during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On HP-UX, this is the same as the “page outs” value from the “vmstat -s” command. On HP-UX 11iv3 and above this includes filecache page outs also. On AIX, this is the same as the “paging space page outs” value. Remember that “vmstat -s” reports cumulative counts. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PAGEOUT_BYTE ---------------------------------- The number of KBs (or MBs if specified) of page outs during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PAGEOUT_BYTE_CUM ---------------------------------- The number of KBs (or MBs if specified) of page outs over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEOUT_BYTE_RATE ---------------------------------- The number of KBs (or MBs if specified) per second of page outs during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. 
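For illustration, a paging-space page-out byte rate in the spirit of the metric above can be approximated on Linux by differencing the cumulative “pswpout” counter from /proc/vmstat across an interval and scaling by the system page size. This is a sketch only, not the GlancePlus measurement path.

    import os
    import time

    # Sketch: approximate KBs of page outs to paging space per second by
    # differencing the cumulative "pswpout" counter (pages swapped out) from
    # /proc/vmstat over an interval. Illustrative only.
    PAGE_KB = os.sysconf("SC_PAGE_SIZE") / 1024.0

    def pswpout():
        with open("/proc/vmstat") as f:
            for line in f:
                name, value = line.split()
                if name == "pswpout":
                    return int(value)
        return 0

    def pageout_byte_rate(interval=5.0):
        before = pswpout()
        time.sleep(interval)
        after = pswpout()
        return (after - before) * PAGE_KB / interval   # KB per second

    print(pageout_byte_rate())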
GBL_MEM_PAGEOUT_BYTE_RATE_CUM ---------------------------------- The average number of KBs per second of page outs over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEOUT_BYTE_RATE_HIGH ---------------------------------- The highest number of KBs per second of page outs during any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEOUT_CUM ---------------------------------- The total number of page outs to the disk over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEOUT_RATE ---------------------------------- The total number of page outs to the disk per second during the interval. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. On HP-UX and AIX, this is the same as the “po” value from the vmstat command. On Solaris, this is the same as the sum of the “epo” and “apo” values from the “vmstat -p” command, divided by the page size in KB. On Windows, this counter also includes paging traffic on behalf of the system cache to access file data for applications and so may be high when there is no memory pressure. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PAGEOUT_RATE_CUM ---------------------------------- The average number of page outs to the disk per second over the cumulative collection time. This includes pages paged out to paging space and, except for AIX, to the file system. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEOUT_RATE_HIGH ---------------------------------- The highest number of page outs per second to disk during any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, Solaris, Linux and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGE_FAULT ---------------------------------- The number of page faults that occurred during the interval. On Linux this metric is available only on 2.6 and above kernel versions. GBL_MEM_PAGE_FAULT_CUM ---------------------------------- The number of page faults that occurred over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. 
Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this.

GBL_MEM_PAGE_FAULT_RATE
----------------------------------
The number of page faults per second during the interval.

On Solaris non-global zones with the Uncapped Memory scenario, this metric value is the same as seen in the global zone.

GBL_MEM_PAGE_FAULT_RATE_CUM
----------------------------------
The average number of page faults per second over the cumulative collection time.

The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last.

On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this.

GBL_MEM_PAGE_FAULT_RATE_HIGH
----------------------------------
The highest number of page faults per second during any interval over the cumulative collection time.

The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last.

On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this.

GBL_MEM_PAGE_REQUEST
----------------------------------
The number of page requests to or from the disk during the interval.

On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging space and not to or from the file system.
On Windows, this includes pages paged to or from both paging space and the file system.

On HP-UX, this is the same as the sum of the “page ins” and “page outs” values from the “vmstat -s” command.
On AIX, this is the same as the sum of the “paging space page ins” and “paging space page outs” values. Remember that “vmstat -s” reports cumulative counts.

On Windows, this counter also includes paging traffic on behalf of the system cache to access file data for applications and so may be high when there is no memory pressure.

On Solaris non-global zones with the Uncapped Memory scenario, this metric value is the same as seen in the global zone.

GBL_MEM_PAGE_REQUEST_CUM
----------------------------------
The total number of page requests to or from the disk over the cumulative collection time.

The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last.

On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this.

On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging space and not to or from the file system.
On Windows, this includes pages paged to or from both paging space and the file system.
On Windows, this counter also includes paging traffic on behalf of the system cache to access file data for applications and so may be high when there is no memory pressure.

GBL_MEM_PAGE_REQUEST_RATE
----------------------------------
The number of page requests to or from the disk per second during the interval.

On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging space and not to or from the file system.
On Windows, this includes pages paged to or from both paging space and the file system. On HP-UX and AIX, this is the same as the sum of the “pi” and “po” values from the vmstat command. On Solaris, this is the same as the sum of the “epi”, “epo”, “api”, and “apo” values from the “vmstat -p” command, divided by the page size in KB. Higher than normal rates can indicate either a memory or a disk bottleneck. Compare GBL_DISK_UTIL_PEAK and GBL_MEM_UTIL to determine which resource is more constrained. High rates may also indicate memory thrashing caused by a particular application or set of applications. Look for processes with high major fault rates to identify the culprits. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_PAGE_REQUEST_RATE_CUM ---------------------------------- The average number of page requests to or from the disk per second over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging space and not to or from the file system. On Windows, this includes pages paged to or from both paging space and the file system. GBL_MEM_PAGE_REQUEST_RATE_HIGH ---------------------------------- The highest number of page requests per second during any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this.

On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging space and not to or from the file system.
On Windows, this includes pages paged to or from both paging space and the file system.

GBL_MEM_PHYS
----------------------------------
The amount of physical memory in the system (in MBs unless otherwise specified).

On HP-UX, banks with bad memory are not counted. Note that on some machines, the Processor Dependent Code (PDC) uses the upper 1MB of memory and thus reports less than the actual physical memory of the system. Thus, on a system with 256MB of physical memory, this metric and dmesg(1M) might only report 267,386,880 bytes (255MB). This is all the physical memory that software on the machine can access.

On Windows, this is the total memory available, which may be slightly less than the total amount of physical memory present in the system. This value is also reported in the Control Panel’s About Windows NT help topic.

On Linux, this is the amount of memory given by dmesg(1M). If the value is not available in the kernel ring buffer, then the sum of system memory and available memory is reported as physical memory.

On Solaris non-global zones with the Uncapped Memory scenario, this metric value is the same as seen in the global zone.

GBL_MEM_PHYS_SWAPPED
----------------------------------
On a recognized VMware ESX guest where the VMware guest SDK is enabled, this metric indicates the amount of memory that has been reclaimed by the ESX Server from this logical system by transparently swapping the logical system’s memory to disk. The value is “na” otherwise.

GBL_MEM_SHARES_PRIO
----------------------------------
The weight (priority) for memory assigned to this logical system. This value influences the share of unutilized physical memory that this logical system can utilize. On a recognized VMware ESX guest where the VMware guest SDK is enabled, this value can range from 0 to 100000. The value will be “na” otherwise.

GBL_MEM_SWAPIN_BYTE
----------------------------------
The number of KBs transferred in from disk due to swap ins (or reactivations on HP-UX) during the interval.

On Linux and AIX, swap metrics are equal to the corresponding page metrics.

On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process.
Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap- in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_SWAPIN_BYTE_CUM ---------------------------------- The number of KBs transferred in from disk due to swap ins (or reactivations on HP-UX) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap- in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap- in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. 
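As a further illustration only (again, not the product’s code path), the cumulative behavior described above can be mimicked on Linux by recording the cumulative “pswpin” counter from /proc/vmstat when a sampling tool starts and reporting only the growth since that baseline, so that swap-in activity prior to the tool’s start is excluded. The counter name and page-size query are standard Linux interfaces; the sampling loop and its parameters are arbitrary choices for the example.

    # Illustrative sketch (not the GlancePlus implementation): report swap-in
    # KBs accumulated since this tool started, mirroring the cumulative
    # collection behavior described above for Linux, where swap metrics track
    # the corresponding page (swap) counters.
    import os
    import time

    PAGE_KB = os.sysconf("SC_PAGE_SIZE") / 1024.0   # page size in KB

    def read_pswpin():
        """Cumulative count of pages swapped in since boot, from /proc/vmstat."""
        with open("/proc/vmstat") as f:
            for line in f:
                key, value = line.split()
                if key == "pswpin":
                    return int(value)
        raise RuntimeError("pswpin not found in /proc/vmstat")

    def run(interval=5.0, samples=3):
        baseline = read_pswpin()   # activity before the tool started is excluded
        start = time.time()
        for _ in range(samples):
            time.sleep(interval)
            swapin_kb_cum = (read_pswpin() - baseline) * PAGE_KB
            elapsed = time.time() - start
            print("cumulative swap-in KB: %.0f  (average %.1f KB/sec)"
                  % (swapin_kb_cum, swapin_kb_cum / elapsed))

    if __name__ == "__main__":
        run()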
GBL_MEM_SWAPIN_BYTE_RATE ---------------------------------- The number of KBs per second transferred from disk due to swap ins (or reactivations on HP-UX) during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap- in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap- in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_SWAPIN_BYTE_RATE_CUM ---------------------------------- The number of KBs per second transferred from disk due to swap ins (or reactivations on HP-UX) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. 
Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap- in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap- in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPIN_BYTE_RATE_HIGH ---------------------------------- The highest number of KBs per second transferred from disk due to swap ins (or reactivations on HP-UX) during any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap- in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. 
Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap- in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPOUT_BYTE ---------------------------------- The number of KBs (or MBs if specified) transferred out to disk due to swap outs (or deactivations on HP-UX) during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap- in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap- in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_SWAPOUT_BYTE_CUM ---------------------------------- The number of KBs (or MBs if specified) transferred out to disk due to swap outs (or deactivations on HP-UX) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. 
On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap- in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap- in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPOUT_BYTE_RATE ---------------------------------- The number of KBs (or MBs if specified) per second transferred out to disk due to swap outs (or deactivations on HP-UX) during the interval. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap- in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap- in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. On Solaris non-global zones with Uncapped Memory scenario, this metric value is same as seen in global zone. GBL_MEM_SWAPOUT_BYTE_RATE_CUM ---------------------------------- The average number of KBs (or MBs if specified) per second transferred out to disk due to swap outs (or deactivations on HP-UX) over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Linux and AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap- in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap- in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPOUT_BYTE_RATE_HIGH ---------------------------------- The highest number of KBs (or MBs if specified) per second transferred out to disk due to swap outs (or deactivations on HP-UX) during any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. 
Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this.

On Linux and AIX, swap metrics are equal to the corresponding page metrics.

On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated.

GBL_MEM_SYS
----------------------------------
The amount of physical memory (in MBs unless otherwise specified) used by the system (kernel) during the interval.

System memory does not include the buffer cache. On HP-UX and Linux, it also does not include the filecache.

On HP-UX 11.0, this metric does not include some kinds of dynamically allocated kernel memory. This has always been reported in the GBL_MEM_USER* metrics.
On HP-UX 11.11 and beyond, this metric includes some kinds of dynamically allocated kernel memory.

On Solaris non-global zones, this metric shows a value of 0.

GBL_MEM_SYS_UTIL
----------------------------------
The percentage of physical memory used by the system during the interval.

System memory does not include the buffer cache. On HP-UX and Linux, it also does not include the filecache.

On HP-UX 11.0, this metric does not include some kinds of dynamically allocated kernel memory. This has always been reported in the GBL_MEM_USER* metrics.
On HP-UX 11.11 and beyond, this metric includes some kinds of dynamically allocated kernel memory.

On Solaris non-global zones, this metric shows a value of 0.

GBL_MEM_USER
----------------------------------
The amount of physical memory (in MBs unless otherwise specified) allocated to user code and data at the end of the interval. User memory regions include code, heap, stack, and other data areas including shared memory. This does not include memory for buffer cache.
On HP-UX and Linux, this also does not include the filecache.

On HP-UX 11.0, this metric includes some kinds of dynamically allocated kernel memory.
On HP-UX 11.11 and beyond, this metric does not include some kinds of dynamically allocated kernel memory. This is now reported in the GBL_MEM_SYS* metrics.

Large fluctuations in this metric can be caused by programs which allocate large amounts of memory and then either release the memory or terminate. A slow continual increase in this metric may indicate a program with a memory leak.

GBL_MEM_USER_UTIL
----------------------------------
The percent of physical memory allocated to user code and data at the end of the interval. This metric shows the percent of memory owned by user memory regions such as user code, heap, stack and other data areas including shared memory. This does not include memory for buffer cache. On HP-UX and Linux, this also does not include the filecache.

On HP-UX 11.0, this metric includes some kinds of dynamically allocated kernel memory.
On HP-UX 11.11 and beyond, this metric does not include some kinds of dynamically allocated kernel memory. This is now reported in the GBL_MEM_SYS* metrics.

Large fluctuations in this metric can be caused by programs which allocate large amounts of memory and then either release the memory or terminate. A slow continual increase in this metric may indicate a program with a memory leak.

GBL_MEM_UTIL
----------------------------------
The percentage of physical memory in use during the interval. This includes system memory (occupied by the kernel), buffer cache and user memory. On HP-UX 11iv3 and above, this includes the file cache. This excludes the file cache when the cachemem parameter in the parm file is set to free.

On HP-UX, this calculation is done using the byte values for physical memory and used memory, and is therefore more accurate than comparing the reported kilobyte values for physical memory and used memory.

On Linux, the value of this metric includes the file cache when the cachemem parameter in the parm file is set to user.

On SUN, high values for this metric may not indicate a true memory shortage. This metric can be influenced by the VMM (Virtual Memory Management) system. This excludes the ZFS ARC cache when the cachemem parameter in the parm file is set to free.

On AIX, this excludes the file cache when the cachemem parameter in the parm file is set to free.

Locality Domain metrics are available on HP-UX 11iv2 and above. GBL_MEM_FREE and LDOM_MEM_FREE, as well as the memory utilization metrics derived from them, may not always fully match. GBL_MEM_FREE represents free memory in the kernel’s reservation layer while LDOM_MEM_FREE shows actual free pages. If memory has been reserved but not actually consumed from the Locality Domains, the two values won’t match. Because GBL_MEM_FREE includes pre-reserved memory, the GBL_MEM_* metrics are a better indicator of actual memory consumption in most situations.

GBL_MEM_UTIL_CUM
----------------------------------
The average percentage of physical memory in use over the cumulative collection time. This includes system memory (occupied by the kernel), buffer cache and user memory. On HP-UX 11iv3 and above, this also includes the file cache.

The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last.
On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. GBL_MEM_UTIL_HIGH ---------------------------------- The highest percentage of physical memory in use in any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. GBL_NET_COLLISION ---------------------------------- The number of collisions that occurred on all network interfaces during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not include deferred packets. This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the “Single Collision Frames”, “Multiple Collision Frames”, “Late Collisions”, and “Excessive Collisions” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Coll” column from the “netstat -i” command (“collisions” from the “netstat -i -e” command on Linux) for a network device. See also netstat(1). AIX does not support the collision count for the ethernet interface. 
The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_COLLISION_1_MIN_RATE ---------------------------------- The number of collisions per minute on all network interfaces during the interval. This metric does not include deferred packets. This does not include data for loopback interface. Collisions occur on any busy network, but abnormal collision rates could indicate a hardware or software problem. AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_COLLISION_CUM ---------------------------------- The number of collisions that occurred on all network interfaces over the cumulative collection time. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not include deferred packets. This does not include data for loopback interface. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. For HP-UX, this will be the same as the sum of the “Single Collision Frames”, “Multiple Collision Frames”, “Late Collisions”, and “Excessive Collisions” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. For this release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For other Unix systems, this is the same as the sum of the “Coll” column from the “netstat -i” command (“collisions” from the “netstat -i -e” command on Linux) for a network device. See also netstat(1). AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. 
For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_COLLISION_PCT ---------------------------------- The percentage of collisions to total outbound packet attempts during the interval. Outbound packet attempts include both successful packets and collisions. This does not include data for loopback interface. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets. AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_COLLISION_PCT_CUM ---------------------------------- The percentage of collisions to total outbound packet attempts over the cumulative collection time. Outbound packet attempts include both successful packets and collisions. This does not include data for loopback interface. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets. AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_COLLISION_RATE ---------------------------------- The number of collisions per second on all network interfaces during the interval. This metric does not include deferred packets. This does not include data for loopback interface. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. 
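As an illustrative aid to the collision metrics defined above (GBL_NET_COLLISION_RATE, GBL_NET_COLLISION_1_MIN_RATE, and GBL_NET_COLLISION_PCT), the following short Python sketch shows how the per-second rate, per-minute rate, and collision percentage are derived from a single interval. It is not GlancePlus source; the interval length and counter values are hypothetical.

    # Minimal sketch with hypothetical interval deltas; not GlancePlus code.
    interval_seconds = 60       # assumed length of the measurement interval
    collisions       = 30       # collisions counted during the interval
    out_packets      = 14970    # successful outbound packets during the interval

    # GBL_NET_COLLISION_RATE: collisions per second during the interval.
    collision_rate = collisions / interval_seconds

    # GBL_NET_COLLISION_1_MIN_RATE: collisions per minute during the interval.
    collisions_per_minute = collisions / (interval_seconds / 60.0)

    # GBL_NET_COLLISION_PCT: collisions as a percentage of all outbound packet
    # attempts, where attempts = successful outbound packets + collisions.
    attempts = out_packets + collisions
    collision_pct = 100.0 * collisions / attempts if attempts else 0.0

    print(f"rate={collision_rate:.2f}/s  per-minute={collisions_per_minute:.1f}  "
          f"pct={collision_pct:.2f}%")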
AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_ERROR ---------------------------------- The number of errors that occurred on all network interfaces during the interval. This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the “Inbound Errors” and “Outbound Errors” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of “Ierrs” (RX-ERR on Linux) and “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_ERROR_1_MIN_RATE ---------------------------------- The number of errors per minute on all network interfaces during the interval. This rate should normally be zero or very small. A large error rate can indicate a hardware or software problem. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_ERROR_CUM ---------------------------------- The number of errors that occurred on all network interfaces over the cumulative collection time. This does not include data for loopback interface. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. For HP-UX, this will be the same as the total sum of the “Inbound Errors” and “Outbound Errors” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. 
For all other Unix systems, this is the same as the sum of “Ierrs” (RX-ERR on Linux) and “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_ERROR_RATE ---------------------------------- The number of errors per second on all network interfaces during the interval. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_IN_ERROR ---------------------------------- The number of inbound errors that occurred on all network interfaces during the interval. A large number of errors may indicate a hardware problem on the network. This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the “Inbound Errors” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of “Ierrs” (RX-ERR on Linux) and “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_IN_ERROR_CUM ---------------------------------- The number of inbound errors that occurred on all network interfaces over the cumulative collection time. This does not include data for loopback interface. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. A large number of errors may indicate a hardware problem on the network. For HP-UX, this will be the same as the total sum of the “Inbound Errors” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. 
For all other Unix systems, this is the same as the sum of “Ierrs” (RX-ERR on Linux) and “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_IN_ERROR_PCT ---------------------------------- The percentage of inbound network errors to total inbound packet attempts during the interval. Inbound packet attempts include both packets successfully received and those that encountered errors. This does not include data for loopback interface. A large number of errors may indicate a hardware problem on the network. The percentage of inbound errors to total packets attempted should remain low. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_IN_ERROR_PCT_CUM ---------------------------------- The percentage of inbound network errors to total inbound packet attempts over the cumulative collection time. Inbound packet attempts include both packets successfully received and those that encountered errors. This does not include data for loopback interface. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. A large number of errors may indicate a hardware problem on the network. The percentage of inbound errors to total packets attempted should remain low. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_IN_ERROR_RATE ---------------------------------- The number of inbound errors per second on all network interfaces during the interval. This does not include data for loopback interface. A large number of errors may indicate a hardware problem on the network. The percentage of inbound errors to total packets attempted should remain low. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. 
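As a hedged cross-check of the inbound error metrics above on Linux, the following Python sketch samples the cumulative RX-OK and RX-ERR counters in /proc/net/dev (the same counters reported by "netstat -i"), excluding the loopback interface, and derives an interval delta, a per-second rate, and an error percentage. It is an illustrative approximation, not the GlancePlus implementation; the field positions follow the long-standing /proc/net/dev layout and should be verified against the local kernel.

    import time

    def sample():
        rx_ok = rx_err = 0
        with open("/proc/net/dev") as f:
            for line in f.readlines()[2:]:      # skip the two header lines
                name, data = line.split(":", 1)
                if name.strip() == "lo":        # the metrics exclude loopback
                    continue
                fields = data.split()
                rx_ok  += int(fields[1])        # cumulative packets received OK
                rx_err += int(fields[2])        # cumulative receive errors
        return rx_ok, rx_err

    interval = 5.0
    ok1, err1 = sample()
    time.sleep(interval)
    ok2, err2 = sample()

    d_ok, d_err = ok2 - ok1, err2 - err1
    attempts = d_ok + d_err                     # received packets + inbound errors
    pct = 100.0 * d_err / attempts if attempts else 0.0
    print(f"inbound errors: {d_err}  rate: {d_err / interval:.2f}/s  pct: {pct:.2f}%")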
GBL_NET_IN_ERROR_RATE_CUM ---------------------------------- The average number of inbound errors per second on all network interfaces over the cumulative collection time. This does not include data for loopback interface. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_IN_PACKET ---------------------------------- The number of successful packets received through all network interfaces during the interval. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the “Inbound Unicast Packets” and “Inbound Non-Unicast Packets” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Ipkts” column (RX-OK on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_IN_PACKET_CUM ---------------------------------- The number of successful packets received through all network interfaces over the cumulative collection time. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. 
On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. For HP-UX, this will be the same as the total sum of the “Inbound Unicast Packets” and “Inbound Non-Unicast Packets” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Ipkts” column (RX-OK on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_IN_PACKET_RATE ---------------------------------- The number of successful packets per second received through all network interfaces during the interval. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_OUT_ERROR ---------------------------------- The number of outbound errors that occurred on all network interfaces during the interval. This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the “Outbound Errors” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_OUT_ERROR_CUM ---------------------------------- The number of outbound errors that occurred on all network interfaces over the cumulative collection time. This does not include data for loopback interface. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. 
On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. For HP-UX, this will be the same as the total sum of the “Outbound Errors” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_OUT_ERROR_PCT ---------------------------------- The percentage of outbound network errors to total outbound packet attempts during the interval. Outbound packet attempts include both packets successfully sent and those that encountered errors. This does not include data for loopback interface. The percentage of outbound errors to total packets attempted to be transmitted should remain low. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_OUT_ERROR_PCT_CUM ---------------------------------- The percentage of outbound network errors to total outbound packet attempts over the cumulative collection time. Outbound packet attempts include both packets successfully sent and those that encountered errors. This does not include data for loopback interface. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. 
On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. The percentage of outbound errors to total packets attempted to be transmitted should remain low. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_OUT_ERROR_RATE ---------------------------------- The number of outbound errors per second on all network interfaces during the interval. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_OUT_ERROR_RATE_CUM ---------------------------------- The number of outbound errors per second on all network interfaces over the cumulative collection time. This does not include data for loopback interface. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_OUT_PACKET ---------------------------------- The number of successful packets sent through all network interfaces during the last interval. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. For HP-UX, this will be the same as the sum of the “Outbound Unicast Packets” and “Outbound Non-Unicast Packets” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Opkts” column (TX-OK on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. 
On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_OUT_PACKET_CUM ---------------------------------- The number of successful packets sent through all network interfaces over the cumulative collection time. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. For HP-UX, this will be the same as the total sum of the “Outbound Unicast Packets” and “Outbound Non-Unicast Packets” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Opkts” column (TX-OK on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_OUT_PACKET_RATE ---------------------------------- The number of successful packets per second sent through the network interfaces during the interval. Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. On Solaris non-global zones, this metric shows data from the global zone. GBL_NET_PACKET ---------------------------------- The total number of successful inbound and outbound packets for all network interfaces during the interval. These are the packets that have been processed without errors or collisions. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. GBL_NET_PACKET_RATE ---------------------------------- The number of successful packets per second (both inbound and outbound) for all network interfaces during the interval. 
Successful packets are those that have been processed without errors or collisions. This does not include data for loopback interface. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows systems, the packet size for NBT connections is defined as 1 Kbyte. On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_UTIL_PEAK ---------------------------------- The utilization of the most heavily used network interface at the end of the interval. Some AIX systems report a speed that is lower than the measured throughput, and this can result in BYNETIF_UTIL and GBL_NET_UTIL_PEAK showing more than 100% utilization. On Linux, root permission is required to obtain network interface bandwidth, so values will be n/a when running in non-root mode. Also, the maximum bandwidth for virtual interfaces (vnetN) may be reported incorrectly on KVM or Xen servers, so, as on AIX, utilization may exceed 100%.
GBL_NFS_CALL ---------------------------------- The number of NFS calls the local system has made as either an NFS client or server during the interval. This includes both successful and unsuccessful calls. Unsuccessful calls are those that cannot be completed due to resource limitations or LAN packet errors. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. On AIX System WPARs, this metric is NA.
GBL_NFS_CALL_RATE ---------------------------------- The number of NFS calls per second the system made as either an NFS client or NFS server during the interval. Each computer can operate as both an NFS server, and as an NFS client. This metric includes both successful and unsuccessful calls. Unsuccessful calls are those that cannot be completed due to resource limitations or LAN packet errors. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. On AIX System WPARs, this metric is NA.
GBL_NFS_CLIENT_BAD_CALL ---------------------------------- The number of failed NFS client calls during the interval. Calls fail due to lack of system resources (lack of virtual memory) as well as network errors.
GBL_NFS_CLIENT_BAD_CALL_CUM ---------------------------------- The number of failed NFS client calls over the cumulative collection time. Calls fail due to lack of system resources (lack of virtual memory) as well as network errors. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool; process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HP-UX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this.
GBL_NFS_CLIENT_CALL ---------------------------------- The number of NFS calls the local machine has processed as an NFS client during the interval. Calls are the system calls used to initiate physical NFS operations. These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate. This metric includes both successful and unsuccessful calls. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations.
GBL_NFS_CLIENT_CALL_CUM ---------------------------------- The number of NFS calls the local machine has processed as an NFS client over the cumulative collection time. Calls are the system calls used to initiate physical NFS operations. These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate. This metric includes both successful and unsuccessful calls. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool; process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HP-UX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations.
GBL_NFS_CLIENT_CALL_RATE ---------------------------------- The number of NFS calls the local machine has processed as an NFS client per second during the interval. Calls are the system calls used to initiate physical NFS operations. These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate. This metric includes both successful and unsuccessful calls.
NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations.
GBL_NFS_CLIENT_IO ---------------------------------- The number of NFS IOs the local machine has completed as an NFS client during the interval. This number represents physical IOs sent by the client, in contrast to a call, which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as an NFS client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_CLIENT_IO_CUM ---------------------------------- The number of NFS IOs the local machine has completed as an NFS client over the cumulative collection time. This number represents physical IOs sent by the client, in contrast to a call, which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as an NFS client. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool; process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HP-UX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_CLIENT_IO_PCT ---------------------------------- The percentage of NFS IOs the local machine has completed as an NFS client versus total NFS IOs completed during the interval. This number represents physical IOs sent by the client, in contrast to a call, which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as an NFS client. A percentage greater than 50 indicates that this machine is acting more as a client. A percentage less than 50 indicates this machine is acting more as a server for others. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_CLIENT_IO_PCT_CUM ---------------------------------- The percentage of NFS IOs the local machine has completed as an NFS client versus total NFS IOs completed over the cumulative collection time. This number represents physical IOs sent by the client, in contrast to a call, which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as an NFS client.
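To make the client/server IO split described for GBL_NFS_CLIENT_IO_PCT concrete, here is a minimal arithmetic sketch with hypothetical counts; it mirrors the percentage definition above and is not GlancePlus code.

    # Hypothetical NFS IO counts for one interval; not GlancePlus source.
    client_io = 1800    # NFS IOs this host completed while acting as an NFS client
    server_io = 600     # NFS IOs this host completed while acting as an NFS server

    total_io = client_io + server_io
    client_io_pct = 100.0 * client_io / total_io if total_io else 0.0
    server_io_pct = 100.0 * server_io / total_io if total_io else 0.0

    # A client percentage above 50 means the host acts mostly as an NFS client;
    # below 50 means it acts mostly as an NFS server for other systems.
    print(f"client: {client_io_pct:.1f}%  server: {server_io_pct:.1f}%")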
The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. A percentage greater than 50 indicates that this machine is acting more as a client. A percentage less than 50 indicates this machine is acting more as a server for others. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_CLIENT_IO_RATE ---------------------------------- The number of NFS IOs per second the local machine has completed as an NFS client during the interval. This number represents physical IOs sent by the client in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as a NFS client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_CLIENT_IO_RATE_CUM ---------------------------------- The number of NFS IOs per second the local machine has completed as an NFS client over the cumulative collection time. This number represents physical IOs sent by the client in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as a NFS client. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. 
On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_CLIENT_READ_RATE ---------------------------------- The number of NFS “read” operations per second the system generated as an NFS client during the interval. NFS Version 2 read operations consist of getattr, lookup, readlink, readdir, null, root, statfs, and read. NFS Version 3 read operations consist of getattr, lookup, access, readlink, read, readdir, readdirplus, fsstat, fsinfo, and null. GBL_NFS_CLIENT_READ_RATE_CUM ---------------------------------- The average number of NFS “read” operations per second the system generated as an NFS client over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. NFS Version 2 read operations consist of getattr, lookup, readlink, readdir, null, root, statfs, and read. NFS Version 3 read operations consist of getattr, lookup, access, readlink, read, readdir, readdirplus, fsstat, fsinfo, and null. GBL_NFS_CLIENT_WRITE_RATE ---------------------------------- The number of NFS “write” operations per second the system generated as an NFS client during the interval. NFS Version 2 write operations consist of setattr, write, writecache, create, remove, rename, link, symlink, mkdir, and rmdir. NFS Version 3 write operations consist of setattr, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, pathconf, and commit. GBL_NFS_CLIENT_WRITE_RATE_CUM ---------------------------------- The average number of NFS “write” operations per second the system generated as an NFS client over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. 
On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. NFS Version 2 write operations consist of setattr, write, writecache, create, remove, rename, link, symlink, mkdir, and rmdir. NFS Version 3 write operations consist of setattr, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, pathconf, and commit. GBL_NFS_SERVER_BAD_CALL ---------------------------------- The number of failed NFS server calls during the interval. Calls fail due to lack of system resources (lack of virtual memory) as well as network errors. GBL_NFS_SERVER_BAD_CALL_CUM ---------------------------------- The number of failed NFS server calls over the cumulative collection time. Calls fail due to lack of system resources (lack of virtual memory) as well as network errors. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. GBL_NFS_SERVER_CALL ---------------------------------- The number of NFS calls the local machine has processed as a NFS server during the interval. Calls are the system calls used to initiate physical NFS operations. These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate. This metric includes both successful and unsuccessful calls. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. 
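As a hedged cross-check of the NFS call and call-rate metrics on Linux, the sketch below samples the cumulative RPC call counters exposed in /proc/net/rpc/nfs (client side) and /proc/net/rpc/nfsd (server side) and converts them into per-second rates over a short interval, which is conceptually what the client and server call-rate metrics report. This is an illustrative approximation, not the GlancePlus implementation; the “rpc” line layout is assumed from common kernels and either file may be absent if the corresponding NFS role is not active.

    import time

    def rpc_calls(path):
        # Return the cumulative RPC call count from the file's "rpc" line,
        # or None if the file does not exist (the NFS role is not active).
        try:
            with open(path) as f:
                for line in f:
                    if line.startswith("rpc "):
                        return int(line.split()[1])
        except FileNotFoundError:
            return None
        return None

    interval = 5.0
    c1 = rpc_calls("/proc/net/rpc/nfs")      # NFS client-side RPC statistics
    s1 = rpc_calls("/proc/net/rpc/nfsd")     # NFS server-side RPC statistics
    time.sleep(interval)
    c2 = rpc_calls("/proc/net/rpc/nfs")
    s2 = rpc_calls("/proc/net/rpc/nfsd")

    for role, before, after in (("client", c1, c2), ("server", s1, s2)):
        if before is None or after is None:
            print(f"{role}: n/a")
        else:
            print(f"{role}: {(after - before) / interval:.2f} NFS calls per second")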
GBL_NFS_SERVER_CALL_CUM ---------------------------------- The number of NFS calls the local machine has processed as an NFS server over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool; process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HP-UX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. Calls are the system calls used to initiate physical NFS operations. These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate. This metric includes both successful and unsuccessful calls. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations.
GBL_NFS_SERVER_CALL_RATE ---------------------------------- The number of NFS calls the local machine has processed per second as an NFS server during the interval. Calls are the system calls used to initiate physical NFS operations. These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate. This metric includes both successful and unsuccessful calls. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations.
GBL_NFS_SERVER_IO ---------------------------------- The number of NFS IOs the local machine has completed as an NFS server during the interval. This number represents physical IOs received by the server, in contrast to a call, which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as an NFS client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_SERVER_IO_CUM ---------------------------------- The number of NFS IOs the local machine has completed as an NFS server over the cumulative collection time. This number represents physical IOs received by the server, in contrast to a call, which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as an NFS client.
The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool; process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HP-UX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_SERVER_IO_PCT ---------------------------------- The percentage of NFS IOs the local machine has completed as an NFS server versus total NFS IOs completed during the interval. This number represents physical IOs received by the server, in contrast to a call, which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as an NFS client. A percentage greater than 50 indicates that this machine is acting more as a server for others. A percentage less than 50 indicates this machine is acting more as a client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_SERVER_IO_PCT_CUM ---------------------------------- The percentage of NFS IOs the local machine has completed as an NFS server versus total NFS IOs completed over the cumulative collection time. This number represents physical IOs received by the server, in contrast to a call, which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as an NFS client. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool; process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. A percentage greater than 50 indicates that this machine is acting more as a server for others. A percentage less than 50 indicates this machine is acting more as a client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_SERVER_IO_RATE ---------------------------------- The number of NFS IOs per second the local machine has completed as an NFS server during the interval. This number represents physical IOs received by the server in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server and as an NFS client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_SERVER_IO_RATE_CUM ---------------------------------- The number of NFS IOs per second the local machine has completed as an NFS server over the cumulative collection time. This number represents physical IOs received by the server in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server and as an NFS client. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_SERVER_READ_RATE ---------------------------------- The number of NFS “read” operations per second the system processed as an NFS server during the interval. NFS Version 2 read operations consist of getattr, lookup, readlink, readdir, null, root, statfs, and read.
NFS Version 3 read operations consist of getattr, lookup, access, readlink, read, readdir, readdirplus, fsstat, fsinfo, and null. GBL_NFS_SERVER_READ_RATE_CUM ---------------------------------- The average number of NFS “read” operations per second the system processed as an NFS server over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. NFS Version 2 read operations consist of getattr, lookup, readlink, readdir, null, root, statfs, and read. NFS Version 3 read operations consist of getattr, lookup, access, readlink, read, readdir, readdirplus, fsstat, fsinfo, and null. GBL_NFS_SERVER_WRITE_RATE ---------------------------------- The number of NFS “write” operations per second the system processed as an NFS server during the interval. NFS Version 2 write operations consist of setattr, write, writecache, create, remove, rename, link, symlink, mkdir, and rmdir. NFS Version 3 write operations consist of setattr, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, pathconf, and commit. GBL_NFS_SERVER_WRITE_RATE_CUM ---------------------------------- The average number of NFS “write” operations per second the system processed as an NFS server over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. 
On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. NFS Version 2 write operations consist of setattr, write, writecache, create, remove, rename, link, symlink, mkdir, and rmdir. NFS Version 3 write operations consist of setattr, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, pathconf, and commit. GBL_NODENAME ---------------------------------- On Unix systems, this is the name of the computer as returned by the command “uname -n” (that is, the string returned from the “hostname” program). On Windows, this is the name of the computer as returned by GetComputerName. GBL_NUM_ACTIVE_LS ---------------------------------- This indicates the number of LS hosted in a system that are active. If Perf Agent is installed in a guest or in a standalone system, this value will be 0. On Solaris non-global zones, this metric shows value as 0. GBL_NUM_APP ---------------------------------- The number of applications defined in the parm file plus one (for “other”). The application called “other” captures all other processes not defined in the parm file. You can define up to 999 applications. GBL_NUM_CPU ---------------------------------- The number of physical CPUs on the system. This includes all CPUs, either online or offline. For HP-UX and certain versions of Linux, the sar(1M) command allows you to check the status of the system CPUs. For SUN and DEC, the commands psrinfo(1M) and psradm(1M) allow you to check or change the status of the system CPUs. For AIX, this metric indicates the maximum number of CPUs the system ever had. On a logical system, this metric indicates the number of virtual CPUs configured. When hardware threads are enabled, this metric indicates the number of logical processors. On Solaris non-global zones with Uncapped CPUs, this metric shows data from the global zone. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. The Linux kernel currently doesn’t provide any metadata information for disabled CPUs. This means that there is no way to find out types, speeds, as well as hardware IDs or any other information that is used to determine the number of cores, the number of threads, the HyperThreading state, etc... If the agent (or Glance) is started while some of the CPUs are disabled, some of these metrics will be “na”, some will be based on what is visible at startup time. All information will be updated if/when additional CPUs are enabled and information about them becomes available. The configuration counts will remain at the highest discovered level (i.e. if CPUs are then disabled, the maximum number of CPUs/cores/etc... will remain at the highest observed level). It is recommended that the agent be started with all CPUs enabled. GBL_NUM_CPU_CORE ---------------------------------- This metric provides the total number of CPU cores on a physical system. On VMs, this metric shows information according to resources available on that VM. On non-HP-UX systems, this metric is equivalent to active CPU cores. On AIX System WPARs, this metric value is identical to the value on AIX Global Environment. On Windows, this metric will be “na” on Windows Server 2003 Itanium systems. The Linux kernel currently doesn’t provide any metadata information for disabled CPUs.
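As an illustration of this point, the set of logical CPUs the Linux kernel currently exposes can be compared against the set physically present (a sketch that assumes the standard sysfs layout; it is not part of the metric definition):
cat /sys/devices/system/cpu/present    # CPUs that exist in the system
cat /sys/devices/system/cpu/online     # CPUs currently enabled and schedulable
Only the online CPUs expose the type, speed and topology details discussed below.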
This means that there is no way to find out types, speeds, as well as hardware IDs or any other information that is used to determine the number of cores, the number of threads, the HyperThreading state, etc... If the agent (or Glance) is started while some of the CPUs are disabled, some of these metrics will be “na”, some will be based on what is visible at startup time. All information will be updated if/when additional CPUs are enabled and information about them becomes available. The configuration counts will remain at the highest discovered level (i.e. if CPUs are then disabled, the maximum number of CPUs/cores/etc... will remain at the highest observed level). It is recommended that the agent be started with all CPUs enabled. GBL_NUM_DISK ---------------------------------- The number of disks on the system. Only local disk devices are counted in this metric. On HP-UX, this is a count of the number of disks on the system that have ever had activity over the cumulative collection time. On Solaris non-global zones, this metric shows value as 0. On AIX System WPARs, this metric shows value as 0. GBL_NUM_LS ---------------------------------- This indicates the number of LS hosted in a system. If Perf Agent is installed in a guest or in a standalone system, this value will be 0. On Solaris non-global zones, this metric shows value as 0. GBL_NUM_NETWORK ---------------------------------- The number of network interfaces on the system. This includes the loopback interface. On certain platforms, this also includes FDDI, Hyperfabric, ATM, Serial Software interfaces such as SLIP or PPP, and Wide Area Network interfaces (WAN) such as ISDN or X.25. The “netstat -i” command also displays the list of network interfaces on the system. GBL_NUM_SOCKET ---------------------------------- The number of physical CPU sockets on the system. On VMs, this metric shows information according to resources available on that VM. On Windows, this metric will be “na” on Windows Server 2003 Itanium systems. GBL_NUM_SWAP ---------------------------------- The number of configured swap areas. GBL_NUM_TT ---------------------------------- The number of unique Transaction Tracker (TT) transactions that have been registered on this system. GBL_NUM_USER ---------------------------------- The number of users logged in at the time of the interval sample. This is the same as the command “who | wc -l”. For Unix systems, the information for this metric comes from the utmp file which is updated by the login command. For more information, read the man page for utmp. Some applications may create users on the system without using login and updating the utmp file. These users are not reflected in this count. This metric can be a general indicator of system usage. In a networked environment, however, users may maintain inactive logins on several systems. On Windows, the information for this metric comes from the Server Sessions counter in the Performance Libraries Server object. It is a count of the number of users using this machine as a file server. GBL_OSKERNELTYPE ---------------------------------- This indicates the word size of the current kernel on the system. Some hardware can load the 64-bit kernel or the 32-bit kernel. GBL_OSKERNELTYPE_INT ---------------------------------- This indicates the word size of the current kernel on the system. Some hardware can load the 64-bit kernel or the 32-bit kernel. GBL_OSNAME ---------------------------------- A string representing the name of the operating system.
On Unix systems, this is the same as the output from the “uname -s” command. GBL_OSRELEASE ---------------------------------- The current release of the operating system. On most Unix systems, this is the same as the output from the “uname -r” command. On AIX, this is the actual patch level of the operating system. This is similar to what is returned by the command “lslpp -l bos.rte” as the most recent level of the COMMITTED Base OS Runtime. For example, “5.2.0”. GBL_OSVERSION ---------------------------------- A string representing the version of the operating system. This is the same as the output from the “uname -v” command. This string is limited to 20 characters, and as a result, the complete version name might be truncated. On Windows, this is a string representing the service pack installed on the operating system. GBL_PRI_QUEUE ---------------------------------- The average number of processes or kernel threads blocked on PRI (waiting for their priority to become high enough to get the CPU) during the interval. To determine if the CPU is a bottleneck, compare this metric with GBL_CPU_TOTAL_UTIL. If GBL_CPU_TOTAL_UTIL is near 100 percent and GBL_PRI_QUEUE is greater than three, there is a high probability of a CPU bottleneck. This is calculated as the accumulated time that all processes or kernel threads spent blocked on PRI divided by the interval time. HP-UX RUN/PRI/CPU Queue differences for multi-cpu systems: For example, let’s assume we’re using a system with eight processors. We start eight CPU intensive threads that consume almost all of the CPU resources. The approximate values shown for the CPU related queue metrics would be: GBL_RUN_QUEUE = 1.0 GBL_PRI_QUEUE = 0.1 GBL_CPU_QUEUE = 1.0 Assume we start an additional eight CPU intensive threads. The approximate values now shown are: GBL_RUN_QUEUE = 2.0 GBL_PRI_QUEUE = 8.0 GBL_CPU_QUEUE = 16.0 At this point, we have sixteen CPU intensive threads running on the eight processors. Keeping the definitions of the three queue metrics in mind, the run queue is 2 (that is, 16 / 8); the pri queue is 8 (only half of the threads can be active at any given time); and the cpu queue is 16 (half of the threads waiting in the cpu queue that are ready to run, plus one for each active thread). This illustrates that the run queue is the average of number of threads waiting in the runqueue for all processors; the pri queue is the number of threads that are blocked on “PRI” (priority); and the cpu queue is the number of threads in the cpu queue that are ready to run, including the threads using the CPU. Note that if the value for GBL_PRI_QUEUE greatly exceeds the value for GBL_RUN_QUEUE, this may be a side-effect of the measurement interface having lost trace data. In this case, check the value of the GBL_LOST_MI_TRACE_BUFFERS metric. If there has been buffer loss, you can correct the value of GBL_PRI_QUEUE by restarting the midaemon and the performance tools. You can use the /opt/perf/bin/midaemon -T command to force immediate shutdown of the measurement interface. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues.
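As a simple illustration of the GBL_PRI_QUEUE calculation described above (the numbers are hypothetical): if twelve kernel threads each spend 2 seconds blocked on PRI during a 10-second interval, the accumulated blocked time is 24 seconds, so GBL_PRI_QUEUE = 24 / 10 = 2.4 for that interval.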
The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. GBL_PRI_WAIT_PCT ---------------------------------- The percentage of time processes or kernel threads were blocked on PRI (waiting for their priority to become high enough to get the CPU) during the interval. This is calculated as the accumulated time that all processes or kernel threads spent blocked on PRI divided by the accumulated time that all processes or kernel threads were alive during the interval. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. The Global QUEUE metrics, which are based on block states, represent the average number of process or kernel thread counts, not actual queues. The Global WAIT PCT metrics, which are also based on block states, represent the percentage of all processes or kernel threads that were alive on the system. No direct comparison is reasonable with the Application WAIT PCT metrics since they represent percentages within the context of a specific application and cannot be summed or compared with global values easily. In addition, the sum of each Application WAIT PCT for all applications will not equal 100% since these values will vary greatly depending on the number of processes or kernel threads in each application. For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many processes on the system, but there are only a very small number of processes in the specific application that is being examined and there is a high percentage of those few processes that are blocked on the disk I/O subsystem. GBL_PRI_WAIT_TIME ---------------------------------- The accumulated time, in seconds, that all processes or kernel threads were blocked on PRI (waiting for their priority to become high enough to get the CPU) during the interval. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. GBL_PROC_SAMPLE ---------------------------------- The number of process data samples that have been averaged into global metrics (such as GBL_ACTIVE_PROC) that are based on process samples. GBL_RUN_QUEUE ---------------------------------- On UNIX systems except Linux, this is the average number of threads waiting in the runqueue over the interval. 
The average is computed against the number of times the run queue is occupied instead of time. The average is updated by the kernel at a fine grain interval, only when the run queue is occupied. It is not averaged against the interval and can therefore be misleading for long intervals when the run queue is empty most or part of the time. This value matches runq-sz reported by the “sar -q” command. The GBL_LOADAVG* metrics are better indicators of run queue pressure. On Linux and Windows, this is an instantaneous value obtained at the time of logging. On Linux, it shows the number of threads waiting in the runqueue. On Windows, it shows the Processor Queue Length. On Unix systems, GBL_RUN_QUEUE will typically be a small number. Larger than normal values for this metric indicate CPU contention among threads. This CPU bottleneck is also normally indicated by 100 percent GBL_CPU_TOTAL_UTIL. It may be OK to have GBL_CPU_TOTAL_UTIL be 100 percent if no other threads are waiting for the CPU. However, if GBL_CPU_TOTAL_UTIL is 100 percent and GBL_RUN_QUEUE is greater than the number of processors, it indicates a CPU bottleneck. On Windows, the Processor Queue reflects a count of process threads which are ready to execute. A thread is ready to execute (in the Ready state) when the only resource it is waiting on is the processor. The Windows operating system itself has many system threads which intermittently use small amounts of processor time. Several low priority threads intermittently wake up and execute for very short intervals. Depending on when the collection process samples this queue, there may be none or several of these low-priority threads trying to execute. Therefore, even on an otherwise quiescent system, the Processor Queue Length can be high. High values for this metric during intervals where the overall CPU utilization (gbl_cpu_total_util) is low do not indicate a performance bottleneck. Relatively high values for this metric during intervals where the overall CPU utilization is near 100% can indicate a CPU performance bottleneck. HP-UX RUN/PRI/CPU Queue differences for multi-cpu systems: For example, let’s assume we’re using a system with eight processors. We start eight CPU intensive threads that consume almost all of the CPU resources. The approximate values shown for the CPU related queue metrics would be: GBL_RUN_QUEUE = 1.0 GBL_PRI_QUEUE = 0.1 GBL_CPU_QUEUE = 1.0 Assume we start an additional eight CPU intensive threads. The approximate values now shown are: GBL_RUN_QUEUE = 2.0 GBL_PRI_QUEUE = 8.0 GBL_CPU_QUEUE = 16.0 At this point, we have sixteen CPU intensive threads running on the eight processors. Keeping the definitions of the three queue metrics in mind, the run queue is 2 (that is, 16 / 8); the pri queue is 8 (only half of the threads can be active at any given time); and the cpu queue is 16 (half of the threads waiting in the cpu queue that are ready to run, plus one for each active thread). This illustrates that the run queue is the average of number of threads waiting in the runqueue for all processors; the pri queue is the number of threads that are blocked on “PRI” (priority); and the cpu queue is the number of threads in the cpu queue that are ready to run, including the threads using the CPU. On Solaris non-global zones, this metric shows data from the global zone. GBL_RUN_QUEUE_CUM ---------------------------------- On UNIX systems except Linux, this is the average number of threads waiting in the runqueue over the cumulative collection time.
On Linux, this is approximately the number of threads waiting in the runqueue over the cumulative collection time. On Windows, this is approximately the average Processor Queue Length over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. In this case, this metric is a cumulative average of data that was collected as an average. This metric is derived from GBL_RUN_QUEUE. HP-UX RUN/PRI/CPU Queue differences for multi-cpu systems: For example, let’s assume we’re using a system with eight processors. We start eight CPU intensive threads that consume almost all of the CPU resources. The approximate values shown for the CPU related queue metrics would be: GBL_RUN_QUEUE = 1.0 GBL_PRI_QUEUE = 0.1 GBL_CPU_QUEUE = 1.0 Assume we start an additional eight CPU intensive threads. The approximate values now shown are: GBL_RUN_QUEUE = 2.0 GBL_PRI_QUEUE = 8.0 GBL_CPU_QUEUE = 16.0 At this point, we have sixteen CPU intensive threads running on the eight processors. Keeping the definitions of the three queue metrics in mind, the run queue is 2 (that is, 16 / 8); the pri queue is 8 (only half of the threads can be active at any given time); and the cpu queue is 16 (half of the threads waiting in the cpu queue that are ready to run, plus one for each active thread). This illustrates that the run queue is the average of number of threads waiting in the runqueue for all processors; the pri queue is the number of threads that are blocked on “PRI” (priority); and the cpu queue is the number of threads in the cpu queue that are ready to run, including the threads using the CPU. GBL_RUN_QUEUE_HIGH ---------------------------------- On UNIX systems except Linux, this is the highest value of average number of threads waiting in the runqueue during any interval over the cumulative collection time. On Linux, this is the highest value of number of threads waiting in the runqueue during any interval over the cumulative collection time. GBL_SAMPLE ---------------------------------- The number of data samples (intervals) that have occurred over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. GBL_SERIALNO ---------------------------------- On HP-UX, this is the ID number of the computer as returned by the command “uname -i”. If this value is not available, an empty string is returned. On SUN, this is the ASCII representation of the hardware-specific serial number. This is printed in hexadecimal as presented by the “hostid” command when possible. If that is not possible, the decimal format is provided instead. On AIX, this is the machine ID number as returned by the command “uname -m”. This number has the form xxyyyyyymmss. For the RISC System/6000, the “xx” position is always 00. The “yyyyyy” positions contain the unique ID number for the central processing unit (cpu). The “mm” positions represent the model number, and “ss” is the submodel number (always 00). On Linux, this is the ASCII representation of the hardware-specific serial number, as returned by the command “hostid”. GBL_STARTDATE ---------------------------------- The date that the collector started. GBL_STARTED_PROC ---------------------------------- The number of processes that started during the interval. GBL_STARTED_PROC_RATE ---------------------------------- The number of processes that started per second during the interval. GBL_STARTTIME ---------------------------------- The time of day that the collector started. GBL_STATDATE ---------------------------------- The date at the end of the interval, based on local time. GBL_STATTIME ---------------------------------- An ASCII string representing the time at the end of the interval, based on local time. GBL_SWAP_SPACE_AVAIL ---------------------------------- The total amount of potential swap space, in MB. On HP-UX, this is the sum of the device swap areas enabled by the swapon command, the allocated size of any file system swap areas, and the allocated size of pseudo swap in memory if enabled. Note that this is potential swap space. This is the same as (AVAIL: total) as reported by the “swapinfo -mt” command. On SUN, this is the total amount of swap space available from the physical backing store devices (disks) plus the amount currently available from main memory. This is the same as (used + available)/1024, reported by the “swap -s” command.
On Linux, this is the same as (Swap: total) as reported by the “free -m” command. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_SWAP_SPACE_AVAIL_KB ---------------------------------- The total amount of potential swap space, in KB. On HP-UX, this is the sum of the device swap areas enabled by the swapon command, the allocated size of any file system swap areas, and the allocated size of pseudo swap in memory if enabled. Note that this is potential swap space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be usable. For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB remains unusable and is considered wasted space. On HP-UX, this is the same as (AVAIL: total) as reported by the “swapinfo -t” command. On SUN, this is the total amount of swap space available from the physical backing store devices (disks) plus the amount currently available from main memory. This is the same as (used + available)/1024, reported by the “swap -s” command. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_SWAP_SPACE_DEVICE_AVAIL ---------------------------------- The amount of swap space configured on disk devices exclusively as swap space (in MB). On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. GBL_SWAP_SPACE_DEVICE_UTIL ---------------------------------- On HP-UX, this is the percentage of device swap space currently in use of the total swap space available. This does not include file system or remote swap space. On HP-UX, note that available swap is only potential swap space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be usable. For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB remains unusable and is considered wasted space. Consequently, 100 percent utilization on a single device is not always obtainable. The wasted swap space and the remainder of allocated SWCHUNKs that have not been used are reported in the hold field of the /usr/sbin/swapinfo command. On HP-UX, when compared to the “swapinfo -mt” command results, this is calculated as: Util = ((USED: dev) sum / (AVAIL: total)) * 100 On SUN, this is the percentage of total system device swap space currently in use. This metric only gives the percentage of swap space used from the available physical swap device space, and does not include the memory that can be used for swap. (On SunOS 5.X, the virtual swap swapfs can allocate swap space from memory.) On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. GBL_SWAP_SPACE_USED ---------------------------------- The amount of swap space used, in MB. On HP-UX, “Used” indicates written to disk (or locked in memory), rather than reserved. This is the same as (USED: total - reserve) as reported by the “swapinfo -mt” command. On SUN, “Used” indicates amount written to disk (or locked in memory), rather than reserved. Swap space is reserved (by decrementing a counter) when virtual memory for a program is created. This is the same as (bytes allocated)/1024, reported by the “swap -s” command.
On Linux, this is the same as (Swap: used) as reported by the “free -m” command. On AIX System WPARs, this metric is NA. On Solaris non-global zones, this metric is N/A. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SWAP_SPACE_USED_UTIL ---------------------------------- This is the percentage of swap space used. On HP-UX, “Used %” indicates percentage of swap space written to disk (or locked in memory), rather than reserved. This is the same as percentage of ((USED: total - reserve)/total)*100, as reported by the “swapinfo -mt” command. On SUN, “Used %” indicates percentage of swap space written to disk (or locked in memory), rather than reserved. Swap space is reserved (by decrementing a counter) when virtual memory for a program is created. This is the same as percentage of ((bytes allocated)/total)*100, reported by the “swap -s” command. On SUN, global swap space is tracked through the operating system. Device swap space is tracked through the devices. For this reason, the amount of swap space used may differ between the global and by-device metrics. Sometimes pages that are marked to be swapped to disk by the operating system are never swapped. The operating system records this as used swap space, but the devices do not, since no physical IOs occur. (Metrics with the prefix “GBL” are global and metrics with the prefix “BYSWP” are by device.) On Linux, this is the same as percentage of ((Swap: used)/total)*100, as reported by the “free -m” command. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. GBL_SWAP_SPACE_UTIL ---------------------------------- The percent of available swap space that was being used by running processes in the interval. On Windows, this is the percentage of virtual memory, which is available to user processes, that is in use at the end of the interval. It is not an average over the entire interval. It reflects the ratio of committed memory to the current commit limit. The limit may be increased by the operating system if the paging file is extended. This is the same as (Committed Bytes / Commit Limit) * 100 when comparing the results to Performance Monitor. On HP-UX, swap space must be reserved (but not allocated) before virtual memory can be created. If all of available swap is reserved, then no new processes or virtual memory can be created. Swap space locations are actually assigned (used) when a page is actually written to disk or locked in memory (pseudo swap in memory). This is the same as (PCT USED: total) as reported by the “swapinfo -mt” command. On Unix systems, this metric is a measure of capacity rather than performance. As this metric nears 100 percent, processes are not able to allocate any more memory and new processes may not be able to run. Very low swap utilization values may indicate that too much area has been allocated to swap, and better use of disk space could be made by reallocating some swap partitions to be user filesystems. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. On AIX System WPARs, this metric is NA. GBL_SWAP_SPACE_UTIL_CUM ---------------------------------- The average percentage of available swap space currently in use (has memory belonging to processes paged or swapped out on it) over the cumulative collection time.
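For the swap space metrics above, the Linux figures can be compared directly against the output of the free command referenced in their definitions. A minimal sketch (assumes the procps free utility; the awk guard avoids a divide-by-zero when no swap is configured):
free -m                                                                      # "Swap:" line shows total and used, in MB
free -m | awk '/^Swap:/ { if ($2 > 0) printf "%.1f\n", ($3 / $2) * 100 }'    # ((Swap: used)/total)*100
The second command reproduces the ((Swap: used)/total)*100 calculation given for GBL_SWAP_SPACE_USED_UTIL on Linux.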
The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, note that available swap is only potential swap space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be usable. For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB remains unusable and is considered wasted space. Consequently, 100 percent utilization on a single device is not always obtainable. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SWAP_SPACE_UTIL_HIGH ---------------------------------- The highest average percentage of available swap space currently in use (has memory belonging to processes paged or swapped out on it) in any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, note that available swap is only potential swap space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be usable. 
For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB remains unusable and is considered wasted space. Consequently, 100 percent utilization on a single device is not always obtainable. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SYSTEM_ID ---------------------------------- The network node hostname of the system. This is the same as the output from the “uname -n” command. On Windows, this is the name obtained from GetComputerName. GBL_SYSTEM_TYPE ---------------------------------- On Unix systems, this is either the model of the system or the instruction set architecture of the system. On Windows, this is the processor architecture of the system. GBL_SYSTEM_UPTIME_HOURS ---------------------------------- The time, in hours, since the last system reboot. GBL_SYSTEM_UPTIME_SECONDS ---------------------------------- The time, in seconds, since the last system reboot. GBL_THRESHOLD_PROCCPU ---------------------------------- The process CPU threshold specified in the parm file. GBL_THRESHOLD_PROCDISK ---------------------------------- The process disk threshold specified in the parm file. GBL_THRESHOLD_PROCIO ---------------------------------- The process IO threshold specified in the parm file. GBL_THRESHOLD_PROCMEM ---------------------------------- The process memory threshold specified in the parm file. GBL_TT_OVERFLOW_COUNT ---------------------------------- The number of new transactions that could not be measured because the Measurement Processing Daemon’s (midaemon) Measurement Performance Database is full. If this happens, the default Measurement Performance Database size is not large enough to hold all of the registered transactions on this system. This can be remedied by stopping and restarting the midaemon process using the -smdvss option to specify a larger Measurement Performance Database size. The current Measurement Performance Database size can be checked using the midaemon -sizes option. PROC_APP_ID ---------------------------------- The ID number of the application to which the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) belonged during the interval. Application “other” always has an ID of 1. There can be up to 999 user-defined applications, which are defined in the parm file. PROC_APP_NAME ---------------------------------- The application name of a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). Processes (or kernel threads, if HP-UX/Linux Kernel 2.6 and above) are assigned into application groups based upon rules in the parm file. If a process does not fit any rules in this file, it is assigned to the application “other.” The rules include decisions based upon pathname, user ID, priority, and so forth. As these values change during the life of a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above), it is re-assigned to another application. This re-evaluation is done every measurement interval. PROC_CHILD_CPU_SYS_MODE_UTIL ---------------------------------- The percentage of system time accumulated by this process’s children processes during the interval. On Unix systems, when a process terminates, its CPU counters (user and system) are accumulated in the parent’s “children times” counters. This occurs when the parent waits for (or reaps) the child. See getrusage(2). If the process is an orphan process, its parent becomes the init(1m) process, and its CPU times will be accumulated to the init process upon termination.
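On Linux, these accumulated children times are visible per process in /proc/<pid>/stat as the cutime and cstime fields. A small sketch (field positions assume the standard layout described in proc(5) and a command name without embedded spaces):
awk '{ print "cutime (ticks):", $16, "  cstime (ticks):", $17 }' /proc/$$/stat
The values are reported in clock ticks; dividing by the value returned by getconf CLK_TCK converts them to seconds.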
The PROC*_CHILD_* metrics attempt to report these counters in a meaningful way. If these counters were reported unconditionally as they are incremented, they would be misleading. For example, consider a shell process that forks another process and that process accumulates 100 minutes of CPU time. When that process terminates, the shell would report a huge child time utilization for that interval even though it was generally idle, waiting for that child to terminate. The child process was most likely already reported in previous intervals as it used the CPU time, and therefore it would be confusing to report this time in the parent. If, on the other hand, a process was continuously forking short-lived processes during the interval, it would be useful to report the CPU time used by those children processes. The simple algorithm chosen is to only report children times when their total CPU time is less than the process alive interval, and zero otherwise. It is not foolproof but it generally yields the right results, i.e., if a process reports high child time utilization for several intervals in a row, it could be a runaway forking process. An example of such a runaway process (or “fork bomb”) is:
while true ; do
ps -ef | grep something
done
Moderate children times are also a useful way to identify daemons that rely on child processes, or, in the case of the init process it may indicate that many short-lived orphan processes are being created. Note that this metric is only valid at the process level. It reports CPU time of processes forked and does not report on threads created by processes. The PROC*_CHILD* metrics have no meaning at the thread level, therefore the thread metric of the same name, on systems that report per-thread data, will show “na”. PROC_CHILD_CPU_TOTAL_UTIL ---------------------------------- The percentage of system + user time accumulated by this process’s children processes during the interval. On Unix systems, when a process terminates, its CPU counters (user and system) are accumulated in the parent’s “children times” counters. This occurs when the parent waits for (or reaps) the child. See getrusage(2). If the process is an orphan process, its parent becomes the init(1m) process, and its CPU times will be accumulated to the init process upon termination. The PROC*_CHILD_* metrics attempt to report these counters in a meaningful way. If these counters were reported unconditionally as they are incremented, they would be misleading. For example, consider a shell process that forks another process and that process accumulates 100 minutes of CPU time. When that process terminates, the shell would report a huge child time utilization for that interval even though it was generally idle, waiting for that child to terminate. The child process was most likely already reported in previous intervals as it used the CPU time, and therefore it would be confusing to report this time in the parent. If, on the other hand, a process was continuously forking short-lived processes during the interval, it would be useful to report the CPU time used by those children processes. The simple algorithm chosen is to only report children times when their total CPU time is less than the process alive interval, and zero otherwise. It is not foolproof but it generally yields the right results, i.e., if a process reports high child time utilization for several intervals in a row, it could be a runaway forking process.
An example of such a runaway process (or “fork bomb”) is:
while true ; do
ps -ef | grep something
done
Moderate children times are also a useful way to identify daemons that rely on child processes, or, in the case of the init process it may indicate that many short-lived orphan processes are being created. Note that this metric is only valid at the process level. It reports CPU time of processes forked and does not report on threads created by processes. The PROC*_CHILD* metrics have no meaning at the thread level, therefore the thread metric of the same name, on systems that report per-thread data, will show “na”. PROC_CHILD_CPU_USER_MODE_UTIL ---------------------------------- The percentage of user time accumulated by this process’s children processes during the interval. On Unix systems, when a process terminates, its CPU counters (user and system) are accumulated in the parent’s “children times” counters. This occurs when the parent waits for (or reaps) the child. See getrusage(2). If the process is an orphan process, its parent becomes the init(1m) process, and its CPU times will be accumulated to the init process upon termination. The PROC*_CHILD_* metrics attempt to report these counters in a meaningful way. If these counters were reported unconditionally as they are incremented, they would be misleading. For example, consider a shell process that forks another process and that process accumulates 100 minutes of CPU time. When that process terminates, the shell would report a huge child time utilization for that interval even though it was generally idle, waiting for that child to terminate. The child process was most likely already reported in previous intervals as it used the CPU time, and therefore it would be confusing to report this time in the parent. If, on the other hand, a process was continuously forking short-lived processes during the interval, it would be useful to report the CPU time used by those children processes. The simple algorithm chosen is to only report children times when their total CPU time is less than the process alive interval, and zero otherwise. It is not foolproof but it generally yields the right results, i.e., if a process reports high child time utilization for several intervals in a row, it could be a runaway forking process. An example of such a runaway process (or “fork bomb”) is:
while true ; do
ps -ef | grep something
done
Moderate children times are also a useful way to identify daemons that rely on child processes, or, in the case of the init process it may indicate that many short-lived orphan processes are being created. Note that this metric is only valid at the process level. It reports CPU time of processes forked and does not report on threads created by processes. The PROC*_CHILD* metrics have no meaning at the thread level, therefore the thread metric of the same name, on systems that report per-thread data, will show “na”. PROC_CPU_ALIVE_SYS_MODE_UTIL ---------------------------------- The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) in system mode as a percentage of the time it is alive during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off.
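As a simple illustration of the alive-time basis of the PROC_CPU_ALIVE_* metrics (the numbers are hypothetical): a process that is started 40 seconds into a 60-second interval is alive for 20 seconds of that interval; if it accumulates 5 seconds of system-mode CPU in that time, this metric reports 5 / 20 = 25 percent, whereas a utilization computed against the full interval would be roughly 8 percent.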
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_ALIVE_TOTAL_UTIL ---------------------------------- The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) as a percentage of the time it is alive during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_ALIVE_USER_MODE_UTIL ---------------------------------- The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) in user mode as a percentage of the time it is alive during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_LAST_USED ---------------------------------- The ID number of the processor that last ran the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). For uni-processor systems, this value is always zero. On a threaded operating system, such as HP-UX 11.0 and beyond, this metric represents a kernel thread characteristic. If this metric is reported for a process, the value for its last executing kernel thread is given. 
For example, if a process has multiple kernel threads and kernel thread one is the last to execute during the interval, the metric value for kernel thread one is assigned to the process. PROC_CPU_SYS_MODE_TIME ---------------------------------- The CPU time in system mode in the context of the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_SYS_MODE_TIME_CUM ---------------------------------- The CPU time in system mode in the context of the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_SYS_MODE_UTIL ---------------------------------- The percentage of time that the CPU was in system mode in the context of the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. High system mode CPU utilizations are normal for IO intensive programs. Abnormally high system CPU utilization can indicate that a hardware problem is causing a high interrupt rate. It can also indicate programs that are not using system calls efficiently. A classic “hung shell” shows up with very high system mode CPU because it gets stuck in a loop doing terminal reads (a system call) to a device that never responds. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. 
Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_SYS_MODE_UTIL_CUM ---------------------------------- The average percentage of time that the CPU was in system mode in the context of the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. 
Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_TOTAL_TIME ---------------------------------- The total CPU time, in seconds, consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) during the interval. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On HP-UX, the total CPU time is the sum of the CPU time components for a process or kernel thread, including system, user, context switch, interrupts processing, realtime, and nice utilization values. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). 
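For example, the utilization ceiling described above can be computed directly from the core count and the multithreading state. The Python sketch below is illustrative only; the helper name and the assumption of two hardware threads per core are not part of GlancePlus:

    def max_process_cpu_pct(num_cores, multithreading_enabled, threads_per_core=2):
        # Without CPU multi-threading the ceiling is 100% per core.
        if not multithreading_enabled:
            return 100.0 * num_cores
        # With multi-threaded CPUs the ceiling is 100% per logical CPU
        # (cores x hardware threads per core; the text above assumes 2).
        return 100.0 * num_cores * threads_per_core

    print(max_process_cpu_pct(8, False))  # 800.0
    print(max_process_cpu_pct(8, True))   # 1600.0
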
On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_TOTAL_TIME_CUM ---------------------------------- The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) over the cumulative collection time. CPU time is in seconds unless otherwise specified. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. This is calculated as PROC_CPU_TOTAL_TIME_CUM = PROC_CPU_SYS_MODE_TIME_CUM + PROC_CPU_USER_MODE_TIME_CUM On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. 
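As a rough illustration of the normalization choice described above, the following Python sketch shows how the divisor switches between active cores and logical CPUs depending on the ignore_mt setting. The function, its arguments, and the exact formula are assumptions for illustration only; they are not the GlancePlus implementation:

    def normalized_util(busy_seconds, interval_seconds, active_cores,
                        logical_cpus, ignore_mt, multithreading_on):
        if not multithreading_on:
            divisor = active_cores    # flag is a no-op without multithreading
        elif ignore_mt:
            divisor = active_cores    # normalize against active cores
        else:
            divisor = logical_cpus    # normalize against hardware threads
        return 100.0 * busy_seconds / (interval_seconds * divisor)

    # 30 CPU-seconds over a 60-second interval, 4 cores / 8 logical CPUs:
    print(normalized_util(30, 60, 4, 8, ignore_mt=True, multithreading_on=True))   # 12.5
    print(normalized_util(30, 60, 4, 8, ignore_mt=False, multithreading_on=True))  # 6.25
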
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_TOTAL_UTIL ---------------------------------- The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) as a percentage of the total CPU time available during the interval. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On HP-UX, the total CPU utilization is the sum of the CPU utilization components for a process or kernel thread, including system, user, context switch, interrupts processing, realtime, and nice utilization values. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_TOTAL_UTIL_CUM ---------------------------------- The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) as a percentage of the total CPU time available over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On HP-UX, the total CPU utilization is the sum of the CPU utilization components for a process or kernel thread, including system, user, context switch, interrupts processing, realtime, and nice utilization values. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. 
Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_USER_MODE_TIME ---------------------------------- The time, in seconds, the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) was using the CPU in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set (false) in the parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_USER_MODE_TIME_CUM ---------------------------------- The time, in seconds, the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) was using the CPU in user mode over the cumulative collection time. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days.
On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_USER_MODE_UTIL ---------------------------------- The percentage of time the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) was using the CPU in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of Cores on the system. On a system with multi-threaded CPUs, the maximum percentage is : 100 % times the number of cores X 2. ( i.e the total number of logical CPUs on the system). On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. 
This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_CPU_USER_MODE_UTIL_CUM ---------------------------------- The average percentage of time the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) was using the CPU in user mode over the cumulative collection time. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. If there is no CPU multi-threading, the maximum percentage is 100% times the number of cores on the system. On a system with multi-threaded CPUs, the maximum percentage is 100% times the number of cores x 2 (that is, the total number of logical CPUs on the system).
On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. PROC_DISK_PHYS_IO_RATE ---------------------------------- The average number of physical disk IOs per second made by the process or kernel thread during the interval. For processes which run for less than the measurement interval, this metric is normalized over the measurement interval. For example, a process ran for 1 second and did 50 IOs during its life. If the measurement interval is 5 seconds, it is reported as having done 10 IOs per second. If the measurement interval is 60 seconds, it is reported as having done 50/60 or 0.83 IOs per second. “Disk” in this instance refers to any locally attached physical disk drives (that is, “spindles”) that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. 
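The normalization described above amounts to dividing the IO count by the full measurement interval rather than by the process lifetime. The following minimal Python sketch reproduces the worked example; the helper name is hypothetical:

    def phys_io_rate(io_count, interval_seconds):
        # The IO count is divided by the full measurement interval,
        # not by the time the process was actually alive.
        return float(io_count) / interval_seconds

    print(round(phys_io_rate(50, 5), 2))   # 10.0 IOs per second
    print(round(phys_io_rate(50, 60), 2))  # 0.83 IOs per second
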
PROC_DISK_PHYS_IO_RATE_CUM ---------------------------------- The number of physical disk IOs per second made by the selected process or kernel thread over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. “Disk” in this instance refers to any locally attached physical disk drives (that is, “spindles”) that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. 
PROC_DISK_PHYS_READ ---------------------------------- The number of physical reads made by (or for) a process or kernel thread during the last interval. “Disk” refers to a physical drive (that is, “spindle”), not a partition on a drive (unless the partition occupies the entire physical disk). NFS mounted disks are not included in this list. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. PROC_DISK_PHYS_READ_CUM ---------------------------------- The number of physical reads made by (or for) a process or kernel thread over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. “Disk” refers to a physical drive (that is, “spindle”), not a partition on a drive (unless the partition occupies the entire physical disk). NFS mounted disks are not included in this list. 
On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. PROC_DISK_PHYS_READ_RATE ---------------------------------- The number of physical reads per second made by (or for) a process or kernel thread during the interval. “Disk” refers to a physical drive (that is, “spindle”), not a partition on a drive (unless the partition occupies the entire physical disk). NFS mounted disks are not included in this list. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. 
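The Linux caveats above reflect how per-process IO accounting is exposed by the kernel. On 2.6.20 and later kernels built with task IO accounting, raw per-process counters typically appear in /proc/<pid>/io. The Python sketch below reads those counters directly; it illustrates the underlying data source only and is not the GlancePlus instrumentation:

    import os

    def read_proc_io(pid):
        # Returns the kernel's per-process IO counters, or None when the
        # instrumentation is unavailable (the "na" case described above).
        counters = {}
        try:
            with open("/proc/%d/io" % pid) as f:
                for line in f:
                    key, value = line.split(":")
                    counters[key.strip()] = int(value)
        except (OSError, IOError, ValueError):
            return None
        return counters

    io = read_proc_io(os.getpid())
    if io is not None:
        # read_bytes/write_bytes count storage IO; rchar/wchar count all IO.
        print(io.get("read_bytes"), io.get("write_bytes"))
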
PROC_DISK_PHYS_WRITE ---------------------------------- The number of physical writes made by (or for) a process or kernel thread during the last interval. “Disk” in this instance refers to any locally attached physical disk drives (that is, “spindles”) that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. PROC_DISK_PHYS_WRITE_CUM ---------------------------------- The number of physical writes made by (or for) a process or kernel thread over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. 
“Disk” in this instance refers to any locally attached physical disk drives (that is, “spindles”) that may hold file systems and/or swap. NFS mounted disks are not included in this list. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. PROC_DISK_PHYS_WRITE_RATE ---------------------------------- The number of physical writes per second made by (or for) a process or kernel thread during the interval. “Disk” refers to a physical drive (that is, “spindle”), not a partition on a drive (unless the partition occupies the entire physical disk). NFS mounted disks are not included in this list. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. 
In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. PROC_DISK_SUBSYSTEM_WAIT_PCT ---------------------------------- The percentage of time the process or kernel thread was blocked on the disk subsystem (waiting for its file system IOs to complete) during the interval. On HP-UX, this is based on the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing raw IO to a disk are not included in this measurement. On Linux, this is based on the sum of all processes or kernel threads blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. PROC_DISK_SUBSYSTEM_WAIT_PCT_CUM ---------------------------------- The percentage of time the process or kernel thread was blocked on the disk subsystem (waiting for its file system IOs to complete) over the cumulative collection time. On HP-UX, this is based on the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing raw IO to a disk are not included in this measurement. On Linux, this is based on the sum of all processes or kernel threads blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. 
This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. PROC_DISK_SUBSYSTEM_WAIT_TIME ---------------------------------- The time, in seconds, that the process or kernel thread was blocked on the disk subsystem (waiting for its file system IOs to complete) during the interval. On HP-UX, this is based on the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing raw IO to a disk are not included in this measurement. 
On Linux, this is based on the sum of all processes or kernel threads blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. PROC_DISK_SUBSYSTEM_WAIT_TIME_CUM ---------------------------------- The time, in seconds, that the process or kernel thread was blocked on the disk subsystem (waiting for its file system IOs to complete) over the cumulative collection time. On HP-UX, this is based on the sum of processes or kernel threads in the DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing raw IO to a disk are not included in this measurement. On Linux, this is based on the sum of all processes or kernel threads blocked on disk. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. 
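The wait-state percentages described for these PROC_*_WAIT metrics reduce to simple arithmetic: each kernel thread contributes its wait time and its alive time, and the process-level percentage divides the summed wait times by the summed alive times. The following Python sketch is illustrative only; the per-thread figures are hypothetical sample values, not data obtained from the GlancePlus instrumentation. It reproduces the two worked examples given above.

    # Illustrative sketch only: the per-thread wait and alive times below are
    # hypothetical sample values, not values read from the instrumentation.

    def process_wait_pct(threads):
        # threads: list of dicts, each with 'alive' seconds and a 'wait'
        # dict of seconds spent blocked in each wait state.
        total_alive = sum(t["alive"] for t in threads)
        states = sorted({s for t in threads for s in t["wait"]})
        # Process-level percentage: summed wait time divided by summed alive time.
        return {s: 100.0 * sum(t["wait"].get(s, 0.0) for t in threads) / total_alive
                for s in states}

    # First example: two threads alive for the whole 60-second interval,
    # one sleeping, one waiting on terminal input.
    example1 = [{"alive": 60.0, "wait": {"Sleep": 60.0}},
                {"alive": 60.0, "wait": {"Terminal": 60.0}}]
    print(process_wait_pct(example1))   # Sleep: 50.0, Terminal: 50.0

    # Second example: one thread created half-way through the interval and
    # sleeping for its 30 seconds of life, the other waiting on terminal
    # input for 30 seconds and then running on the CPU for the rest.
    example2 = [{"alive": 30.0, "wait": {"Sleep": 30.0}},
                {"alive": 60.0, "wait": {"Terminal": 30.0}}]
    print(process_wait_pct(example2))   # Sleep and Terminal each about 33.3

Each individual thread's percentage uses only that thread's own wait and alive times, which is why the second example reports 100% Sleep for the first thread but 50% Terminal for the second.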
PROC_EUID
----------------------------------
The Effective User ID of a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above).
On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given.

PROC_FILE_MODE
----------------------------------
A text string summarizing the type of open mode:
rd/wr   Opened for input & output
read    Opened for input only
write   Opened for output only

PROC_FILE_NAME
----------------------------------
The path name or identifying information about the open file descriptor. If the path name string exceeds 40 characters in length, the beginning and the end of the path is shown and the middle of the name is replaced by “...”.
An attempt is made to obtain the file path name by either searching the current cylinder group to find directory entries that point to the currently opened inode, or by searching the kernel name cache. Since looking up file path names would require high disk overhead, some names may not be resolved. If the path name cannot be resolved, a string is returned indicating the type and inode number of the file. For the string format including an inode number, you may use the ncheck(1M) program to display the file path name relative to the mount point.
Sometimes files may be deleted before they are closed. In these cases, the process file table may still have the inode even though the file is not actually present and, as a result, ncheck will fail.

PROC_FILE_NUMBER
----------------------------------
The file number of the current open file.

PROC_FILE_OPEN
----------------------------------
The number of files the current process still has open as of the end of the interval.

PROC_FILE_TYPE
----------------------------------
A text string describing the type of the current file. This is one of:
block   Block special device
char    Character device
dir     Directory
fifo    A pipe or named pipe
file    Simple file
link    Symbolic file link
other   An unknown file type

PROC_FORCED_CSWITCH
----------------------------------
The number of times that the process (or kernel thread, if HP-UX) was preempted by an external event and another process (or kernel thread, if HP-UX) was allowed to execute during the interval. Examples of reasons for a forced switch include expiration of a time slice or returning from a system call with a higher priority process (or kernel thread, if HP-UX) ready to run.
On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation.
On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler.

PROC_FORCED_CSWITCH_CUM
----------------------------------
The number of times the process (or kernel thread, if HP-UX) was preempted by an external event and another process (or kernel thread, if HP-UX) was allowed to execute over the cumulative collection time.
The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. Examples of reasons for a forced switch include expiration of a time slice or returning from a system call with a higher priority process (or kernel thread, if HP-UX) ready to run. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. PROC_GROUP_ID ---------------------------------- On most systems, this is the real group ID number of the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). On AIX, this is the effective group ID number of the process. On HP-UX, this is the effective group ID number of the process if not in setgid mode. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_GROUP_NAME ---------------------------------- The group name (from /etc/group) of a process(or kernel thread, if HP- UX/Linux Kernel 2.6 and above). The group identifier is obtained from searching the /etc/passwd file using the user ID (uid) as a key. Therefore, if more than one account is listed in /etc/passwd with the same user ID (uid) field, the first one is used. If no entry can be found for the user ID in /etc/passwd, the group name is the uid number. If no matching entry in /etc/group can be found, the group ID is returned as the group name. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_INTEREST ---------------------------------- A string containing the reason(s) why the process or thread is of interest, based on the thresholds specified in the parm file. 
An ‘A’ indicates that the process or thread exceeds the process CPU threshold, computed using the actual time the process or thread was alive during the interval. A ‘C’ indicates that the process or thread exceeds the process CPU threshold, computed using the collection interval. Currently, the same CPU threshold is used for both CPU interest reasons. A ‘D’ indicates that the process or thread exceeds the process disk IO threshold. An ‘I’ indicates that the process or thread exceeds the IO threshold. An ‘M’ indicates that the process exceeds the process memory threshold. This interest reason is only meaningful for processes and therefore not shown for threads. New processes or threads are identified with an ‘N’, terminated processes or threads are identified with a ‘K’. Note that the parm file ‘nonew’, ‘nokill’ and ‘shortlived’ settings are logging only options and therefore ignored in Glance components. PROC_INTERVAL ---------------------------------- The amount of time in the interval. This is the same value for all processes (and kernel threads, if HP-UX/Linux Kernel 2.6 and above), regardless of whether they were alive for the entire interval. Note, calculations such as utilizations or rates are calculated using this standardized process interval (PROC_INTERVAL), rather than the actual alive time during the interval (PROC_INTERVAL_ALIVE). Thus, if a process was only alive for 1 second and used the CPU during its entire life (1 second), but the process sample interval was 5 seconds, it would be reported as using 1/5 or 20% CPU utilization, rather than 100% CPU utilization. PROC_INTERVAL_ALIVE ---------------------------------- The number of seconds that the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) was alive during the interval. This may be less than the time of the interval if the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) was new or died during the interval. PROC_INTERVAL_CUM ---------------------------------- The amount of time over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On SUN, AIX, and OSF1, this differs from PROC_RUN_TIME in that PROC_RUN_TIME may not include all of the first and last sample interval times and PROC_INTERVAL_CUM does. 
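The distinction between PROC_INTERVAL and PROC_INTERVAL_ALIVE can be restated as a short calculation. The sketch below uses the hypothetical figures from the PROC_INTERVAL description above (a 5-second sample interval and a process that lived for 1 second, using the CPU for its entire life); the values are illustrative and are not data returned by the performance tools.

    # Hypothetical values illustrating the PROC_INTERVAL description above;
    # they are not values read from the GlancePlus collection interface.
    proc_interval       = 5.0   # standardized process sample interval, seconds
    proc_interval_alive = 1.0   # the process was alive for only 1 second
    cpu_time            = 1.0   # and used the CPU for its entire life

    # Utilization as reported: divide by the standardized interval.
    util_reported = 100.0 * cpu_time / proc_interval           # 20.0 (percent)

    # Utilization relative to the time the process was actually alive.
    util_while_alive = 100.0 * cpu_time / proc_interval_alive  # 100.0 (percent)

    print(util_reported, util_while_alive)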
PROC_IO_BYTE ---------------------------------- On HP-UX, this is the total number of physical IO KBs (unless otherwise specified) that was used by this process or kernel thread, either directly or indirectly, during the interval. On all other systems, this is the total number of physical IO KBs (unless otherwise specified) that was used by this process during the interval. IOs include disk, terminal, tape and network IO. On HP-UX, indirect IOs include paging and deactivation/reactivation activity done by the kernel on behalf of the process or kernel thread. Direct IOs include disk, terminal, tape, and network IO, but exclude all NFS traffic. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On SUN, counts in the MB ranges in general can be attributed to disk accesses and counts in the KB ranges can be attributed to terminal IO. This is useful when looking for processes with heavy disk IO activity. This may vary depending on the sample interval length. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. PROC_IO_BYTE_CUM ---------------------------------- On HP-UX, this is the total number of physical IO KBs (unless otherwise specified) that was used by this process or kernel thread, either directly or indirectly, over the cumulative collection time. On all other systems, this is the total number of physical IO KBs (unless otherwise specified) that was used by this process over the cumulative collection time. IOs include disk, terminal, tape and network IO. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. 
Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, indirect IOs include paging and deactivation/reactivation activity done by the kernel on behalf of the process or kernel thread. Direct IOs include disk, terminal, tape, and network IO, but exclude all NFS traffic. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. PROC_IO_BYTE_RATE ---------------------------------- On HP-UX, this is the number of physical IO KBs per second that was used by this process or kernel thread, either directly or indirectly, during the interval. On all other systems, this is the number of physical IO KBs per second that was used by this process during the interval. IOs include disk, terminal, tape and network IO. On HP-UX, indirect IOs include paging and deactivation/reactivation activity done by the kernel on behalf of the process or kernel thread. Direct IOs include disk, terminal, tape, and network IO, but exclude all NFS traffic. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. 
On SUN, counts in the MB ranges in general can be attributed to disk accesses and counts in the KB ranges can be attributed to terminal IO. This is useful when looking for processes with heavy disk IO activity. This may vary depending on the sample interval length. Certain types of disk IOs are not counted by AIX at the process level, so they are excluded from this metric. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. PROC_IO_BYTE_RATE_CUM ---------------------------------- On HP-UX, this is the average number of physical IO KBs per second that was used by this process or kernel thread, either directly or indirectly, over the cumulative collection time. On all other systems, this is the average number of physical IO KBs per second that was used by this process over the cumulative collection time. IOs include disk, terminal, tape and network IO. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, indirect IOs include paging and deactivation/reactivation activity done by the kernel on behalf of the process or kernel thread. Direct IOs include disk, terminal, tape, and network IO, but exclude all NFS traffic. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. 
If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On SUN, counts in the MB ranges in general can be attributed to disk accesses and counts in the KB ranges can be attributed to terminal IO. This is useful when looking for processes with heavy disk IO activity. This may vary depending on the sample interval length. Linux release versions vary with regards to the amount of process-level IO statistics that are available. Some kernels instrument only disk IO, while some provide statistics for all devices together (including tty and other devices with disk IO). When it is available from your specific release of Linux, the PROC_DISK_PHYS* metrics will report pages of disk IO specifically. The PROC_IO* metrics will report the sum of all types of IO including disk IO, in Kilobytes or KB rates. These metrics will have “na” values on kernels that do not support the instrumentation. For multi-threaded processes, some Linux kernels only report IO statistics for the main thread. In that case, patches are available that will allow the process instrumentation to report the sum of all thread’s IOs, and will also enable per-thread reporting. Starting with 2.6.3X, at least some kernels will include IO data from the children of the process in the process data. This results in misleading inflated IO metrics for processes that fork a lot of children, such as shells, or the init(1m) process. PROC_MAJOR_FAULT ---------------------------------- Number of major page faults for this process (or kernel thread, if HP- UX/Linux Kernel 2.6 and above) during the interval. On HP-UX, major page faults and minor page faults are a subset of vfaults (virtual faults). Stack and heap accesses can cause vfaults, but do not result in a disk page having to be loaded into memory. PROC_MAJOR_FAULT_CUM ---------------------------------- Number of major page faults for this process (or kernel thread, if HP- UX/Linux Kernel 2.6 and above) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. 
On HP-UX, major page faults and minor page faults are a subset of vfaults (virtual faults). Stack and heap accesses can cause vfaults, but do not result in a disk page having to be loaded into memory. PROC_MEM_DATA_VIRT ---------------------------------- On SUN, this is the virtual set size (in KB) of the heap memory for this process. Note that heap can reside partially in BSS and partially in the data segment, so its value will not be the same as PROC_REGION_VIRT of the data segment or PROC_REGION_VIRT_DATA, which is the sum of all data segments for the process. On the other non HP-UX systems, this is the virtual set size (in KB) of the data segment for this process(or kernel thread, if Linux Kernel 2.6 and above). A value of “na” is displayed when this information is unobtainable. On AIX, this is the same as the SIZE value reported by “ps v”. On Linux this value is rounded to PAGESIZE. PROC_MEM_LOCKED ---------------------------------- The number of KBs of virtual memory allocated by the process, marked as locked memory. On Windows, this is the non-paged pool memory of the process. This memory is allocated from the system-wide non-paged pool, and is not affected by the pageout process. Device drivers may allocate memory from the non-paged pool, charging quota against the current (caller) thread. The kernel and driver code use the non-paged pool for data that should always be in the physical memory. The size of the non-paged pool is limited to approximately 128 MB on Windows NT systems and to 256 MB on Windows 2000 systems. The failure to allocate memory from the non-paged pool can cause a system crash. PROC_MEM_RES ---------------------------------- The size (in KB) of resident memory allocated for the process(or kernel thread, if HP-UX/Linux Kernel 2.6 and above). On HP-UX, the calculation of this metric differs depending on whether this process has used any CPU time since the midaemon process was started. This metric is less accurate and does not include shared memory regions in its calculation when the process has been idle since the midaemon was started. On HP-UX, for processes that use CPU time subsequent to midaemon startup, the resident memory is calculated as RSS = sum of private region pages + (sum of shared region pages / number of references) The number of references is a count of the number of attachments to the memory region. Attachments, for shared regions, may come from several processes sharing the same memory, a single process with multiple attachments, or combinations of these. This value is only updated when a process uses CPU. Thus, under memory pressure, this value may be higher than the actual amount of resident memory for processes which are idle because their memory pages may no longer be resident or the reference count for shared segments may have changed. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. A value of “na” is displayed when this information is unobtainable. This information may not be obtainable for some system (kernel) processes. It may also not be available for processes. On AIX, this is the same as the RSS value shown by “ps v”. On Windows, this is the number of KBs in the working set of this process. The working set includes the memory pages touched recently by the threads of the process. If free memory in the system is above a threshold, then pages are left in the working set even if they are not in use. 
When free memory falls below a threshold, pages are trimmed from the working set, but not necessarily paged out to disk from memory. If those pages are subsequently referenced, they will be page faulted back into the working set. Therefore, the working set is a general indicator of the memory resident set size of this process, but it will vary depending on the overall status of memory on the system. Note that the size of the working set is often larger than the amount of pagefile space consumed (PROC_MEM_VIRT). PROC_MEM_RES_HIGH ---------------------------------- The largest value of resident memory (in KB) during its lifetime. See the description for PROC_MEM_RES for details about how resident memory is determined. A value of “na” is displayed when this information is unobtainable. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_MEM_SHARED_RES ---------------------------------- The size (in KB) of resident memory of shared regions only, such as shared text, shared memory, and shared libraries. On HP-UX, this value is not affected by the reference count. A value of “na” is displayed when this information is unobtainable. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_MEM_STACK_VIRT ---------------------------------- Size (in KB) of the stack for this process(or kernel thread, if Linux Kernel 2.6 and above). On SUN, the stack is initialized to 8K bytes. On Linux this value is rounded to PAGESIZE. PROC_MEM_TEXT_VIRT ---------------------------------- Size (in KB) of the private text for this process(or kernel thread, if Linux Kernel 2.6 and above). On AIX, this is the same as the TSIZ field shown by “ps v”. On Linux this value is rounded to PAGESIZE. PROC_MEM_VIRT ---------------------------------- The size (in KB) of virtual memory allocated for the process(or kernel thread, if HP-UX/Linux Kernel 2.6 and above). On HP-UX, this consists of the sum of the virtual set size of all private memory regions used by this process, plus this process’ share of memory regions which are shared by multiple processes. For processes that use CPU time, the value is divided by the reference count for those regions which are shared. On HP-UX, this metric is less accurate and does not reflect the reference count for shared regions for processes that were started prior to the midaemon process and have not used any CPU time since the midaemon was started. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. On all other Unix systems, this consists of private text, private data, private stack and shared memory. The reference count for shared memory is not taken into account, so the value of this metric represents the total virtual size of all regions regardless of the number of processes sharing access. Note also that lazy swap algorithms, sparse address space malloc calls, and memory-mapped file access can result in large VSS values. On systems that provide Glance memory regions detail reports, the drilldown detail per memory region is useful to understand the nature of memory allocations for the process. A value of “na” is displayed when this information is unobtainable. This information may not be obtainable for some system (kernel) processes. It may also not be available for processes. 
On Windows, this is the number of KBs the process has used in the paging file(s). Paging files are used to store pages of memory used by the process, such as local data, that are not contained in other files. Examples of memory pages which are contained in other files include pages storing a program’s .EXE and .DLL files. These would not be kept in pagefile space. Thus, often programs will have a memory working set size (PROC_MEM_RES) larger than the size of its pagefile space. On Linux this value is rounded to PAGESIZE. PROC_MINOR_FAULT ---------------------------------- Number of minor page faults for this process (or kernel thread, if HP- UX/Linux Kernel 2.6 and above) during the interval. On HP-UX, major page faults and minor page faults are a subset of vfaults (virtual faults). Stack and heap accesses can cause vfaults, but do not result in a disk page having to be loaded into memory. PROC_MINOR_FAULT_CUM ---------------------------------- Number of minor page faults for this process (or kernel thread, if HP- UX/Linux Kernel 2.6 and above) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, major page faults and minor page faults are a subset of vfaults (virtual faults). Stack and heap accesses can cause vfaults, but do not result in a disk page having to be loaded into memory. PROC_NICE_PRI ---------------------------------- The nice priority for the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) when it was last dispatched. The value is a bias used to adjust the priority for the process. On AIX, the nice user value, makes a process less favored than it otherwise would be, has a range of 0-40 with a default value of 20. The value of PUSER is always added to the value of nice to weight the user process down below the range of priorities expected to be in use by system jobs like the scheduler and special wait queues. On all other Unix systems, the value ranges from 0 to 39. A higher value causes a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) to be dispatched less. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. 
PROC_PAGEFAULT ---------------------------------- The number of page faults that occurred during the interval for the process(or kernel threads, if HP-UX/Linux Kernel 2.6 and above). PROC_PAGEFAULT_RATE ---------------------------------- The number of page faults per second that occurred during the interval for the process(or kernel threads, if HP-UX/Linux Kernel 2.6 and above). PROC_PAGEFAULT_RATE_CUM ---------------------------------- The average number of page faults per second that occurred over the cumulative collection time for the process(or kernel threads, if HP-UX/Linux Kernel 2.6 and above). PROC_PARENT_PROC_ID ---------------------------------- The parent process’ PID number. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_PRI ---------------------------------- On Unix systems, this is the dispatch priority of a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) at the end of the interval. The lower the value, the more likely the process is to be dispatched. On Windows, this is the current base priority of this process. On HP-UX, whenever the priority is changed for the selected process or kernel thread, the new value will not be reflected until the process or kernel thread is reactivated if it is currently idle (for example, SLEEPing). On HP-UX, the lower the value, the more the process or kernel thread is likely to be dispatched. Values between zero and 127 are considered to be “real-time” priorities, which the kernel does not adjust. Values above 127 are normal priorities and are modified by the kernel for load balancing. Some special priorities are used in the HP-UX kernel and subsystems for different activities. These values are described in /usr/include/sys/param.h. Priorities less than PZERO 153 are not signalable. Note that on HP-UX, many network-related programs such as inetd, biod, and rlogind run at priority 154 which is PPIPE. Just because they run at this priority does not mean they are using pipes. By examining the open files, you can determine if a process or kernel thread is using pipes. For HP-UX 10.0 and later releases, priorities between -32 and -1 can be seen for processes or kernel threads using the Posix Real-time Schedulers. When specifying a Posix priority, the value entered must be in the range from 0 through 31, which the system then remaps to a negative number in the range of -1 through -32. Refer to the rtsched man pages for more information. On a threaded operating system, such as HP-UX 11.0 and beyond, this metric represents a kernel thread characteristic. If this metric is reported for a process, the value for its last executing kernel thread is given. For example, if a process has multiple kernel threads and kernel thread one is the last to execute during the interval, the metric value for kernel thread one is assigned to the process. On AIX, values for priority range from 0 to 127. Processes running at priorities less than PZERO (40) are not signalable. On Windows, the higher the value the more likely the process or thread is to be dispatched. Values for priority range from 0 to 31. Values of 16 and above are considered to be “realtime” priorities. Threads within a process can raise and lower their own base priorities relative to the process’s base priority. 
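On Linux, several of the per-process values discussed in this section (minor and major fault counts, scheduling priority, nice value, and thread count) are also visible in the /proc/<pid>/stat file documented in proc(5). The sketch below is a minimal, illustrative reader of that file; it is not the mechanism used by the GlancePlus collector, and the raw kernel values (for example, a nice range of -20 to 19) may be scaled differently from the values the metrics report.

    import os

    def read_proc_stat(pid=None):
        # Minimal /proc/<pid>/stat reader; field numbers follow proc(5).
        pid = pid if pid is not None else os.getpid()
        with open("/proc/%d/stat" % pid) as f:
            data = f.read()
        # The comm field (field 2) is parenthesized and may contain spaces, so
        # split on the last ')' before splitting the remaining fields.
        rest = data.rsplit(")", 1)[1].split()
        # rest[0] is field 3 (state); later fields are indexed relative to it.
        return {
            "state":    rest[0],        # field 3
            "minflt":   int(rest[7]),   # field 10: minor faults (compare PROC_MINOR_FAULT)
            "majflt":   int(rest[9]),   # field 12: major faults (compare PROC_MAJOR_FAULT)
            "priority": int(rest[15]),  # field 18: kernel scheduling priority
            "nice":     int(rest[16]),  # field 19: nice value (compare PROC_NICE_PRI)
            "threads":  int(rest[17]),  # field 20: thread count (compare PROC_THREAD_COUNT)
        }

    if __name__ == "__main__":
        print(read_proc_stat())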
PROC_PRI_WAIT_PCT ---------------------------------- The percentage of time during the interval the process or kernel thread was blocked on priority (waiting for its priority to become high enough to get the CPU). On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. PROC_PRI_WAIT_PCT_CUM ---------------------------------- The percentage of time the process or kernel thread was blocked on priority over the cumulative collection time. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. A percentage of time spent in a wait state is calculated as the time a kernel thread (or all kernel threads of a process) spent waiting in this state, divided by the alive time of the kernel thread (or all kernel threads of the process) during the interval. If this metric is reported for a kernel thread, the percentage value is for that single kernel thread. If this metric is reported for a process, the percentage value is calculated with the sum of the wait and alive times of all of its kernel threads. For example, if a process has 2 kernel threads, one sleeping for the entire interval and one waiting on terminal input for the interval, the process wait percent values will be 50% on Sleep and 50% on Terminal. The kernel thread wait values will be 100% on Sleep for the first kernel thread and 100% on Terminal for the second kernel thread. For another example, consider the same process as above, with 2 kernel threads, one of which was created half-way through the interval, and which then slept for the remainder of the interval. The other kernel thread was waiting for terminal input for half the interval, then used the CPU actively for the remainder of the interval. The process wait percent values will be 33% on Sleep and 33% on Terminal (each one third of the total alive time). The kernel thread wait values will be 100% on Sleep for the first kernel thread and 50% on Terminal for the second kernel thread. PROC_PRI_WAIT_TIME ---------------------------------- The time, in seconds, that the process or kernel thread was blocked on PRI (waiting for its priority to become high enough to get the CPU) during the interval. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval. PROC_PRI_WAIT_TIME_CUM ---------------------------------- The time, in seconds, that the process or kernel thread was blocked on PRI (waiting for its priority to become high enough to get the CPU) over the cumulative collection time. 
On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler.
The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last.
On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, and process collection time starts from the start time of the process or the measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HP-UX) has been up for 466 days, and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this.
On a threaded operating system, such as HP-UX 11.0 and beyond, process wait time is calculated by summing the wait times of its kernel threads. If this metric is reported for a kernel thread, the value is the wait time of that single kernel thread. If this metric is reported for a process, the value is the sum of the wait times of all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. For multi-threaded processes, the wait times can exceed the length of the measurement interval.

PROC_PROC_ARGV1
----------------------------------
The first argument (argv[1]) of the process argument list, or the second word of the command line, if present. (For kernel threads, if HP-UX/Linux Kernel 2.6 and above, this metric returns the value of the associated process.) The HP Performance Agent logs the first 32 characters of this metric.
For releases that support the parm file javaarg flag, this metric may not be the first argument. When javaarg=true, the value of this metric is replaced (for java processes only) by the java class or jar name. This can then be useful to construct parm file java application definitions using the argv1= keyword.

PROC_PROC_CMD
----------------------------------
The full command line with which the process was initiated. (For kernel threads, if HP-UX/Linux Kernel 2.6 and above, this metric returns the value of the associated process.)
On HP-UX, the maximum length returned depends upon the version of the OS, but typically up to 1020 characters are available. On other Unix systems, the maximum length is 4095 characters. On Linux, if the command string exceeds 4096 characters, the kernel instrumentation may not report any value. If the command line contains special characters, such as carriage return and tab, these characters are converted to printable representations.
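On Linux, the command line underlying PROC_PROC_CMD and PROC_PROC_ARGV1 can be inspected directly in /proc/<pid>/cmdline, where the arguments are separated by NUL bytes. The following sketch is illustrative only; it does not reproduce GlancePlus behavior such as the javaarg substitution or the length limits described above.

    import os

    def read_cmdline(pid=None):
        # Return (full command line, argv[1] or None) for a process, from /proc.
        pid = pid if pid is not None else os.getpid()
        with open("/proc/%d/cmdline" % pid, "rb") as f:
            raw = f.read()
        # Arguments are separated by NUL bytes; a trailing NUL leaves an empty
        # last element, which the filter below discards.
        argv = [a.decode("utf-8", "replace") for a in raw.split(b"\0") if a]
        full_cmd = " ".join(argv)
        argv1 = argv[1] if len(argv) > 1 else None
        return full_cmd, argv1

    if __name__ == "__main__":
        print(read_cmdline())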
PROC_PROC_ID
----------------------------------
The process ID number (or PID) of this process (or of the associated process for kernel threads, if HP-UX/Linux Kernel 2.6 and above) that is used by the kernel to uniquely identify the process. Process numbers are reused, so they only identify a process for its lifetime.
On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given.

PROC_PROC_NAME
----------------------------------
The process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) program name. It is limited to 16 characters. On Unix systems, this is derived from the first parameter to the exec(2) system call.
On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given.
On Windows, the “System Idle Process” is not reported by Perf Agent since Idle is a process that runs to occupy the processors when they are not executing other threads. Idle has one thread per processor.

PROC_REGION_FILENAME
----------------------------------
The file path that corresponds to the front store file of a memory region. For text and data regions, this is the name of the program; for shared libraries it is the library name.
Certain “special” names are displayed if there is no actual “front store” for a memory region. These special names correspond to the region type. A region shown without a front store name is a memory region created by the system call mmap(2).
If the file format includes an inode number, use the ncheck(1M) program to display the filename relative to the mount point. Sometimes files may be deleted before they are closed. In these cases, the process file table may still have the inode even though the file is not actually present and, as a result, ncheck will fail.

PROC_REGION_PRIVATE_SHARED_FLAG
----------------------------------
A text indicator of either private memory (Priv) or shared memory (Shared) for this memory region. Private memory is only being used by the current process. Shared memory is mapped into the address space of other processes.

PROC_REGION_PROT_FLAG
----------------------------------
The protection mode of the process memory segment. It represents Read/Write/eXecute permissions in the same way as ls(1) does for files. This metric is available only for regions that have a global protection mode. It is not available (“na”) for regions that use per-page protection.

PROC_REGION_TYPE
----------------------------------
A text name for the type of this memory region. It can be one of the following:
DATA    Data region
LIBDAT  Shared Library data
LIBTXT  Shared Library text
STACK   Stack region
TEXT    Text (that is, code)
On HP-UX, it can also be one of the following:
GRAPH   Frame buffer lock page
IOMAP   IO region (iomap)
MEMMAP  Memory-mapped file, which includes shared libraries (text and data), or memory created by calls to mmap(2)
NULLDR  Null pointer dereference shared page (see below)
RSESTA  Itanium Registered stack engine region
SIGSTK  Signal stack region
UAREA   User Area region
UNKNWN  Region of unknown type
On HP-UX, a whole page is allocated for NULL pointer dereferencing, which is reported as the NULLDR area. If the program is compiled with the “-z” option (which disallows NULL dereferencing), this area is missing. Shared libraries are accessed as memory mapped files, so that the code will show up as “MEMMAP/Shared” and data will show up as “MEMMAP/Priv”.
On SUN, it can also be one of the following:
BSS     Static initialized data
MEMMAP  Memory mapped files
NULLDR  Null pointer dereference shared page (see below)
SHMEM   Shared memory
UNKNWN  Region of unknown type
On SUN, programs might have an area for NULL pointer dereferencing, which is reported as the NULLDR area. Special segment types that are supported by the kernel and used for frame buffer devices or other purposes are typed as UNKNWN. The following kernel processes are examples of this: sched, pageout, and fsflush.
On AIX, as of mid-2010, the OS only provides information for text and data.

PROC_REGION_VIRT
----------------------------------
The size (in KBs unless otherwise indicated) of the virtual memory occupied by this memory region. This value is not affected by the reference count.
The number of references is a count of the number of attachments to the memory region. Attachments, for shared regions, may come from several processes sharing the same memory, a single process with multiple attachments, or combinations of these.
On AIX, as of mid-2010, the OS only provides information for text and data. Other sizes will always be zero. Note also that the total virtual size may not match the sum of the regions due to inconsistencies in the AIX measurement interfaces.

PROC_REGION_VIRT_ADDRS
----------------------------------
The virtual address of this memory region displayed in hexadecimal, showing the space and offset of the region. On HP-UX, this is a 64-bit (96-bit on a 64-bit OS) hexadecimal value indicating the space and space offset of the region.

PROC_REGION_VIRT_DATA
----------------------------------
The size (in KBs unless otherwise indicated) of the total virtual memory occupied by data regions of this process. This value is not affected by the reference count since all data regions are private.
This metric is specific to the process as a whole and will not change its value. If this metric is used in a glance adviser script, only pick up one value. Do not sum the values since the same value is shown for all regions.
On AIX, as of mid-2010, the OS only provides information for text and data. Other sizes will always be zero. Note also that the total virtual size may not match the sum of the regions due to inconsistencies in the AIX measurement interfaces.

PROC_REGION_VIRT_OTHER
----------------------------------
The size (in KBs unless otherwise indicated) of the total virtual memory occupied by regions of this process that are not text, data, stack, or shared memory. This value is not affected by the reference count.
This metric is specific to the process as a whole and will not change its value. If this metric is used in a glance adviser script, only pick up one value. Do not sum the values since the same value is shown for all regions.
The number of references is a count of the number of attachments to the memory region. Attachments, for shared regions, may come from several processes sharing the same memory, a single process with multiple attachments, or combinations of these.
On AIX, as of mid-2010, the OS only provides information for text and data. Other sizes will always be zero. Note also that the total virtual size may not match the sum of the regions due to inconsistencies in the AIX measurement interfaces.

PROC_REGION_VIRT_SHMEM
----------------------------------
The size (in KBs unless otherwise indicated) of the total virtual memory occupied by shared memory regions of this process.
Note that this memory is shared by other processes and this figure is reported in their metrics also. This value is not affected by the reference count. This metric is specific to the process as a whole and will not change its value. If this metric is used in a glance adviser script, only pick up one value. Do not sum the values since the same value is shown for all regions. The number of references is a count of the number of attachments to the memory region. Attachments, for shared regions, may come from several processes sharing the same memory, a single process with multiple attachments, or combinations of these. On AIX, as of mid-2010, the OS only provides information for text and data. Other sizes will always be zero. Note also that the total virtual size may not match the sum of the regions due to inconsistencies in the AIX measurement interfaces. PROC_REGION_VIRT_STACK ---------------------------------- The size (in KBs unless otherwise indicated) of the total virtual memory occupied by stack regions of this process. Stack regions are always private and will have a reference count of one. This metric is specific to the process as a whole and will not change its value. If this metric is used in a glance adviser script, only pick up one value. Do not sum the values since the same value is shown for all regions. On AIX, as of mid-2010, the OS only provides information for text and data. Other sizes will always be zero. Note also that the total virtual size may not match the sum of the regions due to inconsistencies in the AIX measurement interfaces. PROC_REGION_VIRT_TEXT ---------------------------------- The size (in KBs unless otherwise indicated) of the total virtual memory occupied by text regions of this process. This value is not affected by the reference count. This metric is specific to the process as a whole and will not change its value. If this metric is used in a glance adviser script, only pick up one value. Do not sum the values since the same value is shown for all regions. On AIX, as of mid-2010, the OS only provides information for text and data. Other sizes will always be zero. Note also that the total virtual size may not match the sum of the regions due to inconsistencies in the AIX measurement interfaces. PROC_RUN_TIME ---------------------------------- The elapsed time since a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) started, in seconds. This metric is less than the interval time if the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) was not alive during the entire first or last interval. On a threaded operating system such as HP-UX 11.0 and beyond, this metric is available for a process or kernel thread. PROC_SCHEDULER ---------------------------------- The scheduling policy for this process or kernel thread. On HP-UX, the available scheduling policies are: HPUX - Normal timeshare NOAGE - Timeshare without usage decay RTPRIO - HP-UX Real-time FIFO - Posix First In/First Out RR - Posix Round-Robin RR2 - Posix Round-Robin with a per-priority time slice interval On Linux, they are: TS - Normal timeshare FF - Posix First In/First Out RR - Posix Round-Robin B - Batch ISO - Reserved IDL - Idle On a threaded operating system, such as HP-UX 11.0 and beyond, this metric represents a kernel thread characteristic. If this metric is reported for a process, the value for its last executing kernel thread is given. 
For example, if a process has multiple kernel threads and kernel thread one is the last to execute during the interval, the metric value for kernel thread one is assigned to the process. PROC_STARTTIME ---------------------------------- The creation date and time of the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). PROC_STATE ---------------------------------- A text string summarizing the current state of a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above), either: new This is the first interval in which the process has been displayed. active Process is continuing. died Process expired during the interval. PROC_STATE_FLAG ---------------------------------- The Unix STATE flag of the process (or kernel thread, if Linux Kernel 2.6 and above) during the interval. PROC_STOP_REASON ---------------------------------- A text string describing what caused the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) to stop executing. For example, if the process is waiting for a CPU while higher priority processes are executing, then its block reason is PRI. A complete list of block reasons follows: String Reason for Process Block ------------------------------------ died Process terminated during the interval. new Process was created (via the exec() system call) during the interval. NONE Process is ready to run. It is not apparent that the process is blocked. OTHER Waiting for a reason not decipherable by the measurement software. PRI Process is on the run queue. SLEEP Waiting for an event to complete. TRACE Received a signal to stop because parent is tracing this process. ZOMB Process has terminated and the parent is not waiting. PROC_STOP_REASON_FLAG ---------------------------------- A numeric value for the stop reason. This is used by scopeux instead of the ASCII string returned by PROC_STOP_REASON in order to conserve space in the log file. On a threaded operating system, such as HP-UX 11.0 and beyond, this metric represents a kernel thread characteristic. If this metric is reported for a process, the value for its last executing kernel thread is given. For example, if a process has multiple kernel threads and kernel thread one is the last to execute during the interval, the metric value for kernel thread one is assigned to the process. PROC_THREAD_COUNT ---------------------------------- The total number of kernel threads for the current process. On Linux systems with Kernel 2.5 and below, every thread has its own process ID, so this metric will always be 1. On Solaris systems, this metric reflects the total number of Light Weight Processes (LWPs) associated with the process. PROC_THREAD_ID ---------------------------------- The thread ID number of this kernel thread, used to uniquely identify it. On Linux systems, this metric is available from Linux Kernel 2.6 onwards. PROC_TIME ---------------------------------- The time the data for the process (or kernel threads, if HP-UX/Linux Kernel 2.6 and above) was collected, in local time. PROC_TOP_CPU_INDEX ---------------------------------- The index of the process which consumed the most CPU during the interval. From this index, the process PID, process name, and CPU utilization can be obtained. (Even for kernel threads, if HP-UX/Linux Kernel 2.6 and above, this metric returns the index of the associated process.) This metric is used by the Performance Tools to index into the Data collection interface’s internal table. This is not a metric that will be interesting to Tool users.
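The following minimal C sketch is an illustration only and is not part of the GlancePlus collection interface; it shows where the raw values behind PROC_PROC_ID, PROC_THREAD_ID, and PROC_THREAD_COUNT come from on a Linux 2.6 or later system: the process ID from getpid(2), and one /proc/<pid>/task subdirectory per kernel thread. The use of /proc/self here is an assumption made for the example.

/* Minimal illustration (Linux 2.6+ assumed): the kernel exposes the values
 * behind PROC_PROC_ID, PROC_THREAD_COUNT and PROC_THREAD_ID through /proc.
 * This is not how GlancePlus collects its data; it only shows where the raw
 * numbers come from on Linux. */
#include <stdio.h>
#include <dirent.h>
#include <unistd.h>

int main(void)
{
    /* PROC_PROC_ID: the process ID of the current process. */
    printf("PID: %d\n", (int)getpid());

    /* PROC_THREAD_COUNT / PROC_THREAD_ID: each subdirectory of
     * /proc/<pid>/task is one kernel thread; its name is the thread ID. */
    DIR *dir = opendir("/proc/self/task");
    if (dir == NULL) {
        perror("opendir /proc/self/task");
        return 1;
    }

    int threads = 0;
    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        if (ent->d_name[0] == '.')
            continue;               /* skip "." and ".." */
        printf("thread ID: %s\n", ent->d_name);
        threads++;
    }
    closedir(dir);
    printf("thread count: %d\n", threads);
    return 0;
}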
PROC_TOP_DISK_INDEX ---------------------------------- The index of the process which did the most physical IOs during the last interval. On HP-UX, note that NFS mounted disks are not considered in this calculation. With this index, the PID, process name, and IOs per second can be obtained. This metric is used by the Performance Tools to index into the Data collection interface’s internal table. This is not a metric that will be interesting to Tool users. PROC_TTY ---------------------------------- The controlling terminal for a process (or kernel threads, if HP-UX/Linux Kernel 2.6 and above). This field is blank if there is no controlling terminal. On HP-UX, Linux, and AIX, this is the same as the “TTY” field of the ps command. On all other Unix systems, the controlling terminal name is found by searching the directories provided in the /etc/ttysrch file. See the man page ttysrch(4) for details. The matching criteria field (“M”, “F” or “I” values) of the ttysrch file is ignored. If a terminal is not found in one of the ttysrch file directories, the following directories are searched in the order listed here: “/dev”, “/dev/pts”, “/dev/term” and “/dev/xt”. When a match is found in one of the “/dev” subdirectories, “/dev/” is not displayed as part of the terminal name. If no match is found in the directory searches, the major and minor numbers of the controlling terminal are displayed. In most cases, this value is the same as the “TTY” field of the ps command. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_TTY_DEV ---------------------------------- The device number of the controlling terminal for a process (or kernel threads, if HP-UX/Linux Kernel 2.6 and above). On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_UID ---------------------------------- The real UID (user ID number) of a process (or kernel threads, if HP-UX/Linux Kernel 2.6 and above). This is the UID returned from the getuid system call. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_USER_NAME ---------------------------------- On Unix systems, this is the real user name of a process or the login account (from /etc/passwd) of a process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above). If more than one account is listed in /etc/passwd with the same user ID (uid) field, the first one is used. If an account cannot be found that matches the uid field, then the uid number is returned. This would occur if the account was removed after a process was started. On Windows, this is the process owner account name, without the domain name this account resides in. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_VOLUNTARY_CSWITCH ---------------------------------- The number of times a process (or kernel thread, if HP-UX) has given up the CPU before an external event preempted it during the interval. Examples of voluntary switches include calls to sleep(2) and select(2). On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread.
If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. PROC_VOLUNTARY_CSWITCH_CUM ---------------------------------- The number of times a process (or kernel thread, if HP-UX) has given up the CPU before an external event preempted it over the cumulative collection time. Examples of voluntary switches include calls to sleep(2) and select(2). The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On multi-threaded operating systems, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On Linux, if thread collection is disabled, only the first thread of each multi-threaded process is taken into account. This metric will be NA on kernels older than 2.6.23 or kernels not including CFS, the Completely Fair Scheduler. TBL_BUFFER_HEADER_AVAIL ---------------------------------- This is the maximum number of headers pointing to buffers in the file system buffer cache. On HP-UX, this is the configured number, not the maximum number. This can be set by the “nbuf” kernel configuration parameter. nbuf is used to determine the maximum total number of buffers on the system. On HP-UX, these are used to manage the buffer cache, which is used for all block IO operations. When nbuf is zero, this value depends on the “bufpages” size of memory (see System Administration Tasks manual). A value of “na” indicates either a dynamic buffer cache configuration, or the nbuf kernel parameter has been left unconfigured and allowed to “float” with the bufpages parameter. 
This is not a maximum available value in a fixed buffer cache configuration. Instead, it is the initial configured value. The actual number of used buffer headers can grow beyond this initial value. On SUN, this value is “nbuf”. On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The “nbuf” value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. TBL_BUFFER_HEADER_USED ---------------------------------- The number of buffer headers currently in use. On HP-UX, this dynamic value will rarely change once the system boots. During the system bootup, the kernel allocates a large number of buffer headers and the count is likely to stay at that value after the bootup completes. If the value increases beyond the initial boot value, it will not decrease. Buffer headers are allocated in kernel memory, not user memory, and therefore, will not decrease. This value can exceed the available or configured number of buffer headers in a fixed buffer cache configuration. On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The “nbuf” value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_BUFFER_HEADER_USED_HIGH ---------------------------------- The largest number of buffer headers used in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The “nbuf” value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_BUFFER_HEADER_UTIL ---------------------------------- The percentage of buffer headers currently used. On HP-UX, a value of “na” indicates either a dynamic buffer cache configuration, or the nbuf kernel parameter has been left unconfigured and allowed to “float” with the bufpages parameter. On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The “nbuf” value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_BUFFER_HEADER_UTIL_HIGH ---------------------------------- The highest percentage of buffer header used in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On HP-UX, a value of “na” indicates either a dynamic buffer cache configuration, or the nbuf kernel parameter has been left unconfigured and allowed to “float” with the bufpages parameter. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_FILE_LOCK_AVAIL ---------------------------------- The configured number of file or record locks that can be allocated on the system. Files and/or records are locked by calls to lockf(2). On Linux kernel versions 2.4 and above, the number of available file or record locks is a dynamic value that can grow up to the maximum unsigned long value. TBL_FILE_LOCK_USED ---------------------------------- The number of file or record locks currently in use. One file can have multiple locks. Files and/or records are locked by calls to lockf(2). On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. On Solaris non-global zones, this metric is N/A. TBL_FILE_LOCK_USED_HIGH ---------------------------------- The highest number of file locks used by the file system in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_FILE_LOCK_UTIL ---------------------------------- The percentage of configured file or record locks currently in use. On Linux 2.4 and above kernel versions, this metric may not give a correct picture because the number of available file or record locks may change dynamically and can grow up to the maximum unsigned long value. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater.
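As a simple illustration of what the TBL_FILE_LOCK_* metrics count, the following minimal C sketch takes and releases a single record lock with lockf(2). The file path and the 30-second hold are arbitrary choices made for the example; while the lock is held it contributes one entry to TBL_FILE_LOCK_USED (on Linux, such locks are also visible in /proc/locks).

/* Minimal sketch: create one file/record lock of the kind counted by the
 * TBL_FILE_LOCK_* metrics.  The path below is arbitrary (illustration only). */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    int fd = open("/tmp/lock_example", O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Lock the whole file (len == 0 means "to end of file").  While this
     * lock is held it is one "file or record lock currently in use". */
    if (lockf(fd, F_LOCK, 0) != 0) {
        perror("lockf F_LOCK");
        return 1;
    }

    sleep(30);                  /* hold the lock so it can be observed */

    lockf(fd, F_ULOCK, 0);      /* release the lock */
    close(fd);
    return 0;
}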
TBL_FILE_LOCK_UTIL_HIGH ---------------------------------- The highest percentage of configured file or record locks that have been in use during any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_FILE_TABLE_AVAIL ---------------------------------- The number of entries in the file table. On HP-UX and AIX, this is the configured maximum number of the file table entries used by the kernel to manage open file descriptors. On HP-UX, this is the sum of the “nfile” and “file_pad” values used in kernel generation. On SUN, this is the number of entries in the file cache. This is a size. All entries are not always in use. The cache size is dynamic. Entries in this cache are used to manage open file descriptors. They are reused as files are closed and new ones are opened. The size of the cache will go up or down in chunks as more or less space is required in the cache. On AIX, the file table entries are dynamically allocated by the kernel if there is no entry available. These entries are allocated in chunks. TBL_FILE_TABLE_USED ---------------------------------- The number of entries in the file table currently used by file descriptors. On SUN, this is the number of file cache entries currently used by file descriptors. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_FILE_TABLE_USED_HIGH ---------------------------------- The highest number of entries in the file table that is used by file descriptors in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. 
Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_FILE_TABLE_UTIL ---------------------------------- The percentage of file table entries currently used by file descriptors. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_FILE_TABLE_UTIL_HIGH ---------------------------------- The highest percentage of entries in the file table used by file descriptors in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_INODE_CACHE_AVAIL ---------------------------------- On HP-UX, this is the configured total number of entries for the incore inode tables on the system. For HP-UX releases prior to 11.2x, this value reflects only the HFS inode table. For subsequent HP-UX releases, this value is the sum of inode tables for both HFS and VxFS file systems (ninode plus vxfs_ninode). On HP-UX, file system directory activity is done through inodes that are stored on disk. The kernel keeps a memory cache of active and recently accessed inodes to reduce disk IOs. When a file is opened through a pathname, the kernel converts the pathname to an inode number and attempts to obtain the inode information from the cache based on the filesystem type. If the inode entry is not in the cache, the inode is read from disk into the inode cache. On HP-UX, the number of used entries in the inode caches are usually at or near the capacity. 
This does not necessarily indicate that the configured sizes are too small because the tables may contain recently used inodes and inodes referenced by entries in the directory name lookup cache. When a new inode cache entry is required and a free entry does not exist, inactive entries referenced by the directory name cache are used. If freeing inode entries that are only referenced by the directory name cache does not create enough free space, the message “inode: table is full” may appear on the console. If this occurs, increase the size of the kernel parameter, ninode. Low directory name cache hit ratios may also indicate an underconfigured inode cache. On HP-UX, the default formula for the ninode size is: ninode = ((nproc+16+maxusers)+32+(2*npty)+(4*num_clients)) On all other Unix systems, this is the number of entries in the inode cache. This is a size. All entries are not always in use. The cache size is dynamic. Entries in this cache are reused as files are closed and new ones are opened. The size of the cache will go up or down in chunks as more or less space is required in the cache. Inodes are used to store information about files within the file system. Every file has at least two inodes associated with it (one for the directory and one for the file itself). The information stored in an inode includes the owners, timestamps, size, and an array of indices used to translate logical block numbers to physical sector numbers. There is a separate inode maintained for every view of a file, so if two processes have the same file open, they both use the same directory inode, but separate inodes for the file. TBL_INODE_CACHE_HIGH ---------------------------------- On HP-UX and OSF1, this is the highest number of inodes that have been used in any one interval over the cumulative collection time. On HP-UX, file system directory activity is done through inodes that are stored on disk. The kernel keeps a memory cache of active and recently accessed inodes to reduce disk IOs. When a file is opened through a pathname, the kernel converts the pathname to an inode number and attempts to obtain the inode information from the cache based on the filesystem type. If the inode entry is not in the cache, the inode is read from disk into the inode cache. On HP-UX, the number of used entries in the inode caches is usually at or near capacity. This does not necessarily indicate that the configured sizes are too small because the tables may contain recently used inodes and inodes referenced by entries in the directory name lookup cache. When a new inode cache entry is required and a free entry does not exist, inactive entries referenced by the directory name cache are used. If freeing inode entries that are only referenced by the directory name cache does not create enough free space, the message “inode: table is full” may appear on the console. If this occurs, increase the size of the kernel parameter, ninode. Low directory name cache hit ratios may also indicate an underconfigured inode cache. On HP-UX, the default formula for the ninode size is: ninode = ((nproc+16+maxusers)+32+(2*npty)+(4*num_clients)) On all other Unix systems, this is the largest size of the inode cache in any one interval over the cumulative collection time.
The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_INODE_CACHE_USED ---------------------------------- The number of inode cache entries currently in use. On HP-UX, this is the number of “non-free” inodes currently used. Since the inode table contains recently closed inodes as well as open inodes, the table often appears to be fully utilized. When a new entry is needed, one can usually be found by reusing one of the recently closed inode entries. On HP-UX, file system directory activity is done through inodes that are stored on disk. The kernel keeps a memory cache of active and recently accessed inodes to reduce disk IOs. When a file is opened through a pathname, the kernel converts the pathname to an inode number and attempts to obtain the inode information from the cache based on the filesystem type. If the inode entry is not in the cache, the inode is read from disk into the inode cache. On HP-UX, the number of used entries in the inode caches is usually at or near capacity. This does not necessarily indicate that the configured sizes are too small because the tables may contain recently used inodes and inodes referenced by entries in the directory name lookup cache. When a new inode cache entry is required and a free entry does not exist, inactive entries referenced by the directory name cache are used. If freeing inode entries that are only referenced by the directory name cache does not create enough free space, the message “inode: table is full” may appear on the console. If this occurs, increase the size of the kernel parameter, ninode. Low directory name cache hit ratios may also indicate an underconfigured inode cache. On HP-UX, the default formula for the ninode size is: ninode = ((nproc+16+maxusers)+32+(2*npty)+(4*num_clients)) On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_MSG_BUFFER_ACTIVE ---------------------------------- The current active total size (in KBs unless otherwise specified) of all IPC message buffers. These buffers are created by msgsnd(2) calls and released by msgrcv(2) calls.
This metric only counts the active message queue buffers, which means that a msgsnd(2) call has been made and the corresponding msgrcv(2) has not yet been done on the queue entry, or a msgrcv(2) call is waiting on a message queue entry. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_MSG_BUFFER_AVAIL ---------------------------------- The maximum achievable size (in KBs unless otherwise specified) of the message queue buffer pool on the system. Each message queue can contain many buffers, which are created whenever a program issues a msgsnd(2) call. Each of these buffers is allocated from this buffer pool. Refer to the ipcs(1) man page for more information. This value is determined by taking the product of the three kernel configuration variables “msgseg”, “msgssz” and “msgmni”. If this value exceeds 2048 GB, “o/f” may be reported on some platforms. On SUN, the InterProcess Communication facilities are dynamically loadable. If the amount available is zero, this facility was not loaded when data collection began, and its data is not obtainable. The data collector is unable to determine that a facility has been loaded once data collection has started. If you know a new facility has been loaded, restart the data collection, and the data for that facility will be collected. See ipcs(1) to report on interprocess communication resources. TBL_MSG_BUFFER_HIGH ---------------------------------- The largest size (in KBs unless otherwise specified) of the message queues in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_MSG_BUFFER_USED ---------------------------------- The current total size (in KBs unless otherwise specified) of all IPC message buffers. These buffers are created by msgsnd(2) calls and released by msgrcv(2) calls. On HP-UX and OSF1, this field corresponds to the CBYTES field of the “ipcs -qo” command. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_MSG_TABLE_ACTIVE ---------------------------------- The number of message queues currently active.
A message queue is allocated by a program using the msgget(2) call. This metric returns only the entries in the message queue currently active. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_MSG_TABLE_AVAIL ---------------------------------- The configured maximum number of message queues that can be allocated on the system. A message queue is allocated by a program using the msgget(2) call. Refer to the ipcs(1) man page for more information. On SUN, the InterProcess Communication facilities are dynamically loadable. If the amount available is zero, this facility was not loaded when data collection began, and its data is not obtainable. The data collector is unable to determine that a facility has been loaded once data collection has started. If you know a new facility has been loaded, restart the data collection, and the data for that facility will be collected. See ipcs(1) to report on interprocess communication resources. TBL_MSG_TABLE_USED ---------------------------------- On HP-UX, this is the number of message queues currently in use. On all other Unix systems, this is the number of message queues that have been built. A message queue is allocated by a program using the msgget(2) call. See ipcs(1) to list the message queues. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_MSG_TABLE_UTIL ---------------------------------- The percentage of configured message queues currently in use. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_MSG_TABLE_UTIL_HIGH ---------------------------------- The highest percentage of configured message queues that have been in use during any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_NUM_NFSDS ---------------------------------- The number of NFS servers configured. This is the value “nservers” passed to nfsd (the NFS daemon) upon startup. If no value is specified, the default is one. This value determines the maximum number of concurrent NFS requests that the server can handle. 
See the man page for “nfsd”. TBL_SEM_TABLE_ACTIVE ---------------------------------- The number of semaphore identifiers currently active. This means that the semaphores are currently locked by processes. Any new process requesting this semaphore is blocked if the IPC_NOWAIT flag is not set. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SEM_TABLE_AVAIL ---------------------------------- The configured number of semaphore identifiers (sets) that can be allocated on the system. On SUN, the InterProcess Communication facilities are dynamically loadable. If the amount available is zero, this facility was not loaded when data collection began, and its data is not obtainable. The data collector is unable to determine that a facility has been loaded once data collection has started. If you know a new facility has been loaded, restart the data collection, and the data for that facility will be collected. See ipcs(1) to report on interprocess communication resources. TBL_SEM_TABLE_USED ---------------------------------- On HP-UX, this is the number of semaphore identifiers currently in use. On all other Unix systems, this is the number of semaphore identifiers that have been built. A semaphore identifier is allocated by a program using the semget(2) call. See ipcs(1) to list semaphores. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SEM_TABLE_UTIL ---------------------------------- The percentage of configured semaphore identifiers currently in use. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SEM_TABLE_UTIL_HIGH ---------------------------------- The highest percentage of configured semaphore identifiers that have been in use during any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_ACTIVE ---------------------------------- The size (in KBs unless otherwise specified) of the shared memory segments that have running processes attached to them.
This may be less than the amount of shared memory used on the system because a shared memory segment may exist and not have any process attached to it. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_AVAIL ---------------------------------- The maximum achievable size (in MB unless otherwise specified) of the shared memory pool on the system. This is a theoretical maximum determined by multiplying the configured maximum number of shared memory entries (shmmni) by the maximum size of each shared memory segment (shmmax). Your system may not have enough virtual memory to actually reach this theoretical limit - one cannot allocate more shared memory than the available reserved space configured for virtual memory. It should be noted that this value does not include any architectural limitations. (For example, on a 32-bit kernel, there is an addressing limit of 1.75 GB.). If the value adds up to a value > 2048TB, “o/f” may be reported on some platforms. On SUN, the InterProcess Communication facilities are dynamically loadable. If the amount available is zero, this facility was not loaded when data collection began, and its data is not obtainable. The data collector is unable to determine that a facility has been loaded once data collection has started. If you know a new facility has been loaded, restart the data collection, and the data for that facility will be collected. See ipcs(1) to report on interprocess communication resources. TBL_SHMEM_HIGH ---------------------------------- The highest size (in KBs unless otherwise specified) of shared memory used in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_TABLE_ACTIVE ---------------------------------- The number of shared memory segments that have running processes attached to them. This may be less than the number of shared memory segments that have been allocated. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. 
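As an illustration of the shared memory metrics in this group (the TBL_SHMEM_* entries above and below), the following minimal C sketch creates, attaches, and removes one System V shared memory segment. The key, size, and permissions are arbitrary values chosen for the example; a segment created this way is counted by TBL_SHMEM_TABLE_USED and TBL_SHMEM_USED, and while a process is attached it also counts toward TBL_SHMEM_TABLE_ACTIVE and TBL_SHMEM_ACTIVE.

/* Minimal sketch: allocate, attach, and remove one System V shared memory
 * segment of the kind counted by the TBL_SHMEM_* metrics.  Key and size are
 * arbitrary values chosen for illustration. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* shmget() creates a segment: it now counts toward TBL_SHMEM_TABLE_USED
     * and its size toward TBL_SHMEM_USED. */
    int shmid = shmget(IPC_PRIVATE, 64 * 1024, IPC_CREAT | 0600);
    if (shmid < 0) {
        perror("shmget");
        return 1;
    }

    /* shmat() attaches it: the segment now also counts toward
     * TBL_SHMEM_TABLE_ACTIVE and TBL_SHMEM_ACTIVE. */
    char *addr = shmat(shmid, NULL, 0);
    if (addr == (char *)-1) {
        perror("shmat");
        return 1;
    }
    strcpy(addr, "hello");

    shmdt(addr);                       /* detach */
    shmctl(shmid, IPC_RMID, NULL);     /* remove the segment */
    return 0;
}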
TBL_SHMEM_TABLE_AVAIL ---------------------------------- The configured number of shared memory segments that can be allocated on the system. On SUN, the InterProcess Communication facilities are dynamically loadable. If the amount available is zero, this facility was not loaded when data collection began, and its data is not obtainable. The data collector is unable to determine that a facility has been loaded once data collection has started. If you know a new facility has been loaded, restart the data collection, and the data for that facility will be collected. See ipcs(1) to report on interprocess communication resources. TBL_SHMEM_TABLE_USED ---------------------------------- On HP-UX, this is the number of shared memory segments currently in use. On all other Unix systems, this is the number of shared memory segments that have been built. This includes shared memory segments with no processes attached to them. A shared memory segment is allocated by a program using the shmget(2) call. Also refer to ipcs(1). On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_TABLE_UTIL ---------------------------------- The percentage of configured shared memory segments currently in use. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_TABLE_UTIL_HIGH ---------------------------------- The highest percentage of configured shared memory segments that have been in use during any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_USED ---------------------------------- The size (in KBs unless otherwise specified) of the shared memory segments. Additionally, it includes memory segments to which no processes are attached. If a shared memory segment has zero attachments, the space may not always be allocated in memory. See ipcs(1) to list shared memory segments. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. 
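Similarly, the message queue metrics described earlier (TBL_MSG_TABLE_* and TBL_MSG_BUFFER_*) count System V message queues and the buffers queued on them. The following minimal C sketch is an illustration only; the message type, text, and permissions are arbitrary example values. It creates a queue with msgget(2), sends one message with msgsnd(2), receives it with msgrcv(2), and removes the queue.

/* Minimal sketch: create a System V message queue and send/receive one
 * message, the operations behind the TBL_MSG_TABLE_* and TBL_MSG_BUFFER_*
 * metrics described earlier.  The message text is arbitrary (illustration). */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msgbuf_example {
    long mtype;
    char mtext[64];
};

int main(void)
{
    /* msgget() allocates a queue: one entry in TBL_MSG_TABLE_USED. */
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (qid < 0) {
        perror("msgget");
        return 1;
    }

    struct msgbuf_example msg = { .mtype = 1 };
    strcpy(msg.mtext, "hello");

    /* msgsnd() places a buffer on the queue; until msgrcv() retrieves it,
     * its bytes are counted by TBL_MSG_BUFFER_USED/ACTIVE. */
    if (msgsnd(qid, &msg, sizeof(msg.mtext), 0) != 0) {
        perror("msgsnd");
        return 1;
    }
    if (msgrcv(qid, &msg, sizeof(msg.mtext), 1, 0) < 0) {
        perror("msgrcv");
        return 1;
    }

    msgctl(qid, IPC_RMID, NULL);   /* remove the queue */
    return 0;
}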
TTBIN_TRANS_COUNT TT_CLIENT_BIN_TRANS_COUNT ---------------------------------- The number of completed transactions in this range during the last interval. TTBIN_TRANS_COUNT_CUM TT_CLIENT_BIN_TRANS_COUNT_CUM ---------------------------------- The number of completed transactions in this range over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. TTBIN_UPPER_RANGE ---------------------------------- The upper range (transaction time) for this TT bin. There are a maximum of nine user-defined transaction response time bins (TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is the overflow bin and will always have a value of -2 (overflow). Note that the values specified in the transaction configuration file cannot exceed 2147483.6, which is the number of seconds in 24.85 days. If the user specifies any values greater than 2147483.6, the numbers reported for those bins or Service Level Objectives (SLO) will be -2. TT_ABORT TT_CLIENT_ABORT ---------------------------------- The number of aborted transactions during the last interval for this transaction. TT_ABORT_CUM TT_CLIENT_ABORT_CUM ---------------------------------- The number of aborted transactions over the cumulative collection time for this transaction. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. TT_ABORT_WALL_TIME TT_CLIENT_ABORT_WALL_TIME ---------------------------------- The total time, in seconds, of all aborted transactions during the last interval for this transaction. TT_ABORT_WALL_TIME_CUM TT_CLIENT_ABORT_WALL_TIME_CUM ---------------------------------- The total time, in seconds, of all aborted transactions over the cumulative collection time for this transaction class. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. TT_APPNO ---------------------------------- The registered ARM Application/User ID for this transaction class. TT_APP_NAME ---------------------------------- The registered ARM Application name. TT_CLIENT_ADDRESS TT_INSTANCE_CLIENT_ADDRESS ---------------------------------- The correlator address. This is the address where the child transaction originated. TT_CLIENT_ADDRESS_FORMAT TT_INSTANCE_CLIENT_ADDRESS_FORMAT ---------------------------------- The correlator address format. This shows the protocol family for the client network address. Refer to the ARM API Guide for the list and description of supported address formats. TT_CLIENT_CORRELATOR_COUNT ---------------------------------- The number of client or child transaction correlators this transaction has started over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. 
On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. TT_CLIENT_TRAN_ID TT_INSTANCE_CLIENT_TRAN_ID ---------------------------------- A numerical ID that uniquely identifies the transaction class in this correlator. TT_COUNT TT_CLIENT_COUNT ---------------------------------- The number of completed transactions during the last interval for this transaction. TT_COUNT_CUM TT_CLIENT_COUNT_CUM ---------------------------------- The number of completed transactions over the cumulative collection time for this transaction. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. TT_FAILED TT_CLIENT_FAILED ---------------------------------- The number of Failed transactions during the last interval for this transaction name. TT_FAILED_CUM TT_CLIENT_FAILED_CUM ---------------------------------- The number of failed transactions over the cumulative collection time for this transaction name. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. 
On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. TT_FAILED_WALL_TIME TT_CLIENT_FAILED_WALL_TIME ---------------------------------- The total time, in seconds, of all failed transactions during the last interval for this transaction name. TT_FAILED_WALL_TIME_CUM TT_CLIENT_FAILED_WALL_TIME_CUM ---------------------------------- The total time, in seconds, of all failed transactions over the cumulative collection time for this transaction name. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. TT_INFO ---------------------------------- The registered ARM Transaction Information for this transaction. TT_INPROGRESS_COUNT ---------------------------------- The number of transactions in progress (started, but not stopped) at the end of the interval for this transaction class. TT_INSTANCE_ID ---------------------------------- A numerical ID that uniquely identifies this transaction instance at the end of the interval. TT_INSTANCE_PROC_ID ---------------------------------- The ID of the process that started or last updated the transaction instance. TT_INSTANCE_START_TIME ---------------------------------- The time this transaction instance started. TT_INSTANCE_STOP_TIME ---------------------------------- The time this transaction instance stopped. If the transaction instance is currently active, the value returned will be -1. 
It will be shown as “na” in Glance and GPM to indicate that the transaction instance did not stop during the interval. TT_INSTANCE_THREAD_ID ---------------------------------- The ID of the kernel thread that started or last updated the transaction instance. TT_INSTANCE_UPDATE_COUNT ---------------------------------- The number of times this transaction instance called update since the start of this transaction instance. TT_INSTANCE_UPDATE_TIME ---------------------------------- The time this transaction instance last called update. If the transaction instance is currently active, the value returned will be -1. It will be shown as “na” in Glance and GPM to indicate that a call to update did not occur during the interval. TT_INSTANCE_WALL_TIME ---------------------------------- The elapsed time since this transaction instance was started. TT_INTERVAL TT_CLIENT_INTERVAL ---------------------------------- The amount of time in the collection interval. TT_INTERVAL_CUM TT_CLIENT_INTERVAL_CUM ---------------------------------- The amount of time over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. TT_MEASUREMENT_COUNT ---------------------------------- The number of user defined measurements for this transaction class. TT_NAME ---------------------------------- The registered transaction name for this transaction. TT_SLO_COUNT TT_CLIENT_SLO_COUNT ---------------------------------- The number of completed transactions that violated the defined Service Level Objective (SLO) by exceeding the SLO threshold time during the interval. TT_SLO_COUNT_CUM TT_CLIENT_SLO_COUNT_CUM ---------------------------------- The number of completed transactions that violated the defined Service Level Objective by exceeding the SLO threshold time over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. 
On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HP-UX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. TT_SLO_PERCENT ---------------------------------- The percentage of transactions that violate service level objectives. TT_SLO_THRESHOLD ---------------------------------- The upper range (transaction time) of the Service Level Objective (SLO) threshold value. This value is used to count the number of transactions that exceed this user-supplied transaction time value. TT_TRAN_1_MIN_RATE ---------------------------------- For this transaction name, the number of completed transactions calculated as a 1-minute rate. For example, if you completed five of these transactions in a 5-minute window, the rate is one transaction per minute. TT_TRAN_ID ---------------------------------- The registered ARM Transaction ID for this transaction class as returned by arm_getid(). A unique transaction ID is returned for a unique application ID (returned by arm_init), transaction name, and metadata buffer contents. TT_UID ---------------------------------- The registered ARM Transaction User ID for this transaction name. TT_UNAME ---------------------------------- The registered ARM Transaction User Name for this transaction. If the arm_init function has NULL for the appl_user_id field, then the user name is blank. Otherwise, if "*" was specified, then the user name is displayed. For example, to show the user name for the armsample1 program, use: appl_id = arm_init("armsample1","*",0,0,0); To ignore the user name for the armsample1 program, use: appl_id = arm_init("armsample1",NULL,0,0,0); TT_UPDATE TT_CLIENT_UPDATE ---------------------------------- The number of updates during the last interval for this transaction class. This count includes update calls for completed and in-progress transactions. TT_UPDATE_CUM TT_CLIENT_UPDATE_CUM ---------------------------------- The number of updates over the cumulative collection time for this transaction class. This count includes update calls for completed and in-progress transactions. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. TT_USER_MEASUREMENT_AVG TT_INSTANCE_USER_MEASUREMENT_AVG TT_CLIENT_USER_MEASUREMENT_AVG ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the average counter differences of the transaction or transaction instance during the last interval. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this returns the average of the values passed on any ARM call for the transaction or transaction instance during the last interval. TT_USER_MEASUREMENT_MAX TT_INSTANCE_USER_MEASUREMENT_MAX TT_CLIENT_USER_MEASUREMENT_MAX ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the highest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this metric returns the highest value passed on any ARM call over the life of the transaction or transaction instance. TT_USER_MEASUREMENT_MIN TT_INSTANCE_USER_MEASUREMENT_MIN TT_CLIENT_USER_MEASUREMENT_MIN ---------------------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the lowest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this metric returns the lowest value passed on any ARM call over the life of the transaction or transaction instance. TT_USER_MEASUREMENT_NAME TT_INSTANCE_USER_MEASUREMENT_NAME TT_CLIENT_USER_MEASUREMENT_NAME ---------------------------------- The name of the user defined transactional measurement. The length of the string complies with the ARM 2.0 standard, which is 44 characters long (there are 43 usable characters since this is a NULL terminated character string). TT_USER_MEASUREMENT_STRING1024_VALUE TT_INSTANCE_USER_MEASUREMENT_STRING1024_VALUE TT_CLIENT_USER_MEASUREMENT_STRING1024_VALUE ---------------------------------- The last value of the user defined measurement of type string 1024. This type is not implemented and the value is always “na”. TT_USER_MEASUREMENT_STRING32_VALUE TT_INSTANCE_USER_MEASUREMENT_STRING32_VALUE TT_CLIENT_USER_MEASUREMENT_STRING32_VALUE ---------------------------------- The last value of the user defined measurement of type string 32. 
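For orientation, the TT_* metrics in this section are driven by ARM instrumentation calls made by the application. The fragment below is only a sketch of that call sequence, written in the same abbreviated style as the arm_init() example under TT_UNAME; the transaction name, the untyped handle variables, and the use of status value 0 for a good stop are illustrative assumptions — see the ARM API Guide for the exact prototypes and status constants.

    appl_id = arm_init("armsample1", "*", 0, 0, 0);          /* register the application             */
    tran_id = arm_getid(appl_id, "db_update", "", 0, 0, 0);  /* register the transaction class       */
    handle  = arm_start(tran_id, 0, 0, 0);                   /* begin an instance (TT_INPROGRESS_COUNT) */
    /* ... the work being measured ... */
    arm_update(handle, 0, 0, 0);                             /* counted in TT_UPDATE                 */
    arm_stop(handle, 0, 0, 0, 0);                            /* status 0 = good stop (TT_COUNT);     */
                                                             /* abort and failed stops feed          */
                                                             /* TT_ABORT and TT_FAILED               */
    arm_end(appl_id, 0, 0, 0);                               /* deregister at application exit       */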
TT_USER_MEASUREMENT_TYPE TT_INSTANCE_USER_MEASUREMENT_TYPE TT_CLIENT_USER_MEASUREMENT_TYPE ---------------------------------- The type of the user defined transactional measurement. 1 = ARM_COUNTER32 2 = ARM_COUNTER64 3 = ARM_CNTRDIVR32 4 = ARM_GAUGE32 5 = ARM_GAUGE64 6 = ARM_GAUGEDIVR32 7 = ARM_NUMERICID32 8 = ARM_NUMERICID64 9 = ARM_STRING8 (max 8 chars) 10 = ARM_STRING32 (max 32 chars) 11 = ARM_STRING1024 (max 1024 char -- not implemented) TT_USER_MEASUREMENT_VALUE TT_INSTANCE_USER_MEASUREMENT_VALUE TT_CLIENT_USER_MEASUREMENT_VALUE ---------------------------------- The last value of the user defined measurement of type counter, gauge, numeric ID, or string 8. Both 32 and 64 bit numeric types are returned as 64 bit values. TT_WALL_TIME TT_CLIENT_WALL_TIME ---------------------------------- The total time, in seconds, of all transactions completed during the last interval for this transaction. TT_WALL_TIME_CUM TT_CLIENT_WALL_TIME_CUM ---------------------------------- The total time, in seconds, of all transactions completed over the cumulative collection time for this transaction. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. TT_WALL_TIME_PER_TRAN TT_CLIENT_WALL_TIME_PER_TRAN ---------------------------------- The average transaction time, in seconds, during the last interval for this transaction. TT_WALL_TIME_PER_TRAN_CUM TT_CLIENT_WALL_TIME_PER_TRAN_CUM ---------------------------------- The average transaction time, in seconds, over the cumulative collection time for this transaction. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HP-UX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. ---------------------------------- Glossary ========================== alarm ---------------------------------- A signal that an event has occurred. The signal can be either a notification or an automatically triggered action. The event can be a pre-defined threshold that is exceeded, a network node in trouble, and so on. Alarm information can be sent to Network Node Manager and HP Operations Manager (HPOM). Alarms can also be identified in historical log file data. alarm generator ---------------------------------- The service that handles the communication of alarm information. It consists of the alarm generator server (perfalarm), the alarm generator database server (agdbserver), and the alarm generator database (agdb) that is managed by agdbserver. The agdb contains a list of various on/off flags that are set to define when and where the alarm information is sent. alarm definitions file ---------------------------------- The text file containing the alarm definitions for the Performance Collection Component in which alarm conditions are specified. For the HP Operations agent on UNIX/Linux platforms, the default file name is alarmdef; for the HP Operations agent on Windows, the default file name is alarmdef.mwc. alert ---------------------------------- A message sent when alarm conditions or conditions in an IF statement have been met. analysis software ---------------------------------- Analysis software analyzes system performance data. The optional HP Performance Manager product provides a central window from which you can monitor, manage, and troubleshoot the performance of all networked systems in your computing environment, as well as analyze historical data from HP Operations agent systems. With HP Performance Manager, you view graphs of a system's performance data to help you diagnose and resolve performance problems quickly. application ---------------------------------- A user-defined group of related processes or program files. Applications are defined so that performance software can collect performance metrics for and report on the combined activities of the processes and programs. available memory ---------------------------------- Available memory is that part of physical memory not allocated by the kernel. This includes the buffer cache, user allocated memory, and free memory. backtrack ---------------------------------- Backtracking allows the large data structures used by the Virtual Memory Manager (VMM) to be pageable. It is a method of safely allowing the VMM to handle page faults within its own critical sections of code. An example of backtracking: * A process page faults. * The VMM attempts to locate the missing page via its External Page table (XPT). * The VMM page faults due to the required XPT itself having been paged out. * The VMM safely saves enough information on the stack to restart the process at its first fault. * Normal VMM pagein/out routines are used to recover the missing XPT.
* The required XPT is now present, so the missing page is located and paged-in. * The process continues normal execution at the original page fault. bad call ---------------------------------- A failed NFS server call. Calls fail due to lack of system resources (lack of virtual memory) and network errors. biod ---------------------------------- A daemon process responsible for asynchronous block IO on the NFS client. It is used to buffer read-ahead and write-behind IOs. block IO ---------------------------------- Buffered reads and writes. Data is held in the buffer cache, then transferred in fixed-size blocks. Any hardware device that transmits and receives data in blocks is a block-mode device. Compare with character mode. block IO buffer ---------------------------------- A buffer used to store data being transferred to or from a block- mode device through file system input and output, as opposed to character-mode or raw-mode devices. block IO operation ---------------------------------- Any operation being carried out on a block-mode device (such as read, write, or mount). block size ---------------------------------- The size of the primary unit of information used for a file system. It is set when a file system is created. blocked on ---------------------------------- The reason for the last recorded process block. blocked state ---------------------------------- The reason for the last recorded process block. Also called blocked-on state. bottleneck ---------------------------------- A situation that occurs when a system resource is constrained by demand that exceeds its capability. The resource is said to be "bottlenecked." A bottleneck causes system performance to degrade. A primary characteristic of a bottleneck is that it does not occur in all resources at the same time; other resources may instead be underutilized. buffer ---------------------------------- A memory storage area used to temporarily hold code or data until used for input/output operations. buffer cache ---------------------------------- An area of memory that mediates between application programs and disk drives. When a program writes data, it is first placed in the buffer cache, then delivered to the disk at a later time. This allows the disk driver to perform IO operations in batches, minimizing seek time. buffer header ---------------------------------- Entries used by all block IO operations to point to buffers in the file system buffer cache. buffer pool ---------------------------------- See buffer cache. cache ---------------------------------- See buffer cache. cache efficiency ---------------------------------- The extent to which buffered read and read-ahead requests can be satisfied by data already in the cache. cache hit ---------------------------------- Read requests that are satisfied by data already in the buffer cache. See also cache efficiency. capped ---------------------------------- A capped partition indicates that the logical partition will never exceed its assigned processing capacity. Any unused processing resources will be used only by the uncapped partitions in the shared processor pool. character mode ---------------------------------- The mode in which data transfers are accomplished byte-by-byte, rather than in blocks. Printers, plotters, and terminals are examples of character-mode devices. Also known as raw mode. Compare with block IO. child process ---------------------------------- A new process created at another active process' request through a fork or vfork system call. 
The process making the request becomes the parent process. client ---------------------------------- A system that requests a service from a server. In the context of diskless clusters, a client uses the server's disks and has none of its own. In the context of NFS, a client mounts file systems that physically reside on another system (the Network File System server). clock hand algorithm ---------------------------------- The algorithm used by the page daemon to scan pages. clock hand cycle ---------------------------------- The clock hand algorithm used to control paging and to select pages for removal from system memory. When page faults and/or system demands cause the free list size to fall below a certain level, the page replacement algorithm starts the clock hand and it cycles through the page table. cluster ---------------------------------- One or more workstations linked by a local area network (LAN) but having only one root file system. cluster server process ---------------------------------- (CSP). A special kernel process that runs in a cluster and handles requests from remote cnodes. cnode ---------------------------------- The client on a diskless system. The term cnode is derived from "client node." coda ---------------------------------- A daemon that provides collected data to the alarm generator and analysis products from data sources, including scopeux log files or DSI log files. coda reads the data from the data sources listed in the datasources configuration file. collision ---------------------------------- Occurs when the system attempts to send a packet at the same time that another system is attempting a send on the same LAN. The result is garbled transmissions and both sides have to resubmit the packet. Some collisions occur during normal operation. context switch ---------------------------------- The action of the dispatcher (scheduler) changing from running one process to another. The scheduler maintains algorithms for managing process switching, mostly directed by process priorities. CPU ---------------------------------- Central Processing Unit. The part of a computer that executes program instructions. CPU entitlement ---------------------------------- The percentage of CPU guaranteed to a particular process resource group when the total system CPU use is at 100%. The system administrator assigns the CPU entitlement for each process resource group in the PRM configuration file (/etc/prmconf). The minimum entitlement for the System group, PRMID 0, is 20%. The minimum entitlement for all other groups is 1%. PRM distributes unused time to other groups in proportion to their CPU entitlement. CPU queue ---------------------------------- The average number of processes in the "run" state awaiting CPU scheduling, including processes in short waits for IO. This is calculated from GBL_RUN_QUEUE and the number of times this metric is updated. This is also a measure of how busy the system's CPU resource is. cyclical redundancy check ---------------------------------- (CRC). A networking checksum protocol used to detect transmission errors. cylinder ---------------------------------- The tracks of a disk accessible from one position of the head assembly. cylinder group ---------------------------------- In the file system, a collection of cylinders on a disk drive grouped together for the purpose of localizing information. The file system allocates inodes and data blocks on a per-cylinder-group basis.
daemon ---------------------------------- A process that runs continuously in the background but provides important system services. data class ---------------------------------- A particular category of data collected by a data collection process. Single-instance data classes, such as the global class, contain a single set of metrics that appear only once in any data source. Multiple-instance classes, such as the application class, may have many occurrences in a single data source, with the same set of metrics collected for each occurrence of the class. (Also known as data type.) data locality ---------------------------------- The location of data relative to associated data. Associated data has good data locality if it is located near one another, because accesses are limited to a small number of pages and the data is more likely to be in memory. Poor data locality means associated data must be obtained from different data pages. data point ---------------------------------- A specific point in time displayed on a performance graph where data has been summarized every five, fifteen, or thirty minutes, or every hour, two hours or one day. data segment ---------------------------------- A section of memory reserved for storing a process' static and dynamic data. data source ---------------------------------- A data source consists of one or more data types or classes of data in a single scopeux, scopent, or DSI log file set. For example, the default Performance Collection Component data source, SCOPE, is a scopeux or scopent log file set consisting of global data. datasources configuration file ---------------------------------- A configuration file residing in the /var/opt/OV/conf/perf/ directory. Each entry in the file represents a scopeux or DSI data source consisting of a single log file set. data source integration (DSI) ---------------------------------- Enables the Performance Collection Component to receive, log, and detect alarms on data from external sources such as applications, databases, networks, and other operating systems. deactivated pages out ---------------------------------- Pages from deactivated process regions that are moved from memory to the swap area. These pages are swapped out only when they are needed by another active process. When a process becomes reactivated, the pages are moved from the swap area back to memory. default ---------------------------------- An option that is automatically selected or chosen by the system. deferred packet ---------------------------------- A deferred packet occurs when the network hardware detects that the LAN is already in use. Rather than incur a collision, the outbound packet transmission is delayed until the LAN is available. device driver ---------------------------------- A collection of kernel routines and data structures that handle the lowest levels of input and output between a peripheral device and executing processes. Device drivers are part of the UNIX kernel. device file ---------------------------------- A special file that permits direct access to a hardware device. device swap space ---------------------------------- Space devoted to swapping. directory name lookup cache ---------------------------------- The directory name lookup cache (DNLC) is used to cache directory and file names. When a file is referenced by name, the name must be broken into its components and each component's inode must be looked up. By caching the component names, disk IOs are reduced. 
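As an illustration of the datasources configuration file entry format described above, a scopeux data source is registered by pairing a data source name with the path to its global log file set. The keyword form and the log file path shown below reflect a typical Performance Collection Component installation and are assumptions for the example, not values taken from this document:

    DATASOURCE=SCOPE LOGFILE=/var/opt/perf/datafiles/logglob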
disk bandwidth entitlement ---------------------------------- The percentage of disk (volume group) bandwidth guaranteed to a particular PRM group when the total system disk bandwidth use is at its maximum. The system administrator assigns the disk bandwidth entitlement for each PRM group in the PRM configuration file. The minimum entitlement for groups other than the system group is 1%. PRM distributes unused time to other groups in proportion to their disk bandwidth entitlements. diskless cluster server ---------------------------------- A system that supports disk activity for diskless client nodes. diskless file system buffer ---------------------------------- A buffer pool that is used only by the diskless server for diskless cluster traffic. dispatcher ---------------------------------- A module of the kernel responsible for allocating CPU resources among several competing processes. DSI log file ---------------------------------- A log file, created by the Performance Collection Component's DSI (data source integration) programs, that contains self-describing data. empty space ---------------------------------- The difference between the maximum size of a log file and its current size. error (LAN) ---------------------------------- Unsuccessful transmission of a packet over a local area network (LAN). Inbound errors are typically checksum errors. Outbound errors are typically local hardware problems. exec fill page ---------------------------------- When a process is 'execed' the working segments of the process are marked as copy on write. Only when segments change are they copied into a separate segment private to the process that is modifying the page. extract program ---------------------------------- The Performance Collection Component program that allows you to extract data from raw or previously extracted log files, summarize it, and write it to extracted log files. It also lets you export data for use by analysis programs and other tools. extracted log file ---------------------------------- A Performance Collection Component log file containing a user- defined subset of data extracted (copied) from a raw or previously extracted log file. It is formatted for optimal access by HP Performance Manager. Extracted log files are also used for archiving performance data. file IO ---------------------------------- IO activity to a physical disk. It includes file system IOs, system IOs to manage the file system, both raw and block activity, and excludes virtual memory management IOs. file lock ---------------------------------- A file lock guarantees exclusive access to an entire file, or parts of a file. file system ---------------------------------- The organization and placement of files and directories on a hard disk. The file system includes the operating system software's facilities for naming the files and controlling access to these files. file system activity ---------------------------------- Access calls (read, write, control) of file system block IO files contained on disk. file system swap ---------------------------------- File system space identified as available to be used as swap. This is a lower performance method of swapping as its operations are processed through the file system. file table ---------------------------------- The table contains inode descriptors used by the user file descriptors for all open files. It is set to the maximum number of files the system can have open at any one time. 
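To make the file lock entry above concrete, here is a minimal C sketch that takes an exclusive write lock on part of a file with fcntl(2); the file name and the 128-byte range are arbitrary choices for the example.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/lockdemo", O_RDWR | O_CREAT, 0600);
        if (fd < 0) { perror("open"); return 1; }

        /* Exclusive (write) lock on the first 128 bytes of the file. */
        struct flock fl;
        memset(&fl, 0, sizeof fl);
        fl.l_type   = F_WRLCK;
        fl.l_whence = SEEK_SET;
        fl.l_start  = 0;
        fl.l_len    = 128;

        if (fcntl(fd, F_SETLKW, &fl) < 0) {   /* blocks until the lock is granted */
            perror("fcntl");
            return 1;
        }

        /* ... update the locked region ... */

        fl.l_type = F_UNLCK;                  /* release the lock */
        fcntl(fd, F_SETLK, &fl);
        close(fd);
        return 0;
    }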
fork ---------------------------------- A system call that enables a process to duplicate itself into two identical processes - a parent and a child process. Unlike the vfork system call, the child process produced does not have access to the parent process' memory and control. free list ---------------------------------- The system keeps a list of free pages on the system. Free list points to all the pages that are marked free. free memory ---------------------------------- Memory not currently allocated to any user process or to the kernel. GlancePlus ---------------------------------- An online diagnostic tool that displays current performance data directly to a user terminal or workstation. It is designed to assist you in identifying and troubleshooting system performance problems as they occur. global ---------------------------------- A qualifier implying the whole system. Thus "global metrics" are metrics that describe the activities and states of each system. Similarly, application metrics describe application activity; process metrics describe process activity. global log file ---------------------------------- The raw log file, logglob, where the collector places summarized measurements of the system-wide workload. host ---------------------------------- An ESX or ESXi system that is managed by a vMA. hypervisor ---------------------------------- The hypervisor provides the ability to divide physical system resources into isolated logical partitions. Each logical partition operates like an independent system running its own operating environment. The hypervisor can assign dedicated processors, I/O, and memory, to each logical partition. The hypervisor can also assign shared processors to each logical partition. The hypervisor creates a shared processor pool from which it allocates virtual processors to the logical partitions as needed. idle biod ---------------------------------- The number of inactive NFS daemons on a client. idle ---------------------------------- The state in which the CPU is idle when it is waiting for the dispatcher (scheduler) to provide processes to execute. initial group ---------------------------------- The first process resource group listed in a PRM user record of the PRM configuration file. This is the group where prmconfig, prmmove -i, login, at, and cron place user processes. inode ---------------------------------- A reference pointer to a file. This reference pointer contains a description of the disk layout of the file data and other information, such as the file owner, access permissions, and access times. Inode is a contraction of the term 'index node'. inode cache ---------------------------------- An in memory table containing up-to-date information on the state of a currently referenced file. interesting process ---------------------------------- A filter mechanism that allows the user to limit the number of process entries to view. A process becomes interesting when it is first created, when it ends, and when it exceeds user-defined thresholds for CPU use, disk use, response time, and so on. interrupt ---------------------------------- High priority interruptions of the CPU to notify it that something has happened. For example, a disk IO completion is an interrupt. interval ---------------------------------- A specific time period during which performance data is gathered. ioctl ---------------------------------- A system call that provides an interface to allow processes to control IO or pseudo devices. 
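The fork and child process entries can be illustrated with a short C sketch; the messages printed are arbitrary.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();            /* duplicate the calling process */

        if (pid == 0) {
            /* Child: runs with its own copy of the parent's address space. */
            printf("child  pid=%d ppid=%d\n", (int) getpid(), (int) getppid());
            _exit(0);
        } else if (pid > 0) {
            /* Parent: fork() returned the child's PID. */
            printf("parent pid=%d child pid=%d\n", (int) getpid(), (int) pid);
            wait(NULL);                /* reap the child */
        } else {
            perror("fork");            /* fork failed */
        }
        return 0;
    }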
IO done ---------------------------------- The Virtual Memory Management (VMM) system reads and writes from the disk and keeps track of how many IOs are completed by the system. Since IOs are asynchronous, they are not completed immediately. Sometimes IOs done can be higher than IO starts, since some of the IOs that are started in the previous interval can be completed. IO start ---------------------------------- The Virtual Memory Management (VMM) system reads and writes from the disk and keeps track of how many IOs are started by the system. Since IOs are async, they are not completed immediately. InterProcess Communication (IPC) ---------------------------------- Communication protocols used between processes. kernel ---------------------------------- The core of the UNIX operating system. It is the code responsible for managing the computer's resources and performing functions such as allocating memory. The kernel also performs administrative functions required for overall system performance. kernel table ---------------------------------- An internal system table such as the Process Table or Text Table. A table's configured size can affect system behavior. last measurement reset ---------------------------------- When you run a performance product, it starts collecting performance data. Cumulative metrics begin to accumulate at this time. When you reset measurement to zero, all cumulative metrics are set to zero and averages are reset so their values are calculated beginning with the next interval. load average ---------------------------------- A measure of the CPU load on the system. The load average is defined as an average of the number of processes running and ready to run, as sampled over the previous one-minute interval of system operation. The kernel maintains this data. lock miss ---------------------------------- The Virtual Memory Management (VMM) system locks pages for synchronization purposes. If the lock has to be broken for any reason that is considered a lock miss. Usually this is a very small number. logappl (application log file) ---------------------------------- The raw log file that contains summary measurements of processes in each user-defined application. logdev (device log file) ---------------------------------- The raw log file that contains measurements of individual device (such as disk) performance. logglob (global log file) ---------------------------------- The raw log file that contains measurements of the system-wide, or global, workload. logindex ---------------------------------- The raw log file that contains information required for accessing data in the other log files. logproc (process log file) ---------------------------------- The raw log file that contains measurements of selected interesting processes. logtran (transaction log file) ---------------------------------- The raw log file that contains measurements of transaction data. log files ---------------------------------- Performance measurement files that contain either raw or extracted log file data. logical IO ---------------------------------- A read or write system call to a file system to obtain data. Because of the effects of buffer caching, this operation may not require a physical access to the disk if the buffer is located in the buffer cache. macro ---------------------------------- A group of instructions that you can combine into a single instruction for the application to execute. 
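On Linux, the load average described above can be read programmatically with getloadavg(3) (the same values appear in /proc/loadavg); a minimal C sketch follows.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        double load[3];

        /* Returns the 1-, 5-, and 15-minute load averages maintained by the kernel. */
        if (getloadavg(load, 3) < 0) {
            fprintf(stderr, "getloadavg failed\n");
            return 1;
        }
        printf("load average: %.2f (1 min) %.2f (5 min) %.2f (15 min)\n",
               load[0], load[1], load[2]);
        return 0;
    }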
major fault ---------------------------------- A page fault requiring an access to disk to retrieve the page. measurement interface ---------------------------------- A set of proprietary library calls used by the performance applications to obtain performance data. mem entitlement ---------------------------------- The percentage of memory guaranteed to a particular Process Resource Manager (PRM) group when the total system memory use is at its maximum. The system administrator assigns the memory entitlement for each PRM group in a PRM configuration file. The minimum entitlement for groups other than the system group is 1%. PRM distributes unused time to other groups in proportion to their memory entitlements. memory pressure ---------------------------------- A situation that occurs when processes are requesting more memory space than is available. memory upperbound ---------------------------------- The upper memory threshold is a flexible (soft) upper boundary. If a group's memory use is above its upper memory threshold and system memory use is approaching 100%, then regardless of whether other groups are currently in need of memory, Process Resource Manager (PRM) will control the group's memory use by suppressing the group's processes. memory swap space ---------------------------------- The part of physical memory allocated for swapping. memory thrashing ---------------------------------- See thrashing. message buffer pool ---------------------------------- A cache used to store all used message queue buffers on the system. message queue ---------------------------------- The messaging mechanism allows processes to send formatted data streams to arbitrary processes. A message queue holds the buffers from which processes read the data. message table ---------------------------------- A table that shows the maximum number of message queues allowed for the system. metric ---------------------------------- A specific measurement that defines performance characteristics. midaemon ---------------------------------- The process that monitors system performance and creates counters from system event traces that are read and displayed by performance applications. minor fault ---------------------------------- A page fault that is satisfied by a memory access (the page was not yet released from memory). mount/unmount ---------------------------------- The process of adding or removing additional, functionally- independent file systems to or from the pool of available file systems. NFS call ---------------------------------- A physical Network File System (NFS) operation a system has received or processed. NFS client ---------------------------------- A node that requests data or services from other nodes on the network. NFS Logical IO ---------------------------------- A logical I/O request made to an NFS mounted file system. NFS-mounted ---------------------------------- A file system connected by software to one system but physically residing on another system's disk. NFS IO ---------------------------------- A system count of the NFS calls. NFS server ---------------------------------- A node that provides data or services to other nodes on the network. NFS transfer ---------------------------------- Transfer of data packets across a local area network (LAN) to support Network File System (NFS) services. Network Node Manager (NNM) ---------------------------------- A network management application that provides the network map. 
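The message queue, message table, and message buffer pool entries describe the System V messaging facility; the minimal C sketch below (the key, permissions, and message text are arbitrary) sends and receives a single message. ipcs(1) lists the queue while it exists.

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct demo_msg {
        long mtype;                 /* message type, must be > 0 */
        char mtext[64];
    };

    int main(void)
    {
        /* Create a private message queue. */
        int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
        if (qid < 0) { perror("msgget"); return 1; }

        struct demo_msg msg = { 1, "hello" };
        msgsnd(qid, &msg, sizeof msg.mtext, 0);      /* enqueue one message      */

        struct demo_msg in;
        msgrcv(qid, &in, sizeof in.mtext, 1, 0);     /* dequeue a type-1 message */
        printf("received: %s\n", in.mtext);

        msgctl(qid, IPC_RMID, NULL);                 /* remove the queue         */
        return 0;
    }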
network time ---------------------------------- The amount of time required for a particular network request to be completed. nice ---------------------------------- Altering the priority of a time-share process, using either the nice/renice command or the nice system call. High nice values lessen the priority; low nice values increase the priority. node ---------------------------------- A computing resource on a network, such as a networked computer system, hub, or bridge. normal CPU ---------------------------------- CPU time spent processing user applications that have not been real-time dispatched or niced. outbound read/write ---------------------------------- The designation used when a local process requests a read from or write to a remote system via NFS. o/f (overflow) ---------------------------------- This designates that the measurement software has detected a number that is too large to fit in the available space. packet ---------------------------------- A unit of information that is transferred between a server and a client over the LAN. packet in/out ---------------------------------- A request sent to the server by a client is an "in" packet. A request sent to a client by the server is an "out" packet. page ---------------------------------- A basic unit of memory. A process is accessed in pages (demand paging) during execution. page fault ---------------------------------- An event recorded when a process tries to execute code instructions or to reference a data page not resident in a process' mapped physical memory. The system must page-in the missing code or data to allow execution to continue. page freed ---------------------------------- When a paging daemon puts a page in the free list, it is considered a page freed. page reclaim ---------------------------------- Virtual address space is partitioned into segments, which are then partitioned into fixed-size units called pages. There are usually two kinds of segments: persistent segments and working segments. Files containing data or executable programs are mapped into persistent segments. A persistent segment (text) has a permanent storage location on disk so the Virtual Memory Manager writes the page back to that location when the page has been modified and it is no longer kept in real memory. If the page has not changed, its frame is simply reclaimed. page scan ---------------------------------- The clock hand algorithm used to control paging and to select pages for removal from system memory. It scans pages to select pages for possible removal. page steal ---------------------------------- Occurs when a page used by a process is taken away by the Virtual Memory Management system. page in/page out ---------------------------------- Moving pages of data from virtual memory (disk) to physical memory (page in) or vice versa (page out). pagedaemon ---------------------------------- A system daemon responsible for writing parts of a process' address space to secondary storage (disk) to support the paging capability of the virtual memory system. pagein routine ---------------------------------- A kernel routine that brings pages of a process' address space into physical memory. pageout routine ---------------------------------- A kernel routine that executes when physical memory space is scarce, and the pagedaemon is activated to remove the least-needed pages from memory by writing them to swap space or to the file system.
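The page fault activity described above (and the major fault and minor fault entries earlier in this glossary) can be observed for a single process with getrusage(2); a minimal C sketch, where the amount of memory touched is arbitrary:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Touch freshly allocated memory to generate some page faults. */
        size_t len = 8 * 1024 * 1024;
        char *buf = malloc(len);
        if (buf == NULL) return 1;
        memset(buf, 0, len);

        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);

        /* ru_minflt: faults satisfied from memory (minor faults).
           ru_majflt: faults that required a disk access (major faults). */
        printf("minor faults: %ld  major faults: %ld\n",
               ru.ru_minflt, ru.ru_majflt);
        free(buf);
        return 0;
    }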
page request ---------------------------------- A page fault that has to be satisfied by accessing virtual memory. page space ---------------------------------- The area of a disk or memory reserved for paging out portions of processes or swapping out entire processes. Also known as swap space. parm file ---------------------------------- The file containing the parameters used by the Performance Collection Component's scope data collector to customize data collection. Also used to define your applications. performance distribution range ---------------------------------- An amount of time that you define with the range= keyword in the transaction configuration file. Performance Manager ---------------------------------- Performance Manager provides integrated performance management for multi-vendor distributed networks. It uses a single workstation to monitor environment performance on networks that range in size from tens to thousands of nodes. perfstat ---------------------------------- The script used for viewing the status of all Hewlett-Packard performance products on your system. To view a list of all perfstat options, type perfstat -? from the Windows Command Prompt. To view the status of all performance products from Performance Collection Component on Windows, choose Status from the Agent menu on the main window. HP Performance Manager ---------------------------------- A tool that provides integrated performance management for multi-vendor distributed networks. It uses a single workstation to monitor environment performance on networks that range in size from tens to thousands of nodes. pfaults ---------------------------------- Most resolvable pfaults (protection faults) are caused by copy on writes (for example, writing to private memory segments). Most other pfaults are protection violations (for example, writing to a read-only region) and result in SIGBUS. See mprotect(2). physical IO ---------------------------------- An input/output operation where data is transferred from memory to disk or vice versa. Physical IO includes file system IO, raw IO, system IO, and virtual memory IO. physical memory ---------------------------------- The actual hardware memory components contained within your computer system. PID ---------------------------------- A process identifier - a process' unique identification number that distinguishes it from all other processes on the system. PPID is a parent process identifier - the process identifier of a process that forked or vforked another process. pipe ---------------------------------- A mechanism that allows a stream of data to be passed between read and write processes. priority ---------------------------------- The number assigned to a PID that determines its importance to the CPU scheduler. PRM configuration file ---------------------------------- The Process Resource Manager (PRM) configuration file defines PRM groups, CPU entitlements and caps, memory entitlements and caps, user access permissions, application/PRM group associations, and disk bandwidth entitlements. The default PRM configuration file is /etc/prmconf. The configuration file can contain five types of records; however, you do not have to use each type of record.
The record types are:

* Group (required) - defines PRM groups and CPU entitlements
* Memory - defines real memory entitlements and caps
* User - specifies which PRM groups a user can access
* Application - defines associations between applications and PRM groups
* Disk - defines disk bandwidth entitlements for a specific logical volume group

proc table
----------------------------------
The process table that holds information for every process on the system.

process
----------------------------------
The execution of a program file. This execution can represent an interactive user (processes running at normal, nice, or real-time priorities) or an operating system process.

process block
----------------------------------
A process block occurs when a process is not executing because it is waiting for a resource or for IO completion.

process deactivation
----------------------------------
A technique used for memory management. Process deactivation marks pages of memory within a process as available for use by other, more active processes. A process becomes a candidate for deactivation when physical memory becomes scarce or when a system starts thrashing. Processes are reactivated when they become ready to run.

process resource group
----------------------------------
A group of users that is entitled to a minimum percentage of CPU. Process resource groups, or PRM groups, are defined in the PRM configuration file /etc/prmconf. Each PRM group has a name, a number (PRMID), and a CPU entitlement.

process resource group ID
----------------------------------
An integer between zero and fifteen, inclusive, that uniquely identifies a process resource group. PRMID 0 is reserved for the System Group. PRMID 1 is reserved for the User Default Group.

process state
----------------------------------
Different types of tasks executed by a CPU on behalf of a process. For example: user, nice, system, and interrupt.

pseudo terminal (pty)
----------------------------------
A software device that operates in pairs. Output directed to one member of the pair is sent to the input of the other member. Input is sent to the upstream module.

queue
----------------------------------
A waiting line in which unsatisfied requests are placed until a resource becomes available.

raw IO
----------------------------------
Unbuffered input/output that transfers data directly between a disk device and the user program requesting the data. It bypasses the file system's buffer cache. Also known as character mode. Compare with block mode.

raw log file
----------------------------------
The file into which scope logs collected data. It contains summarized measurements of system data. See logglob, logappl, logproc, logdev, logtran, and logindx.

read byte rate
----------------------------------
The number of kilobytes per second the system sent or received during read operations.

read rate
----------------------------------
The number of NFS and local read operations per second a system has processed. Read operations consist of getattr, lookup, readlink, readdir, null, root, statfs, and read.

Read/write Qlen
----------------------------------
The number of pending NFS operations.

read/write system call
----------------------------------
A request that a program uses to tell the kernel to perform a specific service on the program's behalf. When the user requests a read, a read system call is activated. When the user requests a write, a write system call is activated.
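The pipe and read/write system call entries above describe how data passes between cooperating processes and how a program asks the kernel for a service. The following minimal C sketch (illustrative only, not taken from GlancePlus) creates a pipe and then uses the write and read system calls to pass a message from a parent process to its child:

    /* Illustrative sketch: a pipe carrying data from a parent (write) to a child (read). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];                       /* fd[0] = read end, fd[1] = write end */
        char buf[64];

        if (pipe(fd) == -1) {
            perror("pipe");
            return 1;
        }

        if (fork() == 0) {               /* child: the reading process */
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);   /* read system call */
            if (n > 0) {
                buf[n] = '\0';
                printf("child read: %s\n", buf);
            }
            close(fd[0]);
            _exit(0);
        }

        /* parent: the writing process */
        close(fd[0]);
        const char *msg = "hello through the pipe";
        write(fd[1], msg, strlen(msg));                      /* write system call */
        close(fd[1]);
        wait(NULL);
        return 0;
    }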
real time
----------------------------------
The actual time in which an event takes place.

real-time cpu
----------------------------------
Time the CPU spent executing processes that have a real-time priority.

remote swapping
----------------------------------
Swapping that uses swap space from a pool located on a different system's swap device. This type of swapping is often used by diskless systems that swap on a server machine.

repeat time
----------------------------------
An action that can be selected for performance alarms. Repeat time designates the amount of time that must pass before an activated and continuing alarm condition triggers another alarm signal.

reserved swap space
----------------------------------
Area set aside on your disk for virtual memory.

resident buffer
----------------------------------
Data stored in physical memory.

resident memory
----------------------------------
Information currently loaded into memory for the execution of a process.

resident set size
----------------------------------
The amount of physical memory a process is using. It includes memory allocated for the process' data, stack, and text segments.

resize
----------------------------------
Changing the overall size of a raw log file.

resource pool
----------------------------------
A resource pool is a logical abstraction for flexible management of resources. Resource pools can be grouped into hierarchies and used to hierarchically partition available CPU and memory resources on an ESX server or a cluster.

response time
----------------------------------
The time spent to service all NFS operations.

roll back
----------------------------------
Deleting one or more days' worth of data from a raw log file, with the oldest data deleted first. Roll backs are performed when a raw log file exceeds its maximum size parameter.

rxlog
----------------------------------
The default extract log file created when data is extracted from raw log files.

SCOPE
----------------------------------
The Performance Collection Component's default data source that contains a scopeux or scopent global log file set.

scopeux
----------------------------------
The Performance Collection Component's data collector program that collects performance data and writes (logs) it to raw log files for later analysis or archiving.

scopent
----------------------------------
The Performance Collection Component's data collector program that collects performance data and writes (logs) it to raw log files for later analysis or archiving.

scopeux log files
----------------------------------
The raw log files that are created by the scopeux collector: logglob, logappl, logproc, logdev, logtran, and logindx.

scopent log files
----------------------------------
The raw log files that are created by the scopent collector: logglob, logappl, logproc, logdev, logtran, and logindx.

semaphore
----------------------------------
Special types of flags used for signaling between two cooperating processes. They are typically used to guard critical sections of code that modify shared data structures. (A minimal sketch appears after the service level objective entry below.)

semaphore table
----------------------------------
The maximum number of semaphores currently allowed for the system.

service level objective
----------------------------------
A definable level of responsiveness for a transaction. For example, if you decide that all database updates must occur within 2 seconds, set the Service Level Objective (SLO) for that transaction as slo=2 in the transaction configuration file.
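As an illustration of the semaphore entry above, the following C sketch (illustrative only; it uses the POSIX semaphore API rather than any GlancePlus facility) places an unnamed semaphore in shared memory to guard a counter updated by a parent and a child process:

    /* Illustrative sketch: a process-shared POSIX semaphore guarding a shared counter. */
    #include <stdio.h>
    #include <semaphore.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct shared { sem_t lock; long counter; };

    int main(void)
    {
        /* Put the semaphore and the data it guards in memory shared with the child. */
        struct shared *s = mmap(NULL, sizeof(*s), PROT_READ | PROT_WRITE,
                                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (s == MAP_FAILED) { perror("mmap"); return 1; }

        sem_init(&s->lock, 1 /* shared between processes */, 1 /* initially available */);
        s->counter = 0;

        pid_t pid = fork();
        for (int i = 0; i < 100000; i++) {
            sem_wait(&s->lock);          /* enter the critical section */
            s->counter++;                /* shared data protected by the semaphore */
            sem_post(&s->lock);          /* leave the critical section */
        }

        if (pid == 0) return 0;          /* child exits after its updates */
        wait(NULL);
        printf("final counter: %ld\n", s->counter);   /* 200000 with the semaphore held */
        sem_destroy(&s->lock);
        return 0;
    }

On Linux this typically compiles with cc -pthread. Without the semaphore, the two processes would race on the counter and the final value would usually be lower than 200000.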
service level agreement
----------------------------------
A document prepared for a business-critical application that explicitly defines the service level objectives that IT (Information Technology) is expected to deliver to users. It specifies what the users can expect in terms of system response, quantities of work, and system availability.

shared memory
----------------------------------
System memory allocated for sharing data among processes. It includes shared text, data, and stack.

shared memory pool
----------------------------------
The cache in which shared memory segments are stored.

shared memory segment
----------------------------------
A portion of a system's memory dedicated to sharing data among several processes.

shared memory table
----------------------------------
A list of entries that identifies shared memory segments currently allocated on your system.

shared text segment
----------------------------------
Code shared between several processes.

signal
----------------------------------
A software event used to notify a process of a change. Similar to a hardware interrupt.

sleeping process
----------------------------------
A process that either has blocked itself or has been blocked, and is placed in a waiting state.

SMT
----------------------------------
SMT (simultaneous multithreading) is a hardware feature that is designed to maximize CPU utilization. When SMT is enabled, the OS creates a virtual processor for each CPU thread, but these virtual processors share the same main execution resources. The benefits of SMT vary depending on the application.

socket operation
----------------------------------
A process that creates an endpoint for communication and returns a descriptor for use in all subsequent socket-related system calls.

start of collection
----------------------------------
When you run a performance product, it starts collecting performance data.

summary data
----------------------------------
The time period represented in one data point of a performance measurement. Summary levels can be five minutes, one hour, and one day.

swap
----------------------------------
A memory management technique used to shuttle information between the main memory and a dedicated area on a disk (swap space). Swapping allows the system to run more processes than could otherwise fit into the main memory at a given time.

swap in/out
----------------------------------
Moving information between the main memory and a dedicated (reserved) area on a disk. "Swapping in" reads data from the swap area back into main memory; "swapping out" writes data from main memory to the swap area.

swap space
----------------------------------
The area of a disk or memory reserved for swapping out entire processes or paging out portions of processes. Also known as page space.

system call
----------------------------------
A command that a program uses to tell the kernel to perform a specific service on the program's behalf. This is the user's and application programmer's interface to the UNIX kernel.

system code
----------------------------------
Kernel code that is executed through system calls.

system CPU
----------------------------------
Time that the CPU was busy executing kernel code. Also called kernel mode.

system disk
----------------------------------
Physical disk IO generated for file system management. This includes inode access, super block access, and cylinder group access.

system group
----------------------------------
The process resource group with PRMID 0.
PRM places all system processes, such as init and swapper, in this group by default.

system interrupt handling code
----------------------------------
Kernel code that processes interrupts.

terminal transaction
----------------------------------
A terminal transaction occurs whenever a read is completed to a terminal device or MPE message file. On a terminal device, a read is normally completed when the user presses the return or the enter key. Some devices, such as serial printers, may satisfy terminal reads by returning hardware status information.

Several metrics are collected to characterize terminal transactions. The FIRST_RESPONSE_TIME metric measures the time between the completion of the read and the completion of the first write back to that device. This metric is most often quoted in benchmarks because it yields the quickest response time. For transactions that return a large amount of data to the terminal, such as reading an electronic mail message, the time to first response may be the best indicator of overall system responsiveness.

The RESPONSE_TIME_TO_PROMPT metric measures the time between the completion of the read and the posting of the next read. It is the amount of time that a user must wait before being able to enter the next transaction. This response time includes the amount of time it took to write data back to the terminal as a result of the transaction. The response time to prompt is the best metric for determining the limits of transaction throughput.

The THINK_TIME metric measures the time between posting a read and its completion. It is a measure of how much time the user took to examine the results of the transaction and then complete entering the next transaction.

Terminal transaction metrics are expressed as average times per transaction and as total times in seconds. Total times are calculated by multiplying the average time per transaction by the number of transactions completed. (A brief worked example appears after the trap entry below.)

Terminal transactions can be created by interactive or batch processes that do reads to terminal devices or message files. Reads to terminal devices or message files done by system processes will not be counted as transactions.

text segment
----------------------------------
A memory segment that holds executable program code.

thrashing
----------------------------------
A condition in which a system is spending too much time swapping data in and out, and too little time doing useful work. This is characteristic of situations in which either too many page faults are being created or too much swapping is occurring. Thrashing causes the system's performance to degrade and the response time for interactive users to increase.

threadpool queue
----------------------------------
A queue of requests waiting for an available server thread.

threshold
----------------------------------
Numerical values that can be set to define alarm conditions. When a threshold is surpassed, an alarm is triggered.

tooltip
----------------------------------
Display of the full text of a truncated data string in a row-column formatted GlancePlus report window. Tooltips are enabled and disabled by choosing Tooltips from the window's Configure menu or by clicking the "T" button in the upper right corner of the window.

trap
----------------------------------
A software interrupt that requires service from a trap handler routine. An example would be a floating point exception on a system that does not have floating point hardware support. This requires the floating point operations to be emulated in the software trap handler code.
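A worked example for the terminal transaction metrics above, using hypothetical values rather than measurements from any real system: if the average response time to prompt is 0.25 seconds and 1,200 transactions completed during the interval, the total response time to prompt is 0.25 x 1,200 = 300 seconds. The same multiplication converts the FIRST_RESPONSE_TIME and THINK_TIME averages into their corresponding totals.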
transaction
----------------------------------
Some amount of work performed by a computer system on behalf of a user. The boundaries of this work are defined by the user.

transaction tracking
----------------------------------
The Performance Collection Component feature that lets information technology (IT) managers measure end-to-end response time of business application transactions. To collect transaction data, the Performance Collection Component must have a process running that is instrumented with the Application Response Measurement (ARM) API.

trap handler code
----------------------------------
Traps are measured when the kernel executes the code in the trap handler routine. For a list of trap types, refer to the file /usr/include/machine/trap.h.

ttd conf
----------------------------------
The configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms) where you define each transaction and the information to be tracked for each transaction, such as transaction name, performance distribution range, and service level objective.

uncapped
----------------------------------
An uncapped partition is allowed to consume more processor resources than its entitlement. The maximum amount of processor capacity that an uncapped partition can use is limited by the number of virtual processors. A virtual processor is part of a physical processor's capacity as presented to a partition. Each virtual processor can represent between 0.1 and 1.0 CPUs (processing units).

unmount/mount
----------------------------------
The process of removing or adding functionally independent file systems from or to the root file system.

update interval
----------------------------------
The interval of time between updates of the metrics that display in a report window or graph.

user CPU
----------------------------------
Time that the CPU was busy executing user code. This includes time spent executing non-kernel code by daemon processes. It does not include CPU time spent executing system calls, context switching, or interrupt handling.

user disk
----------------------------------
Physical disk IO generated by accessing the file system.

user code
----------------------------------
Code that does not perform system calls.

user default group
----------------------------------
The process resource group with PRMID 1. PRM uses this group as the initial group for any user who does not have a PRM user record in the PRM configuration file.

utility program
----------------------------------
A Performance Collection Component program that lets you check parm file and alarm definitions file syntax, resize log files, scan log files for information, and obtain alarm information from historical log file data.

vfault CPU
----------------------------------
CPU time spent handling page faults.

vfork
----------------------------------
A version of the fork system call that spawns a child process that is capable of sharing code and data with its parent process. (A minimal sketch appears after the virtual memory entry below.)

vMA
----------------------------------
The VMware Infrastructure Management Assistant (vMA) is a virtual machine that includes packaged software that developers and administrators can use to run agents and scripts to manage ESX and ESXi systems.

virtual memory
----------------------------------
Secondary memory that exists on a portion of a disk or other storage device. It is used as an extension of the primary physical memory.
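To illustrate the vfork entry above, here is a minimal, illustrative C sketch (not taken from GlancePlus or its documentation) in which the parent uses vfork to spawn a child that immediately replaces itself with another program:

    /* Illustrative sketch: vfork() followed immediately by exec, the intended usage pattern.
     * Until the child calls exec or _exit, it borrows the parent's address space,
     * so it must not modify data or return from main(). */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = vfork();
        if (pid == -1) {
            perror("vfork");
            return 1;
        }
        if (pid == 0) {                       /* child */
            execlp("date", "date", (char *)NULL);
            _exit(127);                       /* reached only if exec fails */
        }
        wait(NULL);                           /* parent resumes once the child execs or exits */
        printf("child %d finished\n", (int)pid);
        return 0;
    }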
virtual memory IO
----------------------------------
The virtual memory reads or writes from the disk for memory-mapped files, and for paging to and from the paging area (swap area). Because all the files are memory mapped, all such reads and writes count as virtual memory reads or writes as well. The computational memory of processes is paged out to the swap area if necessary, and is read or written from there again.

write byte rate
----------------------------------
The number of kilobytes per second the system sent or received during write operations.

write rate
----------------------------------
The number of NFS and local write operations the local machine has processed per second. Write operations include setattr, writecache, create, remove, rename, link, symlink, mkdir, rmdir, and write.

X-Axis
----------------------------------
The horizontal scale on a graph.

Y-Axis
----------------------------------
The vertical scale on a graph.

zero fill page
----------------------------------
When pages are requested by processes, they are usually allocated by the Virtual Memory Management system and filled with zeros.

objects
----------------------------------
Representations of threads and processes, sections of shared memory, and physical devices of a computer. Examples include software applications (such as Microsoft Exchange), physical disks (hard disks on a computer system), and logical disks (partitions on a disk drive). Objects are used by the Collection Builder function in the Performance Collection Component for Windows.

instances
----------------------------------
In the Performance Collection Component for Windows, specific occurrences of objects (threads and processes, sections of shared memory, and physical devices) within a PC. For example, drive C: is an instance of a logical disk. See also fixed instances and variable instances.

counters
----------------------------------
In Windows, units pertaining to an object (threads and processes, sections of shared memory, and physical devices) that can be measured (or counted).

policy
----------------------------------
In the Performance Collection Component for Windows, a Collection Builder performance measurement configuration file that contains information about the Windows counter set, instance selection, log file locations, and data collection rates and calculations. This file can be reused to define uniquely named collections of performance counters/metrics on multiple PCs using Windows.

collection
----------------------------------
In the Performance Collection Component for Windows, a defined set of counters/metrics that has been registered and assigned a unique name and is based on a policy established by the Collection Builder task.

fixed instances
----------------------------------
Permanently named occurrences of an object type in the Performance Collection Component for Windows. In a Collection Builder policy, a fixed-instance policy contains metrics (counters) that refer to a specific instance of an object, and the instance name cannot be changed when the policy is used to create new collections for multiple PCs. Such a policy works effectively where PCs are uniformly configured and instance names are duplicated across the distributed application environment.

variable instances
----------------------------------
Non-specific occurrences of an object type in the Performance Collection Component for Windows.
In a Collection Builder policy, a variable-instance policy contains metrics (counters) that do not refer to a specific named object instance. Such a policy enables users to select a specific instance later when they create new collections for multiple PCs. A variable-instance policy works effectively where PCs are configured differently and instance names are likely to differ from PC to PC.

sampling interval
----------------------------------
The frequency at which data values are retrieved for the counter/metric set of any given collection in the Performance Collection Component for Windows. These values are averaged and logged according to the records per hour setting in the policy file.

Windows registry
----------------------------------
A database repository for information about a computer's configuration. It is organized in a hierarchical structure and consists of subtrees and their keys, hives, and value entries.

AIX SPLPAR
----------------------------------
Using Micro-Partitioning technology, physical processors are divided into virtual processors that are shared in a pool between one or more LPARs. An LPAR that can use processors from the shared pool is called a Shared Processor LPAR or Micro-Partition.

recognized VMWare ESX guest
----------------------------------
A logical system hosted on VMWare ESX Server with VMWare tools installed.

VMWare ESX Server console
----------------------------------
Refers to the service console of VMWare ESX Server.

logical system
----------------------------------
Refers to an LPAR or a virtual machine hosted as a guest on an HPVM, ESX Server, or Hyper-V host.

virtual environment
----------------------------------
Refers to a logical system, a VMWare ESX Server console, or an HP-UX system hosting HPVM.

AIX LPAR
----------------------------------
A subset of logical resources that are capable of supporting an operating system. A logical partition consists of CPUs, memory, and I/O slots that are a subset of the pool of available resources within a system.

virtual CPUs
----------------------------------
The number of CPUs allocated to a logical system.

virtual machines
----------------------------------
An empty, isolated, virtual environment, lying on top of a host OS, equipped with virtual hardware (processor, memory, disks, network interfaces, and so on) and managed by a virtualization product. It is where the guest OS is installed.

HP-UX host
----------------------------------
The system where HPVM is installed.

vfaults
----------------------------------
A vfault (virtual fault) is the mechanism that causes paging. Accessing an unmapped valid page causes a resolvable vfault. Accessing an illegal address results in a SIGSEGV.

solaris_zone
----------------------------------
A zone is a virtual operating system abstraction that provides a protected environment in which applications run. Each application receives a dedicated namespace in which to run, and cannot see, monitor, or affect applications running in another zone.

Hyper-V
----------------------------------
A Windows 2008 server with the Hyper-V role enabled.

aix_global_environment
----------------------------------
Global Environment refers to the part of the AIX operating system that hosts workload partitions (WPARs).

aix_system_wpar
----------------------------------
A workload partition (WPAR) is a software-created, virtualized OS environment within a single AIX V6 image. Each WPAR is a secure and isolated environment for the application it hosts. A system WPAR is similar to a typical AIX environment.
aix_wpar
----------------------------------
A workload partition (WPAR) is a software-created, virtualized OS environment within a single AIX V6 image. Each WPAR is a secure and isolated environment for the application it hosts. A system WPAR is similar to a typical AIX environment.

Root partition
----------------------------------
A Windows 2008 server with the Hyper-V role enabled that hosts Virtual Machines. The Root partition is also represented as an instance in the BYLS class, along with the Virtual Machines hosted on it.

file_cache
----------------------------------
File cache is a memory pool used by the system to stage disk IO data for the driver.
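As an illustration of the file cache entry above: on Linux systems the kernel's page cache plays this role, and its current size is reported in /proc/meminfo. The following C sketch (illustrative only; it assumes the standard Cached: field is present in /proc/meminfo) prints that value:

    /* Illustrative sketch: report the size of the Linux file (page) cache
     * by reading the "Cached:" line from /proc/meminfo. */
    #include <stdio.h>

    int main(void)
    {
        FILE *fp = fopen("/proc/meminfo", "r");
        if (fp == NULL) {
            perror("/proc/meminfo");
            return 1;
        }

        char line[256];
        while (fgets(line, sizeof(line), fp) != NULL) {
            unsigned long kb;
            if (sscanf(line, "Cached: %lu kB", &kb) == 1) {   /* matches only the Cached: line */
                printf("file cache: %lu kB\n", kb);
                break;
            }
        }
        fclose(fp);
        return 0;
    }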