HP GlancePlus for Linux Dictionary of Operating System Performance Metrics Print Date 05/2013 GlancePlus for Linux Release 11.12 ************************************************************* Legal Notices ============= Warranty -------- The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. The information contained herein is subject to change without notice. Restricted Rights Legend ------------------------ Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. Copyright Notices ----------------- ©Copyright 2013 Hewlett-Packard Development Company, L.P. All rights reserved. ************************************************************* Introduction ============ This dictionary contains definitions of the Linux operating system performance metrics for HP GlancePlus. This document is divided into the following sections: * "Metric Names by Data Class," which lists the metrics alphabetically by data class. * "Metric Definitions," which describes each metric in alphabetical order. * "Glossary," which provides a glossary of performance metric terms. Global Metrics ---------------------------------- GBL_ACTIVE_CPU GBL_ACTIVE_CPU_CORE GBL_ACTIVE_PROC GBL_ALIVE_PROC GBL_BLANK GBL_BOOT_TIME GBL_COLLECTION_MODE GBL_COLLECTOR GBL_COMPLETED_PROC GBL_CPU_CLOCK GBL_CPU_CYCLE_ENTL_MAX GBL_CPU_CYCLE_ENTL_MIN GBL_CPU_ENTL_MAX GBL_CPU_ENTL_MIN GBL_CPU_ENTL_UTIL GBL_CPU_GUEST_TIME GBL_CPU_GUEST_TIME_CUM GBL_CPU_GUEST_UTIL GBL_CPU_GUEST_UTIL_CUM GBL_CPU_GUEST_UTIL_HIGH GBL_CPU_IDLE_TIME GBL_CPU_IDLE_TIME_CUM GBL_CPU_IDLE_UTIL GBL_CPU_IDLE_UTIL_CUM GBL_CPU_IDLE_UTIL_HIGH GBL_CPU_INTERRUPT_TIME GBL_CPU_INTERRUPT_TIME_CUM GBL_CPU_INTERRUPT_UTIL GBL_CPU_INTERRUPT_UTIL_CUM GBL_CPU_INTERRUPT_UTIL_HIGH GBL_CPU_MT_ENABLED GBL_CPU_NICE_TIME GBL_CPU_NICE_TIME_CUM GBL_CPU_NICE_UTIL GBL_CPU_NICE_UTIL_CUM GBL_CPU_NICE_UTIL_HIGH GBL_CPU_NUM_THREADS GBL_CPU_PHYSC GBL_CPU_PHYS_TOTAL_UTIL GBL_CPU_SHARES_PRIO GBL_CPU_STOLEN_TIME GBL_CPU_STOLEN_TIME_CUM GBL_CPU_STOLEN_UTIL GBL_CPU_STOLEN_UTIL_CUM GBL_CPU_STOLEN_UTIL_HIGH GBL_CPU_SYS_MODE_TIME GBL_CPU_SYS_MODE_TIME_CUM GBL_CPU_SYS_MODE_UTIL GBL_CPU_SYS_MODE_UTIL_CUM GBL_CPU_SYS_MODE_UTIL_HIGH GBL_CPU_TOTAL_TIME GBL_CPU_TOTAL_TIME_CUM GBL_CPU_TOTAL_UTIL GBL_CPU_TOTAL_UTIL_CUM GBL_CPU_TOTAL_UTIL_HIGH GBL_CPU_USER_MODE_TIME GBL_CPU_USER_MODE_TIME_CUM GBL_CPU_USER_MODE_UTIL GBL_CPU_USER_MODE_UTIL_CUM GBL_CPU_USER_MODE_UTIL_HIGH GBL_CPU_WAIT_TIME GBL_CPU_WAIT_TIME_CUM GBL_CPU_WAIT_UTIL GBL_CPU_WAIT_UTIL_CUM GBL_CPU_WAIT_UTIL_HIGH GBL_CSWITCH_RATE GBL_CSWITCH_RATE_CUM GBL_CSWITCH_RATE_HIGH GBL_DISK_PHYS_BYTE GBL_DISK_PHYS_BYTE_RATE GBL_DISK_PHYS_IO GBL_DISK_PHYS_IO_CUM GBL_DISK_PHYS_IO_RATE GBL_DISK_PHYS_IO_RATE_CUM GBL_DISK_PHYS_READ GBL_DISK_PHYS_READ_BYTE GBL_DISK_PHYS_READ_BYTE_CUM GBL_DISK_PHYS_READ_BYTE_RATE GBL_DISK_PHYS_READ_CUM GBL_DISK_PHYS_READ_PCT GBL_DISK_PHYS_READ_PCT_CUM GBL_DISK_PHYS_READ_RATE GBL_DISK_PHYS_READ_RATE_CUM GBL_DISK_PHYS_WRITE GBL_DISK_PHYS_WRITE_BYTE GBL_DISK_PHYS_WRITE_BYTE_CUM GBL_DISK_PHYS_WRITE_BYTE_RATE GBL_DISK_PHYS_WRITE_CUM GBL_DISK_PHYS_WRITE_PCT 
GBL_DISK_PHYS_WRITE_PCT_CUM GBL_DISK_PHYS_WRITE_RATE GBL_DISK_PHYS_WRITE_RATE_CUM GBL_DISK_REQUEST_QUEUE GBL_DISK_SUBSYSTEM_QUEUE GBL_DISK_SUBSYSTEM_WAIT_PCT GBL_DISK_SUBSYSTEM_WAIT_TIME GBL_DISK_TIME_PEAK GBL_DISK_UTIL GBL_DISK_UTIL_PEAK GBL_DISK_UTIL_PEAK_CUM GBL_DISK_UTIL_PEAK_HIGH GBL_DISTRIBUTION GBL_FS_SPACE_UTIL_PEAK GBL_GMTOFFSET GBL_IGNORE_MT GBL_INTERRUPT GBL_INTERRUPT_RATE GBL_INTERRUPT_RATE_CUM GBL_INTERRUPT_RATE_HIGH GBL_INTERVAL GBL_INTERVAL_CUM GBL_JAVAARG GBL_LOADAVG GBL_LOADAVG15 GBL_LOADAVG5 GBL_LOADAVG_CUM GBL_LOADAVG_HIGH GBL_LOST_MI_TRACE_BUFFERS GBL_LS_MODE GBL_LS_ROLE GBL_LS_SHARED GBL_LS_TYPE GBL_MACHINE GBL_MACHINE_MEM_USED GBL_MACHINE_MODEL GBL_MEM_AVAIL GBL_MEM_CACHE GBL_MEM_CACHE_UTIL GBL_MEM_ENTL_MAX GBL_MEM_ENTL_MIN GBL_MEM_FILE_PAGEIN_RATE GBL_MEM_FILE_PAGEOUT_RATE GBL_MEM_FILE_PAGE_CACHE GBL_MEM_FILE_PAGE_CACHE_UTIL GBL_MEM_FREE GBL_MEM_FREE_UTIL GBL_MEM_OVERHEAD GBL_MEM_PAGEIN GBL_MEM_PAGEIN_BYTE GBL_MEM_PAGEIN_BYTE_CUM GBL_MEM_PAGEIN_BYTE_RATE GBL_MEM_PAGEIN_BYTE_RATE_CUM GBL_MEM_PAGEIN_BYTE_RATE_HIGH GBL_MEM_PAGEIN_CUM GBL_MEM_PAGEIN_RATE GBL_MEM_PAGEIN_RATE_CUM GBL_MEM_PAGEIN_RATE_HIGH GBL_MEM_PAGEOUT GBL_MEM_PAGEOUT_BYTE GBL_MEM_PAGEOUT_BYTE_CUM GBL_MEM_PAGEOUT_BYTE_RATE GBL_MEM_PAGEOUT_BYTE_RATE_CUM GBL_MEM_PAGEOUT_BYTE_RATE_HIGH GBL_MEM_PAGEOUT_CUM GBL_MEM_PAGEOUT_RATE GBL_MEM_PAGEOUT_RATE_CUM GBL_MEM_PAGEOUT_RATE_HIGH GBL_MEM_PAGE_FAULT GBL_MEM_PAGE_FAULT_CUM GBL_MEM_PAGE_FAULT_RATE GBL_MEM_PAGE_FAULT_RATE_CUM GBL_MEM_PAGE_FAULT_RATE_HIGH GBL_MEM_PAGE_REQUEST GBL_MEM_PAGE_REQUEST_CUM GBL_MEM_PAGE_REQUEST_RATE GBL_MEM_PAGE_REQUEST_RATE_CUM GBL_MEM_PAGE_REQUEST_RATE_HIGH GBL_MEM_PHYS GBL_MEM_PHYS_SWAPPED GBL_MEM_SHARES_PRIO GBL_MEM_SWAPIN_BYTE GBL_MEM_SWAPIN_BYTE_CUM GBL_MEM_SWAPIN_BYTE_RATE GBL_MEM_SWAPIN_BYTE_RATE_CUM GBL_MEM_SWAPIN_BYTE_RATE_HIGH GBL_MEM_SWAPOUT_BYTE GBL_MEM_SWAPOUT_BYTE_CUM GBL_MEM_SWAPOUT_BYTE_RATE GBL_MEM_SWAPOUT_BYTE_RATE_CUM GBL_MEM_SWAPOUT_BYTE_RATE_HIGH GBL_MEM_SYS GBL_MEM_SYS_UTIL GBL_MEM_USER GBL_MEM_USER_UTIL GBL_MEM_UTIL GBL_MEM_UTIL_CUM GBL_MEM_UTIL_HIGH GBL_NET_COLLISION GBL_NET_COLLISION_1_MIN_RATE GBL_NET_COLLISION_CUM GBL_NET_COLLISION_PCT GBL_NET_COLLISION_PCT_CUM GBL_NET_COLLISION_RATE GBL_NET_ERROR GBL_NET_ERROR_1_MIN_RATE GBL_NET_ERROR_CUM GBL_NET_ERROR_RATE GBL_NET_IN_ERROR GBL_NET_IN_ERROR_CUM GBL_NET_IN_ERROR_PCT GBL_NET_IN_ERROR_PCT_CUM GBL_NET_IN_ERROR_RATE GBL_NET_IN_ERROR_RATE_CUM GBL_NET_IN_PACKET GBL_NET_IN_PACKET_CUM GBL_NET_IN_PACKET_RATE GBL_NET_OUT_ERROR GBL_NET_OUT_ERROR_CUM GBL_NET_OUT_ERROR_PCT GBL_NET_OUT_ERROR_PCT_CUM GBL_NET_OUT_ERROR_RATE GBL_NET_OUT_ERROR_RATE_CUM GBL_NET_OUT_PACKET GBL_NET_OUT_PACKET_CUM GBL_NET_OUT_PACKET_RATE GBL_NET_PACKET GBL_NET_PACKET_RATE GBL_NET_UTIL_PEAK GBL_NFS_CALL GBL_NFS_CALL_RATE GBL_NFS_CLIENT_BAD_CALL GBL_NFS_CLIENT_BAD_CALL_CUM GBL_NFS_CLIENT_CALL GBL_NFS_CLIENT_CALL_CUM GBL_NFS_CLIENT_CALL_RATE GBL_NFS_CLIENT_IO GBL_NFS_CLIENT_IO_CUM GBL_NFS_CLIENT_IO_PCT GBL_NFS_CLIENT_IO_PCT_CUM GBL_NFS_CLIENT_IO_RATE GBL_NFS_CLIENT_IO_RATE_CUM GBL_NFS_CLIENT_READ_RATE GBL_NFS_CLIENT_READ_RATE_CUM GBL_NFS_CLIENT_WRITE_RATE GBL_NFS_CLIENT_WRITE_RATE_CUM GBL_NFS_SERVER_BAD_CALL GBL_NFS_SERVER_BAD_CALL_CUM GBL_NFS_SERVER_CALL GBL_NFS_SERVER_CALL_CUM GBL_NFS_SERVER_CALL_RATE GBL_NFS_SERVER_IO GBL_NFS_SERVER_IO_CUM GBL_NFS_SERVER_IO_PCT GBL_NFS_SERVER_IO_PCT_CUM GBL_NFS_SERVER_IO_RATE GBL_NFS_SERVER_IO_RATE_CUM GBL_NFS_SERVER_READ_RATE GBL_NFS_SERVER_READ_RATE_CUM GBL_NFS_SERVER_WRITE_RATE GBL_NFS_SERVER_WRITE_RATE_CUM GBL_NODENAME GBL_NUM_ACTIVE_LS GBL_NUM_APP GBL_NUM_CPU 
GBL_NUM_CPU_CORE GBL_NUM_DISK GBL_NUM_LS GBL_NUM_NETWORK GBL_NUM_SOCKET GBL_NUM_SWAP GBL_NUM_TT GBL_NUM_USER GBL_OSKERNELTYPE GBL_OSKERNELTYPE_INT GBL_OSNAME GBL_OSRELEASE GBL_OSVERSION GBL_PRI_QUEUE GBL_PRI_WAIT_PCT GBL_PRI_WAIT_TIME GBL_PROC_SAMPLE GBL_RUN_QUEUE GBL_RUN_QUEUE_CUM GBL_RUN_QUEUE_HIGH GBL_SAMPLE GBL_SERIALNO GBL_STARTDATE GBL_STARTED_PROC GBL_STARTED_PROC_RATE GBL_STARTTIME GBL_STATDATE GBL_STATTIME GBL_SWAP_SPACE_AVAIL GBL_SWAP_SPACE_AVAIL_KB GBL_SWAP_SPACE_DEVICE_AVAIL GBL_SWAP_SPACE_DEVICE_UTIL GBL_SWAP_SPACE_USED GBL_SWAP_SPACE_USED_UTIL GBL_SWAP_SPACE_UTIL GBL_SWAP_SPACE_UTIL_CUM GBL_SWAP_SPACE_UTIL_HIGH GBL_SYSTEM_ID GBL_SYSTEM_TYPE GBL_SYSTEM_UPTIME_HOURS GBL_SYSTEM_UPTIME_SECONDS GBL_THRESHOLD_PROCCPU GBL_THRESHOLD_PROCDISK GBL_THRESHOLD_PROCIO GBL_THRESHOLD_PROCMEM GBL_TT_OVERFLOW_COUNT Table Metrics ---------------------------------- TBL_BUFFER_HEADER_AVAIL TBL_BUFFER_HEADER_USED TBL_BUFFER_HEADER_USED_HIGH TBL_BUFFER_HEADER_UTIL TBL_BUFFER_HEADER_UTIL_HIGH TBL_FILE_LOCK_AVAIL TBL_FILE_LOCK_USED TBL_FILE_LOCK_USED_HIGH TBL_FILE_LOCK_UTIL TBL_FILE_LOCK_UTIL_HIGH TBL_FILE_TABLE_AVAIL TBL_FILE_TABLE_USED TBL_FILE_TABLE_USED_HIGH TBL_FILE_TABLE_UTIL TBL_FILE_TABLE_UTIL_HIGH TBL_INODE_CACHE_AVAIL TBL_INODE_CACHE_HIGH TBL_INODE_CACHE_USED TBL_MSG_BUFFER_ACTIVE TBL_MSG_BUFFER_AVAIL TBL_MSG_BUFFER_HIGH TBL_MSG_BUFFER_USED TBL_MSG_TABLE_ACTIVE TBL_MSG_TABLE_AVAIL TBL_MSG_TABLE_USED TBL_MSG_TABLE_UTIL TBL_MSG_TABLE_UTIL_HIGH TBL_NUM_NFSDS TBL_SEM_TABLE_ACTIVE TBL_SEM_TABLE_AVAIL TBL_SEM_TABLE_USED TBL_SEM_TABLE_UTIL TBL_SEM_TABLE_UTIL_HIGH TBL_SHMEM_ACTIVE TBL_SHMEM_AVAIL TBL_SHMEM_HIGH TBL_SHMEM_TABLE_ACTIVE TBL_SHMEM_TABLE_AVAIL TBL_SHMEM_TABLE_USED TBL_SHMEM_TABLE_UTIL TBL_SHMEM_TABLE_UTIL_HIGH TBL_SHMEM_USED Process Metrics ---------------------------------- PROC_APP_ID PROC_APP_NAME PROC_CHILD_CPU_SYS_MODE_UTIL PROC_CHILD_CPU_TOTAL_UTIL PROC_CHILD_CPU_USER_MODE_UTIL PROC_CPU_ALIVE_SYS_MODE_UTIL PROC_CPU_ALIVE_TOTAL_UTIL PROC_CPU_ALIVE_USER_MODE_UTIL PROC_CPU_LAST_USED PROC_CPU_SYS_MODE_TIME PROC_CPU_SYS_MODE_TIME_CUM PROC_CPU_SYS_MODE_UTIL PROC_CPU_SYS_MODE_UTIL_CUM PROC_CPU_TOTAL_TIME PROC_CPU_TOTAL_TIME_CUM PROC_CPU_TOTAL_UTIL PROC_CPU_TOTAL_UTIL_CUM PROC_CPU_USER_MODE_TIME PROC_CPU_USER_MODE_TIME_CUM PROC_CPU_USER_MODE_UTIL PROC_CPU_USER_MODE_UTIL_CUM PROC_DISK_PHYS_IO_RATE PROC_DISK_PHYS_IO_RATE_CUM PROC_DISK_PHYS_READ PROC_DISK_PHYS_READ_CUM PROC_DISK_PHYS_READ_RATE PROC_DISK_PHYS_WRITE PROC_DISK_PHYS_WRITE_CUM PROC_DISK_PHYS_WRITE_RATE PROC_DISK_SUBSYSTEM_WAIT_PCT PROC_DISK_SUBSYSTEM_WAIT_PCT_CUM PROC_DISK_SUBSYSTEM_WAIT_TIME PROC_DISK_SUBSYSTEM_WAIT_TIME_CUM PROC_EUID PROC_FORCED_CSWITCH PROC_FORCED_CSWITCH_CUM PROC_GROUP_ID PROC_GROUP_NAME PROC_INTEREST PROC_INTERVAL PROC_INTERVAL_ALIVE PROC_INTERVAL_CUM PROC_IO_BYTE PROC_IO_BYTE_CUM PROC_IO_BYTE_RATE PROC_IO_BYTE_RATE_CUM PROC_MAJOR_FAULT PROC_MAJOR_FAULT_CUM PROC_MEM_DATA_VIRT PROC_MEM_LOCKED PROC_MEM_RES PROC_MEM_RES_HIGH PROC_MEM_SHARED_RES PROC_MEM_STACK_VIRT PROC_MEM_TEXT_VIRT PROC_MEM_VIRT PROC_MINOR_FAULT PROC_MINOR_FAULT_CUM PROC_NICE_PRI PROC_PAGEFAULT PROC_PAGEFAULT_RATE PROC_PAGEFAULT_RATE_CUM PROC_PARENT_PROC_ID PROC_PRI PROC_PRI_WAIT_PCT PROC_PRI_WAIT_PCT_CUM PROC_PRI_WAIT_TIME PROC_PRI_WAIT_TIME_CUM PROC_PROC_ARGV1 PROC_PROC_CMD PROC_PROC_ID PROC_PROC_NAME PROC_RUN_TIME PROC_SCHEDULER PROC_STARTTIME PROC_STATE PROC_STATE_FLAG PROC_STOP_REASON PROC_STOP_REASON_FLAG PROC_THREAD_COUNT PROC_THREAD_ID PROC_TIME PROC_TOP_CPU_INDEX PROC_TOP_DISK_INDEX PROC_TTY PROC_TTY_DEV PROC_UID PROC_USER_NAME 
PROC_VOLUNTARY_CSWITCH PROC_VOLUNTARY_CSWITCH_CUM Application Metrics ---------------------------------- APP_ACTIVE_APP APP_ACTIVE_PROC APP_ALIVE_PROC APP_COMPLETED_PROC APP_CPU_SYS_MODE_TIME APP_CPU_SYS_MODE_UTIL APP_CPU_TOTAL_TIME APP_CPU_TOTAL_UTIL APP_CPU_TOTAL_UTIL_CUM APP_CPU_USER_MODE_TIME APP_CPU_USER_MODE_UTIL APP_DISK_PHYS_IO_RATE APP_DISK_PHYS_READ APP_DISK_PHYS_READ_RATE APP_DISK_PHYS_WRITE APP_DISK_PHYS_WRITE_RATE APP_DISK_SUBSYSTEM_QUEUE APP_DISK_SUBSYSTEM_WAIT_PCT APP_INTERVAL APP_INTERVAL_CUM APP_IO_BYTE APP_IO_BYTE_RATE APP_MAJOR_FAULT APP_MAJOR_FAULT_RATE APP_MEM_RES APP_MEM_UTIL APP_MEM_VIRT APP_MINOR_FAULT APP_MINOR_FAULT_RATE APP_NAME APP_NUM APP_PRI APP_PRI_QUEUE APP_PRI_WAIT_PCT APP_PROC_RUN_TIME APP_SAMPLE APP_TIME Process By File Metrics ---------------------------------- PROC_FILE_MODE PROC_FILE_NAME PROC_FILE_NUMBER PROC_FILE_OPEN PROC_FILE_TYPE By Disk Metrics ---------------------------------- BYDSK_AVG_REQUEST_QUEUE BYDSK_AVG_SERVICE_TIME BYDSK_BUSY_TIME BYDSK_DEVNAME BYDSK_DEVNO BYDSK_DIRNAME BYDSK_ID BYDSK_INTERVAL BYDSK_INTERVAL_CUM BYDSK_PHYS_BYTE BYDSK_PHYS_BYTE_RATE BYDSK_PHYS_BYTE_RATE_CUM BYDSK_PHYS_IO BYDSK_PHYS_IO_RATE BYDSK_PHYS_IO_RATE_CUM BYDSK_PHYS_READ BYDSK_PHYS_READ_BYTE BYDSK_PHYS_READ_BYTE_RATE BYDSK_PHYS_READ_BYTE_RATE_CUM BYDSK_PHYS_READ_RATE BYDSK_PHYS_READ_RATE_CUM BYDSK_PHYS_WRITE BYDSK_PHYS_WRITE_BYTE BYDSK_PHYS_WRITE_BYTE_RATE BYDSK_PHYS_WRITE_BYTE_RATE_CUM BYDSK_PHYS_WRITE_RATE BYDSK_PHYS_WRITE_RATE_CUM BYDSK_QUEUE_0_UTIL BYDSK_QUEUE_2_UTIL BYDSK_QUEUE_4_UTIL BYDSK_QUEUE_8_UTIL BYDSK_QUEUE_X_UTIL BYDSK_REQUEST_QUEUE BYDSK_TIME BYDSK_UTIL File System Metrics ---------------------------------- FS_BLOCK_SIZE FS_DEVNAME FS_DEVNO FS_DIRNAME FS_FRAG_SIZE FS_INODE_UTIL FS_MAX_INODES FS_MAX_SIZE FS_PHYS_IO_RATE FS_PHYS_IO_RATE_CUM FS_PHYS_READ_BYTE_RATE FS_PHYS_READ_BYTE_RATE_CUM FS_PHYS_READ_RATE FS_PHYS_READ_RATE_CUM FS_PHYS_WRITE_BYTE_RATE FS_PHYS_WRITE_BYTE_RATE_CUM FS_PHYS_WRITE_RATE FS_PHYS_WRITE_RATE_CUM FS_SPACE_RESERVED FS_SPACE_USED FS_SPACE_UTIL FS_TYPE By Network Interface Metrics ---------------------------------- BYNETIF_COLLISION BYNETIF_COLLISION_1_MIN_RATE BYNETIF_COLLISION_RATE BYNETIF_COLLISION_RATE_CUM BYNETIF_ERROR BYNETIF_ERROR_1_MIN_RATE BYNETIF_ERROR_RATE BYNETIF_ERROR_RATE_CUM BYNETIF_ID BYNETIF_IN_BYTE BYNETIF_IN_BYTE_RATE BYNETIF_IN_BYTE_RATE_CUM BYNETIF_IN_PACKET BYNETIF_IN_PACKET_RATE BYNETIF_IN_PACKET_RATE_CUM BYNETIF_NAME BYNETIF_NET_SPEED BYNETIF_NET_TYPE BYNETIF_OUT_BYTE BYNETIF_OUT_BYTE_RATE BYNETIF_OUT_BYTE_RATE_CUM BYNETIF_OUT_PACKET BYNETIF_OUT_PACKET_RATE BYNETIF_OUT_PACKET_RATE_CUM BYNETIF_PACKET_RATE BYNETIF_UTIL By Swap Metrics ---------------------------------- BYSWP_SWAP_PRI BYSWP_SWAP_SPACE_AVAIL BYSWP_SWAP_SPACE_NAME BYSWP_SWAP_SPACE_USED BYSWP_SWAP_TYPE By CPU Metrics ---------------------------------- BYCPU_ACTIVE BYCPU_CPU_CLOCK BYCPU_CPU_GUEST_TIME BYCPU_CPU_GUEST_TIME_CUM BYCPU_CPU_GUEST_UTIL BYCPU_CPU_GUEST_UTIL_CUM BYCPU_CPU_INTERRUPT_TIME BYCPU_CPU_INTERRUPT_TIME_CUM BYCPU_CPU_INTERRUPT_UTIL BYCPU_CPU_INTERRUPT_UTIL_CUM BYCPU_CPU_NICE_TIME BYCPU_CPU_NICE_TIME_CUM BYCPU_CPU_NICE_UTIL BYCPU_CPU_NICE_UTIL_CUM BYCPU_CPU_STOLEN_TIME BYCPU_CPU_STOLEN_TIME_CUM BYCPU_CPU_STOLEN_UTIL BYCPU_CPU_STOLEN_UTIL_CUM BYCPU_CPU_SYS_MODE_TIME BYCPU_CPU_SYS_MODE_TIME_CUM BYCPU_CPU_SYS_MODE_UTIL BYCPU_CPU_SYS_MODE_UTIL_CUM BYCPU_CPU_TOTAL_TIME BYCPU_CPU_TOTAL_TIME_CUM BYCPU_CPU_TOTAL_UTIL BYCPU_CPU_TOTAL_UTIL_CUM BYCPU_CPU_TYPE BYCPU_CPU_USER_MODE_TIME BYCPU_CPU_USER_MODE_TIME_CUM BYCPU_CPU_USER_MODE_UTIL 
BYCPU_CPU_USER_MODE_UTIL_CUM BYCPU_ID BYCPU_INTERRUPT BYCPU_INTERRUPT_RATE BYCPU_STATE Process By Memory Region Metrics ---------------------------------- PROC_REGION_FILENAME PROC_REGION_PRIVATE_SHARED_FLAG PROC_REGION_PROT_FLAG PROC_REGION_TYPE PROC_REGION_VIRT PROC_REGION_VIRT_ADDRS PROC_REGION_VIRT_DATA PROC_REGION_VIRT_OTHER PROC_REGION_VIRT_SHMEM PROC_REGION_VIRT_STACK PROC_REGION_VIRT_TEXT By Operation Metrics ---------------------------------- BYOP_CLIENT_COUNT BYOP_CLIENT_COUNT_CUM BYOP_INTERVAL BYOP_INTERVAL_CUM BYOP_NAME BYOP_SERVER_COUNT BYOP_SERVER_COUNT_CUM Transaction Metrics ---------------------------------- TT_ABORT TT_ABORT_CUM TT_ABORT_WALL_TIME TT_ABORT_WALL_TIME_CUM TT_APPNO TT_APP_NAME TT_CLIENT_CORRELATOR_COUNT TT_COUNT TT_COUNT_CUM TT_FAILED TT_FAILED_CUM TT_FAILED_WALL_TIME TT_FAILED_WALL_TIME_CUM TT_INFO TT_INPROGRESS_COUNT TT_INTERVAL TT_INTERVAL_CUM TT_MEASUREMENT_COUNT TT_NAME TT_SLO_COUNT TT_SLO_COUNT_CUM TT_SLO_PERCENT TT_SLO_THRESHOLD TT_TRAN_1_MIN_RATE TT_TRAN_ID TT_UID TT_UNAME TT_UPDATE TT_UPDATE_CUM TT_WALL_TIME TT_WALL_TIME_CUM TT_WALL_TIME_PER_TRAN TT_WALL_TIME_PER_TRAN_CUM Transaction Measurement Section Metrics ---------------------------------- TTBIN_TRANS_COUNT TTBIN_TRANS_COUNT_CUM TTBIN_UPPER_RANGE Transaction Client Metrics ---------------------------------- TT_CLIENT_ABORT TT_CLIENT_ABORT_CUM TT_CLIENT_ABORT_WALL_TIME TT_CLIENT_ABORT_WALL_TIME_CUM TT_CLIENT_ADDRESS TT_CLIENT_ADDRESS_FORMAT TT_CLIENT_TRAN_ID TT_CLIENT_COUNT TT_CLIENT_COUNT_CUM TT_CLIENT_FAILED TT_CLIENT_FAILED_CUM TT_CLIENT_FAILED_WALL_TIME TT_CLIENT_FAILED_WALL_TIME_CUM TT_CLIENT_INTERVAL TT_CLIENT_INTERVAL_CUM TT_CLIENT_SLO_COUNT TT_CLIENT_SLO_COUNT_CUM TT_CLIENT_UPDATE TT_CLIENT_UPDATE_CUM TT_CLIENT_WALL_TIME TT_CLIENT_WALL_TIME_CUM TT_CLIENT_WALL_TIME_PER_TRAN TT_CLIENT_WALL_TIME_PER_TRAN_CUM Transaction Instance Metrics ---------------------------------- TT_INSTANCE_ID TT_INSTANCE_PROC_ID TT_INSTANCE_START_TIME TT_INSTANCE_STOP_TIME TT_INSTANCE_THREAD_ID TT_INSTANCE_UPDATE_COUNT TT_INSTANCE_UPDATE_TIME TT_INSTANCE_WALL_TIME Transaction User Defined Measurement Metrics ---------------------------------- TT_USER_MEASUREMENT_AVG TT_USER_MEASUREMENT_MAX TT_USER_MEASUREMENT_MIN TT_USER_MEASUREMENT_NAME TT_USER_MEASUREMENT_STRING1024_VALUE TT_USER_MEASUREMENT_STRING32_VALUE TT_USER_MEASUREMENT_TYPE TT_USER_MEASUREMENT_VALUE Transaction Client User Defined Measurement Metrics ---------------------------------- TT_CLIENT_USER_MEASUREMENT_AVG TT_CLIENT_USER_MEASUREMENT_MAX TT_CLIENT_USER_MEASUREMENT_MIN TT_CLIENT_USER_MEASUREMENT_NAME TT_CLIENT_USER_MEASUREMENT_STRING1024_VALUE TT_CLIENT_USER_MEASUREMENT_STRING32_VALUE TT_CLIENT_USER_MEASUREMENT_TYPE TT_CLIENT_USER_MEASUREMENT_VALUE Transaction Instance User Defined Measurement Metrics ---------------------------------- TT_INSTANCE_USER_MEASUREMENT_AVG TT_INSTANCE_USER_MEASUREMENT_MAX TT_INSTANCE_USER_MEASUREMENT_MIN TT_INSTANCE_USER_MEASUREMENT_NAME TT_INSTANCE_USER_MEASUREMENT_STRING1024_VALUE TT_INSTANCE_USER_MEASUREMENT_STRING32_VALUE TT_INSTANCE_USER_MEASUREMENT_TYPE TT_INSTANCE_USER_MEASUREMENT_VALUE By Logical System Metrics ---------------------------------- BYLS_BOOT_TIME BYLS_CLUSTER_NAME BYLS_CPU_CLOCK BYLS_CPU_CYCLE_ENTL_MAX BYLS_CPU_CYCLE_ENTL_MIN BYLS_CPU_CYCLE_TOTAL_USED BYLS_CPU_EFFECTIVE_UTIL BYLS_CPU_ENTL_EMIN BYLS_CPU_ENTL_MAX BYLS_CPU_ENTL_MIN BYLS_CPU_ENTL_UTIL BYLS_CPU_FAILOVER BYLS_CPU_MT_ENABLED BYLS_CPU_PHYSC BYLS_CPU_PHYS_READY_UTIL BYLS_CPU_PHYS_SYS_MODE_UTIL BYLS_CPU_PHYS_TOTAL_TIME 
BYLS_CPU_PHYS_TOTAL_UTIL BYLS_CPU_PHYS_USER_MODE_UTIL BYLS_CPU_PHYS_WAIT_UTIL BYLS_CPU_SHARES_PRIO BYLS_CPU_SYS_MODE_UTIL BYLS_CPU_TOTAL_UTIL BYLS_CPU_UNRESERVED BYLS_CPU_USER_MODE_UTIL BYLS_DATACENTER_NAME BYLS_DISK_CAPACITY BYLS_DISK_COMMAND_ABORT_RATE BYLS_DISK_FREE_SPACE BYLS_DISK_IORM_ENABLED BYLS_DISK_IORM_THRESHOLD BYLS_DISK_PHYS_BYTE BYLS_DISK_PHYS_BYTE_RATE BYLS_DISK_PHYS_READ BYLS_DISK_PHYS_READ_BYTE_RATE BYLS_DISK_PHYS_READ_RATE BYLS_DISK_PHYS_WRITE BYLS_DISK_PHYS_WRITE_BYTE_RATE BYLS_DISK_PHYS_WRITE_RATE BYLS_DISK_READ_LATENCY BYLS_DISK_SHARE_PRIORITY BYLS_DISK_THROUGHPUT_CONTENTION BYLS_DISK_THROUGPUT_USAGE BYLS_DISK_UTIL BYLS_DISK_UTIL_PEAK BYLS_DISPLAY_NAME BYLS_GUEST_TOOLS_STATUS BYLS_IP_ADDRESS BYLS_LS_CONNECTION_STATE BYLS_LS_HOSTNAME BYLS_LS_HOST_HOSTNAME BYLS_LS_ID BYLS_LS_MODE BYLS_LS_NAME BYLS_LS_NUM_SNAPSHOTS BYLS_LS_OSTYPE BYLS_LS_PARENT_TYPE BYLS_LS_PARENT_UUID BYLS_LS_PATH BYLS_LS_ROLE BYLS_LS_SHARED BYLS_LS_STATE BYLS_LS_STATE_CHANGE_TIME BYLS_LS_TYPE BYLS_LS_UUID BYLS_MACHINE_MODEL BYLS_MEM_ACTIVE BYLS_MEM_AVAIL BYLS_MEM_BALLOON_USED BYLS_MEM_BALLOON_UTIL BYLS_MEM_EFFECTIVE_UTIL BYLS_MEM_ENTL BYLS_MEM_ENTL_MAX BYLS_MEM_ENTL_MIN BYLS_MEM_ENTL_UTIL BYLS_MEM_FREE BYLS_MEM_FREE_UTIL BYLS_MEM_HEALTH BYLS_MEM_OVERHEAD BYLS_MEM_PHYS BYLS_MEM_PHYS_UTIL BYLS_MEM_SHARES_PRIO BYLS_MEM_SWAPIN BYLS_MEM_SWAPOUT BYLS_MEM_SWAPPED BYLS_MEM_SWAPTARGET BYLS_MEM_SWAP_UTIL BYLS_MEM_SYS BYLS_MEM_UNRESERVED BYLS_MEM_USED BYLS_MULTIACC_ENABLED BYLS_NET_BYTE_RATE BYLS_NET_IN_BYTE BYLS_NET_IN_PACKET BYLS_NET_IN_PACKET_RATE BYLS_NET_OUT_BYTE BYLS_NET_OUT_PACKET BYLS_NET_OUT_PACKET_RATE BYLS_NET_PACKET_RATE BYLS_NUM_ACTIVE_LS BYLS_NUM_CLONES BYLS_NUM_CPU BYLS_NUM_CPU_CORE BYLS_NUM_CREATE BYLS_NUM_DEPLOY BYLS_NUM_DESTROY BYLS_NUM_DISK BYLS_NUM_HOSTS BYLS_NUM_LS BYLS_NUM_NETIF BYLS_NUM_RECONFIGURE BYLS_NUM_SOCKET BYLS_SCHEDULING_CLASS BYLS_SUBTYPE BYLS_TOTAL_SV_MOTIONS BYLS_TOTAL_VM_MOTIONS BYLS_UPTIME_HOURS BYLS_UPTIME_SECONDS BYLS_VC_IP_ADDRESS

Metric Definitions
==================

APP_ACTIVE_APP
----------------------------------
The number of applications that had processes active (consuming CPU
resources) during the interval.

APP_ACTIVE_PROC
----------------------------------
An active process is one that exists and consumes some CPU time.
APP_ACTIVE_PROC is the sum of the alive-process-time/interval-time ratios of
every process belonging to an application that is active (uses any CPU time)
during an interval.

The following diagram of a four-second interval, showing two processes, A and
B, belonging to an application, illustrates the definition above. Note the
difference between active processes, which consume CPU time, and alive
processes, which merely exist on the system.

          ----------- Seconds -----------
          1         2         3         4
Proc      ----      ----      ----      ----
----
A         live      live      live      live
B         live/CPU  live/CPU  live      dead

Process A is alive for the entire four-second interval, but consumes no CPU.
A's contribution to APP_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to
APP_ACTIVE_PROC. B's contribution to APP_ALIVE_PROC is 3*1/4. B contributes
2*1/4 to APP_ACTIVE_PROC. Thus, for this interval, APP_ACTIVE_PROC equals 0.5
and APP_ALIVE_PROC equals 1.75.

Because a process may be alive but not active, APP_ACTIVE_PROC will always be
less than or equal to APP_ALIVE_PROC.

This metric indicates the number of processes in an application group that
are competing for the CPU. This metric is useful, along with other metrics,
for comparing loads placed on the system by different groups of processes.
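
The worked example above can be restated as a short calculation. The
following Python sketch is purely illustrative: the four-second interval and
the per-second states of processes A and B are taken from the diagram above,
and nothing here is part of the GlancePlus product itself.

    # Illustrative only: recompute the APP_ALIVE_PROC / APP_ACTIVE_PROC values
    # from the four-second example above.
    INTERVAL_SECONDS = 4

    # One (alive, used_cpu) flag pair per second of the interval, per process.
    processes = {
        "A": [(True, False), (True, False), (True, False), (True, False)],
        "B": [(True, True),  (True, True),  (True, False), (False, False)],
    }

    # ALIVE: each process contributes alive-seconds / interval-seconds.
    app_alive_proc = sum(
        sum(1 for alive, _ in seconds if alive) / INTERVAL_SECONDS
        for seconds in processes.values()
    )

    # ACTIVE: only processes that consumed any CPU contribute, and the worked
    # example above counts the seconds in which CPU was actually consumed.
    app_active_proc = sum(
        sum(1 for alive, cpu in seconds if alive and cpu) / INTERVAL_SECONDS
        for seconds in processes.values()
        if any(cpu for _, cpu in seconds)
    )

    print(app_alive_proc)   # 1.75 (A contributes 4/4, B contributes 3/4)
    print(app_active_proc)  # 0.5  (B contributes 2/4, A contributes nothing)
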
On non-HP-UX systems, this metric is derived from sampled process data. Since
the data for a process is not available after the process has died on this
operating system, a process whose life is shorter than the sampling interval
may not be seen when the samples are taken. Thus, this metric may be slightly
less than the actual value. Increasing the sampling frequency captures a more
accurate count, but the overhead of collection may also rise.

APP_ALIVE_PROC
----------------------------------
An alive process is one that exists on the system. APP_ALIVE_PROC is the sum
of the alive-process-time/interval-time ratios for every process belonging to
a given application.

The following diagram of a four-second interval, showing two processes, A and
B, belonging to an application, illustrates the definition above. Note the
difference between active processes, which consume CPU time, and alive
processes, which merely exist on the system.

          ----------- Seconds -----------
          1         2         3         4
Proc      ----      ----      ----      ----
----
A         live      live      live      live
B         live/CPU  live/CPU  live      dead

Process A is alive for the entire four-second interval but consumes no CPU.
A's contribution to APP_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to
APP_ACTIVE_PROC. B's contribution to APP_ALIVE_PROC is 3*1/4. B contributes
2*1/4 to APP_ACTIVE_PROC. Thus, for this interval, APP_ACTIVE_PROC equals 0.5
and APP_ALIVE_PROC equals 1.75.

Because a process may be alive but not active, APP_ACTIVE_PROC will always be
less than or equal to APP_ALIVE_PROC.

On non-HP-UX systems, this metric is derived from sampled process data. Since
the data for a process is not available after the process has died on this
operating system, a process whose life is shorter than the sampling interval
may not be seen when the samples are taken. Thus, this metric may be slightly
less than the actual value. Increasing the sampling frequency captures a more
accurate count, but the overhead of collection may also rise.

APP_COMPLETED_PROC
----------------------------------
The number of processes in this group that completed during the interval.

On non-HP-UX systems, this metric is derived from sampled process data. Since
the data for a process is not available after the process has died on this
operating system, a process whose life is shorter than the sampling interval
may not be seen when the samples are taken. Thus, this metric may be slightly
less than the actual value. Increasing the sampling frequency captures a more
accurate count, but the overhead of collection may also rise.

APP_CPU_SYS_MODE_TIME
----------------------------------
The time, in seconds, during the interval that the CPU was in system mode for
processes in this group.

A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine's
privileged protection mode and runs in system mode.

On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.

On platforms other than HP-UX, if the ignore_mt flag is set (true) in the
parm file, this metric will report values normalized against the number of
active cores in the system. If the ignore_mt flag is not set (false) in the
parm file, this metric will report values normalized against the number of
threads in the system.
This flag is a no-op if multithreading is turned off.

On HP-UX, CPU utilization normalization is controlled by the "-ignore_mt"
option of the midaemon(1m). To change normalization from core-based to
logical-CPU-based, or vice versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with "-ignore_mt" by default, add this option to
the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding
ovpa startup. Note that on HP-UX, unlike other platforms, specifying
core-based normalization affects CPU, application, process, and thread
metrics.

APP_CPU_SYS_MODE_UTIL
----------------------------------
The percentage of time during the interval that the CPU was used in system
mode for processes in this group.

A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine's
privileged protection mode and runs in system mode.

On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.

High system CPU utilization is normal for IO-intensive groups. Abnormally
high system CPU utilization can indicate that a hardware problem is causing a
high interrupt rate. It can also indicate programs that are not making
efficient use of system calls.

On platforms other than HP-UX, if the ignore_mt flag is set (true) in the
parm file, this metric will report values normalized against the number of
active cores in the system. If the ignore_mt flag is not set (false) in the
parm file, this metric will report values normalized against the number of
threads in the system. This flag is a no-op if multithreading is turned off.

On HP-UX, CPU utilization normalization is controlled by the "-ignore_mt"
option of the midaemon(1m). To change normalization from core-based to
logical-CPU-based, or vice versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with "-ignore_mt" by default, add this option to
the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding
ovpa startup. Note that on HP-UX, unlike other platforms, specifying
core-based normalization affects CPU, application, process, and thread
metrics.

APP_CPU_TOTAL_TIME
----------------------------------
The total CPU time, in seconds, devoted to processes in this group during the
interval.

On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.

On platforms other than HP-UX, if the ignore_mt flag is set (true) in the
parm file, this metric will report values normalized against the number of
active cores in the system. If the ignore_mt flag is not set (false) in the
parm file, this metric will report values normalized against the number of
threads in the system. This flag is a no-op if multithreading is turned off.

On HP-UX, CPU utilization normalization is controlled by the "-ignore_mt"
option of the midaemon(1m).
To change normalization from core-based to logical-CPU-based, or vice versa,
all performance components (scopeux, glance, perfd) must be shut down and the
midaemon restarted in the desired mode. To start the midaemon with
"-ignore_mt" by default, add this option to the /etc/rc.config.d/ovpa control
file. Refer to the documentation regarding ovpa startup. Note that on HP-UX,
unlike other platforms, specifying core-based normalization affects CPU,
application, process, and thread metrics.

APP_CPU_TOTAL_UTIL
----------------------------------
The percentage of the total CPU time devoted to processes in this group
during the interval. This indicates the relative CPU load placed on the
system by processes in this group.

On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.

Large values for this metric may indicate that this group is causing a CPU
bottleneck. This would be normal in a computation-bound workload, but might
mean that processes are using excessive CPU time and perhaps looping. If the
"other" application shows significant amounts of CPU, you may want to
consider tuning your parm file so that process activity is accounted for in
known applications.

APP_CPU_TOTAL_UTIL = APP_CPU_SYS_MODE_UTIL + APP_CPU_USER_MODE_UTIL

NOTE: On Windows, the sum of the APP_CPU_TOTAL_UTIL metrics may not equal
GBL_CPU_TOTAL_UTIL. Microsoft states that "this is expected behavior" because
the GBL_CPU_TOTAL_UTIL metric is taken from the NT performance library
Processor objects, while the APP_CPU_TOTAL_UTIL metrics are taken from the
Process objects. Microsoft states that there can be CPU time accounted for in
the Processor system objects that may not be seen in the Process objects.

On platforms other than HP-UX, if the ignore_mt flag is set (true) in the
parm file, this metric will report values normalized against the number of
active cores in the system. If the ignore_mt flag is not set (false) in the
parm file, this metric will report values normalized against the number of
threads in the system. This flag is a no-op if multithreading is turned off.

On HP-UX, CPU utilization normalization is controlled by the "-ignore_mt"
option of the midaemon(1m). To change normalization from core-based to
logical-CPU-based, or vice versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with "-ignore_mt" by default, add this option to
the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding
ovpa startup. Note that on HP-UX, unlike other platforms, specifying
core-based normalization affects CPU, application, process, and thread
metrics.

APP_CPU_TOTAL_UTIL_CUM
----------------------------------
The average CPU time per interval for processes in this group over the
cumulative collection time, or since the last PRM configuration change on
HP-UX.

The cumulative collection time is defined from the point in time when either:
a) the process (or thread) was first started, or
b) the performance tool was first started, or
c) the cumulative counters were reset (relevant only to Glance, if available
   for the given platform),
whichever occurred last.

On HP-UX, all cumulative collection times and intervals start when the
midaemon starts.

On other Unix systems, non-process collection time starts from the start of
the performance tool; process collection time starts from the start time of
the process or the measurement start time, whichever is older. Regardless of
the process start time, application cumulative intervals start from the time
the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f"
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days, and the cumulative metrics will fail to report accurate data
after 497 days.

On Linux, Solaris, and AIX, if measurement is started after the system has
been up for more than 466 days, cumulative process CPU data won't include
times accumulated prior to the performance tool's start, and a message will
be logged to indicate this.

On platforms other than HP-UX, if the ignore_mt flag is set (true) in the
parm file, this metric will report values normalized against the number of
active cores in the system. If the ignore_mt flag is not set (false) in the
parm file, this metric will report values normalized against the number of
threads in the system. This flag is a no-op if multithreading is turned off.

On HP-UX, CPU utilization normalization is controlled by the "-ignore_mt"
option of the midaemon(1m). To change normalization from core-based to
logical-CPU-based, or vice versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with "-ignore_mt" by default, add this option to
the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding
ovpa startup. Note that on HP-UX, unlike other platforms, specifying
core-based normalization affects CPU, application, process, and thread
metrics.

APP_CPU_USER_MODE_TIME
----------------------------------
The time, in seconds, that processes in this group were in user mode during
the interval.

User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.

On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.

On platforms other than HP-UX, if the ignore_mt flag is set (true) in the
parm file, this metric will report values normalized against the number of
active cores in the system. If the ignore_mt flag is not set (false) in the
parm file, this metric will report values normalized against the number of
threads in the system. This flag is a no-op if multithreading is turned off.

On HP-UX, CPU utilization normalization is controlled by the "-ignore_mt"
option of the midaemon(1m). To change normalization from core-based to
logical-CPU-based, or vice versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with "-ignore_mt" by default, add this option to
the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding
ovpa startup. Note that on HP-UX, unlike other platforms, specifying
core-based normalization affects CPU, application, process, and thread
metrics.

APP_CPU_USER_MODE_UTIL
----------------------------------
The percentage of time that processes in this group were using the CPU in
user mode during the interval.

User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.

High user mode CPU percentages are normal for computation-intensive groups.
Low values of user CPU utilization compared to relatively high values for
APP_CPU_SYS_MODE_UTIL can indicate a hardware problem or improperly tuned
programs in this group.

On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.

On platforms other than HP-UX, if the ignore_mt flag is set (true) in the
parm file, this metric will report values normalized against the number of
active cores in the system. If the ignore_mt flag is not set (false) in the
parm file, this metric will report values normalized against the number of
threads in the system. This flag is a no-op if multithreading is turned off.

On HP-UX, CPU utilization normalization is controlled by the "-ignore_mt"
option of the midaemon(1m). To change normalization from core-based to
logical-CPU-based, or vice versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with "-ignore_mt" by default, add this option to
the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding
ovpa startup. Note that on HP-UX, unlike other platforms, specifying
core-based normalization affects CPU, application, process, and thread
metrics.

APP_DISK_PHYS_IO_RATE
----------------------------------
The number of physical IOs per second for processes in this group during the
interval.

APP_DISK_PHYS_READ
----------------------------------
The number of physical reads for processes in this group during the interval.

APP_DISK_PHYS_READ_RATE
----------------------------------
The number of physical reads per second for processes in this group during
the interval.

APP_DISK_PHYS_WRITE
----------------------------------
The number of physical writes for processes in this group during the
interval.

APP_DISK_PHYS_WRITE_RATE
----------------------------------
The number of physical writes per second for processes in this group during
the interval.

APP_DISK_SUBSYSTEM_QUEUE
----------------------------------
The average number of processes or kernel threads in this group that were
blocked on the disk subsystem (waiting for their file system IOs to complete)
during the interval.

On HP-UX, this is based on the sum of processes or kernel threads in the
DISK, INODE, CACHE, and CDFS wait states. Processes or kernel threads doing
raw IO to a disk are not included in this measurement.

On Linux, this is based on the sum of all processes or kernel threads blocked
on disk. On Linux, if thread collection is disabled, only the first thread of
each multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels that do not include CFS, the Completely
Fair Scheduler.

The Application QUEUE metrics, which are based on block states, represent
average process or kernel thread counts, not actual queues, within the
context of a specific application. The Application WAIT PCT metrics, which
are also based on block states, represent the percentage of processes or
kernel threads that were alive on the system within the context of a specific
application. These values will vary greatly depending on the application.
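
As a rough illustration of the distinction just described, the following
Python sketch applies the two calculations used by these metrics (accumulated
blocked time divided by the interval time for a QUEUE value, and accumulated
blocked time divided by accumulated alive time for a WAIT PCT value) to
invented per-thread figures. The thread data and the five-second interval are
assumptions made up for this example only.

    # Illustrative only: relate an Application QUEUE value to a WAIT PCT value
    # using the calculations described in these entries, with invented numbers.
    INTERVAL = 5.0  # seconds

    # For each kernel thread in the application:
    # (alive_time_seconds, time_blocked_on_disk_seconds)
    threads = [
        (5.0, 4.0),   # blocked on the disk subsystem most of the interval
        (5.0, 3.0),
        (2.5, 0.0),   # alive for half the interval, never blocked on disk
    ]

    accumulated_blocked = sum(blocked for _, blocked in threads)   # 7.0
    accumulated_alive = sum(alive for alive, _ in threads)         # 12.5

    # Average number of threads blocked on the disk subsystem (a QUEUE metric).
    app_disk_subsystem_queue = accumulated_blocked / INTERVAL      # 1.4

    # Percentage of alive time spent blocked on disk (a WAIT PCT metric).
    app_disk_subsystem_wait_pct = 100.0 * accumulated_blocked / accumulated_alive  # 56.0

    print(app_disk_subsystem_queue, app_disk_subsystem_wait_pct)
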
No direct comparison is reasonable with the Global Queue metrics since they
represent the average number of all processes or kernel threads that were
alive on the system. As such, the Application WAIT PCT metrics cannot be
summed or compared with global values easily.

In addition, the sum of the Application WAIT PCT values for all applications
will not equal 100%, since these values will vary greatly depending on the
number of processes or kernel threads in each application.

For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the
APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many
processes on the system, but only a very small number of processes in the
specific application that is being examined, and a high percentage of those
few processes are blocked on the disk I/O subsystem.

APP_DISK_SUBSYSTEM_WAIT_PCT
----------------------------------
The percentage of time processes or kernel threads in this group were blocked
on the disk subsystem (waiting for their file system IOs to complete) during
the interval.

On HP-UX, this is based on the sum of processes or kernel threads in the
DISK, INODE, CACHE, and CDFS wait states. Processes or kernel threads doing
raw IO to a disk are not included in this measurement.

On Linux, this is based on the sum of all processes or kernel threads blocked
on disk. On Linux, if thread collection is disabled, only the first thread of
each multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels that do not include CFS, the Completely
Fair Scheduler.

A percentage of time spent in a wait state is calculated as the accumulated
time kernel threads belonging to processes in this group spent waiting in
this state, divided by the accumulated alive time of kernel threads belonging
to processes in this group during the interval.

For example, assume an application has 20 kernel threads. During the
interval, ten kernel threads slept the entire time, while ten kernel threads
waited on terminal input. As a result, the application wait percent values
would be 50% for SLEEP and 50% for TERM (that is, terminal IO).

The Application QUEUE metrics, which are based on block states, represent
average process or kernel thread counts, not actual queues, within the
context of a specific application. The Application WAIT PCT metrics, which
are also based on block states, represent the percentage of processes or
kernel threads that were alive on the system within the context of a specific
application. These values will vary greatly depending on the application.

No direct comparison is reasonable with the Global Queue metrics since they
represent the average number of all processes or kernel threads that were
alive on the system. As such, the Application WAIT PCT metrics cannot be
summed or compared with global values easily.

In addition, the sum of the Application WAIT PCT values for all applications
will not equal 100%, since these values will vary greatly depending on the
number of processes or kernel threads in each application.

For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the
APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many
processes on the system, but only a very small number of processes in the
specific application that is being examined, and a high percentage of those
few processes are blocked on the disk I/O subsystem.

APP_INTERVAL
----------------------------------
The amount of time in the interval.

APP_INTERVAL_CUM
----------------------------------
The amount of time over the cumulative collection time.

The cumulative collection time is defined from the point in time when either:
a) the process (or thread) was first started, or
b) the performance tool was first started, or
c) the cumulative counters were reset (relevant only to Glance, if available
   for the given platform),
whichever occurred last.

On HP-UX, all cumulative collection times and intervals start when the
midaemon starts.

On other Unix systems, non-process collection time starts from the start of
the performance tool; process collection time starts from the start time of
the process or the measurement start time, whichever is older. Regardless of
the process start time, application cumulative intervals start from the time
the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f"
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days, and the cumulative metrics will fail to report accurate data
after 497 days.

On Linux, Solaris, and AIX, if measurement is started after the system has
been up for more than 466 days, cumulative process CPU data won't include
times accumulated prior to the performance tool's start, and a message will
be logged to indicate this.

APP_IO_BYTE
----------------------------------
The number of characters (in KB) transferred for processes in this group to
all devices during the interval. This includes IO to disk, terminal, tape,
and printers.

APP_IO_BYTE_RATE
----------------------------------
The number of characters (in KB) per second transferred for processes in this
group to all devices during the interval. This includes IO to disk, terminal,
tape, and printers.

APP_MAJOR_FAULT
----------------------------------
The number of major page faults that required a disk IO for processes in this
group during the interval.

APP_MAJOR_FAULT_RATE
----------------------------------
The number of major page faults per second that required a disk IO for
processes in this group during the interval.

APP_MEM_RES
----------------------------------
On Unix systems, this is the sum of the size (in MB) of resident memory for
processes in this group that were alive at the end of the interval. This
consists of text, data, stack, and shared memory regions.

On HP-UX, since PROC_MEM_RES typically takes shared region references into
account, this approximates the total resident (physical) memory consumed by
all processes in this group.

On all other Unix systems, this is the sum of the resident memory region
sizes for all processes in this group. When the resident memory size for
processes includes shared regions, such as shared memory and library text and
data, the shared regions are counted multiple times in this sum. For example,
if the application contains four processes that are attached to a 500MB
shared memory region that is all resident in physical memory, then 2000MB is
contributed towards the sum in this metric. As such, this metric can
overestimate the resident memory being used by processes in this group when
they share memory regions. Refer to the help text for PROC_MEM_RES for
additional information.

On Windows, this is the sum of the size (in MB) of the working sets for
processes in this group during the interval. The working set counts memory
pages referenced recently by the threads making up this group.

Note that the size of the working set is often larger than the amount of
pagefile space consumed.

APP_MEM_UTIL
----------------------------------
On Unix systems, this is the approximate percentage of the system's physical
memory used as resident memory by processes in this group that were alive at
the end of the interval. This metric summarizes process private and shared
memory in each application.

On Windows, this is an estimate of the percentage of the system's physical
memory allocated for working set memory by processes in this group during the
interval.

On HP-UX, this consists of text, data, and stack, as well as the process's
portion of shared memory regions (such as shared libraries, text segments,
and shared data). The sum of the shared region pages is typically divided by
the number of references.

APP_MEM_VIRT
----------------------------------
On Unix systems, this is the sum (in MB) of virtual memory for processes in
this group that were alive at the end of the interval. This consists of text,
data, stack, and shared memory regions.

On HP-UX, since PROC_MEM_VIRT typically takes shared region references into
account, this approximates the total virtual memory consumed by all processes
in this group.

On all other Unix systems, this is the sum of the virtual memory region sizes
for all processes in this group. When the virtual memory size for processes
includes shared regions, such as shared memory and library text and data, the
shared regions are counted multiple times in this sum. For example, if the
application contains four processes that are attached to a 500MB shared
memory region, then 2000MB is reported in this metric. As such, this metric
can overestimate the virtual memory being used by processes in this group
when they share memory regions.

On Windows, this is the sum (in MB) of paging file space used for all
processes in this group during the interval. Groups of processes may have
working set sizes (APP_MEM_RES) larger than the size of their pagefile space.

APP_MINOR_FAULT
----------------------------------
The number of minor page faults satisfied in memory (a page was reclaimed
from one of the free lists) for processes in this group during the interval.

APP_MINOR_FAULT_RATE
----------------------------------
The number of minor page faults per second satisfied in memory (pages were
reclaimed from one of the free lists) for processes in this group during the
interval.

APP_NAME
----------------------------------
The name of the application (up to 20 characters). This comes from the parm
file where the applications are defined.

The application called "other" captures all processes not aggregated into
applications specifically defined in the parm file. In other words, if no
applications are defined in the parm file, then all process data would be
reflected in the "other" application.

APP_NUM
----------------------------------
The sequentially assigned number of this application or, on Solaris, the
project ID when application grouping by project is enabled.

APP_PRI
----------------------------------
On Unix systems, this is the average priority of the processes in this group
during the interval.

On Windows, this is the average base priority of the processes in this group
during the interval.

APP_PRI_QUEUE
----------------------------------
The average number of processes or kernel threads in this group blocked on
PRI (waiting for their priority to become high enough to get the CPU) during
the interval.

This is calculated as the accumulated time that all processes or kernel
threads in this group spent blocked on PRI, divided by the interval time.

On Linux, if thread collection is disabled, only the first thread of each
multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels that do not include CFS, the Completely
Fair Scheduler.

The Application QUEUE metrics, which are based on block states, represent
average process or kernel thread counts, not actual queues, within the
context of a specific application. The Application WAIT PCT metrics, which
are also based on block states, represent the percentage of processes or
kernel threads that were alive on the system within the context of a specific
application. These values will vary greatly depending on the application.

No direct comparison is reasonable with the Global Queue metrics since they
represent the average number of all processes or kernel threads that were
alive on the system. As such, the Application WAIT PCT metrics cannot be
summed or compared with global values easily.

In addition, the sum of the Application WAIT PCT values for all applications
will not equal 100%, since these values will vary greatly depending on the
number of processes or kernel threads in each application.

For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the
APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many
processes on the system, but only a very small number of processes in the
specific application that is being examined, and a high percentage of those
few processes are blocked on the disk I/O subsystem.

APP_PRI_WAIT_PCT
----------------------------------
The percentage of time processes or kernel threads in this group were blocked
on PRI (waiting for their priority to become high enough to get the CPU)
during the interval.

On Linux, if thread collection is disabled, only the first thread of each
multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels that do not include CFS, the Completely
Fair Scheduler.

A percentage of time spent in a wait state is calculated as the accumulated
time kernel threads belonging to processes in this group spent waiting in
this state, divided by the accumulated alive time of kernel threads belonging
to processes in this group during the interval.

For example, assume an application has 20 kernel threads. During the
interval, ten kernel threads slept the entire time, while ten kernel threads
waited on terminal input. As a result, the application wait percent values
would be 50% for SLEEP and 50% for TERM (that is, terminal IO).

The Application QUEUE metrics, which are based on block states, represent
average process or kernel thread counts, not actual queues, within the
context of a specific application. The Application WAIT PCT metrics, which
are also based on block states, represent the percentage of processes or
kernel threads that were alive on the system within the context of a specific
application. These values will vary greatly depending on the application.

No direct comparison is reasonable with the Global Queue metrics since they
represent the average number of all processes or kernel threads that were
alive on the system. As such, the Application WAIT PCT metrics cannot be
summed or compared with global values easily.

In addition, the sum of the Application WAIT PCT values for all applications
will not equal 100%, since these values will vary greatly depending on the
number of processes or kernel threads in each application.

For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the
APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many
processes on the system, but only a very small number of processes in the
specific application that is being examined, and a high percentage of those
few processes are blocked on the disk I/O subsystem.

APP_PROC_RUN_TIME
----------------------------------
The average run time for processes in this group that completed during the
interval.

On non-HP-UX systems, this metric is derived from sampled process data. Since
the data for a process is not available after the process has died on this
operating system, a process whose life is shorter than the sampling interval
may not be seen when the samples are taken. Thus, this metric may be slightly
less than the actual value. Increasing the sampling frequency captures a more
accurate count, but the overhead of collection may also rise.

APP_SAMPLE
----------------------------------
The number of samples of process data that have been averaged or accumulated
during this sample.

APP_TIME
----------------------------------
The end time of the measurement interval.

BYCPU_ACTIVE
----------------------------------
Indicates whether or not this CPU is online. A CPU that is online is
considered active.

For HP-UX and certain versions of Linux, the sar(1M) command allows you to
check the status of the system CPUs. For SUN and DEC, the commands
psrinfo(1M) and psradm(1M) allow you to check or change the status of the
system CPUs. For AIX, the pstat(1) command allows you to check the status of
the system CPUs.

BYCPU_CPU_CLOCK
----------------------------------
The clock speed, in MHz, of the CPU in the current slot.

The Linux kernel currently does not provide any metadata information for
disabled CPUs. This means that there is no way to determine the types,
speeds, hardware IDs, or any other information that is used to derive the
number of cores, the number of threads, the HyperThreading state, and so on.
If the agent (or Glance) is started while some of the CPUs are disabled, some
of these metrics will be "na" and some will be based on what is visible at
startup time. All information will be updated if and when additional CPUs are
enabled and information about them becomes available. The configuration
counts will remain at the highest discovered level (that is, if CPUs are
later disabled, the maximum number of CPUs, cores, and so on will remain at
the highest observed level). It is recommended that the agent be started with
all CPUs enabled.

On Linux, this value is always rounded up to the next MHz.

BYCPU_CPU_GUEST_TIME
----------------------------------
The time, in seconds, that this CPU was servicing guests during the interval.

Guest time, on Linux KVM hosts, is the time that is spent servicing guests.
Xen hosts, as of this release, do not update these counters, nor do other
operating systems.

BYCPU_CPU_GUEST_TIME_CUM
----------------------------------
The time, in seconds, that this CPU was servicing guests over the cumulative
collection time.

Guest time, on Linux KVM hosts, is the time that is spent servicing guests.
Xen hosts, as of this release, do not update these counters, nor do other
operating systems.
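
On Linux, raw per-CPU guest-time counters of the kind described in the two
entries above are exposed by the kernel in /proc/stat (the ninth value on
each cpuN line of 2.6.24 and later kernels, counted in clock ticks). The
following Python sketch is an illustration of those raw counters only; it is
not how GlancePlus collects BYCPU_CPU_GUEST_TIME, and the field position is
stated as an assumption about the kernel interface rather than part of this
product's documentation.

    # Illustrative only: read raw per-CPU "guest" tick counters from /proc/stat
    # on a Linux host and convert them to seconds.
    import os

    USER_HZ = os.sysconf("SC_CLK_TCK")  # clock ticks per second, usually 100

    def guest_seconds_by_cpu():
        result = {}
        with open("/proc/stat") as stat:
            for line in stat:
                fields = line.split()
                # Per-CPU lines look like:
                # cpu0 user nice system idle iowait irq softirq steal guest guest_nice
                if fields and fields[0].startswith("cpu") and fields[0] != "cpu":
                    if len(fields) > 9:  # guest column present on 2.6.24+ kernels
                        result[fields[0]] = int(fields[9]) / USER_HZ
        return result

    print(guest_seconds_by_cpu())
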
The cumulative collection time is defined from the point in time when either:
a) the process (or thread) was first started, or
b) the performance tool was first started, or
c) the cumulative counters were reset (relevant only to Glance, if available
   for the given platform),
whichever occurred last.

On HP-UX, all cumulative collection times and intervals start when the
midaemon starts.

On other Unix systems, non-process collection time starts from the start of
the performance tool; process collection time starts from the start time of
the process or the measurement start time, whichever is older. Regardless of
the process start time, application cumulative intervals start from the time
the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f"
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days, and the cumulative metrics will fail to report accurate data
after 497 days.

On Linux, Solaris, and AIX, if measurement is started after the system has
been up for more than 466 days, cumulative process CPU data won't include
times accumulated prior to the performance tool's start, and a message will
be logged to indicate this.

BYCPU_CPU_GUEST_UTIL
----------------------------------
The percentage of time that this CPU was servicing guests during the
interval.

Guest time, on Linux KVM hosts, is the time that is spent servicing guests.
Xen hosts, as of this release, do not update these counters, nor do other
operating systems.

BYCPU_CPU_GUEST_UTIL_CUM
----------------------------------
The percentage of time that this CPU was servicing guests over the cumulative
collection time.

Guest time, on Linux KVM hosts, is the time that is spent servicing guests.
Xen hosts, as of this release, do not update these counters, nor do other
operating systems.

The cumulative collection time is defined from the point in time when either:
a) the process (or thread) was first started, or
b) the performance tool was first started, or
c) the cumulative counters were reset (relevant only to Glance, if available
   for the given platform),
whichever occurred last.

On HP-UX, all cumulative collection times and intervals start when the
midaemon starts.

On other Unix systems, non-process collection time starts from the start of
the performance tool; process collection time starts from the start time of
the process or the measurement start time, whichever is older. Regardless of
the process start time, application cumulative intervals start from the time
the performance tool is started.

On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting "o/f"
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days, and the cumulative metrics will fail to report accurate data
after 497 days.

On Linux, Solaris, and AIX, if measurement is started after the system has
been up for more than 466 days, cumulative process CPU data won't include
times accumulated prior to the performance tool's start, and a message will
be logged to indicate this.

BYCPU_CPU_INTERRUPT_TIME
----------------------------------
The time, in seconds, that this CPU was performing interrupt processing
during the interval.

On platforms other than HP-UX, if the ignore_mt flag is set (true) in the
parm file, this metric will report values normalized against the number of
active cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_INTERRUPT_TIME_CUM ---------------------------------- The time, in seconds, that this CPU was performing interrupt processing over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_INTERRUPT_UTIL ---------------------------------- The percentage of time that this CPU was performing interrupt processing during the interval. 
On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_INTERRUPT_UTIL_CUM ---------------------------------- The percentage of time that this CPU was performing interrupt processing over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. 
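As a rough illustration of the normalization difference described above, the following sketch shows how the same amount of busy CPU time turns into two different utilization percentages depending on whether it is divided by core capacity (ignore_mt set) or by logical-processor capacity (ignore_mt not set). This is illustrative arithmetic only, not GlancePlus collector code; the function name and values are hypothetical.

    # Illustrative sketch only (hypothetical names and values): how busy CPU
    # time becomes a utilization percentage under core-based normalization
    # (ignore_mt set) versus thread-based normalization (ignore_mt not set).
    def cpu_util_pct(busy_seconds, interval_seconds, capacity_units):
        # capacity_units is the number of cores or the number of logical
        # processors, depending on the normalization mode in effect.
        return 100.0 * busy_seconds / (interval_seconds * capacity_units)

    # 4 cores exposing 8 logical processors, 12 CPU-seconds of busy time
    # in a 10-second interval:
    print(cpu_util_pct(12, 10, capacity_units=4))  # 30.0 (core-based)
    print(cpu_util_pct(12, 10, capacity_units=8))  # 15.0 (thread-based)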
BYCPU_CPU_NICE_TIME ---------------------------------- The time, in seconds, that this CPU was in user mode at a nice priority during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_NICE_TIME_CUM ---------------------------------- The time, in seconds, that this CPU was in user mode at a nice priority over the cumulative collection time. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). 
To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_NICE_UTIL ---------------------------------- The percentage of time that this CPU was in user mode at a nice priority during the interval. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_NICE_UTIL_CUM ---------------------------------- The average percentage of time that this CPU was in user mode at a nice priority over the cumulative collection time. On HP-UX, the NICE metrics include positive nice value CPU time only. Negative nice value CPU is broken out into NNICE (negative nice) metrics. Positive nice values range from 20 to 39. Negative nice values range from 0 to 19. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. 
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set (false) in the parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics.

BYCPU_CPU_STOLEN_TIME
----------------------------------
The time, in seconds, that was stolen from this CPU during the interval. Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’.

BYCPU_CPU_STOLEN_TIME_CUM
----------------------------------
The time, in seconds, that was stolen from this CPU over the cumulative collection time. Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, whichever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this.

BYCPU_CPU_STOLEN_UTIL
----------------------------------
The percentage of time that was stolen from this CPU during the interval. Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’.
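For reference, the raw counters behind the Linux stolen-time metrics come from the per-CPU lines of /proc/stat (the eighth time field), the same source used by ‘sar’ and ‘vmstat’. The sketch below is a minimal, hypothetical example of deriving a per-CPU %steal figure from two samples of that file; it is not the GlancePlus collector and assumes a kernel recent enough to report the steal column.

    # Minimal illustrative sketch (not GlancePlus code): derive a per-CPU
    # %steal figure from two samples of /proc/stat, similar to the %steal
    # column of 'sar' or the 'st' column of 'vmstat'.
    import time

    def read_cpu_times():
        """Return {cpu_name: (steal_ticks, total_ticks)} from /proc/stat."""
        stats = {}
        with open("/proc/stat") as f:
            for line in f:
                fields = line.split()
                if fields and fields[0].startswith("cpu") and fields[0] != "cpu":
                    ticks = [int(v) for v in fields[1:]]
                    # Field order: user nice system idle iowait irq softirq steal ...
                    steal = ticks[7] if len(ticks) > 7 else 0
                    stats[fields[0]] = (steal, sum(ticks))
        return stats

    before = read_cpu_times()
    time.sleep(5)                      # measurement interval
    after = read_cpu_times()

    for cpu, (steal_after, total_after) in after.items():
        steal_before, total_before = before.get(cpu, (0, 0))
        total_delta = total_after - total_before
        if total_delta > 0:
            pct = 100.0 * (steal_after - steal_before) / total_delta
            print("%s: %%steal = %.1f" % (cpu, pct))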
BYCPU_CPU_STOLEN_UTIL_CUM ---------------------------------- The average percentage of time that was stolen from this CPU over the cumulative collection time. Stolen (or steal, or involuntary wait) time, on Linux, is the time that the CPU had runnable threads, but the Xen hypervisor chose to run something else instead. KVM hosts, as of this release, do not update these counters. Stolen CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYCPU_CPU_SYS_MODE_TIME ---------------------------------- The time, in seconds, that this CPU (or logical processor) was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_SYS_MODE_TIME_CUM ---------------------------------- The time, in seconds, that this CPU (or logical processor) was in system mode over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. 
When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_SYS_MODE_UTIL ---------------------------------- The percentage of time that this CPU (or logical processor) was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. 
To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_SYS_MODE_UTIL_CUM ---------------------------------- The percentage of time that this CPU (or logical processor) was in system mode over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine’s privileged protection mode and runs in system mode. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_TOTAL_TIME ---------------------------------- The total time, in seconds, that this CPU (or logical processor) was not idle during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. 
This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_TOTAL_TIME_CUM ---------------------------------- The total time, in seconds, that this CPU (or logical processor) was not idle over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_TOTAL_UTIL ---------------------------------- The percentage of time that this CPU (or logical processor) was not idle during the interval. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. 
If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_TOTAL_UTIL_CUM ---------------------------------- The average percentage of time that this CPU (or logical processor) was not idle over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_TYPE ---------------------------------- The type of processor in the current slot. The Linux kernel currently doesn’t provide any metadata information for disabled CPUs. 
This means that there is no way to find out types, speeds, as well as hardware IDs or any other information that is used to determine the number of cores, the number of threads, the HyperThreading state, etc... If the agent (or Glance) is started while some of the CPUs are disabled, some of these metrics will be “na”, some will be based on what is visible at startup time. All information will be updated if/when additional CPUs are enabled and information about them becomes available. The configuration counts will remain at the highest discovered level (i.e. if CPUs are then disabled, the maximum number of CPUs/cores/etc... will remain at the highest observed level). It is recommended that the agent be started with all CPUs enabled. BYCPU_CPU_USER_MODE_TIME ---------------------------------- The time, in seconds, during the interval that this CPU (or logical processor) was in user mode. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_USER_MODE_TIME_CUM ---------------------------------- The time, in seconds, that this CPU (or logical processor) was in user mode over the cumulative collection time. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. 
On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_USER_MODE_UTIL ---------------------------------- The percentage of time that this CPU (or logical processor) was in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_CPU_USER_MODE_UTIL_CUM ---------------------------------- The average percentage of time that this CPU (or logical processor) was in user mode over the cumulative collection time. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. 
Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. On platforms other than HPUX, If the ignore_mt flag is set(true) in parm file, this metric will report values normalized against the number of active cores in the system. If the ignore_mt flag is not set(false) in parm file, this metric will report values normalized against the number of threads in the system. This flag will be a no-op if Multithreading is turned off. On HPUX, CPU utilization normalization is controlled by the “-ignore_mt” option of the midaemon(1m). To change normalization from core-based to logical-cpu-based, or vice-versa, all performance components (scopeux, glance, perfd) must be shut down and the midaemon restarted in the desired mode. To start the midaemon with “-ignore_mt” by default, this option should be added in the /etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying core-based normalization affects CPU, application, process and thread metrics. BYCPU_ID ---------------------------------- The ID number of this CPU. On some Unix systems, such as SUN, CPUs are not sequentially numbered. BYCPU_INTERRUPT ---------------------------------- The number of device interrupts for this CPU during the interval. On HP-UX, a value of “na” is displayed on a system with multiple CPUs. BYCPU_INTERRUPT_RATE ---------------------------------- The average number of device interrupts per second for this CPU during the interval. On HP-UX, a value of “na” is displayed on a system with multiple CPUs. BYCPU_STATE ---------------------------------- A text string indicating the current state of a processor. On HP-UX, this is either “Enabled”, “Disabled” or “Unknown”. On AIX, this is either “Idle/Offline” or “Online”. On all other systems, this is either “Offline”, “Online” or “Unknown”. BYDSK_AVG_REQUEST_QUEUE ---------------------------------- The average number of IO requests that were in the wait and service queues for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. For example, if 4 intervals have passed with average queue lengths of 0, 2, 0, and 6, then the average number of IO requests over all intervals would be 2. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_AVG_SERVICE_TIME ---------------------------------- The average time, in milliseconds, that this disk device spent processing each disk request during the interval. For example, a value of 5.14 would indicate that disk requests during the last interval took on average slightly longer than five one-thousandths of a second to complete for this device. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. This is a measure of the speed of the disk, because slower disk devices typically show a larger average service time. Average service time is also dependent on factors such as the distribution of I/O requests over the interval and their locality. It can also be influenced by disk driver and controller features such as I/O merging and command queueing. Note that this service time is measured from the perspective of the kernel, not the disk device itself. For example, if a disk device can find the requested data in its cache, the average service time could be quicker than the speed of the physical disk hardware. This metric can be used to help determine which disk devices are taking more time than usual to process requests. BYDSK_BUSY_TIME ---------------------------------- The time, in seconds, that this disk device was busy transferring data during the interval. On HP-UX, this is the time, in seconds, during the interval that the disk device had IO in progress from the point of view of the Operating System. In other words, the time, in seconds, the disk was busy servicing requests for this device. BYDSK_DEVNAME ---------------------------------- The name of this disk device. On HP-UX, the name identifying the specific disk spindle is the hardware path which specifies the address of the hardware components leading to the disk device. On SUN, these names are the same disk names displayed by “iostat”. On AIX, this is the path name string of this disk device. This is the fsname parameter in the mount(1M) command. If more than one file system is contained on a device (that is, the device is partitioned), this is indicated by an asterisk (“*”) at the end of the path name. On OSF1, this is the path name string of this disk device. 
This is the file- system parameter in the mount(1M) command. On Windows, this is the unit number of this disk device. BYDSK_DEVNO ---------------------------------- Major / Minor number of the device. BYDSK_DIRNAME ---------------------------------- The name of the file system directory mounted on this disk device. If more than one file system is mounted on this device, “Multiple FS” is seen. BYDSK_ID ---------------------------------- The ID of the current disk device. BYDSK_INTERVAL ---------------------------------- The amount of time in the interval. BYDSK_INTERVAL_CUM ---------------------------------- The amount of time over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYDSK_PHYS_BYTE ---------------------------------- The number of KBs of physical IOs transferred to or from this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_BYTE_RATE ---------------------------------- The average KBs per second transferred to or from this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_BYTE_RATE_CUM ---------------------------------- The average number of KBs per second of physical reads and writes to or from this disk device over the cumulative collection time. On Unix systems, this includes all types of physical disk IOs including file system, virtual memory, and raw IOs. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. 
On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYDSK_PHYS_IO ---------------------------------- The number of physical IOs for this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw reads. BYDSK_PHYS_IO_RATE ---------------------------------- The average number of physical IO requests per second for this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system IO, virtual memory and raw IO. BYDSK_PHYS_IO_RATE_CUM ---------------------------------- The average number of physical reads and writes per second for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYDSK_PHYS_READ ---------------------------------- The number of physical reads for this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On AIX, this is an estimated value based on the ratio of read bytes to total bytes transferred. The actual number of reads is not tracked by the kernel. This is calculated as BYDSK_PHYS_READ = BYDSK_PHYS_IO * (BYDSK_PHYS_READ_BYTE / BYDSK_PHYS_IO_BYTE) BYDSK_PHYS_READ_BYTE ---------------------------------- The KBs transferred from this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_READ_BYTE_RATE ---------------------------------- The average KBs per second transferred from this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw IO. 
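On Linux, per-device transfer counts of this kind are ultimately derived from the sector counters in /proc/diskstats, which the kernel reports in 512-byte units. The following is a minimal, hypothetical sketch (not the GlancePlus collector) that samples that file twice and prints a read KB-per-second figure per device, similar in spirit to BYDSK_PHYS_READ_BYTE_RATE.

    # Illustrative sketch (not GlancePlus code): compute per-device read KB/s
    # from two samples of /proc/diskstats. The "sectors read" counter is
    # reported in 512-byte units regardless of the device's sector size.
    import time

    def read_sectors():
        """Return {device_name: sectors_read} from /proc/diskstats."""
        sectors = {}
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                # fields: major minor name reads_completed reads_merged sectors_read ...
                if len(fields) >= 6:
                    sectors[fields[2]] = int(fields[5])
        return sectors

    interval = 5.0
    before = read_sectors()
    time.sleep(interval)
    after = read_sectors()

    for dev, sect_after in after.items():
        delta = sect_after - before.get(dev, sect_after)
        kb_per_sec = delta * 512 / 1024.0 / interval
        if kb_per_sec > 0:
            print("%s: %.1f KB/s read" % (dev, kb_per_sec))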
BYDSK_PHYS_READ_BYTE_RATE_CUM ---------------------------------- The average number of KBs per second of physical reads from this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYDSK_PHYS_READ_RATE ---------------------------------- The average number of physical reads per second for this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On AIX, this is an estimated value based on the ratio of read bytes to total bytes transferred. The actual number of reads is not tracked by the kernel. This is calculated as BYDSK_PHYS_READ_RATE = BYDSK_PHYS_IO_RATE * (BYDSK_PHYS_READ_BYTE / BYDSK_PHYS_IO_BYTE) BYDSK_PHYS_READ_RATE_CUM ---------------------------------- The average number of physical reads per second for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. 
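The AIX estimates described above for BYDSK_PHYS_READ and BYDSK_PHYS_READ_RATE are simple proportional arithmetic. A short worked example with made-up interval values:

    # Illustrative only: the AIX estimates described above, applied to
    # made-up interval values.
    #   BYDSK_PHYS_READ      = BYDSK_PHYS_IO      * (BYDSK_PHYS_READ_BYTE / BYDSK_PHYS_IO_BYTE)
    #   BYDSK_PHYS_READ_RATE = BYDSK_PHYS_IO_RATE * (BYDSK_PHYS_READ_BYTE / BYDSK_PHYS_IO_BYTE)
    phys_io       = 500      # physical IOs during the interval
    phys_io_rate  = 100.0    # physical IOs per second
    read_byte     = 1200.0   # KBs read during the interval
    io_byte       = 2000.0   # total KBs transferred during the interval

    read_fraction = read_byte / io_byte     # 0.6
    print(phys_io * read_fraction)          # 300.0 estimated reads
    print(phys_io_rate * read_fraction)     # 60.0 estimated reads per second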
BYDSK_PHYS_WRITE ---------------------------------- The number of physical writes for this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On AIX, this is an estimated value based on the ratio of write bytes to total bytes transferred because the actual number of writes is not tracked by the kernel. This is calculated as BYDSK_PHYS_WRITE = BYDSK_PHYS_IO * (BYDSK_PHYS_WRITE_BYTE / BYDSK_PHYS_IO_BYTE) BYDSK_PHYS_WRITE_BYTE ---------------------------------- The KBs transferred to this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_WRITE_BYTE_RATE ---------------------------------- The average KBs per second transferred to this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_WRITE_BYTE_RATE_CUM ---------------------------------- The average number of KBs per second of physical writes to this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYDSK_PHYS_WRITE_RATE ---------------------------------- The average number of physical writes per second for this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On AIX, this is an estimated value based on the ratio of write bytes to total bytes transferred. The actual number of writes is not tracked by the kernel. This is calculated as BYDSK_PHYS_WRITE_RATE = BYDSK_PHYS_IO_RATE * (BYDSK_PHYS_WRITE_BYTE / BYDSK_PHYS_IO_BYTE) BYDSK_PHYS_WRITE_RATE_CUM ---------------------------------- The average number of physical writes per second for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. 
On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. BYDSK_QUEUE_0_UTIL ---------------------------------- The percentage of intervals during which there were no IO requests pending for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. For example if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 1.5, 0, and 3, then the value for this metric would be 50% since 50% of the intervals had a zero queue length. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_QUEUE_2_UTIL ---------------------------------- The percentage of intervals during which there were 1 or 2 IO requests pending for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. 
On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. For example if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 1, 0, and 2, then the value for this metric would be 50% since 50% of the intervals had a 1-2 queue length. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_QUEUE_4_UTIL ---------------------------------- The percentage of intervals during which there were 3 or 4 IO requests waiting to use this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. For example if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 3, 0, and 4, then the value for this metric would be 50% since 50% of the intervals had a 3-4 queue length. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. 
Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_QUEUE_8_UTIL ---------------------------------- The percentage of intervals during which there were between 5 and 8 IO requests pending for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. For example if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 8, 0, and 5, then the value for this metric would be 50% since 50% of the intervals had a 5-8 queue length. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_QUEUE_X_UTIL ---------------------------------- The percentage of intervals during which there were more than 8 IO requests pending for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or thread) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to Glance, if available for the given platform), whichever occurred last. On HP-UX, all cumulative collection times and intervals start when the midaemon starts. On other Unix systems, non-process collection time starts from the start of the performance tool, process collection time starts from the start time of the process or measurement start time, which ever is older. Regardless of the process start time, application cumulative intervals start from the time the performance tool is started. On systems where the performance components are 32-bit or where the 64-bit model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f” (overflow) after the performance agent (or the midaemon on HPUX) has been up for 466 days and the cumulative metrics will fail to report accurate data after 497 days. 
On Linux, Solaris and AIX, if measurement is started after the system has been up for more than 466 days, cumulative process CPU data won’t include times accumulated prior to the performance tool’s start and a message will be logged to indicate this. For example if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 9, 0, and 10, then the value for this metric would be 50% since 50% of the intervals had queue length greater than 8. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_REQUEST_QUEUE ---------------------------------- The average number of IO requests that were in the wait queue for this disk device during the interval. These requests are the physical requests (as opposed to logical IO requests). Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_TIME ---------------------------------- The time of day of the interval. BYDSK_UTIL ---------------------------------- On HP-UX, this is the percentage of the time during the interval that the disk device had IO in progress from the point of view of the Operating System. In other words, the utilization or percentage of time busy servicing requests for this device. On the non-HP-UX systems, this is the percentage of the time that this disk device was busy transferring data during the interval. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. This is a measure of the ability of the IO path to meet the transfer demands being placed on it. Slower disk devices may show a higher utilization with lower IO rates than faster disk devices such as disk arrays. A value of greater than 50% utilization over time may indicate that this device or its IO path is a bottleneck, and the access pattern of the workload, database, or files may need reorganizing for better balance of disk IO load. BYLS_BOOT_TIME ---------------------------------- On vMA, for a host and logical system the metric is the date and time when the system was last booted. The value is NA for resource pool. Note that this date is obtained from the VMware API as an already formatted string and may not conform to the expected localization. BYLS_CLUSTER_NAME ---------------------------------- On vMA, for a host and resource pool it is the name of the cluster to which the host belongs to when it is managed by virtual centre. For a logical system, the value is NA. BYLS_CPU_CLOCK ---------------------------------- On vMA, for a host and logical system, it is the clock speed of the CPUs in MHz if all of the processors have the same clock speed. For a resource pool the value is NA. This metric represents the CPU clock speed. 
For an AIX frame, this metric is available only if the LPAR supports the perfstat_partition_config call from libperfstat.a. This is usually present on AIX 7.1 onwards. For an LPAR, this value will be na.
BYLS_CPU_CYCLE_ENTL_MAX
----------------------------------
On vMA, for a host, logical system and resource pool this value indicates the maximum processor capacity, in MHz, configured for the entity. If the maximum processor capacity is not configured for the entity, a value of “-3” will be displayed in PA and “ul” (unlimited) in other clients.
On HPUX, the maximum processor capacity, in MHz, configured for this logical system.
BYLS_CPU_CYCLE_ENTL_MIN
----------------------------------
On vMA, for a host, logical system and resource pool this value indicates the minimum processor capacity, in MHz, configured for the entity.
On HPUX, the minimum processor capacity, in MHz, configured for this logical system.
BYLS_CPU_CYCLE_TOTAL_USED
----------------------------------
On vMA, for a host, resource pool and logical system, it is the total time the physical CPUs were utilized during the interval, represented in CPU cycles.
On KVM/Xen, this is the number of milliseconds used on all CPUs during the interval.
BYLS_CPU_EFFECTIVE_UTIL
----------------------------------
On vMA, for a cluster the metric is the utilization of the total available CPU resources of all hosts within that cluster. Effective CPU = Aggregate host CPU capacity - (VMkernel CPU + Service Console CPU + other service CPU). The value is NA for all other entities.
BYLS_CPU_ENTL_EMIN
----------------------------------
On vMA, for a host, logical system and resource pool the value is “na”.
BYLS_CPU_ENTL_MAX
----------------------------------
The maximum CPU units configured for a logical system.
On HP-UX HPVM, this metric indicates the maximum percentage of physical CPU that a virtual CPU of this logical system can get.
On AIX SPLPAR, this metric is equivalent to the “Maximum Capacity” field of the ‘lparstat -i’ command.
For WPARs, it is the maximum percentage of CPU that a WPAR can have even if there is no contention for CPU. A WPAR shares the CPU units of its global environment.
On Hyper-V host, for the Root partition, this metric is NA.
On vMA, for a host, the metric is equivalent to the total number of cores on the host. For a resource pool and a logical system, this metric indicates the maximum CPU units configured for it.
BYLS_CPU_ENTL_MIN
----------------------------------
The minimum CPU units configured for this logical system.
On HP-UX HPVM, this metric indicates the minimum percentage of physical CPU that a virtual CPU of this logical system is guaranteed.
On AIX SPLPAR, this metric is equivalent to the “Minimum Capacity” field of the ‘lparstat -i’ command.
For WPARs, it is the minimum CPU share assigned to a WPAR that is guaranteed. A WPAR shares the CPU units of its global environment.
On Hyper-V host, for the Root partition, this metric is NA.
On vMA, for a host, the metric is equivalent to the total number of cores on the host. For a resource pool and a logical system, this metric indicates the guaranteed minimum CPU units configured for it.
On Solaris Zones, this metric indicates the configured minimum CPU percentage reserved for a logical system. For Solaris Zones, this metric is calculated as:
BYLS_CPU_ENTL_MIN = (BYLS_CPU_SHARES_PRIO / Pool-Cpu-Shares)
where Pool-Cpu-Shares is the total CPU shares available with the CPU pool the zone is associated with. Pool-Cpu-Shares is the sum of the BYLS_CPU_SHARES_PRIO values for all active zones associated with this pool.
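As a worked illustration of the Solaris Zones calculation above, the following sketch (Python; illustrative only with hypothetical share values, not GlancePlus source) derives a zone's minimum entitlement fraction from its fair share scheduler (FSS) shares:

    # Pool-Cpu-Shares is the sum of BYLS_CPU_SHARES_PRIO over all active zones
    # in the pool; the zone's minimum entitlement is its share of that total.
    def zone_entl_min(zone_shares, active_zone_shares):
        pool_cpu_shares = sum(active_zone_shares)
        if pool_cpu_shares == 0:
            return 0.0
        return zone_shares / pool_cpu_shares

    # Example: a zone holding 20 of the pool's 100 active shares -> 0.2 (20%).
    print(zone_entl_min(20, [20, 30, 50]))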
BYLS_CPU_ENTL_UTIL
----------------------------------
Percentage of entitled processing units (guaranteed processing units allocated to this logical system) consumed by the logical system.
On an HP-UX HPVM host, the metric indicates the logical system’s CPU utilization with respect to the minimum CPU entitlement. On an HP-UX HPVM host, this metric is calculated as:
BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC / (BYLS_CPU_ENTL_MIN * BYLS_NUM_CPU)) * 100
On AIX, this metric is calculated as:
BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC / BYLS_CPU_ENTL) * 100
On WPAR, this metric is calculated as:
BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC / BYLS_CPU_ENTL_MAX) * 100
This metric matches the “%Resc” field of the topas command (inside the WPAR).
On Solaris Zones, the metric indicates the logical system’s CPU utilization with respect to the minimum CPU entitlement. This metric is calculated as:
BYLS_CPU_ENTL_UTIL = (BYLS_CPU_TOTAL_UTIL / BYLS_CPU_SHARES_PRIO) * 100
If a Solaris zone is not assigned a CPU entitlement value, then a CPU entitlement value is derived for this zone based on the total CPU entitlement associated with the CPU pool this zone is attached to.
On Hyper-V host, for the Root partition, this metric is NA.
On vMA, for a host the value is the same as BYLS_CPU_PHYS_TOTAL_UTIL, while for a logical system and resource pool the value is the percentage of processing units consumed with respect to the minimum CPU entitlement.
BYLS_CPU_FAILOVER
----------------------------------
On vMA, for a cluster the metric is the VMware HA number of failures that can be tolerated. The value is NA for all other entities.
BYLS_CPU_MT_ENABLED
----------------------------------
Indicates whether the CPU hardware threads are enabled (“On”) or not (“Off”) for a logical system.
For AIX WPARs, the metric will be “na”.
On vMA, this metric indicates whether the CPU hardware threads are enabled or not for a host, while for a resource pool and a logical system the value is not available (“na”).
BYLS_CPU_PHYSC
----------------------------------
This metric indicates the number of CPU units utilized by the logical system. On an Uncapped logical system, this value will be equal to the CPU units capacity used by the logical system during the interval. This can be more than the value entitled for a logical system.
BYLS_CPU_PHYS_READY_UTIL
----------------------------------
On vMA, for a logical system it is the percentage of time, during the interval, that the CPU was in ready state. For a host and resource pool the value is NA.
BYLS_CPU_PHYS_SYS_MODE_UTIL
----------------------------------
The percentage of time the physical CPUs were in system mode (kernel mode) for the logical system during the interval.
On AIX LPAR, this value is equivalent to the “%sys” field reported by the “lparstat” command.
On Hyper-V host, this metric indicates the percentage of time spent in Hypervisor code.
On vMA, the metric indicates the percentage of time the physical CPUs were in system mode during the interval for the host or logical system. On vMA, for a resource pool, this metric is “na”.
BYLS_CPU_PHYS_TOTAL_TIME
----------------------------------
Total time, in seconds, spent by the logical system on the physical CPUs.
On HPUX, this information is updated internally every 10 seconds so it may take that long for these values to be updated in PA/Glance.
On vMA, the value indicates the time spent in seconds on the physical CPUs by the logical system, host, or resource pool.
On KVM/Xen, this value is core-normalized if GBL_IGNORE_MT is enabled on the server.
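The BYLS_CPU_ENTL_UTIL formulas listed above differ only in the denominator used as the entitlement. The sketch below (Python; illustrative only, not GlancePlus source, with hypothetical argument names) writes them out side by side:

    def entl_util_hpvm(physc, entl_min, num_cpu):
        # HP-UX HPVM: consumption against the minimum entitlement per virtual CPU.
        return (physc / (entl_min * num_cpu)) * 100.0

    def entl_util_aix(physc, entl):
        # AIX SPLPAR: consumption against the entitled capacity.
        return (physc / entl) * 100.0

    def entl_util_solaris_zone(total_util, shares_prio):
        # Solaris Zones: utilization scaled by the zone's CPU share entitlement.
        return (total_util / shares_prio) * 100.0

    # Example: an uncapped AIX partition using 0.8 processor units against an
    # entitlement of 0.5 units reports 160%, i.e. more than its entitlement.
    print(entl_util_aix(0.8, 0.5))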
BYLS_CPU_PHYS_TOTAL_UTIL
----------------------------------
Percentage of total time the physical CPUs were utilized by this logical system during the interval.
On HPUX, this information is updated internally every 10 seconds so it may take that long for these values to be updated in PA/Glance.
On Solaris, this metric is calculated with respect to the available active physical CPUs on the system.
On AIX, this metric is equivalent to the sum of BYLS_CPU_PHYS_USER_MODE_UTIL and BYLS_CPU_PHYS_SYS_MODE_UTIL. For AIX LPARs, the metric is calculated with respect to the available physical CPUs in the pool to which this LPAR belongs. For AIX WPARs, the metric is calculated with respect to the available physical CPUs in the resource set or Global Environment.
On vMA, the value indicates the percentage of total time the physical CPUs were utilized by the logical system, host, or resource pool.
On KVM/Xen, this value is core-normalized if GBL_IGNORE_MT is enabled on the server.
BYLS_CPU_PHYS_USER_MODE_UTIL
----------------------------------
The percentage of time the physical CPUs were in user mode for the logical system during the interval.
On AIX LPAR, this value is equivalent to the “%user” field reported by the “lparstat” command.
On Hyper-V host, this metric indicates the percentage of time spent in guest code.
On vMA, the metric indicates the percentage of time the physical CPUs were in user mode during the interval for the host or logical system. On vMA, for a resource pool, this metric is “na”.
BYLS_CPU_PHYS_WAIT_UTIL
----------------------------------
On vMA, for a logical system it is the percentage of time, during the interval, that the virtual CPU was waiting for IOs to complete. For a host and resource pool the value is NA.
BYLS_CPU_SHARES_PRIO
----------------------------------
This metric indicates the weightage/priority assigned to an Uncapped logical system. This value determines the minimum share of unutilized processing units that this logical system can utilize. The value of this metric will be “-3” in PA and “ul” in other clients if the CPU shares value is ‘Unlimited’ for a logical system.
On AIX SPLPAR, this value is dependent on the available processing units in the pool and can range from 0 to 255.
For WPARs, this metric represents how much of a particular resource a WPAR receives relative to the other WPARs.
On vMA, for a logical system and resource pool this value can range from 1 to 1000000, while for a host the value is NA.
On Solaris Zones, this metric sets a limit on the number of fair share scheduler (FSS) CPU shares for a zone.
On Hyper-V host, this metric specifies the allocation of CPU resources when more than one virtual machine is running and competing for resources. This value can range from 0 to 10000. For the Root partition, this metric is NA.
BYLS_CPU_SYS_MODE_UTIL
----------------------------------
On vMA, for a host and a logical system, this metric indicates the percentage of time the CPU was in system mode during the interval. On vMA, for a resource pool, this metric is “na”.
BYLS_CPU_TOTAL_UTIL
----------------------------------
Percentage of total time the logical CPUs were not idle during this interval. This metric is calculated against the number of logical CPUs configured for this logical system.
For AIX WPARs, the metric represents the percentage of time the physical CPUs were not idle during this interval.
BYLS_CPU_UNRESERVED
----------------------------------
On vMA, for a host, it is the number of CPU cycles that are available for creating a new logical system.
For a logical system and resource pool the value is NA.
BYLS_CPU_USER_MODE_UTIL
----------------------------------
On vMA, for a host and a logical system, this metric indicates the percentage of time the CPU was in user mode during the interval. On vMA, for a resource pool, this metric is “na”.
BYLS_DATACENTER_NAME
----------------------------------
On vMA, for a host it is the name of the datacenter to which the host belongs when it is managed by virtual center. To uniquely identify a datacenter in a virtual center, the datacenter name is appended with the folder names in bottom-up order. For a logical system and resource pool, the value is NA.
BYLS_DISK_CAPACITY
----------------------------------
On vMA, for a datastore the metric is the capacity of the datastore (in MB). The value is NA for all other entities.
BYLS_DISK_COMMAND_ABORT_RATE
----------------------------------
On vMA, for a host, the metric is the measure of the disk command abort rate on the host. It is calculated by dividing the total commands aborted in the interval by the total commands issued in that interval. The value is NA for all other entities.
BYLS_DISK_FREE_SPACE
----------------------------------
On vMA, for a datastore the metric is the amount of free space (in MB) available in the datastore. The value is NA for all other entities.
BYLS_DISK_IORM_ENABLED
----------------------------------
On vMA, for a datastore the metric is the measure of whether IORM is enabled for the datastore. The value is NA for all other entities.
BYLS_DISK_IORM_THRESHOLD
----------------------------------
On vMA, for a datastore the metric is the threshold value of the IORM of the datastore. The value is NA for all other entities.
BYLS_DISK_PHYS_BYTE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of KBs transferred to and from disks during the interval. On vMA, for a resource pool, this metric is “na”.
BYLS_DISK_PHYS_BYTE_RATE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the average number of KBs per second at which data was transferred to and from disks during the interval. On vMA, for a resource pool, this metric is “na”.
BYLS_DISK_PHYS_READ
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of physical reads during the interval. On vMA, for a resource pool, this metric is “na”.
BYLS_DISK_PHYS_READ_BYTE_RATE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the average number of KBs transferred from the disk per second during the interval. On vMA, for a resource pool, this metric is “na”.
BYLS_DISK_PHYS_READ_RATE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of physical reads per second during the interval. On vMA, for a resource pool, this metric is “na”.
BYLS_DISK_PHYS_WRITE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of physical writes during the interval. On vMA, for a resource pool, this metric is “na”.
BYLS_DISK_PHYS_WRITE_BYTE_RATE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the average number of KBs transferred to the disk per second during the interval. On vMA, for a resource pool, this metric is “na”.
BYLS_DISK_PHYS_WRITE_RATE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of physical writes per second during the interval. On vMA, for a resource pool, this metric is “na”.
BYLS_DISK_READ_LATENCY
----------------------------------
On vMA, for a host and guest, the metric is the total read latency experienced on the entity. The value is NA for all other entities.
BYLS_DISK_WRITE_LATENCY
----------------------------------
On vMA, for a host and guest, the metric is the total write latency experienced on the entity. The value is NA for all other entities.
BYLS_DISK_QUEUE_DEPTH_PEAK
----------------------------------
On vMA, for a host, the metric is the measure of the wait queue depth experienced on the host. The value is NA for all other entities.
BYLS_DISK_SHARE_PRIORITY
----------------------------------
On vMA, for a datastore the metric is the measure of the shares priority of the datastore. The value is NA for all other entities.
BYLS_DISK_THROUGHPUT_CONTENTION
----------------------------------
On vMA, for a datastore the metric is the disk throughput contention in that interval. The value is NA for all other entities.
BYLS_DISK_THROUGPUT_USAGE
----------------------------------
On vMA, for a datastore the metric is the disk throughput usage. The value is NA for all other entities.
BYLS_DISK_UTIL
----------------------------------
On vMA, for a host, it is the average percentage of time during the interval (average utilization) that all the disks had IO in progress. For a logical system and resource pool the value is NA.
BYLS_DISK_UTIL_PEAK
----------------------------------
On vMA, for a host, it is the utilization of the busiest disk during the interval. For a logical system and resource pool the value is NA.
BYLS_DISPLAY_NAME
----------------------------------
On vMA, this metric indicates the name of the host or logical system or resource pool.
On HPVM, this metric indicates the Virtual Machine name of the logical system and is equivalent to the “Virtual Machine Name” field of the ‘hpvmstatus’ command.
On AIX, the value is as returned by the command “uname -n” (that is, the string returned from the “hostname” program).
On Solaris Zones, this metric indicates the zone name and is equivalent to the ‘NAME’ field of the ‘zoneadm list -vc’ command.
On Hyper-V host, this metric indicates the Virtual Machine name of the logical system and is equivalent to the Name displayed in Hyper-V Manager. For the Root partition, the value is always “Root”.
BYLS_GUEST_TOOLS_STATUS
----------------------------------
On vMA, for a guest the metric is the current status of Guest Integration Tools in the guest operating system, if known. The value is NA for all other entities.
BYLS_IP_ADDRESS
----------------------------------
This metric indicates the IP Address of the particular logical system. On vMA, this metric indicates the IP Address for a host and a logical system, while for a resource pool the value is NA.
BYLS_LS_CONNECTION_STATE
----------------------------------
For a host, this metric is the current status of the connection. For logical systems, it indicates whether or not the entity is available for management. It can have the values Connected, Disconnected, or NotResponding. The value is NA for all other entities.
BYLS_LS_HOSTNAME
----------------------------------
This is the DNS registered name of the system.
On Hyper-V host, this metric is NA if the logical system is not active or Hyper-V Integration Components are not installed on it.
On vMA, for a host and logical system the metric is the Fully Qualified Domain Name, while for a resource pool the value is NA.
BYLS_LS_HOST_HOSTNAME
----------------------------------
On vMA, for a logical system and resource pool, it is the FQDN of the host on which they are hosted. For a host, the value is NA.
BYLS_LS_ID
----------------------------------
A unique identifier of the logical system.
On HPVM, this metric is a numeric id and is equivalent to the “VM #” field of the ‘hpvmstatus’ command.
On AIX LPAR, this metric indicates the partition number and is equivalent to the “Partition Number” field of the ‘lparstat -i’ command. For AIX WPARs, this metric represents the partition number and is equivalent to “uname -W” run from inside the WPAR.
On Solaris Zones, this metric indicates the zone id and is equivalent to the ‘ID’ field of the ‘zoneadm list -vc’ command.
On Hyper-V host, this metric indicates the PID of the process corresponding to this logical system. For the Root partition, this metric is NA.
On vMA, this metric is a unique identifier for a host, resource pool and a logical system. The value of this metric may change for an instance across collection intervals.
BYLS_LS_MODE
----------------------------------
This metric indicates whether the CPU entitlement for the logical system is Capped or Uncapped.
On AIX SPLPAR, this metric is the same as the “Mode” field of the ‘lparstat -i’ command. For WPARs, this metric is always CAPPED.
On vMA, the value is Capped for a host and Uncapped for a logical system. For a resource pool, the value is Uncapped or Capped depending on whether or not its reservation is expandable.
On Solaris Zones, this metric is “Capped” when the zone is assigned CPU shares and is attached to a valid CPU pool.
BYLS_LS_NAME
----------------------------------
This is the name of the computer.
On HPVM, this metric indicates the Virtual Machine name of the logical system and is equivalent to the “Virtual Machine Name” field of the ‘hpvmstatus’ command.
On AIX, the value is as returned by the command “uname -n” (that is, the string returned from the “hostname” program).
On vMA, this metric is a unique identifier for a host, resource pool and a logical system. The value of this metric remains the same, for an instance, across collection intervals.
On Solaris Zones, this metric indicates the zone name and is equivalent to the ‘NAME’ field of the ‘zoneadm list -vc’ command.
On Hyper-V host, this metric indicates the name of the XML file which has the configuration information of the logical system. This file will be present under the logical system’s installation directory indicated by BYLS_LS_PATH. For the Root partition, the value is always “Root”.
BYLS_LS_NUM_SNAPSHOTS
----------------------------------
For a guest, the metric is the number of snapshots created for the system. The value is NA for all other entities.
BYLS_LS_OSTYPE
----------------------------------
The Guest OS this logical system is hosting.
On HPVM, the metric can have the following values: HP-UX, Linux, Windows, OpenVMS, Other, Unknown.
On Hyper-V host, the metric can have the following values: Windows, Other. On Hyper-V host, this metric is NA if the logical system is not active or Hyper-V Integration Components are not installed on it.
On vMA, the metric can have the following values for a host and logical system: ESX/ESXi followed by the version, or ESX-Serv (applicable only for a host), Linux, Windows, Solaris, Unknown. The value is NA for a resource pool.
BYLS_LS_PARENT_TYPE
----------------------------------
On vMA, the metric indicates the type of the parent entity. The value is HOST if the parent is a host, RESPOOL if the parent is a resource pool. For a host, the value is NA.
BYLS_LS_PARENT_UUID
----------------------------------
On vMA, the metric indicates the UUID appended to the display_name of the parent entity. For a logical system and resource pool this metric could indicate the UUID appended to the display_name of a host or resource pool, as they can be created under a host or resource pool. For a host, the value is NA.
For an LPAR, if the frame is discovered, the value will be the BYLS_LS_UUID of the frame.
BYLS_LS_PATH
----------------------------------
This metric indicates the installation path for the logical system.
On Hyper-V host, for the Root partition, this metric is NA.
On vMA, the metric indicates the installation path for a host or logical system. On vMA, for a resource pool and a host, this metric is “na”.
BYLS_LS_ROLE
----------------------------------
On vMA, for a host the metric is HOST. For a logical system the value is GUEST and for a resource pool the value is RESPOOL. For a logical system which is a vMA or VA, the value is PROXY. For a datacenter, the value is DATACENTER. For a cluster, the value is CLUSTER. For a datastore, the value is DATASTORE. For a template, the value is TEMPLATE.
For an AIX frame, the role is “Host”. For an LPAR, the role is “Guest”.
BYLS_LS_SHARED
----------------------------------
This metric indicates whether the physical CPUs are dedicated to this logical system or shared.
On HPUX HPVM and Hyper-V host, this metric is always “Shared”.
On vMA, the value is “Dedicated” for a host, and “Shared” for a logical system and resource pool.
On AIX SPLPAR, this metric is equivalent to the “Type” field of the ‘lparstat -i’ command. For AIX WPARs, this metric will always be “Shared”.
On Solaris Zones, this metric is “Dedicated” when this zone is attached to a CPU pool not shared by any other zone.
BYLS_LS_STATE
----------------------------------
The state of this logical system.
On HPVM, the logical systems can have one of the following states: Unknown, Other, invalid, Up, Down, Boot, Crash, Shutdown, Hung.
On vMA, this metric can have one of the following states for a host: on, off, unknown. The values for a logical system can be one of the following: on, off, suspended, unknown. The value is NA for a resource pool.
On Solaris Zones, the logical systems can have one of the following states: configured, incomplete, installed, ready, running, shutting down, mounted.
On AIX LPARs, the logical system will always be active.
On AIX WPARs, the logical systems can have one of the following states: Broken, Transitional, Defined, Active, Loaded, Paused, Frozen, Error.
A logical system on a Hyper-V host can have the following states: unknown, enabled, disabled, paused, suspended, starting, snapshtng, migrating, saving, stopping, deleted, pausing, resuming.
BYLS_LS_STATE_CHANGE_TIME
----------------------------------
For a guest, the metric is the epoch time when the last state change was observed.
The value is NA for all other entities.
BYLS_LS_TYPE
----------------------------------
The type of this logical system.
On AIX, the logical systems can have one of the following types: lpar, sys wpar, app wpar.
On vMA, the value of this metric is “VMware”.
For an AIX frame, the value of this metric is “FRAME”.
BYLS_LS_UUID
----------------------------------
UUID of this logical system. This ID uniquely identifies this logical system across multiple hosts.
On Hyper-V host, for the Root partition, this metric is NA.
On vMA, for a logical system or a host, the value indicates the UUID appended to the display_name of the system. For a resource pool the value is the hostname of the host where the resource pool is hosted, followed by the unique id of the resource pool.
For an AIX frame, the value is the display name appended with the serial number. For an LPAR, this value is the frame’s name appended with the serial number.
BYLS_MACHINE_MODEL
----------------------------------
On vMA, for a host, it is the CPU model of the host system. For a logical system and resource pool the value is “na”.
The machine model of the AIX Frame, if present. For an LPAR, this value would be “na”.
BYLS_MEM_ACTIVE
----------------------------------
On vMA, for a logical system it is the amount of memory that is actively used. For a host and resource pool the value is NA.
BYLS_MEM_AVAIL
----------------------------------
On vMA, for a host, the amount of available physical memory in the host system (in MBs unless otherwise specified). For a logical system and resource pool the value is NA.
BYLS_MEM_BALLOON_USED
----------------------------------
On vMA, for a logical system and cluster, it is the amount of memory held by memory control for ballooning. The value is represented in KB. For a host and resource pool the value is NA.
On KVM/Xen, this value will be “na” if the version of libvirt doesn’t support memory stats.
BYLS_MEM_BALLOON_UTIL
----------------------------------
On vMA, for a logical system, it is the amount of memory held by memory control for ballooning. It is represented as a percentage of BYLS_MEM_ENTL. For a host and resource pool the value is NA.
On KVM/Xen, this value will be “na” if the version of libvirt doesn’t support memory stats.
BYLS_MEM_EFFECTIVE_UTIL
----------------------------------
On vMA, for a cluster the metric is the utilization of the total amount of machine memory of all hosts in the cluster that is available for virtual machine memory (physical memory for use by the Guest OS) and virtual machine overhead memory. Effective Memory = Aggregate host machine memory - (VMkernel memory + Service Console memory + other service memory). The value is NA for all other entities.
BYLS_MEM_ENTL
----------------------------------
The entitled memory configured for this logical system (in MB).
On Hyper-V host, for the Root partition, this metric is NA.
On vMA, for a host the value is the physical memory available in the system, for a logical system this metric indicates the minimum memory configured, and for a resource pool the value is NA.
For an AIX frame, this value is obtained from the command “lshwres -m -r mem --level sys”.
BYLS_MEM_ENTL_MAX
----------------------------------
The maximum amount of memory configured for a logical system, in MB. The value of this metric will be “-3” in PA and “ul” in other clients if the entitlement is ‘Unlimited’ for a logical system.
On AIX LPARs, this metric will be “na”.
On vMA, this metric indicates the maximum amount of memory configured for a resource pool or a logical system.
For a host, the value is the amount of physical memory available in the system. On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running “hpvmstatus -V” will indicate whether the driver is active. For all other guests, the value is “na”. BYLS_MEM_ENTL_MIN ---------------------------------- The minimum amount of memory configured for the logical system, in MB. On AIX LPARs, this metric will be “na”. On vMA, this metric indicates the reserved amount of memory configured for a host, resource pool or a logical system. On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running “hpvmstatus -V” will indicate whether the driver is active. For all other guests, the value is “na”. BYLS_MEM_ENTL_UTIL ---------------------------------- The percentage of entitled memory in use during the interval. On vMA, for a logical system or a host, the value indicates percentage of entitled memory in use during the interval by it. For an AIX frame, this is calculated using “lshwres -r mempool -m “ from HMC. Active Memory Sharing has to be turned on for this. On vMA, for a resource pool, this metric is “na”. On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running “hpvmstatus -V” will indicate whether the driver is active. For all other guests, the value is “na”. BYLS_MEM_FREE ---------------------------------- The amount of free memory on the logical system, in MB. On vMA, for a host and logical system, it is the amount of memory not allocated. For a resource pool the value is “na”. On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running “hpvmstatus -V” will indicate whether the driver is active. For all other guests, the value is “na”. BYLS_MEM_FREE_UTIL ---------------------------------- The percentage of memory that is free at the end of the interval. On vMA, for a resource pool the value is NA. On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running “hpvmstatus -V” will indicate whether the driver is active. For all other guests, the value is “na”. BYLS_MEM_HEALTH ---------------------------------- On vMA, for a host, it is a number that indicates the state of the memory. Low number indicates system is not under memory pressure. For a logical system and resource pool the value is “na”. On vMA, the values are defined as: 0 - High - indicates free memory is available and no memory pressure. 1 - Soft 2 - Hard 3 - Low - indicates there is a pressure for free memory. On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running “hpvmstatus -V” will indicate whether the driver is active. For all other guests, the value is “na”. For relevant guests, these values represent the level of memory pressure, 0 being none and 3 being very high. BYLS_MEM_OVERHEAD ---------------------------------- The amount of memory associated with a logical system, that is currently consumed on the host system, due to virtualization. On vMA, this metric indicates the amount of overhead memory associated with a host, logical system and resource pool. 
BYLS_MEM_PHYS
----------------------------------
On vMA, for a host the value is the physical memory available in the system, and for a logical system this metric indicates the minimum memory configured. On vMA, for a resource pool, this metric is “na”.
On HPVM, this metric matches the data in the “Memory Details” section of “hpvmstatus -V” when the dynamic memory driver is not enabled, and it matches the data in the “Dynamic Memory Information” section when the dynamic memory driver is active. The dynamic memory driver is currently only available on guests running HPUX 11iv3 or newer versions.
BYLS_MEM_PHYS_UTIL
----------------------------------
The percentage of physical memory used during the interval.
On vMA and Cluster, the metric indicates the percentage of physical memory used by a host or logical system. On vMA, for a resource pool, this metric is “na”.
On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running “hpvmstatus -V” will indicate whether the driver is active. For all other guests, the value is “na”.
On KVM/Xen, this is the percentage of the total memory assigned to the VM that is currently used. For Domain-0 or any other instance with unlimited memory entitlement, it is NA.
BYLS_MEM_SHARES_PRIO
----------------------------------
The weightage/priority for memory assigned to this logical system. This value influences the share of unutilized physical memory that this logical system can utilize. The value of this metric will be “-3” in PA and “ul” in other clients if the memory shares value is ‘Unlimited’ for a logical system.
On AIX LPARs, this metric will be “na”.
On vMA, this metric indicates the share of memory configured for a resource pool and a logical system. For a host, the value is NA.
BYLS_MEM_SWAPIN
----------------------------------
On vMA, for a logical system the value indicates the amount of memory that is swapped in during the interval. For a host and resource pool the value is NA.
On KVM/Xen, this value will be “na” if extended memory statistics are not available.
BYLS_MEM_SWAPOUT
----------------------------------
On vMA, for a logical system the value indicates the amount of memory that is swapped out during the interval. For a host and resource pool the value is NA.
On KVM/Xen, this value will be “na” if extended memory statistics are not available.
BYLS_MEM_SWAPPED
----------------------------------
On vMA, for a host, logical system and resource pool, this metric indicates the amount of memory that has been transparently swapped to and from the disk.
BYLS_MEM_SWAPTARGET
----------------------------------
On vMA, for a logical system the value indicates the amount of memory that can be swapped. For a host and resource pool the value is “na”.
BYLS_MEM_SWAP_UTIL
----------------------------------
On Solaris, this metric indicates the percentage of swap memory consumed by the zone with respect to the total configured swap memory (BYLS_MEM_SWAP). This metric is calculated as:
BYLS_MEM_SWAP_UTIL = (BYLS_MEM_SWAP_USED / BYLS_MEM_SWAP) * 100
On vMA, for a logical system, it is the percentage of swap memory utilized with respect to the amount of swap memory available for a logical system. For a host and resource pool the value is NA.
For a logical system, this metric is calculated using the formula:
(BYLS_MEM_SWAPPED * 100) / (BYLS_MEM_ENTL - BYLS_MEM_ENTL_MIN)
BYLS_MEM_SYS
----------------------------------
On vMA, for a host, it is the amount of physical memory (in MBs unless otherwise specified) used by the system (kernel) during the interval. For a logical system and resource pool the value is NA.
BYLS_MEM_UNRESERVED
----------------------------------
On vMA, for a host it is the amount of memory that is unreserved. For a logical system and resource pool the value is “na”. This is the memory reservation not used by the Service Console, VMkernel, vSphere services, and other powered-on VMs’ user-specified memory reservations and overhead memory.
BYLS_MEM_USED
----------------------------------
The amount of memory used by the logical system at the end of the interval. On vMA, this applies to hosts, resource pools and logical systems. On vMA, for a resource pool, this metric is “na”.
On HPVM, this metric is valid for HPUX guests running 11iv3 or newer releases, with the dynamic memory driver active. Running “hpvmstatus -V” will indicate whether the driver is active. For all other guests, the value is “na”.
BYLS_MULTIACC_ENABLED
----------------------------------
On vMA, for a datastore the metric is the measure of whether multi-access has been enabled for the datastore. The value is NA for all other entities.
BYLS_NET_BYTE_RATE
----------------------------------
On vMA, for a host and logical system, it is the sum of data transmitted and received for all the NIC instances of the host and virtual machine. It is represented in KBps. For a resource pool the value is NA.
BYLS_NET_IN_BYTE
----------------------------------
On vMA, for a host and logical system, it is the number of bytes, in MB, received during the interval. For a resource pool the value is NA.
BYLS_NET_IN_PACKET
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of successful packets received through all network interfaces during the interval. On vMA, for a resource pool, this metric is “na”.
BYLS_NET_IN_PACKET_RATE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of successful packets per second received through all network interfaces during the interval. On vMA, for a resource pool, this metric is “na”.
BYLS_NET_OUT_BYTE
----------------------------------
On vMA, for a host and logical system, it is the number of bytes, in MB, transmitted during the interval. For a resource pool the value is NA.
BYLS_NET_OUT_PACKET
----------------------------------
On vMA, for a host and a logical system, it is the number of successful packets sent through all network interfaces during the last interval. On vMA, for a resource pool, this metric is “na”.
BYLS_NET_OUT_PACKET_RATE
----------------------------------
On vMA, for a host and a logical system, this metric indicates the number of successful packets per second sent through the network interfaces during the interval. On vMA, for a resource pool, this metric is “na”.
BYLS_NET_PACKET_RATE
----------------------------------
On vMA, for a host and a logical system, it is the number of successful packets per second, both sent and received, for all network interfaces during the interval. On vMA, for a resource pool, this metric is “na”.
BYLS_NUM_ACTIVE_LS
----------------------------------
On vMA, for a host, this indicates the number of logical systems hosted in a system that are active.
For a logical system and resource pool the value is NA.
For an AIX frame, this is the number of LPARs in “Running” state. For an LPAR, this value will be “na”.
BYLS_NUM_CLONES
----------------------------------
On vMA, for a cluster the metric is the number of virtual machine clone operations for that cluster in that interval. The value is NA for all other entities.
BYLS_NUM_CPU
----------------------------------
The number of virtual CPUs configured for this logical system. This metric is equivalent to GBL_NUM_CPU on the corresponding logical system.
On HPVM, the maximum CPUs a logical system can have is 4 with respect to HPVM 3.x.
On AIX SPLPAR, the number of CPUs can be configured irrespective of the available physical CPUs in the pool this logical system belongs to. For AIX WPARs, this metric represents the logical CPUs of the global environment.
On vMA, for a host the metric is the number of physical CPU threads on the host. For a logical system, the metric is the number of virtual CPUs configured. For a resource pool the metric is NA.
On Solaris Zones, this metric represents the number of CPUs in the CPU pool this zone is attached to. This metric value is equivalent to GBL_NUM_CPU inside the corresponding non-global zone.
BYLS_NUM_CPU_CORE
----------------------------------
On vMA, for a host this metric provides the total number of CPU cores on the system. For a logical system or a resource pool the value is NA.
BYLS_NUM_CREATE
----------------------------------
On vMA, for a cluster the metric is the number of virtual machine create operations for that cluster in that interval. The value is NA for all other entities.
BYLS_NUM_DEPLOY
----------------------------------
On vMA, for a cluster the metric is the number of virtual machine template deploy operations for that cluster in that interval. The value is NA for all other entities.
BYLS_NUM_DESTROY
----------------------------------
On vMA, for a cluster the metric is the number of virtual machine delete operations for that cluster in that interval. The value is NA for all other entities.
BYLS_NUM_DISK
----------------------------------
The number of disks configured for this logical system. Only local disk devices and optical devices present on the system are counted in this metric.
On vMA, for a host the metric is the number of disks configured for the host. For a logical system, the metric is the number of logical disk devices present on the logical system. For a resource pool the metric is NA.
For AIX WPARs, this metric will be “na”.
On Hyper-V host, this metric value is equivalent to GBL_NUM_DISK inside the corresponding Hyper-V guest. On Hyper-V host, this metric is NA if the logical system is not active.
BYLS_NUM_HOSTS
----------------------------------
On vMA, for a DataCenter
as first half and the second half is the ESX host name.
BYNETIF_NET_SPEED
----------------------------------
The speed of this interface. This is the bandwidth in Mega bits/sec.
Some AIX systems report a speed that is lower than the measured throughput
and this can result in BYNETIF_UTIL and GBL_NET_UTIL_PEAK showing more than
100% utilization.
On Linux, root permission is required to obtain network interface bandwidth,
so values will be n/a when running in non-root mode. Also, the maximum
bandwidth of virtual interfaces (vnetN) may be reported incorrectly on KVM
or Xen servers, so, as on AIX, utilization may exceed 100%.
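As an illustration of where the bandwidth figure can come from on Linux, the
following minimal Python sketch reads the sysfs speed attribute. This is not
necessarily how the collector obtains the value, and the interface name
“eth0” is only an example:

  # Read a Linux interface's nominal speed in Mbit/s from sysfs.
  # Virtual interfaces may expose no value or -1, which maps to "na".
  def net_speed_mbit(ifname="eth0"):
      try:
          with open("/sys/class/net/%s/speed" % ifname) as f:
              speed = int(f.read().strip())
          return speed if speed > 0 else None
      except (IOError, OSError, ValueError):
          return None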
BYNETIF_NET_TYPE
----------------------------------
The type of network device the interface communicates through.
Lan     - local area network card
Loop    - software loopback interface (not tied to a hardware device)
Loop6   - software loopback interface IPv6 (not tied to a hardware device)
Serial  - serial modem port
Vlan    - virtual lan
Wan     - wide area network card
Tunnel  - tunnel interface
Apa     - HP LinkAggregate Interface (APA)
Other   - hardware network interface type is unknown
ESXVLan - the card type belongs to network cards of ESX hosts which are
          monitored on vMA
BYNETIF_OUT_BYTE
----------------------------------
The number of KBs sent to the network via this interface during the interval.
Only the bytes in packets that carry data are included in this rate.
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
BYNETIF_OUT_BYTE_RATE
----------------------------------
The number of KBs per second sent to the network via this interface during
the interval. Only the bytes in packets that carry data are included in this
rate.
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
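A minimal sketch of the rate arithmetic, assuming the Linux /proc/net/dev
counters as the data source. This is illustrative only; the interface name
and interval are examples, and it is not the collector's implementation:

  import time

  def tx_bytes(ifname):
      # In /proc/net/dev, the transmitted-bytes counter is the 9th numeric
      # field after the interface name.
      with open("/proc/net/dev") as f:
          for line in f:
              if line.strip().startswith(ifname + ":"):
                  return int(line.split(":")[1].split()[8])
      return None

  def out_kb_rate(ifname="eth0", interval=5.0):
      first = tx_bytes(ifname)
      time.sleep(interval)
      second = tx_bytes(ifname)
      return (second - first) / 1024.0 / interval   # KB sent per second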
BYNETIF_OUT_BYTE_RATE_CUM
----------------------------------
The average number of KBs per second sent to the network via this interface
over the cumulative collection time. Only the bytes in packets that carry
data are included in this rate.
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
BYNETIF_OUT_PACKET
----------------------------------
The number of successful physical packets sent through the network interface
during the interval. Successful packets are those that have been processed
without errors or collisions.
For HP-UX, this will be the same as the sum of the “Outbound Unicast Packets”
and “Outbound Non-Unicast Packets” values from the output of the “lanadmin”
utility for the network interface. Remember that “lanadmin” reports
cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i”
shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the sum of the “Opkts” column
(TX-OK on Linux) from the “netstat -i” command for a network device. See
also netstat(1).
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
BYNETIF_OUT_PACKET_RATE
----------------------------------
The number of successful physical packets per second sent through the network
interface during the interval. Successful packets are those that have been
processed without errors or collisions.
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
BYNETIF_OUT_PACKET_RATE_CUM
----------------------------------
The average number of successful physical packets per second sent through the
network interface over the cumulative collection time.
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
BYNETIF_PACKET_RATE
----------------------------------
The number of successful physical packets per second sent and received
through the network interface during the interval. Successful packets are
those that have been processed without errors or collisions.
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
BYNETIF_UTIL
----------------------------------
The percentage of bandwidth used with respect to the total available
bandwidth on a given network interface at the end of the interval.
On vMA this value will be N/A for those Lan cards which are of type ESXVLan.
Some AIX systems report a speed that is lower than the measured throughput
and this can result in BYNETIF_UTIL and GBL_NET_UTIL_PEAK showing more than
100% utilization.
On Linux, root permission is required to obtain network interface bandwidth,
so values will be n/a when running in non-root mode. Also, the maximum
bandwidth of virtual interfaces (vnetN) may be reported incorrectly on KVM
or Xen servers, so, as on AIX, utilization may exceed 100%.
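As a worked example of the utilization arithmetic (the figures are
illustrative): an interface with a reported speed of 1000 Mbit/s that moved
60 MB of combined inbound and outbound traffic in a 5-second interval
averaged 12 MB per second, or 96 Mbit/s, giving (96 / 1000) * 100 = 9.6
percent utilization. If the reported speed understates the real link
capacity, the same arithmetic can yield values above 100 percent, as noted
above.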
BYOP_CLIENT_COUNT
----------------------------------
The number of current NFS operations that the local machine has processed as
an NFS client during the interval.
A host on the network can act as both a client and a server at the same
time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
BYOP_CLIENT_COUNT_CUM
----------------------------------
The number of current NFS operations that the local machine has processed as
an NFS client over the cumulative collection time.
A host on the network can act as both a client and a server at the same
time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
BYOP_INTERVAL
----------------------------------
The amount of time in the interval.
BYOP_INTERVAL_CUM
----------------------------------
The amount of time over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
BYOP_NAME
----------------------------------
String mnemonic for the NFS operation. One of the following:
For NFS Version 2
Name         Operation/Action
------------------------------------
getattr      Return the current attributes of a file.
setattr      Set the attributes of a file and return the new attributes.
lookup       Return the attributes of a file.
readlink     Return the string in the symbolic link of a file.
read         Return data from a file.
write        Put data into a file.
create       Create a file.
remove       Remove a file.
rename       Give a file a new name.
link         Create a hard link to a file.
symlink      Create a symbolic link to a file.
mkdir        Create a directory.
rmdir        Remove a directory.
readdir      Read a directory entry.
statfs       Return mounted file system information.
null         Verify NFS service connections and timing. On HP-UX, no
             actual work is done.
writecache   Flush the server write cache if a special write cache
             exists. Most systems use the file buffer cache and not a
             special server cache. Not used on HP-UX.
root         Find the root file system handle (probably obsolete). Not
             used on HP-UX.
For NFS Version 3
Name         Operation/Action
------------------------------------
getattr      Return the current attributes of a file.
setattr      Set the attributes of a file and return the new attributes.
lookup       Return the attributes of a file.
access       Check access permissions of a user.
readlink     Return the string in the symbolic link of a file.
read         Return data from a file.
write        Put data into a file.
create       Create a file.
mkdir        Make a directory.
symlink      Create a symbolic link to a file.
mknod        Create a special device.
remove       Remove a file.
rmdir        Remove a directory.
rename       Give a file a new name.
link         Create a hard link to a file.
readdir      Read a directory entry.
readdirplus  Extended read of a directory entry.
fsstat       Get dynamic file system information.
fsinfo       Get static file system information.
pathconf     Retrieve POSIX information.
commit       Commit cached data on the server to stable storage.
null         Verify NFS services. No actual work is done.
BYOP_SERVER_COUNT
----------------------------------
The number of current NFS operations that the local machine has processed as
an NFS server during the interval.
A host on the network can act as both a client and a server at the same
time.
BYOP_SERVER_COUNT_CUM
----------------------------------
The number of current NFS operations that the local machine has processed as
an NFS server over the cumulative collection time.
A host on the network can act as both a client and a server at the same
time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
BYSWP_SWAP_PRI
----------------------------------
The priority of this swap device. This value is set by either the swapon(1M)
command, or by the “pri=“ field in /etc/fstab.
On HP-UX, swap space is used by the lower value priorities first. Since
device swap is faster than file system swap, it is advisable to have lower
values for device swap. The legal values for priority range from 0 to 10.
On HP-UX, the “memory” swap area has no priority and will be shown as -1.
This indicates that using memory as a swap area is only done after all other
swap resources have been exhausted. This is true in extreme cases of memory
pressure forcing the kernel to swap the entire process to disk. In cases of
process deactivation, the memory pseudo swap actually has the highest
priority - deactivated pages are not moved - they are simply marked as
deactivated and the space they occupy is considered pseudo swap.
On Linux, swap space is used by the higher value priorities first. The legal
values for priority range from 0 to 32767. The system assigns negative
priority values if no priority is specified during the creation of swap area.
See swapon(8) for details.
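On Linux, the priority, size, and usage of each swap area can be seen in
/proc/swaps (the same figures discussed under BYSWP_SWAP_SPACE_AVAIL and
BYSWP_SWAP_SPACE_USED below). A minimal, illustrative Python sketch of
reading it, not the collector's method:

  # Parse /proc/swaps: Filename, Type, Size (KB), Used (KB), Priority.
  def swap_areas():
      areas = []
      with open("/proc/swaps") as f:
          next(f)                      # skip the header line
          for line in f:
              name, swtype, size_kb, used_kb, prio = line.split()
              areas.append({"name": name,
                            "type": swtype,
                            "avail_mb": int(size_kb) / 1024.0,
                            "used_mb": int(used_kb) / 1024.0,
                            "priority": int(prio)})
      return areas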
BYSWP_SWAP_SPACE_AVAIL
----------------------------------
The capacity (in MB) for swapping in this swap area.
On HP-UX, for “device” type swap, this value is constant. However, for
“filesys” swap this value grows as needed. File system swap grows in units
of “SWCHUNKS” x DEV_BSIZE bytes, which is typically 2MB. This metric is
similar to the “AVAIL” parameters returned from /usr/sbin/swapinfo. For
“memory” type swap, this value also grows as needed or as possible, given
that any memory reserved for swap cannot be used for normal virtual memory.
Note that this is potential swap space. Since swap is allocated in fixed
(SWCHUNK) sizes, not all of this space may actually be usable. For example,
on a 61 MB disk using 2 MB swap size allocations, 1 MB remains unusable and
is considered wasted space.
On SUN, this is the same as (blocks * .5)/1024, reported by the “swap -l”
command.
On AIX, this metric is set to “na” for inactive swap devices.
On Unix systems, this metric is updated every 30 seconds or the sampling
interval, whichever is greater.
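As a worked example consistent with the “swap -l” conversion above (figures
illustrative): the command reports 512-byte blocks, so an area of 1048576
blocks corresponds to (1048576 * .5)/1024 = 512 MB of swap capacity.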
BYSWP_SWAP_SPACE_NAME
----------------------------------
On Unix systems, this is the name of the device file or file system where the
swap space is located.
On HP-UX, part of the system’s physical memory may be allocated as a pseudo-
swap device. It is enabled by setting the “SWAPMEM_ON” kernel parameter to
1.
On SunOS 5.X, part of the system’s physical memory may be allocated as a
pseudo-swap device. Also note, “/tmp” is usually configured as a memory
based file system and is not used for swap space. Therefore, it will not be
listed with the swap devices. This is noted because “df” uses the label
“swap” for the “/tmp” file system which may be confusing. See tmpfs(7).
BYSWP_SWAP_SPACE_USED
----------------------------------
The amount of swap space (in MB) used in this area.
On HP-UX, this value is similar to the “USED” column returned by the
/usr/sbin/swapinfo command.
On SUN, “Used” indicates amount written to disk (or locked in memory), rather
than reserved. Swap space is reserved (by decrementing a counter) when
virtual memory for a program is created. This is the same as (blocks - free)
* .5/1024, reported by the “swap -l” command.
On SUN, global swap space is tracked through the operating system. Device
swap space is tracked through the devices. For this reason, the amount of
swap space used may differ between the global and by-device metrics.
Sometimes pages that are marked to be swapped to disk by the operating system
are never swapped. The operating system records this as used swap space, but
the devices do not, since no physical IOs occur. (Metrics with the prefix
“GBL” are global and metrics with the prefix “BYSWP” are by device.)
On AIX, this metric is set to “na” for inactive swap devices.
On Unix systems, this metric is updated every 30 seconds or the sampling
interval, whichever is greater.
BYSWP_SWAP_TYPE
----------------------------------
The type of swap space allocated on the system.
On HP-UX and SUN, types of swap space are device, file system (“filesys”), or
memory. “Device” swap is accessed directly without going through the file
system, and is therefore faster than “filesys” swap. “Filesys” swap can be
to a local or NFS mounted swap file. “Memory” swap is space in the system’s
physical memory reserved for pseudo-swap for running processes. Using
pseudo-swap means the pages are simply locked in memory rather than copied to
a swap area.
On SUN, note that “/tmp” is usually configured as a memory based file system
and is not used for swap space. Therefore, it will not be listed with the
swap devices, and “swap” or “tmpfs” will not be swap types. This is noted
because “df” uses the label “swap” for the “/tmp” file system which may be
confusing. See tmpfs(7).
On AIX, “Device” swap is accessed directly without going through the file
system. For “Device” swap, the device is specially allocated for swapping
purposes only. The device can be a logical volume (“lv”) or a remote file
system (“remote fs”). Swapping is often referred to as paging to paging
space.
FS_BLOCK_SIZE
----------------------------------
The maximum block size of this file system, in bytes.
A value of “na” may be displayed if the file system is not mounted. If the
product is restarted, these unmounted file systems are not displayed until
remounted.
FS_DEVNAME
----------------------------------
On Unix systems, this is the path name string of the current device.
On Windows, this is the disk drive string of the current device.
On HP-UX, this is the “fsname” parameter in the mount(1M) command. For NFS
devices, this includes the name of the node exporting the file system. It is
possible that a process may mount a device using the mount(2) system call.
This call does not update the “/etc/mnttab” and its name is blank. This
situation is rare, and should be corrected by syncer(1M). Note that once a
device is mounted, its entry is displayed, even after the device is
unmounted, until the midaemon process terminates.
On SUN, this is the path name string of the current device, or “tmpfs” for
memory based file systems. See tmpfs(7).
FS_DEVNO
----------------------------------
On Unix systems, this is the major and minor number of the file system.
On Windows, this is the unit number of the disk device on which the logical
disk resides.
The scope collector logs the value of this metric in decimal format.
FS_DIRNAME
----------------------------------
On Unix systems, this is the path name of the mount point of the file system.
On Windows, this is the drive letter associated with the selected disk
partition.
On HP-UX, this is the path name of the mount point of the file system if the
logical volume has a mounted file system. This is the directory parameter of
the mount(1M) command for most entries. Exceptions are:
* For lvm swap areas, this field contains “lvm swap device”.
* For logical volumes with no mounted file systems, this field contains
  “Raw Logical Volume” (relevant only to Perf Agent).
On HP-UX, the file names are in the same order as shown in the
“/usr/sbin/mount -p” command. File systems are not displayed until they
exhibit IO activity once the midaemon has been started. Also, once a device
is displayed, it continues to be displayed (even after the device is
unmounted) until the midaemon process terminates.
On SUN, only “UFS”, “HSFS” and “TMPFS” file systems are listed. See
mount(1M) and mnttab(4). “TMPFS” file systems are memory based filesystems
and are listed here for convenience. See tmpfs(7).
On AIX, see mount(1M) and filesystems(4). On OSF1, see mount(2).
FS_FRAG_SIZE
----------------------------------
The fundamental file system block size, in bytes.
A value of “na” may be displayed if the file system is not mounted. If the
product is restarted, these unmounted file systems are not displayed until
remounted.
FS_INODE_UTIL
----------------------------------
Percentage of this file system’s inodes in use during the interval.
A value of “na” may be displayed if the file system is not mounted. If the
product is restarted, these unmounted file systems are not displayed until
remounted.
FS_MAX_INODES
----------------------------------
Number of configured file system inodes.
A value of “na” may be displayed if the file system is not mounted. If the
product is restarted, these unmounted file systems are not displayed until
remounted.
FS_MAX_SIZE
----------------------------------
The maximum size, in MB, that this file system could obtain if full.
Note that this is the user space capacity - it is the file system space
accessible to non root users. On most Unix systems, the df command shows the
total file system capacity which includes the extra file system space
accessible to root users only.
The equivalent fields to look at are “used” and “avail”. For the target file
system, to calculate the maximum size in MB, use
FS Max Size = (used + avail)/1024
A value of “na” may be displayed if the file system is not mounted. If the
product is restarted, these unmounted file systems are not displayed until
remounted.
On HP-UX, this metric is updated at 4 minute intervals to minimize collection
overhead.
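A minimal, illustrative Python sketch of the user-space capacity calculation
using statvfs (the mount point is an example; the collector may obtain the
value differently):

  import os

  def fs_max_size_mb(mount_point="/"):
      # User-visible capacity = used + available-to-users, which excludes
      # the space reserved for root (f_bfree - f_bavail).
      st = os.statvfs(mount_point)
      user_blocks = st.f_blocks - (st.f_bfree - st.f_bavail)
      return user_blocks * st.f_frsize / (1024.0 * 1024.0)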
FS_PHYS_IO_RATE
----------------------------------
The number of physical IOs per second directed to this file system during the
interval.
FS_PHYS_IO_RATE_CUM
----------------------------------
The average number of physical IOs per second directed to this file system
over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
FS_PHYS_READ_BYTE_RATE
----------------------------------
The number of physical KBs per second read from this file system during the
interval.
FS_PHYS_READ_BYTE_RATE_CUM
----------------------------------
The average number of KBs per second of physical reads from this file system
over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
FS_PHYS_READ_RATE
----------------------------------
The number of physical reads per second directed to this file system during
the interval.
On Unix systems, physical reads are generated by user file access, virtual
memory access (paging), file system management, or raw device access.
FS_PHYS_READ_RATE_CUM
----------------------------------
The average number of physical reads per second directed to this file system
over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
FS_PHYS_WRITE_BYTE_RATE
----------------------------------
The number of physical KBs per second written to this file system during the
interval.
FS_PHYS_WRITE_BYTE_RATE_CUM
----------------------------------
The average number of KBs per second of physical writes to this file system
over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
FS_PHYS_WRITE_RATE
----------------------------------
The number of physical writes per second directed to this file system during
the interval.
FS_PHYS_WRITE_RATE_CUM
----------------------------------
The average number of physical writes per second directed to this file system
over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
FS_SPACE_RESERVED
----------------------------------
The amount of file system space in MBs reserved for superuser allocation.
On AIX, this metric is typically zero for local filesystems because by
default AIX does not reserve any file system space for the superuser.
FS_SPACE_USED
----------------------------------
The amount of file system space in MBs that is being used.
FS_SPACE_UTIL
----------------------------------
Percentage of the file system space in use during the interval.
Note that this is the user space capacity - it is the file system space
accessible to non root users. On most Unix systems, the df command shows the
total file system capacity which includes the extra file system space
accessible to root users only.
A value of “na” may be displayed if the file system is not mounted. If the
product is restarted, these unmounted file systems are not displayed until
remounted.
On HP-UX, this metric is updated at 4 minute intervals to minimize collection
overhead.
FS_TYPE
----------------------------------
A string indicating the file system type. On Unix systems, some of the
possible types are:
hfs   - user file system
ufs   - user file system
ext2  - user file system
cdfs  - CD-ROM file system
vxfs  - Veritas (vxfs) file system
nfs   - network file system
nfs3  - network file system Version 3
On Windows, some of the possible types are:
NTFS  - New Technology File System
FAT   - 16-bit File Allocation Table
FAT32 - 32-bit File Allocation Table
FAT uses a 16-bit file allocation table entry (2^16 clusters).
FAT32 uses a 32-bit file allocation table entry. However, Windows 2000
reserves the first 4 bits of a FAT32 file allocation table entry, which
means FAT32 has a theoretical maximum of 2^28 clusters. NTFS is the native
file system of Windows NT and beyond.
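For reference, a 16-bit file allocation table entry allows 2^16 = 65,536
clusters, while the 28 usable bits of a FAT32 entry allow 2^28 = 268,435,456
clusters.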
GBL_ACTIVE_CPU
----------------------------------
The number of CPUs online on the system.
For HP-UX and certain versions of Linux, the sar(1M) command allows you to
check the status of the system CPUs.
For SUN and DEC, the commands psrinfo(1M) and psradm(1M) allow you to check
or change the status of the system CPUs.
For AIX, the pstat(1) command allows you to check the status of the system
CPUs.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment if RSET is not configured for the System WPAR. If RSET is
configured for the System WPAR, this metric value will report the number of
CPUs in the RSET.
On Solaris non-global zones with Uncapped CPUs, this metric shows data from
the global zone.
GBL_ACTIVE_CPU_CORE
----------------------------------
This metric provides the total number of active CPU cores on a physical
system.
GBL_ACTIVE_PROC
----------------------------------
An active process is one that exists and consumes some CPU time.
GBL_ACTIVE_PROC is the sum of the alive-process-time/interval-time ratios of
every process that is active (uses any CPU time) during an interval.
The following diagram of a four second interval during which two processes
exist on the system should be used to understand the above definition. Note
the difference between active processes, which consume CPU time, and alive
processes which merely exist on the system.
         ----------- Seconds -----------
            1         2         3         4
Proc
----     --------  --------  --------  --------
 A         live      live      live      live
 B       live/CPU  live/CPU    live      dead
Process A is alive for the entire four second interval but consumes no CPU.
A’s contribution to GBL_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to
GBL_ACTIVE_PROC. B’s contribution to GBL_ALIVE_PROC is 3*1/4. B contributes
2*1/4 to GBL_ACTIVE_PROC. Thus, for this interval, GBL_ACTIVE_PROC equals
0.5 and GBL_ALIVE_PROC equals 1.75.
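The same arithmetic, restated as a small illustrative Python sketch (the two
processes and the four-second interval mirror the diagram above; this is not
the collector's code):

  # Per the diagram: each process contributes alive-seconds/interval to
  # GBL_ALIVE_PROC, and the seconds during which it consumed CPU, divided
  # by the interval, to GBL_ACTIVE_PROC.
  interval = 4.0
  procs = [
      {"name": "A", "alive": 4.0, "cpu_active": 0.0},
      {"name": "B", "alive": 3.0, "cpu_active": 2.0},
  ]
  gbl_alive_proc  = sum(p["alive"] / interval for p in procs)       # 1.75
  gbl_active_proc = sum(p["cpu_active"] / interval for p in procs)  # 0.5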
Because a process may be alive but not active, GBL_ACTIVE_PROC will always be
less than or equal to GBL_ALIVE_PROC.
This metric is a good overall indicator of the workload of the system. An
unusually large number of active processes could indicate a CPU bottleneck.
To determine if the CPU is a bottleneck, compare this metric with
GBL_CPU_TOTAL_UTIL and GBL_RUN_QUEUE. If GBL_CPU_TOTAL_UTIL is near 100
percent and GBL_RUN_QUEUE is greater than one, there is a bottleneck.
On non HP-UX systems, this metric is derived from sampled process data.
Since the data for a process is not available after the process has died on
this operating system, a process whose life is shorter than the sampling
interval may not be seen when the samples are taken. Thus this metric may be
slightly less than the actual value. Increasing the sampling frequency
captures a more accurate count, but the overhead of collection may also rise.
GBL_ALIVE_PROC
----------------------------------
An alive process is one that exists on the system. GBL_ALIVE_PROC is the sum
of the alive-process-time/interval-time ratios for every process.
The following diagram of a four second interval during which two processes
exist on the system should be used to understand the above definition. Note
the difference between active processes, which consume CPU time, and alive
processes which merely exist on the system.
         ----------- Seconds -----------
            1         2         3         4
Proc
----     --------  --------  --------  --------
 A         live      live      live      live
 B       live/CPU  live/CPU    live      dead
Process A is alive for the entire four second interval but consumes no CPU.
A’s contribution to GBL_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to
GBL_ACTIVE_PROC. B’s contribution to GBL_ALIVE_PROC is 3*1/4. B contributes
2*1/4 to GBL_ACTIVE_PROC. Thus, for this interval, GBL_ACTIVE_PROC equals
0.5 and GBL_ALIVE_PROC equals 1.75.
Because a process may be alive but not active, GBL_ACTIVE_PROC will always be
less than or equal to GBL_ALIVE_PROC.
On non HP-UX systems, this metric is derived from sampled process data.
Since the data for a process is not available after the process has died on
this operating system, a process whose life is shorter than the sampling
interval may not be seen when the samples are taken. Thus this metric may be
slightly less than the actual value. Increasing the sampling frequency
captures a more accurate count, but the overhead of collection may also rise.
GBL_BLANK
----------------------------------
A string of blanks.
GBL_BOOT_TIME
----------------------------------
The date and time when the system was last booted.
GBL_COLLECTION_MODE
----------------------------------
This metric reports whether the data collection is running as “root” (super-
user) or “non-root” (regular user). Running as non-root results in a loss of
functionality which varies across Unix platforms. Running non-root is not
available on HP-UX.
The value is always “admin” on Windows.
GBL_COLLECTOR
----------------------------------
ASCII field containing collector name and version. The collector name will
appear as either “SCOPE/xx V.UU.FF.LF” or “Coda RV.UU.FF.LF”. xx identifies
the platform; V = version, UU = update level, FF = fix level, and LF = lab
fix id. For example, SCOPE/UX C.04.00.00; or Coda A.07.10.04.
GBL_COMPLETED_PROC
----------------------------------
The number of processes that terminated during the interval.
On non HP-UX systems, this metric is derived from sampled process data.
Since the data for a process is not available after the process has died on
this operating system, a process whose life is shorter than the sampling
interval may not be seen when the samples are taken. Thus this metric may be
slightly less than the actual value. Increasing the sampling frequency
captures a more accurate count, but the overhead of collection may also rise.
GBL_CPU_CLOCK
----------------------------------
The clock speed of the CPUs in MHz if all of the processors have the same
clock speed. Otherwise, “na” is shown if the processors have different clock
speeds. Note that Linux supports dynamic frequency scaling; if it is
enabled, the CPU speed can change with varying load.
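For illustration, the per-processor clock values behind this metric are
visible in /proc/cpuinfo on Linux; a minimal Python sketch, assuming that
file's usual layout (not the collector's method):

  def cpu_clock_mhz():
      # Return the common "cpu MHz" value, or None ("na") when processors
      # differ, e.g. with dynamic frequency scaling enabled.
      speeds = set()
      with open("/proc/cpuinfo") as f:
          for line in f:
              if line.lower().startswith("cpu mhz"):
                  speeds.add(int(round(float(line.split(":")[1]))))
      return speeds.pop() if len(speeds) == 1 else None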
GBL_CPU_CYCLE_ENTL_MAX
----------------------------------
On a recognized VMware ESX guest, where VMware guest SDK is enabled, this
value indicates the maximum processor capacity, in MHz, configured for this
logical system. The value is -3 if entitlement is ‘Unlimited’ for this
logical system.
On a recognized VMware ESX guest, where VMware guest SDK is disabled, the
value is “na”.
On a standalone system, the value is the sum of clock speed of individual
CPUs.
GBL_CPU_CYCLE_ENTL_MIN
----------------------------------
On a recognized VMware ESX guest, where VMware guest SDK is enabled, this
value indicates the minimum processor capacity, in MHz, configured for this
logical system.
On a recognized VMware ESX guest, where VMware guest SDK is disabled, the
value is “na”.
On a standalone system, the value is the sum of clock speed of individual
CPUs.
GBL_CPU_ENTL_MAX
----------------------------------
In a virtual environment, this metric indicates the maximum number of
processing units configured for this logical system.
On AIX SPLPAR, this metric is equivalent to “Maximum Capacity” field of
‘lparstat -i’ command.
On a recognized VMware ESX guest the value is equivalent to
GBL_CPU_CYCLE_ENTL_MAX represented in CPU units.
On a recognized VMware ESX guest, where VMware guest SDK is disabled, the
value is “na”.
On a standalone system, the value is the same as GBL_NUM_CPU.
GBL_CPU_ENTL_MIN
----------------------------------
In a virtual environment, this metric indicates the minimum number of
processing units configured for this Logical system.
On AIX SPLPAR, this metric is equivalent to “Minimum Capacity” field of
‘lparstat -i’ command.
On a recognized VMware ESX guest, where VMware guest SDK is enabled, the
value is equivalent to GBL_CPU_CYCLE_ENTL_MIN represented in CPU units.
On a recognized VMware ESX guest, where VMware guest SDK is disabled, the
value is “na”.
On a standalone system, the value is the same as GBL_NUM_CPU.
GBL_CPU_ENTL_UTIL
----------------------------------
Percentage of entitled processing units (guaranteed processing units
allocated to this logical system) consumed by the logical system.
On AIX, this metric is calculated as:
GBL_CPU_ENTL_UTIL = (GBL_CPU_PHYSC / GBL_CPU_ENTL) * 100
On a recognized VMware ESX guest, where VMware guest SDK is enabled, this
metric is calculated as:
GBL_CPU_ENTL_UTIL = (GBL_CPU_PHYSC / GBL_CPU_ENTL_MIN) * 100
On a recognized VMware ESX guest, where VMware guest SDK is disabled, the
value is “na”.
On a standalone system, the value is the same as GBL_CPU_TOTAL_UTIL.
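As a worked example of the VMware ESX guest formula above (the figures are
illustrative): a logical system that consumed GBL_CPU_PHYSC = 1.5 physical
CPUs against GBL_CPU_ENTL_MIN = 2.0 processing units reports
GBL_CPU_ENTL_UTIL = (1.5 / 2.0) * 100 = 75 percent.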
GBL_CPU_GUEST_TIME
----------------------------------
The time, in seconds, spent by CPUs to service guests during the interval.
Guest time, on Linux KVM hosts, is the time that is spent servicing guests.
Xen hosts, as of this release, do not update these counters, nor do other
OSes. On a system with multiple CPUs, this metric is normalized. That is,
the CPU used over all processors is divided by the number of processors
online. This represents the usage of the total processing capacity
available.
GBL_CPU_GUEST_TIME_CUM
----------------------------------
The time, in seconds, spent by CPUs to service guests over the cumulative
collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
Guest time, on Linux KVM hosts, is the time that is spent servicing guests.
Xen hosts, as of this release, do not update these counters, nor do other
OSes. On a system with multiple CPUs, this metric is normalized. That is,
the CPU used over all processors is divided by the number of processors
online. This represents the usage of the total processing capacity
available.
GBL_CPU_GUEST_UTIL
----------------------------------
The percentage of time that the CPUs were used to service guests during the
interval.
Guest time, on Linux KVM hosts, is the time that is spent servicing guests.
Xen hosts, as of this release, do not update these counters, nor do other
OSes.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
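As an illustrative example of the normalization: on a host with four
processors online, a guest that kept one processor fully busy for the whole
interval accounts for 100 / 4 = 25 percent guest utilization, because the
CPU time used across all processors is divided by the number of processors
online.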
GBL_CPU_GUEST_UTIL_CUM
----------------------------------
The percentage of time that the CPUs were used to service guests over the
cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
Guest time, on Linux KVM hosts, is the time that is spent servicing guests.
Xen hosts, as of this release, do not update these counters, nor do other
OSes. On a system with multiple CPUs, this metric is normalized. That is,
the CPU used over all processors is divided by the number of processors
online. This represents the usage of the total processing capacity
available.
GBL_CPU_GUEST_UTIL_HIGH
----------------------------------
The highest percentage of guest CPU time during any one interval over the
cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the
parm file, this metric will report values normalized against the number of
active cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_IDLE_TIME
----------------------------------
The time, in seconds, that the CPU was idle during the interval. This is the
total idle time, including waiting for I/O (and stolen time on Linux).
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online.
On AIX System WPARs, this metric value is calculated against physical cpu
time.
On Solaris non-global zones, this metric is N/A. On platforms other than
HPUX, if the ignore_mt flag is set(true) in parm file, this metric will
report values normalized against the number of active cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_IDLE_TIME_CUM
----------------------------------
The time, in seconds, that the CPU was idle over the cumulative collection
time. This is the total idle time, including waiting for I/O (and stolen
time on Linux).
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_IDLE_UTIL
----------------------------------
The percentage of time that the CPU was idle during the interval. This is
the total idle time, including waiting for I/O (and stolen time on Linux).
On Unix systems, this is the same as the sum of the “%idle” and “%wio” fields
reported by the “sar -u” command.
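On Linux, a comparable figure can be derived from /proc/stat deltas. The
following is a minimal sketch under that assumption; it is not how the
collector is implemented:

    # Minimal sketch: idle percentage from /proc/stat, counting idle + iowait
    # (+ steal, per the Linux note above). Assumes the standard
    # "cpu user nice system idle iowait irq softirq steal ..." line layout.
    import time

    def cpu_ticks():
        with open("/proc/stat") as f:
            return [int(v) for v in f.readline().split()[1:9]]

    before = cpu_ticks()
    time.sleep(5)
    after = cpu_ticks()
    delta = [b - a for a, b in zip(before, after)]
    idle = delta[3] + delta[4] + delta[7]          # idle + iowait + steal
    print("idle: %.1f%%" % (100.0 * idle / sum(delta)))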
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online.
On Solaris non-global zones, this metric is N/A. On platforms other than
HPUX, if the ignore_mt flag is set(true) in parm file, this metric will
report values normalized against the number of active cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_IDLE_UTIL_CUM
----------------------------------
The percentage of time that the CPU was idle over the cumulative collection
time. This is the total idle time, including waiting for I/O (and stolen
time on Linux).
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. On
platforms other than HPUX, if the ignore_mt flag is set(true) in parm file,
this metric will report values normalized against the number of active cores
in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_IDLE_UTIL_HIGH
----------------------------------
The highest percentage of time that the CPU was idle during any one interval
over the cumulative collection time. This is the total idle time, including
waiting for I/O (and stolen time on Linux).
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_INTERRUPT_TIME
----------------------------------
The time, in seconds, that the CPU spent processing interrupts during the
interval.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On Hyper-V host, this metric is NA.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_INTERRUPT_TIME_CUM
----------------------------------
The time, in seconds, that the CPU spent processing interrupts over the
cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_INTERRUPT_UTIL
----------------------------------
The percentage of time that the CPU spent processing interrupts during the
interval.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On Hyper-V host, this metric is NA.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_INTERRUPT_UTIL_CUM
----------------------------------
The percentage of time that the CPU spent processing interrupts over the
cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_INTERRUPT_UTIL_HIGH
----------------------------------
The highest percentage of time that the CPU spent processing interrupts
during any one interval over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_MT_ENABLED
----------------------------------
On AIX, this metric indicates whether this (Logical) System has SMT enabled.
On other platforms, this metric shows whether HyperThreading (HT) is Enabled
or Disabled/Not Supported.
On Linux, this state is dynamic: if HyperThreading is enabled but all the
CPUs have only one logical processor enabled, this metric will report that HT
is disabled.
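A minimal sketch of the same check, assuming the standard Linux sysfs
topology files (this is not the agent's implementation):

    # Report HT/SMT as "effectively enabled" only if at least one core
    # currently has more than one online logical CPU.
    import glob

    def smt_active():
        pattern = "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"
        for path in glob.glob(pattern):
            with open(path) as f:
                siblings = f.read().strip()   # e.g. "0,4" or "2-3" or just "1"
            if "," in siblings or "-" in siblings:
                return True
        return False

    print("HT/SMT effectively enabled:", smt_active())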
On AIX System WPARs, this metric is NA.
On Windows, this metric will be “na” on Windows Server 2003 Itanium systems.
GBL_CPU_NICE_TIME
----------------------------------
The time, in seconds, that the CPU was in user mode at a nice priority during
the interval.
On HP-UX, the NICE metrics include positive nice value CPU time only.
Negative nice value CPU is broken out into NNICE (negative nice) metrics.
Positive nice values range from 20 to 39. Negative nice values range from 0
to 19.
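A plausible way to relate these internal ranges to the familiar user-level
nice values is the offset-by-20 mapping shown below; the mapping is an
assumption for illustration and is not stated in this document:

    # Assumed mapping: internal nice = user-level nice + 20, so user-level
    # values 0..19 fall in the "positive" range (20..39) and -20..-1 fall in
    # the "negative" range (0..19).
    def internal_nice(user_nice):
        return user_nice + 20

    print(internal_nice(0), internal_nice(10), internal_nice(-5))   # 20 30 15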
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_NICE_TIME_CUM
----------------------------------
The time, in seconds, that the CPU was in user mode at a nice priority over
the cumulative collection time.
On HP-UX, the NICE metrics include positive nice value CPU time only.
Negative nice value CPU is broken out into NNICE (negative nice) metrics.
Positive nice values range from 20 to 39. Negative nice values range from 0
to 19.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_NICE_UTIL
----------------------------------
The percentage of time that the CPU was in user mode at a nice priority
during the interval.
On HP-UX, the NICE metrics include positive nice value CPU time only.
Negative nice value CPU is broken out into NNICE (negative nice) metrics.
Positive nice values range from 20 to 39. Negative nice values range from 0
to 19.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_NICE_UTIL_CUM
----------------------------------
The percentage of time that the CPU was in user mode at a nice priority over
the cumulative collection time.
On HP-UX, the NICE metrics include positive nice value CPU time only.
Negative nice value CPU is broken out into NNICE (negative nice) metrics.
Positive nice values range from 20 to 39. Negative nice values range from 0
to 19.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_NICE_UTIL_HIGH
----------------------------------
The highest percentage of time during any one interval that the CPU was in
user mode at a nice priority over the cumulative collection time.
On HP-UX, the NICE metrics include positive nice value CPU time only.
Negative nice value CPU is broken out into NNICE (negative nice) metrics.
Positive nice values range from 20 to 39. Negative nice values range from 0
to 19.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_NUM_THREADS
----------------------------------
The number of active CPU threads supported by the CPU architecture.
The Linux kernel currently does not provide any metadata for disabled CPUs.
This means that there is no way to find out types, speeds, hardware IDs, or
any other information used to determine the number of cores, the number of
threads, the HyperThreading state, and so on. If the agent (or Glance) is
started while some of the CPUs are disabled, some of these metrics will be
“na” and some will be based on what is visible at startup time. All
information will be updated if/when additional CPUs are enabled and
information about them becomes available. The configuration counts will
remain at the highest discovered level (that is, if CPUs are later disabled,
the maximum number of CPUs/cores/etc. will remain at the highest observed
level). It is recommended that the agent be started with all CPUs enabled.
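A quick way to check whether this situation applies, assuming the standard
Linux sysfs files (illustrative sketch only, not the agent's data path):

    # Compare the "present" and "online" CPU masks; if they differ, some CPUs
    # are currently disabled and the topology information described above
    # may be incomplete.
    def read_mask(name):
        with open("/sys/devices/system/cpu/" + name) as f:
            return f.read().strip()

    present, online = read_mask("present"), read_mask("online")
    print("present:", present, " online:", online)
    if present != online:
        print("Some CPUs are offline; core/thread counts may be incomplete.")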
On AIX System WPARs, this metric is NA.
GBL_CPU_PHYSC
----------------------------------
The number of physical processors utilized by the logical system.
On an Uncapped logical system (partition), this value will be equal to the
physical processor capacity used by the logical system during the interval.
This can be more than the value entitled for a logical system.
On a standalone system, the value is calculated based on GBL_CPU_TOTAL_UTIL.
GBL_CPU_PHYS_TOTAL_UTIL
----------------------------------
The percentage of time the available physical CPUs were not idle for this
logical system during the interval.
On AIX, this metric is calculated as:
GBL_CPU_PHYS_TOTAL_UTIL = GBL_CPU_PHYS_USER_MODE_UTIL + GBL_CPU_PHYS_SYS_MODE_UTIL
and the components satisfy:
GBL_CPU_PHYS_TOTAL_UTIL + GBL_CPU_PHYS_WAIT_UTIL + GBL_CPU_PHYS_IDLE_UTIL = 100%
On Power5-based systems, traditional sample-based calculations cannot be made
because the dispatch cycle for each of the virtual CPUs is not the same. The
Power5 processor therefore maintains a per-thread register, PURR. At every
processor clock cycle, the PURR of the thread that is dispatching
instructions (or that last dispatched an instruction) is incremented, so the
cycles are distributed between the two threads. The Power5 processor also
maintains two more registers: the timebase, which is incremented at every
tick, and the decrementer, which provides periodic interrupts.
In a Shared LPAR environment, PURR is equal to the time that a virtual
processor has spent on a physical processor. The hypervisor maintains a
virtual timebase, which is the same as the sum of the two PURRs.
On a Capped Shared logical system (partition), the metric
GBL_CPU_PHYS_USER_MODE_UTIL is calculated as:
(delta PURR in user mode / entitlement) * 100
On an Uncapped Shared logical system (partition), it is calculated as:
(delta PURR in user mode / entitlement consumed) * 100
The calculations for the other utilizations, such as
GBL_CPU_PHYS_SYS_MODE_UTIL and GBL_CPU_PHYS_WAIT_UTIL, are similar.
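A worked example of the capped-shared formula above, with purely illustrative
numbers:

    # Capped shared partition, 1-second interval (illustrative numbers only):
    entitlement     = 0.5    # entitled processing units
    delta_purr_user = 0.2    # PURR seconds accumulated in user mode
    phys_user_util  = (delta_purr_user / entitlement) * 100
    print(phys_user_util)    # 40.0 (%)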
On a standalone system, the value will be equivalent to GBL_CPU_TOTAL_UTIL.
On AIX System WPARs, this metric value is calculated against physical cpu
time.
GBL_CPU_SHARES_PRIO
----------------------------------
The weight/priority assigned to an Uncapped logical system. This value
determines the minimum share of unutilized processing units that this logical
system can utilize.
On AIX SPLPAR, this value depends on the available processing units in the
pool and can range from 0 to 255.
On a recognized VMware ESX guest, this value can range from 1 to 100000.
On a standalone system, the value will be “na”.
GBL_CPU_STOLEN_TIME
----------------------------------
The time, in seconds, that was stolen from all the CPUs during the interval.
Stolen (or steal, or involuntary wait) time, on Linux, is the time that the
CPU had runnable threads, but the Xen hypervisor chose to run something else
instead. KVM hosts, as of this release, do not update these counters. Stolen
CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
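On Linux, the cumulative steal counter can be read directly from /proc/stat.
The following is a small sketch under that assumption; it is not necessarily
the agent's data path:

    # The steal counter is the 8th value after "cpu" in /proc/stat and is the
    # same quantity that sar reports as %steal and vmstat as "st".
    with open("/proc/stat") as f:
        fields = f.readline().split()
    steal_ticks = int(fields[8]) if len(fields) > 8 else 0
    print("cumulative stolen time (s):", steal_ticks / 100.0)   # assumes USER_HZ == 100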
GBL_CPU_STOLEN_TIME_CUM
----------------------------------
The time, in seconds, that was stolen from all the CPUs over the cumulative
collection time.
Stolen (or steal, or involuntary wait) time, on Linux, is the time that the
CPU had runnable threads, but the Xen hypervisor chose to run something else
instead. KVM hosts, as of this release, do not update these counters. Stolen
CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
GBL_CPU_STOLEN_UTIL
----------------------------------
The percentage of time that was stolen from all CPUs during the interval.
Stolen (or steal, or involuntary wait) time, on Linux, is the time that the
CPU had runnable threads, but the Xen hypervisor chose to run something else
instead. KVM hosts, as of this release, do not update these counters. Stolen
CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
GBL_CPU_STOLEN_UTIL_CUM
----------------------------------
The percentage of time that was stolen from all CPUs over the cumulative
collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
Stolen (or steal, or involuntary wait) time, on Linux, is the time that the
CPU had runnable threads, but the Xen hypervisor chose to run something else
instead. KVM hosts, as of this release, do not update these counters. Stolen
CPU time is shown as ‘%steal’ in ‘sar’ and ‘st’ in ‘vmstat’.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
GBL_CPU_STOLEN_UTIL_HIGH
----------------------------------
The highest percentage of stolen CPU time during any one interval over the
cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_SYS_MODE_TIME
----------------------------------
The time, in seconds, that the CPU was in system mode during the interval.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
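The split can be observed for a single process with a small experiment
(illustrative only; this is not how the global metric is collected):

    # os.times() reports the CPU time this process has spent in user mode and
    # in system mode (time inside system calls such as write()).
    import os

    data = b"x" * 1000000
    for _ in range(200):
        with open(os.devnull, "wb") as f:   # open()/write() run in system mode
            f.write(data)
    t = os.times()
    print("user mode: %.2fs   system mode: %.2fs" % (t.user, t.system))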
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
On AIX System WPARs, this metric value is calculated against physical cpu
time.
On Hyper-V host, this metric indicates the time spent in Hypervisor code.
GBL_CPU_SYS_MODE_TIME_CUM
----------------------------------
The time, in seconds, that the CPU was in system mode over the cumulative
collection time.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
On AIX System WPARs, this metric value is calculated against physical cpu
time.
GBL_CPU_SYS_MODE_UTIL
----------------------------------
The percentage of time that the CPU was in system mode during the interval.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
This metric is a subset of the GBL_CPU_TOTAL_UTIL percentage.
This is NOT a measure of the amount of time used by system daemon processes,
since most system daemons spend part of their time in user mode and part in
system calls, like any other process.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set(true) in parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
High system mode CPU percentages are normal for I/O-intensive applications.
Abnormally high system mode CPU percentages can indicate that a hardware
problem is causing a high interrupt rate. They can also indicate programs
that are not using system calls efficiently. On a logical system, this metric
indicates the percentage of time the logical processor was in kernel mode
during this interval.
On Hyper-V host, this metric indicates the percentage of time spent in
Hypervisor code.
GBL_CPU_SYS_MODE_UTIL_CUM
----------------------------------
The percentage of time that the CPU was in system mode over the cumulative
collection time.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available. On
platforms other than HPUX, if the ignore_mt flag is set(true) in parm file,
this metric will report values normalized against the number of active cores
in the system.
If the ignore_mt flag is not set(false) in parm file, this metric will report
values normalized against the number of threads in the system. This flag will
be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_SYS_MODE_UTIL_HIGH
----------------------------------
The highest percentage of time during any one interval that the CPU was in
system mode over the cumulative collection time.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_TOTAL_TIME
----------------------------------
The total time, in seconds, that the CPU was not idle in the interval.
This is calculated as
GBL_CPU_TOTAL_TIME =
GBL_CPU_USER_MODE_TIME +
GBL_CPU_SYS_MODE_TIME
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
On AIX System WPARs, this metric value is calculated against physical CPU
time.
GBL_CPU_TOTAL_TIME_CUM
----------------------------------
The total time that the CPU was not idle over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
On AIX System WPARs, this metric value is calculated against physical CPU
time.
GBL_CPU_TOTAL_UTIL
----------------------------------
Percentage of time the CPU was not idle during the interval.
This is calculated as
GBL_CPU_TOTAL_UTIL =
GBL_CPU_USER_MODE_UTIL +
GBL_CPU_SYS_MODE_UTIL
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
GBL_CPU_TOTAL_UTIL +
GBL_CPU_IDLE_UTIL = 100%
This metric varies widely on most systems, depending on the workload. A
consistently high CPU utilization can indicate a CPU bottleneck, especially
when other indicators such as GBL_RUN_QUEUE and GBL_ACTIVE_PROC are also
high. High CPU utilization can also occur on systems that are bottlenecked
on memory, because the CPU spends more time paging and swapping.
NOTE: On Windows, this metric may not equal the sum of the APP_CPU_TOTAL_UTIL
metrics. Microsoft states that “this is expected behavior” because this
GBL_CPU_TOTAL_UTIL metric is taken from the performance library Processor
objects while the APP_CPU_TOTAL_UTIL metrics are taken from the Process
objects. Microsoft states that there can be CPU time accounted for in the
Processor system objects that may not be seen in the Process objects.
On a logical system, this metric indicates the logical utilization with
respect to the number of processors available for the logical system
(GBL_NUM_CPU).
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
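The following minimal sketch (illustrative only; it is not how the GlancePlus
collector is implemented) shows how a comparable normalized utilization could
be derived on Linux from the aggregate cpu line of /proc/stat. Because that
line sums jiffies across all online processors, dividing busy jiffies by
total jiffies yields a value normalized to the total processing capacity, as
described above:

  # Illustrative sketch; field handling is simplified.
  import time

  def cpu_times():
      with open("/proc/stat") as f:
          fields = f.readline().split()[1:]   # aggregate "cpu" line
      return [int(v) for v in fields]

  t1 = cpu_times()
  time.sleep(5)                               # measurement interval
  t2 = cpu_times()
  delta = [b - a for a, b in zip(t1, t2)]
  total = sum(delta)                          # all jiffies in the interval
  busy = delta[0] + delta[1] + delta[2]       # user + nice + system jiffies
  print("total util: %.1f%%" % (100.0 * busy / total))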
GBL_CPU_TOTAL_UTIL_CUM
----------------------------------
The percentage of total CPU time that the processor was not idle over the
cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_TOTAL_UTIL_HIGH
----------------------------------
The highest percentage of total CPU time during any one interval that the
processor was not idle over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_USER_MODE_TIME
----------------------------------
The time, in seconds, that the CPU was in user mode during the interval.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
On AIX System WPARs, this metric value is calculated against physical CPU
time.
On a Hyper-V host, this metric indicates the time spent in guest code.
GBL_CPU_USER_MODE_TIME_CUM
----------------------------------
The time, in seconds, that the CPU was in user mode over the cumulative
collection time.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
On AIX System WPARs, this metric value is calculated against physical CPU
time.
GBL_CPU_USER_MODE_UTIL
----------------------------------
The percentage of time the CPU was in user mode during the interval.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
This metric is a subset of the GBL_CPU_TOTAL_UTIL percentage.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
High user mode CPU percentages are normal for computation-intensive
applications. Low values of user CPU utilization compared to relatively high
values for GBL_CPU_SYS_MODE_UTIL can indicate an application or hardware
problem. On a logical system, this metric indicates the percentage of time
the logical processor was in user mode during this interval.
On a Hyper-V host, this metric indicates the percentage of time spent in
guest code.
GBL_CPU_USER_MODE_UTIL_CUM
----------------------------------
The percentage of time that the CPU was in user mode over the cumulative
collection time.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_USER_MODE_UTIL_HIGH
----------------------------------
The highest percentage of time during any one interval that the CPU was in
user mode over the cumulative collection time.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
GBL_CPU_WAIT_TIME
----------------------------------
The time, in seconds, that the CPU was idle and there were processes waiting
for physical IOs to complete during the interval.
IO wait time is included in idle time on all systems.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On AIX System WPARs, this metric value is calculated against physical CPU
time.
On Solaris non-global zones, this metric is N/A.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
On Linux, wait time includes CPU steal time.
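A minimal sketch of a comparable IO-wait figure on Linux, assuming the usual
/proc/stat field order (user, nice, system, idle, iowait, irq, softirq,
steal); this is illustrative only and not the GlancePlus implementation:

  # Illustrative sketch; normalization by CPU count mirrors the metric text.
  import os, time

  CLK_TCK = os.sysconf("SC_CLK_TCK")        # jiffies per second
  NCPU = os.cpu_count() or 1                # online processors

  def jiffies():
      with open("/proc/stat") as f:
          return [int(x) for x in f.readline().split()[1:]]

  a = jiffies()
  time.sleep(5)                             # measurement interval
  b = jiffies()
  iowait = (b[4] - a[4]) / float(CLK_TCK)   # seconds waiting on IO
  steal = (b[7] - a[7]) / float(CLK_TCK) if len(b) > 7 else 0.0
  print("normalized wait seconds:", (iowait + steal) / NCPU)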
GBL_CPU_WAIT_TIME_CUM
----------------------------------
The total time since the beginning of measurement, in seconds, that the CPU
was idle and there were processes waiting for physical IOs to complete.
IO wait time is included in idle time on all systems.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
On Linux, wait time includes CPU steal time.
GBL_CPU_WAIT_UTIL
----------------------------------
The percentage of time during the interval that the CPU was idle and there
were processes waiting for physical IOs to complete.
IO wait time is included in idle time on all systems.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On Solaris non-global zones, this metric is N/A.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
On Linux, wait time includes CPU steal time.
GBL_CPU_WAIT_UTIL_CUM
----------------------------------
The percentage of time since the beginning of measurement that the CPU was
idle and there were processes waiting for physical IOs to complete.
IO wait time is included in idle time on all systems.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
On Linux, wait time includes CPU steal time.
GBL_CPU_WAIT_UTIL_HIGH
----------------------------------
The highest percentage of CPU wait time during any one interval over the
cumulative collection time.
IO wait time is included in idle time on all systems.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
On Linux, wait time includes CPU steal time.
GBL_CSWITCH_RATE
----------------------------------
The average number of context switches per second during the interval.
On HP-UX, this includes context switches that result in the execution of a
different process and those caused by a process stopping, then resuming, with
no other process running in the meantime.
On Windows, this includes switches from one thread to another either inside a
single process or across processes. A thread switch can be caused either by
one thread asking another for information or by a thread being preempted by
another higher priority thread becoming ready to run.
On Solaris non-global zones with Uncapped CPUs, this metric shows data from
the global zone.
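A minimal sketch of a comparable rate on Linux, using the ctxt counter in
/proc/stat (illustrative only; not the GlancePlus implementation):

  # Illustrative sketch: the "ctxt" line holds context switches since boot.
  import time

  def ctxt():
      with open("/proc/stat") as f:
          for line in f:
              if line.startswith("ctxt"):
                  return int(line.split()[1])

  interval = 5.0
  c1 = ctxt()
  time.sleep(interval)
  c2 = ctxt()
  print("context switches/sec: %.1f" % ((c2 - c1) / interval))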
GBL_CSWITCH_RATE_CUM
----------------------------------
The average number of context switches per second over the cumulative
collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, this includes context switches that result in the execution of a
different process and those caused by a process stopping, then resuming, with
no other process running in the meantime.
GBL_CSWITCH_RATE_HIGH
----------------------------------
The highest number of context switches per second during any interval over
the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, this includes context switches that result in the execution of a
different process and those caused by a process stopping, then resuming, with
no other process running in the meantime.
GBL_DISK_PHYS_BYTE
----------------------------------
The number of KBs transferred to and from disks during the interval. The
bytes for all types of physical IOs are counted. Only local disks are
counted in this measurement. NFS devices are excluded.
It is not directly related to the number of IOs, since IO requests can be of
differing lengths.
On Unix systems, this includes file system IO, virtual memory IO, and raw IO.
On Windows, all types of physical IOs are counted.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
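A minimal sketch of a comparable byte count on Linux from /proc/diskstats,
assuming 512-byte sector units and a simplified whole-disk filter (devices
named sd*); illustrative only, not the GlancePlus implementation:

  # Illustrative sketch; GlancePlus applies its own device selection.
  import time

  def sectors():
      total = 0
      with open("/proc/diskstats") as f:
          for line in f:
              fld = line.split()
              if fld[2].startswith("sd") and not fld[2][-1].isdigit():
                  total += int(fld[5]) + int(fld[9])   # sectors read + written
      return total

  s1 = sectors()
  time.sleep(5)                                        # measurement interval
  s2 = sectors()
  print("KB transferred:", (s2 - s1) * 512 // 1024)    # 512-byte sectors -> KB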
GBL_DISK_PHYS_BYTE_RATE
----------------------------------
The average number of KBs per second at which data was transferred to and
from disks during the interval. The bytes for all types of physical IOs are
counted. Only local disks are counted in this measurement. NFS devices are
excluded.
This is a measure of the physical data transfer rate. It is not directly
related to the number of IOs, since IO requests can be of differing lengths.
This is an indicator of how much data is being transferred to and from disk
devices. Large spikes in this metric can indicate a disk bottleneck.
On Unix systems, all types of physical disk IOs are counted, including file
system, virtual memory, and raw reads.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_IO
----------------------------------
The number of physical IOs during the interval. Only local disks are counted
in this measurement. NFS devices are excluded.
On Unix systems, all types of physical disk IOs are counted, including file
system IO, virtual memory IO and raw IO.
On HP-UX, this is calculated as
GBL_DISK_PHYS_IO =
GBL_DISK_FS_IO +
GBL_DISK_VM_IO +
GBL_DISK_SYSTEM_IO +
GBL_DISK_RAW_IO
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_IO_CUM
----------------------------------
The total number of physical IOs over the cumulative collection time. Only
local disks are counted in this measurement. NFS devices are excluded.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
GBL_DISK_PHYS_IO_RATE
----------------------------------
The number of physical IOs per second during the interval. Only local disks
are counted in this measurement. NFS devices are excluded.
On Unix systems, all types of physical disk IOs are counted, including file
system IO, virtual memory IO and raw IO.
On HP-UX, this is calculated as
GBL_DISK_PHYS_IO_RATE =
GBL_DISK_FS_IO_RATE +
GBL_DISK_VM_IO_RATE +
GBL_DISK_SYSTEM_IO_RATE +
GBL_DISK_RAW_IO_RATE
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
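A minimal sketch of a comparable rate on Linux from the read-completed and
write-completed counters in /proc/diskstats (illustrative only; partition
lines are not filtered out here, so the count is overstated compared to a
per-disk view):

  # Illustrative sketch; not the GlancePlus implementation.
  import time

  def phys_ios():
      with open("/proc/diskstats") as f:
          return sum(int(l.split()[3]) + int(l.split()[7]) for l in f)

  interval = 5.0
  n1 = phys_ios()
  time.sleep(interval)
  n2 = phys_ios()
  print("physical IOs/sec: %.1f" % ((n2 - n1) / interval))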
GBL_DISK_PHYS_IO_RATE_CUM
----------------------------------
The number of physical IOs per second over the cumulative collection time.
Only local disks are counted in this measurement. NFS devices are excluded.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
GBL_DISK_PHYS_READ
----------------------------------
The number of physical reads during the interval. Only local disks are
counted in this measurement. NFS devices are excluded.
On Unix systems, all types of physical disk reads are counted, including file
system, virtual memory, and raw reads.
On HP-UX, there are many reasons why there is not a direct correlation
between the number of logical IOs and physical IOs. For example, small
sequential logical reads may be satisfied from the buffer cache, resulting in
fewer physical IOs than logical IOs. Conversely, large logical IOs or small
random IOs may result in more physical than logical IOs. Logical volume
mappings, logical disk mirroring, and disk striping also tend to remove any
correlation.
On HP-UX, this is calculated as
GBL_DISK_PHYS_READ =
GBL_DISK_FS_READ +
GBL_DISK_VM_READ +
GBL_DISK_SYSTEM_READ +
GBL_DISK_RAW_READ
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_READ_BYTE
----------------------------------
The number of KBs physically transferred from the disk during the interval.
Only local disks are counted in this measurement. NFS devices are excluded.
On Unix systems, all types of physical disk reads are counted, including file
system, virtual memory, and raw reads.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
GBL_DISK_PHYS_READ_BYTE_CUM
----------------------------------
The number of KBs (or MBs if specified) physically transferred from the disk
over the cumulative collection time. Only local disks are counted in this
measurement. NFS devices are excluded.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
GBL_DISK_PHYS_READ_BYTE_RATE
----------------------------------
The average number of KBs transferred from the disk per second during the
interval. Only local disks are counted in this measurement. NFS devices are
excluded.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_READ_CUM
----------------------------------
The total number of physical reads over the cumulative collection time. Only
local disks are counted in this measurement. NFS devices are excluded.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
GBL_DISK_PHYS_READ_PCT
----------------------------------
The percentage of physical reads of total physical IO during the interval.
Only local disks are counted in this measurement. NFS devices are excluded.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
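For illustration with invented values, 300 physical reads and 700 physical
writes in an interval give a read percentage of 30:

  # Invented example values, not measured data.
  reads, writes = 300, 700
  read_pct = 100.0 * reads / (reads + writes)   # -> 30.0 percent of physical IO
  print(read_pct)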
GBL_DISK_PHYS_READ_PCT_CUM
----------------------------------
The percentage of physical reads of total physical IO over the cumulative
collection time. Only local disks are counted in this measurement. NFS
devices are excluded.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, while process collection time starts
from the start time of the process or measurement start time, whichever is
older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
GBL_DISK_PHYS_READ_RATE
----------------------------------
The number of physical reads per second during the interval. Only local
disks are counted in this measurement. NFS devices are excluded.
On Unix systems, all types of physical disk reads are counted, including file
system, virtual memory, and raw reads.
On HP-UX, this is calculated as
GBL_DISK_PHYS_READ_RATE =
GBL_DISK_FS_READ_RATE +
GBL_DISK_VM_READ_RATE +
GBL_DISK_SYSTEM_READ_RATE +
GBL_DISK_RAW_READ_RATE
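For illustration only (not part of GlancePlus), a minimal sketch of deriving a
comparable reads-per-second figure on Linux by sampling /proc/diskstats; the
device name and sampling interval are assumptions:
  # Illustrative sketch: physical reads completed per second for one local disk.
  # In /proc/diskstats, the field after the device name is the number of reads
  # completed since boot; sampling it twice yields a rate over the interval.
  import time

  def reads_completed(device):
      with open("/proc/diskstats") as f:
          for line in f:
              fields = line.split()
              if fields[2] == device:
                  return int(fields[3])
      raise ValueError("device not found: " + device)

  interval = 5.0                     # seconds (assumed sampling interval)
  first = reads_completed("sda")     # "sda" is an assumed device name
  time.sleep(interval)
  second = reads_completed("sda")
  print("physical reads/sec:", (second - first) / interval)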
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boot time, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_READ_RATE_CUM
----------------------------------
The average number of physical reads per second over the cumulative
collection time. Only local disks are counted in this measurement. NFS
devices are excluded.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boot time, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
GBL_DISK_PHYS_WRITE
----------------------------------
The number of physical writes during the interval. Only local disks are
counted in this measurement. NFS devices are excluded.
On Unix systems, all types of physical disk writes are counted, including
file system IO, virtual memory IO, and raw writes.
On HP-UX, since this value is reported by the drivers, multiple physical
requests that have been collapsed to a single physical operation (due to
driver IO merging) are only counted once.
On HP-UX, there are many reasons why there is not a direct correlation
between logical IOs and physical IOs. For example, small logical writes may
end up entirely in the buffer cache, and later generate fewer physical IOs
when written to disk due to the larger IO size. Or conversely, small logical
writes may require physical prefetching of the corresponding disk blocks
before the data is merged and posted to disk. Logical volume mappings,
logical disk mirroring, and disk striping also tend to remove any
correlation.
On HP-UX, this is calculated as
GBL_DISK_PHYS_WRITE =
GBL_DISK_FS_WRITE +
GBL_DISK_VM_WRITE +
GBL_DISK_SYSTEM_WRITE +
GBL_DISK_RAW_WRITE
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boot time, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_WRITE_BYTE
----------------------------------
The number of KBs (or MBs if specified) physically transferred to the disk
during the interval. Only local disks are counted in this measurement. NFS
devices are excluded.
On Unix systems, all types of physical disk writes are counted, including
file system IO, virtual memory IO, and raw writes.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boot time, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
GBL_DISK_PHYS_WRITE_BYTE_CUM
----------------------------------
The number of KBs (or MBs if specified) physically transferred to the disk
over the cumulative collection time. Only local disks are counted in this
measurement. NFS devices are excluded.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boot time, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
GBL_DISK_PHYS_WRITE_BYTE_RATE
----------------------------------
The average number of KBs transferred to the disk per second during the
interval. Only local disks are counted in this measurement. NFS devices are
excluded.
On Unix systems, all types of physical disk writes are counted, including
file system IO, virtual memory IO, and raw writes.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boot time, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_WRITE_CUM
----------------------------------
The total number of physical writes over the cumulative collection time.
Only local disks are counted in this measurement. NFS devices are excluded.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, since this value is reported by the drivers, multiple physical
requests that have been collapsed to a single physical operation (due to
driver IO merging) are only counted once.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boot time, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
GBL_DISK_PHYS_WRITE_PCT
----------------------------------
The percentage of physical writes of total physical IO during the interval.
Only local disks are counted in this measurement. NFS devices are excluded.
On HP-UX, since this value is reported by the drivers, multiple physical
requests that have been collapsed to a single physical operation (due to
driver IO merging) are only counted once.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boot time, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
GBL_DISK_PHYS_WRITE_PCT_CUM
----------------------------------
The percentage of physical writes of total physical IO over the cumulative
collection time. Only local disks are counted in this measurement. NFS
devices are excluded.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, since this value is reported by the drivers, multiple physical
requests that have been collapsed to a single physical operation (due to
driver IO merging) are only counted once.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boot time, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
GBL_DISK_PHYS_WRITE_RATE
----------------------------------
The number of physical writes per second during the interval. Only local
disks are counted in this measurement. NFS devices are excluded.
On Unix systems, all types of physical disk writes are counted, including
file system IO, virtual memory IO, and raw writes.
On HP-UX, since this value is reported by the drivers, multiple physical
requests that have been collapsed to a single physical operation (due to
driver IO merging) are only counted once.
On HP-UX, this is calculated as
GBL_DISK_PHYS_WRITE_RATE =
GBL_DISK_FS_WRITE_RATE +
GBL_DISK_VM_WRITE_RATE +
GBL_DISK_SYSTEM_WRITE_RATE +
GBL_DISK_RAW_WRITE_RATE
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boot time, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_WRITE_RATE_CUM
----------------------------------
The average number of physical writes per second over the cumulative collection time.
Only local disks are counted in this measurement. NFS devices are excluded.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, since this value is reported by the drivers, multiple physical
requests that have been collapsed to a single physical operation (due to
driver IO merging) are only counted once.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boot time, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
GBL_DISK_REQUEST_QUEUE
----------------------------------
The total length of all of the disk queues at the end of the interval.
Some Linux kernels, typically 2.2 and older kernels, do not support the
instrumentation needed to provide values for this metric. This metric will
be “na” on the affected kernels. The “sar -d” command will also not be
present on these systems. Distributions and OS releases that are known to be
affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive
at boot time, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_SUBSYSTEM_QUEUE
----------------------------------
The average number of processes or kernel threads blocked on the disk
subsystem (in a “queue” waiting for their file system disk IO to complete)
during the interval.
On HP-UX, this is based on the sum of processes or kernel threads in the
DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing
raw IO to a disk are not included in this measurement.
On Linux, this is based on the sum of all processes or kernel threads blocked
on disk.
On Linux, if thread collection is disabled, only the first thread of each
multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels not including CFS, the Completely Fair
Scheduler.
This is calculated as the accumulated time mentioned above divided by the
interval time.
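As a rough illustration of that calculation (hypothetical numbers, not
GlancePlus code):
  # Illustrative sketch: average number of processes or kernel threads blocked
  # on the disk subsystem, computed as accumulated blocked time divided by the
  # length of the interval.
  def disk_subsystem_queue(blocked_seconds, interval_seconds):
      return blocked_seconds / interval_seconds

  # Hypothetical values: 30 process-seconds blocked during a 10-second interval.
  print(disk_subsystem_queue(30.0, 10.0))   # -> 3.0 blocked on average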
As this number rises, it is an indication of a disk bottleneck.
The Global QUEUE metrics, which are based on block states, represent the
average number of process or kernel thread counts, not actual queues.
The Global WAIT PCT metrics, which are also based on block states, represent
the percentage of all processes or kernel threads that were alive on the
system.
No direct comparison is reasonable with the Application WAIT PCT metrics
since they represent percentages within the context of a specific application
and cannot be summed or compared with global values easily. In addition, the
sum of each Application WAIT PCT for all applications will not equal 100%
since these values will vary greatly depending on the number of processes or
kernel threads in each application.
For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the
APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many
processes on the system, but there are only a very small number of processes
in the specific application that is being examined and there is a high
percentage of those few processes that are blocked on the disk I/O subsystem.
GBL_DISK_SUBSYSTEM_WAIT_PCT
----------------------------------
The percentage of time processes or kernel threads were blocked on the disk
subsystem (waiting for their file system IOs to complete) during the
interval.
On HP-UX, this is based on the sum of processes or kernel threads in the
DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing
raw IO to a disk are not included in this measurement.
On Linux, this is based on the sum of all processes or kernel threads blocked
on disk.
On Linux, if thread collection is disabled, only the first thread of each
multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels not including CFS, the Completely Fair
Scheduler.
This is calculated as the accumulated time mentioned above divided by the
accumulated time that all processes or kernel threads were alive during the
interval.
The Global QUEUE metrics, which are based on block states, represent the
average number of process or kernel thread counts, not actual queues.
The Global WAIT PCT metrics, which are also based on block states, represent
the percentage of all processes or kernel threads that were alive on the
system.
No direct comparison is reasonable with the Application WAIT PCT metrics
since they represent percentages within the context of a specific application
and cannot be summed or compared with global values easily. In addition, the
sum of each Application WAIT PCT for all applications will not equal 100%
since these values will vary greatly depending on the number of processes or
kernel threads in each application.
For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the
APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many
processes on the system, but there are only a very small number of processes
in the specific application that is being examined and there is a high
percentage of those few processes that are blocked on the disk I/O subsystem.
GBL_DISK_SUBSYSTEM_WAIT_TIME
----------------------------------
On HP-UX, the accumulated time, in seconds, that all processes or kernel
threads were blocked on the disk subsystem (waiting for their file system IOs
to complete) during the interval. This is the sum of processes or kernel
threads in the DISK, INODE, CACHE and CDFS wait states.
On Linux, the accumulated time, in seconds, that all processes or kernel
threads were blocked on disk.
On Linux, if thread collection is disabled, only the first thread of each
multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels not including CFS, the Completely Fair
Scheduler.
GBL_DISK_TIME_PEAK
----------------------------------
The time, in seconds, during the interval that the busiest disk was
performing IO transfers. This is for the busiest disk only, not all disk
devices. This counter is based on an end-to-end measurement for each IO
transfer updated at queue entry and exit points.
Only local disks are counted in this measurement. NFS devices are excluded.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_UTIL
----------------------------------
On HP-UX, this is the average percentage of time during the interval that all
disks had IO in progress from the point of view of the Operating System.
This is the average utilization for all disks.
On all other Unix systems, this is the average percentage of the interval
during which disks were in use (that is, the average utilization).
Only local disks are counted in this measurement. NFS devices are excluded.
GBL_DISK_UTIL_PEAK
----------------------------------
The utilization of the busiest disk during the interval.
On HP-UX, this is the percentage of time during the interval that the busiest
disk device had IO in progress from the point of view of the Operating
System.
On all other systems, this is the percentage of time during the interval that
the busiest disk was performing IO transfers.
It is not an average utilization over all the disk devices. Only local disks
are counted in this measurement. NFS devices are excluded.
Some Linux kernels, typically 2.2 and older kernels, do not support the
instrumentation needed to provide values for this metric. This metric will
be “na” on the affected kernels. The “sar -d” command will also not be
present on these systems. Distributions and OS releases that are known to be
affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0.
A peak disk utilization of more than 50 percent often indicates a disk IO
subsystem bottleneck situation. A bottleneck may not be in the physical disk
drive itself, but elsewhere in the IO path.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_UTIL_PEAK_CUM
----------------------------------
The average utilization of the busiest disk in each interval over the
cumulative collection time. Utilization is the percentage of time in use
versus the time in the measurement interval. For each interval a different
disk may be the busiest. Only local disks are counted in this measurement.
NFS devices are excluded.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
GBL_DISK_UTIL_PEAK_HIGH
----------------------------------
The highest utilization of any disk during any interval over the cumulative
collection time. Utilization is the percentage of time in use versus the
time in the measurement interval. Only local disks are counted in this
measurement. NFS devices are excluded.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
GBL_DISTRIBUTION
----------------------------------
The software distribution, if available.
GBL_FS_SPACE_UTIL_PEAK
----------------------------------
The percentage of occupied disk space to total disk space for the fullest
file system found during the interval. Only locally mounted file systems are
counted in this metric.
This metric can be used as an indicator that at least one file system on the
system is running out of disk space.
On Unix systems, CDROM and PC file systems are also excluded. This metric
can exceed 100 percent because a portion of the file system space is reserved
as a buffer and can only be used by root. If the root user has filled the
file system beyond the reserved buffer, the utilization will be greater than
100 percent. This is a dangerous situation: if the root user totally fills
the file system, the system may crash.
On Windows, CDROM file systems are also excluded.
On Solaris non-global zones, this metric shows data from the global zone.
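For illustration only, a sketch (not the GlancePlus implementation; the mount
point is an assumption) of computing a file system's space utilization so that
root's use of the reserved buffer can push it past 100 percent:
  # Illustrative sketch: space utilization of one file system, with the
  # root-reserved blocks excluded from the denominator (similar to df).
  import os

  def fs_space_util(mount_point):
      st = os.statvfs(mount_point)
      used = st.f_blocks - st.f_bfree               # blocks in use
      reserved = st.f_bfree - st.f_bavail           # root-only reserve
      non_reserved_total = st.f_blocks - reserved   # space visible to users
      return 100.0 * used / non_reserved_total      # can exceed 100 percent

  print("utilization %:", round(fs_space_util("/"), 1))   # "/" is an assumed mount point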
GBL_GMTOFFSET
----------------------------------
The difference, in minutes, between local time and GMT (Greenwich Mean Time).
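For illustration, a minimal sketch (using Python's standard library view of
the local time zone, not GlancePlus code) of computing the same kind of offset
in minutes:
  # Illustrative sketch: difference, in minutes, between local time and UTC/GMT.
  from datetime import datetime, timezone

  offset = datetime.now(timezone.utc).astimezone().utcoffset()
  print(int(offset.total_seconds() / 60))   # for example, -300 for US Eastern Standard Time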
GBL_IGNORE_MT
----------------------------------
This boolean value indicates whether the CPU normalization is on or off. If
the metric value is “true”, CPU related metrics in the global class will
report values which are normalized against the number of active cores on the
system.
If the metric value is “false”, CPU related metrics in the global class will
report values which are normalized against the number of CPU threads on the
system.
If CPU MultiThreading is turned off, this configuration option is a no-op and
the metric value will be “true”.
On Linux, this metric will only report “true” if this configuration is on and
if the kernel provides enough information to determine whether MultiThreading
is turned on.
On HPUX, this metric will report “na” if the processor doesn’t support the
feature.
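A small illustration (hypothetical numbers, not GlancePlus code) of how the
two normalizations differ on a system with MultiThreading enabled:
  # Illustrative sketch: normalizing busy CPU time against cores vs. CPU threads.
  busy_seconds = 40.0   # hypothetical busy time accumulated during the interval
  interval = 10.0       # seconds
  cores = 4             # active cores (normalization when GBL_IGNORE_MT is "true")
  threads = 8           # CPU threads (normalization when GBL_IGNORE_MT is "false")

  print("utilization vs cores  :", 100.0 * busy_seconds / (cores * interval))    # 100.0
  print("utilization vs threads:", 100.0 * busy_seconds / (threads * interval))  #  50.0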
GBL_INTERRUPT
----------------------------------
The number of IO interrupts during the interval.
On Solaris non-global zones with Uncapped CPUs, this metric shows data from
the global zone.
GBL_INTERRUPT_RATE
----------------------------------
The average number of IO interrupts per second during the interval.
On HPUX and SUN this value includes clock interrupts. To get non-clock
device interrupts, subtract clock interrupts from the value.
On Solaris non-global zones with Uncapped CPUs, this metric shows data from
the global zone.
GBL_INTERRUPT_RATE_CUM
----------------------------------
The average number of IO interrupts per second over the cumulative collection
time.
On HPUX and SUN this value includes clock interrupts. To get non-clock
device interrupts, subtract clock interrupts from the value.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
GBL_INTERRUPT_RATE_HIGH
----------------------------------
The highest number of IO interrupts per second during any one interval over
the cumulative collection time.
On HPUX and SUN this value includes clock interrupts. To get non-clock
device interrupts, subtract clock interrupts from the value.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
GBL_INTERVAL
----------------------------------
The amount of time in the interval.
The measured interval may be slightly larger than the desired or configured
interval if the collection program is delayed by a higher-priority process
and cannot sample the data immediately.
GBL_INTERVAL_CUM
----------------------------------
The total amount of time elapsed over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
GBL_JAVAARG
----------------------------------
This boolean value indicates whether the java class overloading mechanism is
enabled. This metric is set when the javaarg flag in the parm file is set.
The metric affected by this setting is PROC_PROC_ARGV1. This setting is
useful for constructing parm file java application definitions using the
argv1= keyword.
GBL_LOADAVG
----------------------------------
The 1 minute load average of the system obtained at the time of logging.
On Windows, this is the load average of the system over the interval. The
load average on Windows is the average number of threads that were waiting in
the ready state during the interval. It is obtained by sampling the number of
threads in the ready state at each sub proc interval, accumulating the samples
over the interval, and averaging them.
On Solaris non-global zones, this metric shows data from the global zone.
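On Unix-like systems, the same underlying 1-, 5-, and 15-minute values can be
read with standard interfaces; a minimal sketch (not GlancePlus code):
  # Illustrative sketch: reading the system load averages.
  import os

  one_min, five_min, fifteen_min = os.getloadavg()
  print("1-minute load average:", one_min)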
GBL_LOADAVG15
----------------------------------
The 15 minute load average of the system obtained at the time of logging.
GBL_LOADAVG5
----------------------------------
The 5 minute load average of the system obtained at the time of logging.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_LOADAVG_CUM
----------------------------------
The average load average of the system over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
GBL_LOADAVG_HIGH
----------------------------------
The highest value of the load average during any interval over the cumulative
collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
GBL_LOST_MI_TRACE_BUFFERS
----------------------------------
The number of trace buffers lost by the measurement processing daemon.
On HP-UX systems, if this value is > 0, the measurement subsystem is not
keeping up with the system events that generate traces.
For other Unix systems, if this value is > 0, the measurement subsystem is
not keeping up with the ARM API calls that generate traces.
Note: The value reported for this metric will roll over to 0 once it crosses
INTMAX.
GBL_LS_MODE
----------------------------------
Indicates whether the CPU entitlement for the logical system is Capped or
Uncapped.
On a recognized VMware ESX guest, where VMware guest SDK is enabled, the
value is “Uncapped” if maximum CPU entitlement (GBL_CPU_ENTL_MAX) is
unlimited.
Otherwise, the value is always “Capped”.
GBL_LS_ROLE
----------------------------------
Indicates whether the Perf Agent is installed on a logical system, a host, or
a standalone system. This metric will be either “GUEST”, “HOST”, or “STAND”.
GBL_LS_SHARED
----------------------------------
In a virtual environment, this metric indicates whether the physical CPUs are
dedicated to this Logical system or shared.
On AIX SPLPAR, this metric is equivalent to the “Type” field of the
‘lparstat -i’ command.
On a recognized VMware ESX guest, where VMware guest SDK is enabled, the
value is “Shared”.
On a standalone system, the value of this metric is “Dedicated”.
On AIX System WPARs, this metric is NA.
GBL_LS_TYPE
----------------------------------
The virtualization technology, if applicable. The value of this metric is
“HPVM” on an HP-UX host, “LPAR” on AIX LPAR, “Sys WPAR” on a system WPAR,
“Zone” on Solaris Zones, “VMware” on a recognized VMware ESX guest and the
VMware ESX Server console, “Hyper-V” on a Hyper-V host, and “NoVM” otherwise.
In conjunction with GBL_LS_ROLE, this metric can be used to identify the
environment in which Perf Agent/Glance is running. For example, if
GBL_LS_ROLE is “GUEST” and GBL_LS_TYPE is “VMware”, then PA/Glance is running
on a VMware guest.
GBL_MACHINE
----------------------------------
An ASCII string representing the processor architecture. The machine hardware
model is reported by the GBL_MACHINE_MODEL metric.
GBL_MACHINE_MEM_USED
----------------------------------
The amount of physical host memory currently consumed for this logical
system’s physical memory. On a standalone system, the value will be
(GBL_MEM_UTIL * GBL_MEM_PHYS) / 100
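A worked example of the standalone-system formula, with hypothetical values:
  # Illustrative sketch: GBL_MACHINE_MEM_USED on a standalone system.
  gbl_mem_util = 62.5   # hypothetical memory utilization, in percent
  gbl_mem_phys = 8192   # hypothetical physical memory, in MB

  print((gbl_mem_util * gbl_mem_phys) / 100)   # -> 5120.0 MB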
GBL_MACHINE_MODEL
----------------------------------
The CPU model. This is similar to the information returned by the
GBL_MACHINE metric and the uname command (except for Solaris 10 x86/x86_64).
However, this metric returns more information on some processors.
On HP-UX, this is the same information returned by the model command.
GBL_MEM_AVAIL
----------------------------------
The amount of available physical memory in the system (in MBs unless
otherwise specified).
On Windows, memory-resident operating system code and data are not included
as available memory.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as seen in the global zone.
GBL_MEM_CACHE
----------------------------------
The amount of physical memory (in MBs unless otherwise specified) used by the
buffer cache during the interval.
On HP-UX 11i v2 and below, the buffer cache is a memory pool used by the
system to stage disk IO data for the driver.
On HP-UX 11i v3 and above this metric value represents the usage of the file
system buffer cache which is still being used for file system metadata.
On SUN, this value is obtained by multiplying the system page size times the
number of buffer headers (nbuf). For example, on a SPARCstation 10 the
buffer size is usually (200 (page size buffers) * 4096 (bytes/page) = 800
KB).
On SUN, the buffer cache is a memory pool used by the system to cache inode,
indirect block and cylinder group related disk accesses. This is different
from the traditional concept of a buffer cache that also holds file system
data. On Solaris 5.X, as file data is cached, accesses to it show up as
virtual memory IOs. File data caching occurs through memory mapping managed
by the virtual memory system, not through the buffer cache. The “nbuf” value
is dynamic, but it is very hard to create a situation where the memory cache
metrics change, since most systems have more than adequate space for inode,
indirect block, and cylinder group data caching. This cache is more heavily
utilized on NFS file servers.
On AIX, this value should be minimal since most disk IOs are done through
memory mapped files.
GBL_MEM_CACHE_UTIL
----------------------------------
The percentage of physical memory used by the buffer cache during the
interval.
On HP-UX 11i v2 and below, the buffer cache is a memory pool used by the
system to stage disk IO data for the driver.
On HP-UX 11i v3 and above this metric value represents the usage of the file
system buffer cache which is still being used for file system metadata.
On SUN, this percentage is based on calculating the buffer cache size by
multiplying the system page size times the number of buffer headers (nbuf).
For example, on a SPARCstation 10 the buffer size is usually (200 (page size
buffers) * 4096 (bytes/page) = 800 KB).
On SUN, the buffer cache is a memory pool used by the system to cache inode,
indirect block and cylinder group related disk accesses. This is different
from the traditional concept of a buffer cache that also holds file system
data. On Solaris 5.X, as file data is cached, accesses to it show up as
virtual memory IOs. File data caching occurs through memory mapping managed
by the virtual memory system, not through the buffer cache. The “nbuf” value
is dynamic, but it is very hard to create a situation where the memory cache
metrics change, since most systems have more than adequate space for inode,
indirect block, and cylinder group data caching. This cache is more heavily
utilized on NFS file servers.
On AIX, this value should be minimal since most disk IOs are done through
memory mapped files. On Windows the value reports ‘copy read hit %’ and ‘Pin
read hit %’.
GBL_MEM_ENTL_MAX
----------------------------------
In a virtual environment, this metric indicates the maximum amount of memory
configured for this logical system. The value is -3 if entitlement is
‘Unlimited’ for this logical system.
On a recognized VMware ESX guest, where VMware guest SDK is disabled, the
value is “na”.
On Solaris non-global zones, this metric value is equivalent to the
‘capped-memory’ value from the ‘zonecfg -z zonename info’ command.
On a standalone system, this metric is equivalent to GBL_MEM_PHYS.
GBL_MEM_ENTL_MIN
----------------------------------
In a virtual environment, this metric indicates the minimum amount of memory
configured for this logical system.
On a recognized VMware ESX guest, where VMware guest SDK is disabled, the
value is “na”.
On a standalone system, this metric is equivalent to GBL_MEM_PHYS.
GBL_MEM_FILE_PAGEIN_RATE
----------------------------------
The number of page ins from the file system per second during the interval.
On Solaris, this is the same as the “fpi” value from the “vmstat -p” command,
divided by page size in KB.
On Linux, the value is reported in kilobytes and matches the ‘io/bi’ values
from vmstat.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as seen in the global zone.
GBL_MEM_FILE_PAGEOUT_RATE
----------------------------------
The number of page outs to the file system per second during the interval.
On Solaris, this is the same as the “fpo” value from the “vmstat -p” command,
divided by page size in KB.
On Linux, the value is reported in kilobytes and matches the ‘io/bo’ values
from vmstat.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as seen in the global zone.
GBL_MEM_FILE_PAGE_CACHE
----------------------------------
The amount of physical memory (in MBs unless otherwise specified) used by the
file cache during the interval. File cache is a memory pool used by the
system to stage disk IO data for the driver.
This metric is supported on HP-UX 11iv3 and above. The filecache_min and
filecache_max tunables control the filecache memory usage on the system. The
filecache_min tunable specifies the amount of physical memory that is
guaranteed to be available for filecache on the system. The filecache memory
usage can grow beyond filecache_min, up to the limit set by the filecache_max
tunable. The Virtual Memory (VM) subsystem always pre-reserves
‘filecache_min’ worth of pages on the system for the filecache, even when the
filecache is underutilized (actual filecache utilization < filecache_min).
This memory reserved by the VM is not available to users. In this scenario,
this metric reports ‘filecache_min’ as the filecache value rather than the
actual filecache utilization.
On Linux, this metric is equal to the ‘cached’ value of the ‘free -m’ command
output.
GBL_MEM_FILE_PAGE_CACHE_UTIL
----------------------------------
The percentage of physical memory used by the file cache during the interval.
File cache is a memory pool used by the system to stage disk IO data for the
driver.
This metric is supported on HP-UX 11iv3 and above. The filecache_min and
filecache_max tunables control the filecache memory usage on the system. The
filecache_min tunable specifies the amount of physical memory that is
guaranteed to be available for filecache on the system. The filecache memory
usage can grow beyond filecache_min, up to the limit set by the filecache_max
tunable. The Virtual Memory (VM) subsystem always pre-reserves
‘filecache_min’ worth of pages on the system for the filecache, even when the
filecache is underutilized (actual filecache utilization < filecache_min).
This memory reserved by the VM is not available to users. In this scenario,
this metric reports ‘filecache_min’ as the filecache value rather than the
actual filecache utilization.
On Linux, this metric is derived from the ‘cached’ value of the ‘free -m’
command output.
GBL_MEM_FREE
----------------------------------
The amount of memory not allocated (in MBs unless otherwise specified). As
this value drops, the likelihood increases that swapping or paging out to
disk may occur to satisfy new memory requests.
On SUN, low values for this metric may not indicate a true memory shortage.
This metric can be influenced by the VMM (Virtual Memory Management) system.
On uncapped Solaris zones, the metric indicates the amount of memory
available across the whole system that is not consumed by the global zone and
other non-global zones. On capped Solaris zones, the metric indicates how
much of the configured memory cap is not yet consumed by this zone.
On Linux, this metric is the sum of ‘free’ and ‘cached’ memory.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as seen in the global zone.
Locality Domain metrics are available on HP-UX 11iv2 and above.
GBL_MEM_FREE and LDOM_MEM_FREE, as well as the memory utilization metrics
derived from them, may not always fully match. GBL_MEM_FREE represents free
memory in the kernel’s reservation layer while LDOM_MEM_FREE shows actual
free pages. If memory has been reserved but not actually consumed from the
Locality Domains, the two values won’t match. Because GBL_MEM_FREE includes
pre-reserved memory, the GBL_MEM_* metrics are a better indicator of actual
memory consumption in most situations.
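As noted above, on Linux this metric is the sum of ‘free’ and ‘cached’
memory; a minimal sketch (assuming a Linux /proc/meminfo, not GlancePlus
code) of reading those two fields:
  # Illustrative sketch: "free" plus "cached" memory, in MB, on Linux.
  def meminfo_kb(key):
      with open("/proc/meminfo") as f:
          for line in f:
              if line.startswith(key + ":"):
                  return int(line.split()[1])   # values are reported in kB
      raise KeyError(key)

  free_mb = (meminfo_kb("MemFree") + meminfo_kb("Cached")) / 1024.0
  print("free + cached (MB):", free_mb)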
GBL_MEM_FREE_UTIL
----------------------------------
The percentage of physical memory that was free at the end of the interval.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as seen in the global zone.
GBL_MEM_OVERHEAD
----------------------------------
The amount of “overhead” memory associated with this logical system that is
currently consumed on the host system. On the VMware ESX Server console, the
value is equivalent to the sum of the current overhead memory for all running
virtual machines. On a standalone system, the value will be 0. On a
recognized VMware ESX guest, where VMware guest SDK is disabled, the value is
“na”.
GBL_MEM_PAGEIN
----------------------------------
The total number of page ins from the disk during the interval.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
On HP-UX, this is the same as the “page ins” value from the “vmstat -s”
command. On AIX, this is the same as the “paging space page ins” value.
Remember that “vmstat -s” reports cumulative counts.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as seen in the global zone.
GBL_MEM_PAGEIN_BYTE
----------------------------------
The number of KBs (or MBs if specified) of page ins during the interval.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
GBL_MEM_PAGEIN_BYTE_CUM
----------------------------------
The number of KBs (or MBs if specified) of page ins over the cumulative
collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
GBL_MEM_PAGEIN_BYTE_RATE
----------------------------------
The number of KBs per second of page ins during the interval.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
GBL_MEM_PAGEIN_BYTE_RATE_CUM
----------------------------------
The average number of KBs per second of page ins over the cumulative
collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
GBL_MEM_PAGEIN_BYTE_RATE_HIGH
----------------------------------
The highest number of KBs per second of page ins during any interval over the
cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
GBL_MEM_PAGEIN_CUM
----------------------------------
The total number of page ins from the disk over the cumulative collection
time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
GBL_MEM_PAGEIN_RATE
----------------------------------
The total number of page ins per second from the disk during the interval.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
On HP-UX and AIX, this is the same as the “pi” value from the vmstat command.
On Solaris, this is the same as the sum of the “epi” and “api” values from
the “vmstat -p” command, divided by the page size in KB.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as that seen in the global zone.
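On Linux, a rough way to observe this kind of paging-space activity outside of
Glance is to sample the pswpin and pswpout counters in /proc/vmstat (which
count pages swapped in and out of paging space) and divide the delta by the
sample interval. This is only a hedged approximation of the metric described
above; the exact source Glance reads is not stated in this document.

    # Minimal sketch (not Glance's implementation): approximate paging-space
    # page-in and page-out rates on Linux from /proc/vmstat counter deltas.
    import time

    def read_swap_counters():
        counters = {}
        with open("/proc/vmstat") as f:
            for line in f:
                name, value = line.split()
                if name in ("pswpin", "pswpout"):
                    counters[name] = int(value)
        return counters

    interval = 5.0                       # seconds between samples
    before = read_swap_counters()
    time.sleep(interval)
    after = read_swap_counters()

    for name in ("pswpin", "pswpout"):
        rate = (after[name] - before[name]) / interval
        print("%s: %.1f pages/sec" % (name, rate))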
GBL_MEM_PAGEIN_RATE_CUM
----------------------------------
The average number of page ins per second over the cumulative collection
time. This includes pages paged in from paging space and, except for AIX,
from the file system.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
GBL_MEM_PAGEIN_RATE_HIGH
----------------------------------
The highest number of page ins per second from disk during any interval over
the cumulative collection time.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
GBL_MEM_PAGEOUT
----------------------------------
The total number of page outs to the disk during the interval.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
On HP-UX, this is the same as the “page outs” value from the “vmstat -s”
command. On HP-UX 11iv3 and above, this also includes file cache page outs. On
AIX, this is the same as the “paging space page outs” value. Remember that
“vmstat -s” reports cumulative counts.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as that seen in the global zone.
GBL_MEM_PAGEOUT_BYTE
----------------------------------
The number of KBs (or MBs if specified) of page outs during the interval.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as that seen in the global zone.
GBL_MEM_PAGEOUT_BYTE_CUM
----------------------------------
The number of KBs (or MBs if specified) of page outs over the cumulative
collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
GBL_MEM_PAGEOUT_BYTE_RATE
----------------------------------
The number of KBs (or MBs if specified) per second of page outs during the
interval.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as that seen in the global zone.
GBL_MEM_PAGEOUT_BYTE_RATE_CUM
----------------------------------
The average number of KBs per second of page outs over the cumulative
collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
GBL_MEM_PAGEOUT_BYTE_RATE_HIGH
----------------------------------
The highest number of KBs per second of page outs during any interval over
the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
GBL_MEM_PAGEOUT_CUM
----------------------------------
The total number of page outs to the disk over the cumulative collection
time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
GBL_MEM_PAGEOUT_RATE
----------------------------------
The total number of page outs to the disk per second during the interval.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
On HP-UX and AIX, this is the same as the “po” value from the vmstat command.
On Solaris, this is the same as the sum of the “epo” and “apo” values from
the “vmstat -p” command, divided by the page size in KB.
On Windows, this counter also includes paging traffic on behalf of the system
cache to access file data for applications and so may be high when there is
no memory pressure.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as that seen in the global zone.
GBL_MEM_PAGEOUT_RATE_CUM
----------------------------------
The average number of page outs to the disk per second over the cumulative
collection time. This includes pages paged out to paging space and, except
for AIX, to the file system.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
GBL_MEM_PAGEOUT_RATE_HIGH
----------------------------------
The highest number of page outs per second to disk during any interval over
the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and
file systems.
On Windows, this includes paging activity for both file systems and paging
space.
GBL_MEM_PAGE_FAULT
----------------------------------
The number of page faults that occurred during the interval.
On Linux this metric is available only on 2.6 and above kernel versions.
GBL_MEM_PAGE_FAULT_CUM
----------------------------------
The number of page faults that occurred over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
GBL_MEM_PAGE_FAULT_RATE
----------------------------------
The number of page faults per second during the interval.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as that seen in the global zone.
GBL_MEM_PAGE_FAULT_RATE_CUM
----------------------------------
The average number of page faults per second over the cumulative collection
time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
GBL_MEM_PAGE_FAULT_RATE_HIGH
----------------------------------
The highest number of page faults per second during any interval over the
cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
GBL_MEM_PAGE_REQUEST
----------------------------------
The number of page requests to or from the disk during the interval.
On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging
space and not to or from the file system.
On Windows, this includes pages paged to or from both paging space and the
file system.
On HP-UX, this is the same as the sum of the “page ins” and “page outs”
values from the “vmstat -s” command. On AIX, this is the same as the sum of
the “paging space page ins” and “paging space page outs” values. Remember
that “vmstat -s” reports cumulative counts.
On Windows, this counter also includes paging traffic on behalf of the system
cache to access file data for applications and so may be high when there is
no memory pressure.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as that seen in the global zone.
GBL_MEM_PAGE_REQUEST_CUM
----------------------------------
The total number of page requests to or from the disk over the cumulative
collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging
space and not to or from the file system.
On Windows, this includes pages paged to or from both paging space and the
file system.
On Windows, this counter also includes paging traffic on behalf of the system
cache to access file data for applications and so may be high when there is
no memory pressure.
GBL_MEM_PAGE_REQUEST_RATE
----------------------------------
The number of page requests to or from the disk per second during the
interval.
On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging
space and not to or from the file system.
On Windows, this includes pages paged to or from both paging space and the
file system.
On HP-UX and AIX, this is the same as the sum of the “pi” and “po” values
from the vmstat command.
On Solaris, this is the same as the sum of the “epi”, “epo”, “api”, and “apo”
values from the “vmstat -p” command, divided by the page size in KB.
Higher than normal rates can indicate either a memory or a disk bottleneck.
Compare GBL_DISK_UTIL_PEAK and GBL_MEM_UTIL to determine which resource is
more constrained. High rates may also indicate memory thrashing caused by a
particular application or set of applications. Look for processes with high
major fault rates to identify the culprits.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as that seen in the global zone.
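The comparison suggested above can be expressed as a simple rule of thumb. The
sketch below applies it to sampled metric values; the sample values and the
rate threshold are illustrative assumptions, not limits defined by this
document or by Glance.

    # Hedged sketch of the diagnostic rule described above. The threshold and
    # the example values are illustrative only.
    def diagnose(page_request_rate, disk_util_peak, mem_util,
                 rate_threshold=100.0):
        """Suggest which resource to examine when paging looks high."""
        if page_request_rate < rate_threshold:
            return "paging looks normal"
        if mem_util >= disk_util_peak:
            return ("memory appears more constrained; look for processes "
                    "with high major fault rates")
        return "disk appears more constrained; check the busiest disk"

    # Example with made-up sample values:
    print(diagnose(page_request_rate=250.0, disk_util_peak=40.0, mem_util=95.0))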
GBL_MEM_PAGE_REQUEST_RATE_CUM
----------------------------------
The average number of page requests to or from the disk per second over the
cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging
space and not to or from the file system.
On Windows, this includes pages paged to or from both paging space and the
file system.
GBL_MEM_PAGE_REQUEST_RATE_HIGH
----------------------------------
The highest number of page requests per second during any interval over the
cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging
space and not to or from the file system.
On Windows, this includes pages paged to or from both paging space and the
file system.
GBL_MEM_PHYS
----------------------------------
The amount of physical memory in the system (in MBs unless otherwise
specified).
On HP-UX, banks with bad memory are not counted. Note that on some machines,
the Processor Dependent Code (PDC) uses the upper 1MB of memory, so this
metric reports less than the actual physical memory of the system. Thus, on a
system with 256MB of physical memory, this metric and dmesg(1M) might only
report 267,386,880 bytes (255MB). This is all the physical memory that
software on the machine can access.
On Windows, this is the total memory available, which may be slightly less
than the total amount of physical memory present in the system. This value
is also reported in the Control Panel’s About Windows NT help topic.
On Linux, this is the amount of memory given by dmesg(1M). If the value is
not available in the kernel ring buffer, then the sum of system memory and
available memory is reported as physical memory.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as that seen in the global zone.
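On Linux, a rough cross-check of this value can be read from /proc/meminfo.
MemTotal is the memory the kernel manages, which is usually slightly lower
than the RAM reported in the kernel ring buffer at boot, so the sketch below
is only a hedged approximation of the metric described above.

    # Hedged sketch: approximate physical memory in MBs on Linux from
    # /proc/meminfo; MemTotal is kernel-managed memory, not raw installed RAM.
    def mem_total_mb():
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    kb = int(line.split()[1])    # value is reported in kB
                    return kb / 1024.0
        return None

    print("physical memory: %.0f MB" % mem_total_mb())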
GBL_MEM_PHYS_SWAPPED
----------------------------------
On a recognized VMware ESX guest, where VMware guest SDK is enabled, this
metric indicates the amount of memory that has been reclaimed by the ESX
Server from this logical system by transparently swapping the logical system’s memory to
disk. The value is “na” otherwise.
GBL_MEM_SHARES_PRIO
----------------------------------
The weight/priority for memory assigned to this logical system. This value
influences the share of unutilized physical memory that this logical system
can utilize. On a recognized VMware ESX guest, where VMware guest SDK is
enabled, this value can range from 0 to 100000. The value will be “na”
otherwise.
GBL_MEM_SWAPIN_BYTE
----------------------------------
The number of KBs transferred in from disk due to swap ins (or reactivations
on HP-UX) during the interval.
On Linux and AIX, swap metrics are equal to the corresponding page metrics.
On HP-UX, process swapping was replaced by a combination of paging and
deactivation. Process deactivation occurs when the system is thrashing or
when the amount of free memory falls below a critical level. The swapper
then marks certain processes for deactivation and removes them from the run
queue. Pages within the associated memory regions are reused or paged out by
the memory management vhand process in favor of pages belonging to processes
that are not deactivated. Unlike traditional process swapping, deactivated
memory pages may or may not be written out to the swap area, because a
process could be reactivated before the paging occurs.
To summarize, a process swap-out on HP-UX is a process deactivation. A swap-
in is a reactivation of a deactivated process. Swap metrics that report
swap-out bytes now represent bytes paged out to swap areas from deactivated
regions. Because these pages are pushed out over time based on memory
demands, these counts are much smaller than HP-UX 9.x counts where the entire
process was written to the swap area when it was swapped-out. Likewise, swap-
in bytes now represent bytes paged in as a result of reactivating a
deactivated process and reading in any pages that were actually paged out to
the swap area while the process was deactivated.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as that seen in the global zone.
GBL_MEM_SWAPIN_BYTE_CUM
----------------------------------
The number of KBs transferred in from disk due to swap ins (or reactivations
on HP-UX) over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On Linux and AIX, swap metrics are equal to the corresponding page metrics.
On HP-UX, process swapping was replaced by a combination of paging and
deactivation. Process deactivation occurs when the system is thrashing or
when the amount of free memory falls below a critical level. The swapper
then marks certain processes for deactivation and removes them from the run
queue. Pages within the associated memory regions are reused or paged out by
the memory management vhand process in favor of pages belonging to processes
that are not deactivated. Unlike traditional process swapping, deactivated
memory pages may or may not be written out to the swap area, because a
process could be reactivated before the paging occurs.
To summarize, a process swap-out on HP-UX is a process deactivation. A swap-
in is a reactivation of a deactivated process. Swap metrics that report
swap-out bytes now represent bytes paged out to swap areas from deactivated
regions. Because these pages are pushed out over time based on memory
demands, these counts are much smaller than HP-UX 9.x counts where the entire
process was written to the swap area when it was swapped-out. Likewise, swap-
in bytes now represent bytes paged in as a result of reactivating a
deactivated process and reading in any pages that were actually paged out to
the swap area while the process was deactivated.
GBL_MEM_SWAPIN_BYTE_RATE
----------------------------------
The number of KBs per second transferred from disk due to swap ins (or
reactivations on HP-UX) during the interval.
On Linux and AIX, swap metrics are equal to the corresponding page metrics.
On HP-UX, process swapping was replaced by a combination of paging and
deactivation. Process deactivation occurs when the system is thrashing or
when the amount of free memory falls below a critical level. The swapper
then marks certain processes for deactivation and removes them from the run
queue. Pages within the associated memory regions are reused or paged out by
the memory management vhand process in favor of pages belonging to processes
that are not deactivated. Unlike traditional process swapping, deactivated
memory pages may or may not be written out to the swap area, because a
process could be reactivated before the paging occurs.
To summarize, a process swap-out on HP-UX is a process deactivation. A swap-
in is a reactivation of a deactivated process. Swap metrics that report
swap-out bytes now represent bytes paged out to swap areas from deactivated
regions. Because these pages are pushed out over time based on memory
demands, these counts are much smaller than HP-UX 9.x counts where the entire
process was written to the swap area when it was swapped-out. Likewise, swap-
in bytes now represent bytes paged in as a result of reactivating a
deactivated process and reading in any pages that were actually paged out to
the swap area while the process was deactivated.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as that seen in the global zone.
GBL_MEM_SWAPIN_BYTE_RATE_CUM
----------------------------------
The number of KBs per second transferred from disk due to swap ins (or
reactivations on HP-UX) over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On Linux and AIX, swap metrics are equal to the corresponding page metrics.
On HP-UX, process swapping was replaced by a combination of paging and
deactivation. Process deactivation occurs when the system is thrashing or
when the amount of free memory falls below a critical level. The swapper
then marks certain processes for deactivation and removes them from the run
queue. Pages within the associated memory regions are reused or paged out by
the memory management vhand process in favor of pages belonging to processes
that are not deactivated. Unlike traditional process swapping, deactivated
memory pages may or may not be written out to the swap area, because a
process could be reactivated before the paging occurs.
To summarize, a process swap-out on HP-UX is a process deactivation. A swap-
in is a reactivation of a deactivated process. Swap metrics that report
swap-out bytes now represent bytes paged out to swap areas from deactivated
regions. Because these pages are pushed out over time based on memory
demands, these counts are much smaller than HP-UX 9.x counts where the entire
process was written to the swap area when it was swapped-out. Likewise, swap-
in bytes now represent bytes paged in as a result of reactivating a
deactivated process and reading in any pages that were actually paged out to
the swap area while the process was deactivated.
GBL_MEM_SWAPIN_BYTE_RATE_HIGH
----------------------------------
The highest number of KBs per second transferred from disk due to swap ins
(or reactivations on HP-UX) during any interval over the cumulative
collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On Linux and AIX, swap metrics are equal to the corresponding page metrics.
On HP-UX, process swapping was replaced by a combination of paging and
deactivation. Process deactivation occurs when the system is thrashing or
when the amount of free memory falls below a critical level. The swapper
then marks certain processes for deactivation and removes them from the run
queue. Pages within the associated memory regions are reused or paged out by
the memory management vhand process in favor of pages belonging to processes
that are not deactivated. Unlike traditional process swapping, deactivated
memory pages may or may not be written out to the swap area, because a
process could be reactivated before the paging occurs.
To summarize, a process swap-out on HP-UX is a process deactivation. A swap-
in is a reactivation of a deactivated process. Swap metrics that report
swap-out bytes now represent bytes paged out to swap areas from deactivated
regions. Because these pages are pushed out over time based on memory
demands, these counts are much smaller than HP-UX 9.x counts where the entire
process was written to the swap area when it was swapped-out. Likewise, swap-
in bytes now represent bytes paged in as a result of reactivating a
deactivated process and reading in any pages that were actually paged out to
the swap area while the process was deactivated.
GBL_MEM_SWAPOUT_BYTE
----------------------------------
The number of KBs (or MBs if specified) transferred out to disk due to swap
outs (or deactivations on HP-UX) during the interval.
On Linux and AIX, swap metrics are equal to the corresponding page metrics.
On HP-UX, process swapping was replaced by a combination of paging and
deactivation. Process deactivation occurs when the system is thrashing or
when the amount of free memory falls below a critical level. The swapper
then marks certain processes for deactivation and removes them from the run
queue. Pages within the associated memory regions are reused or paged out by
the memory management vhand process in favor of pages belonging to processes
that are not deactivated. Unlike traditional process swapping, deactivated
memory pages may or may not be written out to the swap area, because a
process could be reactivated before the paging occurs.
To summarize, a process swap-out on HP-UX is a process deactivation. A swap-
in is a reactivation of a deactivated process. Swap metrics that report
swap-out bytes now represent bytes paged out to swap areas from deactivated
regions. Because these pages are pushed out over time based on memory
demands, these counts are much smaller than HP-UX 9.x counts where the entire
process was written to the swap area when it was swapped-out. Likewise, swap-
in bytes now represent bytes paged in as a result of reactivating a
deactivated process and reading in any pages that were actually paged out to
the swap area while the process was deactivated.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as that seen in the global zone.
GBL_MEM_SWAPOUT_BYTE_CUM
----------------------------------
The number of KBs (or MBs if specified) transferred out to disk due to swap
outs (or deactivations on HP-UX) over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On Linux and AIX, swap metrics are equal to the corresponding page metrics.
On HP-UX, process swapping was replaced by a combination of paging and
deactivation. Process deactivation occurs when the system is thrashing or
when the amount of free memory falls below a critical level. The swapper
then marks certain processes for deactivation and removes them from the run
queue. Pages within the associated memory regions are reused or paged out by
the memory management vhand process in favor of pages belonging to processes
that are not deactivated. Unlike traditional process swapping, deactivated
memory pages may or may not be written out to the swap area, because a
process could be reactivated before the paging occurs.
To summarize, a process swap-out on HP-UX is a process deactivation. A swap-
in is a reactivation of a deactivated process. Swap metrics that report
swap-out bytes now represent bytes paged out to swap areas from deactivated
regions. Because these pages are pushed out over time based on memory
demands, these counts are much smaller than HP-UX 9.x counts where the entire
process was written to the swap area when it was swapped-out. Likewise, swap-
in bytes now represent bytes paged in as a result of reactivating a
deactivated process and reading in any pages that were actually paged out to
the swap area while the process was deactivated.
GBL_MEM_SWAPOUT_BYTE_RATE
----------------------------------
The number of KBs (or MBs if specified) per second transferred out to disk
due to swap outs (or deactivations on HP-UX) during the interval.
On Linux and AIX, swap metrics are equal to the corresponding page metrics.
On HP-UX, process swapping was replaced by a combination of paging and
deactivation. Process deactivation occurs when the system is thrashing or
when the amount of free memory falls below a critical level. The swapper
then marks certain processes for deactivation and removes them from the run
queue. Pages within the associated memory regions are reused or paged out by
the memory management vhand process in favor of pages belonging to processes
that are not deactivated. Unlike traditional process swapping, deactivated
memory pages may or may not be written out to the swap area, because a
process could be reactivated before the paging occurs.
To summarize, a process swap-out on HP-UX is a process deactivation. A swap-
in is a reactivation of a deactivated process. Swap metrics that report
swap-out bytes now represent bytes paged out to swap areas from deactivated
regions. Because these pages are pushed out over time based on memory
demands, these counts are much smaller than HP-UX 9.x counts where the entire
process was written to the swap area when it was swapped-out. Likewise, swap-
in bytes now represent bytes paged in as a result of reactivating a
deactivated process and reading in any pages that were actually paged out to
the swap area while the process was deactivated.
On Solaris non-global zones with the Uncapped Memory scenario, this metric
value is the same as that seen in the global zone.
GBL_MEM_SWAPOUT_BYTE_RATE_CUM
----------------------------------
The average number of KBs (or MBs if specified) per second transferred out to
disk due to swap outs (or deactivations on HP-UX) over the cumulative
collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On Linux and AIX, swap metrics are equal to the corresponding page metrics.
On HP-UX, process swapping was replaced by a combination of paging and
deactivation. Process deactivation occurs when the system is thrashing or
when the amount of free memory falls below a critical level. The swapper
then marks certain processes for deactivation and removes them from the run
queue. Pages within the associated memory regions are reused or paged out by
the memory management vhand process in favor of pages belonging to processes
that are not deactivated. Unlike traditional process swapping, deactivated
memory pages may or may not be written out to the swap area, because a
process could be reactivated before the paging occurs.
To summarize, a process swap-out on HP-UX is a process deactivation. A swap-
in is a reactivation of a deactivated process. Swap metrics that report
swap-out bytes now represent bytes paged out to swap areas from deactivated
regions. Because these pages are pushed out over time based on memory
demands, these counts are much smaller than HP-UX 9.x counts where the entire
process was written to the swap area when it was swapped-out. Likewise, swap-
in bytes now represent bytes paged in as a result of reactivating a
deactivated process and reading in any pages that were actually paged out to
the swap area while the process was deactivated.
GBL_MEM_SWAPOUT_BYTE_RATE_HIGH
----------------------------------
The highest number of KBs (or MBs if specified) per second transferred out to
disk due to swap outs (or deactivations on HP-UX) during any interval over
the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On Linux and AIX, swap metrics are equal to the corresponding page metrics.
On HP-UX, process swapping was replaced by a combination of paging and
deactivation. Process deactivation occurs when the system is thrashing or
when the amount of free memory falls below a critical level. The swapper
then marks certain processes for deactivation and removes them from the run
queue. Pages within the associated memory regions are reused or paged out by
the memory management vhand process in favor of pages belonging to processes
that are not deactivated. Unlike traditional process swapping, deactivated
memory pages may or may not be written out to the swap area, because a
process could be reactivated before the paging occurs.
To summarize, a process swap-out on HP-UX is a process deactivation. A swap-
in is a reactivation of a deactivated process. Swap metrics that report
swap-out bytes now represent bytes paged out to swap areas from deactivated
regions. Because these pages are pushed out over time based on memory
demands, these counts are much smaller than HP-UX 9.x counts where the entire
process was written to the swap area when it was swapped-out. Likewise, swap-
in bytes now represent bytes paged in as a result of reactivating a
deactivated process and reading in any pages that were actually paged out to
the swap area while the process was deactivated.
GBL_MEM_SYS
----------------------------------
The amount of physical memory (in MBs unless otherwise specified) used by the
system (kernel) during the interval. System memory does not include the
buffer cache. On HP-UX and Linux, it does not include the file cache either.
On HP-UX 11.0, this metric does not include some kinds of dynamically
allocated kernel memory. This has always been reported in the GBL_MEM_USER*
metrics.
On HP-UX 11.11 and beyond, this metric includes some kinds of dynamically
allocated kernel memory.
On Solaris non-global zones, this metric shows value as 0.
GBL_MEM_SYS_UTIL
----------------------------------
The percentage of physical memory used by the system during the interval.
System memory does not include the buffer cache. On HP-UX and Linux, it does
not include the file cache either.
On HP-UX 11.0, this metric does not include some kinds of dynamically
allocated kernel memory. This has always been reported in the GBL_MEM_USER*
metrics.
On HP-UX 11.11 and beyond, this metric includes some kinds of dynamically
allocated kernel memory.
On Solaris non-global zones, this metric shows value as 0.
GBL_MEM_USER
----------------------------------
The amount of physical memory (in MBs unless otherwise specified) allocated
to user code and data at the end of the interval. User memory regions
include code, heap, stack, and other data areas including shared memory.
This does not include memory for the buffer cache. On HP-UX and Linux, it
does not include the file cache either.
On HP-UX 11.0, this metric includes some kinds of dynamically allocated
kernel memory.
On HP-UX 11.11 and beyond, this metric does not include some kinds of
dynamically allocated kernel memory. This is now reported in the
GBL_MEM_SYS* metrics.
Large fluctuations in this metric can be caused by programs which allocate
large amounts of memory and then either release the memory or terminate. A
slow continual increase in this metric may indicate a program with a memory
leak.
GBL_MEM_USER_UTIL
----------------------------------
The percent of physical memory allocated to user code and data at the end of
the interval. This metric shows the percent of memory owned by user memory
regions such as user code, heap, stack and other data areas including shared
memory. This does not include memory for the buffer cache. On HP-UX and Linux,
it does not include the file cache either. On HP-UX 11.0, this metric includes
some kinds of dynamically allocated kernel memory.
On HP-UX 11.11 and beyond, this metric does not include some kinds of
dynamically allocated kernel memory. This is now reported in the
GBL_MEM_SYS* metrics.
Large fluctuations in this metric can be caused by programs which allocate
large amounts of memory and then either release the memory or terminate. A
slow continual increase in this metric may indicate a program with a memory
leak.
GBL_MEM_UTIL
----------------------------------
The percentage of physical memory in use during the interval. This includes
system memory (occupied by the kernel), buffer cache and user memory.
On HP-UX 11iv3 and above, this includes the file cache. The file cache is
excluded when the cachemem parameter in the parm file is set to free.
On HP-UX, this calculation is done using the byte values for physical memory
and used memory, and is therefore more accurate than comparing the reported
kilobyte values for physical memory and used memory.
On Linux, the value of this metric includes file cache when the cachemem
parameter in the parm file is set to user.
On SUN, high values for this metric may not indicate a true memory shortage.
This metric can be influenced by the VMM (Virtual Memory Management) system.
The ZFS ARC cache is excluded when the cachemem parameter in the parm file is
set to free.
On AIX, the file cache is excluded when the cachemem parameter in the parm
file is set to free.
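As an illustration only, the following Python sketch approximates a
GBL_MEM_UTIL-style value on Linux from /proc/meminfo (MemTotal, MemFree,
Buffers and Cached are standard kernel fields, not GlancePlus interfaces);
the count_cache_as_used flag mirrors the effect of the cachemem parameter
described above. GlancePlus itself derives the metric from its own
instrumentation.

  # Illustrative sketch: approximate a GBL_MEM_UTIL-style value on Linux.
  def mem_util_pct(count_cache_as_used=True):
      info = {}
      with open("/proc/meminfo") as f:
          for line in f:
              key, value = line.split(":", 1)
              info[key] = int(value.split()[0])        # values are in kB
      used = info["MemTotal"] - info["MemFree"]        # kernel + cache + user
      if not count_cache_as_used:                      # analogous to cachemem=free
          used -= info.get("Buffers", 0) + info.get("Cached", 0)
      return 100.0 * used / info["MemTotal"]
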
Locality Domain metrics are available on HP-UX 11iv2 and above.
GBL_MEM_FREE and LDOM_MEM_FREE, as well as the memory utilization metrics
derived from them, may not always fully match. GBL_MEM_FREE represents free
memory in the kernel’s reservation layer while LDOM_MEM_FREE shows actual
free pages. If memory has been reserved but not actually consumed from the
Locality Domains, the two values won’t match. Because GBL_MEM_FREE includes
pre-reserved memory, the GBL_MEM_* metrics are a better indicator of actual
memory consumption in most situations.
GBL_MEM_UTIL_CUM
----------------------------------
The average percentage of physical memory in use over the cumulative
collection time. This includes system memory (occupied by the kernel),
buffer cache and user memory.
On HP-UX 11iv3 and above, this also includes the file cache.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
GBL_MEM_UTIL_HIGH
----------------------------------
The highest percentage of physical memory in use in any interval over the
cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
GBL_NET_COLLISION
----------------------------------
The number of collisions that occurred on all network interfaces during the
interval. A rising rate of collisions versus outbound packets is an
indication that the network is becoming increasingly congested. This metric
does not include deferred packets.
This does not include data for loopback interface.
For HP-UX, this will be the same as the sum of the “Single Collision Frames”,
“Multiple Collision Frames”, “Late Collisions”, and “Excessive Collisions”
values from the output of the “lanadmin” utility for the network interface.
Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0
release and beyond, “netstat -i” shows network activity on the logical level
(IP) only.
For all other Unix systems, this is the same as the sum of the “Coll” column
from the “netstat -i” command (“collisions” from the “netstat -i -e” command
on Linux) for a network device. See also netstat(1).
AIX does not support the collision count for the ethernet interface. The
collision count is supported for the token ring (tr) and loopback (lo)
interfaces. For more information, please refer to the netstat(1) man page.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_COLLISION_1_MIN_RATE
----------------------------------
The number of collisions per minute on all network interfaces during the
interval. This metric does not include deferred packets.
This does not include data for loopback interface.
Collisions occur on any busy network, but abnormal collision rates could
indicate a hardware or software problem.
AIX does not support the collision count for the ethernet interface. The
collision count is supported for the token ring (tr) and loopback (lo)
interfaces. For more information, please refer to the netstat(1) man page.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_COLLISION_CUM
----------------------------------
The number of collisions that occurred on all network interfaces over the
cumulative collection time. A rising rate of collisions versus outbound
packets is an indication that the network is becoming increasingly congested.
This metric does not include deferred packets.
This does not include data for loopback interface.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
For HP-UX, this will be the same as the sum of the “Single Collision Frames”,
“Multiple Collision Frames”, “Late Collisions”, and “Excessive Collisions”
values from the output of the “lanadmin” utility for the network interface.
Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0
release and beyond, “netstat -i” shows network activity on the logical level
(IP) only.
For other Unix systems, this is the same as the sum of the “Coll” column from
the “netstat -i” command (“collisions” from the “netstat -i -e” command on
Linux) for a network device. See also netstat(1).
AIX does not support the collision count for the ethernet interface. The
collision count is supported for the token ring (tr) and loopback (lo)
interfaces. For more information, please refer to the netstat(1) man page.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_COLLISION_PCT
----------------------------------
The percentage of collisions to total outbound packet attempts during the
interval. Outbound packet attempts include both successful packets and
collisions.
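Expressed as arithmetic (a sketch of the calculation only, not the collector’s
code), with collisions and out_packets being the counts accumulated during one
interval:

  # GBL_NET_COLLISION_PCT-style calculation for one interval.
  # Outbound packet attempts = successful outbound packets + collisions.
  def collision_pct(collisions, out_packets):
      attempts = out_packets + collisions
      return 100.0 * collisions / attempts if attempts else 0.0

  print(collision_pct(collisions=12, out_packets=5000))   # about 0.24
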
This does not include data for loopback interface.
A rising rate of collisions versus outbound packets is an indication that the
network is becoming increasingly congested.
This metric does not currently include deferred packets.
AIX does not support the collision count for the ethernet interface. The
collision count is supported for the token ring (tr) and loopback (lo)
interfaces. For more information, please refer to the netstat(1) man page.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_COLLISION_PCT_CUM
----------------------------------
The percentage of collisions to total outbound packet attempts over the
cumulative collection time. Outbound packet attempts include both successful
packets and collisions.
This does not include data for loopback interface.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
A rising rate of collisions versus outbound packets is an indication that the
network is becoming increasingly congested.
This metric does not currently include deferred packets.
AIX does not support the collision count for the ethernet interface. The
collision count is supported for the token ring (tr) and loopback (lo)
interfaces. For more information, please refer to the netstat(1) man page.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_COLLISION_RATE
----------------------------------
The number of collisions per second on all network interfaces during the
interval. This metric does not include deferred packets.
This does not include data for loopback interface.
A rising rate of collisions versus outbound packets is an indication that the
network is becoming increasingly congested.
AIX does not support the collision count for the ethernet interface. The
collision count is supported for the token ring (tr) and loopback (lo)
interfaces. For more information, please refer to the netstat(1) man page.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_ERROR
----------------------------------
The number of errors that occurred on all network interfaces during the
interval.
This does not include data for loopback interface.
For HP-UX, this will be the same as the sum of the “Inbound Errors” and
“Outbound Errors” values from the output of the “lanadmin” utility for the
network interface. Remember that “lanadmin” reports cumulative counts. As
of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on
the logical level (IP) only.
For all other Unix systems, this is the same as the sum of “Ierrs” (RX-ERR on
Linux) and “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a
network device. See also netstat(1).
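On Linux, the per-interface error counters that “netstat -i” reports are also
visible in /proc/net/dev. The following sketch (illustrative only, not the
GlancePlus data path) sums inbound and outbound errors across all non-loopback
interfaces; the counters are cumulative since boot, so the interval value is
the difference between two samples taken one interval apart.

  # Sum RX-ERR + TX-ERR for all non-loopback interfaces from /proc/net/dev.
  def net_errors():
      total = 0
      with open("/proc/net/dev") as f:
          for line in f.readlines()[2:]:            # skip the two header lines
              name, data = line.split(":", 1)
              if name.strip() == "lo":              # loopback is excluded
                  continue
              fields = data.split()
              total += int(fields[2]) + int(fields[10])   # rx_errs + tx_errs
      return total
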
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_ERROR_1_MIN_RATE
----------------------------------
The number of errors per minute on all network interfaces during the
interval. This rate should normally be zero or very small. A large error
rate can indicate a hardware or software problem.
This does not include data for loopback interface.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_ERROR_CUM
----------------------------------
The number of errors that occurred on all network interfaces over the
cumulative collection time.
This does not include data for loopback interface.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
For HP-UX, this will be the same as the total sum of the “Inbound Errors” and
“Outbound Errors” values from the output of the “lanadmin” utility for the
network interface. Remember that “lanadmin” reports cumulative counts. As
of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on
the logical level (IP) only.
For all other Unix systems, this is the same as the sum of “Ierrs” (RX-ERR on
Linux) and “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a
network device. See also netstat(1).
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_ERROR_RATE
----------------------------------
The number of errors per second on all network interfaces during the
interval.
This does not include data for loopback interface.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_IN_ERROR
----------------------------------
The number of inbound errors that occurred on all network interfaces during
the interval.
A large number of errors may indicate a hardware problem on the network.
This does not include data for loopback interface.
For HP-UX, this will be the same as the sum of the “Inbound Errors” values
from the output of the “lanadmin” utility for the network interface.
Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0
release and beyond, “netstat -i” shows network activity on the logical level
(IP) only.
For all other Unix systems, this is the same as the sum of the “Ierrs” column
(RX-ERR on Linux) from the “netstat -i” command for a network device. See
also netstat(1).
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_IN_ERROR_CUM
----------------------------------
The number of inbound errors that occurred on all network interfaces over the
cumulative collection time.
This does not include data for loopback interface.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
A large number of errors may indicate a hardware problem on the network.
For HP-UX, this will be the same as the total sum of the “Inbound Errors”
values from the output of the “lanadmin” utility for the network interface.
Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0
release and beyond, “netstat -i” shows network activity on the logical level
(IP) only.
For all other Unix systems, this is the same as the sum of the “Ierrs” column
(RX-ERR on Linux) from the “netstat -i” command for a network device. See
also netstat(1).
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_IN_ERROR_PCT
----------------------------------
The percentage of inbound network errors to total inbound packet attempts
during the interval. Inbound packet attempts include both packets
successfully received and those that encountered errors.
This does not include data for loopback interface.
A large number of errors may indicate a hardware problem on the network. The
percentage of inbound errors to total packets attempted should remain low.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_IN_ERROR_PCT_CUM
----------------------------------
The percentage of inbound network errors to total inbound packet attempts
over the cumulative collection time. Inbound packet attempts include both
packets successfully received and those that encountered errors.
This does not include data for loopback interface.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
A large number of errors may indicate a hardware problem on the network. The
percentage of inbound errors to total packets attempted should remain low.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_IN_ERROR_RATE
----------------------------------
The number of inbound errors per second on all network interfaces during the
interval.
This does not include data for loopback interface.
A large number of errors may indicate a hardware problem on the network. The
percentage of inbound errors to total packets attempted should remain low.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_IN_ERROR_RATE_CUM
----------------------------------
The average number of inbound errors per second on all network interfaces
over the cumulative collection time.
This does not include data for loopback interface.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_IN_PACKET
----------------------------------
The number of successful packets received through all network interfaces
during the interval. Successful packets are those that have been processed
without errors or collisions.
This does not include data for loopback interface.
For HP-UX, this will be the same as the sum of the “Inbound Unicast Packets”
and “Inbound Non-Unicast Packets” values from the output of the “lanadmin”
utility for the network interface. Remember that “lanadmin” reports
cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i”
shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the sum of the “Ipkts” column
(RX-OK on Linux) from the “netstat -i” command for a network device. See
also netstat(1).
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On Windows systems, the packet size for NBT connections is defined as 1
Kbyte.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_IN_PACKET_CUM
----------------------------------
The number of successful packets received through all network interfaces over
the cumulative collection time. Successful packets are those that have been
processed without errors or collisions.
This does not include data for loopback interface.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
For HP-UX, this will be the same as the total sum of the “Inbound Unicast
Packets” and “Inbound Non-Unicast Packets” values from the output of the
“lanadmin” utility for the network interface. Remember that “lanadmin”
reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat
-i” shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the sum of the “Ipkts” column
(RX-OK on Linux) from the “netstat -i” command for a network device. See
also netstat(1).
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_IN_PACKET_RATE
----------------------------------
The number of successful packets per second received through all network
interfaces during the interval. Successful packets are those that have been
processed without errors or collisions.
This does not include data for loopback interface.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On Windows systems, the packet size for NBT connections is defined as 1
Kbyte.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_OUT_ERROR
----------------------------------
The number of outbound errors that occurred on all network interfaces during
the interval.
This does not include data for loopback interface.
For HP-UX, this will be the same as the sum of the “Outbound Errors” values
from the output of the “lanadmin” utility for the network interface.
Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0
release and beyond, “netstat -i” shows network activity on the logical level
(IP) only.
For all other Unix systems, this is the same as the sum of “Oerrs” (TX-ERR on
Linux) from the “netstat -i” command for a network device. See also
netstat(1).
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_OUT_ERROR_CUM
----------------------------------
The number of outbound errors that occurred on all network interfaces over
the cumulative collection time.
This does not include data for loopback interface.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
For HP-UX, this will be the same as the total sum of the “Outbound Errors”
values from the output of the “lanadmin” utility for the network interface.
Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0
release and beyond, “netstat -i” shows network activity on the logical level
(IP) only.
For all other Unix systems, this is the same as the sum of “Oerrs” (TX-ERR on
Linux) from the “netstat -i” command for a network device. See also
netstat(1).
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_OUT_ERROR_PCT
----------------------------------
The percentage of outbound network errors to total outbound packet attempts
during the interval. Outbound packet attempts include both packets
successfully sent and those that encountered errors.
This does not include data for loopback interface.
The percentage of outbound errors to total packets attempted to be
transmitted should remain low.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_OUT_ERROR_PCT_CUM
----------------------------------
The percentage of outbound network errors to total outbound packet attempts
over the cumulative collection time. Outbound packet attempts include both
packets successfully sent and those that encountered errors.
This does not include data for loopback interface.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
The percentage of outbound errors to total packets attempted to be
transmitted should remain low.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_OUT_ERROR_RATE
----------------------------------
The number of outbound errors per second on all network interfaces during the
interval.
This does not include data for loopback interface.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_OUT_ERROR_RATE_CUM
----------------------------------
The number of outbound errors per second on all network interfaces over the
cumulative collection time.
This does not include data for loopback interface.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_OUT_PACKET
----------------------------------
The number of successful packets sent through all network interfaces during
the last interval. Successful packets are those that have been processed
without errors or collisions.
This does not include data for loopback interface.
For HP-UX, this will be the same as the sum of the “Outbound Unicast Packets”
and “Outbound Non-Unicast Packets” values from the output of the “lanadmin”
utility for the network interface. Remember that “lanadmin” reports
cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i”
shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the sum of the “Opkts” column
(TX-OK on Linux) from the “netstat -i” command for a network device. See
also netstat(1).
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On Windows systems, the packet size for NBT connections is defined as 1
Kbyte.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_OUT_PACKET_CUM
----------------------------------
The number of successful packets sent through all network interfaces over the
cumulative collection time. Successful packets are those that have been
processed without errors or collisions.
This does not include data for loopback interface.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
For HP-UX, this will be the same as the total sum of the “Outbound Unicast
Packets” and “Outbound Non-Unicast Packets” values from the output of the
“lanadmin” utility for the network interface. Remember that “lanadmin”
reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat
-i” shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the sum of the “Opkts” column
(TX-OK on Linux) from the “netstat -i” command for a network device. See
also netstat(1).
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
GBL_NET_OUT_PACKET_RATE
----------------------------------
The number of successful packets per second sent through the network
interfaces during the interval. Successful packets are those that have been
processed without errors or collisions.
This does not include data for loopback interface.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On Windows systems, the packet size for NBT connections is defined as 1
Kbyte.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_PACKET
----------------------------------
The total number of successful inbound and outbound packets for all network
interfaces during the interval. These are the packets that have been
processed without errors or collisions.
This does not include data for loopback interface.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On Windows systems, the packet size for NBT connections is defined as 1
Kbyte.
GBL_NET_PACKET_RATE
----------------------------------
The number of successful packets per second (both inbound and outbound) for
all network interfaces during the interval. Successful packets are those
that have been processed without errors or collisions.
This does not include data for loopback interface.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On Windows systems, the packet size for NBT connections is defined as 1
Kbyte.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_UTIL_PEAK
----------------------------------
The utilization of the most heavily used network interface at the end of the
interval.
Some AIX systems report a speed that is lower than the measured throughput
and this can result in BYNETIF_UTIL and GBL_NET_UTIL_PEAK showing more than
100% utilization.
On Linux, root permission is required to obtain the network interface
bandwidth, so values will be n/a when running in non-root mode. Also, the
maximum bandwidth of virtual interfaces (vnetN) may be reported incorrectly on
KVM or Xen servers, so, as on AIX, utilization may exceed 100%.
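The idea can be sketched as follows (an approximation under stated
assumptions, not the GlancePlus implementation): an interface’s utilization is
its observed throughput over the interval divided by its bandwidth, and the
peak is the highest value across interfaces. The sketch reads byte counters
from /proc/net/dev and the nominal speed in Mbit/s from
/sys/class/net/<interface>/speed; as noted above, a missing or wrong speed
value makes the result n/a or pushes it past 100%.

  import time

  def _byte_counts():
      counts = {}
      with open("/proc/net/dev") as f:
          for line in f.readlines()[2:]:
              name, data = line.split(":", 1)
              name = name.strip()
              if name == "lo":
                  continue
              fields = data.split()
              counts[name] = int(fields[0]) + int(fields[8])   # rx+tx bytes
      return counts

  def net_util_peak(interval=5.0):
      before = _byte_counts()
      time.sleep(interval)
      after = _byte_counts()
      peak = 0.0
      for name, total in after.items():
          try:
              with open("/sys/class/net/%s/speed" % name) as f:
                  mbps = int(f.read())
          except (OSError, ValueError):
              continue                        # bandwidth unknown: treat as n/a
          if mbps <= 0:
              continue
          bits = (total - before.get(name, total)) * 8
          peak = max(peak, 100.0 * bits / (mbps * 1e6 * interval))
      return peak
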
GBL_NFS_CALL
----------------------------------
The number of NFS calls the local system has made as either an NFS client or
server during the interval.
This includes both successful and unsuccessful calls. Unsuccessful calls are
those that cannot be completed due to resource limitations or LAN packet
errors.
NFS calls include create, remove, rename, link, symlink, mkdir, rmdir,
statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache,
null and root operations.
On AIX System WPARs, this metric is NA.
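On Linux, the client-side and server-side RPC call counters that nfsstat
reports are exposed in /proc/net/rpc/nfs and /proc/net/rpc/nfsd. The sketch
below (illustrative only, not the GlancePlus data path) adds both to
approximate a GBL_NFS_CALL-style count; the “rpc” line in each file begins
with a cumulative call count, so an interval value is the difference between
two successive samples, and a file is absent when the corresponding client or
server code is not loaded.

  # Total NFS calls made by this system as a client plus as a server (Linux).
  def _rpc_calls(path):
      try:
          with open(path) as f:
              for line in f:
                  if line.startswith("rpc "):
                      return int(line.split()[1])   # total calls
      except OSError:
          pass                                      # role not active here
      return 0

  def nfs_calls():
      return (_rpc_calls("/proc/net/rpc/nfs") +     # acting as an NFS client
              _rpc_calls("/proc/net/rpc/nfsd"))     # acting as an NFS server
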
GBL_NFS_CALL_RATE
----------------------------------
The number of NFS calls per second the system made as either an NFS client or
an NFS server during the interval.
Each computer can operate as both an NFS server and an NFS client.
This metric includes both successful and unsuccessful calls. Unsuccessful
calls are those that cannot be completed due to resource limitations or LAN
packet errors.
NFS calls include create, remove, rename, link, symlink, mkdir, rmdir,
statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache,
null and root operations.
On AIX System WPARs, this metric is NA.
GBL_NFS_CLIENT_BAD_CALL
----------------------------------
The number of failed NFS client calls during the interval. Calls fail due to
lack of system resources (lack of virtual memory) as well as network errors.
GBL_NFS_CLIENT_BAD_CALL_CUM
----------------------------------
The number of failed NFS client calls over the cumulative collection time.
Calls fail due to lack of system resources (lack of virtual memory) as well
as network errors.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
GBL_NFS_CLIENT_CALL
----------------------------------
The number of NFS calls the local machine has processed as an NFS client
during the interval. Calls are the system calls used to initiate physical
NFS operations. These calls are not always successful due to resource
constraints or LAN errors, which means that the call rate may exceed the
IO rate. This metric includes both successful and unsuccessful calls.
NFS calls include create, remove, rename, link, symlink, mkdir, rmdir,
statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache,
null and root operations.
GBL_NFS_CLIENT_CALL_CUM
----------------------------------
The number of NFS calls the local machine has processed as an NFS client over
the cumulative collection time. Calls are the system calls used to initiate
physical NFS operations. These calls are not always successful due to
resource constraints or LAN errors, which means that the call rate may
exceed the IO rate. This metric includes both successful and unsuccessful
calls.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
NFS calls include create, remove, rename, link, symlink, mkdir, rmdir,
statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache,
null and root operations.
GBL_NFS_CLIENT_CALL_RATE
----------------------------------
The number of NFS calls the local machine has processed as an NFS client per
second during the interval. Calls are the system calls used to initiate
physical NFS operations. These calls are not always successful due to
resource constraints or LAN errors, which means that the call rate may
exceed the IO rate. This metric includes both successful and unsuccessful
calls.
NFS calls include create, remove, rename, link, symlink, mkdir, rmdir,
statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache,
null and root operations.
GBL_NFS_CLIENT_IO
----------------------------------
The number of NFS IOs the local machine has completed as an NFS client during
the interval. This number represents physical IOs sent by the client in
contrast to a call which is an attempt to initiate these operations.
Each computer can operate as both an NFS server and an NFS client.
NFS IOs include reads and writes from successful calls to getattr, setattr,
lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_CLIENT_IO_CUM
----------------------------------
The number of NFS IOs the local machine has completed as an NFS client over
the cumulative collection time. This number represents physical IOs sent by
the client in contrast to a call which is an attempt to initiate these
operations.
Each computer can operate as both an NFS server and an NFS client.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
NFS IOs include reads and writes from successful calls to getattr, setattr,
lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_CLIENT_IO_PCT
----------------------------------
The percentage of NFS IOs the local machine has completed as an NFS client
versus total NFS IOs completed during the interval. This number represents
physical IOs sent by the client in contrast to a call which is an attempt to
initiate these operations.
Each computer can operate as both an NFS server and an NFS client.
A percentage greater than 50 indicates that this machine is acting more as a
client. A percentage less than 50 indicates this machine is acting more as a
server for others.
NFS IOs include reads and writes from successful calls to getattr, setattr,
lookup, read, readdir, readlink, write, and writecache.
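As arithmetic (a sketch only), with client_io and server_io being the NFS IOs
completed in each role during the interval:

  # GBL_NFS_CLIENT_IO_PCT-style calculation: share of NFS IOs done as a client.
  def nfs_client_io_pct(client_io, server_io):
      total = client_io + server_io
      return 100.0 * client_io / total if total else 0.0

  print(nfs_client_io_pct(client_io=800, server_io=200))   # 80.0
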
GBL_NFS_CLIENT_IO_PCT_CUM
----------------------------------
The percentage of NFS IOs the local machine has completed as an NFS client
versus total NFS IOs completed over the cumulative collection time. This
number represents physical IOs sent by the client in contrast to a call which
is an attempt to initiate these operations.
Each computer can operate as both an NFS server and an NFS client.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
A percentage greater than 50 indicates that this machine is acting more as a
client. A percentage less than 50 indicates this machine is acting more as a
server for others.
NFS IOs include reads and writes from successful calls to getattr, setattr,
lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_CLIENT_IO_RATE
----------------------------------
The number of NFS IOs per second the local machine has completed as an NFS
client during the interval. This number represents physical IOs sent by the
client in contrast to a call which is an attempt to initiate these
operations.
Each computer can operate as both an NFS server and an NFS client.
NFS IOs include reads and writes from successful calls to getattr, setattr,
lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_CLIENT_IO_RATE_CUM
----------------------------------
The number of NFS IOs per second the local machine has completed as an NFS
client over the cumulative collection time. This number represents physical
IOs sent by the client in contrast to a call which is an attempt to initiate
these operations.
Each computer can operate as both an NFS server and an NFS client.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
NFS IOs include reads and writes from successful calls to getattr, setattr,
lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_CLIENT_READ_RATE
----------------------------------
The number of NFS “read” operations per second the system generated as an NFS
client during the interval.
NFS Version 2 read operations consist of getattr, lookup, readlink, readdir,
null, root, statfs, and read.
NFS Version 3 read operations consist of getattr, lookup, access, readlink,
read, readdir, readdirplus, fsstat, fsinfo, and null.
GBL_NFS_CLIENT_READ_RATE_CUM
----------------------------------
The average number of NFS “read” operations per second the system generated
as an NFS client over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
NFS Version 2 read operations consist of getattr, lookup, readlink, readdir,
null, root, statfs, and read.
NFS Version 3 read operations consist of getattr, lookup, access, readlink,
read, readdir, readdirplus, fsstat, fsinfo, and null.
GBL_NFS_CLIENT_WRITE_RATE
----------------------------------
The number of NFS “write” operations per second the system generated as an
NFS client during the interval.
NFS Version 2 write operations consist of setattr, write, writecache,
create, remove, rename, link, symlink, mkdir, and rmdir.
NFS Version 3 write operations consist of setattr, write, create, mkdir,
symlink, mknod, remove, rmdir, rename, link, pathconf, and commit.
GBL_NFS_CLIENT_WRITE_RATE_CUM
----------------------------------
The average number of NFS “write” operations per second the system generated
as an NFS client over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
NFS Version 2 write operations consist of setattr, write, writecache,
create, remove, rename, link, symlink, mkdir, and rmdir.
NFS Version 3 write operations consist of setattr, write, create, mkdir,
symlink, mknod, remove, rmdir, rename, link, pathconf, and commit.
GBL_NFS_SERVER_BAD_CALL
----------------------------------
The number of failed NFS server calls during the interval. Calls fail due to
lack of system resources (lack of virtual memory) as well as network errors.
GBL_NFS_SERVER_BAD_CALL_CUM
----------------------------------
The number of failed NFS server calls over the cumulative collection time.
Calls fail due to lack of system resources (lack of virtual memory) as well
as network errors.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
GBL_NFS_SERVER_CALL
----------------------------------
The number of NFS calls the local machine has processed as an NFS server
during the interval.
Calls are the system calls used to initiate physical NFS operations. These
calls are not always successful due to resource constraints or LAN errors,
which means that the call rate could exceed the IO rate. This metric
includes both successful and unsuccessful calls.
NFS calls include create, remove, rename, link, symlink, mkdir, rmdir,
statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache,
null and root operations.
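For a rough cross-check on Linux, the server-side call counters can be viewed
with nfsstat; this is only an illustration, since the exact data source of
this metric is not specified here:
nfsstat -s                 # per-operation NFS server call counts
cat /proc/net/rpc/nfsd     # raw server-side RPC/NFS counters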
GBL_NFS_SERVER_CALL_CUM
----------------------------------
The number of NFS calls the local machine has processed as an NFS server over
the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
Calls are the system calls used to initiate physical NFS operations. These
calls are not always successful due to resource constraints or LAN errors,
which means that the call rate could exceed the IO rate. This metric
includes both successful and unsuccessful calls.
NFS calls include create, remove, rename, link, symlink, mkdir, rmdir,
statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache,
null and root operations.
GBL_NFS_SERVER_CALL_RATE
----------------------------------
The number of NFS calls the local machine has processed per second as an NFS
server during the interval.
Calls are the system calls used to initiate physical NFS operations. These
calls are not always successful due to resource constraints or LAN errors,
which means that the call rate could exceed the IO rate. This metric
includes both successful and unsuccessful calls.
NFS calls include create, remove, rename, link, symlink, mkdir, rmdir,
statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache,
null and root operations.
GBL_NFS_SERVER_IO
----------------------------------
The number of NFS IOs the local machine has completed as an NFS server during
the interval. This number represents physical IOs received by the server in
contrast to a call which is an attempt to initiate these operations.
Each computer can operate as both an NFS server and an NFS client.
NFS IOs include reads and writes from successful calls to getattr, setattr,
lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_SERVER_IO_CUM
----------------------------------
The number of NFS IOs the local machine has completed as an NFS server over
the cumulative collection time. This number represents physical IOs received
by the server in contrast to a call which is an attempt to initiate these
operations.
Each computer can operate as both an NFS server and an NFS client.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
NFS IOs include reads and writes from successful calls to getattr, setattr,
lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_SERVER_IO_PCT
----------------------------------
The percentage of NFS IOs the local machine has completed as an NFS server
versus total NFS IOs completed during the interval. This number represents
physical IOs received by the server in contrast to a call which is an attempt
to initiate these operations.
Each computer can operate as both an NFS server and an NFS client.
A percentage greater than 50 indicates that this machine is acting more as a
server for others. A percentage less than 50 indicates this machine is
acting more as a client.
NFS IOs include reads and writes from successful calls to getattr, setattr,
lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_SERVER_IO_PCT_CUM
----------------------------------
The percentage of NFS IOs the local machine has completed as an NFS server
versus total NFS IOs completed over the cumulative collection time. This
number represents physical IOs received by the server in contrast to a call
which is an attempt to initiate these operations.
Each computer can operate as both an NFS server and an NFS client.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
A percentage greater than 50 indicates that this machine is acting more as a
server for others. A percentage less than 50 indicates this machine is
acting more as a client.
NFS IOs include reads and writes from successful calls to getattr, setattr,
lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_SERVER_IO_RATE
----------------------------------
The number of NFS IOs per second the local machine has completed as an NFS
server during the interval. This number represents physical IOs received by
the server in contrast to a call which is an attempt to initiate these
operations.
Each computer can operate as both an NFS server and an NFS client.
NFS IOs include reads and writes from successful calls to getattr, setattr,
lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_SERVER_IO_RATE_CUM
----------------------------------
The number of NFS IOs per second the local machine has completed as an NFS
server over the cumulative collection time. This number represents physical
IOs received by the server in contrast to a call which is an attempt to
initiate these operations.
Each computer can operate as both an NFS server and an NFS client.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
NFS IOs include reads and writes from successful calls to getattr, setattr,
lookup, read, readdir, readlink, write, and writecache.
GBL_NFS_SERVER_READ_RATE
----------------------------------
The number of NFS “read” operations per second the system processed as an NFS
server during the interval.
NFS Version 2 read operations consist of getattr, lookup, readlink, readdir,
null, root, statfs, and read.
NFS Version 3 read operations consist of getattr, lookup, access, readlink,
read, readdir, readdirplus, fsstat, fsinfo, and null.
GBL_NFS_SERVER_READ_RATE_CUM
----------------------------------
The average number of NFS “read” operations per second the system processed
as an NFS server over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
NFS Version 2 read operations consist of getattr, lookup, readlink, readdir,
null, root, statfs, and read.
NFS Version 3 read operations consist of getattr, lookup, access, readlink,
read, readdir, readdirplus, fsstat, fsinfo, and null.
GBL_NFS_SERVER_WRITE_RATE
----------------------------------
The number of NFS “write” operations per second the system processed as an
NFS server during the interval.
NFS Version 2 write operations consist of setattr, write, writecache,
create, remove, rename, link, symlink, mkdir, and rmdir.
NFS Version 3 write operations consist of setattr, write, create, mkdir,
symlink, mknod, remove, rmdir, rename, link, pathconf, and commit.
GBL_NFS_SERVER_WRITE_RATE_CUM
----------------------------------
The average number of NFS “write” operations per second the system processed
as an NFS server over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
NFS Version 2 write operations consist of setattr, write, writecache,
create, remove, rename, link, symlink, mkdir, and rmdir.
NFS Version 3 write operations consist of setattr, write, create, mkdir,
symlink, mknod, remove, rmdir, rename, link, pathconf, and commit.
GBL_NODENAME
----------------------------------
On Unix systems, this is the name of the computer as returned by the command
“uname -n” (that is, the string returned from the “hostname” program).
On Windows, this is the name of the computer as returned by GetComputerName.
GBL_NUM_ACTIVE_LS
----------------------------------
This indicates the number of LS hosted in a system that are active. If Perf
Agent is installed in a guest or in a standalone system, this value will be 0.
On Solaris non-global zones, this metric shows value as 0.
GBL_NUM_APP
----------------------------------
The number of applications defined in the parm file plus one (for “other”).
The application called “other” captures all other processes not defined in
the parm file.
You can define up to 999 applications.
GBL_NUM_CPU
----------------------------------
The number of physical CPUs on the system. This includes all CPUs, either
online or offline. For HP-UX and certain versions of Linux, the sar(1M)
command allows you to check the status of the system CPUs. For SUN and DEC,
the commands psrinfo(1M) and psradm(1M) allow you to check or change the
status of the system CPUs. For AIX, this metric indicates the maximum number
of CPUs the system ever had.
On a logical system, this metric indicates the number of virtual CPUs
configured. When hardware threads are enabled, this metric indicates the
number of logical processors.
On Solaris non-global zones with Uncapped CPUs, this metric shows data from
the global zone.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
The Linux kernel currently doesn’t provide any metadata information for
disabled CPUs. This means that there is no way to find out types, speeds, as
well as hardware IDs or any other information that is used to determine the
number of cores, the number of threads, the HyperThreading state, etc... If
the agent (or Glance) is started while some of the CPUs are disabled, some of
these metrics will be “na”, some will be based on what is visible at startup
time. All information will be updated if/when additional CPUs are enabled and
information about them becomes available. The configuration counts will
remain at the highest discovered level (i.e. if CPUs are then disabled, the
maximum number of CPUs/cores/etc... will remain at the highest observed
level). It is recommended that the agent be started with all CPUs enabled.
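On Linux, the commands below offer a hedged way to cross-check the processor
counts that this metric reports; they are illustrations only and, as described
above, may not agree exactly when some CPUs are disabled:
lscpu                              # sockets, cores per socket, threads per core
nproc --all                        # number of installed processors
grep -c ^processor /proc/cpuinfo   # logical processors currently visible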
GBL_NUM_CPU_CORE
----------------------------------
This metric provides the total number of CPU cores on a physical system. On
VMs, this metric shows information according to resources available on that
VM. On non-HP-UX systems, this metric is equivalent to active CPU cores. On
AIX System WPARs, this metric value is identical to the value on AIX Global
Environment. On Windows, this metric will be “na” on Windows Server 2003
Itanium systems.
The Linux kernel currently doesn’t provide any metadata information for
disabled CPUs. This means that there is no way to find out types, speeds, as
well as hardware IDs or any other information that is used to determine the
number of cores, the number of threads, the HyperThreading state, etc... If
the agent (or Glance) is started while some of the CPUs are disabled, some of
these metrics will be “na”, some will be based on what is visible at startup
time. All information will be updated if/when additional CPUs are enabled and
information about them becomes available. The configuration counts will
remain at the highest discovered level (i.e. if CPUs are then disabled, the
maximum number of CPUs/cores/etc... will remain at the highest observed
level). It is recommended that the agent be started with all CPUs enabled.
GBL_NUM_DISK
----------------------------------
The number of disks on the system. Only local disk devices are counted in
this metric.
On HP-UX, this is a count of the number of disks on the system that have ever
had activity over the cumulative collection time.
On Solaris non-global zones, this metric shows value as 0.
On AIX System WPARs, this metric shows value as 0.
GBL_NUM_LS
----------------------------------
This indicates the number of LS hosted in a system. If Perf Agent is
installed in a guest or in a standalone system, this value will be 0.
On Solaris non-global zones, this metric shows value as 0.
GBL_NUM_NETWORK
----------------------------------
The number of network interfaces on the system. This includes the loopback
interface. On certain platforms, this also includes FDDI, Hyperfabric, ATM,
Serial Software interfaces such as SLIP or PPP, and Wide Area Network
interfaces (WAN) such as ISDN or X.25. The “netstat -i” command also
displays the list of network interfaces on the system.
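As a rough cross-check on Linux, the interface count (including loopback) can
be listed with either of the following; this is an illustration, not the
metric's defined data source:
netstat -i                 # interface table, one line per interface
ip -o link | wc -l         # count of configured network interfaces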
GBL_NUM_SOCKET
----------------------------------
The number of physical cpu sockets on the system. On VMs, this metric shows
information according to resources available on that VM.
On Windows, this metric will be “na” on Windows Server 2003 Itanium systems.
GBL_NUM_SWAP
----------------------------------
The number of configured swap areas.
GBL_NUM_TT
----------------------------------
The number of unique Transaction Tracker (TT) transactions that have been
registered on this system.
GBL_NUM_USER
----------------------------------
The number of users logged in at the time of the interval sample. This is
the same as the command “who | wc -l”.
For Unix systems, the information for this metric comes from the utmp file
which is updated by the login command. For more information, read the man
page for utmp. Some applications may create users on the system without
using login and updating the utmp file. These users are not reflected in
this count.
This metric can be a general indicator of system usage. In a networked
environment, however, users may maintain inactive logins on several systems.
On Windows, the information for this metric comes from the Server Sessions
counter in the Performance Libraries Server object. It is a count of the
number of users using this machine as a file server.
GBL_OSKERNELTYPE
----------------------------------
This indicates the word size of the current kernel on the system. Some
hardware can load the 64-bit kernel or the 32-bit kernel.
GBL_OSKERNELTYPE_INT
----------------------------------
This indicates the word size of the current kernel on the system. Some
hardware can load the 64-bit kernel or the 32-bit kernel.
GBL_OSNAME
----------------------------------
A string representing the name of the operating system. On Unix systems,
this is the same as the output from the “uname -s” command.
GBL_OSRELEASE
----------------------------------
The current release of the operating system.
On most Unix systems, this is same as the output from the “uname -r” command.
On AIX, this is the actual patch level of the operating system. This is
similar to what is returned by the command “lslpp -l bos.rte” as the most
recent level of the COMMITTED Base OS Runtime. For example, “5.2.0”.
GBL_OSVERSION
----------------------------------
A string representing the version of the operating system. This is the same
as the output from the “uname -v” command. This string is limited to 20
characters, and as a result, the complete version name might be truncated.
On Windows, this is a string representing the service pack installed on the
operating system.
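On Unix systems, the three uname fields backing GBL_OSNAME, GBL_OSRELEASE, and
GBL_OSVERSION can be checked directly; the sample output in the comments below
is hypothetical:
uname -s       # operating system name, e.g. Linux
uname -r       # release, e.g. 2.6.32-358.el6.x86_64 (hypothetical)
uname -v       # version string (truncated to 20 characters in the metric)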
GBL_PRI_QUEUE
----------------------------------
The average number of processes or kernel threads blocked on PRI (waiting for
their priority to become high enough to get the CPU) during the interval.
To determine if the CPU is a bottleneck, compare this metric with
GBL_CPU_TOTAL_UTIL. If GBL_CPU_TOTAL_UTIL is near 100 percent and
GBL_PRI_QUEUE is greater than three, there is a high probability of a CPU
bottleneck.
This is calculated as the accumulated time that all processes or kernel
threads spent blocked on PRI divided by the interval time.
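As a hedged worked example of that calculation (the numbers are hypothetical):
if four threads were each blocked on PRI for 2.5 seconds of a 5-second
interval, the accumulated blocked time is 10 seconds and the metric would
report 2.0.
# accumulated PRI-blocked time / interval time = GBL_PRI_QUEUE
echo "scale=1; (4 * 2.5) / 5" | bc    # prints 2.0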
HP-UX RUN/PRI/CPU Queue differences for multi-cpu systems:
For example, let’s assume we’re using a system with eight processors. We
start eight CPU intensive threads that consume almost all of the CPU
resources. The approximate values shown for the CPU related queue metrics
would be:
GBL_RUN_QUEUE = 1.0
GBL_PRI_QUEUE = 0.1
GBL_CPU_QUEUE = 1.0
Assume we start an additional eight CPU intensive threads. The approximate
values now shown are:
GBL_RUN_QUEUE = 2.0
GBL_PRI_QUEUE = 8.0
GBL_CPU_QUEUE = 16.0
At this point, we have sixteen CPU intensive threads running on the eight
processors. Keeping the definitions of the three queue metrics in mind, the
run queue is 2 (that is, 16 / 8); the pri queue is 8 (only half of the
threads can be active at any given time); and the cpu queue is 16 (half of
the threads waiting in the cpu queue that are ready to run, plus one for each
active thread).
This illustrates that the run queue is the average of number of threads
waiting in the runqueue for all processors; the pri queue is the number of
threads that are blocked on “PRI” (priority); and the cpu queue is the number
of threads in the cpu queue that are ready to run, including the threads
using the CPU.
Note that if the value for GBL_PRI_QUEUE greatly exceeds the value for
GBL_RUN_QUEUE, this may be a side-effect of the measurement interface having
lost trace data. In this case, check the value of the
GBL_LOST_MI_TRACE_BUFFERS metric. If there has been buffer loss, you can
correct the value of GBL_PRI_QUEUE by restarting the midaemon and the
performance tools. You can use the /opt/perf/bin/midaemon -T command to
force immediate shutdown of the measurement interface.
On Linux, if thread collection is disabled, only the first thread of each
multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels not including CFS, the Completely Fair
Scheduler.
The Global QUEUE metrics, which are based on block states, represent the
average number of process or kernel thread counts, not actual queues.
The Global WAIT PCT metrics, which are also based on block states, represent
the percentage of all processes or kernel threads that were alive on the
system.
No direct comparison is reasonable with the Application WAIT PCT metrics
since they represent percentages within the context of a specific application
and cannot be summed or compared with global values easily. In addition, the
sum of each Application WAIT PCT for all applications will not equal 100%
since these values will vary greatly depending on the number of processes or
kernel threads in each application.
For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the
APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many
processes on the system, but there are only a very small number of processes
in the specific application that is being examined and there is a high
percentage of those few processes that are blocked on the disk I/O subsystem.
GBL_PRI_WAIT_PCT
----------------------------------
The percentage of time processes or kernel threads were blocked on PRI
(waiting for their priority to become high enough to get the CPU) during the
interval.
This is calculated as the accumulated time that all processes or kernel
threads spent blocked on PRI divided by the accumulated time that all
processes or kernel threads were alive during the interval.
On Linux, if thread collection is disabled, only the first thread of each
multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels not including CFS, the Completely Fair
Scheduler.
The Global QUEUE metrics, which are based on block states, represent the
average number of process or kernel thread counts, not actual queues.
The Global WAIT PCT metrics, which are also based on block states, represent
the percentage of all processes or kernel threads that were alive on the
system.
No direct comparison is reasonable with the Application WAIT PCT metrics
since they represent percentages within the context of a specific application
and cannot be summed or compared with global values easily. In addition, the
sum of each Application WAIT PCT for all applications will not equal 100%
since these values will vary greatly depending on the number of processes or
kernel threads in each application.
For example, the GBL_DISK_SUBSYSTEM_QUEUE values can be low, while the
APP_DISK_SUBSYSTEM_WAIT_PCT values can be high. In this case, there are many
processes on the system, but there are only a very small number of processes
in the specific application that is being examined and there is a high
percentage of those few processes that are blocked on the disk I/O subsystem.
GBL_PRI_WAIT_TIME
----------------------------------
The accumulated time, in seconds, that all processes or kernel threads were
blocked on PRI (waiting for their priority to become high enough to get the
CPU) during the interval.
On Linux, if thread collection is disabled, only the first thread of each
multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels not including CFS, the Completely Fair
Scheduler.
GBL_PROC_SAMPLE
----------------------------------
The number of process data samples that have been averaged into global
metrics (such as GBL_ACTIVE_PROC) that are based on process samples.
GBL_RUN_QUEUE
----------------------------------
On UNIX systems except Linux, this is the average number of threads waiting
in the runqueue over the interval. The average is computed against the number
of times the run queue is occupied instead of time. The average is updated by
the kernel at a fine grain interval, only when the run queue is occupied. It
is not averaged against the interval and can therefore be misleading for long
intervals when the run queue is empty most or part of the time. This value
matches runq-sz reported by the “sar -q” command. The GBL_LOADAVG* metrics
are better indicators of run queue pressure.
On Linux and Windows, this is an instantaneous value obtained at the time of
logging. On Linux, it shows the number of threads waiting in the runqueue.
On Windows, it shows the Processor Queue Length.
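On Linux, a hedged way to see an instantaneous runnable-thread count similar
to this metric is the fourth field of /proc/loadavg (runnable/total scheduling
entities) or the runq-sz column of “sar -q”; the sample output shown in the
comment is hypothetical:
cat /proc/loadavg     # e.g. 0.42 0.30 0.25 3/612 12345 -> 3 runnable threads
sar -q 5 3            # runq-sz column, if the sysstat package is installed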
On Unix systems, GBL_RUN_QUEUE will typically be a small number. Larger than
normal values for this metric indicate CPU contention among threads. This
CPU bottleneck is also normally indicated by 100 percent GBL_CPU_TOTAL_UTIL.
It may be OK to have GBL_CPU_TOTAL_UTIL be 100 percent if no other threads
are waiting for the CPU. However, if GBL_CPU_TOTAL_UTIL is 100 percent and
GBL_RUN_QUEUE is greater than the number of processors, it indicates a CPU
bottleneck.
On Windows, the Processor Queue reflects a count of process threads which are
ready to execute. A thread is ready to execute (in the Ready state) when the
only resource it is waiting on is the processor. The Windows operating
system itself has many system threads which intermittently use small amounts
of processor time. Several low priority threads intermittently wake up and
execute for very short intervals. Depending on when the collection process
samples this queue, there may be none or several of these low-priority
threads trying to execute. Therefore, even on an otherwise quiescent system,
the Processor Queue Length can be high. High values for this metric during
intervals where the overall CPU utilization (gbl_cpu_total_util) is low do
not indicate a performance bottleneck. Relatively high values for this
metric during intervals where the overall CPU utilization is near 100% can
indicate a CPU performance bottleneck.
HP-UX RUN/PRI/CPU Queue differences for multi-cpu systems:
For example, let’s assume we’re using a system with eight processors. We
start eight CPU intensive threads that consume almost all of the CPU
resources. The approximate values shown for the CPU related queue metrics
would be:
GBL_RUN_QUEUE = 1.0
GBL_PRI_QUEUE = 0.1
GBL_CPU_QUEUE = 1.0
Assume we start an additional eight CPU intensive threads. The approximate
values now shown are:
GBL_RUN_QUEUE = 2.0
GBL_PRI_QUEUE = 8.0
GBL_CPU_QUEUE = 16.0
At this point, we have sixteen CPU intensive threads running on the eight
processors. Keeping the definitions of the three queue metrics in mind, the
run queue is 2 (that is, 16 / 8); the pri queue is 8 (only half of the
threads can be active at any given time); and the cpu queue is 16 (half of
the threads waiting in the cpu queue that are ready to run, plus one for each
active thread).
This illustrates that the run queue is the average of number of threads
waiting in the runqueue for all processors; the pri queue is the number of
threads that are blocked on “PRI” (priority); and the cpu queue is the number
of threads in the cpu queue that are ready to run, including the threads
using the CPU.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_RUN_QUEUE_CUM
----------------------------------
On UNIX systems except Linux, this is the average number of threads waiting
in the runqueue over the cumulative collection time.
On Linux, this is approximately the average number of threads waiting in the
runqueue over the cumulative collection time.
On Windows, this is approximately the average Processor Queue Length over the
cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
In this case, this metric is a cumulative average of data that was collected
as an average. This metric is derived from GBL_RUN_QUEUE.
HP-UX RUN/PRI/CPU Queue differences for multi-cpu systems:
For example, let’s assume we’re using a system with eight processors. We
start eight CPU intensive threads that consume almost all of the CPU
resources. The approximate values shown for the CPU related queue metrics
would be:
GBL_RUN_QUEUE = 1.0
GBL_PRI_QUEUE = 0.1
GBL_CPU_QUEUE = 1.0
Assume we start an additional eight CPU intensive threads. The approximate
values now shown are:
GBL_RUN_QUEUE = 2.0
GBL_PRI_QUEUE = 8.0
GBL_CPU_QUEUE = 16.0
At this point, we have sixteen CPU intensive threads running on the eight
processors. Keeping the definitions of the three queue metrics in mind, the
run queue is 2 (that is, 16 / 8); the pri queue is 8 (only half of the
threads can be active at any given time); and the cpu queue is 16 (half of
the threads waiting in the cpu queue that are ready to run, plus one for each
active thread).
This illustrates that the run queue is the average of number of threads
waiting in the runqueue for all processors; the pri queue is the number of
threads that are blocked on “PRI” (priority); and the cpu queue is the number
of threads in the cpu queue that are ready to run, including the threads
using the CPU.
GBL_RUN_QUEUE_HIGH
----------------------------------
On UNIX systems except Linux, this is the highest value of the average number
of threads waiting in the runqueue during any interval over the cumulative
collection time.
On Linux, this is the highest number of threads waiting in the runqueue during
any interval over the cumulative collection time.
GBL_SAMPLE
----------------------------------
The number of data samples (intervals) that have occurred over the cumulative
collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
GBL_SERIALNO
----------------------------------
On HP-UX, this is the ID number of the computer as returned by the command
“uname -i”. If this value is not available, an empty string is returned.
On SUN, this is the ASCII representation of the hardware-specific serial
number. This is printed in hexadecimal as presented by the “hostid” command
when possible. If that is not possible, the decimal format is provided
instead.
On AIX, this is the machine ID number as returned by the command “uname -m”.
This number has the form xxyyyyyymmss. For the RISC System/6000, the “xx”
positions are always 00. The “yyyyyy” positions contain the unique ID number
for the central processing unit (CPU), the “mm” positions represent the model
number, and “ss” is the submodel number (always 00).
On Linux, this is the ASCII representation of the hardware-specific serial
number, as returned by the command “hostid”.
GBL_STARTDATE
----------------------------------
The date that the collector started.
GBL_STARTED_PROC
----------------------------------
The number of processes that started during the interval.
GBL_STARTED_PROC_RATE
----------------------------------
The number of processes that started per second during the interval.
GBL_STARTTIME
----------------------------------
The time of day that the collector started.
GBL_STATDATE
----------------------------------
The date at the end of the interval, based on local time.
GBL_STATTIME
----------------------------------
An ASCII string representing the time at the end of the interval, based on
local time.
GBL_SWAP_SPACE_AVAIL
----------------------------------
The total amount of potential swap space, in MB.
On HP-UX, this is the sum of the device swap areas enabled by the swapon
command, the allocated size of any file system swap areas, and the allocated
size of pseudo swap in memory if enabled. Note that this is potential swap
space. This is the same as (AVAIL: total) as reported by the “swapinfo -mt”
command.
On SUN, this is the total amount of swap space available from the physical
backing store devices (disks) plus the amount currently available from main
memory. This is the same as (used + available) /1024, reported by the “swap
-s” command.
On Linux, this is same as (Swap: total) as reported by the “free -m” command.
On Unix systems, this metric is updated every 30 seconds or the sampling
interval, whichever is greater.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
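On Linux, the value described above can be cross-checked against the commands
below; this is only an illustration of the “free -m” source already mentioned:
free -m            # “Swap:” row, “total” column, in MB
swapon -s          # individual swap areas and their sizes
cat /proc/swaps    # the same information read from the kernel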
GBL_SWAP_SPACE_AVAIL_KB
----------------------------------
The total amount of potential swap space, in KB.
On HP-UX, this is the sum of the device swap areas enabled by the swapon
command, the allocated size of any file system swap areas, and the allocated
size of pseudo swap in memory if enabled. Note that this is potential swap
space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this
space may actually be usable. For example, on a 61MB disk using 2 MB swap
size allocations, 1 MB remains unusable and is considered wasted space.
On HP-UX, this is the same as (AVAIL: total) as reported by the “swapinfo -t”
command.
On SUN, this is the total amount of swap space available from the physical
backing store devices (disks) plus the amount currently available from main
memory. This is the same as (used + available)/1024, reported by the
“swap -s” command.
On Unix systems, this metric is updated every 30 seconds or the sampling
interval, whichever is greater.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_SWAP_SPACE_DEVICE_AVAIL
----------------------------------
The amount of swap space configured on disk devices exclusively as swap space
(in MB).
On Unix systems, this metric is updated every 30 seconds or the sampling
interval, whichever is greater.
On Solaris non-global zones, this metric is N/A.
GBL_SWAP_SPACE_DEVICE_UTIL
----------------------------------
On HP-UX, this is the percentage of device swap space currently in use of the
total swap space available. This does not include file system or remote swap
space.
On HP-UX, note that available swap is only potential swap space. Since swap
is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be
usable. For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB
remains unusable and is considered wasted space. Consequently, 100 percent
utilization on a single device is not always obtainable. The wasted swap
space, and the remainder of allocated SWCHUNKs that have not been used is
what is reported in the hold field of the /usr/sbin/swapinfo command.
On HP-UX, when compared to the “swapinfo -mt” command results, this is
calculated as:
Util = ((USED: dev) sum / (AVAIL: total)) * 100
On SUN, this is the percentage of total system device swap space currently in
use. This metric only gives the percentage of swap space used from the
available physical swap device space, and does not include the memory that
can be used for swap. (On SunOS 5.X, the virtual swap swapfs can allocate
swap space from memory.)
On Unix systems, this metric is updated every 30 seconds or the sampling
interval, whichever is greater.
On Solaris non-global zones, this metric is N/A.
GBL_SWAP_SPACE_USED
----------------------------------
The amount of swap space used, in MB.
On HP-UX, “Used” indicates written to disk (or locked in memory), rather than
reserved. This is the same as (USED: total - reserve) as reported by the
“swapinfo -mt” command.
On SUN, “Used” indicates amount written to disk (or locked in memory), rather
than reserved. Swap space is reserved (by decrementing a counter) when
virtual memory for a program is created. This is the same as (bytes
allocated)/1024, reported by the “swap -s” command.
On Linux, this is same as (Swap: used) as reported by the “free -m” command.
On AIX System WPARs, this metric is NA.
On Solaris non-global zones, this metric is N/A. On Unix systems, this
metric is updated every 30 seconds or the sampling interval, whichever is
greater.
GBL_SWAP_SPACE_USED_UTIL
----------------------------------
This is the percentage of swap space used.
On HP-UX, “Used %” indicates percentage of swap space written to disk (or
locked in memory), rather than reserved. This is the same as percentage of
((USED: total - reserve)/total)*100, as reported by the “swapinfo -mt”
command.
On SUN, “Used %” indicates percentage of swap space written to disk (or
locked in memory), rather than reserved. Swap space is reserved (by
decrementing a counter) when virtual memory for a program is created. This
is the same as percentage of ((bytes allocated)/total)*100, reported by the
“swap -s” command.
On SUN, global swap space is tracked through the operating system. Device
swap space is tracked through the devices. For this reason, the amount of
swap space used may differ between the global and by-device metrics.
Sometimes pages that are marked to be swapped to disk by the operating system
are never swapped. The operating system records this as used swap space, but
the devices do not, since no physical IOs occur. (Metrics with the prefix
“GBL” are global and metrics with the prefix “BYSWP” are by device.)
On Linux, this is same as percentage of ((Swap: used)/total)*100, as reported
by the “free -m” command.
On Unix systems, this metric is updated every 30 seconds or the sampling
interval, whichever is greater.
On Solaris non-global zones, this metric is N/A.
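A hedged one-liner that mirrors the Linux calculation described above,
((Swap: used)/total)*100, using the “free -m” columns:
# assumes a non-zero swap total; column 2 is total, column 3 is used
free -m | awk '/^Swap:/ { printf "%.1f\n", ($3 / $2) * 100 }'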
GBL_SWAP_SPACE_UTIL
----------------------------------
The percent of available swap space that was being used by running processes
in the interval.
On Windows, this is the percentage of virtual memory, which is available to
user processes, that is in use at the end of the interval. It is not an
average over the entire interval. It reflects the ratio of committed memory
to the current commit limit. The limit may be increased by the operating
system if the paging file is extended. This is the same as (Committed Bytes
/ Commit Limit) * 100 when comparing the results to Performance Monitor.
On HP-UX, swap space must be reserved (but not allocated) before virtual
memory can be created. If all of available swap is reserved, then no new
processes or virtual memory can be created. Swap space locations are
actually assigned (used) when a page is actually written to disk or locked in
memory (pseudo swap in memory). This is the same as (PCT USED: total) as
reported by the “swapinfo -mt” command.
On Unix systems, this metric is a measure of capacity rather than
performance. As this metric nears 100 percent, processes are not able to
allocate any more memory and new processes may not be able to run. Very low
swap utilization values may indicate that too much area has been allocated to
swap, and better use of disk space could be made by reallocating some swap
partitions to be user filesystems.
On Unix systems, this metric is updated every 30 seconds or the sampling
interval, whichever is greater.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_SWAP_SPACE_UTIL_CUM
----------------------------------
The average percentage of available swap space currently in use (has memory
belonging to processes paged or swapped out on it) over the cumulative
collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, note that available swap is only potential swap space. Since swap
is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be
usable. For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB
remains unusable and is considered wasted space. Consequently, 100 percent
utilization on a single device is not always obtainable.
On Unix systems, this metric is updated every 30 seconds or the sampling
interval, whichever is greater.
GBL_SWAP_SPACE_UTIL_HIGH
----------------------------------
The highest average percentage of available swap space currently in use (has
memory belonging to processes paged or swapped out on it) in any interval
over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On HP-UX, note that available swap is only potential swap space. Since swap
is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be
usable. For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB
remains unusable and is considered wasted space. Consequently, 100 percent
utilization on a single device is not always obtainable.
On Unix systems, this metric is updated every 30 seconds or the sampling
interval, whichever is greater.
GBL_SYSTEM_ID
----------------------------------
The network node hostname of the system. This is the same as the output from
the “uname -n” command.
On Windows, the name obtained from GetComputerName.
GBL_SYSTEM_TYPE
----------------------------------
On Unix systems, this is either the model of the system or the instruction
set architecture of the system.
On Windows, this is the processor architecture of the system.
GBL_SYSTEM_UPTIME_HOURS
----------------------------------
The time, in hours, since the last system reboot.
GBL_SYSTEM_UPTIME_SECONDS
----------------------------------
The time, in seconds, since the last system reboot.
GBL_THRESHOLD_PROCCPU
----------------------------------
The process CPU threshold specified in the parm file.
GBL_THRESHOLD_PROCDISK
----------------------------------
The process disk threshold specified in the parm file.
GBL_THRESHOLD_PROCIO
----------------------------------
The process IO threshold specified in the parm file.
GBL_THRESHOLD_PROCMEM
----------------------------------
The process memory threshold specified in the parm file.
GBL_TT_OVERFLOW_COUNT
----------------------------------
The number of new transactions that could not be measured because the
Measurement Processing Daemon’s (midaemon) Measurement Performance Database
is full. If this happens, the default Measurement Performance Database size
is not large enough to hold all of the registered transactions on this
system. This can be remedied by stopping and restarting the midaemon process
using the -smdvss option to specify a larger Measurement Performance Database
size. The current Measurement Performance Database size can be checked using
the midaemon -sizes option.
PROC_APP_ID
----------------------------------
The ID number of the application to which the process (or kernel thread, if
HP-UX/Linux Kernel 2.6 and above) belonged during the interval.
Application “other” always has an ID of 1. There can be up to 999 user-
defined applications, which are defined in the parm file.
PROC_APP_NAME
----------------------------------
The application name of a process (or kernel thread, if HP-UX/Linux Kernel
2.6 and above).
Processes (or kernel threads, if HP-UX/Linux Kernel 2.6 and above) are
assigned into application groups based upon rules in the parm file. If a
process does not fit any rules in this file, it is assigned to the
application “other.”
The rules include decisions based upon pathname, user ID, priority, and so
forth. As these values change during the life of a process (or kernel
thread, if HP-UX/Linux Kernel 2.6 and above), it is re-assigned to another
application. This re-evaluation is done every measurement interval.
PROC_CHILD_CPU_SYS_MODE_UTIL
----------------------------------
The percentage of system time accumulated by this process’s children
processes during the interval.
On Unix systems, when a process terminates, its CPU counters (user and
system) are accumulated in the parent’s “children times” counters. This
occurs when the parent waits for (or reaps) the child. See getrusage(2). If
the process is an orphan process, its parent becomes the init(1m) process,
and its CPU times will be accumulated to the init process upon termination.
The PROC*_CHILD_* metrics attempt to report these counters in a meaningful
way. If these counters were reported unconditionally as they are incremented,
they would be misleading. For example, consider a shell process that forks
another process and that process accumulates 100 minutes of CPU time. When
that process terminates, the shell would report a huge child time utilization
for that interval even though it was generally idle, waiting for that child
to terminate. The child process was most likely already reported in previous
intervals as it used the CPU time, and therefore it would be confusing to
report this time in the parent. If, on the other hand, a process was
continuously forking short-lived processes during the interval, it would be
useful to report the CPU time used by those children processes. The simple
algorithm chosen is to only report children times when their total CPU time
is less than the process alive interval, and zero otherwise. It is not fool-
proof but it generally yields the right results, i.e., if a process reports
high child time utilization for several intervals in a row, it could be a
runaway forking process. An example of such a runaway process (or “fork
bomb”) is:
while true ; do ps -ef | grep something ; done
Moderate children times are also a useful way to identify daemons that rely
on child processes, or, in the case of the init process it may indicate that
many short-lived orphan processes are being created.
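The following minimal C sketch illustrates the getrusage(2) children
counters described above; it demonstrates the operating system behavior only
and is not the collector's instrumentation:

/* Illustrative sketch: reaping a child accumulates its CPU time into the
   parent's RUSAGE_CHILDREN counters (see getrusage(2)). */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    if (fork() == 0) {                /* child: burn a little user-mode CPU */
        for (volatile long i = 0; i < 100000000L; i++)
            ;
        _exit(0);
    }
    wait(NULL);                       /* reaping updates the children counters */

    struct rusage ru;
    getrusage(RUSAGE_CHILDREN, &ru);
    printf("children user time:   %ld.%06ld s\n",
           (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
    printf("children system time: %ld.%06ld s\n",
           (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
    return 0;
}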
Note that this metric is only valid at the process level. It reports CPU time
of processes forked and does not report on threads created by processes. The
PROC*_CHILD* metrics have no meaning at the thread level, therefore the
thread metric of the same name, on systems that report per-thread data, will
show “na”.
PROC_CHILD_CPU_TOTAL_UTIL
----------------------------------
The percentage of system + user time accumulated by this process’s children
processes during the interval.
On Unix systems, when a process terminates, its CPU counters (user and
system) are accumulated in the parent’s “children times” counters. This
occurs when the parent waits for (or reaps) the child. See getrusage(2). If
the process is an orphan process, its parent becomes the init(1m) process,
and its CPU times will be accumulated to the init process upon termination.
The PROC*_CHILD_* metrics attempt to report these counters in a meaningful
way. If these counters were reported unconditionally as they are incremented,
they would be misleading. For example, consider a shell process that forks
another process and that process accumulates 100 minutes of CPU time. When
that process terminates, the shell would report a huge child time utilization
for that interval even though it was generally idle, waiting for that child
to terminate. The child process was most likely already reported in previous
intervals as it used the CPU time, and therefore it would be confusing to
report this time in the parent. If, on the other hand, a process was
continuously forking short-lived processes during the interval, it would be
useful to report the CPU time used by those children processes. The simple
algorithm chosen is to only report children times when their total CPU time
is less than the process alive interval, and zero otherwise. It is not fool-
proof but it generally yields the right results, i.e., if a process reports
high child time utilization for several intervals in a row, it could be a
runaway forking process. An example of such a runaway process (or “fork
bomb”) is:
while true ; do ps -ef | grep something ; done
Moderate children times are also a useful way to identify daemons that rely
on child processes, or, in the case of the init process it may indicate that
many short-lived orphan processes are being created.
Note that this metric is only valid at the process level. It reports CPU time
of processes forked and does not report on threads created by processes. The
PROC*_CHILD* metrics have no meaning at the thread level, therefore the
thread metric of the same name, on systems that report per-thread data, will
show “na”.
PROC_CHILD_CPU_USER_MODE_UTIL
----------------------------------
The percentage of user time accumulated by this process’s children processes
during the interval.
On Unix systems, when a process terminates, its CPU counters (user and
system) are accumulated in the parent’s “children times” counters. This
occurs when the parent waits for (or reaps) the child. See getrusage(2). If
the process is an orphan process, its parent becomes the init(1m) process,
and its CPU times will be accumulated to the init process upon termination.
The PROC*_CHILD_* metrics attempt to report these counters in a meaningful
way. If these counters were reported unconditionally as they are incremented,
they would be misleading. For example, consider a shell process that forks
another process and that process accumulates 100 minutes of CPU time. When
that process terminates, the shell would report a huge child time utilization
for that interval even though it was generally idle, waiting for that child
to terminate. The child process was most likely already reported in previous
intervals as it used the CPU time, and therefore it would be confusing to
report this time in the parent. If, on the other hand, a process was
continuously forking short-lived processes during the interval, it would be
useful to report the CPU time used by those children processes. The simple
algorithm chosen is to only report children times when their total CPU time
is less than the process alive interval, and zero otherwise. It is not fool-
proof but it generally yields the right results, i.e., if a process reports
high child time utilization for several intervals in a row, it could be a
runaway forking process. An example of such a runaway process (or “fork
bomb”) is:
while true ; do ps -ef | grep something ; done
Moderate children times are also a useful way to identify daemons that rely
on child processes, or, in the case of the init process it may indicate that
many short-lived orphan processes are being created.
Note that this metric is only valid at the process level. It reports CPU time
of processes forked and does not report on threads created by processes. The
PROC*_CHILD* metrics have no meaning at the thread level, therefore the
thread metric of the same name, on systems that report per-thread data, will
show “na”.
PROC_CPU_ALIVE_SYS_MODE_UTIL
----------------------------------
The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) in system mode as a percentage of the time it is alive
during the interval. On platforms other than HPUX, if the ignore_mt flag is
set (true) in the parm file, this metric will report values normalized against
the number of active cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
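To illustrate the two normalization bases, the following hypothetical C
sketch divides the same CPU time by the number of active cores and by the
number of logical CPUs; the figures and variable names are examples only and
do not reproduce the collector's exact computation:

/* Hypothetical example: a process that used 2 CPU-seconds while alive for
   2 seconds on a 4-core system with 2 hardware threads per core. */
#include <stdio.h>

int main(void)
{
    double cpu_seconds   = 2.0;       /* CPU consumed while alive        */
    double alive_seconds = 2.0;       /* time the process was alive      */
    int    active_cores  = 4;         /* basis when ignore_mt is set     */
    int    logical_cpus  = 8;         /* basis when ignore_mt is not set */

    printf("core-normalized:   %.1f%%\n",
           100.0 * cpu_seconds / (alive_seconds * active_cores));
    printf("thread-normalized: %.1f%%\n",
           100.0 * cpu_seconds / (alive_seconds * logical_cpus));
    return 0;
}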
PROC_CPU_ALIVE_TOTAL_UTIL
----------------------------------
The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) as a percentage of the time it is alive during the
interval. On platforms other than HPUX, if the ignore_mt flag is set (true)
in the parm file, this metric will report values normalized against the
number of active cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
PROC_CPU_ALIVE_USER_MODE_UTIL
----------------------------------
The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) in user mode as a percentage of the time it is alive
during the interval. On platforms other than HPUX, if the ignore_mt flag is
set (true) in the parm file, this metric will report values normalized against
the number of active cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
PROC_CPU_LAST_USED
----------------------------------
The ID number of the processor that last ran the process (or kernel thread,
if HP-UX/Linux Kernel 2.6 and above). For uni-processor systems, this value
is always zero.
On a threaded operating system, such as HP-UX 11.0 and beyond, this metric
represents a kernel thread characteristic. If this metric is reported for a
process, the value for its last executing kernel thread is given. For
example, if a process has multiple kernel threads and kernel thread one is
the last to execute during the interval, the metric value for kernel thread
one is assigned to the process.
PROC_CPU_SYS_MODE_TIME
----------------------------------
The CPU time in system mode in the context of the process (or kernel thread,
if HP-UX/Linux Kernel 2.6 and above) during the interval.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
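As an analogy only (not how the collector gathers this metric), a process
can observe its own split between user-mode and system-mode CPU time with
the times(2) system call, as in this minimal C sketch:

/* Illustrative sketch: report this process's own user and system CPU time.
   Values from times(2) are in clock ticks. */
#include <stdio.h>
#include <sys/times.h>
#include <unistd.h>

int main(void)
{
    struct tms t;
    long ticks = sysconf(_SC_CLK_TCK);

    for (volatile long i = 0; i < 50000000L; i++)   /* accumulate user time */
        ;
    times(&t);
    printf("user mode:   %.2f s\n", (double)t.tms_utime / ticks);
    printf("system mode: %.2f s\n", (double)t.tms_stime / ticks);
    return 0;
}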
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation. On platforms other than HPUX, if the ignore_mt
flag is set (true) in the parm file, this metric will report values normalized
against the number of active cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
PROC_CPU_SYS_MODE_TIME_CUM
----------------------------------
The CPU time in system mode in the context of the process (or kernel thread,
if HP-UX/Linux Kernel 2.6 and above) over the cumulative collection time.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation. On platforms other than HPUX, if the ignore_mt
flag is set (true) in the parm file, this metric will report values normalized
against the number of active cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
PROC_CPU_SYS_MODE_UTIL
----------------------------------
The percentage of time that the CPU was in system mode in the context of the
process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) during the
interval.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
Unlike the global and application CPU metrics, process CPU is not averaged
over the number of processors on systems with multiple CPUs. Single-threaded
processes can use only one CPU at a time and never exceed 100% CPU
utilization.
High system mode CPU utilizations are normal for IO intensive programs.
Abnormally high system CPU utilization can indicate that a hardware problem
is causing a high interrupt rate. It can also indicate programs that are not
using system calls efficiently.
A classic “hung shell” shows up with very high system mode CPU because it
gets stuck in a loop doing terminal reads (a system call) to a device that
never responds.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
On multi-processor systems, processes which have component kernel threads
executing simultaneously on different processors could have resource
utilization sums over 100%. If there is no CPU multi-threading, the maximum
percentage is 100% times the number of cores on the system. On a system with
multi-threaded CPUs, the maximum percentage is 100% times the number of cores
times 2 (that is, the total number of logical CPUs on the system). On
platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
PROC_CPU_SYS_MODE_UTIL_CUM
----------------------------------
The average percentage of time that the CPU was in system mode in the context
of the process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) over
the cumulative collection time.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
Unlike the global and application CPU metrics, process CPU is not averaged
over the number of processors on systems with multiple CPUs. Single-threaded
processes can use only one CPU at a time and never exceed 100% CPU
utilization.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
On multi-processor systems, processes which have component kernel threads
executing simultaneously on different processors could have resource
utilization sums over 100%. If there is no CPU multi-threading, the maximum
percentage is 100% times the number of cores on the system. On a system with
multi-threaded CPUs, the maximum percentage is 100% times the number of cores
times 2 (that is, the total number of logical CPUs on the system). On
platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
PROC_CPU_TOTAL_TIME
----------------------------------
The total CPU time, in seconds, consumed by a process (or kernel thread, if
HP-UX/Linux Kernel 2.6 and above) during the interval.
Unlike the global and application CPU metrics, process CPU is not averaged
over the number of processors on systems with multiple CPUs. Single-threaded
processes can use only one CPU at a time and never exceed 100% CPU
utilization.
On HP-UX, the total CPU time is the sum of the CPU time components for a
process or kernel thread, including system, user, context switch, interrupt
processing, realtime, and nice utilization values.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
On multi-processor systems, processes which have component kernel threads
executing simultaneously on different processors could have resource
utilization sums over 100%. If there is no CPU multi-threading, the maximum
percentage is 100% times the number of cores on the system. On a system with
multi-threaded CPUs, the maximum percentage is 100% times the number of cores
times 2 (that is, the total number of logical CPUs on the system). On
platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
PROC_CPU_TOTAL_TIME_CUM
----------------------------------
The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) over the cumulative collection time. CPU time is in
seconds unless otherwise specified.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
This is calculated as
PROC_CPU_TOTAL_TIME_CUM =
PROC_CPU_SYS_MODE_TIME_CUM +
PROC_CPU_USER_MODE_TIME_CUM
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation. On platforms other than HPUX, if the ignore_mt
flag is set (true) in the parm file, this metric will report values normalized
against the number of active cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
PROC_CPU_TOTAL_UTIL
----------------------------------
The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) as a percentage of the total CPU time available during
the interval.
Unlike the global and application CPU metrics, process CPU is not averaged
over the number of processors on systems with multiple CPUs. Single-threaded
processes can use only one CPU at a time and never exceed 100% CPU
utilization.
On HP-UX, the total CPU utilization is the sum of the CPU utilization
components for a process or kernel thread, including system, user, context
switch, interrupt processing, realtime, and nice utilization values.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
On multi-processor systems, processes which have component kernel threads
executing simultaneously on different processors could have resource
utilization sums over 100%. If there is no CPU multi-threading, the maximum
percentage is 100% times the number of cores on the system. On a system with
multi-threaded CPUs, the maximum percentage is 100% times the number of cores
times 2 (that is, the total number of logical CPUs on the system).
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
PROC_CPU_TOTAL_UTIL_CUM
----------------------------------
The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) as a percentage of the total CPU time available over
the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
Unlike the global and application CPU metrics, process CPU is not averaged
over the number of processors on systems with multiple CPUs. Single-threaded
processes can use only one CPU at a time and never exceed 100% CPU
utilization.
On HP-UX, the total CPU utilization is the sum of the CPU utilization
components for a process or kernel thread, including system, user, context
switch, interrupt processing, realtime, and nice utilization values.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
On multi-processor systems, processes which have component kernel threads
executing simultaneously on different processors could have resource
utilization sums over 100%. If there is no CPU multi-threading, the maximum
percentage is 100% times the number of cores on the system. On a system with
multi-threaded CPUs, the maximum percentage is 100% times the number of cores
times 2 (that is, the total number of logical CPUs on the system). On
platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
PROC_CPU_USER_MODE_TIME
----------------------------------
The time, in seconds, the process (or kernel threads, if HP-UX/Linux Kernel
2.6 and above) was using the CPU in user mode during the interval.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation. On platforms other than HPUX, if the ignore_mt
flag is set (true) in the parm file, this metric will report values normalized
against the number of active cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
PROC_CPU_USER_MODE_TIME_CUM
----------------------------------
The time, in seconds, the process (or kernel thread, if HP-UX/Linux Kernel
2.6 and above) was using the CPU in user mode over the cumulative collection
time.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation. On platforms other than HPUX, if the ignore_mt
flag is set (true) in the parm file, this metric will report values normalized
against the number of active cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
PROC_CPU_USER_MODE_UTIL
----------------------------------
The percentage of time the process (or kernel thread, if HP-UX/Linux Kernel
2.6 and above) was using the CPU in user mode during the interval.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
Unlike the global and application CPU metrics, process CPU is not averaged
over the number of processors on systems with multiple CPUs. Single-threaded
processes can use only one CPU at a time and never exceed 100% CPU
utilization.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
On multi-processor systems, processes which have component kernel threads
executing simultaneously on different processors could have resource
utilization sums over 100%. If there is no CPU multi-threading, the maximum
percentage is 100% times the number of cores on the system. On a system with
multi-threaded CPUs, the maximum percentage is 100% times the number of cores
times 2 (that is, the total number of logical CPUs on the system). On
platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
PROC_CPU_USER_MODE_UTIL_CUM
----------------------------------
The average percentage of time the process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) was using the CPU in user mode over the cumulative
collection time.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
Unlike the global and application CPU metrics, process CPU is not averaged
over the number of processors on systems with multiple CPUs. Single-threaded
processes can use only one CPU at a time and never exceed 100% CPU
utilization.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
On multi-processor systems, processes which have component kernel threads
executing simultaneously on different processors could have resource
utilization sums over 100%. If there is no CPU multi-threading, the maximum
percentage is 100% times the number of cores on the system. On a system with
multi-threaded CPUs, the maximum percentage is 100% times the number of cores
times 2 (that is, the total number of logical CPUs on the system). On
platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system. This
flag will be a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux,
glance, perfd) must be shut down and the midaemon restarted in the desired
mode. To start the midaemon with “-ignore_mt” by default, this option should
be added in the /etc/rc.config.d/ovpa control file. Refer to the
documentation regarding ovpa startup. Note that, on HPUX, unlike other
platforms, specifying core-based normalization affects CPU, application,
process and thread metrics.
PROC_DISK_PHYS_IO_RATE
----------------------------------
The average number of physical disk IOs per second made by the process or
kernel thread during the interval.
For processes which run for less than the measurement interval, this metric
is normalized over the measurement interval. For example, a process ran for
1 second and did 50 IOs during its life. If the measurement interval is 5
seconds, it is reported as having done 10 IOs per second. If the measurement
interval is 60 seconds, it is reported as having done 50/60 or 0.83 IOs per
second.
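The normalization in the example above is a simple division of the IO count
by the measurement interval, as this small illustrative C sketch shows:

/* Illustrative sketch: 50 IOs normalized over 5-second and 60-second
   measurement intervals, matching the example above. */
#include <stdio.h>

int main(void)
{
    double ios = 50.0;
    double intervals[] = { 5.0, 60.0 };

    for (int i = 0; i < 2; i++)
        printf("%4.0f-second interval: %.2f IOs per second\n",
               intervals[i], ios / intervals[i]);
    return 0;
}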
“Disk” in this instance refers to any locally attached physical disk drives
(that is, “spindles”) that may hold file systems and/or swap. NFS mounted
disks are not included in this list.
On HP-UX, since this value is reported by the drivers, multiple physical
requests that have been collapsed to a single physical operation (due to
driver IO merging) are only counted once.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices together (including tty and other
devices with disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO including disk IO, in Kilobytes or KB
rates. These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels only report IO statistics
for the main thread. In that case, patches are available that will allow the
process instrumentation to report the sum of all threads’ IOs, and will also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleading
inflated IO metrics for processes that fork a lot of children, such as
shells, or the init(1m) process.
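As an illustration only, Linux kernels built with per-process IO accounting
typically expose these counters through the /proc/<pid>/io file; the
following minimal C sketch simply dumps that file and is not the collector's
instrumentation:

/* Illustrative sketch (Linux): print the per-process IO counters, such as
   read_bytes and write_bytes, from /proc/<pid>/io when the kernel provides
   per-process IO accounting. */
#include <stdio.h>

int main(int argc, char *argv[])
{
    char path[64], line[128];

    snprintf(path, sizeof(path), "/proc/%s/io", argc > 1 ? argv[1] : "self");
    FILE *fp = fopen(path, "r");
    if (fp == NULL) {               /* absent on kernels without IO accounting */
        perror(path);
        return 1;
    }
    while (fgets(line, sizeof(line), fp) != NULL)
        fputs(line, stdout);        /* e.g. "read_bytes: 409600" */
    fclose(fp);
    return 0;
}

Run it with a process ID as the argument, or with no argument to report on
the current process.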
PROC_DISK_PHYS_IO_RATE_CUM
----------------------------------
The number of physical disk IOs per second made by the selected process or
kernel thread over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
“Disk” in this instance refers to any locally attached physical disk drives
(that is, “spindles”) that may hold file systems and/or swap. NFS mounted
disks are not included in this list.
On HP-UX, since this value is reported by the drivers, multiple physical
requests that have been collapsed to a single physical operation (due to
driver IO merging) are only counted once.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices together (including tty and other
devices with disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO including disk IO, in Kilobytes or KB
rates. These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels only report IO statistics
for the main thread. In that case, patches are available that will allow the
process instrumentation to report the sum of all threads’ IOs, and will also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleading
inflated IO metrics for processes that fork a lot of children, such as
shells, or the init(1m) process.
PROC_DISK_PHYS_READ
----------------------------------
The number of physical reads made by (or for) a process or kernel thread
during the last interval.
“Disk” refers to a physical drive (that is, “spindle”), not a partition on a
drive (unless the partition occupies the entire physical disk). NFS mounted
disks are not included in this list.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices together (including tty and other
devices with disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO including disk IO, in Kilobytes or KB
rates. These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels only report IO statistics
for the main thread. In that case, patches are available that will allow the
process instrumentation to report the sum of all threads’ IOs, and will also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleading
inflated IO metrics for processes that fork a lot of children, such as
shells, or the init(1m) process.
PROC_DISK_PHYS_READ_CUM
----------------------------------
The number of physical reads made by (or for) a process or kernel thread over
the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
won’t include times accumulated prior to the performance tool’s start and a
message will be logged to indicate this.
“Disk” refers to a physical drive (that is, “spindle”), not a partition on a
drive (unless the partition occupies the entire physical disk). NFS mounted
disks are not included in this list.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices together (including tty and other
devices with disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO including disk IO, in Kilobytes or KB
rates. These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels only report IO statistics
for the main thread. In that case, patches are available that will allow the
process instrumentation to report the sum of all thread’s IOs, and will also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleading
inflated IO metrics for processes that fork a lot of children, such as
shells, or the init(1m) process.
PROC_DISK_PHYS_READ_RATE
----------------------------------
The number of physical reads per second made by (or for) a process or kernel
thread during the interval.
“Disk” refers to a physical drive (that is, “spindle”), not a partition on a
drive (unless the partition occupies the entire physical disk). NFS mounted
disks are not included in this list.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices combined (including tty and other
devices as well as disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO, including disk IO, in kilobytes or KB
rates. These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels report IO statistics only
for the main thread. In that case, patches are available that allow the
process instrumentation to report the sum of all threads’ IOs, and also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleadingly
inflated IO metrics for processes that fork many children, such as shells or
the init(1M) process.
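Any per-interval rate metric of this kind can be understood as the difference
between two successive counter samples divided by the interval length in
seconds. The sketch below uses hypothetical sample values:

    # Sketch: convert two samples of a cumulative read counter into a rate.
    def read_rate(reads_prev, reads_curr, interval_seconds):
        return (reads_curr - reads_prev) / float(interval_seconds)

    # Example: 1500 reads at the previous sample, 1800 now, 60-second interval.
    print(read_rate(1500, 1800, 60))   # 5.0 physical reads per second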
PROC_DISK_PHYS_WRITE
----------------------------------
The number of physical writes made by (or for) a process or kernel thread
during the last interval.
“Disk” in this instance refers to any locally attached physical disk drives
(that is, “spindles”) that may hold file systems and/or swap. NFS mounted
disks are not included in this list.
On HP-UX, since this value is reported by the drivers, multiple physical
requests that have been collapsed to a single physical operation (due to
driver IO merging) are only counted once.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices combined (including tty and other
devices as well as disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO, including disk IO, in kilobytes or KB
rates. These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels report IO statistics only
for the main thread. In that case, patches are available that allow the
process instrumentation to report the sum of all threads’ IOs, and also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleadingly
inflated IO metrics for processes that fork many children, such as shells or
the init(1M) process.
PROC_DISK_PHYS_WRITE_CUM
----------------------------------
The number of physical writes made by (or for) a process or kernel thread
over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days, and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
will not include times accumulated prior to the performance tool’s start, and
a message will be logged to indicate this.
“Disk” in this instance refers to any locally attached physical disk drives
(that is, “spindles”) that may hold file systems and/or swap. NFS mounted
disks are not included in this list.
On HP-UX, since this value is reported by the drivers, multiple physical
requests that have been collapsed to a single physical operation (due to
driver IO merging) are only counted once.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices combined (including tty and other
devices as well as disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO, including disk IO, in kilobytes or KB
rates. These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels report IO statistics only
for the main thread. In that case, patches are available that allow the
process instrumentation to report the sum of all threads’ IOs, and also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleadingly
inflated IO metrics for processes that fork many children, such as shells or
the init(1M) process.
PROC_DISK_PHYS_WRITE_RATE
----------------------------------
The number of physical writes per second made by (or for) a process or kernel
thread during the interval.
“Disk” refers to a physical drive (that is, “spindle”), not a partition on a
drive (unless the partition occupies the entire physical disk). NFS mounted
disks are not included in this list.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices combined (including tty and other
devices as well as disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO, including disk IO, in kilobytes or KB
rates. These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels report IO statistics only
for the main thread. In that case, patches are available that allow the
process instrumentation to report the sum of all threads’ IOs, and also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleadingly
inflated IO metrics for processes that fork many children, such as shells or
the init(1M) process.
PROC_DISK_SUBSYSTEM_WAIT_PCT
----------------------------------
The percentage of time the process or kernel thread was blocked on the disk
subsystem (waiting for its file system IOs to complete) during the interval.
On HP-UX, this is based on the sum of processes or kernel threads in the
DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing
raw IO to a disk are not included in this measurement.
On Linux, this is based on the sum of all processes or kernel threads blocked
on disk.
On Linux, if thread collection is disabled, only the first thread of each
multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels not including CFS, the Completely Fair
Scheduler.
On a threaded operating system, such as HP-UX 11.0 and beyond, process wait
time is calculated by summing the wait times of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
A percentage of time spent in a wait state is calculated as the time a kernel
thread (or all kernel threads of a process) spent waiting in this state,
divided by the alive time of the kernel thread (or all kernel threads of the
process) during the interval.
If this metric is reported for a kernel thread, the percentage value is for
that single kernel thread. If this metric is reported for a process, the
percentage value is calculated with the sum of the wait and alive times of
all of its kernel threads.
For example, if a process has 2 kernel threads, one sleeping for the entire
interval and one waiting on terminal input for the interval, the process wait
percent values will be 50% on Sleep and 50% on Terminal. The kernel thread
wait values will be 100% on Sleep for the first kernel thread and 100% on
Terminal for the second kernel thread.
For another example, consider the same process as above, with 2 kernel
threads, one of which was created half-way through the interval, and which
then slept for the remainder of the interval. The other kernel thread was
waiting for terminal input for half the interval, then used the CPU actively
for the remainder of the interval. The process wait percent values will be
33% on Sleep and 33% on Terminal (each one third of the total alive time).
The kernel thread wait values will be 100% on Sleep for the first kernel
thread and 50% on Terminal for the second kernel thread.
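The arithmetic of the second example can be written out as follows; the
per-thread wait and alive times below are the hypothetical values from that
example (a 60-second interval is assumed):

    # Sketch: process wait percentage = sum of thread wait times divided by
    # the sum of thread alive times, expressed as a percentage.
    threads = [
        {"alive": 30.0, "sleep": 30.0, "terminal": 0.0},   # created half-way, then slept
        {"alive": 60.0, "sleep": 0.0,  "terminal": 30.0},  # waited on terminal, then ran
    ]
    alive_total = sum(t["alive"] for t in threads)                       # 90 seconds
    sleep_pct = 100.0 * sum(t["sleep"] for t in threads) / alive_total
    term_pct = 100.0 * sum(t["terminal"] for t in threads) / alive_total
    print(round(sleep_pct), round(term_pct))                             # 33 33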
PROC_DISK_SUBSYSTEM_WAIT_PCT_CUM
----------------------------------
The percentage of time the process or kernel thread was blocked on the disk
subsystem (waiting for its file system IOs to complete) over the cumulative
collection time.
On HP-UX, this is based on the sum of processes or kernel threads in the
DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing
raw IO to a disk are not included in this measurement.
On Linux, this is based on the sum of all processes or kernel threads blocked
on disk.
On Linux, if thread collection is disabled, only the first thread of each
multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels not including CFS, the Completely Fair
Scheduler.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days, and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
will not include times accumulated prior to the performance tool’s start, and
a message will be logged to indicate this.
On a threaded operating system, such as HP-UX 11.0 and beyond, process wait
time is calculated by summing the wait times of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
A percentage of time spent in a wait state is calculated as the time a kernel
thread (or all kernel threads of a process) spent waiting in this state,
divided by the alive time of the kernel thread (or all kernel threads of the
process) during the interval.
If this metric is reported for a kernel thread, the percentage value is for
that single kernel thread. If this metric is reported for a process, the
percentage value is calculated with the sum of the wait and alive times of
all of its kernel threads.
For example, if a process has 2 kernel threads, one sleeping for the entire
interval and one waiting on terminal input for the interval, the process wait
percent values will be 50% on Sleep and 50% on Terminal. The kernel thread
wait values will be 100% on Sleep for the first kernel thread and 100% on
Terminal for the second kernel thread.
For another example, consider the same process as above, with 2 kernel
threads, one of which was created half-way through the interval, and which
then slept for the remainder of the interval. The other kernel thread was
waiting for terminal input for half the interval, then used the CPU actively
for the remainder of the interval. The process wait percent values will be
33% on Sleep and 33% on Terminal (each one third of the total alive time).
The kernel thread wait values will be 100% on Sleep for the first kernel
thread and 50% on Terminal for the second kernel thread.
PROC_DISK_SUBSYSTEM_WAIT_TIME
----------------------------------
The time, in seconds, that the process or kernel thread was blocked on the
disk subsystem (waiting for its file system IOs to complete) during the
interval.
On HP-UX, this is based on the sum of processes or kernel threads in the
DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing
raw IO to a disk are not included in this measurement.
On Linux, this is based on the sum of all processes or kernel threads blocked
on disk.
On Linux, if thread collection is disabled, only the first thread of each
multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels not including CFS, the Completely Fair
Scheduler.
On a threaded operating system, such as HP-UX 11.0 and beyond, process wait
time is calculated by summing the wait times of its kernel threads. If this
metric is reported for a kernel thread, the value is the wait time of that
single kernel thread. If this metric is reported for a process, the value is
the sum of the wait times of all of its kernel threads. Alive kernel threads
and kernel threads that have died during the interval are included in the
summation. For multi-threaded processes, the wait times can exceed the
length of the measurement interval.
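On Linux, a closely related number is the per-task block IO delay counter in
/proc/<pid>/stat (the 42nd field, delayacct_blkio_ticks, in clock ticks).
The sketch below is a hedged illustration of reading it; it is not
necessarily the source GlancePlus uses:

    # Sketch: clock ticks a process has spent blocked on block IO.
    import os

    CLK_TCK = os.sysconf("SC_CLK_TCK")

    def blkio_ticks(pid):
        with open("/proc/%d/stat" % pid) as f:
            data = f.read()
        # The comm field may contain spaces; split after the closing ')'.
        fields = data[data.rindex(")") + 2:].split()
        return int(fields[39])          # 42nd field overall

    # Interval wait time in seconds is the delta between two samples:
    #   (ticks_now - ticks_before) / float(CLK_TCK)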
PROC_DISK_SUBSYSTEM_WAIT_TIME_CUM
----------------------------------
The time, in seconds, that the process or kernel thread was blocked on the
disk subsystem (waiting for its file system IOs to complete) over the
cumulative collection time.
On HP-UX, this is based on the sum of processes or kernel threads in the
DISK, INODE, CACHE and CDFS wait states. Processes or kernel threads doing
raw IO to a disk are not included in this measurement.
On Linux, this is based on the sum of all processes or kernel threads blocked
on disk.
On Linux, if thread collection is disabled, only the first thread of each
multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels not including CFS, the Completely Fair
Scheduler.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days, and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
will not include times accumulated prior to the performance tool’s start, and
a message will be logged to indicate this.
On a threaded operating system, such as HP-UX 11.0 and beyond, process wait
time is calculated by summing the wait times of its kernel threads. If this
metric is reported for a kernel thread, the value is the wait time of that
single kernel thread. If this metric is reported for a process, the value is
the sum of the wait times of all of its kernel threads. Alive kernel threads
and kernel threads that have died during the interval are included in the
summation. For multi-threaded processes, the wait times can exceed the
length of the measurement interval.
PROC_EUID
----------------------------------
The effective user ID of a process (or kernel thread, if HP-UX/Linux Kernel
2.6 and above).
On HP-UX, this metric is specific to a process. If this metric is reported
for a kernel thread, the value for its associated process is given.
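For reference, on Linux the effective user ID of a process is visible in the
“Uid:” line of /proc/<pid>/status (real, effective, saved and filesystem IDs,
in that order). A minimal sketch, not the collector's own code:

    # Sketch: read the effective UID of a process from /proc/<pid>/status.
    def effective_uid(pid):
        with open("/proc/%d/status" % pid) as f:
            for line in f:
                if line.startswith("Uid:"):
                    return int(line.split()[2])   # real, effective, saved, fs
        return None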
PROC_FILE_MODE
----------------------------------
A text string summarizing the type of open mode:
rd/wr Opened for input & output
read Opened for input only
write Opened for output only
PROC_FILE_NAME
----------------------------------
The path name or identifying information about the open file descriptor. If
the path name string exceeds 40 characters in length, the beginning and the
end of the path is shown and the middle of the name is replaced by “...”.
An attempt is made to obtain the file path name either by searching the
current cylinder group for directory entries that point to the currently
opened inode, or by searching the kernel name cache. Since looking up file
path names can require high disk overhead, some names may not be resolved.
If the path name cannot be resolved, a string is returned indicating the
type and inode number of the file.
For the string format that includes an inode number, you may use the
ncheck(1M) program to display the file path name relative to the mount
point. Sometimes files may be deleted before they are closed. In these
cases, the process file table may still hold the inode even though the file
is no longer present, and as a result ncheck will fail.
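The 40-character shortening described above behaves like the following
sketch, which illustrates the display rule rather than the tool's exact code:

    # Sketch: keep the beginning and end of a long path, replacing the middle
    # with "..." so the result fits within the display limit.
    def shorten_path(path, limit=40):
        if len(path) <= limit:
            return path
        keep = (limit - 3) // 2
        return path[:keep] + "..." + path[-keep:]

    print(shorten_path("/opt/perf/examples/some/deeply/nested/directory/file.log"))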
PROC_FILE_NUMBER
----------------------------------
The file number of the current open file.
PROC_FILE_OPEN
----------------------------------
Number of files the current process has remaining open as of the end of the
interval.
PROC_FILE_TYPE
----------------------------------
A text string describing the type of the current file. This is one of:
block Block special device
char Character device
dir Directory
fifo A pipe or named pipe
file Simple file
link Symbolic file link
other An unknown file type
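These strings correspond to the standard file types encoded in an inode's
mode bits. As an illustration (using a path rather than an already-open file
descriptor), they can be derived as follows:

    # Sketch: map an inode's mode bits to the type strings listed above.
    import os, stat

    def file_type(path):
        mode = os.lstat(path).st_mode
        if stat.S_ISBLK(mode):
            return "block"
        if stat.S_ISCHR(mode):
            return "char"
        if stat.S_ISDIR(mode):
            return "dir"
        if stat.S_ISFIFO(mode):
            return "fifo"
        if stat.S_ISREG(mode):
            return "file"
        if stat.S_ISLNK(mode):
            return "link"
        return "other"

    print(file_type("/etc/passwd"))   # file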
PROC_FORCED_CSWITCH
----------------------------------
The number of times that the process (or kernel thread, if HP-UX) was
preempted by an external event and another process (or kernel thread, if HP-
UX) was allowed to execute during the interval.
Examples of reasons for a forced switch include expiration of a time slice or
returning from a system call with a higher priority process (or kernel
thread, if HP-UX) ready to run.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
On Linux, if thread collection is disabled, only the first thread of each
multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels not including CFS, the Completely Fair
Scheduler.
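On Linux kernels that expose scheduler statistics, a per-task forced
(preemptive) switch count appears as nonvoluntary_ctxt_switches in
/proc/<pid>/status. The sketch below reads it; whether GlancePlus uses this
exact source is an assumption:

    # Sketch: read the forced (nonvoluntary) context-switch counter of a process.
    def forced_cswitches(pid):
        with open("/proc/%d/status" % pid) as f:
            for line in f:
                if line.startswith("nonvoluntary_ctxt_switches:"):
                    return int(line.split()[1])
        return None   # counter not exposed by this kernel

    # The per-interval metric is the difference between two successive samples.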
PROC_FORCED_CSWITCH_CUM
----------------------------------
The number of times the process (or kernel thread, if HP-UX) was preempted by
an external event and another process (or kernel thread, if HP-UX) was
allowed to execute over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days, and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
will not include times accumulated prior to the performance tool’s start, and
a message will be logged to indicate this.
Examples of reasons for a forced switch include expiration of a time slice or
returning from a system call with a higher priority process (or kernel
thread, if HP-UX) ready to run.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
On Linux, if thread collection is disabled, only the first thread of each
multi-threaded process is taken into account. This metric will be NA on
kernels older than 2.6.23 or kernels not including CFS, the Completely Fair
Scheduler.
PROC_GROUP_ID
----------------------------------
On most systems, this is the real group ID number of the process (or kernel
thread, if HP-UX/Linux Kernel 2.6 and above). On AIX, this is the effective
group ID number of the process.
On HP-UX, this is the effective group ID number of the process if not in
setgid mode.
On HP-UX, this metric is specific to a process. If this metric is reported
for a kernel thread, the value for its associated process is given.
PROC_GROUP_NAME
----------------------------------
The group name (from /etc/group) of a process (or kernel thread, if HP-
UX/Linux Kernel 2.6 and above).
The group identifier is obtained by searching the /etc/passwd file using the
user ID (uid) as a key. Therefore, if more than one account is listed in
/etc/passwd with the same user ID (uid) field, the first one is used. If no
entry can be found for the user ID in /etc/passwd, the group name is the uid
number. If no matching entry in /etc/group can be found, the group ID is
returned as the group name.
On HP-UX, this metric is specific to a process. If this metric is reported
for a kernel thread, the value for its associated process is given.
PROC_INTEREST
----------------------------------
A string containing the reason(s) why the process or thread is of interest,
based on the thresholds specified in the parm file.
An ‘A’ indicates that the process or thread exceeds the process CPU
threshold, computed using the actual time the process or thread was alive
during the interval.
A ‘C’ indicates that the process or thread exceeds the process CPU threshold,
computed using the collection interval. Currently, the same CPU threshold is
used for both CPU interest reasons.
A ‘D’ indicates that the process or thread exceeds the process disk IO
threshold.
An ‘I’ indicates that the process or thread exceeds the IO threshold.
An ‘M’ indicates that the process exceeds the process memory threshold. This
interest reason is only meaningful for processes and therefore not shown for
threads.
New processes or threads are identified with an ‘N’, terminated processes or
threads are identified with a ‘K’.
Note that the parm file ‘nonew’, ‘nokill’ and ‘shortlived’ settings are
logging-only options and are therefore ignored in Glance components.
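Conceptually, the interest string is assembled by comparing per-process
measurements against the parm-file thresholds. The threshold names, field
names and values in the sketch below are hypothetical and only illustrate
the idea:

    # Sketch: build an interest string from hypothetical thresholds and values.
    def interest_string(proc, thresholds):
        reasons = ""
        if proc["cpu_pct_alive"] >= thresholds["cpu"]:
            reasons += "A"
        if proc["cpu_pct_interval"] >= thresholds["cpu"]:
            reasons += "C"
        if proc["disk_io_rate"] >= thresholds["disk"]:
            reasons += "D"
        if proc["io_rate"] >= thresholds["io"]:
            reasons += "I"
        if proc["mem_kb"] >= thresholds["mem"]:
            reasons += "M"
        if proc.get("new"):
            reasons += "N"
        if proc.get("killed"):
            reasons += "K"
        return reasons

    print(interest_string(
        {"cpu_pct_alive": 80, "cpu_pct_interval": 12, "disk_io_rate": 0,
         "io_rate": 5, "mem_kb": 900, "new": True},
        {"cpu": 50, "disk": 10, "io": 10, "mem": 500}))   # AMN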
PROC_INTERVAL
----------------------------------
The amount of time in the interval. This is the same value for all processes
(and kernel threads, if HP-UX/Linux Kernel 2.6 and above), regardless of
whether they were alive for the entire interval.
Note that calculations such as utilizations or rates use this standardized
process interval (PROC_INTERVAL), rather than the actual alive time during
the interval (PROC_INTERVAL_ALIVE). Thus, if a process was alive for only
1 second and used the CPU during its entire life (1 second), but the process
sample interval was 5 seconds, it would be reported as using 1/5, or 20%,
CPU utilization, rather than 100% CPU utilization.
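Expressed as arithmetic, that example works out as follows:

    # Sketch: utilization is computed against the full sample interval,
    # not against the process's alive time within that interval.
    cpu_seconds = 1.0        # CPU used by a short-lived process
    alive_seconds = 1.0      # PROC_INTERVAL_ALIVE
    interval_seconds = 5.0   # PROC_INTERVAL

    print(100.0 * cpu_seconds / interval_seconds)   # 20.0, as reported
    print(100.0 * cpu_seconds / alive_seconds)      # 100.0, not what is reported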
PROC_INTERVAL_ALIVE
----------------------------------
The number of seconds that the process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) was alive during the interval. This may be less than
the time of the interval if the process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) was new or died during the interval.
PROC_INTERVAL_CUM
----------------------------------
The amount of time over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days, and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
will not include times accumulated prior to the performance tool’s start, and
a message will be logged to indicate this.
On SUN, AIX, and OSF1, this differs from PROC_RUN_TIME in that PROC_RUN_TIME
may not include all of the first and last sample interval times, whereas
PROC_INTERVAL_CUM does.
PROC_IO_BYTE
----------------------------------
On HP-UX, this is the total number of physical IO KBs (unless otherwise
specified) that was used by this process or kernel thread, either directly or
indirectly, during the interval.
On all other systems, this is the total number of physical IO KBs (unless
otherwise specified) that was used by this process during the interval. IOs
include disk, terminal, tape and network IO.
On HP-UX, indirect IOs include paging and deactivation/reactivation activity
done by the kernel on behalf of the process or kernel thread. Direct IOs
include disk, terminal, tape, and network IO, but exclude all NFS traffic.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
On SUN, counts in the MB ranges in general can be attributed to disk accesses
and counts in the KB ranges can be attributed to terminal IO. This is useful
when looking for processes with heavy disk IO activity. This may vary
depending on the sample interval length.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices combined (including tty and other
devices as well as disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO, including disk IO, in kilobytes or KB
rates. These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels report IO statistics only
for the main thread. In that case, patches are available that allow the
process instrumentation to report the sum of all threads’ IOs, and also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleadingly
inflated IO metrics for processes that fork many children, such as shells or
the init(1M) process.
PROC_IO_BYTE_CUM
----------------------------------
On HP-UX, this is the total number of physical IO KBs (unless otherwise
specified) that was used by this process or kernel thread, either directly or
indirectly, over the cumulative collection time.
On all other systems, this is the total number of physical IO KBs (unless
otherwise specified) that was used by this process over the cumulative
collection time. IOs include disk, terminal, tape and network IO.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days, and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
will not include times accumulated prior to the performance tool’s start, and
a message will be logged to indicate this.
On HP-UX, indirect IOs include paging and deactivation/reactivation activity
done by the kernel on behalf of the process or kernel thread. Direct IOs
include disk, terminal, tape, and network IO, but exclude all NFS traffic.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices combined (including tty and other
devices as well as disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO, including disk IO, in kilobytes or KB
rates. These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels report IO statistics only
for the main thread. In that case, patches are available that allow the
process instrumentation to report the sum of all threads’ IOs, and also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleadingly
inflated IO metrics for processes that fork many children, such as shells or
the init(1M) process.
PROC_IO_BYTE_RATE
----------------------------------
On HP-UX, this is the number of physical IO KBs per second that was used by
this process or kernel thread, either directly or indirectly, during the
interval.
On all other systems, this is the number of physical IO KBs per second that
was used by this process during the interval. IOs include disk, terminal,
tape and network IO.
On HP-UX, indirect IOs include paging and deactivation/reactivation activity
done by the kernel on behalf of the process or kernel thread. Direct IOs
include disk, terminal, tape, and network IO, but exclude all NFS traffic.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
On SUN, counts in the MB ranges in general can be attributed to disk accesses
and counts in the KB ranges can be attributed to terminal IO. This is useful
when looking for processes with heavy disk IO activity. This may vary
depending on the sample interval length.
Certain types of disk IOs are not counted by AIX at the process level, so
they are excluded from this metric.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices combined (including tty and other
devices as well as disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO, including disk IO, in kilobytes or KB
rates. These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels report IO statistics only
for the main thread. In that case, patches are available that allow the
process instrumentation to report the sum of all threads’ IOs, and also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleadingly
inflated IO metrics for processes that fork many children, such as shells or
the init(1M) process.
PROC_IO_BYTE_RATE_CUM
----------------------------------
On HP-UX, this is the average number of physical IO KBs per second that was
used by this process or kernel thread, either directly or indirectly, over
the cumulative collection time.
On all other systems, this is the average number of physical IO KBs per
second that was used by this process over the cumulative collection time.
IOs include disk, terminal, tape and network IO.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days, and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
will not include times accumulated prior to the performance tool’s start, and
a message will be logged to indicate this.
On HP-UX, indirect IOs include paging and deactivation/reactivation activity
done by the kernel on behalf of the process or kernel thread. Direct IOs
include disk, terminal, tape, and network IO, but exclude all NFS traffic.
On multi-threaded operating systems, process usage of a resource is
calculated by summing the usage of that resource by its kernel threads. If
this metric is reported for a kernel thread, the value is the resource usage
by that single kernel thread. If this metric is reported for a process, the
value is the sum of the resource usage by all of its kernel threads. Alive
kernel threads and kernel threads that have died during the interval are
included in the summation.
On SUN, counts in the MB ranges in general can be attributed to disk accesses
and counts in the KB ranges can be attributed to terminal IO. This is useful
when looking for processes with heavy disk IO activity. This may vary
depending on the sample interval length.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices combined (including tty and other
devices as well as disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO, including disk IO, in kilobytes or KB
rates. These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels report IO statistics only
for the main thread. In that case, patches are available that allow the
process instrumentation to report the sum of all threads’ IOs, and also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleadingly
inflated IO metrics for processes that fork many children, such as shells or
the init(1M) process.
PROC_MAJOR_FAULT
----------------------------------
Number of major page faults for this process (or kernel thread, if HP-
UX/Linux Kernel 2.6 and above) during the interval.
On HP-UX, major page faults and minor page faults are a subset of vfaults
(virtual faults). Stack and heap accesses can cause vfaults, but do not
result in a disk page having to be loaded into memory.
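On Linux, the cumulative minor and major fault counters of a process are the
10th and 12th fields of /proc/<pid>/stat. The sketch below reads them; it is
an illustration and may not match the collector's exact instrumentation path:

    # Sketch: read the cumulative minor/major page fault counters of a process.
    def fault_counts(pid):
        with open("/proc/%d/stat" % pid) as f:
            data = f.read()
        fields = data[data.rindex(")") + 2:].split()   # skip pid and comm
        minflt = int(fields[7])    # 10th field overall
        majflt = int(fields[9])    # 12th field overall
        return minflt, majflt

    # The interval metric is the difference between two successive majflt samples.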
PROC_MAJOR_FAULT_CUM
----------------------------------
Number of major page faults for this process (or kernel thread, if HP-
UX/Linux Kernel 2.6 and above) over the cumulative collection time.
The cumulative collection time is defined from the point in time when
either: a) the process (or thread) was first started, or b) the performance
tool was first started, or c) the cumulative counters were reset (relevant
only to Glance, if available for the given platform), whichever occurred
last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the start time of the process or the measurement start time, whichever is
older. Regardless of the process start time, application cumulative intervals
start from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HP-UX) has been up
for 466 days, and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after
the system has been up for more than 466 days, cumulative process CPU data
will not include times accumulated prior to the performance tool’s start, and
a message will be logged to indicate this.
On HP-UX, major page faults and minor page faults are a subset of vfaults
(virtual faults). Stack and heap accesses can cause vfaults, but do not
result in a disk page having to be loaded into memory.
PROC_MEM_DATA_VIRT
----------------------------------
On SUN, this is the virtual set size (in KB) of the heap memory for this
process. Note that heap can reside partially in BSS and partially in the
data segment, so its value will not be the same as PROC_REGION_VIRT of the
data segment or PROC_REGION_VIRT_DATA, which is the sum of all data segments
for the process.
On other non-HP-UX systems, this is the virtual set size (in KB) of the data
segment for this process (or kernel thread, if Linux Kernel 2.6 and above).
A value of “na” is displayed when this information is unobtainable.
On AIX, this is the same as the SIZE value reported by “ps v”.
On Linux, this value is rounded to a multiple of the page size (PAGESIZE).
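As an illustration of that rounding (assuming it rounds up to the next whole
page, which is the usual behavior but an assumption here):

    # Sketch: round a virtual size up to a whole number of pages, report KB.
    import resource

    PAGESIZE = resource.getpagesize()      # commonly 4096 bytes

    def round_to_pages_kb(size_bytes):
        pages = (size_bytes + PAGESIZE - 1) // PAGESIZE
        return pages * PAGESIZE // 1024

    print(round_to_pages_kb(10000))        # 12 (three pages) with 4 KB pages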
PROC_MEM_LOCKED
----------------------------------
The number of KBs of virtual memory allocated by the process, marked as
locked memory.
On Windows, this is the non-paged pool memory of the process. This memory is
allocated from the system-wide non-paged pool, and is not affected by the
pageout process. Device drivers may allocate memory from the non-paged pool,
charging quota against the current (caller) thread.
The kernel and driver code use the non-paged pool for data that should always
be in the physical memory. The size of the non-paged pool is limited to
approximately 128 MB on Windows NT systems and to 256 MB on Windows 2000
systems. The failure to allocate memory from the non-paged pool can cause a
system crash.
PROC_MEM_RES
----------------------------------
The size (in KB) of resident memory allocated for the process (or kernel
thread, if HP-UX/Linux Kernel 2.6 and above).
On HP-UX, the calculation of this metric differs depending on whether this
process has used any CPU time since the midaemon process was started. This
metric is less accurate and does not include shared memory regions in its
calculation when the process has been idle since the midaemon was started.
On HP-UX, for processes that use CPU time subsequent to midaemon startup, the
resident memory is calculated as
   RSS = (sum of private region pages) +
         (sum of shared region pages / number of references)
The number of references is a count of the number of attachments to the
memory region. Attachments, for shared regions, may come from several
processes sharing the same memory, a single process with multiple
attachments, or combinations of these.
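With hypothetical region sizes and reference counts, the calculation looks
like this:

    # Sketch: RSS = private pages + (shared pages / number of references),
    # summed over the process's memory regions (hypothetical values).
    private_pages = [300, 120]                 # resident pages in private regions
    shared_regions = [(800, 4), (200, 2)]      # (resident pages, reference count)

    rss_pages = sum(private_pages) + sum(
        pages / float(refs) for pages, refs in shared_regions)
    print(rss_pages)                           # 720.0 pages charged to this process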
This value is only updated when a process uses CPU. Thus, under memory
pressure, this value may be higher than the actual amount of resident memory
for processes which are idle because their memory pages may no longer be
resident or the reference count for shared segments may have changed.
On HP-UX, this metric is specific to a process. If this metric is reported
for a kernel thread, the value for its associated process is given.
A value of “na” is displayed when this information is unobtainable. This
information may not be obtainable for some system (kernel) processes. It may
also not be available for