HP OpenView GlancePlus
Dictionary of Performance Metrics
09/2005
Accompanies GlancePlus for SUN, release C.04.50

INTRODUCTION
====================

This dictionary contains definitions of the GlancePlus performance metrics on the SUN. It is divided into the following sections:

* "Metric Data Classes," which lists the metrics alphabetically by data class.

* "Metric Definitions," which describes each metric in alphabetical order.

* "Glossary," which provides a glossary of performance metric terms.

NOTE: The name MeasureWare Agent for UNIX has been replaced with HP OpenView Performance Agent (OV Performance Agent or OVPA) for UNIX and the name PerfView for UNIX has been replaced with OV Performance Manager for UNIX throughout this documentation. However, the process names and software components operationally remain MeasureWare Agent (MWA) and PerfView.

METRIC DATA CLASSES
=================

Global Metrics
--------------------
GBL_ACTIVE_CPU GBL_ACTIVE_PROC GBL_ALIVE_PROC GBL_BLANK GBL_BLOCKED_IO_QUEUE GBL_BOOT_TIME GBL_COLLECTOR GBL_COMPLETED_PROC GBL_CPU_CLOCK GBL_CPU_IDLE_TIME GBL_CPU_IDLE_TIME_CUM GBL_CPU_IDLE_UTIL GBL_CPU_IDLE_UTIL_CUM GBL_CPU_IDLE_UTIL_HIGH GBL_CPU_SYS_MODE_TIME GBL_CPU_SYS_MODE_TIME_CUM GBL_CPU_SYS_MODE_UTIL GBL_CPU_SYS_MODE_UTIL_CUM GBL_CPU_SYS_MODE_UTIL_HIGH GBL_CPU_TOTAL_TIME GBL_CPU_TOTAL_TIME_CUM GBL_CPU_TOTAL_UTIL GBL_CPU_TOTAL_UTIL_CUM GBL_CPU_TOTAL_UTIL_HIGH GBL_CPU_USER_MODE_TIME GBL_CPU_USER_MODE_TIME_CUM GBL_CPU_USER_MODE_UTIL GBL_CPU_USER_MODE_UTIL_CUM GBL_CPU_USER_MODE_UTIL_HIGH GBL_CPU_WAIT_TIME GBL_CPU_WAIT_UTIL GBL_CSWITCH_RATE GBL_CSWITCH_RATE_CUM GBL_CSWITCH_RATE_HIGH GBL_DISK_BLOCK_IO GBL_DISK_BLOCK_IO_CUM GBL_DISK_BLOCK_IO_PCT GBL_DISK_BLOCK_IO_PCT_CUM GBL_DISK_BLOCK_IO_RATE GBL_DISK_BLOCK_IO_RATE_CUM GBL_DISK_BLOCK_READ GBL_DISK_BLOCK_READ_RATE GBL_DISK_BLOCK_WRITE GBL_DISK_BLOCK_WRITE_RATE GBL_DISK_FILE_IO GBL_DISK_FILE_IO_CUM GBL_DISK_FILE_IO_PCT GBL_DISK_FILE_IO_PCT_CUM GBL_DISK_FILE_IO_RATE GBL_DISK_FILE_IO_RATE_CUM GBL_DISK_LOGL_IO GBL_DISK_LOGL_IO_CUM GBL_DISK_LOGL_IO_RATE GBL_DISK_LOGL_IO_RATE_CUM GBL_DISK_LOGL_READ GBL_DISK_LOGL_READ_CUM GBL_DISK_LOGL_READ_PCT GBL_DISK_LOGL_READ_PCT_CUM GBL_DISK_LOGL_READ_RATE GBL_DISK_LOGL_READ_RATE_CUM GBL_DISK_LOGL_WRITE GBL_DISK_LOGL_WRITE_CUM GBL_DISK_LOGL_WRITE_PCT GBL_DISK_LOGL_WRITE_PCT_CUM GBL_DISK_LOGL_WRITE_RATE GBL_DISK_LOGL_WRITE_RATE_CUM GBL_DISK_PHYS_BYTE GBL_DISK_PHYS_BYTE_RATE GBL_DISK_PHYS_IO GBL_DISK_PHYS_IO_CUM GBL_DISK_PHYS_IO_RATE GBL_DISK_PHYS_IO_RATE_CUM GBL_DISK_PHYS_READ GBL_DISK_PHYS_READ_BYTE GBL_DISK_PHYS_READ_BYTE_CUM GBL_DISK_PHYS_READ_BYTE_RATE GBL_DISK_PHYS_READ_CUM GBL_DISK_PHYS_READ_PCT GBL_DISK_PHYS_READ_PCT_CUM GBL_DISK_PHYS_READ_RATE GBL_DISK_PHYS_READ_RATE_CUM GBL_DISK_PHYS_WRITE GBL_DISK_PHYS_WRITE_BYTE GBL_DISK_PHYS_WRITE_BYTE_CUM GBL_DISK_PHYS_WRITE_BYTE_RATE GBL_DISK_PHYS_WRITE_CUM GBL_DISK_PHYS_WRITE_PCT GBL_DISK_PHYS_WRITE_PCT_CUM GBL_DISK_PHYS_WRITE_RATE GBL_DISK_PHYS_WRITE_RATE_CUM GBL_DISK_RAW_IO GBL_DISK_RAW_IO_CUM GBL_DISK_RAW_IO_PCT GBL_DISK_RAW_IO_PCT_CUM GBL_DISK_RAW_IO_RATE GBL_DISK_RAW_IO_RATE_CUM GBL_DISK_RAW_READ GBL_DISK_RAW_READ_RATE GBL_DISK_RAW_WRITE GBL_DISK_RAW_WRITE_RATE GBL_DISK_REQUEST_QUEUE GBL_DISK_TIME_PEAK GBL_DISK_UTIL GBL_DISK_UTIL_PEAK GBL_DISK_UTIL_PEAK_CUM GBL_DISK_UTIL_PEAK_HIGH GBL_DISK_VM_IO GBL_DISK_VM_IO_CUM GBL_DISK_VM_IO_PCT GBL_DISK_VM_IO_PCT_CUM GBL_DISK_VM_IO_RATE GBL_DISK_VM_IO_RATE_CUM GBL_FS_SPACE_UTIL_PEAK GBL_GMTOFFSET GBL_INTERRUPT GBL_INTERRUPT_RATE GBL_INTERRUPT_RATE_CUM GBL_INTERRUPT_RATE_HIGH
GBL_INTERVAL GBL_INTERVAL_CUM GBL_JAVAARG GBL_LOADAVG GBL_LOADAVG_CUM GBL_LOADAVG_HIGH GBL_LOST_MI_TRACE_BUFFERS GBL_MACHINE GBL_MACHINE_MODEL GBL_MEM_AVAIL GBL_MEM_CACHE GBL_MEM_CACHE_HIT GBL_MEM_CACHE_HIT_CUM GBL_MEM_CACHE_HIT_PCT GBL_MEM_CACHE_HIT_PCT_CUM GBL_MEM_CACHE_HIT_PCT_HIGH GBL_MEM_CACHE_UTIL GBL_MEM_DNLC_HIT GBL_MEM_DNLC_HIT_CUM GBL_MEM_DNLC_HIT_PCT GBL_MEM_DNLC_HIT_PCT_CUM GBL_MEM_DNLC_HIT_PCT_HIGH GBL_MEM_DNLC_LONGS GBL_MEM_DNLC_LONGS_CUM GBL_MEM_DNLC_LONGS_PCT GBL_MEM_DNLC_LONGS_PCT_CUM GBL_MEM_DNLC_LONGS_PCT_HIGH GBL_MEM_FILE_PAGEIN_RATE GBL_MEM_FILE_PAGEOUT_RATE GBL_MEM_FREE GBL_MEM_FREE_UTIL GBL_MEM_PAGEIN GBL_MEM_PAGEIN_BYTE GBL_MEM_PAGEIN_BYTE_CUM GBL_MEM_PAGEIN_BYTE_RATE GBL_MEM_PAGEIN_BYTE_RATE_CUM GBL_MEM_PAGEIN_BYTE_RATE_HIGH GBL_MEM_PAGEIN_CUM GBL_MEM_PAGEIN_RATE GBL_MEM_PAGEIN_RATE_CUM GBL_MEM_PAGEIN_RATE_HIGH GBL_MEM_PAGEOUT GBL_MEM_PAGEOUT_BYTE GBL_MEM_PAGEOUT_BYTE_CUM GBL_MEM_PAGEOUT_BYTE_RATE GBL_MEM_PAGEOUT_BYTE_RATE_CUM GBL_MEM_PAGEOUT_BYTE_RATE_HIGH GBL_MEM_PAGEOUT_CUM GBL_MEM_PAGEOUT_RATE GBL_MEM_PAGEOUT_RATE_CUM GBL_MEM_PAGEOUT_RATE_HIGH GBL_MEM_PAGE_FAULT GBL_MEM_PAGE_FAULT_CUM GBL_MEM_PAGE_FAULT_RATE GBL_MEM_PAGE_FAULT_RATE_CUM GBL_MEM_PAGE_FAULT_RATE_HIGH GBL_MEM_PAGE_REQUEST GBL_MEM_PAGE_REQUEST_CUM GBL_MEM_PAGE_REQUEST_RATE GBL_MEM_PAGE_REQUEST_RATE_CUM GBL_MEM_PAGE_REQUEST_RATE_HIGH GBL_MEM_PG_SCAN GBL_MEM_PG_SCAN_CUM GBL_MEM_PG_SCAN_RATE GBL_MEM_PG_SCAN_RATE_CUM GBL_MEM_PG_SCAN_RATE_HIGH GBL_MEM_PHYS GBL_MEM_SWAP GBL_MEM_SWAPIN GBL_MEM_SWAPIN_BYTE GBL_MEM_SWAPIN_BYTE_CUM GBL_MEM_SWAPIN_BYTE_RATE GBL_MEM_SWAPIN_BYTE_RATE_CUM GBL_MEM_SWAPIN_BYTE_RATE_HIGH GBL_MEM_SWAPIN_CUM GBL_MEM_SWAPIN_RATE GBL_MEM_SWAPIN_RATE_CUM GBL_MEM_SWAPIN_RATE_HIGH GBL_MEM_SWAPOUT GBL_MEM_SWAPOUT_BYTE GBL_MEM_SWAPOUT_BYTE_CUM GBL_MEM_SWAPOUT_BYTE_RATE GBL_MEM_SWAPOUT_BYTE_RATE_CUM GBL_MEM_SWAPOUT_BYTE_RATE_HIGH GBL_MEM_SWAPOUT_CUM GBL_MEM_SWAPOUT_RATE GBL_MEM_SWAPOUT_RATE_CUM GBL_MEM_SWAPOUT_RATE_HIGH GBL_MEM_SWAP_1_MIN_RATE GBL_MEM_SWAP_CUM GBL_MEM_SWAP_RATE GBL_MEM_SWAP_RATE_CUM GBL_MEM_SWAP_RATE_HIGH GBL_MEM_SYS GBL_MEM_SYS_AND_CACHE_UTIL GBL_MEM_SYS_UTIL GBL_MEM_USER GBL_MEM_USER_UTIL GBL_MEM_UTIL GBL_MEM_UTIL_CUM GBL_MEM_UTIL_HIGH GBL_NET_COLLISION GBL_NET_COLLISION_1_MIN_RATE GBL_NET_COLLISION_CUM GBL_NET_COLLISION_PCT GBL_NET_COLLISION_PCT_CUM GBL_NET_COLLISION_RATE GBL_NET_DEFERRED GBL_NET_DEFERRED_CUM GBL_NET_DEFERRED_PCT GBL_NET_DEFERRED_PCT_CUM GBL_NET_DEFERRED_RATE GBL_NET_DEFERRED_RATE_CUM GBL_NET_ERROR GBL_NET_ERROR_1_MIN_RATE GBL_NET_ERROR_CUM GBL_NET_ERROR_RATE GBL_NET_IN_ERROR GBL_NET_IN_ERROR_CUM GBL_NET_IN_ERROR_PCT GBL_NET_IN_ERROR_PCT_CUM GBL_NET_IN_ERROR_RATE GBL_NET_IN_ERROR_RATE_CUM GBL_NET_IN_PACKET GBL_NET_IN_PACKET_CUM GBL_NET_IN_PACKET_RATE GBL_NET_OUT_ERROR GBL_NET_OUT_ERROR_CUM GBL_NET_OUT_ERROR_PCT GBL_NET_OUT_ERROR_PCT_CUM GBL_NET_OUT_ERROR_RATE GBL_NET_OUT_ERROR_RATE_CUM GBL_NET_OUT_PACKET GBL_NET_OUT_PACKET_CUM GBL_NET_OUT_PACKET_RATE GBL_NET_PACKET GBL_NET_PACKET_RATE GBL_NFS_CALL GBL_NFS_CALL_RATE GBL_NFS_CLIENT_BAD_CALL GBL_NFS_CLIENT_BAD_CALL_CUM GBL_NFS_CLIENT_CALL GBL_NFS_CLIENT_CALL_CUM GBL_NFS_CLIENT_CALL_RATE GBL_NFS_CLIENT_IO GBL_NFS_CLIENT_IO_CUM GBL_NFS_CLIENT_IO_PCT GBL_NFS_CLIENT_IO_PCT_CUM GBL_NFS_CLIENT_IO_RATE GBL_NFS_CLIENT_IO_RATE_CUM GBL_NFS_CLIENT_READ_RATE GBL_NFS_CLIENT_READ_RATE_CUM GBL_NFS_CLIENT_WRITE_RATE GBL_NFS_CLIENT_WRITE_RATE_CUM GBL_NFS_SERVER_BAD_CALL GBL_NFS_SERVER_BAD_CALL_CUM GBL_NFS_SERVER_CALL GBL_NFS_SERVER_CALL_CUM GBL_NFS_SERVER_CALL_RATE GBL_NFS_SERVER_IO GBL_NFS_SERVER_IO_CUM GBL_NFS_SERVER_IO_PCT 
GBL_NFS_SERVER_IO_PCT_CUM GBL_NFS_SERVER_IO_RATE GBL_NFS_SERVER_IO_RATE_CUM GBL_NFS_SERVER_READ_RATE GBL_NFS_SERVER_READ_RATE_CUM GBL_NFS_SERVER_WRITE_RATE GBL_NFS_SERVER_WRITE_RATE_CUM GBL_NODENAME GBL_NUM_APP GBL_NUM_CPU GBL_NUM_DISK GBL_NUM_LV GBL_NUM_NETWORK GBL_NUM_SWAP GBL_NUM_TT GBL_NUM_USER GBL_NUM_VG GBL_OSKERNELTYPE GBL_OSKERNELTYPE_INT GBL_OSNAME GBL_OSRELEASE GBL_OSVERSION GBL_PROC_RUN_TIME GBL_PROC_SAMPLE GBL_RENICE_PRI_LIMIT GBL_RUN_QUEUE GBL_RUN_QUEUE_CUM GBL_RUN_QUEUE_HIGH GBL_SAMPLE GBL_SERIALNO GBL_STARTDATE GBL_STARTED_PROC GBL_STARTED_PROC_RATE GBL_STARTTIME GBL_STATDATE GBL_STATTIME GBL_SWAP_RESERVED_ONLY_UTIL GBL_SWAP_SPACE_AVAIL GBL_SWAP_SPACE_AVAIL_KB GBL_SWAP_SPACE_DEVICE_AVAIL GBL_SWAP_SPACE_DEVICE_UTIL GBL_SWAP_SPACE_MEM_AVAIL GBL_SWAP_SPACE_MEM_UTIL GBL_SWAP_SPACE_RESERVED GBL_SWAP_SPACE_RESERVED_UTIL GBL_SWAP_SPACE_USED GBL_SWAP_SPACE_USED_UTIL GBL_SWAP_SPACE_UTIL GBL_SWAP_SPACE_UTIL_CUM GBL_SWAP_SPACE_UTIL_HIGH GBL_SYSCALL GBL_SYSCALL_BYTE_RATE GBL_SYSCALL_RATE GBL_SYSCALL_RATE_CUM GBL_SYSCALL_RATE_HIGH GBL_SYSCALL_READ GBL_SYSCALL_READ_BYTE GBL_SYSCALL_READ_BYTE_CUM GBL_SYSCALL_READ_BYTE_RATE GBL_SYSCALL_READ_CUM GBL_SYSCALL_READ_PCT GBL_SYSCALL_READ_PCT_CUM GBL_SYSCALL_READ_RATE GBL_SYSCALL_READ_RATE_CUM GBL_SYSCALL_WRITE GBL_SYSCALL_WRITE_BYTE GBL_SYSCALL_WRITE_BYTE_CUM GBL_SYSCALL_WRITE_BYTE_RATE GBL_SYSCALL_WRITE_CUM GBL_SYSCALL_WRITE_PCT GBL_SYSCALL_WRITE_PCT_CUM GBL_SYSCALL_WRITE_RATE GBL_SYSCALL_WRITE_RATE_CUM GBL_SYSTEM_ID GBL_SYSTEM_TYPE GBL_SYSTEM_UPTIME_HOURS GBL_SYSTEM_UPTIME_SECONDS GBL_TT_OVERFLOW_COUNT Table Metrics -------------------- TBL_BUFFER_CACHE_AVAIL TBL_BUFFER_CACHE_HWM TBL_BUFFER_HEADER_AVAIL TBL_BUFFER_HEADER_USED TBL_BUFFER_HEADER_USED_HIGH TBL_BUFFER_HEADER_UTIL TBL_BUFFER_HEADER_UTIL_HIGH TBL_FILE_LOCK_USED TBL_FILE_LOCK_USED_HIGH TBL_FILE_TABLE_AVAIL TBL_FILE_TABLE_USED TBL_FILE_TABLE_USED_HIGH TBL_FILE_TABLE_UTIL TBL_FILE_TABLE_UTIL_HIGH TBL_INODE_CACHE_AVAIL TBL_INODE_CACHE_HIGH TBL_INODE_CACHE_USED TBL_MAX_USERS TBL_MSG_BUFFER_ACTIVE TBL_MSG_BUFFER_AVAIL TBL_MSG_BUFFER_HIGH TBL_MSG_BUFFER_USED TBL_MSG_TABLE_ACTIVE TBL_MSG_TABLE_AVAIL TBL_MSG_TABLE_USED TBL_MSG_TABLE_UTIL TBL_MSG_TABLE_UTIL_HIGH TBL_NUM_NFSDS TBL_PROC_TABLE_AVAIL TBL_PROC_TABLE_USED TBL_PROC_TABLE_UTIL TBL_PROC_TABLE_UTIL_HIGH TBL_PTY_AVAIL TBL_PTY_USED TBL_PTY_UTIL TBL_PTY_UTIL_HIGH TBL_SEM_TABLE_ACTIVE TBL_SEM_TABLE_AVAIL TBL_SEM_TABLE_USED TBL_SEM_TABLE_UTIL TBL_SEM_TABLE_UTIL_HIGH TBL_SHMEM_ACTIVE TBL_SHMEM_AVAIL TBL_SHMEM_HIGH TBL_SHMEM_TABLE_ACTIVE TBL_SHMEM_TABLE_AVAIL TBL_SHMEM_TABLE_USED TBL_SHMEM_TABLE_UTIL TBL_SHMEM_TABLE_UTIL_HIGH TBL_SHMEM_USED Process Metrics -------------------- PROC_ACTIVE_PROC PROC_APP_ID PROC_APP_NAME PROC_CPU_SYS_MODE_TIME PROC_CPU_SYS_MODE_TIME_CUM PROC_CPU_SYS_MODE_UTIL PROC_CPU_SYS_MODE_UTIL_CUM PROC_CPU_TOTAL_TIME PROC_CPU_TOTAL_TIME_CUM PROC_CPU_TOTAL_UTIL PROC_CPU_TOTAL_UTIL_CUM PROC_CPU_USER_MODE_TIME PROC_CPU_USER_MODE_TIME_CUM PROC_CPU_USER_MODE_UTIL PROC_CPU_USER_MODE_UTIL_CUM PROC_DISK_BLOCK_IO PROC_DISK_BLOCK_IO_CUM PROC_DISK_BLOCK_IO_RATE PROC_DISK_BLOCK_IO_RATE_CUM PROC_DISK_BLOCK_READ PROC_DISK_BLOCK_READ_CUM PROC_DISK_BLOCK_READ_RATE PROC_DISK_BLOCK_WRITE PROC_DISK_BLOCK_WRITE_CUM PROC_DISK_BLOCK_WRITE_RATE PROC_EUID PROC_FORCED_CSWITCH PROC_FORCED_CSWITCH_CUM PROC_GROUP_ID PROC_GROUP_NAME PROC_INTERVAL PROC_INTERVAL_ALIVE PROC_INTERVAL_CUM PROC_IO_BYTE PROC_IO_BYTE_CUM PROC_IO_BYTE_RATE PROC_IO_BYTE_RATE_CUM PROC_MAJOR_FAULT PROC_MAJOR_FAULT_CUM PROC_MEM_DATA_VIRT PROC_MEM_RES PROC_MEM_RES_HIGH 
PROC_MEM_STACK_VIRT PROC_MEM_VIRT PROC_MINOR_FAULT PROC_MINOR_FAULT_CUM PROC_NICE_PRI PROC_PAGEFAULT PROC_PAGEFAULT_RATE PROC_PAGEFAULT_RATE_CUM PROC_PARENT_PROC_ID PROC_PRI PROC_PROC_ARGV1 PROC_PROC_CMD PROC_PROC_ID PROC_PROC_NAME PROC_REVERSE_PRI PROC_RUN_TIME PROC_SIGNAL PROC_SIGNAL_CUM PROC_STARTTIME PROC_STATE PROC_STATE_FLAG PROC_STOP_REASON PROC_STOP_REASON_FLAG PROC_SYSCALL PROC_SYSCALL_CUM PROC_THREAD_COUNT PROC_TOP_CPU_INDEX PROC_TTY PROC_TTY_DEV PROC_UID PROC_USER_NAME PROC_VOLUNTARY_CSWITCH PROC_VOLUNTARY_CSWITCH_CUM Application Metrics -------------------- APP_ACTIVE_APP APP_ACTIVE_PROC APP_ALIVE_PROC APP_COMPLETED_PROC APP_CPU_SYS_MODE_TIME APP_CPU_SYS_MODE_UTIL APP_CPU_TOTAL_TIME APP_CPU_TOTAL_UTIL APP_CPU_TOTAL_UTIL_CUM APP_CPU_USER_MODE_TIME APP_CPU_USER_MODE_UTIL APP_DISK_BLOCK_IO APP_DISK_BLOCK_IO_RATE APP_DISK_BLOCK_READ APP_DISK_BLOCK_READ_RATE APP_DISK_BLOCK_WRITE APP_DISK_BLOCK_WRITE_RATE APP_INTERVAL APP_INTERVAL_CUM APP_IO_BYTE APP_IO_BYTE_RATE APP_MAJOR_FAULT APP_MAJOR_FAULT_RATE APP_MEM_RES APP_MEM_UTIL APP_MEM_VIRT APP_MINOR_FAULT APP_MINOR_FAULT_RATE APP_NAME APP_NUM APP_PRI APP_PRI_STD_DEV APP_PROC_RUN_TIME APP_REVERSE_PRI APP_REV_PRI_STD_DEV APP_SAMPLE APP_TIME Process By File Metrics -------------------- PROC_FILE_COUNT PROC_FILE_MODE PROC_FILE_NAME PROC_FILE_NUMBER PROC_FILE_OFFSET PROC_FILE_OPEN PROC_FILE_TYPE By Disk Metrics -------------------- BYDSK_AVG_REQUEST_QUEUE BYDSK_AVG_SERVICE_TIME BYDSK_BUSY_TIME BYDSK_CURR_QUEUE_LENGTH BYDSK_DEVNAME BYDSK_DEVNO BYDSK_DIRNAME BYDSK_ID BYDSK_INTERVAL BYDSK_INTERVAL_CUM BYDSK_PHYS_BYTE BYDSK_PHYS_BYTE_RATE BYDSK_PHYS_BYTE_RATE_CUM BYDSK_PHYS_IO BYDSK_PHYS_IO_RATE BYDSK_PHYS_IO_RATE_CUM BYDSK_PHYS_READ BYDSK_PHYS_READ_BYTE BYDSK_PHYS_READ_BYTE_RATE BYDSK_PHYS_READ_BYTE_RATE_CUM BYDSK_PHYS_READ_RATE BYDSK_PHYS_READ_RATE_CUM BYDSK_PHYS_WRITE BYDSK_PHYS_WRITE_BYTE BYDSK_PHYS_WRITE_BYTE_RATE BYDSK_PHYS_WRITE_BYTE_RATE_CUM BYDSK_PHYS_WRITE_RATE BYDSK_PHYS_WRITE_RATE_CUM BYDSK_QUEUE_0_UTIL BYDSK_QUEUE_2_UTIL BYDSK_QUEUE_4_UTIL BYDSK_QUEUE_8_UTIL BYDSK_QUEUE_X_UTIL BYDSK_REQUEST_QUEUE BYDSK_TIME BYDSK_UTIL BYDSK_UTIL_CUM File System Metrics -------------------- FS_BLOCK_SIZE FS_DEVNAME FS_DEVNO FS_DIRNAME FS_FRAG_SIZE FS_INODE_UTIL FS_MAX_INODES FS_MAX_SIZE FS_SPACE_RESERVED FS_SPACE_USED FS_SPACE_UTIL FS_TYPE Logical Volume Metrics -------------------- LV_AVG_READ_SERVICE_TIME LV_AVG_WRITE_SERVICE_TIME LV_DEVNO LV_DIRNAME LV_GROUP_NAME LV_INTERVAL LV_INTERVAL_CUM LV_LOGLP_LV LV_OPEN_LV LV_PHYSLV_SIZE LV_READ_BYTE_RATE LV_READ_BYTE_RATE_CUM LV_READ_RATE LV_READ_RATE_CUM LV_SPACE_UTIL LV_STATE_LV LV_TYPE LV_TYPE_LV LV_WRITE_BYTE_RATE LV_WRITE_BYTE_RATE_CUM LV_WRITE_RATE LV_WRITE_RATE_CUM By Network Interface Metrics -------------------- BYNETIF_COLLISION BYNETIF_COLLISION_1_MIN_RATE BYNETIF_COLLISION_RATE BYNETIF_COLLISION_RATE_CUM BYNETIF_DEFERRED BYNETIF_DEFERRED_RATE BYNETIF_ERROR BYNETIF_ERROR_1_MIN_RATE BYNETIF_ERROR_RATE BYNETIF_ERROR_RATE_CUM BYNETIF_ID BYNETIF_IN_BYTE BYNETIF_IN_BYTE_RATE BYNETIF_IN_BYTE_RATE_CUM BYNETIF_IN_PACKET BYNETIF_IN_PACKET_RATE BYNETIF_IN_PACKET_RATE_CUM BYNETIF_NAME BYNETIF_NET_TYPE BYNETIF_OUT_BYTE BYNETIF_OUT_BYTE_RATE BYNETIF_OUT_BYTE_RATE_CUM BYNETIF_OUT_PACKET BYNETIF_OUT_PACKET_RATE BYNETIF_OUT_PACKET_RATE_CUM BYNETIF_PACKET_RATE By Swap Metrics -------------------- BYSWP_SWAP_SPACE_AVAIL BYSWP_SWAP_SPACE_NAME BYSWP_SWAP_SPACE_USED BYSWP_SWAP_TYPE By CPU Metrics -------------------- BYCPU_ACTIVE BYCPU_CPU_CLOCK BYCPU_CPU_SYS_MODE_TIME BYCPU_CPU_SYS_MODE_TIME_CUM 
BYCPU_CPU_SYS_MODE_UTIL BYCPU_CPU_SYS_MODE_UTIL_CUM BYCPU_CPU_TOTAL_TIME BYCPU_CPU_TOTAL_TIME_CUM BYCPU_CPU_TOTAL_UTIL BYCPU_CPU_TOTAL_UTIL_CUM BYCPU_CPU_TYPE BYCPU_CPU_USER_MODE_TIME BYCPU_CPU_USER_MODE_TIME_CUM BYCPU_CPU_USER_MODE_UTIL BYCPU_CPU_USER_MODE_UTIL_CUM BYCPU_CSWITCH BYCPU_CSWITCH_CUM BYCPU_CSWITCH_RATE BYCPU_CSWITCH_RATE_CUM BYCPU_ID BYCPU_INTERRUPT BYCPU_INTERRUPT_RATE BYCPU_STATE Process By Memory Region Metrics -------------------- PROC_REGION_FILENAME PROC_REGION_PRIVATE_SHARED_FLAG PROC_REGION_PROT_FLAG PROC_REGION_REF_COUNT PROC_REGION_TYPE PROC_REGION_VIRT PROC_REGION_VIRT_ADDRS PROC_REGION_VIRT_DATA PROC_REGION_VIRT_OTHER PROC_REGION_VIRT_SHMEM PROC_REGION_VIRT_STACK PROC_REGION_VIRT_TEXT By Operation Metrics -------------------- BYOP_CLIENT_COUNT BYOP_CLIENT_COUNT_CUM BYOP_NAME BYOP_SERVER_COUNT BYOP_SERVER_COUNT_CUM Transaction Metrics -------------------- TT_ABORT TT_ABORT_CUM TT_ABORT_WALL_TIME TT_ABORT_WALL_TIME_CUM TT_APPNO TT_APP_NAME TT_CLIENT_CORRELATOR_COUNT TT_COUNT TT_COUNT_CUM TT_FAILED TT_FAILED_CUM TT_FAILED_WALL_TIME TT_FAILED_WALL_TIME_CUM TT_INFO TT_INPROGRESS_COUNT TT_INTERVAL TT_INTERVAL_CUM TT_MEASUREMENT_COUNT TT_NAME TT_SLO_COUNT TT_SLO_COUNT_CUM TT_SLO_PERCENT TT_SLO_THRESHOLD TT_TRAN_1_MIN_RATE TT_TRAN_ID TT_UID TT_UNAME TT_UPDATE TT_UPDATE_CUM TT_WALL_TIME TT_WALL_TIME_CUM TT_WALL_TIME_PER_TRAN TT_WALL_TIME_PER_TRAN_CUM Transaction Measurement Section Metrics -------------------- TTBIN_TRANS_COUNT TTBIN_TRANS_COUNT_CUM TTBIN_UPPER_RANGE Transaction Client Metrics -------------------- TT_CLIENT_ABORT TT_CLIENT_ABORT_CUM TT_CLIENT_ABORT_WALL_TIME TT_CLIENT_ABORT_WALL_TIME_CUM TT_CLIENT_ADDRESS TT_CLIENT_ADDRESS_FORMAT TT_CLIENT_TRAN_ID TT_CLIENT_COUNT TT_CLIENT_COUNT_CUM TT_CLIENT_FAILED TT_CLIENT_FAILED_CUM TT_CLIENT_FAILED_WALL_TIME TT_CLIENT_FAILED_WALL_TIME_CUM TT_CLIENT_INTERVAL TT_CLIENT_INTERVAL_CUM TT_CLIENT_SLO_COUNT TT_CLIENT_SLO_COUNT_CUM TT_CLIENT_UPDATE TT_CLIENT_UPDATE_CUM TT_CLIENT_WALL_TIME TT_CLIENT_WALL_TIME_CUM TT_CLIENT_WALL_TIME_PER_TRAN TT_CLIENT_WALL_TIME_PER_TRAN_CUM Transaction Instance Metrics -------------------- TT_INSTANCE_ID TT_INSTANCE_PROC_ID TT_INSTANCE_START_TIME TT_INSTANCE_STOP_TIME TT_INSTANCE_THREAD_ID TT_INSTANCE_UPDATE_COUNT TT_INSTANCE_UPDATE_TIME TT_INSTANCE_WALL_TIME Transaction User Defined Measurement Metrics -------------------- TT_USER_MEASUREMENT_AVG TT_USER_MEASUREMENT_MAX TT_USER_MEASUREMENT_MIN TT_USER_MEASUREMENT_NAME TT_USER_MEASUREMENT_STRING1024_VALUE TT_USER_MEASUREMENT_STRING32_VALUE TT_USER_MEASUREMENT_TYPE TT_USER_MEASUREMENT_VALUE Transaction Client User Defined Measurement Metrics -------------------- TT_CLIENT_USER_MEASUREMENT_AVG TT_CLIENT_USER_MEASUREMENT_MAX TT_CLIENT_USER_MEASUREMENT_MIN TT_CLIENT_USER_MEASUREMENT_NAME TT_CLIENT_USER_MEASUREMENT_STRING1024_VALUE TT_CLIENT_USER_MEASUREMENT_STRING32_VALUE TT_CLIENT_USER_MEASUREMENT_TYPE TT_CLIENT_USER_MEASUREMENT_VALUE Transaction Instance User Defined Measurement Metrics -------------------- TT_INSTANCE_USER_MEASUREMENT_AVG TT_INSTANCE_USER_MEASUREMENT_MAX TT_INSTANCE_USER_MEASUREMENT_MIN TT_INSTANCE_USER_MEASUREMENT_NAME TT_INSTANCE_USER_MEASUREMENT_STRING1024_VALUE TT_INSTANCE_USER_MEASUREMENT_STRING32_VALUE TT_INSTANCE_USER_MEASUREMENT_TYPE TT_INSTANCE_USER_MEASUREMENT_VALUE METRIC DEFINITIONS =================== APP_ACTIVE_APP -------------------- The number of applications that had processes active (consuming cpu resources) during the interval. 
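The next two entries, APP_ACTIVE_PROC and APP_ALIVE_PROC, are built from per-process alive-time/interval-time ratios. The short Python sketch below is illustrative only (it is not the GlancePlus implementation) and simply works through the four-second, two-process example used in those entries:

    # Illustrative sketch of the ratio arithmetic in the APP_ACTIVE_PROC and
    # APP_ALIVE_PROC entries that follow; not the collector's implementation.
    # Per the worked example in those entries, each process contributes
    # (seconds alive)/(interval length) to APP_ALIVE_PROC and
    # (seconds alive while consuming CPU)/(interval length) to APP_ACTIVE_PROC.

    INTERVAL = 4.0  # seconds, as in the example in the next two entries

    # (seconds alive, seconds alive while using CPU) for processes A and B
    processes = {"A": (4, 0), "B": (3, 2)}

    alive = sum(alive_s / INTERVAL for alive_s, _ in processes.values())
    active = sum(cpu_s / INTERVAL for _, cpu_s in processes.values())

    print(alive)   # 1.75 -> APP_ALIVE_PROC for this interval
    print(active)  # 0.5  -> APP_ACTIVE_PROC for this interval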
APP_ACTIVE_PROC -------------------- An active process is one that exists and consumes some CPU time. APP_ACTIVE_PROC is the sum of the alive-process-time/interval-time ratios of every process belonging to an application that is active (uses any CPU time) during an interval. The following diagram of a four second interval showing two processes, A and B, for an application should be used to understand the above definition. Note the difference between active processes, which consume CPU time, and alive processes which merely exist on the system.

             ----------- Seconds -----------
                1          2          3          4
   Proc
   ----      ----       ----       ----       ----
    A        live       live       live       live
    B      live/CPU   live/CPU     live       dead

Process A is alive for the entire four second interval, but consumes no CPU. A's contribution to APP_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to APP_ACTIVE_PROC. B's contribution to APP_ALIVE_PROC is 3*1/4. B contributes 2*1/4 to APP_ACTIVE_PROC. Thus, for this interval, APP_ACTIVE_PROC equals 0.5 and APP_ALIVE_PROC equals 1.75. Because a process may be alive but not active, APP_ACTIVE_PROC will always be less than or equal to APP_ALIVE_PROC. This metric indicates the number of processes in an application group that are competing for the CPU. This metric is useful, along with other metrics, for comparing loads placed on the system by different groups of processes. On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise. APP_ALIVE_PROC -------------------- An alive process is one that exists on the system. APP_ALIVE_PROC is the sum of the alive-process-time/interval-time ratios for every process belonging to a given application. The following diagram of a four second interval showing two processes, A and B, for an application should be used to understand the above definition. Note the difference between active processes, which consume CPU time, and alive processes which merely exist on the system.

             ----------- Seconds -----------
                1          2          3          4
   Proc
   ----      ----       ----       ----       ----
    A        live       live       live       live
    B      live/CPU   live/CPU     live       dead

Process A is alive for the entire four second interval but consumes no CPU. A's contribution to APP_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to APP_ACTIVE_PROC. B's contribution to APP_ALIVE_PROC is 3*1/4. B contributes 2*1/4 to APP_ACTIVE_PROC. Thus, for this interval, APP_ACTIVE_PROC equals 0.5 and APP_ALIVE_PROC equals 1.75. Because a process may be alive but not active, APP_ACTIVE_PROC will always be less than or equal to APP_ALIVE_PROC. On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise. APP_COMPLETED_PROC -------------------- The number of processes in this group that completed during the interval. On non HP-UX systems, this metric is derived from sampled process data.
Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise. APP_CPU_SYS_MODE_TIME -------------------- The time, in seconds, during the interval that the CPU was in system mode for processes in this group. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. APP_CPU_SYS_MODE_UTIL -------------------- The percentage of time during the interval that the CPU was used in system mode for processes in this group. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. High system CPU utilizations are normal for IO intensive groups. Abnormally high system CPU utilization can indicate that a hardware problem is causing a high interrupt rate. It can also indicate programs that are not making efficient system calls. APP_CPU_TOTAL_TIME -------------------- The total CPU time, in seconds, devoted to processes in this group during the interval. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. APP_CPU_TOTAL_UTIL -------------------- The percentage of the total CPU time devoted to processes in this group during the interval. This indicates the relative CPU load placed on the system by processes in this group. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. Large values for this metric may indicate that this group is causing a CPU bottleneck. This would be normal in a computation-bound workload, but might mean that processes are using excessive CPU time and perhaps looping. If the “other” application shows significant amounts of CPU, you may want to consider tuning your parm file so that process activity is accounted for in known applications. APP_CPU_TOTAL_UTIL = APP_CPU_SYS_MODE_UTIL + APP_CPU_USER_MODE_UTIL NOTE: On Windows, the sum of the APP_CPU_TOTAL_UTIL metrics may not equal GBL_CPU_TOTAL_UTIL. Microsoft states that “this is expected behavior” because the GBL_CPU_TOTAL_UTIL metric is taken from the NT performance library Processor objects while the APP_CPU_TOTAL_UTIL metrics are taken from the Process objects. 
Microsoft states that there can be CPU time accounted for in the Processor system objects that may not be seen in the Process objects. APP_CPU_TOTAL_UTIL_CUM -------------------- The average CPU time per interval for processes in this group over the cumulative collection time, or since the last PRM configuration change on HP-UX. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. APP_CPU_USER_MODE_TIME -------------------- The time, in seconds, that processes in this group were in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. APP_CPU_USER_MODE_UTIL -------------------- The percentage of time that processes in this group were using the CPU in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. High user mode CPU percentages are normal for computation- intensive groups. Low values of user CPU utilization compared to relatively high values for APP_CPU_SYS_MODE_UTIL can indicate a hardware problem or improperly tuned programs in this group. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. APP_DISK_BLOCK_IO -------------------- The number of block IOs to the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. APP_DISK_BLOCK_IO_RATE -------------------- The number of block IOs per second to the file system buffer cache for processes in this group during the interval. 
On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. APP_DISK_BLOCK_READ -------------------- The number of block reads from the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. APP_DISK_BLOCK_READ_RATE -------------------- The number of block reads per second from the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. 
Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. APP_DISK_BLOCK_WRITE -------------------- The number of block writes to the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. APP_DISK_BLOCK_WRITE_RATE -------------------- The number of block writes per second from the file system buffer cache for processes in this group during the interval. On Sun 5.X (Solaris 2.X or later), these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. APP_INTERVAL -------------------- The amount of time in the interval. APP_INTERVAL_CUM -------------------- The amount of time over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. APP_IO_BYTE -------------------- The number of characters (in KB) transferred for processes in this group to all devices during the interval. This includes IO to disk, terminal, tape and printers. APP_IO_BYTE_RATE -------------------- The number of characters (in KB) per second transferred for processes in this group to all devices during the interval. This includes IO to disk, terminal, tape and printers. APP_MAJOR_FAULT -------------------- The number of major page faults that required a disk IO for processes in this group during the interval. APP_MAJOR_FAULT_RATE -------------------- The number of major page faults per second that required a disk IO for processes in this group during the interval. APP_MEM_RES -------------------- On Unix systems, this is the sum of the size (in KB) of resident memory for processes in this group that were alive at the end of the interval. This consists of text, data, stack, and shared memory regions. On HP-UX, since PROC_MEM_RES typically takes shared region references into account, this approximates the total resident (physical) memory consumed by all processes in this group. On all other Unix systems, this is the sum of the resident memory region sizes for all processes in this group. When the resident memory size for processes includes shared regions, such as shared memory and library text and data, the shared regions are counted multiple times in this sum. For example, if the application contains four processes that are attached to a 500MB shared memory region that is all resident in physical memory, then 2000MB is contributed towards the sum in this metric. As such, this metric can overestimate the resident memory being used by processes in this group when they share memory regions. Refer to the help text for PROC_MEM_RES for additional information. On Windows, this is the sum of the size (in KB) of the working sets for processes in this group during the interval. The working set counts memory pages referenced recently by the threads making up this group. Note that the size of the working set is often larger than the amount of pagefile space consumed. APP_MEM_UTIL -------------------- On Unix systems, this is the approximate percentage of the system's physical memory used as resident memory by processes in this group that were alive at the end of the interval. This metric summarizes process private and shared memory in each application. On Windows, this is an estimate of the percentage of the system's physical memory allocated for working set memory by processes in this group during the interval. On HP-UX, this consists of text, data, stack, as well as the process' portion of shared memory regions (such as shared libraries, text segments, and shared data). The sum of the shared region pages is typically divided by the number of references. On Unix systems, each application's total resident memory is summed. This value is then divided by the summed total of all applications' resident memory and then multiplied by the ratio of available user memory versus total physical memory to arrive at a calculated percentage of the total physical memory.
It must be remembered, however, that this is a calculated metric that shows the approximate percentage of the physical memory used as resident memory by the processes in this application during the interval. On Windows, the sum of the working set sizes for each process in this group is kept as APP_MEM_RES. This value is divided by the sum of APP_MEM_RES for all applications defined on the system to come up with a ratio of this application's working set size to the total. This value is then multiplied by the ratio of available user memory versus total physical memory to arrive at a calculated percent of total physical memory. APP_MEM_VIRT -------------------- On Unix systems, this is the sum (in KB) of virtual memory for processes in this group that were alive at the end of the interval. This consists of text, data, stack, and shared memory regions. On HP-UX, since PROC_MEM_VIRT typically takes shared region references into account, this approximates the total virtual memory consumed by all processes in this group. On all other Unix systems, this is the sum of the virtual memory region sizes for all processes in this group. When the virtual memory size for processes includes shared regions, such as shared memory and library text and data, the shared regions are counted multiple times in this sum. For example, if the application contains four processes that are attached to a 500MB shared memory region, then 2000MB is reported in this metric. As such, this metric can overestimate the virtual memory being used by processes in this group when they share memory regions. On Windows, this is the sum (in KB) of paging file space used for all processes in this group during the interval. Groups of processes may have working set sizes (APP_MEM_RES) larger than the size of their pagefile space. APP_MINOR_FAULT -------------------- The number of minor page faults satisfied in memory (a page was reclaimed from one of the free lists) for processes in this group during the interval. APP_MINOR_FAULT_RATE -------------------- The number of minor page faults per second satisfied in memory (pages were reclaimed from one of the free lists) for processes in this group during the interval. APP_NAME -------------------- The name of the application (up to 20 characters). This comes from the parm file where the applications are defined. The application called “other” captures all processes not aggregated into applications specifically defined in the parm file. In other words, if no applications are defined in the parm file, then all process data would be reflected in the “other” application. APP_NUM -------------------- The sequentially assigned number of this application. APP_PRI -------------------- On Unix systems, this is the average priority of the processes in this group during the interval. On Windows, this is the average base priority of the processes in this group during the interval. APP_PRI_STD_DEV -------------------- The standard deviation of priorities of the processes in this group during the interval. This metric is available on HP-UX 10.20. APP_PROC_RUN_TIME -------------------- The average run time for processes in this group that completed during the interval. On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. 
Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise. APP_REVERSE_PRI -------------------- The average priority of the processes in this group during the interval. Lower values for this metric always imply higher processing priority. The range is from 0 to 127. Since priority ranges can be customized on this OS, this metric provides a standardized way of interpreting priority that is consistent with other versions of Unix. See also the APP_PRI metric. This is derived from the PRI field of the ps command when the - c option is not used. APP_REV_PRI_STD_DEV -------------------- The standard deviation of priorities of the processes in this group during the interval. Priorities are mapped into a traditional lower value implies higher priority scheme. APP_SAMPLE -------------------- The number of samples of process data that have been averaged or accumulated during this sample. APP_TIME -------------------- The end time of the measurement interval. BYCPU_ACTIVE -------------------- Indicates whether or not this CPU is online. A CPU that is online is considered active. For HP-UX and certain versions of Linux, the sar(1M) command allows you to check the status of the system CPUs. For SUN and DEC, the commands psrinfo(1M) and psradm(1M) allow you to check or change the status of the system CPUs. For AIX, the pstat(1) command allows you to check the status of the system CPUs. BYCPU_CPU_CLOCK -------------------- The clock speed of the CPU in the current slot. The clock speed is in MHz for the selected CPU. BYCPU_CPU_SYS_MODE_TIME -------------------- The time, in seconds, that this CPU was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. BYCPU_CPU_SYS_MODE_TIME_CUM -------------------- The time, in seconds, that this CPU was in system mode over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. BYCPU_CPU_SYS_MODE_UTIL -------------------- The percentage of time that this CPU was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. BYCPU_CPU_SYS_MODE_UTIL_CUM -------------------- The percentage of time that this CPU was in system mode over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. BYCPU_CPU_TOTAL_TIME -------------------- The total time, in seconds, that this CPU was not idle during the interval. BYCPU_CPU_TOTAL_TIME_CUM -------------------- The total time, in seconds, that this CPU was not idle over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. BYCPU_CPU_TOTAL_UTIL -------------------- The percentage of time that this CPU was not idle during the interval. BYCPU_CPU_TOTAL_UTIL_CUM -------------------- The average percentage of time that this CPU was not idle over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. BYCPU_CPU_TYPE -------------------- The type of processor in the current slot. BYCPU_CPU_USER_MODE_TIME -------------------- The time, in seconds, during the interval that this CPU was in user mode. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. BYCPU_CPU_USER_MODE_TIME_CUM -------------------- The time, in seconds, that this CPU was in user mode over the cumulative collection time. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. BYCPU_CPU_USER_MODE_UTIL -------------------- The percentage of time that this CPU was in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. BYCPU_CPU_USER_MODE_UTIL_CUM -------------------- The average percentage of time that this CPU was in user mode over the cumulative collection time. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. BYCPU_CSWITCH -------------------- The number of context switches for this CPU during the interval. 
On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. BYCPU_CSWITCH_CUM -------------------- The number of context switches for this CPU over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. BYCPU_CSWITCH_RATE -------------------- The average number of context switches per second for this CPU during the interval. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. BYCPU_CSWITCH_RATE_CUM -------------------- The average number of context switches per second for this CPU over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. BYCPU_ID -------------------- The ID number of this CPU. On some Unix systems, such as SUN, CPUs are not sequentially numbered. BYCPU_INTERRUPT -------------------- The number of device interrupts for this CPU during the interval. BYCPU_INTERRUPT_RATE -------------------- The average number of device interrupts per second for this CPU during the interval. On HP-UX, a value of “na” is displayed on a system with multiple CPUs. BYCPU_STATE -------------------- A text string indicating the current state of a processor. On HP-UX, this is either “Enabled”, “Disabled” or “Unknown”. On AIX, this is either “Idle/Offline” or “Online”. On all other systems, this is either “Offline”, “Online” or “Unknown”. BYDSK_AVG_REQUEST_QUEUE -------------------- The average number of IO requests that were in the wait and service queues for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. For example, if 4 intervals have passed with average queue lengths of 0, 2, 0, and 6, then the average number of IO requests over all intervals would be 2. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. 
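The following short Python sketch (illustrative only, not part of GlancePlus) reproduces the averaging described in the BYDSK_AVG_REQUEST_QUEUE entry above, assuming equal-length intervals as in that example:

    # Illustrative sketch: average the per-interval average queue lengths
    # over the cumulative collection time, assuming equal-length intervals.

    def avg_request_queue(interval_queue_lengths):
        """Cumulative average of the per-interval average queue lengths."""
        if not interval_queue_lengths:
            return 0.0
        return sum(interval_queue_lengths) / len(interval_queue_lengths)

    # The example from the entry above: four intervals with average queue
    # lengths of 0, 2, 0, and 6 give a cumulative average of 2.
    print(avg_request_queue([0, 2, 0, 6]))  # 2.0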
BYDSK_AVG_SERVICE_TIME -------------------- The average time, in milliseconds, that this disk device spent processing each disk request during the interval. For example, a value of 5.14 would indicate that disk requests during the last interval took on average slightly longer than five one- thousandths of a second to complete for this device. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. This is a measure of the speed of the disk, because slower disk devices typically show a larger average service time. Average service time is also dependent on factors such as the distribution of I/O requests over the interval and their locality. It can also be influenced by disk driver and controller features such as I/O merging and command queueing. Note that this service time is measured from the perspective of the kernel, not the disk device itself. For example, if a disk device can find the requested data in its cache, the average service time could be quicker than the speed of the physical disk hardware. This metric can be used to help determine which disk devices are taking more time than usual to process requests. BYDSK_BUSY_TIME -------------------- The time, in seconds, that this disk device was busy transferring data during the interval. On HP-UX, this is the time, in seconds, during the interval that the disk device had IO in progress from the point of view of the Operating System. In other words, the time, in seconds, the disk was busy servicing requests for this device. BYDSK_CURR_QUEUE_LENGTH -------------------- The average number of physical IO requests that were in the wait and service queues for this disk device during the interval. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_DEVNAME -------------------- The name of this disk device. On HP-UX, the name identifying the specific disk spindle is the hardware path which specifies the address of the hardware components leading to the disk device. On SUN, these names are the same disk names displayed by “iostat”. On AIX, this is the path name string of this disk device. This is the fsname parameter in the mount(1M) command. If more than one file system is contained on a device (that is, the device is partitioned), this is indicated by an asterisk (“*”) at the end of the path name. On OSF1, this is the path name string of this disk device. This is the file-system parameter in the mount(1M) command. On Windows, this is the unit number of this disk device. BYDSK_DEVNO -------------------- Major / Minor number of the device. BYDSK_DIRNAME -------------------- The name of the file system directory mounted on this disk device. If more than one file system is mounted on this device, “Multiple FS” is seen. BYDSK_ID -------------------- The ID of the current disk device. BYDSK_INTERVAL -------------------- The amount of time in the interval. BYDSK_INTERVAL_CUM -------------------- The amount of time over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. BYDSK_PHYS_BYTE -------------------- The number of KBs of physical IOs transferred to or from this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_BYTE_RATE -------------------- The average KBs per second transferred to or from this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_BYTE_RATE_CUM -------------------- The average number of KBs per second of physical reads and writes to or from this disk device over the cumulative collection time. On Unix systems, this includes all types of physical disk IOs including file system, virtual memory, and raw IOs. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. BYDSK_PHYS_IO -------------------- The number of physical IOs for this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw reads. BYDSK_PHYS_IO_RATE -------------------- The average number of physical IO requests per second for this disk device during the interval. On Unix systems, all types of physical disk IOs are counted, including file system IO, virtual memory and raw IO. BYDSK_PHYS_IO_RATE_CUM -------------------- The average number of physical reads and writes per second for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. BYDSK_PHYS_READ -------------------- The number of physical reads for this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On AIX, this is an estimated value based on the ratio of read bytes to total bytes transferred. The actual number of reads is not tracked by the kernel. This is calculated as BYDSK_PHYS_READ = BYDSK_PHYS_IO * (BYDSK_PHYS_READ_BYTE / BYDSK_PHYS_IO_BYTE) BYDSK_PHYS_READ_BYTE -------------------- The KBs transferred from this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_READ_BYTE_RATE -------------------- The average KBs per second transferred from this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_READ_BYTE_RATE_CUM -------------------- The average number of KBs per second of physical reads from this disk device over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. BYDSK_PHYS_READ_RATE -------------------- The average number of physical reads per second for this disk device during the interval. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On AIX, this is an estimated value based on the ratio of read bytes to total bytes transferred. The actual number of reads is not tracked by the kernel. This is calculated as BYDSK_PHYS_READ_RATE = BYDSK_PHYS_IO_RATE * (BYDSK_PHYS_READ_BYTE / BYDSK_PHYS_IO_BYTE) BYDSK_PHYS_READ_RATE_CUM -------------------- The average number of physical reads per second for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. BYDSK_PHYS_WRITE -------------------- The number of physical writes for this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On AIX, this is an estimated value based on the ratio of write bytes to total bytes transferred because the actual number of writes is not tracked by the kernel. This is calculated as BYDSK_PHYS_WRITE = BYDSK_PHYS_IO * (BYDSK_PHYS_WRITE_BYTE / BYDSK_PHYS_IO_BYTE) BYDSK_PHYS_WRITE_BYTE -------------------- The KBs transferred to this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_WRITE_BYTE_RATE -------------------- The average KBs per second transferred to this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system, virtual memory, and raw IO. BYDSK_PHYS_WRITE_BYTE_RATE_CUM -------------------- The average number of KBs per second of physical writes to this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. BYDSK_PHYS_WRITE_RATE -------------------- The average number of physical writes per second for this disk device during the interval. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On AIX, this is an estimated value based on the ratio of write bytes to total bytes transferred. The actual number of writes is not tracked by the kernel. This is calculated as BYDSK_PHYS_WRITE_RATE = BYDSK_PHYS_IO_RATE * (BYDSK_PHYS_WRITE_BYTE / BYDSK_PHYS_IO_BYTE) BYDSK_PHYS_WRITE_RATE_CUM -------------------- The average number of physical writes per second for this disk device over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. BYDSK_QUEUE_0_UTIL -------------------- The percentage of intervals during which there were no IO requests pending for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. For example if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 1.5, 0, and 3, then the value for this metric would be 50% since 50% of the intervals had a zero queue length. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_QUEUE_2_UTIL -------------------- The percentage of intervals during which there were 1 or 2 IO requests pending for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. For example if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 1, 0, and 2, then the value for this metric would be 50% since 50% of the intervals had a 1-2 queue length. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_QUEUE_4_UTIL -------------------- The percentage of intervals during which there were 3 or 4 IO requests waiting to use this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. For example if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 3, 0, and 4, then the value for this metric would be 50% since 50% of the intervals had a 3-4 queue length. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. 
Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_QUEUE_8_UTIL -------------------- The percentage of intervals during which there were between 5 and 8 IO requests pending for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. For example if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 8, 0, and 5, then the value for this metric would be 50% since 50% of the intervals had a 5-8 queue length. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_QUEUE_X_UTIL -------------------- The percentage of intervals during which there were more than 8 IO requests pending for this disk device over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. For example if 4 intervals have passed (that is, 4 screen updates) and the average queue length for these intervals was 0, 9, 0, and 10, then the value for this metric would be 50% since 50% of the intervals had queue length greater than 8. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_REQUEST_QUEUE -------------------- The average number of IO requests that were in the wait queue for this disk device during the interval. These requests are the physical requests (as opposed to logical IO requests). Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. BYDSK_TIME -------------------- The time of day of the interval. BYDSK_UTIL -------------------- On HP-UX, this is the percentage of the time during the interval that the disk device had IO in progress from the point of view of the Operating System. In other words, the utilization or percentage of time busy servicing requests for this device. On the non-HP-UX systems, this is the percentage of the time that this disk device was busy transferring data during the interval. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. 
The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. This is a measure of the ability of the IO path to meet the transfer demands being placed on it. Slower disk devices may show a higher utilization with lower IO rates than faster disk devices such as disk arrays. A value of greater than 50% utilization over time may indicate that this device or its IO path is a bottleneck, and the access pattern of the workload, database, or files may need reorganizing for better balance of disk IO load. BYDSK_UTIL_CUM -------------------- On HP-UX, this is the percentage of the time that this disk device had IO in progress from the point of view of the Operating System over the cumulative collection time. In other words, this is the utilization or percentage of time busy servicing requests for this device. On all other Unix systems, this is the percentage of the time that this disk device was busy transferring data over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. This is a measure of the ability of the IO path to meet the transfer demands being placed on it. Slower disk devices may show a higher utilization with lower IO rates than faster disk devices such as disk arrays. A value of greater than 50% utilization over time may indicate that this device or its IO path is a bottleneck, and the access pattern of the workload, database, or files may need reorganizing for better balance of disk IO load. BYNETIF_COLLISION -------------------- The number of physical collisions that occurred on the network interface during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets. This data is not collected for non-broadcasting devices, such as loopback (lo), and is always zero. For HP-UX, this will be the same as the sum of the “Single Collision Frames“, ”Multiple Collision Frames“, ”Late Collisions“, and ”Excessive Collisions“ values from the output of the ”lanadmin“ utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For most other Unix systems, this is the same as the sum of the “Coll” column from the “netstat -i” command (“collisions” from the “netstat -i -e” command on Linux) for a network device. See also netstat(1). AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. Physical statistics are packets recorded by the network drivers. 
These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_COLLISION_1_MIN_RATE -------------------- The number of physical collisions per minute on the network interface during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets. This data is not collected for non-broadcasting devices, such as loopback (lo), and is always zero. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_COLLISION_RATE -------------------- The number of physical collisions per second on the network interface during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets. This data is not collected for non-broadcasting devices, such as loopback (lo), and is always zero. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_COLLISION_RATE_CUM -------------------- The average number of physical collisions per second on the network interface over the cumulative collection time. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. 
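The “collisions versus outbound packets” comparison above can be expressed as a simple percentage. The following minimal Python sketch uses invented interval counts (the variable names are illustrative, not values exported by the product):

    # Hypothetical interval counts for one network interface
    collisions = 120      # collisions during the interval
    out_packets = 4800    # outbound packets during the interval

    # Collisions as a percentage of outbound packets; a ratio that keeps
    # rising from interval to interval suggests growing congestion.
    collision_pct = (100.0 * collisions / out_packets) if out_packets else 0.0
    print("collisions vs outbound packets: %.1f%%" % collision_pct)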
This metric does not currently include deferred packets. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. This data is not collected for non-broadcasting devices, such as loopback (lo), and is always zero. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_DEFERRED -------------------- The number of physical outbound packets that were deferred due to the network being in use during the interval. On Unix systems, this data is not available for loop-back (lo) devices and is always zero. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_DEFERRED_RATE -------------------- The number of physical outbound packets per second that were deferred due to the network being in use during the interval. On Unix systems, this data is not available for loop-back (lo) devices and is always zero. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_ERROR -------------------- The number of physical errors that occurred on the network interface during the interval. An increasing number of errors may indicate a hardware problem in the network. On Unix systems, this data is not available for loop-back (lo) devices and is always zero. For HP-UX, this will be the same as the sum of the “Inbound Errors” and “Outbound Errors” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of “Ierrs” (RX-ERR on Linux) and “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a network device. See also netstat(1). Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. 
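Because utilities such as “lanadmin” and “netstat” report running totals, an interval error count such as this one is effectively the difference between two successive readings of those counters. A minimal Python sketch with made-up counter values (not actual command output):

    # Cumulative error counters (inbound + outbound) sampled twice
    errors_previous = 10523     # reading at the previous sample
    errors_current = 10541      # reading at the current sample
    interval_sec = 60.0         # seconds between the two samples

    errors_this_interval = errors_current - errors_previous
    error_rate_per_sec = errors_this_interval / interval_sec
    print(errors_this_interval, round(error_rate_per_sec, 2))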
This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_ERROR_1_MIN_RATE -------------------- The number of physical errors per minute on the network interface during the interval. On Unix systems, this data is not available for loop-back (lo) devices and is always zero. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_ERROR_RATE -------------------- The number of physical errors per second on the network interface during the interval. On Unix systems, this data is not available for loop-back (lo) devices and is always zero. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_ERROR_RATE_CUM -------------------- The average number of physical errors per second on the network interface over the cumulative collection time. On Unix systems, this data is not available for loop-back (lo) devices and is always zero. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. 
This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_ID -------------------- The ID number of the network interface. BYNETIF_IN_BYTE -------------------- The number of KBs received from the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_IN_BYTE_RATE -------------------- The number of KBs per second received from the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_IN_BYTE_RATE_CUM -------------------- The average number of KBs per second received from the network via this interface over the cumulative collection time. Only the bytes in packets that carry data are included in this rate. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. 
This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_IN_PACKET -------------------- The number of successful physical packets received through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. For HP-UX, this will be the same as the sum of the “Inbound Unicast Packets“ and ”Inbound Non-Unicast Packets“ values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Ipkts” column (RX-OK on Linux) from the “netstat -i” command for a network device. See also netstat(1). Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_IN_PACKET_RATE -------------------- The number of successful physical packets per second received through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_IN_PACKET_RATE_CUM -------------------- The average number of physical packets per second received through the network interface over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. 
Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_NAME -------------------- The name of the network interface. For HP-UX 11.0 and beyond, these are the same names that appear in the “Description” field of the “lanadmin” command output. On all other Unix systems, these are the same names that appear in the “Name” column of the “netstat -i” command. Some examples of device names are: lo - loop-back driver ln - Standard Ethernet driver en - Standard Ethernet driver le - Lance Ethernet driver ie - Intel Ethernet driver tr - Token-Ring driver et - Ether Twist driver bf - fiber optic driver All of the device names will have the unit number appended to the name. For example, a loop-back device in unit 0 will be “lo0”. BYNETIF_NET_TYPE -------------------- The type of network device the interface communicates through. Lan - local area network card Loop - software loopback interface (not tied to a hardware device) Loop6 - software loopback interface IPv6 (not tied to a hardware device) Serial - serial modem port Vlan - virtual lan Wan - wide area network card Other - hardware network interface type is unknown. BYNETIF_OUT_BYTE -------------------- The number of KBs sent to the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_OUT_BYTE_RATE -------------------- The number of KBs per second sent to the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. 
This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_OUT_BYTE_RATE_CUM -------------------- The average number of KBs per second sent to the network via this interface over the cumulative collection time. Only the bytes in packets that carry data are included in this rate. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_OUT_PACKET -------------------- The number of successful physical packets sent through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. For HP-UX, this will be the same as the sum of the “Outbound Unicast Packets“ and ”Outbound Non-Unicast Packets“ values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Opkts” column (TX-OK on Linux) from the “netstat -i” command for a network device. See also netstat(1). Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_OUT_PACKET_RATE -------------------- The number of successful physical packets per second sent through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. 
Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_OUT_PACKET_RATE_CUM -------------------- The average number of successful physical packets per second sent through the network interface over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYNETIF_PACKET_RATE -------------------- The number of successful physical packets per second sent and received through the network interface during the interval. Successful packets are those that have been processed without errors or collisions. Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show “na” for the physical statistics since there is no network driver activity. Logical statistics are packets seen only by the Interface Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. BYOP_CLIENT_COUNT -------------------- The number of current NFS operations that the local machine has processed as an NFS client during the interval. A host on the network can act both as a client and as a server at the same time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last.
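The relationship between a per-interval counter such as BYOP_CLIENT_COUNT and its cumulative (“_CUM”) counterpart can be pictured with a short sketch (Python, with invented sample values; this models only how the two forms relate, not how the collector itself is implemented):

    # NFS client operation counts observed in four successive intervals
    per_interval = [12, 0, 7, 30]

    # The _CUM form accumulates the per-interval values from the start
    # of the cumulative collection time (or the last counter reset).
    cumulative = 0
    for count in per_interval:
        cumulative += count
        print("interval:", count, "cumulative:", cumulative)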
BYOP_CLIENT_COUNT_CUM -------------------- The number of current NFS operations that the local machine has processed as an NFS client over the cumulative collection time. A host on the network can act both as a client and as a server at the same time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. BYOP_NAME -------------------- String mnemonic for the NFS operation. One of the following:

For NFS Version 2

  Name         Operation/Action
  ------------------------------------
  getattr      Return the current attributes of a file.
  setattr      Set the attributes of a file and returns the new attributes.
  lookup       Return the attributes of a file.
  readlink     Return the string in the symbolic link of a file.
  read         Return data from a file.
  write        Put data into a file.
  create       Create a file.
  remove       Remove a file.
  rename       Give a file a new name.
  link         Create a hard link to a file.
  symlink      Create a symbolic link to a file.
  mkdir        Create a directory.
  rmdir        Remove a directory.
  readdir      Read a directory entry.
  statfs       Return mounted file system information.
  null         Verify NFS service connections and timing. On HP-UX, no actual work done.
  writecache   Flush the server write cache if a special write cache exists. Most systems use the file buffer cache and not a special server cache. Not used on HP-UX.
  root         Find root file system handle (probably obsolete). Not used on HP-UX.

For NFS Version 3

  Name         Operation/Action
  ------------------------------------
  getattr      Return the current attributes of a file.
  setattr      Set the attributes of a file and returns the new attributes.
  lookup       Return the attributes of a file.
  access       Check access permissions of a user.
  readlink     Return the string in the symbolic link of a file.
  read         Return data from a file.
  write        Put data into a file.
  create       Create a file.
  mkdir        Make a directory.
  symlink      Create a symbolic link to a file.
  mknod        Create a special device.
  remove       Remove a file.
  rmdir        Remove a directory.
  rename       Give a file a new name.
  link         Create a hard link to a file.
  readdir      Read a directory entry.
  readdirplus  Extended read of a directory entry.
  fsstat       Get dynamic file system information.
  fsinfo       Get static file system information.
  pathconf     Retrieve POSIX information.
  commit       Commit cached data on server to stable storage.
  null         Verify NFS services. No actual work done.

BYOP_SERVER_COUNT -------------------- The number of current NFS operations that the local machine has processed as an NFS server during the interval. A host on the network can act both as a client and as a server at the same time. BYOP_SERVER_COUNT_CUM -------------------- The number of current NFS operations that the local machine has processed as an NFS server over the cumulative collection time. A host on the network can act both as a client and as a server at the same time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. BYSWP_SWAP_SPACE_AVAIL -------------------- The capacity (in MB) for swapping in this swap area. On HP-UX, for “device” type swap, this value is constant.
However, for “filesys” swap this value grows as needed. File system swap grows in units of “SWCHUNKS” x DEV_BSIZE bytes, which is typically 2MB. This metric is similar to the “AVAIL” parameters returned from /usr/sbin/swapinfo. For “memory” type swap, this value also grows as needed or as possible, given that any memory reserved for swap cannot be used for normal virtual memory. Note that this is potential swap space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be usable. For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB remains unusable and is considered wasted space. On SUN, this is the same as (blocks * .5)/1024, reported by the “swap -l” command. On AIX, this metric is set to “na” for inactive swap devices. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. BYSWP_SWAP_SPACE_NAME -------------------- On Unix systems, this is the name of the device file or file system where the swap space is located. On HP-UX, part of the system's physical memory may be allocated as a pseudo-swap device. It is enabled by setting the “SWAPMEM_ON” kernel parameter to 1. On SunOS 5.X, part of the system's physical memory may be allocated as a pseudo-swap device. Also note, “/tmp” is usually configured as a memory based file system and is not used for swap space. Therefore, it will not be listed with the swap devices. This is noted because “df” uses the label “swap” for the “/tmp” file system which may be confusing. See tmpfs(7). BYSWP_SWAP_SPACE_USED -------------------- The amount of swap space (in MB) used in this area. On HP-UX, this value is similar to the “USED” column returned by the /usr/sbin/swapinfo command. On SUN, “Used” indicates amount written to disk (or locked in memory), rather than reserved. Swap space is reserved (by decrementing a counter) when virtual memory for a program is created. This is the same as (blocks - free) * .5/1024, reported by the “swap -l” command. On SUN, global swap space is tracked through the operating system. Device swap space is tracked through the devices. For this reason, the amount of swap space used may differ between the global and by-device metrics. Sometimes pages that are marked to be swapped to disk by the operating system are never swapped. The operating system records this as used swap space, but the devices do not, since no physical IOs occur. (Metrics with the prefix “GBL” are global and metrics with the prefix “BYSWP” are by device.) On AIX, this metric is set to “na” for inactive swap devices. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. BYSWP_SWAP_TYPE -------------------- The type of swap space allocated on the system. On HP-UX and SUN, types of swap space are device, file system (“filesys”), or memory. “Device” swap is accessed directly without going through the file system, and is therefore faster than “filesys” swap. “Filesys” swap can be to a local or NFS mounted swap file. “Memory” swap is space in the system's physical memory reserved for pseudo-swap for running processes. Using pseudo-swap means the pages are simply locked in memory rather than copied to a swap area. On SUN, note that “/tmp” is usually configured as a memory based file system and is not used for swap space. Therefore, it will not be listed with the swap devices, and “swap” or “tmpfs” will not be swap types. This is noted because “df” uses the label “swap” for the “/tmp” file system which may be confusing. 
See tmpfs(7). On AIX, “Device” swap is accessed directly without going through the file system. For “Device” swap, the device is specially allocated for swapping purposes only. The swap is often referred to as paging to paging space. FS_BLOCK_SIZE -------------------- The maximum block size of this file system, in bytes. A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. FS_DEVNAME -------------------- On Unix systems, this is the path name string of the current device. On Windows, this is the disk drive string of the current device. On HP-UX, this is the “fsname” parameter in the mount(1M) command. For NFS devices, this includes the name of the node exporting the file system. It is possible that a process may mount a device using the mount(2) system call. This call does not update the “/etc/mnttab” and its name is blank. This situation is rare, and should be corrected by syncer(1M). Note that once a device is mounted, its entry is displayed, even after the device is unmounted, until the midaemon process terminates. On SUN, this is the path name string of the current device, or “tmpfs” for memory based file systems. See tmpfs(7). FS_DEVNO -------------------- On Unix systems, this is the major and minor number of the file system. On Windows, this is the unit number of the disk device on which the logical disk resides. FS_DIRNAME -------------------- On Unix systems, this is the path name of the mount point of the file system. On Windows, this is the drive letter associated with the selected disk partition. On HP-UX, this is the path name of the mount point of the file system if the logical volume has a mounted file system. This is the directory parameter of the mount(1M) command for most entries. Exceptions are: * For lvm swap areas, this field contains “lvm swap device”. * For logical volumes with no mounted file systems, this field contains “Raw Logical Volume” (relevant only to OVPA). On HP-UX, the file names are in the same order as shown in the “/usr/sbin/mount -p” command. File systems are not displayed until they exhibit IO activity once the midaemon has been started. Also, once a device is displayed, it continues to be displayed (even after the device is unmounted) until the midaemon process terminates. On SUN, only “UFS”, “HSFS” and “TMPFS” file systems are listed. See mount(1M) and mnttab(4). “TMPFS” file systems are memory based filesystems and are listed here for convenience. See tmpfs(7). On AIX, see mount(1M) and filesystems(4). On OSF1, see mount(2). FS_FRAG_SIZE -------------------- The fundamental file system block size, in bytes. A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. FS_INODE_UTIL -------------------- Percentage of this file system's inodes in use during the interval. A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. FS_MAX_INODES -------------------- Number of configured file system inodes. A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. FS_MAX_SIZE -------------------- The maximum size, in MB, that this file system could obtain if full.
Note that this is the user space capacity - it is the file system space accessible to non root users. On most Unix systems, the df command shows the total file system capacity which includes the extra file system space accessible to root users only. The equivalent fields to look at are “used” and “avail”. For the target file system, to calculate the maximum size in MB, use

  FS Max Size = (used + avail)/1024

A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. On HP-UX, this metric is updated at 4 minute intervals to minimize collection overhead. FS_SPACE_RESERVED -------------------- The amount of file system space in MBs reserved for superuser allocation. On AIX, this metric is typically zero because by default AIX does not reserve any file system space for the superuser. FS_SPACE_USED -------------------- The amount of file system space in MBs that is being used. FS_SPACE_UTIL -------------------- Percentage of the file system space in use during the interval. Note that this is the user space capacity - it is the file system space accessible to non root users. On most Unix systems, the df command shows the total file system capacity which includes the extra file system space accessible to root users only. A value of “na” may be displayed if the file system is not mounted. If the product is restarted, these unmounted file systems are not displayed until remounted. On HP-UX, this metric is updated at 4 minute intervals to minimize collection overhead. FS_TYPE -------------------- A string indicating the file system type. On Unix systems, some of the possible types are:

  hfs   - user file system
  ufs   - user file system
  ext2  - user file system
  cdfs  - CD-ROM file system
  vxfs  - Veritas (vxfs) file system
  nfs   - network file system
  nfs3  - network file system Version 3

On Windows, some of the possible types are:

  NTFS  - New Technology File System
  FAT   - 16-bit File Allocation Table
  FAT32 - 32-bit File Allocation Table

FAT uses a 16-bit file allocation table entry (2^16 clusters). FAT32 uses a 32-bit file allocation table entry. However, Windows 2000 reserves the first 4 bits of a FAT32 file allocation table entry, which means FAT32 has a theoretical maximum of 2^28 clusters. NTFS is the native file system of Windows NT and beyond. GBL_ACTIVE_CPU -------------------- The number of CPUs online on the system. For HP-UX and certain versions of Linux, the sar(1M) command allows you to check the status of the system CPUs. For SUN and DEC, the commands psrinfo(1M) and psradm(1M) allow you to check or change the status of the system CPUs. For AIX, the pstat(1) command allows you to check the status of the system CPUs. GBL_ACTIVE_PROC -------------------- An active process is one that exists and consumes some CPU time. GBL_ACTIVE_PROC is the sum of the alive-process-time/interval-time ratios of every process that is active (uses any CPU time) during an interval. The following diagram of a four second interval during which two processes exist on the system should be used to understand the above definition. Note the difference between active processes, which consume CPU time, and alive processes which merely exist on the system.

         ----------- Seconds -----------
           1        2        3        4
 Proc    ----     ----     ----     ----
 ----
  A      live     live     live     live
  B      live/CPU live/CPU live     dead

Process A is alive for the entire four second interval but consumes no CPU. A's contribution to GBL_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to GBL_ACTIVE_PROC.
B's contribution to GBL_ALIVE_PROC is 3*1/4. B contributes 2*1/4 to GBL_ACTIVE_PROC. Thus, for this interval, GBL_ACTIVE_PROC equals 0.5 and GBL_ALIVE_PROC equals 1.75. Because a process may be alive but not active, GBL_ACTIVE_PROC will always be less than or equal to GBL_ALIVE_PROC. This metric is a good overall indicator of the workload of the system. An unusually large number of active processes could indicate a CPU bottleneck. To determine if the CPU is a bottleneck, compare this metric with GBL_CPU_TOTAL_UTIL and GBL_RUN_QUEUE. If GBL_CPU_TOTAL_UTIL is near 100 percent and GBL_RUN_QUEUE is greater than one, there is a bottleneck. On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise. GBL_ALIVE_PROC -------------------- An alive process is one that exists on the system. GBL_ALIVE_PROC is the sum of the alive-process-time/interval-time ratios for every process. The following diagram of a four second interval during which two processes exist on the system should be used to understand the above definition. Note the difference between active processes, which consume CPU time, and alive processes which merely exist on the system.

         ----------- Seconds -----------
           1        2        3        4
 Proc    ----     ----     ----     ----
 ----
  A      live     live     live     live
  B      live/CPU live/CPU live     dead

Process A is alive for the entire four second interval but consumes no CPU. A's contribution to GBL_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to GBL_ACTIVE_PROC. B's contribution to GBL_ALIVE_PROC is 3*1/4. B contributes 2*1/4 to GBL_ACTIVE_PROC. Thus, for this interval, GBL_ACTIVE_PROC equals 0.5 and GBL_ALIVE_PROC equals 1.75. Because a process may be alive but not active, GBL_ACTIVE_PROC will always be less than or equal to GBL_ALIVE_PROC. On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise. GBL_BLANK -------------------- A string of blanks. GBL_BLOCKED_IO_QUEUE -------------------- The average number of processes blocked on local disk resources (IO, paging). This metric is an indicator of disk contention among active processes. It should normally be a very small number. If GBL_DISK_UTIL_PEAK is near 100 percent and GBL_BLOCKED_IO_QUEUE is greater than 1, a disk bottleneck is probable. On SUN, this is the same as the “procs b” field reported in vmstat. GBL_BOOT_TIME -------------------- The date and time when the system was last booted. GBL_COLLECTOR -------------------- ASCII field containing collector name and version. The collector name will appear as either “SCOPE/xx V.UU.FF.LF” or “Coda RV.UU.FF.LF”. xx identifies the platform; V = version, UU = update level, FF = fix level, and LF = lab fix id. For example, SCOPE/UX C.04.00.00; or Coda A.07.10.04. GBL_COMPLETED_PROC -------------------- The number of processes that terminated during the interval.
On non HP-UX systems, this metric is derived from sampled process data. Since the data for a process is not available after the process has died on this operating system, a process whose life is shorter than the sampling interval may not be seen when the samples are taken. Thus this metric may be slightly less than the actual value. Increasing the sampling frequency captures a more accurate count, but the overhead of collection may also rise. GBL_CPU_CLOCK -------------------- The clock speed of the CPUs in MHz if all of the processors have the same clock speed. Otherwise, “na” is shown if the processors have different clock speeds. GBL_CPU_IDLE_TIME -------------------- The time, in seconds, that the CPU was idle during the interval. This is the total idle time, including waiting for I/O. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. GBL_CPU_IDLE_TIME_CUM -------------------- The time, in seconds, that the CPU was idle over the cumulative collection time. This is the total idle time, including waiting for I/O. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. GBL_CPU_IDLE_UTIL -------------------- The percentage of time that the CPU was idle during the interval. This is the total idle time, including waiting for I/O. On Unix systems, this is the same as the sum of the “%idle” and “%wio” fields reported by the “sar -u” command. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. GBL_CPU_IDLE_UTIL_CUM -------------------- The percentage of time that the CPU was idle over the cumulative collection time. This is the total idle time, including waiting for I/O. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. GBL_CPU_IDLE_UTIL_HIGH -------------------- The highest percentage of time that the CPU was idle during any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. GBL_CPU_SYS_MODE_TIME -------------------- The time, in seconds, that the CPU was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. 
When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_SYS_MODE_TIME_CUM -------------------- The time, in seconds, that the CPU was in system mode over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_SYS_MODE_UTIL -------------------- Percentage of time the CPU was in system mode during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. This metric is a subset of the GBL_CPU_TOTAL_UTIL percentage. This is NOT a measure of the amount of time used by system daemon processes, since most system daemons spend part of their time in user mode and part in system calls, like any other process. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. High system mode CPU percentages are normal for IO intensive applications. Abnormally high system mode CPU percentages can indicate that a hardware problem is causing a high interrupt rate. They can also indicate programs that are not using system calls efficiently. GBL_CPU_SYS_MODE_UTIL_CUM -------------------- The percentage of time that the CPU was in system mode over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available.
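As a reading aid for the normalization note repeated in the CPU metrics above, the following sketch shows how per-processor system-mode times could be combined into a single normalized utilization percentage. The function and variable names are hypothetical and this is not the GlancePlus collector's implementation; it only illustrates the arithmetic of dividing the CPU time used over all processors by the number of processors online.

   # Minimal sketch (hypothetical names, not collector code): normalized
   # system-mode utilization on a multi-CPU system.
   def sys_mode_util(sys_seconds_per_cpu, interval_seconds):
       num_cpus = len(sys_seconds_per_cpu)
       total_sys = sum(sys_seconds_per_cpu)
       # CPU time used over all processors divided by processors online.
       normalized_time = total_sys / num_cpus
       # Utilization is the normalized time as a percentage of the interval.
       return 100.0 * normalized_time / interval_seconds

   # Example: 4 CPUs online, 5-second interval.
   print(sys_mode_util([1.0, 0.5, 2.0, 0.5], 5.0))   # -> 20.0 (percent)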
GBL_CPU_SYS_MODE_UTIL_HIGH -------------------- The highest percentage of time during any one interval that the CPU was in system mode over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_TOTAL_TIME -------------------- The total time, in seconds, that the CPU was not idle in the interval. This is calculated as GBL_CPU_TOTAL_TIME = GBL_CPU_USER_MODE_TIME + GBL_CPU_SYS_MODE_TIME On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_TOTAL_TIME_CUM -------------------- The total time that the CPU was not idle over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_TOTAL_UTIL -------------------- Percentage of time the CPU was not idle during the interval. This is calculated as GBL_CPU_TOTAL_UTIL = GBL_CPU_USER_MODE_UTIL + GBL_CPU_SYS_MODE_UTIL On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_TOTAL_UTIL + GBL_CPU_IDLE_UTIL = 100% This metric varies widely on most systems, depending on the workload. A consistently high CPU utilization can indicate a CPU bottleneck, especially when other indicators such as GBL_RUN_QUEUE and GBL_ACTIVE_PROC are also high. High CPU utilization can also occur on systems that are bottlenecked on memory, because the CPU spends more time paging and swapping. NOTE: On Windows, this metric may not equal the sum of the APP_CPU_TOTAL_UTIL metrics. Microsoft states that “this is expected behavior“ because this GBL_CPU_TOTAL_UTIL metric is taken from the performance library Processor objects while the APP_CPU_TOTAL_UTIL metrics are taken from the Process objects. Microsoft states that there can be CPU time accounted for in the Processor system objects that may not be seen in the Process objects. GBL_CPU_TOTAL_UTIL_CUM -------------------- The percentage of total CPU time that the processor was not idle over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_TOTAL_UTIL_HIGH -------------------- The highest percentage of total CPU time during any one interval that the processor was not idle over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_USER_MODE_TIME -------------------- The time, in seconds, that the CPU was in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_USER_MODE_TIME_CUM -------------------- The time, in seconds, that the CPU was in user mode over the cumulative collection time. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_USER_MODE_UTIL -------------------- The percentage of time the CPU was in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. This metric is a subset of the GBL_CPU_TOTAL_UTIL percentage. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. High user mode CPU percentages are normal for computation- intensive applications. Low values of user CPU utilization compared to relatively high values for GBL_CPU_SYS_MODE_UTIL can indicate an application or hardware problem. GBL_CPU_USER_MODE_UTIL_CUM -------------------- The percentage of time that the CPU was in user mode over the cumulative collection time. 
User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_USER_MODE_UTIL_HIGH -------------------- The highest percentage of time during any one interval that the CPU was in user mode over the cumulative collection time. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_WAIT_TIME -------------------- The time, in seconds, that the CPU was idle and there were processes waiting for physical IOs to complete during the interval. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CPU_WAIT_UTIL -------------------- The percentage of time during the interval that the CPU was idle and there were processes waiting for physical IOs to complete. On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available. GBL_CSWITCH_RATE -------------------- The average number of context switches per second during the interval. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. On Windows, this includes switches from one thread to another either inside a single process or across processes. A thread switch can be caused either by one thread asking another for information or by a thread being preempted by another higher priority thread becoming ready to run. GBL_CSWITCH_RATE_CUM -------------------- The average number of context switches per second over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. 
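Many metrics in this dictionary follow the same interval, _CUM and _HIGH pattern that GBL_CSWITCH_RATE, GBL_CSWITCH_RATE_CUM and GBL_CSWITCH_RATE_HIGH illustrate. The short sketch below uses hypothetical names and is not the collector's implementation; it only shows how an interval rate, its average over the cumulative collection time, and its high-water mark are typically related.

   # Illustrative sketch of the interval / _CUM / _HIGH pattern
   # (hypothetical names, not collector code).
   class RateMetric:
       def __init__(self):
           self.total_events = 0      # events since cumulative collection began
           self.total_seconds = 0.0   # cumulative collection time
           self.high = 0.0            # highest per-interval rate seen so far

       def record_interval(self, events, interval_seconds):
           rate = events / interval_seconds                    # interval metric
           self.total_events += events
           self.total_seconds += interval_seconds
           cum_rate = self.total_events / self.total_seconds   # the _CUM metric
           self.high = max(self.high, rate)                    # the _HIGH metric
           return rate, cum_rate, self.high

   m = RateMetric()
   print(m.record_interval(500, 5.0))   # -> (100.0, 100.0, 100.0)
   print(m.record_interval(200, 5.0))   # -> (40.0, 70.0, 100.0)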
GBL_CSWITCH_RATE_HIGH -------------------- The highest number of context switches per second during any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, this includes context switches that result in the execution of a different process and those caused by a process stopping, then resuming, with no other process running in the meantime. GBL_DISK_BLOCK_IO -------------------- The total number of block IOs during the interval. On SUN, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These do include the IO of the inode (system write) and the file system data IO. GBL_DISK_BLOCK_IO_CUM -------------------- The total number of block reads and writes over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On SUN, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These do include the IO of the inode (system write) and the file system data IO. GBL_DISK_BLOCK_IO_PCT -------------------- The percentage of block IOs of the total physical IOs during the interval. On SUN, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. 
Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On AIX, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These do include the IO of the inode (system write) and the file system data IO. GBL_DISK_BLOCK_IO_PCT_CUM -------------------- The percentage of block IOs of the total physical IOs over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On SUN, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. On AIX, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These do include the IO of the inode (system write) and the file system data IO. GBL_DISK_BLOCK_IO_RATE -------------------- The total number of block IOs per second during the interval. On SUN, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. 
File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These do include the IO of the inode (system write) and the file system data IO. GBL_DISK_BLOCK_IO_RATE_CUM -------------------- The total number of block reads and writes per second over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On SUN, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These do include the IO of the inode (system write) and the file system data IO. GBL_DISK_BLOCK_READ -------------------- The number of block reads during the interval. On SUN, these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical reads generated by file system access and do not include virtual memory reads, or reads relating to raw disk access. These do include the read of the inode (system read) and the file data read. GBL_DISK_BLOCK_READ_RATE -------------------- The number of block reads per second during the interval. On SUN, these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. 
File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical reads generated by file system access and do not include virtual memory reads, or reads relating to raw disk access. These do include the read of the inode (system read) and the file data read. GBL_DISK_BLOCK_WRITE -------------------- The number of block writes during the interval. On SUN, these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical writes generated by file system access and do not include virtual memory writes, or writes relating to raw disk access. These do include the write of the inode (system write) and the file system data write. GBL_DISK_BLOCK_WRITE_RATE -------------------- The number of block writes per second during the interval. On SUN, these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, these are physical writes generated by file system access and do not include virtual memory writes, or writes relating to raw disk access. These do include the write of the inode (system write) and the file system data write. GBL_DISK_FILE_IO -------------------- The number of file IOs, excluding virtual memory IOs, during the interval. Only local disks are counted in this measurement. NFS devices are excluded. GBL_DISK_FILE_IO_CUM -------------------- The total number of physical IOs excluding virtual memory IOs over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_DISK_FILE_IO_PCT -------------------- The percentage of file IOs of the total physical IO during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. 
This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_FILE_IO_PCT_CUM -------------------- The percentage of file IOs of total physical IO over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_FILE_IO_RATE -------------------- The number of file IOs per second excluding virtual memory IOs during the interval. This is the sum of block IOs and raw IOs. Only local disks are counted in this measurement. NFS devices are excluded. On SUN, when a file is accessed, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). GBL_DISK_FILE_IO_RATE_CUM -------------------- The number of file IOs per second, excluding virtual memory IOs, over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_DISK_LOGL_IO -------------------- The number of logical IOs made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the read and write system calls that are directed to disk devices. Also counted are read and write system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, writev, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. 
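To make the counting described above concrete, the sketch below shows how the logical IO count and rate could be derived from counts of read and write system calls directed to disk devices during one interval. The counter names are hypothetical and this is not the collector's implementation.

   # Illustrative sketch (hypothetical counters, not collector code).
   def logl_io_metrics(read_calls, write_calls, interval_seconds):
       logl_io = read_calls + write_calls            # GBL_DISK_LOGL_IO
       logl_io_rate = logl_io / interval_seconds     # GBL_DISK_LOGL_IO_RATE
       return logl_io, logl_io_rate

   # Example: 600 reads and 400 writes over a 5-second interval.
   print(logl_io_metrics(600, 400, 5.0))   # -> (1000, 200.0)
   # As described above, these logical counts generally do not match the
   # physical IO counts, since caching, memory mapping and mirroring break
   # any one-to-one correspondence.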
GBL_DISK_LOGL_IO_CUM -------------------- The number of logical IOs made over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On many Unix systems, logical disk IOs are measured by counting the read and write system calls that are directed to disk devices. Also counted are read and write system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, writev, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. GBL_DISK_LOGL_IO_RATE -------------------- The number of logical IOs per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the read and write system calls that are directed to disk devices. Also counted are read and write system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, writev, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. GBL_DISK_LOGL_IO_RATE_CUM -------------------- The average number of logical IOs per second over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. 
On many Unix systems, logical disk IOs are measured by counting the read and write system calls that are directed to disk devices. Also counted are read and write system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, writev, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. GBL_DISK_LOGL_READ -------------------- On most systems, this is the number of logical reads made during the interval. On SUN, this is the number of logical block reads made during the interval. On Windows, this includes both buffered (cached) read requests and unbuffered reads. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. GBL_DISK_LOGL_READ_CUM -------------------- On most systems, this is the total number of logical reads made over the cumulative collection time. On SUN, this is the total number of logical block reads over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. 
Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. GBL_DISK_LOGL_READ_PCT -------------------- On most systems, this is the percentage of logical reads of the total logical IO during the interval. On SUN, this is the percentage of logical block reads of the total logical IOs during the interval. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. GBL_DISK_LOGL_READ_PCT_CUM -------------------- On most systems, this is the percentage of logical reads of the total logical IOs over the cumulative collection time. On SUN, this is the percentage of logical block reads of the total logical IOs over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. 
Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. GBL_DISK_LOGL_READ_RATE -------------------- On most systems, this is the average number of logical reads per second made during the interval. On SUN, this is the average number of logical block reads per second made during the interval. On Windows, this includes both buffered (cached) read requests and unbuffered reads. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. GBL_DISK_LOGL_READ_RATE_CUM -------------------- On most Unix systems, this is the average number of logical reads per second over the cumulative collection time. On SUN, this is the average number of logical block reads per second over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On many Unix systems, logical disk IOs are measured by counting the read system calls that are directed to disk devices. Also counted are read system calls made indirectly through other system calls, including readv, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file.
Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. GBL_DISK_LOGL_WRITE -------------------- On most systems, this is the number of logical writes made during the interval. On SUN, this is the number of logical block writes during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. GBL_DISK_LOGL_WRITE_CUM -------------------- On most systems, this is the total number of logical writes made over the cumulative collection time. On SUN, this is the total number of logical block writes over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. 
A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. GBL_DISK_LOGL_WRITE_PCT -------------------- On most systems, this is the percentage of logical writes of the logical IO during the interval. On SUN, this is the percentage of logical block writes of the total logical block IOs during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. GBL_DISK_LOGL_WRITE_PCT_CUM -------------------- On most systems, this is the percentage of logical writes of the total logical IO over the cumulative collection time. On SUN, this is the percentage of logical block writes of the total logical block IOs over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. 
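The read and write percentage metrics above are simple ratios of the logical counts. The following sketch, using hypothetical counter names rather than collector code, shows how the per-interval percentage and its cumulative counterpart differ only in which counts they are computed from.

   # Illustrative only: logical write percentage from read/write counts.
   def logl_write_pct(logl_reads, logl_writes):
       total = logl_reads + logl_writes              # total logical IOs
       return 100.0 * logl_writes / total if total else 0.0

   # Per-interval value (GBL_DISK_LOGL_WRITE_PCT):
   print(logl_write_pct(600, 400))        # -> 40.0

   # Cumulative value (GBL_DISK_LOGL_WRITE_PCT_CUM) uses the counts summed
   # over the cumulative collection time rather than a single interval:
   intervals = [(600, 400), (100, 300)]   # (reads, writes) per interval
   cum_reads = sum(r for r, w in intervals)
   cum_writes = sum(w for r, w in intervals)
   print(logl_write_pct(cum_reads, cum_writes))   # -> 50.0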
GBL_DISK_LOGL_WRITE_RATE -------------------- On most systems, this is the average number of logical writes per second made during the interval. On SUN, this is the average number of logical block writes per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. GBL_DISK_LOGL_WRITE_RATE_CUM -------------------- On most systems, this is the average number of logical writes per second over the cumulative collection time. On SUN, this is the average number of logical block writes per second over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On many Unix systems, logical disk IOs are measured by counting the write system calls that are directed to disk devices. Also counted are write system calls made indirectly through other system calls, including writev, recvfrom, recv, recvmsg, ipcrecvcn, recfrom, send, sento, sendmsg, and ipcsend. On many Unix systems, there are several reasons why logical IOs may not correspond with physical IOs. Logical IOs may not always result in a physical disk access, since the data may already reside in memory -- either in the buffer cache, or in virtual memory if the IO is to a memory mapped file. Several logical IOs may all map to the same physical page or block. In these two cases, logical IOs are greater than physical IOs. The reverse can also happen. A single logical write can cause a physical read to fetch the block to be updated from disk, and then cause a physical write to put it back on disk. A single logical IO can require more than one physical page or block, and these can be found on different disks. Mirrored disks further distort the relationship between logical and physical IO, since physical writes are doubled. GBL_DISK_PHYS_BYTE -------------------- The number of KBs transferred to and from disks during the interval. The bytes for all types of physical IOs are counted. Only local disks are counted in this measurement. NFS devices are excluded.
It is not directly related to the number of IOs, since IO requests can be of differing lengths. On Unix systems, this includes file system IO, virtual memory IO, and raw IO. On Windows, all types of physical IOs are counted. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_BYTE_RATE -------------------- The average number of KBs per second at which data was transferred to and from disks during the interval. The bytes for all types of physical IOs are counted. Only local disks are counted in this measurement. NFS devices are excluded. This is a measure of the physical data transfer rate. It is not directly related to the number of IOs, since IO requests can be of differing lengths. This is an indicator of how much data is being transferred to and from disk devices. Large spikes in this metric can indicate a disk bottleneck. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IO. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_IO -------------------- The number of physical IOs during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk IOs are counted, including file system IO, virtual memory IO and raw IO. On HP-UX, this is calculated as GBL_DISK_PHYS_IO = GBL_DISK_FS_IO + GBL_DISK_VM_IO + GBL_DISK_SYSTEM_IO + GBL_DISK_RAW_IO On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_IO_CUM -------------------- The total number of physical IOs over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_IO_RATE -------------------- The number of physical IOs per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk IOs are counted, including file system IO, virtual memory IO and raw IO.
On HP-UX, this is calculated as GBL_DISK_PHYS_IO_RATE = GBL_DISK_FS_IO_RATE + GBL_DISK_VM_IO_RATE + GBL_DISK_SYSTEM_IO_RATE + GBL_DISK_RAW_IO_RATE On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_IO_RATE_CUM -------------------- The number of physical IOs per second over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_READ -------------------- The number of physical reads during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On HP-UX, there are many reasons why there is not a direct correlation between the number of logical IOs and physical IOs. For example, small sequential logical reads may be satisfied from the buffer cache, resulting in fewer physical IOs than logical IOs. Conversely, large logical IOs or small random IOs may result in more physical than logical IOs. Logical volume mappings, logical disk mirroring, and disk striping also tend to remove any correlation. On HP-UX, this is calculated as GBL_DISK_PHYS_READ = GBL_DISK_FS_READ + GBL_DISK_VM_READ + GBL_DISK_SYSTEM_READ + GBL_DISK_RAW_READ On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_READ_BYTE -------------------- The number of KBs physically transferred from the disk during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_READ_BYTE_CUM -------------------- The number of KBs (or MBs if specified) physically transferred from the disk over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_READ_BYTE_RATE -------------------- The average number of KBs transferred from the disk per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_READ_CUM -------------------- The total number of physical reads over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_READ_PCT -------------------- The percentage of physical reads of total physical IO during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_READ_PCT_CUM -------------------- The percentage of physical reads of total physical IO over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. 
If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_READ_RATE -------------------- The number of physical reads per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads. On HP-UX, this is calculated as GBL_DISK_PHYS_READ_RATE = GBL_DISK_FS_READ_RATE + GBL_DISK_VM_READ_RATE + GBL_DISK_SYSTEM_READ_RATE + GBL_DISK_RAW_READ_RATE On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_READ_RATE_CUM -------------------- The average number of physical reads per second over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_WRITE -------------------- The number of physical writes during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On HP-UX, there are many reasons why there is not a direct correlation between logical IOs and physical IOs. For example, small logical writes may end up entirely in the buffer cache, and later generate fewer physical IOs when written to disk due to the larger IO size. Or conversely, small logical writes may require physical prefetching of the corresponding disk blocks before the data is merged and posted to disk. Logical volume mappings, logical disk mirroring, and disk striping also tend to remove any correlation. On HP-UX, this is calculated as GBL_DISK_PHYS_WRITE = GBL_DISK_FS_WRITE + GBL_DISK_VM_WRITE + GBL_DISK_SYSTEM_WRITE + GBL_DISK_RAW_WRITE On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_WRITE_BYTE -------------------- The number of KBs (or MBs if specified) physically transferred to the disk during the interval. Only local disks are counted in this measurement. NFS devices are excluded. 
On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_WRITE_BYTE_CUM -------------------- The number of KBs (or MBs if specified) physically transferred to the disk over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_WRITE_BYTE_RATE -------------------- The average number of KBs transferred to the disk per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_WRITE_CUM -------------------- The total number of physical writes over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_WRITE_PCT -------------------- The percentage of physical writes of total physical IO during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. 
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_WRITE_PCT_CUM -------------------- The percentage of physical writes of total physical IO over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_WRITE_RATE -------------------- The number of physical writes per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk writes are counted, including file system IO, virtual memory IO, and raw writes. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On HP-UX, this is calculated as GBL_DISK_PHYS_WRITE_RATE = GBL_DISK_FS_WRITE_RATE + GBL_DISK_VM_WRITE_RATE + GBL_DISK_SYSTEM_WRITE_RATE + GBL_DISK_RAW_WRITE_RATE On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_PHYS_WRITE_RATE_CUM -------------------- The number of physical writes per second over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, since this value is reported by the drivers, multiple physical requests that have been collapsed to a single physical operation (due to driver IO merging) are only counted once. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. 
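NOTE: The physical IO counters and percentages above are related by simple sums and ratios. The following Python sketch is illustrative only -- the values are hypothetical, and the HP-UX decomposition shown is the one stated in the entries above:

    # Illustrative sketch with hypothetical interval counters.
    # On HP-UX, physical writes decompose into file system, virtual memory,
    # system, and raw writes, as stated above; other platforms report the
    # totals directly.
    fs_write, vm_write, system_write, raw_write = 400, 120, 30, 50
    phys_write = fs_write + vm_write + system_write + raw_write  # GBL_DISK_PHYS_WRITE
    phys_read = 900                                              # GBL_DISK_PHYS_READ (hypothetical)
    phys_io = phys_read + phys_write                             # GBL_DISK_PHYS_IO

    # The read/write percentages are each side's share of the physical total.
    write_pct = 100.0 * phys_write / phys_io if phys_io else 0.0  # GBL_DISK_PHYS_WRITE_PCT
    read_pct = 100.0 * phys_read / phys_io if phys_io else 0.0    # GBL_DISK_PHYS_READ_PCT
    print(round(read_pct, 1), round(write_pct, 1))                # 60.0 40.0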
GBL_DISK_RAW_IO -------------------- The total number of raw reads and writes during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Sun, tape drive accesses are included in raw IOs, but not in physical IOs. To determine if raw IO is tape access versus disk access, compare the global physical disk accesses to the total raw, block, and vm IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. GBL_DISK_RAW_IO_CUM -------------------- The total number of raw IOs over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Sun, tape drive accesses are included in raw IOs, but not in physical IOs. To determine if raw IO is tape access versus disk access, compare the global physical disk accesses to the total raw, block, and vm IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. GBL_DISK_RAW_IO_PCT -------------------- The percentage of raw IOs to total physical IOs made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Sun, tape drive accesses are included in raw IOs, but not in physical IOs. To determine if raw IO is tape access versus disk access, compare the global physical disk accesses to the total raw, block, and vm IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. GBL_DISK_RAW_IO_PCT_CUM -------------------- The percentage of physical raw IOs to total physical IOs made over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Sun, tape drive accesses are included in raw IOs, but not in physical IOs. To determine if raw IO is tape access versus disk access, compare the global physical disk accesses to the total raw, block, and vm IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. GBL_DISK_RAW_IO_RATE -------------------- The total number of raw reads and writes per second during the interval. Only accesses to local disk devices are counted. On Sun, tape drive accesses are included in raw IOs, but not in physical IOs. 
To determine if raw IO is tape access versus disk access, compare the global physical disk accesses to the total raw, block, and vm IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. GBL_DISK_RAW_IO_RATE_CUM -------------------- The average number of raw IOs per second over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Sun, tape drive accesses are included in raw IOs, but not in physical IOs. To determine if raw IO is tape access versus disk access, compare the global physical disk accesses to the total raw, block, and vm IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. GBL_DISK_RAW_READ -------------------- The number of raw reads during the interval. Only accesses to local disk devices are counted. GBL_DISK_RAW_READ_RATE -------------------- The number of raw reads per second during the interval. Only accesses to local disk devices are counted. GBL_DISK_RAW_WRITE -------------------- The number of raw writes during the interval. Only accesses to local disk devices are counted. GBL_DISK_RAW_WRITE_RATE -------------------- The number of raw writes per second during the interval. Only accesses to local disk devices are counted. On Sun, tape drive accesses are included in raw IOs, but not in physical IOs. To determine if raw IO is tape access versus disk access, compare the global physical disk accesses to the total raw, block, and vm IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. GBL_DISK_REQUEST_QUEUE -------------------- The total length of all of the disk queues at the end of the interval. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at boottime, the operating system does not provide performance data for that device. This can be determined by checking the “by-disk” data when provided in a product. If the CD drive has an entry in the list of active disks on a system, then data for that device is being collected. GBL_DISK_TIME_PEAK -------------------- The time, in seconds, during the interval that the busiest disk was performing IO transfers. This is for the busiest disk only, not all disk devices. This counter is based on an end-to-end measurement for each IO transfer updated at queue entry and exit points.
Only local disks are counted in this measurement. NFS devices are excluded. GBL_DISK_UTIL -------------------- On HP-UX, this is the average percentage of time during the interval that all disks had IO in progress from the point of view of the Operating System. This is the average utilization for all disks. On all other Unix systems, this is the average percentage of disk in use time of the total interval (that is, the average utilization). Only local disks are counted in this measurement. NFS devices are excluded. GBL_DISK_UTIL_PEAK -------------------- The utilization of the busiest disk during the interval. On HP-UX, this is the percentage of time during the interval that the busiest disk device had IO in progress from the point of view of the Operating System. On all other systems, this is the percentage of time during the interval that the busiest disk was performing IO transfers. It is not an average utilization over all the disk devices. Only local disks are counted in this measurement. NFS devices are excluded. Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be “na” on the affected kernels. The “sar -d” command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0. A peak disk utilization of more than 50 percent often indicates a disk IO subsystem bottleneck situation. A bottleneck may not be in the physical disk drive itself, but elsewhere in the IO path. GBL_DISK_UTIL_PEAK_CUM -------------------- The average utilization of the busiest disk in each interval over the cumulative collection time. Utilization is the percentage of time in use versus the time in the measurement interval. For each interval a different disk may be the busiest. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_DISK_UTIL_PEAK_HIGH -------------------- The highest utilization of any disk during any interval over the cumulative collection time. Utilization is the percentage of time in use versus the time in the measurement interval. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_DISK_VM_IO -------------------- The total number of virtual memory IOs made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the IOs to user file data are not included in this metric unless they were done via the mmap(2) system call. On SUN, when a file is accessed, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). 
On SUN, this metric is calculated by subtracting raw and block IOs from physical IOs. Tape drive accesses are included in the raw IOs, but not in the physical IOs. Therefore, when tape drive accesses are occurring on a system, all virtual memory and raw IO is counted as raw IO. For example, you may see heavy raw IO occurring during system backup. Raw IOs for disks are counted in the physical IOs. To determine if the raw IO is tape access versus disk access, compare the global physical disk accesses to the total of raw, block, and VM IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. GBL_DISK_VM_IO_CUM -------------------- The total number of virtual memory IOs over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, the IOs to user file data are not included in this metric unless they were done via the mmap(2) system call. On SUN, when a file is accessed, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On SUN, this metric is calculated by subtracting raw and block IOs from physical IOs. Tape drive accesses are included in the raw IOs, but not in the physical IOs. Therefore, when tape drive accesses are occurring on a system, all virtual memory and raw IO is counted as raw IO. For example, you may see heavy raw IO occurring during system backup. Raw IOs for disks are counted in the physical IOs. To determine if the raw IO is tape access versus disk access, compare the global physical disk accesses to the total of raw, block, and VM IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. GBL_DISK_VM_IO_PCT -------------------- On HP-UX and AIX, this is the percentage of virtual memory IO requests of total physical disk IOs during the interval. On the other Unix systems, this is the percentage of virtual memory IOs of the total number of physical IOs during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the IOs to user file data are not included in this metric unless they were done via the mmap(2) system call. On SUN, when a file is accessed, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On SUN, this metric is calculated by subtracting raw and block IOs from physical IOs. Tape drive accesses are included in the raw IOs, but not in the physical IOs. 
Therefore, when tape drive accesses are occurring on a system, all virtual memory and raw IO is counted as raw IO. For example, you may see heavy raw IO occurring during system backup. Raw IOs for disks are counted in the physical IOs. To determine if the raw IO is tape access versus disk access, compare the global physical disk accesses to the total of raw, block, and VM IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. GBL_DISK_VM_IO_PCT_CUM -------------------- The percentage of virtual memory IOs of the total number of physical IOs over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, the IOs to user file data are not included in this metric unless they were done via the mmap(2) system call. On SUN, when a file is accessed, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On SUN, this metric is calculated by subtracting raw and block IOs from physical IOs. Tape drive accesses are included in the raw IOs, but not in the physical IOs. Therefore, when tape drive accesses are occurring on a system, all virtual memory and raw IO is counted as raw IO. For example, you may see heavy raw IO occurring during system backup. Raw IOs for disks are counted in the physical IOs. To determine if the raw IO is tape access versus disk access, compare the global physical disk accesses to the total of raw, block, and VM IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. GBL_DISK_VM_IO_RATE -------------------- The number of virtual memory IOs per second made during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On HP-UX, the IOs to user file data are not included in this metric unless they were done via the mmap(2) system call. On SUN, when a file is accessed, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On SUN, this metric is calculated by subtracting raw and block IOs from physical IOs. Tape drive accesses are included in the raw IOs, but not in the physical IOs. Therefore, when tape drive accesses are occurring on a system, all virtual memory and raw IO is counted as raw IO. For example, you may see heavy raw IO occurring during system backup. Raw IOs for disks are counted in the physical IOs. 
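NOTE: The totals comparison described in these entries can be expressed as a small check. The following Python sketch is illustrative only -- the counter values are hypothetical:

    # Illustrative sketch with hypothetical interval counters.
    phys_io = 1000   # GBL_DISK_PHYS_IO (physical accesses to disk devices)
    raw_io = 260     # GBL_DISK_RAW_IO (on SUN this can include tape accesses)
    block_io = 300   # GBL_DISK_BLOCK_IO
    vm_io = 500      # GBL_DISK_VM_IO

    # Tape accesses appear in the raw count but not in the physical disk
    # count, so a mismatch points to raw IO against a non-disk device.
    if raw_io + block_io + vm_io == phys_io:
        print("raw IO activity is to a disk, floppy, or CD drive")
    else:
        print("raw IO activity includes a non-disk device such as a tape drive")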
To determine if the raw IO is tape access versus disk access, compare the global physical disk accesses to the total of raw, block, and VM IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. GBL_DISK_VM_IO_RATE_CUM -------------------- The number of virtual memory IOs per second made over the cumulative collection time. Only local disks are counted in this measurement. NFS devices are excluded. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, the IOs to user file data are not included in this metric unless they were done via the mmap(2) system call. On SUN, when a file is accessed, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On SUN, this metric is calculated by subtracting raw and block IOs from physical IOs. Tape drive accesses are included in the raw IOs, but not in the physical IOs. Therefore, when tape drive accesses are occurring on a system, all virtual memory and raw IO is counted as raw IO. For example, you may see heavy raw IO occurring during system backup. Raw IOs for disks are counted in the physical IOs. To determine if the raw IO is tape access versus disk access, compare the global physical disk accesses to the total of raw, block, and VM IOs. If the totals are the same, the raw IO activity is to a disk, floppy, or CD drive. Check physical IO data for each individual disk device to isolate a device. If the totals are different, there is raw IO activity to a non-disk device like a tape drive. GBL_FS_SPACE_UTIL_PEAK -------------------- The percentage of occupied disk space to total disk space for the fullest file system found during the interval. Only locally mounted file systems are counted in this metric. This metric can be used as an indicator that at least one file system on the system is running out of disk space. On Unix systems, CDROM and PC file systems are also excluded. This metric can exceed 100 percent. This is because a portion of the file system space is reserved as a buffer and can only be used by root. If the root user has made the file system grow beyond the reserved buffer, the utilization will be greater than 100 percent. This is a dangerous situation since if the root user totally fills the file system, the system may crash. On Windows, CDROM file systems are also excluded. GBL_GMTOFFSET -------------------- The difference, in minutes, between local time and GMT (Greenwich Mean Time). GBL_INTERRUPT -------------------- The number of IO interrupts during the interval. GBL_INTERRUPT_RATE -------------------- The average number of IO interrupts per second during the interval. On HPUX and SUN this value includes clock interrupts.
To get non-clock device interrupts, subtract clock interrupts from the value. GBL_INTERRUPT_RATE_CUM -------------------- The average number of IO interrupts per second over the cumulative collection time. On HPUX and SUN this value includes clock interrupts. To get non-clock device interrupts, subtract clock interrupts from the value. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_INTERRUPT_RATE_HIGH -------------------- The highest number of IO interrupts per second during any one interval over the cumulative collection time. On HPUX and SUN this value includes clock interrupts. To get non-clock device interrupts, subtract clock interrupts from the value. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_INTERVAL -------------------- The amount of time in the interval. This measured interval is slightly larger than the desired or configured interval if the collection program is delayed by a higher priority process and cannot sample the data immediately. GBL_INTERVAL_CUM -------------------- The amount of time over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_JAVAARG -------------------- This boolean value indicates whether the java class overloading mechanism is enabled or not. This metric will be set when the javaarg flag in the parm file is set. The metric affected by this setting is PROC_PROC_ARGV1. This setting is useful to construct parm file java application definitions using the argv1= keyword. GBL_LOADAVG -------------------- The average load average of the system during the interval. GBL_LOADAVG_CUM -------------------- The average load average of the system over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_LOADAVG_HIGH -------------------- The highest value of the load average during any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_LOST_MI_TRACE_BUFFERS -------------------- The number of trace buffers lost by the measurement processing daemon. On HP-UX systems, if this value is > 0, the measurement subsystem is not keeping up with the system events that generate traces. 
For other Unix systems, if this value is > 0, the measurement subsystem is not keeping up with the ARM API calls that generate traces. Note: The value reported for this metric will roll over to 0 once it crosses INTMAX. GBL_MACHINE -------------------- On most Unix systems, this is a text string representing the type of computer. This is similar to what is returned by the command “uname -m”. On AIX, this is a text string representing the model number of the computer. This is similar to what is returned by the command “uname -M”. For example, “7043-150”. On Windows, this is a text string representing the type of the computer. For example, “80686”. GBL_MACHINE_MODEL -------------------- The CPU model. This is similar to the information returned by the GBL_MACHINE metric and the uname command. However, this metric returns more information on some processors. On HP-UX, this is the same information returned by the model command. GBL_MEM_AVAIL -------------------- The amount of available physical memory in the system (in MBs unless otherwise specified). Beginning with the OVPA 4.0 release, this metric is now reported in MBytes to better report the significant increases in system memory capacities. WARNING: This change in scale applies to this metric when logged by OVPA or displayed with GlancePlus for this release and beyond. However, the presentation of this metric recorded in legacy data (data logged with OVPA C.03 and previous releases) will remain in units of KBytes when viewed with extract or OVPM. On Windows, memory resident operating system code and data is not included as available memory. GBL_MEM_CACHE -------------------- The amount of physical memory (in MBs unless otherwise specified) used by the buffer cache during the interval. Beginning with the OVPA 4.0 release, this metric is now reported in MBytes to better report the significant increases in system memory capacities. WARNING: This change in scale applies to this metric when logged by OVPA or displayed with GlancePlus for this release and beyond. However, the presentation of this metric recorded in legacy data (data logged with OVPA C.03 and previous releases) will remain in units of KBytes when viewed with extract or OVPM. On HP-UX, the buffer cache is a memory pool used by the system to stage disk IO data for the driver. On SUN, this value is obtained by multiplying the system page size times the number of buffer headers (nbuf). For example, on a SPARCstation 10 the buffer size is usually (200 (page size buffers) * 4096 (bytes/page) = 800 KB). On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The “nbuf” value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On AIX, this value should be minimal since most disk IOs are done through memory mapped files. GBL_MEM_CACHE_HIT -------------------- On HP-UX, the number of buffer cache reads resolved from the buffer cache (rather than going to disk) during the interval.
Buffer cache reads can occur as a result of a logical read (for example, file read system call), a read generated by a client, a read-ahead on behalf of a logical read or a system procedure. On HP-UX, this metric is obtained by measuring the number of buffered read calls that were satisfied by the data that was in the file system buffer cache. Reads that are not in the buffer cache result in disk IO. Raw IO and virtual memory IO are not counted in this metric. On SUN, the number of physical reads resolved from memory (rather than going to disk) during the interval. This includes inode, indirect block and cylinder group related disk reads, plus file reads from files memory mapped by the virtual memory IO system. On AIX, the number of disk reads that were satisfied in the file system buffer cache (rather than going to disk) during the interval. On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. GBL_MEM_CACHE_HIT_CUM -------------------- On HP-UX, the number of buffer cache reads resolved from the buffer cache (rather than going to disk) over the cumulative collection time. Buffer cache reads can occur as a result of a logical read (for example, file read system call), a read generated by a client, a read-ahead on behalf of a logical read or a system procedure. On HP-UX, this metric is obtained by measuring the number of buffered read calls that were satisfied by the data that was in the file system buffer cache. Reads that are not in the buffer cache result in disk IO. Raw IO and virtual memory IO are not counted in this metric. On SUN, the number of physical reads resolved from memory (rather than going to disk) over the cumulative collection time. This includes inode, indirect block and cylinder group related disk reads, plus file reads from files memory mapped by the virtual memory IO system. On AIX, the number of disk reads that were satisfied in the file system buffer cache (rather than going to disk) over the cumulative collection time. On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_MEM_CACHE_HIT_PCT -------------------- On HP-UX, the percentage of buffer cache reads resolved from the buffer cache (rather than going to disk) during the interval. Buffer cache reads can occur as a result of a logical read (for example, file read system call), a read generated by a client, a read-ahead on behalf of a logical read or a system procedure.
On HP-UX, this metric is obtained by measuring the number of buffered read calls that were satisfied by the data that was in the file system buffer cache. Reads to filesystem file buffers that are not in the buffer cache result in disk IO. Reads to raw IO and virtual memory IO (including memory mapped files), do not go through the filesystem buffer cache, and so are not relevant to this metric. On HP-UX, a low cache hit rate may indicate low efficiency of the buffer cache, either because applications have poor data locality or because the buffer cache is too small. Overly large buffer cache sizes can lead to a memory bottleneck. The buffer cache should be sized small enough so that pageouts do not occur even when the system is busy. However, in the case of VxFS, all memory-mapped IOs show up as page ins/page outs and are not a result of memory pressure. On AIX, the percentage of disk reads that were satisfied in the file system buffer cache (rather than going to disk) during the interval. On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. On the remaining Unix systems, this is the percentage of logical reads satisfied in memory (rather than going to disk) during the interval. This includes inode, indirect block and cylinder group related disk reads, plus file reads from files memory mapped by the virtual memory IO system. On Windows, this is the percentage of buffered reads satisfied in the buffer cache (rather than going to disk) during the interval. This metric is obtained by measuring the number of buffered read calls that were satisfied by the data that was in the system buffer cache. Reads that are not in the buffer cache result in disk IO. Unbuffered IO and virtual memory IO (including memory mapped files), are not counted in this metric. GBL_MEM_CACHE_HIT_PCT_CUM -------------------- On HP-UX, this is the average percentage of buffer cache reads resolved from the buffer cache (rather than going to disk) over the cumulative collection time. Buffer cache reads can occur as a result of a logical read (for example, file read system call), a read generated by a client, a read-ahead on behalf of a logical read or a system procedure. On SUN, this is the percentage of physical reads that were satisfied in memory (rather than going to disk) over the cumulative collection time. This includes inode, indirect block and cylinder group related disk reads, plus file reads from files memory mapped by the virtual memory IO system. On AIX, this is the percentage of physical reads satisfied in the file system buffer cache (rather than going to disk) over the cumulative collection time. On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. 
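NOTE: As a rough illustration of how a hit percentage of this kind relates to the underlying hit counter, the following Python sketch uses hypothetical values; it is not the product's exact instrumentation:

    # Illustrative sketch with hypothetical interval counters.
    cache_hits = 450      # reads resolved from the cache (GBL_MEM_CACHE_HIT)
    cache_reads = 500     # reads that went through the cache (hypothetical name)

    # A hit percentage is the hits as a share of the candidate reads,
    # reported as 0 when no such IO occurred in the interval.
    cache_hit_pct = 100.0 * cache_hits / cache_reads if cache_reads else 0.0
    print(cache_hit_pct)   # 90.0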
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_MEM_CACHE_HIT_PCT_HIGH -------------------- On HP-UX, this is the highest interval percentage of buffer cache reads resolved from the buffer cache (rather than going to disk) over the cumulative collection time. Buffer cache reads can occur as a result of a logical read (for example, file read system call), a read generated by a client, a read-ahead on behalf of a logical read or a system procedure. On SUN, this is the highest interval percentage of physical reads satisfied in memory (rather than going to disk) over the cumulative collection time. This includes inode, indirect block and cylinder group related disk reads, plus file reads from files memory mapped by the virtual memory IO system. On AIX, this is the highest interval percentage of physical reads satisfied in the file system buffer cache (rather than going to disk) over the cumulative collection time. On AIX, the traditional file system buffer cache is not normally used, since files are implicitly memory mapped and the access is through the virtual memory system rather than the buffer cache. However, if a file is read as a block device (e.g /dev/hdisk1), the file system buffer cache is used, making this metric meaningful in that situation. If no IO through the buffer cache occurs during the interval, this metric is 0. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_MEM_CACHE_UTIL -------------------- The percentage of physical memory used by the buffer cache during the interval. On HP-UX, the buffer cache is a memory pool used by the system to stage disk IO data for the driver. On SUN, this percentage is based on calculating the buffer cache size by multiplying the system page size times the number of buffer headers (nbuf). For example, on a SPARCstation 10 the buffer size is usually (200 (page size buffers) * 4096 (bytes/page) = 800 KB). On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The “nbuf” value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On AIX, this value should be minimal since most disk IOs are done through memory mapped files. GBL_MEM_DNLC_HIT -------------------- The number of times a pathname component was found in the directory name lookup cache (rather than requiring a disk read to find a file) during the interval.
On HP-UX, the directory name lookup cache is used to minimize sequential searches through directory entries for pathname components during pathname to inode translations. Such translations are done whenever a file is accessed through its filename. The cache holds the inode cache table offset for recently referenced pathname components. Pathname components that exceed 15 characters are not cached. Any HP-UX system call that includes a path parameter can result in directory name lookup cache activity, including but not limited to system calls such as open, stat, exec, lstat, unlink. Each component of a path parameter is parsed and converted to an inode separately, therefore several dnlc hits per path are possible. High directory name cache hit rates on HP-UX will be seen on systems where pathname component requests are frequently repeated. For example, when users or applications work in the same directory where they repeatedly list or open the same files, cache hit rates will be high. Unusually low cache hit rates might be seen on HP-UX systems where users or applications access many different directories in no particular pattern. Low cache hit rates can also be an indicator of an underconfigured inode cache. When an inode cache is too small, the kernel will more frequently have to flush older inode cache and their corresponding directory name cache entries in order to make room for new inode cache entries. On HP-UX, the directory name lookup cache is static in size and is allocated in kernel memory. As a result, it is not affected by user memory constraints. The size of the cache is stored in the kernel variable “ncsize” and is not directly tunable by the system administrator; however, it can be changed indirectly by tuning other tables used in the formula to compute the “ncsize”. The formula is: ncsize = MAX(((nproc+16+maxusers)+ 32+(2*npty)),ninode) Note that ncsize is always >= ninode which is the default size of the inode cache. This is because the directory name cache contains inode table offsets for each cached pathname component. On SUN, long file names (greater than 30 characters) are not cached and are a type of cache miss. “Enters”, or cache data updates, are not included in this data. The DNLC size is: (maxusers * 17) + 90 GBL_MEM_DNLC_HIT_CUM -------------------- The number of times a pathname component was found in the directory name lookup cache (rather than requiring a disk read to find a file) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, the directory name lookup cache is used to minimize sequential searches through directory entries for pathname components during pathname to inode translations. Such translations are done whenever a file is accessed through its filename. The cache holds the inode cache table offset for recently referenced pathname components. Pathname components that exceed 15 characters are not cached. Any HP-UX system call that includes a path parameter can result in directory name lookup cache activity, including but not limited to system calls such as open, stat, exec, lstat, unlink. Each component of a path parameter is parsed and converted to an inode separately, therefore several dnlc hits per path are possible. 
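To make the two sizing formulas quoted in this section concrete, the following minimal Python sketch evaluates them with assumed kernel parameter values; the numbers are hypothetical and will differ on a real system:

  # Hypothetical kernel tunables, for illustration only.
  nproc    = 276
  maxusers = 64
  npty     = 60
  ninode   = 476

  # HP-UX directory name lookup cache size (the ncsize formula given above).
  ncsize = max((nproc + 16 + maxusers) + 32 + (2 * npty), ninode)   # max(508, 476) = 508

  # SUN DNLC size: (maxusers * 17) + 90.
  dnlc_size = (maxusers * 17) + 90                                  # 1178

  print(ncsize, dnlc_size)

As the MAX in the formula implies, ncsize never falls below ninode, so on systems configured with a large inode cache the result is driven by ninode rather than by the process and pty tables.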
High directory name cache hit rates on HP-UX will be seen on systems where pathname component requests are frequently repeated. For example, when users or applications work in the same directory where they repeatedly list or open the same files, cache hit rates will be high. Unusually low cache hit rates might be seen on HP-UX systems where users or applications access many different directories in no particular pattern. Low cache hit rates can also be an indicator of an underconfigured inode cache. When an inode cache is too small, the kernel will more frequently have to flush older inode cache and their corresponding directory name cache entries in order to make room for new inode cache entries. On HP-UX, the directory name lookup cache is static in size and is allocated in kernel memory. As a result, it is not affected by user memory constraints. The size of the cache is stored in the kernel variable “ncsize” and is not directly tunable by the system administrator; however, it can be changed indirectly by tuning other tables used in the formula to compute the “ncsize”. The formula is: ncsize = MAX(((nproc+16+maxusers)+ 32+(2*npty)),ninode) Note that ncsize is always >= ninode which is the default size of the inode cache. This is because the directory name cache contains inode table offsets for each cached pathname component. On SUN, long file names (greater than 30 characters) are not cached and are a type of cache miss. “Enters”, or cache data updates, are not included in this data. The DNLC size is: (maxusers * 17) + 90 GBL_MEM_DNLC_HIT_PCT -------------------- The percentage of time a pathname component was found in the directory name lookup cache (rather than requiring a disk read to find a file) during the interval. On HP-UX, the directory name lookup cache is used to minimize sequential searches through directory entries for pathname components during pathname to inode translations. Such translations are done whenever a file is accessed through its filename. The cache holds the inode cache table offset for recently referenced pathname components. Pathname components that exceed 15 characters are not cached. Any HP-UX system call that includes a path parameter can result in directory name lookup cache activity, including but not limited to system calls such as open, stat, exec, lstat, unlink. Each component of a path parameter is parsed and converted to an inode separately, therefore several dnlc hits per path are possible. High directory name cache hit rates on HP-UX will be seen on systems where pathname component requests are frequently repeated. For example, when users or applications work in the same directory where they repeatedly list or open the same files, cache hit rates will be high. Unusually low cache hit rates might be seen on HP-UX systems where users or applications access many different directories in no particular pattern. Low cache hit rates can also be an indicator of an underconfigured inode cache. When an inode cache is too small, the kernel will more frequently have to flush older inode cache and their corresponding directory name cache entries in order to make room for new inode cache entries. On HP-UX, the directory name lookup cache is static in size and is allocated in kernel memory. As a result, it is not affected by user memory constraints. 
The size of the cache is stored in the kernel variable “ncsize” and is not directly tunable by the system administrator; however, it can be changed indirectly by tuning other tables used in the formula to compute the “ncsize”. The formula is: ncsize = MAX(((nproc+16+maxusers)+ 32+(2*npty)),ninode) Note that ncsize is always >= ninode which is the default size of the inode cache. This is because the directory name cache contains inode table offsets for each cached pathname component. On SUN, long file names (greater than 30 characters) are not cached and are a type of cache miss. “Enters”, or cache data updates, are not included in this data. The DNLC size is: (maxusers * 17) + 90 GBL_MEM_DNLC_HIT_PCT_CUM -------------------- The percentage of time a pathname component was found in the directory name lookup cache (rather than requiring a disk read to find a file) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, the directory name lookup cache is used to minimize sequential searches through directory entries for pathname components during pathname to inode translations. Such translations are done whenever a file is accessed through its filename. The cache holds the inode cache table offset for recently referenced pathname components. Pathname components that exceed 15 characters are not cached. Any HP-UX system call that includes a path parameter can result in directory name lookup cache activity, including but not limited to system calls such as open, stat, exec, lstat, unlink. Each component of a path parameter is parsed and converted to an inode separately, therefore several dnlc hits per path are possible. High directory name cache hit rates on HP-UX will be seen on systems where pathname component requests are frequently repeated. For example, when users or applications work in the same directory where they repeatedly list or open the same files, cache hit rates will be high. Unusually low cache hit rates might be seen on HP-UX systems where users or applications access many different directories in no particular pattern. Low cache hit rates can also be an indicator of an underconfigured inode cache. When an inode cache is too small, the kernel will more frequently have to flush older inode cache and their corresponding directory name cache entries in order to make room for new inode cache entries. On HP-UX, the directory name lookup cache is static in size and is allocated in kernel memory. As a result, it is not affected by user memory constraints. The size of the cache is stored in the kernel variable “ncsize” and is not directly tunable by the system administrator; however, it can be changed indirectly by tuning other tables used in the formula to compute the “ncsize”. The formula is: ncsize = MAX(((nproc+16+maxusers)+ 32+(2*npty)),ninode) Note that ncsize is always >= ninode which is the default size of the inode cache. This is because the directory name cache contains inode table offsets for each cached pathname component. On SUN, long file names (greater than 30 characters) are not cached and are a type of cache miss. “Enters”, or cache data updates, are not included in this data. 
The DNLC size is: (maxusers * 17) + 90 GBL_MEM_DNLC_HIT_PCT_HIGH -------------------- The highest percentage of time during any one interval that a pathname component was found in the directory name lookup cache (rather than requiring a disk read to find a file) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, the directory name lookup cache is used to minimize sequential searches through directory entries for pathname components during pathname to inode translations. Such translations are done whenever a file is accessed through its filename. The cache holds the inode cache table offset for recently referenced pathname components. Pathname components that exceed 15 characters are not cached. Any HP-UX system call that includes a path parameter can result in directory name lookup cache activity, including but not limited to system calls such as open, stat, exec, lstat, unlink. Each component of a path parameter is parsed and converted to an inode separately, therefore several dnlc hits per path are possible. High directory name cache hit rates on HP-UX will be seen on systems where pathname component requests are frequently repeated. For example, when users or applications work in the same directory where they repeatedly list or open the same files, cache hit rates will be high. Unusually low cache hit rates might be seen on HP-UX systems where users or applications access many different directories in no particular pattern. Low cache hit rates can also be an indicator of an underconfigured inode cache. When an inode cache is too small, the kernel will more frequently have to flush older inode cache and their corresponding directory name cache entries in order to make room for new inode cache entries. On HP-UX, the directory name lookup cache is static in size and is allocated in kernel memory. As a result, it is not affected by user memory constraints. The size of the cache is stored in the kernel variable “ncsize” and is not directly tunable by the system administrator; however, it can be changed indirectly by tuning other tables used in the formula to compute the “ncsize”. The formula is: ncsize = MAX(((nproc+16+maxusers)+ 32+(2*npty)),ninode) Note that ncsize is always >= ninode which is the default size of the inode cache. This is because the directory name cache contains inode table offsets for each cached pathname component. On SUN, long file names (greater than 30 characters) are not cached and are a type of cache miss. “Enters”, or cache data updates, are not included in this data. The DNLC size is: (maxusers * 17) + 90 GBL_MEM_DNLC_LONGS -------------------- The number of times a pathname component was too long to be found in the directory name lookup cache during the interval. On HP-UX, the directory name lookup cache is used to minimize sequential searches through directory entries for pathname components during pathname to inode translations. Such translations are done whenever a file is accessed through its filename. The cache holds the inode cache table offset for recently referenced pathname components. Pathname components that exceed 15 characters are not cached. 
Any HP-UX system call that includes a path parameter can result in directory name lookup cache activity, including but not limited to system calls such as open, stat, exec, lstat, unlink. Each component of a path parameter is parsed and converted to an inode separately, therefore several dnlc hits per path are possible. High directory name cache hit rates on HP-UX will be seen on systems where pathname component requests are frequently repeated. For example, when users or applications work in the same directory where they repeatedly list or open the same files, cache hit rates will be high. Unusually low cache hit rates might be seen on HP-UX systems where users or applications access many different directories in no particular pattern. Low cache hit rates can also be an indicator of an underconfigured inode cache. When an inode cache is too small, the kernel will more frequently have to flush older inode cache and their corresponding directory name cache entries in order to make room for new inode cache entries. On HP-UX, the directory name lookup cache is static in size and is allocated in kernel memory. As a result, it is not affected by user memory constraints. The size of the cache is stored in the kernel variable “ncsize” and is not directly tunable by the system administrator; however, it can be changed indirectly by tuning other tables used in the formula to compute the “ncsize”. The formula is: ncsize = MAX(((nproc+16+maxusers)+ 32+(2*npty)),ninode) Note that ncsize is always >= ninode which is the default size of the inode cache. This is because the directory name cache contains inode table offsets for each cached pathname component. On SUN, long file names (greater than 30 characters) are not cached and are a type of cache miss. “Enters”, or cache data updates, are not included in this data. The DNLC size is: (maxusers * 17) + 90 GBL_MEM_DNLC_LONGS_CUM -------------------- The number of times a pathname component was too long to be found in the directory name lookup cache over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, the directory name lookup cache is used to minimize sequential searches through directory entries for pathname components during pathname to inode translations. Such translations are done whenever a file is accessed through its filename. The cache holds the inode cache table offset for recently referenced pathname components. Pathname components that exceed 15 characters are not cached. Any HP-UX system call that includes a path parameter can result in directory name lookup cache activity, including but not limited to system calls such as open, stat, exec, lstat, unlink. Each component of a path parameter is parsed and converted to an inode separately, therefore several dnlc hits per path are possible. High directory name cache hit rates on HP-UX will be seen on systems where pathname component requests are frequently repeated. For example, when users or applications work in the same directory where they repeatedly list or open the same files, cache hit rates will be high. Unusually low cache hit rates might be seen on HP-UX systems where users or applications access many different directories in no particular pattern. 
Low cache hit rates can also be an indicator of an underconfigured inode cache. When an inode cache is too small, the kernel will more frequently have to flush older inode cache and their corresponding directory name cache entries in order to make room for new inode cache entries. On HP-UX, the directory name lookup cache is static in size and is allocated in kernel memory. As a result, it is not affected by user memory constraints. The size of the cache is stored in the kernel variable “ncsize” and is not directly tunable by the system administrator; however, it can be changed indirectly by tuning other tables used in the formula to compute the “ncsize”. The formula is: ncsize = MAX(((nproc+16+maxusers)+ 32+(2*npty)),ninode) Note that ncsize is always >= ninode which is the default size of the inode cache. This is because the directory name cache contains inode table offsets for each cached pathname component. On SUN, long file names (greater than 30 characters) are not cached and are a type of cache miss. “Enters”, or cache data updates, are not included in this data. The DNLC size is: (maxusers * 17) + 90 GBL_MEM_DNLC_LONGS_PCT -------------------- The percentage of time a pathname component was too long to be found in the directory name lookup cache during the interval. On HP-UX, the directory name lookup cache is used to minimize sequential searches through directory entries for pathname components during pathname to inode translations. Such translations are done whenever a file is accessed through its filename. The cache holds the inode cache table offset for recently referenced pathname components. Pathname components that exceed 15 characters are not cached. Any HP-UX system call that includes a path parameter can result in directory name lookup cache activity, including but not limited to system calls such as open, stat, exec, lstat, unlink. Each component of a path parameter is parsed and converted to an inode separately, therefore several dnlc hits per path are possible. High directory name cache hit rates on HP-UX will be seen on systems where pathname component requests are frequently repeated. For example, when users or applications work in the same directory where they repeatedly list or open the same files, cache hit rates will be high. Unusually low cache hit rates might be seen on HP-UX systems where users or applications access many different directories in no particular pattern. Low cache hit rates can also be an indicator of an underconfigured inode cache. When an inode cache is too small, the kernel will more frequently have to flush older inode cache and their corresponding directory name cache entries in order to make room for new inode cache entries. On HP-UX, the directory name lookup cache is static in size and is allocated in kernel memory. As a result, it is not affected by user memory constraints. The size of the cache is stored in the kernel variable “ncsize” and is not directly tunable by the system administrator; however, it can be changed indirectly by tuning other tables used in the formula to compute the “ncsize”. The formula is: ncsize = MAX(((nproc+16+maxusers)+ 32+(2*npty)),ninode) Note that ncsize is always >= ninode which is the default size of the inode cache. This is because the directory name cache contains inode table offsets for each cached pathname component. On SUN, long file names (greater than 30 characters) are not cached and are a type of cache miss. “Enters”, or cache data updates, are not included in this data. 
The DNLC size is: (maxusers * 17) + 90 GBL_MEM_DNLC_LONGS_PCT_CUM -------------------- The percentage of time a pathname component was too long to be found in the directory name lookup cache over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, the directory name lookup cache is used to minimize sequential searches through directory entries for pathname components during pathname to inode translations. Such translations are done whenever a file is accessed through its filename. The cache holds the inode cache table offset for recently referenced pathname components. Pathname components that exceed 15 characters are not cached. Any HP-UX system call that includes a path parameter can result in directory name lookup cache activity, including but not limited to system calls such as open, stat, exec, lstat, unlink. Each component of a path parameter is parsed and converted to an inode separately, therefore several dnlc hits per path are possible. High directory name cache hit rates on HP-UX will be seen on systems where pathname component requests are frequently repeated. For example, when users or applications work in the same directory where they repeatedly list or open the same files, cache hit rates will be high. Unusually low cache hit rates might be seen on HP-UX systems where users or applications access many different directories in no particular pattern. Low cache hit rates can also be an indicator of an underconfigured inode cache. When an inode cache is too small, the kernel will more frequently have to flush older inode cache and their corresponding directory name cache entries in order to make room for new inode cache entries. On HP-UX, the directory name lookup cache is static in size and is allocated in kernel memory. As a result, it is not affected by user memory constraints. The size of the cache is stored in the kernel variable “ncsize” and is not directly tunable by the system administrator; however, it can be changed indirectly by tuning other tables used in the formula to compute the “ncsize”. The formula is: ncsize = MAX(((nproc+16+maxusers)+ 32+(2*npty)),ninode) Note that ncsize is always >= ninode which is the default size of the inode cache. This is because the directory name cache contains inode table offsets for each cached pathname component. On SUN, long file names (greater than 30 characters) are not cached and are a type of cache miss. “Enters”, or cache data updates, are not included in this data. The DNLC size is: (maxusers * 17) + 90 GBL_MEM_DNLC_LONGS_PCT_HIGH -------------------- The highest percentage of time during any one interval that a pathname component was too long to be found in the directory name lookup cache over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, the directory name lookup cache is used to minimize sequential searches through directory entries for pathname components during pathname to inode translations. 
Such translations are done whenever a file is accessed through its filename. The cache holds the inode cache table offset for recently referenced pathname components. Pathname components that exceed 15 characters are not cached. Any HP-UX system call that includes a path parameter can result in directory name lookup cache activity, including but not limited to system calls such as open, stat, exec, lstat, unlink. Each component of a path parameter is parsed and converted to an inode separately, therefore several dnlc hits per path are possible. High directory name cache hit rates on HP-UX will be seen on systems where pathname component requests are frequently repeated. For example, when users or applications work in the same directory where they repeatedly list or open the same files, cache hit rates will be high. Unusually low cache hit rates might be seen on HP-UX systems where users or applications access many different directories in no particular pattern. Low cache hit rates can also be an indicator of an underconfigured inode cache. When an inode cache is too small, the kernel will more frequently have to flush older inode cache and their corresponding directory name cache entries in order to make room for new inode cache entries. On HP-UX, the directory name lookup cache is static in size and is allocated in kernel memory. As a result, it is not affected by user memory constraints. The size of the cache is stored in the kernel variable “ncsize” and is not directly tunable by the system administrator; however, it can be changed indirectly by tuning other tables used in the formula to compute the “ncsize”. The formula is: ncsize = MAX(((nproc+16+maxusers)+ 32+(2*npty)),ninode) Note that ncsize is always >= ninode which is the default size of the inode cache. This is because the directory name cache contains inode table offsets for each cached pathname component. On SUN, long file names (greater than 30 characters) are not cached and are a type of cache miss. “Enters”, or cache data updates, are not included in this data. The DNLC size is: (maxusers * 17) + 90 GBL_MEM_FILE_PAGEIN_RATE -------------------- The number of page ins from the file system per second during the interval. On Solaris, this is the same as the “fpi” value from the “vmstat -p” command, divided by page size in KB. GBL_MEM_FILE_PAGEOUT_RATE -------------------- The number of page outs to the file system per second during the interval. On Solaris, this is the same as the “fpo” value from the “vmstat -p” command, divided by page size in KB (see the conversion sketch following the GBL_MEM_FREE_UTIL definition below). GBL_MEM_FREE -------------------- The amount of memory not allocated (in MBs unless otherwise specified). As this value drops, the likelihood increases that swapping or paging out to disk may occur to satisfy new memory requests. Beginning with the OVPA 4.0 release, this metric is now reported in MBytes to better report the significant increases in system memory capacities. WARNING: This change in scale applies to this metric when logged by OVPA or displayed with GlancePlus for this release and beyond. However, the presentation of this metric recorded in legacy data (data logged with OVPA C.03 and previous releases) will remain in units of KBytes when viewed with extract or OVPM. On SUN, low values for this metric may not indicate a true memory shortage. This metric can be influenced by the VMM (Virtual Memory Management) system. GBL_MEM_FREE_UTIL -------------------- The percentage of physical memory that was free at the end of the interval.
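The following minimal Python sketch shows the conversion described above for GBL_MEM_FILE_PAGEIN_RATE and GBL_MEM_FILE_PAGEOUT_RATE on Solaris; the “fpi” and “fpo” figures and the page size are assumed values, not output from a real system:

  # Assumed inputs: "vmstat -p" reports fpi/fpo in KB per second.
  page_size_kb   = 8        # e.g. 8 KB pages; confirm with the pagesize(1) command
  fpi_kb_per_sec = 1024.0   # hypothetical "fpi" value
  fpo_kb_per_sec = 256.0    # hypothetical "fpo" value

  # Dividing by the page size in KB yields page ins/outs per second.
  file_pagein_rate  = fpi_kb_per_sec / page_size_kb    # analogous to GBL_MEM_FILE_PAGEIN_RATE -> 128.0
  file_pageout_rate = fpo_kb_per_sec / page_size_kb    # analogous to GBL_MEM_FILE_PAGEOUT_RATE -> 32.0
  print(file_pagein_rate, file_pageout_rate)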
GBL_MEM_PAGEIN -------------------- The total number of disk blocks paged into memory (or page ins) from the disk during the interval. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. On HP-UX, this is the same as the “page ins” value from the “vmstat -s” command. On AIX, this is the same as the “paging space page ins” value. Remember that “vmstat -s” reports cumulative counts. GBL_MEM_PAGEIN_BYTE -------------------- The number of KBs (or MBs if specified) of page ins during the interval. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_BYTE_CUM -------------------- The number of KBs (or MBs if specified) of page ins over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_BYTE_RATE -------------------- The number of KBs per second of page ins during the interval. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_BYTE_RATE_CUM -------------------- The average number of KBs per second of page ins over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_BYTE_RATE_HIGH -------------------- The highest number of KBs per second of page ins during any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_CUM -------------------- The total number of page ins from the disk over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_RATE -------------------- The total number of disk blocks paged into memory (or page ins) per second from the disk during the interval. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. On HP-UX and AIX, this is the same as the “pi” value from the vmstat command. On Solaris, this is the same as the sum of the “epi” and “api” values from the “vmstat -p” command, divided by the page size in KB. GBL_MEM_PAGEIN_RATE_CUM -------------------- The average number of page ins per second over the cumulative collection time. This includes pages paged in from paging space and, except for AIX, from the file system. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEIN_RATE_HIGH -------------------- The highest number of page ins per second from disk during any interval over the cumulative collection time. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEOUT -------------------- The total number of page outs to the disk during the interval. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. On HP-UX, this is the same as the “page outs” value from the “vmstat -s” command. On AIX, this is the same as the “paging space page outs” value. Remember that “vmstat -s” reports cumulative counts. GBL_MEM_PAGEOUT_BYTE -------------------- The number of KBs (or MBs if specified) of page outs during the interval. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEOUT_BYTE_CUM -------------------- The number of KBs (or MBs if specified) of page outs over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEOUT_BYTE_RATE -------------------- The number of KBs (or MBs if specified) per second of page outs during the interval. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEOUT_BYTE_RATE_CUM -------------------- The average number of KBs per second of page outs over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEOUT_BYTE_RATE_HIGH -------------------- The highest number of KBs per second of page outs during any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEOUT_CUM -------------------- The total number of page outs to the disk over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEOUT_RATE -------------------- The total number of page outs to the disk per second during the interval. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. On HP-UX and AIX, this is the same as the “po” value from the vmstat command. 
On Solaris, this is the same as the sum of the “epo” and “apo” values from the “vmstat -p” command, divided by the page size in KB. On Windows, this counter also includes paging traffic on behalf of the system cache to access file data for applications and so may be high when there is no memory pressure. GBL_MEM_PAGEOUT_RATE_CUM -------------------- The average number of page outs to the disk per second over the cumulative collection time. This includes pages paged out to paging space and, except for AIX, to the file system. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGEOUT_RATE_HIGH -------------------- The highest number of page outs per second to disk during any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, Solaris, and AIX, this reflects paging activity between memory and paging space. It does not include activity between memory and file systems. On Linux and Windows, this includes paging activity for both file systems and paging space. GBL_MEM_PAGE_FAULT -------------------- The number of page faults that occurred during the interval. GBL_MEM_PAGE_FAULT_CUM -------------------- The number of page faults that occurred over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_MEM_PAGE_FAULT_RATE -------------------- The number of page faults per second during the interval. GBL_MEM_PAGE_FAULT_RATE_CUM -------------------- The average number of page faults per second over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_MEM_PAGE_FAULT_RATE_HIGH -------------------- The highest number of page faults per second during any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_MEM_PAGE_REQUEST -------------------- The number of page requests to or from the disk during the interval.
On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging space and not to or from the file system. On Linux and Windows, this includes pages paged to or from both paging space and the file system. On HP-UX, this is the same as the sum of the “page ins” and “page outs” values from the “vmstat -s” command. On AIX, this is the same as the sum of the “paging space page ins” and “paging space page outs” values. Remember that “vmstat -s” reports cumulative counts. On Windows, this counter also includes paging traffic on behalf of the system cache to access file data for applications and so may be high when there is no memory pressure. GBL_MEM_PAGE_REQUEST_CUM -------------------- The total number of page requests to or from the disk over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging space and not to or from the file system. On Linux and Windows, this includes pages paged to or from both paging space and the file system. On Windows, this counter also includes paging traffic on behalf of the system cache to access file data for applications and so may be high when there is no memory pressure. GBL_MEM_PAGE_REQUEST_RATE -------------------- The number of page requests to or from the disk per second during the interval. On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging space and not to or from the file system. On Linux and Windows, this includes pages paged to or from both paging space and the file system. On HP-UX and AIX, this is the same as the sum of the “pi” and “po” values from the vmstat command. On Solaris, this is the same as the sum of the “epi”, “epo”, “api”, and “apo” values from the “vmstat -p” command, divided by the page size in KB. Higher than normal rates can indicate either a memory or a disk bottleneck. Compare GBL_DISK_UTIL_PEAK and GBL_MEM_UTIL to determine which resource is more constrained (a sketch of this comparison appears below). High rates may also indicate memory thrashing caused by a particular application or set of applications. Look for processes with high major fault rates to identify the culprits. GBL_MEM_PAGE_REQUEST_RATE_CUM -------------------- The average number of page requests to or from the disk per second over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging space and not to or from the file system. On Linux and Windows, this includes pages paged to or from both paging space and the file system. GBL_MEM_PAGE_REQUEST_RATE_HIGH -------------------- The highest number of page requests per second during any interval over the cumulative collection time.
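The comparison suggested under GBL_MEM_PAGE_REQUEST_RATE above can be sketched in Python as follows; the metric values and the threshold are hypothetical, chosen only to illustrate the decision, and are not GlancePlus defaults:

  # Hypothetical interval samples of the metrics named above.
  gbl_mem_page_request_rate = 120.0   # page requests per second (assumed)
  gbl_mem_util              = 96.0    # percent of physical memory in use (assumed)
  gbl_disk_util_peak        = 45.0    # utilization of the busiest disk, percent (assumed)

  HIGH_PAGE_REQUEST_RATE = 100.0      # illustrative "higher than normal" threshold

  if gbl_mem_page_request_rate > HIGH_PAGE_REQUEST_RATE:
      if gbl_mem_util >= gbl_disk_util_peak:
          print("Memory looks more constrained; check processes with high major fault rates")
      else:
          print("The busiest disk looks more constrained; examine disk activity")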
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging space and not to or from the file system. On Linux and Windows, this includes pages paged to or from both paging space and the file system. GBL_MEM_PG_SCAN -------------------- The number of pages scanned by the pageout daemon (or by the Clock Hand on AIX) during the interval. The clock hand algorithm is used to control page aging on the system. GBL_MEM_PG_SCAN_CUM -------------------- The number of pages scanned by the pageout daemon (or by the Clock Hand on AIX) over the cumulative collection time. The clock hand algorithm is used to control page aging on the system. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_MEM_PG_SCAN_RATE -------------------- The number of pages scanned per second by the pageout daemon (or by the Clock Hand on AIX) during the interval. The clock hand algorithm is used to control page aging on the system. GBL_MEM_PG_SCAN_RATE_CUM -------------------- The average number of pages scanned per second by the pageout daemon (or by the Clock Hand on AIX) over the cumulative collection time. The clock hand algorithm is used to control page aging on the system. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_MEM_PG_SCAN_RATE_HIGH -------------------- The highest number of pages scanned per second by the pageout daemon (or by the Clock Hand on AIX) during any interval over the cumulative collection time. The clock hand algorithm is used to control page aging on the system. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_MEM_PHYS -------------------- The amount of physical memory in the system (in MBs unless otherwise specified). Beginning with the OVPA 4.0 release, this metric is now reported in MBytes to better report the significant increases in system memory capacities. WARNING: This change in scale applies to this metric when logged by OVPA or displayed with GlancePlus for this release and beyond. However, the presentation of this metric recorded in legacy data (data logged with OVPA C.03 and previous releases) will remain in units of KBytes when viewed with extract or OVPM. On HP-UX, banks with bad memory are not counted. Note that on some machines, the Processor Dependent Code (PDC) uses the upper 1MB of memory and thus reports less than the actual physical memory of the system.
Thus, on a system with 256MB of physical memory, this metric and dmesg(1M) might only report 267,386,880 bytes (255MB). This is all the physical memory that software on the machine can access. On Windows, this is the total memory available, which may be slightly less than the total amount of physical memory present in the system. This value is also reported in the Control Panel's About Windows NT help topic. GBL_MEM_SWAP -------------------- The total number of swap ins and swap outs (or deactivations and reactivations on HP-UX) during the interval. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPIN -------------------- The number of swap ins (or reactivations on HP-UX) during the interval. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, this is the same as the “swap ins” value from the “vmstat -s” command. Remember that “vmstat -s” reports cumulative counts. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. 
GBL_MEM_SWAPIN_BYTE -------------------- The number of KBs transferred in from disk due to swap ins (or reactivations on HP-UX) during the interval. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPIN_BYTE_CUM -------------------- The number of KBs transferred in from disk due to swap ins (or reactivations on HP-UX) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPIN_BYTE_RATE -------------------- The number of KBs per second transferred from disk due to swap ins (or reactivations on HP-UX) during the interval. 
On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPIN_BYTE_RATE_CUM -------------------- The number of KBs per second transferred from disk due to swap ins (or reactivations on HP-UX) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPIN_BYTE_RATE_HIGH -------------------- The highest number of KBs per second transferred from disk due to swap ins (or reactivations on HP-UX) during any interval over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPIN_CUM -------------------- The number of swap ins (or reactivations on HP-UX) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. 
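The interval, cumulative (_CUM), cumulative-rate (_RATE_CUM), and high-water (_RATE_HIGH) variants of these swap-in metrics are all derived from the same per-interval samples. The following Python sketch is illustrative only (the variable names are hypothetical and this is not the collector's actual implementation); it assumes the cumulative counters were last reset at the start of the sample list.

  # Illustrative only: how per-interval samples roll up into the _CUM,
  # _RATE_CUM, and _RATE_HIGH variants of a swap-in metric.
  # Each sample is (KBs swapped in during the interval, interval length in seconds).
  samples = [(128.0, 5.0), (0.0, 5.0), (512.0, 5.0)]

  swapin_byte_cum = sum(kb for kb, secs in samples)           # compare GBL_MEM_SWAPIN_BYTE_CUM
  collection_time = sum(secs for kb, secs in samples)         # cumulative collection time

  # Average KBs per second over the cumulative collection time.
  swapin_byte_rate_cum = swapin_byte_cum / collection_time    # compare GBL_MEM_SWAPIN_BYTE_RATE_CUM

  # Highest KBs per second observed in any single interval.
  swapin_byte_rate_high = max(kb / secs for kb, secs in samples)  # compare GBL_MEM_SWAPIN_BYTE_RATE_HIGH

  print(swapin_byte_cum, swapin_byte_rate_cum, swapin_byte_rate_high)

The same relationships apply to the swap-out and combined swap metrics defined below.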
GBL_MEM_SWAPIN_RATE -------------------- The number of swap ins (or reactivations on HP-UX) per second during the interval. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPIN_RATE_CUM -------------------- The average number of swap ins (or reactivations on HP-UX) per second over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPIN_RATE_HIGH -------------------- The highest number of swap ins (or reactivations on HP-UX) per second during any interval over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPOUT -------------------- The number of swap outs (or deactivations on HP-UX) during the interval. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, this is the same as the “swap outs” values from the “vmstat -s” command. Remember that “vmstat -s” reports cumulative counts. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPOUT_BYTE -------------------- The number of KBs (or MBs if specified) transferred out to disk due to swap outs (or deactivations on HP-UX) during the interval. On AIX, swap metrics are equal to the corresponding page metrics. 
On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPOUT_BYTE_CUM -------------------- The number of KBs (or MBs if specified) transferred out to disk due to swap outs (or deactivations on HP-UX) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPOUT_BYTE_RATE -------------------- The number of KBs (or MBs if specified) per second transferred out to disk due to swap outs (or deactivations on HP-UX) during the interval. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. 
Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPOUT_BYTE_RATE_CUM -------------------- The average number of KBs (or MBs if specified) per second transferred out to disk due to swap outs (or deactivations on HP-UX) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPOUT_BYTE_RATE_HIGH -------------------- The highest number of KBs (or MBs if specified) per second transferred out to disk due to swap outs (or deactivations on HP-UX) during any interval over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPOUT_CUM -------------------- The number of swap outs (or deactivations on HP-UX) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. 
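Because commands such as “vmstat -s” report cumulative counts (see GBL_MEM_SWAPOUT above), an interval metric such as GBL_MEM_SWAPOUT corresponds to the difference between two successive readings of the cumulative counter. A minimal Python sketch, assuming the interval value is obtained by differencing cumulative counter readings (the numbers are made up):

  # Two readings of a cumulative swap-out counter taken one interval apart,
  # for example from successive "vmstat -s" samples. Values are illustrative.
  previous_swapouts = 1042
  current_swapouts = 1057
  interval_seconds = 5.0

  swapouts_in_interval = current_swapouts - previous_swapouts  # compare GBL_MEM_SWAPOUT
  swapout_rate = swapouts_in_interval / interval_seconds       # compare GBL_MEM_SWAPOUT_RATE

  print(swapouts_in_interval, swapout_rate)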
GBL_MEM_SWAPOUT_RATE -------------------- The number of swap outs (or deactivations on HP-UX) per second during the interval. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPOUT_RATE_CUM -------------------- The number of swap outs (or deactivations on HP-UX) per second over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAPOUT_RATE_HIGH -------------------- The highest number of swap outs (or deactivations on HP-UX) per second during any interval over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAP_1_MIN_RATE -------------------- The number of swap ins and swap outs (or deactivations/reactivations on HP-UX) per minute during the interval. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAP_CUM -------------------- The total number of swap ins and swap outs (or deactivations and reactivations on HP-UX) over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAP_RATE -------------------- The total number of swap ins and swap outs (or deactivations and reactivations on HP-UX) per second during the interval. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAP_RATE_CUM -------------------- The average number of swap ins and swap outs (or deactivations and reactivations on HP-UX) per second over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SWAP_RATE_HIGH -------------------- The highest number of swap ins and swap outs (or deactivations and reactivations on HP-UX) per second during any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On AIX, swap metrics are equal to the corresponding page metrics. On HP-UX, process swapping was replaced by a combination of paging and deactivation. Process deactivation occurs when the system is thrashing or when the amount of free memory falls below a critical level. The swapper then marks certain processes for deactivation and removes them from the run queue. Pages within the associated memory regions are reused or paged out by the memory management vhand process in favor of pages belonging to processes that are not deactivated. Unlike traditional process swapping, deactivated memory pages may or may not be written out to the swap area, because a process could be reactivated before the paging occurs. To summarize, a process swap-out on HP-UX is a process deactivation. A swap-in is a reactivation of a deactivated process. Swap metrics that report swap-out bytes now represent bytes paged out to swap areas from deactivated regions. Because these pages are pushed out over time based on memory demands, these counts are much smaller than HP-UX 9.x counts where the entire process was written to the swap area when it was swapped-out. 
Likewise, swap-in bytes now represent bytes paged in as a result of reactivating a deactivated process and reading in any pages that were actually paged out to the swap area while the process was deactivated. GBL_MEM_SYS -------------------- The amount of physical memory (in MBs unless otherwise specified) used by the system (kernel) during the interval. System memory does not include the buffer cache. Beginning with the OVPA 4.0 release, this metric is now reported in MBytes to better report the significant increases in system memory capacities. WARNING: This change in scale applies to this metric when logged by OVPA or displayed with GlancePlus for this release and beyond. However, the presentation of this metric recorded in legacy data (data logged with OVPA C.03 and previous releases), will remain in units of KBytes when viewed with extract or OVPM. On HP-UX 11.0, this metric does not include some kinds of dynamically allocated kernel memory. This has always been reported in the GBL_MEM_USER* metrics. On HP-UX 11.11 and beyond, this metric includes some kinds of dynamically allocated kernel memory. GBL_MEM_SYS_AND_CACHE_UTIL -------------------- The percentage of physical memory used by the system (kernel) and the buffer cache at the end of the interval. On HP-UX 11.0, this metric does not include some kinds of dynamically allocated kernel memory. This has always been reported in the GBL_MEM_USER* metrics. On HP-UX 11.11 and beyond, this metric includes some kinds of dynamically allocated kernel memory. GBL_MEM_SYS_UTIL -------------------- The percentage of physical memory used by the system during the interval. System memory does not include the buffer cache. On HP-UX 11.0, this metric does not include some kinds of dynamically allocated kernel memory. This has always been reported in the GBL_MEM_USER* metrics. On HP-UX 11.11 and beyond, this metric includes some kinds of dynamically allocated kernel memory. GBL_MEM_USER -------------------- The amount of physical memory (in MBs unless otherwise specified) allocated to user code and data at the end of the interval. User memory regions include code, heap, stack, and other data areas including shared memory. This does not include memory for buffer cache. Beginning with the OVPA 4.0 release, this metric is now reported in MBytes to better report the significant increases in system memory capacities. WARNING: This change in scale applies to this metric when logged by OVPA or displayed with GlancePlus for this release and beyond. However, the presentation of this metric recorded in legacy data (data logged with OVPA C.03 and previous releases), will remain in units of KBytes when viewed with extract or OVPM. On HP-UX 11.0, this metric includes some kinds of dynamically allocated kernel memory. On HP-UX 11.11 and beyond, this metric does not include some kinds of dynamically allocated kernel memory. This is now reported in the GBL_MEM_SYS* metrics. Large fluctuations in this metric can be caused by programs which allocate large amounts of memory and then either release the memory or terminate. A slow continual increase in this metric may indicate a program with a memory leak. GBL_MEM_USER_UTIL -------------------- The percent of physical memory allocated to user code and data at the end of the interval. This metric shows the percent of memory owned by user memory regions such as user code, heap, stack and other data areas including shared memory. This does not include memory for buffer cache. 
On HP-UX 11.0, this metric includes some kinds of dynamically allocated kernel memory. On HP-UX 11.11 and beyond, this metric does not include some kinds of dynamically allocated kernel memory. This is now reported in the GBL_MEM_SYS* metrics. Large fluctuations in this metric can be caused by programs which allocate large amounts of memory and then either release the memory or terminate. A slow continual increase in this metric may indicate a program with a memory leak. GBL_MEM_UTIL -------------------- The percentage of physical memory in use during the interval. This includes system memory (occupied by the kernel), buffer cache and user memory. On HP-UX, this calculation is done using the byte values for physical memory and used memory, and is therefore more accurate than comparing the reported kilobyte values for physical memory and used memory. On SUN, high values for this metric may not indicate a true memory shortage. This metric can be influenced by the VMM (Virtual Memory Management) system. GBL_MEM_UTIL_CUM -------------------- The average percentage of physical memory in use over the cumulative collection time. This includes system memory (occupied by the kernel), buffer cache and user memory. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_MEM_UTIL_HIGH -------------------- The highest percentage of physical memory in use in any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_NET_COLLISION -------------------- The number of collisions that occurred on all network interfaces during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not include deferred packets. For HP-UX, this will be the same as the sum of the “Single Collision Frames“, ”Multiple Collision Frames“, ”Late Collisions“, and ”Excessive Collisions“ values from the output of the ”lanadmin“ utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Coll” column from the “netstat -i” command (“collisions” from the “netstat -i -e” command on Linux) for a network device. See also netstat(1). AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_COLLISION_1_MIN_RATE -------------------- The number of collisions per minute on all network interfaces during the interval. This metric does not include deferred packets. Collisions occur on any busy network, but abnormal collision rates could indicate a hardware or software problem. 
AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_COLLISION_CUM -------------------- The number of collisions that occurred on all network interfaces over the cumulative collection time. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not include deferred packets. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. For HP-UX, this will be the same as the sum of the “Single Collision Frames”, “Multiple Collision Frames”, “Late Collisions”, and “Excessive Collisions” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Coll” column from the “netstat -i” command (“collisions” from the “netstat -i -e” command on Linux) for a network device. See also netstat(1). AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_COLLISION_PCT -------------------- The percentage of collisions to total outbound packet attempts during the interval. Outbound packet attempts include both successful packets and collisions. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets. AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_COLLISION_PCT_CUM -------------------- The percentage of collisions to total outbound packet attempts over the cumulative collection time. Outbound packet attempts include both successful packets and collisions. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets. AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page.
This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_COLLISION_RATE -------------------- The number of collisions per second on all network interfaces during the interval. This metric does not include deferred packets. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_DEFERRED -------------------- The number of outbound deferred packets due to the network being in use during the interval. GBL_NET_DEFERRED_CUM -------------------- The number of outbound deferred packets due to the network being in use over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_NET_DEFERRED_PCT -------------------- The percentage of deferred packets to total outbound packet attempts during the interval. Outbound packet attempts include both packets successfully transmitted and those that were deferred. GBL_NET_DEFERRED_PCT_CUM -------------------- The percentage of deferred packets to total outbound packet attempts over the cumulative collection time. Outbound packet attempts include both packets successfully transmitted and those that were deferred. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_NET_DEFERRED_RATE -------------------- The number of deferred packets per second on all network interfaces during the interval. GBL_NET_DEFERRED_RATE_CUM -------------------- The number of deferred packets per second on all network interfaces over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_NET_ERROR -------------------- The number of errors that occurred on all network interfaces during the interval. For HP-UX, this will be the same as the sum of the “Inbound Errors” and “Outbound Errors” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of “Ierrs” (RX-ERR on Linux) and “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. 
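The collision and error metrics above are totals across all network interfaces, and the percentage metrics compare collisions with total outbound packet attempts, where an attempt is either a successfully transmitted packet or a collision. A short Python sketch of that arithmetic, using made-up per-interface interval counts rather than real netstat or lanadmin output:

  # Hypothetical per-interface counts for one interval:
  # interface name -> (outbound packets sent, collisions).
  interfaces = {"hme0": (2400, 12), "hme1": (800, 0)}

  out_packets = sum(pkts for pkts, colls in interfaces.values())
  collisions = sum(colls for pkts, colls in interfaces.values())      # compare GBL_NET_COLLISION

  # Outbound packet attempts include both successful packets and collisions.
  attempts = out_packets + collisions
  collision_pct = 100.0 * collisions / attempts if attempts else 0.0  # compare GBL_NET_COLLISION_PCT

  print(collisions, round(collision_pct, 2))

The inbound and outbound error percentage metrics later in this section follow the same pattern: errors divided by total packet attempts (successful packets plus errors), expressed as a percentage.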
GBL_NET_ERROR_1_MIN_RATE -------------------- The number of errors per minute on all network interfaces during the interval. This rate should normally be zero or very small. A large error rate can indicate a hardware or software problem. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_ERROR_CUM -------------------- The number of errors that occurred on all network interfaces over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. For HP-UX, this will be the same as the total sum of the “Inbound Errors” and “Outbound Errors” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of “Ierrs” (RX-ERR on Linux) and “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_ERROR_RATE -------------------- The number of errors per second on all network interfaces during the interval. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_IN_ERROR -------------------- The number of inbound errors that occurred on all network interfaces during the interval. A large number of errors may indicate a hardware problem on the network. For HP-UX, this will be the same as the sum of the “Inbound Errors” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of “Ierrs” (RX-ERR on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_IN_ERROR_CUM -------------------- The number of inbound errors that occurred on all network interfaces over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. A large number of errors may indicate a hardware problem on the network. For HP-UX, this will be the same as the total sum of the “Inbound Errors” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of “Ierrs” (RX-ERR on Linux) from the “netstat -i” command for a network device. See also netstat(1).
This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_IN_ERROR_PCT -------------------- The percentage of inbound network errors to total inbound packet attempts during the interval. Inbound packet attempts include both packets successfully received and those that encountered errors. A large number of errors may indicate a hardware problem on the network. The percentage of inbound errors to total packets attempted should remain low. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_IN_ERROR_PCT_CUM -------------------- The percentage of inbound network errors to total inbound packet attempts over the cumulative collection time. Inbound packet attempts include both packets successfully received and those that encountered errors. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. A large number of errors may indicate a hardware problem on the network. The percentage of inbound errors to total packets attempted should remain low. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_IN_ERROR_RATE -------------------- The number of inbound errors per second on all network interfaces during the interval. A large number of errors may indicate a hardware problem on the network. The percentage of inbound errors to total packets attempted should remain low. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_IN_ERROR_RATE_CUM -------------------- The average number of inbound errors per second on all network interfaces over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_IN_PACKET -------------------- The number of successful packets received through all network interfaces during the interval. Successful packets are those that have been processed without errors or collisions. For HP-UX, this will be the same as the sum of the “Inbound Unicast Packets” and “Inbound Non-Unicast Packets” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Ipkts” column (RX-OK on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows systems, the packet size for NBT connections is defined as 1 Kbyte. GBL_NET_IN_PACKET_CUM -------------------- The number of successful packets received through all network interfaces over the cumulative collection time.
Successful packets are those that have been processed without errors or collisions. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. For HP-UX, this will be the same as the total sum of the “Inbound Unicast Packets” and “Inbound Non-Unicast Packets” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Ipkts” column (RX-OK on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_IN_PACKET_RATE -------------------- The number of successful packets per second received through all network interfaces during the interval. Successful packets are those that have been processed without errors or collisions. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows systems, the packet size for NBT connections is defined as 1 Kbyte. GBL_NET_OUT_ERROR -------------------- The number of outbound errors that occurred on all network interfaces during the interval. For HP-UX, this will be the same as the sum of the “Outbound Errors” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_OUT_ERROR_CUM -------------------- The number of outbound errors that occurred on all network interfaces over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. For HP-UX, this will be the same as the total sum of the “Outbound Errors” values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_OUT_ERROR_PCT -------------------- The percentage of outbound network errors to total outbound packet attempts during the interval. Outbound packet attempts include both packets successfully sent and those that encountered errors. The percentage of outbound errors to total packets attempted to be transmitted should remain low.
This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_OUT_ERROR_PCT_CUM -------------------- The percentage of outbound network errors to total outbound packet attempts over the cumulative collection time. Outbound packet attempts include both packets successfully sent and those that encountered errors. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. The percentage of outbound errors to total packets attempted to be transmitted should remain low. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_OUT_ERROR_RATE -------------------- The number of outbound errors per second on all network interfaces during the interval. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_OUT_ERROR_RATE_CUM -------------------- The number of outbound errors per second on all network interfaces over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_OUT_PACKET -------------------- The number of successful packets sent through all network interfaces during the last interval. Successful packets are those that have been processed without errors or collisions. For HP-UX, this will be the same as the sum of the “Outbound Unicast Packets“ and ”Outbound Non-Unicast Packets“ values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. For all other Unix systems, this is the same as the sum of the “Opkts” column (TX-OK on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. GBL_NET_OUT_PACKET_CUM -------------------- The number of successful packets sent through all network interfaces over the cumulative collection time. Successful packets are those that have been processed without errors or collisions. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. For HP-UX, this will be the same as the total sum of the “Outbound Unicast Packets“ and ”Outbound Non-Unicast Packets“ values from the output of the “lanadmin” utility for the network interface. Remember that “lanadmin” reports cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the logical level (IP) only. 
For all other Unix systems, this is the same as the sum of the “Opkts” column (TX-OK on Linux) from the “netstat -i” command for a network device. See also netstat(1). This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. GBL_NET_OUT_PACKET_RATE -------------------- The number of successful packets per second sent through the network interfaces during the interval. Successful packets are those that have been processed without errors or collisions. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. GBL_NET_PACKET -------------------- The total number of successful inbound and outbound packets for all network interfaces during the interval. These are the packets that have been processed without errors or collisions. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. GBL_NET_PACKET_RATE -------------------- The number of successful packets per second (both inbound and outbound) for all network interfaces during the interval. Successful packets are those that have been processed without errors or collisions. This metric is updated at the sampling interval, regardless of the number of IP addresses on the system. On Windows system, the packet size for NBT connections is defined as 1 Kbyte. GBL_NFS_CALL -------------------- The number of NFS calls the local system has made as either a NFS client or server during the interval. This includes both successful and unsuccessful calls. Unsuccessful calls are those that cannot be completed due to resource limitations or LAN packet errors. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. GBL_NFS_CALL_RATE -------------------- The number of NFS calls per second the system made as either a NFS client or NFS server during the interval. Each computer can operate as both a NFS server, and as an NFS client. This metric includes both successful and unsuccessful calls. Unsuccessful calls are those that cannot be completed due to resource limitations or LAN packet errors. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. GBL_NFS_CLIENT_BAD_CALL -------------------- The number of failed NFS client calls during the interval. Calls fail due to lack of system resources (lack of virtual memory) as well as network errors. GBL_NFS_CLIENT_BAD_CALL_CUM -------------------- The number of failed NFS client calls over the cumulative collection time. Calls fail due to lack of system resources (lack of virtual memory) as well as network errors. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_NFS_CLIENT_CALL -------------------- The number of NFS calls the local machine has processed as a NFS client during the interval. Calls are the system calls used to initiate physical NFS operations. 
These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate. This metric includes both successful and unsuccessful calls. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. GBL_NFS_CLIENT_CALL_CUM -------------------- The number of NFS calls the local machine has processed as an NFS client over the cumulative collection time. Calls are the system calls used to initiate physical NFS operations. These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate. This metric includes both successful and unsuccessful calls. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. GBL_NFS_CLIENT_CALL_RATE -------------------- The number of NFS calls the local machine has processed as an NFS client per second during the interval. Calls are the system calls used to initiate physical NFS operations. These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate. This metric includes both successful and unsuccessful calls. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. GBL_NFS_CLIENT_IO -------------------- The number of NFS IOs the local machine has completed as an NFS client during the interval. This number represents physical IOs sent by the client in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as an NFS client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_CLIENT_IO_CUM -------------------- The number of NFS IOs the local machine has completed as an NFS client over the cumulative collection time. This number represents physical IOs sent by the client in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as an NFS client. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_CLIENT_IO_PCT -------------------- The percentage of NFS IOs the local machine has completed as an NFS client versus total NFS IOs completed during the interval. This number represents physical IOs sent by the client in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as an NFS client.
A percentage greater than 50 indicates that this machine is acting more as a client. A percentage less than 50 indicates this machine is acting more as a server for others. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_CLIENT_IO_PCT_CUM -------------------- The percentage of NFS IOs the local machine has completed as an NFS client versus total NFS IOs completed over the cumulative collection time. This number represents physical IOs sent by the client in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as a NFS client. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. A percentage greater than 50 indicates that this machine is acting more as a client. A percentage less than 50 indicates this machine is acting more as a server for others. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_CLIENT_IO_RATE -------------------- The number of NFS IOs per second the local machine has completed as an NFS client during the interval. This number represents physical IOs sent by the client in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as a NFS client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_CLIENT_IO_RATE_CUM -------------------- The number of NFS IOs per second the local machine has completed as an NFS client over the cumulative collection time. This number represents physical IOs sent by the client in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as a NFS client. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_CLIENT_READ_RATE -------------------- The number of NFS “read” operations per second the system generated as an NFS client during the interval. NFS Version 2 read operations consist of getattr, lookup, readlink, readdir, null, root, statfs, and read. NFS Version 3 read operations consist of getattr, lookup, access, readlink, read, readdir, readdirplus, fsstat, fsinfo, and null. GBL_NFS_CLIENT_READ_RATE_CUM -------------------- The average number of NFS “read” operations per second the system generated as an NFS client over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. 
NFS Version 2 read operations consist of getattr, lookup, readlink, readdir, null, root, statfs, and read. NFS Version 3 read operations consist of getattr, lookup, access, readlink, read, readdir, readdirplus, fsstat, fsinfo, and null. GBL_NFS_CLIENT_WRITE_RATE -------------------- The number of NFS “write” operations per second the system generated as an NFS client during the interval. NFS Version 2 write operations consist of setattr, write, writecache, create, remove, rename, link, symlink, mkdir, and rmdir. NFS Version 3 write operations consist of setattr, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, pathconf, and commit. GBL_NFS_CLIENT_WRITE_RATE_CUM -------------------- The average number of NFS “write” operations per second the system generated as an NFS client over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. NFS Version 2 write operations consist of setattr, write, writecache, create, remove, rename, link, symlink, mkdir, and rmdir. NFS Version 3 write operations consist of setattr, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, pathconf, and commit. GBL_NFS_SERVER_BAD_CALL -------------------- The number of failed NFS server calls during the interval. Calls fail due to lack of system resources (lack of virtual memory) as well as network errors. GBL_NFS_SERVER_BAD_CALL_CUM -------------------- The number of failed NFS server calls over the cumulative collection time. Calls fail due to lack of system resources (lack of virtual memory) as well as network errors. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_NFS_SERVER_CALL -------------------- The number of NFS calls the local machine has processed as a NFS server during the interval. Calls are the system calls used to initiate physical NFS operations. These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate. This metric includes both successful and unsuccessful calls. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. GBL_NFS_SERVER_CALL_CUM -------------------- The number of NFS calls the local machine has processed as a NFS server over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. Calls are the system calls used to initiate physical NFS operations. These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate. This metric includes both successful and unsuccessful calls. 
NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. GBL_NFS_SERVER_CALL_RATE -------------------- The number of NFS calls the local machine has processed per second as an NFS server during the interval. Calls are the system calls used to initiate physical NFS operations. These calls are not always successful due to resource constraints or LAN errors, which means that the call rate could exceed the IO rate. This metric includes both successful and unsuccessful calls. NFS calls include create, remove, rename, link, symlink, mkdir, rmdir, statfs, getattr, setattr, lookup, read, readdir, readlink, write, writecache, null and root operations. GBL_NFS_SERVER_IO -------------------- The number of NFS IOs the local machine has completed as an NFS server during the interval. This number represents physical IOs received by the server in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as an NFS client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_SERVER_IO_CUM -------------------- The number of NFS IOs the local machine has completed as an NFS server over the cumulative collection time. This number represents physical IOs received by the server in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as an NFS client. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_SERVER_IO_PCT -------------------- The percentage of NFS IOs the local machine has completed as an NFS server versus total NFS IOs completed during the interval. This number represents physical IOs received by the server in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as an NFS client. A percentage greater than 50 indicates that this machine is acting more as a server for others. A percentage less than 50 indicates this machine is acting more as a client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_SERVER_IO_PCT_CUM -------------------- The percentage of NFS IOs the local machine has completed as an NFS server versus total NFS IOs completed over the cumulative collection time. This number represents physical IOs received by the server in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both an NFS server, and as an NFS client. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. A percentage greater than 50 indicates that this machine is acting more as a server for others.
A percentage less than 50 indicates this machine is acting more as a client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_SERVER_IO_RATE -------------------- The number of NFS IOs per second the local machine has completed as an NFS server during the interval. This number represents physical IOs received by the server in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both a NFS server, and as an NFS client. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_SERVER_IO_RATE_CUM -------------------- The number of NFS IOs per second the local machine has completed as an NFS server over the cumulative collection time. This number represents physical IOs received by the server in contrast to a call which is an attempt to initiate these operations. Each computer can operate as both a NFS server, and as an NFS client. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. NFS IOs include reads and writes from successful calls to getattr, setattr, lookup, read, readdir, readlink, write, and writecache. GBL_NFS_SERVER_READ_RATE -------------------- The number of NFS “read” operations per second the system processed as an NFS server during the interval. NFS Version 2 read operations consist of getattr, lookup, readlink, readdir, null, root, statfs, and read. NFS Version 3 read operations consist of getattr, lookup, access, readlink, read, readdir, readdirplus, fsstat, fsinfo, and null. GBL_NFS_SERVER_READ_RATE_CUM -------------------- The average number of NFS “read” operations per second the system processed as an NFS server over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. NFS Version 2 read operations consist of getattr, lookup, readlink, readdir, null, root, statfs, and read. NFS Version 3 read operations consist of getattr, lookup, access, readlink, read, readdir, readdirplus, fsstat, fsinfo, and null. GBL_NFS_SERVER_WRITE_RATE -------------------- The number of NFS “write” operations per second the system processed as an NFS server during the interval. NFS Version 2 write operations consist of setattr, write, writecache, create, remove, rename, link, symlink, mkdir, and rmdir. NFS Version 3 write operations consist of setattr, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, pathconf, and commit. GBL_NFS_SERVER_WRITE_RATE_CUM -------------------- The average number of NFS “write” operations per second the system processed as an NFS server over the cumulative collection time. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. NFS Version 2 write operations consist of setattr, write, writecache, create, remove, rename, link, symlink, mkdir, and rmdir. NFS Version 3 write operations consist of setattr, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, pathconf, and commit. GBL_NODENAME -------------------- On Unix systems, this is the name of the computer as returned by the command “uname -n” (that is, the string returned from the “hostname” program). On Windows, this is the name of the computer as returned by GetComputerName. GBL_NUM_APP -------------------- The number of applications defined in the parm file plus one (for “other”). The application called “other” captures all other processes not defined in the parm file. You can define up to 128 applications. GBL_NUM_CPU -------------------- The number of CPUs physically on the system. This includes all CPUs, either online or offline. For HP-UX and certain versions of Linux, the sar(1M) command allows you to check the status of the system CPUs. For SUN and DEC, the commands psrinfo(1M) and psradm(1M) allow you to check or change the status of the system CPUs. GBL_NUM_DISK -------------------- The number of disks on the system. Only local disk devices are counted in this metric. On HP-UX, this is a count of the number of disks on the system that have ever had activity over the cumulative collection time. GBL_NUM_LV -------------------- The number of configured logical volumes. GBL_NUM_NETWORK -------------------- The number of network interfaces on the system. This includes the loopback interface. On certain platforms, this also includes FDDI, Hyperfabric, ATM, Serial Software interfaces such as SLIP or PPP, and Wide Area Network interfaces (WAN) such as ISDN or X.25. The “netstat -i” command also displays the list of network interfaces on the system. GBL_NUM_SWAP -------------------- The number of configured swap areas. GBL_NUM_TT -------------------- The number of unique Transaction Tracker (TT) transactions that have been registered on this system. GBL_NUM_USER -------------------- The number of users logged in at the time of the interval sample. This is the same as the command “who | wc -l”. For Unix systems, the information for this metric comes from the utmp file which is updated by the login command. For more information, read the man page for utmp. Some applications may create users on the system without using login or updating the utmp file. These users are not reflected in this count. This metric can be a general indicator of system usage. In a networked environment, however, users may maintain inactive logins on several systems. On Windows, the information for this metric comes from the Server Sessions counter in the Performance Libraries Server object. It is a count of the number of users using this machine as a file server. GBL_NUM_VG -------------------- The number of available volume groups. GBL_OSKERNELTYPE -------------------- This indicates the word size of the current kernel on the system. Some hardware can load the 64-bit kernel or the 32-bit kernel. GBL_OSKERNELTYPE_INT -------------------- This indicates the word size of the current kernel on the system.
Some hardware can load the 64-bit kernel or the 32-bit kernel. GBL_OSNAME -------------------- A string representing the name of the operating system. On Unix systems, this is the same as the output from the “uname -s” command. GBL_OSRELEASE -------------------- The current release of the operating system. On most Unix systems, this is the same as the output from the “uname -r” command. On AIX, this is the actual patch level of the operating system. This is similar to what is returned by the command “lslpp -l bos.rte” as the most recent level of the COMMITTED Base OS Runtime. For example, “5.2.0”. GBL_OSVERSION -------------------- A string representing the version of the operating system. This is the same as the output from the “uname -v” command. This string is limited to 20 characters, and as a result, the complete version name might be truncated. On Windows, this is a string representing the service pack installed on the operating system. GBL_PROC_RUN_TIME -------------------- The average run time, in seconds, for processes that terminated during the interval. GBL_PROC_SAMPLE -------------------- The number of process data samples that have been averaged into global metrics (such as GBL_ACTIVE_PROC) that are based on process samples. GBL_RENICE_PRI_LIMIT -------------------- User priorities range from -x to +x where the value of x is configurable. This is the configured value x. This defines the range of possible values for altering the priority of processes in the time-sharing class. GBL_RUN_QUEUE -------------------- On Unix systems, the value shown is the 1-minute load average for all processors. On HP-UX, the load average is the average number of processes waiting for CPU per processor, whereas on other Unix systems, the load average is the total number of runnable and running threads summed over all processors during the interval. In other words, for non HP-UX systems, this metric correlates to the number of threads executing on and waiting for any processor. On Windows, this is approximately the average Processor Queue Length during the interval. On Unix systems, GBL_RUN_QUEUE will typically be a small number. Larger than normal values for this metric indicate CPU contention among processes. This CPU bottleneck is also normally indicated by 100 percent GBL_CPU_TOTAL_UTIL. It may be OK to have GBL_CPU_TOTAL_UTIL be 100 percent if no other processes are waiting for the CPU. However, if GBL_CPU_TOTAL_UTIL is 100 percent and GBL_RUN_QUEUE is greater than the number of processors, it indicates a CPU bottleneck. On Windows, the Processor Queue reflects a count of process threads which are ready to execute. A thread is ready to execute (in the Ready state) when the only resource it is waiting on is the processor. The Windows operating system itself has many system threads which intermittently use small amounts of processor time. Several low priority threads intermittently wake up and execute for very short intervals. Depending on when the collection process samples this queue, there may be none or several of these low-priority threads trying to execute. Therefore, even on an otherwise quiescent system, the Processor Queue Length can be high. High values for this metric during intervals where the overall CPU utilization (GBL_CPU_TOTAL_UTIL) is low do not indicate a performance bottleneck. Relatively high values for this metric during intervals where the overall CPU utilization is near 100% can indicate a CPU performance bottleneck.
HP-UX RUN/PRI/CPU Queue differences for multi-cpu systems: For example, let's assume we're using a system with eight processors. We start eight CPU intensive processes that consume almost all of the CPU resources. The approximate values shown for the CPU related queue metrics would be: GBL_RUN_QUEUE = 1.0 GBL_PRI_QUEUE = 0.1 GBL_CPU_QUEUE = 1.0 Assume we start an additional eight CPU intensive processes. The approximate values now shown are: GBL_RUN_QUEUE = 2.0 GBL_PRI_QUEUE = 8.0 GBL_CPU_QUEUE = 16.0 At this point, we have sixteen CPU intensive processes running on the eight processors. Keeping the definitions of the three queue metrics in mind, the run queue is 2 (that is, 16 / 8); the pri queue is 8 (only half of the processes can be active at any given time); and the cpu queue is 16 (half of the processes waiting in the cpu queue that are ready to run, plus one for each active process). This illustrates that the run queue is the average of the 1- minute load averages for all processors; the pri queue is the number of processes or kernel threads that are blocked on “PRI” (priority); and the cpu queue is the number of processes or kernel threads in the cpu queue that are ready to run, including the processes or kernel threads using the CPU. GBL_RUN_QUEUE_CUM -------------------- On the non HP-UX systems, this is the average number of “runnable” processes over the cumulative collection time. On HP-UX, this is the average number of “runnable” processes or kernel threads over all processors over the cumulative collection time. The value shown for the run queue represents the average of the 1-minute load averages for all processors. On Windows, this is approximately the average Processor Queue Length over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. In this case, this metric is a cumulative average of data that was collected as an average. This metric is derived from GBL_RUN_QUEUE. HP-UX RUN/PRI/CPU Queue differences for multi-cpu systems: For example, let's assume we're using a system with eight processors. We start eight CPU intensive processes that consume almost all of the CPU resources. The approximate values shown for the CPU related queue metrics would be: GBL_RUN_QUEUE = 1.0 GBL_PRI_QUEUE = 0.1 GBL_CPU_QUEUE = 1.0 Assume we start an additional eight CPU intensive processes. The approximate values now shown are: GBL_RUN_QUEUE = 2.0 GBL_PRI_QUEUE = 8.0 GBL_CPU_QUEUE = 16.0 At this point, we have sixteen CPU intensive processes running on the eight processors. Keeping the definitions of the three queue metrics in mind, the run queue is 2 (that is, 16 / 8); the pri queue is 8 (only half of the processes can be active at any given time); and the cpu queue is 16 (half of the processes waiting in the cpu queue that are ready to run, plus one for each active process). This illustrates that the run queue is the average of the 1- minute load averages for all processors; the pri queue is the number of processes or kernel threads that are blocked on “PRI” (priority); and the cpu queue is the number of processes or kernel threads in the cpu queue that are ready to run, including the processes or kernel threads using the CPU. 
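NOTE: As a quick cross-check of the sixteen-process example above, the queue arithmetic can be restated in a few lines of Python. This is an illustration only; the variable names are ours, it is not a GlancePlus interface, and it simply restates the worked example in the text.

    # Sixteen CPU-intensive processes on eight processors, as in the
    # HP-UX example above.
    processes = 16          # CPU-intensive processes started
    processors = 8          # processors in the system

    run_queue = processes / processors   # 16 / 8 = 2.0 (average per-processor load)
    pri_queue = processes - processors   # 8 processes blocked on "PRI" (priority)
    cpu_queue = pri_queue + processors   # 8 waiting + 8 active = 16

    print(run_queue, pri_queue, cpu_queue)   # -> 2.0 8 16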
GBL_RUN_QUEUE_HIGH -------------------- On the non HP-UX systems, this is the highest value of the load average during any interval over the cumulative collection time. On HP-UX, this is the highest average number of “runnable” processes or kernel threads over all processors during any interval over the cumulative collection time. The value shown for the run queue represents the average of the 1-minute load averages for all processors. GBL_SAMPLE -------------------- The number of data samples (intervals) that have occurred over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_SERIALNO -------------------- On HP-UX, this is the ID number of the computer as returned by the command “uname -i”. If this value is not available, an empty string is returned. On SUN, this is the ASCII representation of the hardware- specific serial number. This is printed in hexadecimal as presented by the “hostid” command when possible. If that is not possible, the decimal format is provided instead. On AIX, this is the machine ID number as returned by the command “uname -m”. This number has the form xxyyyyyymmss. For the RISC System/6000, “xx” position is always 00. The “yyyyyy” positions contain the unique ID number for the central processing unit (cpu). While “mm” represents the model number, and “ss” is the submodel number (always 00). On Linux, this is the ASCII representation of the hardware- specific serial number, as returned by the command “hostid”. GBL_STARTDATE -------------------- The date that the collector started. GBL_STARTED_PROC -------------------- The number of processes that started during the interval. GBL_STARTED_PROC_RATE -------------------- The number of processes that started per second during the interval. GBL_STARTTIME -------------------- The time of day that the collector started. GBL_STATDATE -------------------- The date at the end of the interval, based on local time. GBL_STATTIME -------------------- An ASCII string representing the time at the end of the interval, based on local time. GBL_SWAP_RESERVED_ONLY_UTIL -------------------- The percentage of available swap space reserved (for currently running programs), but not yet used. Swap space must be reserved (but not allocated) before virtual memory can be created. Swap space locations are actually assigned (used) when a page is actually written to disk. On HP-UX, when compared to the “swapinfo -mt” command results, this is calculated as: Util = ((USED: reserve) / (AVAIL: total)) * 100 On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SWAP_SPACE_AVAIL -------------------- The total amount of potential swap space, in MB. On HP-UX, this is the sum of the device swap areas enabled by the swapon command, the allocated size of any file system swap areas, and the allocated size of pseudo swap in memory if enabled. Note that this is potential swap space. This is the same as (AVAIL: total) as reported by the “swapinfo -mt” command. On SUN, this is the total amount of swap space available from the physical backing store devices (disks) plus the amount currently available from main memory. This is the same as (used + available) /1024, reported by the “swap -s” command. 
On Linux, this is same as (Swap: total) as reported by the “free -m” command. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SWAP_SPACE_AVAIL_KB -------------------- The total amount of potential swap space, in KB. On HP-UX, this is the sum of the device swap areas enabled by the swapon command, the allocated size of any file system swap areas, and the allocated size of pseudo swap in memory if enabled. Note that this is potential swap space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be usable. For example, on a 61MB disk using 2 MB swap size allocations, 1 MB remains unusable and is considered wasted space. On HP-UX, this is the same as (AVAIL: total) as reported by the “swapinfo -t” command. On SUN, this is the total amount of swap space available from the physical backing store devices (disks) plus the amount currently available from main memory. This is the same as (used + available)/1024, reported by the “swap -s” command. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SWAP_SPACE_DEVICE_AVAIL -------------------- The amount of swap space available on disk devices configured exclusively as swap space (in MB). On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SWAP_SPACE_DEVICE_UTIL -------------------- On HP-UX, this is the percentage of device swap space currently in use of the total swap space available. This does not include file system or remote swap space. On HP-UX, note that available swap is only potential swap space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be usable. For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB remains unusable and is considered wasted space. Consequently, 100 percent utilization on a single device is not always obtainable. The wasted swap space, and the remainder of allocated SWCHUNKs that have not been used is what is reported in the hold field of the /usr/sbin/swapinfo command. On HP-UX, when compared to the “swapinfo -mt” command results, this is calculated as: Util = ((USED: dev) sum / (AVAIL: total)) * 100 On SUN, this is the percentage of total system device swap space currently in use. This metric only gives the percentage of swap space used from the available physical swap device space, and does not include the memory that can be used for swap. (On SunOS 5.X, the virtual swap swapfs can allocate swap space from memory.) On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SWAP_SPACE_MEM_AVAIL -------------------- The amount of physical memory available for pseudo swap (in MB). On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SWAP_SPACE_MEM_UTIL -------------------- The percent of physical memory available for pseudo swap currently allocated to running processes. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SWAP_SPACE_RESERVED -------------------- The amount of swap space (in MB) reserved for the swapping and paging of programs currently executing. Process pages swapped include data (heap and stack pages), bss (data uninitialized at the beginning of process execution), and the process user area (uarea). Shared memory regions also require the reservation of swap space. 
Swap space is reserved (by decrementing a counter) when virtual memory for a program is created, but swap is only used when a page or swap to disk is actually done or the page is locked in memory if swapping to memory is enabled. Virtual memory cannot be created if swap space cannot be reserved. On HP-UX, this is the same as (USED: total) as reported by the “swapinfo -mt” command. On SUN, this is the same as used/1024, reported by the “swap -s” command. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SWAP_SPACE_RESERVED_UTIL -------------------- This is the percentage of available swap space currently reserved for running processes. Reserved utilization = (amount of swap space reserved / amount of swap space available) * 100 On HP-UX, swap space must be reserved (but not allocated) before virtual memory can be created. If all of available swap is reserved, then no new processes or virtual memory can be created. Swap space locations are actually assigned (used) when a page is actually written to disk. On HP-UX, note that available swap is only potential swap space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be usable. For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB remains unusable and is considered wasted space. Consequently, 100 percent utilization on a single device is not always obtainable. When compared to the “swapinfo -mt” command results, this is calculated as: Util = ((USED: total) / (AVAIL: total)) * 100 On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SWAP_SPACE_USED -------------------- The amount of swap space used, in MB. On HP-UX, “Used” indicates written to disk (or locked in memory), rather than reserved. This is the same as (USED: total - reserve) as reported by the “swapinfo -mt” command. On SUN, “Used” indicates amount written to disk (or locked in memory), rather than reserved. Swap space is reserved (by decrementing a counter) when virtual memory for a program is created. This is the same as (bytes allocated)/1024, reported by the “swap -s” command. On Linux, this is the same as (Swap: used) as reported by the “free -m” command. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SWAP_SPACE_USED_UTIL -------------------- The percentage of available swap space that has actually been used. On HP-UX, “Used” indicates written to disk (or locked in memory), rather than reserved, and corresponds to (USED: total - reserve) as reported by the “swapinfo -mt” command. On SUN, “Used” indicates amount written to disk (or locked in memory), rather than reserved. Swap space is reserved (by decrementing a counter) when virtual memory for a program is created. This corresponds to (bytes allocated)/1024, reported by the “swap -s” command. On SUN, global swap space is tracked through the operating system. Device swap space is tracked through the devices. For this reason, the amount of swap space used may differ between the global and by-device metrics. Sometimes pages that are marked to be swapped to disk by the operating system are never swapped. The operating system records this as used swap space, but the devices do not, since no physical IOs occur. (Metrics with the prefix “GBL” are global and metrics with the prefix “BYSWP” are by device.) On Linux, this corresponds to (Swap: used) as reported by the “free -m” command.
On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SWAP_SPACE_UTIL -------------------- The percent of available swap space that was being used by running processes in the interval. On Windows, this is the percentage of virtual memory, which is available to user processes, that is in use at the end of the interval. It is not an average over the entire interval. It reflects the ratio of committed memory to the current commit limit. The limit may be increased by the operating system if the paging file is extended. This is the same as (Committed Bytes / Commit Limit) * 100 when comparing the results to Performance Monitor. On HP-UX, swap space must be reserved (but not allocated) before virtual memory can be created. If all of available swap is reserved, then no new processes or virtual memory can be created. Swap space locations are actually assigned (used) when a page is actually written to disk or locked in memory (pseudo swap in memory). This is the same as (PCT USED: total) as reported by the “swapinfo -mt” command. On Unix systems, this metric is a measure of capacity rather than performance. As this metric nears 100 percent, processes are not able to allocate any more memory and new processes may not be able to run. Very low swap utilization values may indicate that too much area has been allocated to swap, and better use of disk space could be made by reallocating some swap partitions to be user filesystems. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SWAP_SPACE_UTIL_CUM -------------------- The average percentage of available swap space currently in use (has memory belonging to processes paged or swapped out on it) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, note that available swap is only potential swap space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be usable. For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB remains unusable and is considered wasted space. Consequently, 100 percent utilization on a single device is not always obtainable. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SWAP_SPACE_UTIL_HIGH -------------------- The highest average percentage of available swap space currently in use (has memory belonging to processes paged or swapped out on it) in any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, note that available swap is only potential swap space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this space may actually be usable. For example, on a 61 MB disk using 2 MB swap size allocations, 1 MB remains unusable and is considered wasted space. Consequently, 100 percent utilization on a single device is not always obtainable. 
On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. GBL_SYSCALL -------------------- The number of system calls during the interval. High system call rates are normal on busy systems, especially with IO intensive applications. Abnormally high system call rates may indicate problems such as a “hung” terminal that is stuck in a loop generating read system calls. GBL_SYSCALL_BYTE_RATE -------------------- The number of KBs transferred per second via read and write system calls during the interval. This includes reads and writes to all devices including disks, terminals and tapes. GBL_SYSCALL_RATE -------------------- The average number of system calls per second during the interval. High system call rates are normal on busy systems, especially with IO intensive applications. Abnormally high system call rates may indicate problems such as a “hung” terminal that is stuck in a loop generating read system calls. On HP-UX, system call rates affect the overhead of the midaemon. Due to the system call instrumentation on HP-UX, the fork and vfork system calls are double counted. In the case of fork and vfork, one process starts the system call, but two processes exit. HP-UX lightweight system calls, such as umask, do not show up in the GlancePlus System Calls display, but will get added to the global system call rates. If a process is being traced (debugged) using standard debugging tools (such as adb or xdb), all system calls used by that process will show up in the System Calls display while being traced. On HP-UX, compare this metric to GBL_DISK_LOGL_IO_RATE to see if high system call rates correspond to high disk IO. GBL_CPU_SYSCALL_UTIL shows the CPU utilization due to processing system calls. GBL_SYSCALL_RATE_CUM -------------------- The average number of system calls per second over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. Due to the system call instrumentation on HP-UX, the fork and vfork system calls are double counted. In the case of fork and vfork, one process starts the system call, but two processes exit. HP-UX lightweight system calls, such as umask, do not show up in the GlancePlus System Calls display, but will get added to the global system call rates. If a process is being traced (debugged) using standard debugging tools (such as adb or xdb), all system calls used by that process will show up in the System Calls display while being traced. GBL_SYSCALL_RATE_HIGH -------------------- The highest number of system calls per second during any interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. Due to the system call instrumentation on HP-UX, the fork and vfork system calls are double counted. In the case of fork and vfork, one process starts the system call, but two processes exit.
HP-UX lightweight system calls, such as umask, do not show up in the GlancePlus System Calls display, but will get added to the global system call rates. If a process is being traced (debugged) using standard debugging tools (such as adb or xdb), all system calls used by that process will show up in the System Calls display while being traced. GBL_SYSCALL_READ -------------------- The number of read system calls made during the interval. This includes reads to all devices including disks, terminals and tapes. GBL_SYSCALL_READ_BYTE -------------------- The number of KBs transferred through read system calls during the interval. This includes reads to all devices including disks, terminals and tapes. GBL_SYSCALL_READ_BYTE_CUM -------------------- The number of KBs transferred through read system calls over the cumulative collection time. This includes reads to all devices including disks, terminals and tapes. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_SYSCALL_READ_BYTE_RATE -------------------- The number of KBs transferred per second via read system calls during the interval. This includes reads to all devices including disks, terminals and tapes. GBL_SYSCALL_READ_CUM -------------------- The total number of read system calls made over the cumulative collection time. This includes reads to all devices including disks, terminals and tapes. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_SYSCALL_READ_PCT -------------------- The percentage of read system calls of the total system read and write system calls during the interval. GBL_SYSCALL_READ_PCT_CUM -------------------- The percentage of read system calls of the total system read and write system calls over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_SYSCALL_READ_RATE -------------------- The average number of read system calls per second made during the interval. This includes reads to all devices including disks, terminals and tapes. This is the same as “sread/s” reported by the sar - c command. GBL_SYSCALL_READ_RATE_CUM -------------------- The average number of read system calls per second made over the cumulative collection time. This includes reads to all devices including disks, terminals, and tapes. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_SYSCALL_WRITE -------------------- The number of write system calls made during the interval. 
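NOTE: The read and write system call metrics in this group are simple ratios and rates over the interval. The Python sketch below shows how values such as GBL_SYSCALL_READ_PCT, GBL_SYSCALL_READ_RATE, GBL_SYSCALL_READ_BYTE_RATE and their write counterparts relate to raw interval counters. The counter names are hypothetical placeholders; this is not how the collector itself derives the metrics.

    # Hypothetical interval counters; names are illustrative only.
    def syscall_summary(reads, writes, read_kb, write_kb, interval_sec):
        total = reads + writes
        return {
            "READ_PCT": 100.0 * reads / total if total else 0.0,
            "WRITE_PCT": 100.0 * writes / total if total else 0.0,
            "READ_RATE": reads / interval_sec,            # read calls per second
            "WRITE_RATE": writes / interval_sec,          # write calls per second
            "READ_BYTE_RATE": read_kb / interval_sec,     # KB read per second
            "WRITE_BYTE_RATE": write_kb / interval_sec,   # KB written per second
        }

    # 3000 reads and 1000 writes moving 6000 KB and 2000 KB in a 5-second
    # interval -> 75 percent reads, 600 reads/sec, 1200 KB/sec read.
    print(syscall_summary(3000, 1000, 6000, 2000, 5))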
GBL_SYSCALL_WRITE_BYTE -------------------- The number of KBs transferred via write system calls during the interval. This includes writes to all devices including disks, terminals and tapes. GBL_SYSCALL_WRITE_BYTE_CUM -------------------- The number of KBs transferred via write system calls over the cumulative collection time. This includes writes to all devices including disks, terminals and tapes. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_SYSCALL_WRITE_BYTE_RATE -------------------- The number of KBs per second transferred via write system calls during the interval. This includes writes to all devices including disks, terminals and tapes. GBL_SYSCALL_WRITE_CUM -------------------- The total number of write system calls made over the cumulative collection time. This includes writes to all devices including disks, terminals and tapes. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_SYSCALL_WRITE_PCT -------------------- The percentage of write system calls of the total system read and write system calls during the interval. GBL_SYSCALL_WRITE_PCT_CUM -------------------- The percentage of write system calls of the total read and write system calls over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_SYSCALL_WRITE_RATE -------------------- The average number of write system calls per second made during the interval. This includes writes to all devices including disks, terminals and tapes. GBL_SYSCALL_WRITE_RATE_CUM -------------------- The average number of write system calls per second made over the cumulative collection time. This includes writes to all devices including disks, terminals, and tapes. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GBL_SYSTEM_ID -------------------- The network node hostname of the system. This is the same as the output from the “uname -n” command. On Windows, the name obtained from GetComputerName. GBL_SYSTEM_TYPE -------------------- On Unix systems, this is either the model of the system or the instruction set architecture of the system. On Windows, this is the processor architecture of the system. GBL_SYSTEM_UPTIME_HOURS -------------------- The time, in hours, since the last system reboot. GBL_SYSTEM_UPTIME_SECONDS -------------------- The time, in seconds, since the last system reboot. 
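NOTE: The identity metrics above (GBL_SYSTEM_ID, together with GBL_NODENAME, GBL_OSNAME, GBL_OSRELEASE and GBL_OSVERSION earlier in this section) map directly onto the uname family of values. The Python sketch below is illustrative only and does not read GlancePlus data; it simply shows the correspondence on a Unix system.

    import os

    u = os.uname()
    print(u.nodename)   # "uname -n" -> GBL_SYSTEM_ID / GBL_NODENAME
    print(u.sysname)    # "uname -s" -> GBL_OSNAME
    print(u.release)    # "uname -r" -> GBL_OSRELEASE
    print(u.version)    # "uname -v" -> GBL_OSVERSION (may be truncated to 20 characters)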
GBL_TT_OVERFLOW_COUNT -------------------- The number of new transactions that could not be measured because the Measurement Processing Daemon's (midaemon) Measurement Performance Database is full. If this happens, the default Measurement Performance Database size is not large enough to hold all of the registered transactions on this system. This can be remedied by stopping and restarting the midaemon process using the -smdvss option to specify a larger Measurement Performance Database size. The current Measurement Performance Database size can be checked using the midaemon -sizes option. LV_AVG_READ_SERVICE_TIME -------------------- The average time, in milliseconds, that this logical volume spent processing each read request during the interval. For example, a value of 5.14 would indicate that read requests during the last interval took on average slightly longer than five one-thousandths of a second to complete for this device. This metric can be used to help determine which logical volumes are taking more time than usual to process requests. DiskSuite metadevices are not supported. This metric is reported as “na” for volume groups since it is not applicable. LV_AVG_WRITE_SERVICE_TIME -------------------- The average time, in milliseconds, that this logical volume spent processing each write request during the interval. For example, a value of 5.14 would indicate that write requests during the last interval took on average slightly longer than five one-thousandths of a second to complete for this device. This metric can be used to help determine which logical volumes are taking more time than usual to process requests. DiskSuite metadevices are not supported. This metric is reported as “na” for volume groups since it is not applicable. LV_DEVNO -------------------- Major / Minor number of this logical volume. Volume groups in the Veritas LVM do not have device files, so for this entry, “na” is shown for the major/minor numbers. LV_DIRNAME -------------------- The absolute path name of this logical volume, volume group, or DiskSuite metadevice name. For example: Volume group: /dev/vx/dsk/ Logical volume: /dev/vx/dsk// Disk Suite: /dev/md/dsk/ LV_GROUP_NAME -------------------- On HP-UX, this is the name of this volume/disk group associated with a logical volume. On SUN and AIX, this is the name of this volume group associated with a logical volume. On SUN, this metric is applicable only for the Veritas LVM. On HP-UX 11i and beyond, data is available from VERITAS Volume Manager (VxVM). LVM (Logical Volume Manager) uses the terminology “volume group” to describe a set of related volumes. VERITAS Volume Manager uses the terminology “disk group” to describe a collection of VM disks. For additional information on VERITAS Volume Manager, see vxintro(1M). LV_INTERVAL -------------------- The amount of time in the interval. LV_INTERVAL_CUM -------------------- The amount of time over the cumulative collection time, or since the last configuration change. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. LV_LOGLP_LV -------------------- On SUN, this is the total number of plexes configured for this logical volume. This metric is reported as “na” for volume groups since it is not applicable.
On AIX, this is the total number of logical partitions configured for this logical volume. LV_OPEN_LV -------------------- The number of logical volumes currently opened in this volume group (or disk group, if HP-UX). An entry of “na” indicates that there are no logical volumes open in this volume group and there are no active disks in this volume group. On HP-UX, the extra entry (referred to as the “/dev/vgXX/group” entry), shows the internal resources used by the LVM software to manage the logical volumes. On HP-UX 11i and beyond, data is available from VERITAS Volume Manager (VxVM). LVM (Logical Volume Manager) uses the terminology “volume group” to describe a set of related volumes. VERITAS Volume Manager uses the terminology “disk group” to describe a collection of VM disks. For additional information on VERITAS Volume Manager, see vxintro(1M). On SUN, this metric is reported as “na” for logical volumes and metadevices since it is not applicable. LV_PHYSLV_SIZE -------------------- On SUN, this is the physical size in MBs of this logical volume or metadevice. This metric is reported as “na” for volume groups since it is not applicable. On AIX, this is the physical size in MBs of this logical volume. LV_READ_BYTE_RATE -------------------- The number of physical KBs per second read from this logical volume during the interval. Note that bytes read from the buffer cache are not included in this calculation. DiskSuite metadevices are not supported. This metric is reported as “na” for volume groups since it is not applicable. LV_READ_BYTE_RATE_CUM -------------------- The average number of physical KBs per second read from this logical volume over the cumulative collection time, or since the last configuration change. Note that bytes read from the buffer cache are not included in this calculation. On SUN, DiskSuite metadevices are not supported. This metric is reported as “na” for volume groups since it is not applicable. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. LV_READ_RATE -------------------- The number of physical reads per second for this logical volume during the interval. This may not correspond to the physical read rate from a particular disk drive since a logical volume may be composed of many disk drives or it may be a subset of a disk drive. An individual physical read from one logical volume may span multiple individual disk drives. Since this is a physical read rate, there may not be any correspondence to the logical read rate since many small reads are satisfied in the buffer cache, and large logical read requests must be broken up into physical read requests. DiskSuite metadevices are not supported. This metric is reported as “na” for volume groups since it is not applicable. LV_READ_RATE_CUM -------------------- The average number of physical reads per second for this volume over the cumulative collection time, or since the last configuration change. DiskSuite metadevices are not supported. This metric is reported as “na” for volume groups since it is not applicable. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. LV_SPACE_UTIL -------------------- Percentage of the logical volume file system space in use during the interval. A value of “na” is displayed for volume groups and logical volumes which have no mounted filesystem. LV_STATE_LV -------------------- On SUN, this is the kernel state of this volume. Enabled means the volume block device can be used. Detached means the volume block device cannot be used, but ioctl's will still be accepted and the plex block devices will still accept reads and writes. Disabled means that the volume or its plexes cannot be used for any operations. DiskSuite metadevices are not supported. This metric is reported as “na” for volume groups since it is not applicable. On AIX, this is the state of this logical volume in the volume group. The normal state of a logical volume should be “open/syncd”, which means that the logical volume is open and clean. LV_TYPE -------------------- Either “G” or “V”, indicating either a volume/disk group (“G”) or a logical volume (“V”). On SUN, it can also be a DiskSuite metadevice (“S”). On HP-UX 11i and beyond, data is available from VERITAS Volume Manager (VxVM). LVM (Logical Volume Manager) uses the terminology “volume group” to describe a set of related volumes. VERITAS Volume Manager uses the terminology “disk group” to describe a collection of VM disks. For additional information on VERITAS Volume Manager, see vxintro(1M). LV_TYPE_LV -------------------- This metric is only applicable for DiskSuite metadevices and it can be one of the following: * TRANS * RAID * MIRROR * CONCAT/STRIPE TRANS A metadevice called the trans device manages the UFS log. The trans normally has 2 metadevices: MASTER DEVICE, contains the file system that is being logged. Can be used as a block device (up to 2 Gbytes) or a raw device (up to 1 Tbyte). LOGGING DEVICE, contains the log and can be shared by several file systems. The log is a sequence of records, each of which describes a change to a file system. RAID Redundant Array of Inexpensive Disks. A scheme for classifying data distribution and redundancy. MIRROR For high data availability, DiskSuite can write data in metadevices to other metadevices. A mirror is a metadevice made of one or more concatenations or striped metadevices. Concatenation is the combining of two or more physical components into a single metadevice by treating slices (partitions) as a logical device. STRIPE (or Striping) For increased performance, you can create striped metadevices (or “stripes”). Striping is creating a single metadevice by interlacing data on slices across disks. After a striped metadevice is created, read/write requests are spread to multiple disk controllers, increasing performance. LV_WRITE_BYTE_RATE -------------------- The number of KBs per second written to this logical volume during the interval. DiskSuite metadevices are not supported. This metric is reported as “na” for volume groups since it is not applicable. LV_WRITE_BYTE_RATE_CUM -------------------- The average number of KBs per second written to this logical volume over the cumulative collection time, or since the last configuration change.
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. DiskSuite metadevices are not supported. This metric is reported as “na” for volume groups since it is not applicable. LV_WRITE_RATE -------------------- The number of physical writes per second to this logical volume during the interval. This may not correspond to the physical write rate to a particular disk drive since a logical volume may be composed of many disk drives or it may be a subset of a disk drive. Since this is a physical write rate, there may not be any correspondence to the logical write rate since many small writes are combined in the buffer cache, and many large logical writes must be broken up. DiskSuite metadevices are not supported. This metric is reported as “na” for volume groups since it is not applicable. LV_WRITE_RATE_CUM -------------------- The average number of physical writes per second to this volume over the cumulative collection time, or since the last configuration change. DiskSuite metadevices are not supported. This metric is reported as “na” for volume groups since it is not applicable. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. PROC_ACTIVE_PROC -------------------- The number of active processes (or kernel threads, if HP-UX) in the interval. A process (or kernel thread, if HP-UX) is active if it is alive and consumes CPU time. PROC_APP_ID -------------------- The ID number of the application to which the process (or kernel thread, if HP-UX) belonged during the interval. Application “other” always has an ID of 1. There can be up to 128 user-defined applications, which are defined in the parm file. PROC_APP_NAME -------------------- The application name of a process (or kernel thread, if HP-UX). Processes (or kernel threads, if HP-UX) are assigned into application groups based upon rules in the parm file. If a process does not fit any rules in this file, it is assigned to the application “other.” The rules include decisions based upon pathname, user ID, priority, and so forth. As these values change during the life of a process (or kernel thread, if HP-UX), it is re-assigned to another application. This re-evaluation is done every measurement interval. PROC_CPU_SYS_MODE_TIME -------------------- The CPU time in system mode in the context of the process (or kernel thread, if HP-UX) during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. 
Alive kernel threads and kernel threads that have died during the interval are included in the summation. PROC_CPU_SYS_MODE_TIME_CUM -------------------- The CPU time in system mode in the context of the process (or kernel thread, if HP-UX) over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. PROC_CPU_SYS_MODE_UTIL -------------------- The percentage of time that the CPU was in system mode in the context of the process (or kernel thread, if HP-UX) during the interval. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. High system mode CPU utilizations are normal for IO intensive programs. Abnormally high system CPU utilization can indicate that a hardware problem is causing a high interrupt rate. It can also indicate programs that are not using system calls efficiently. A classic “hung shell” shows up with very high system mode CPU because it gets stuck in a loop doing terminal reads (a system call) to a device that never responds. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor HP-UX systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. The maximum percentage is 100% times the number of CPUs online. PROC_CPU_SYS_MODE_UTIL_CUM -------------------- The average percentage of time that the CPU was in system mode in the context of the process (or kernel thread, if HP-UX) over the cumulative collection time. A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. 
When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor HP-UX systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. The maximum percentage is 100% times the number of CPUs online. PROC_CPU_TOTAL_TIME -------------------- The total CPU time, in seconds, consumed by a process (or kernel thread, if HP-UX) during the interval. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On HP-UX, the total CPU time is the sum of the CPU time components for a process or kernel thread, including system, user, context switch, interrupts processing, realtime, and nice utilization values. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor HP-UX systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. The maximum percentage is 100% times the number of CPUs online. PROC_CPU_TOTAL_TIME_CUM -------------------- The total CPU time consumed by a process (or kernel thread, if HP-UX) over the cumulative collection time. CPU time is in seconds unless otherwise specified. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. 
This is calculated as PROC_CPU_TOTAL_TIME_CUM = PROC_CPU_SYS_MODE_TIME_CUM + PROC_CPU_USER_MODE_TIME_CUM On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. PROC_CPU_TOTAL_UTIL -------------------- The total CPU time consumed by a process (or kernel thread, if HP-UX) as a percentage of the total CPU time available during the interval. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On HP-UX, the total CPU utilization is the sum of the CPU utilization components for a process or kernel thread, including system, user, context switch, interrupts processing, realtime, and nice utilization values. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor HP-UX systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. The maximum percentage is 100% times the number of CPUs online. PROC_CPU_TOTAL_UTIL_CUM -------------------- The total CPU time consumed by a process (or kernel thread, if HP-UX) as a percentage of the total CPU time available over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On HP-UX, the total CPU utilization is the sum of the CPU utilization components for a process or kernel thread, including system, user, context switch, interrupts processing, realtime, and nice utilization values. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. 
On multi-processor HP-UX systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. The maximum percentage is 100% times the number of CPUs online. PROC_CPU_USER_MODE_TIME -------------------- The time, in seconds, the process (or kernel threads, if HP-UX) was using the CPU in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. PROC_CPU_USER_MODE_TIME_CUM -------------------- The time, in seconds, the process (or kernel thread, if HP-UX) was using the CPU in user mode over the cumulative collection time. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. PROC_CPU_USER_MODE_UTIL -------------------- The percentage of time the process (or kernel thread, if HP-UX) was using the CPU in user mode during the interval. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor HP-UX systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. The maximum percentage is 100% times the number of CPUs online. 
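NOTE: As a made-up illustration of the multi-processor summation described above: on a 4-CPU HP-UX system, a process with two kernel threads that each consumed 0.8 seconds of user mode CPU during a 1-second interval would report PROC_CPU_USER_MODE_UTIL = 80% + 80% = 160%, while the global user mode utilization, which is averaged over the 4 processors, would only rise by 40% from that activity. The upper bound for the process metric is 100% times the number of CPUs online (400% in this example).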
PROC_CPU_USER_MODE_UTIL_CUM -------------------- The average percentage of time the process (or kernel thread, if HP-UX) was using the CPU in user mode over the cumulative collection time. User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. Unlike the global and application CPU metrics, process CPU is not averaged over the number of processors on systems with multiple CPUs. Single-threaded processes can use only one CPU at a time and never exceed 100% CPU utilization. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On multi-processor HP-UX systems, processes which have component kernel threads executing simultaneously on different processors could have resource utilization sums over 100%. The maximum percentage is 100% times the number of CPUs online. PROC_DISK_BLOCK_IO -------------------- The number of block IOs made by (or for) a process during the interval. On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, block IOs refer to data transferred between disk and the file system buffer cache in block size chunks. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_IO_CUM -------------------- The number of block IOs made by (or for) a process during its lifetime or over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache.
Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, block IOs refer to data transferred between disk and the file system buffer cache in block size chunks. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_IO_RATE -------------------- The number of block IOs per second made by (or for) a process during the interval. On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, block IOs refer to data transferred between disk and the file system buffer cache in block size chunks. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_IO_RATE_CUM -------------------- The average number of block IOs per second made by (or for) a process during its lifetime or over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Sun 5.X (Solaris 2.X or later), these are physical IOs generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). On AIX, block IOs refer to data transferred between disk and the file system buffer cache in block size chunks. Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_READ -------------------- The number of block reads made by a process during the interval. 
On Sun 5.X (Solaris 2.X or later), these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_READ_CUM -------------------- The number of block reads made by a process over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Sun 5.X (Solaris 2.X or later), these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_READ_RATE -------------------- The number of block reads per second made by (or for) a process during the interval. On Sun 5.X (Solaris 2.X or later), these are physical reads generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_WRITE -------------------- Number of block writes made by a process during the interval. Calls destined for NFS mounted files are not included. 
On Sun 5.X (Solaris 2.X or later), these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_WRITE_CUM -------------------- Number of block writes made by a process over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Sun 5.X (Solaris 2.X or later), these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_DISK_BLOCK_WRITE_RATE -------------------- The number of block writes per second made by (or for) a process during the interval. On Sun 5.X (Solaris 2.X or later), these are physical writes generated by file system access and do not include virtual memory IOs, or IOs relating to raw disk access. These are IOs for inode and superblock updates which are handled through the buffer cache. Because virtual memory IOs are not credited to the process, the block IOs tend to be much lower on SunOS 5.X than they are on SunOS 4.1.X systems. When a file is accessed on SunOS 5.X or later, it is memory mapped by the operating system. Accesses generate virtual memory IOs. Reading a file generates block IOs as the file's inode information is cached. File writes are a combination of posting to memory mapped allocations (VM IOs) and posting updated inode information to disk (block IOs). Note, when a file is accessed on AIX, it is memory mapped by the operating system, so accesses generate virtual memory IOs, not block IOs. PROC_EUID -------------------- The Effective User ID of a process. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_FILE_COUNT -------------------- The number of times this file is opened currently. 
Terminal devices are often opened more than once by several different processes. PROC_FILE_MODE -------------------- A text string summarizing the type of open mode: rd/wr Opened for input & output read Opened for input only write Opened for output only PROC_FILE_NAME -------------------- The path name or identifying information about the open file descriptor. If the path name string exceeds 40 characters in length, the beginning and the end of the path is shown and the middle of the name is replaced by “...”. An attempt is made to obtain the file path name by either searching the current cylinder group to find directory entries that point to the currently opened inode, or by searching the kernel name cache. Since looking up file path names would require high disk overhead, some names may not be resolved. If the path name cannot be resolved, a string is returned indicating the type and inode number of the file. For the string format including an inode number, you may use the ncheck(1M) program to display the file path name relative to the mount point. Sometimes files may be deleted before they are closed. In these cases, the process file table may still have the inode even though the file is not actually present and as a result, ncheck will fail. For example, if the displayed file information showed an unresolved string referring to inode 23, the following ncheck command could be entered from that display: ncheck -i 23 An output like the following would be generated: /dev/dsk/c0t0d0s6: 23 /status.perflbd The string for an unresolved inode is built from the file type (xxx), the file domain (yyy), and the inode number, where: xxx: Is the file type: blk - Block device chr - Character device dir - Directory file fifo - FIFO (pipes have a “fifo” label) lnk - Soft file link reg - Regular file yyy: Is the file domain. Some examples are ufs (Unix file system), nfs (NFS), proc (process file system) and tmpfs (memory based file system). In some cases the only information obtainable is the major and minor number of the file or device. In that case, a string containing just the major and minor numbers is displayed. When trying to identify files with this information, often the major number from this format will equal the minor number of a device file in the /devices/pseudo directory. For example, a string containing the number 105 probably refers to one of the following files: crw-rw-rw- 1 root sys 105, 2 Aug 26 13:13 tl@0:ticlts crw-rw-rw- 1 root sys 105, 0 Aug 26 13:13 tl@0:ticots crw-rw-rw- 1 root sys 105, 1 Aug 26 13:13 tl@0:ticotsord PROC_FILE_NUMBER -------------------- The file number of the current open file. PROC_FILE_OFFSET -------------------- The decimal value of the next access position of the current file at the end of the interval. If the open file is a tty, this is the total number of bytes sent and received since the file was first opened. PROC_FILE_OPEN -------------------- Number of files the current process has remaining open as of the end of the interval. PROC_FILE_TYPE -------------------- A text string describing the type of the current file. This is one of: block Block special device char Character device dir Directory fifo A pipe or named pipe file Simple file link Symbolic file link other An unknown file type PROC_FORCED_CSWITCH -------------------- The number of times that the process (or kernel thread, if HP-UX) was preempted by an external event and another process (or kernel thread, if HP-UX) was allowed to execute during the interval.
Examples of reasons for a forced switch include expiration of a time slice or returning from a system call with a higher priority process (or kernel thread, if HP-UX) ready to run. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. PROC_FORCED_CSWITCH_CUM -------------------- The number of times the process (or kernel thread, if HP-UX) was preempted by an external event and another process (or kernel thread, if HP-UX) was allowed to execute over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. Examples of reasons for a forced switch include expiration of a time slice or returning from a system call with a higher priority process (or kernel thread, if HP-UX) ready to run. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. PROC_GROUP_ID -------------------- On most systems, this is the real group ID number of the process. On AIX, this is the effective group ID number of the process. On HP-UX, this is the effective group ID number of the process if not in setgid mode. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_GROUP_NAME -------------------- The group name (from /etc/group) of a process. The group identifier is obtained from searching the /etc/passwd file using the user ID (uid) as a key. Therefore, if more than one account is listed in /etc/passwd with the same user ID (uid) field, the first one is used. If no entry can be found for the user ID in /etc/passwd, the group name is the uid number. If no matching entry in /etc/group can be found, the group ID is returned as the group name. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_INTERVAL -------------------- The amount of time in the interval. This is the same value for all processes (and kernel threads, if HP-UX), regardless of whether they were alive for the entire interval. Note, calculations such as utilizations or rates are calculated using this standardized process interval (PROC_INTERVAL), rather than the actual alive time during the interval (PROC_INTERVAL_ALIVE). 
Thus, if a process was only alive for 1 second and used the CPU during its entire life (1 second), but the process sample interval was 5 seconds, it would be reported as using 1/5 or 20% CPU utilization, rather than 100% CPU utilization. PROC_INTERVAL_ALIVE -------------------- The number of seconds that the process (or kernel thread, if HP-UX) was alive during the interval. This may be less than the time of the interval if the process (or kernel thread, if HP-UX) was new or died during the interval. PROC_INTERVAL_CUM -------------------- The amount of time over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On SUN, AIX, and OSF1, this differs from PROC_RUN_TIME in that PROC_RUN_TIME may not include all of the first and last sample interval times and PROC_INTERVAL_CUM does. PROC_IO_BYTE -------------------- On HP-UX, this is the total number of physical IO KBs (unless otherwise specified) that was used by this process or kernel thread, either directly or indirectly, during the interval. On all other systems, this is the total number of physical IO KBs (unless otherwise specified) that was used by this process during the interval. IOs include disk, terminal, tape and network IO. On HP-UX, indirect IOs include paging and deactivation/reactivation activity done by the kernel on behalf of the process or kernel thread. Direct IOs include disk, terminal, tape, and network IO, but exclude all NFS traffic. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On SUN, counts in the MB ranges in general can be attributed to disk accesses and counts in the KB ranges can be attributed to terminal IO. This is useful when looking for processes with heavy disk IO activity. This may vary depending on the sample interval length. PROC_IO_BYTE_CUM -------------------- On HP-UX, this is the total number of physical IO KBs (unless otherwise specified) that was used by this process or kernel thread, either directly or indirectly, over the cumulative collection time. On all other systems, this is the total number of physical IO KBs (unless otherwise specified) that was used by this process over the cumulative collection time. IOs include disk, terminal, tape and network IO. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, indirect IOs include paging and deactivation/reactivation activity done by the kernel on behalf of the process or kernel thread. Direct IOs include disk, terminal, tape, and network IO, but exclude all NFS traffic. 
On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. PROC_IO_BYTE_RATE -------------------- On HP-UX, this is the number of physical IO KBs per second that was used by this process or kernel thread, either directly or indirectly, during the interval. On all other systems, this is the number of physical IO KBs per second that was used by this process during the interval. IOs include disk, terminal, tape and network IO. On HP-UX, indirect IOs include paging and deactivation/reactivation activity done by the kernel on behalf of the process or kernel thread. Direct IOs include disk, terminal, tape, and network IO, but exclude all NFS traffic. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. On SUN, counts in the MB ranges in general can be attributed to disk accesses and counts in the KB ranges can be attributed to terminal IO. This is useful when looking for processes with heavy disk IO activity. This may vary depending on the sample interval length. Certain types of disk IOs are not counted by AIX at the process level, so they are excluded from this metric. PROC_IO_BYTE_RATE_CUM -------------------- On HP-UX, this is the average number of physical IO KBs per second that was used by this process or kernel thread, either directly or indirectly, over the cumulative collection time. On all other systems, this is the average number of physical IO KBs per second that was used by this process over the cumulative collection time. IOs include disk, terminal, tape and network IO. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, indirect IOs include paging and deactivation/reactivation activity done by the kernel on behalf of the process or kernel thread. Direct IOs include disk, terminal, tape, and network IO, but exclude all NFS traffic. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. 
On SUN, counts in the MB ranges in general can be attributed to disk accesses and counts in the KB ranges can be attributed to terminal IO. This is useful when looking for processes with heavy disk IO activity. This may vary depending on the sample interval length. PROC_MAJOR_FAULT -------------------- Number of major page faults for this process (or kernel thread, if HP-UX) during the interval. On HP-UX, major page faults and minor page faults are a subset of vfaults (virtual faults). Stack and heap accesses can cause vfaults, but do not result in a disk page having to be loaded into memory. PROC_MAJOR_FAULT_CUM -------------------- Number of major page faults for this process (or kernel thread, if HP-UX) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, major page faults and minor page faults are a subset of vfaults (virtual faults). Stack and heap accesses can cause vfaults, but do not result in a disk page having to be loaded into memory. PROC_MEM_DATA_VIRT -------------------- On SUN, this is the virtual set size (in KB) of the heap memory for this process. Note that heap can reside partially in BSS and partially in the data segment, so its value will not be the same as PROC_REGION_VIRT of the data segment or PROC_REGION_VIRT_DATA, which is the sum of all data segments for the process. On other non-HP-UX systems, this is the virtual set size (in KB) of the data segment for this process. A value of “na” is displayed when this information is unobtainable. On AIX, this is the same as the SIZE value reported by “ps v”. PROC_MEM_RES -------------------- The size (in KB) of resident memory allocated for the process. On HP-UX, the calculation of this metric differs depending on whether this process has used any CPU time since the midaemon process was started. This metric is less accurate and does not include shared memory regions in its calculation when the process has been idle since the midaemon was started. On HP-UX, for processes that use CPU time subsequent to midaemon startup, the resident memory is calculated as RSS = sum of private region pages + (sum of shared region pages / number of references) The number of references is a count of the number of attachments to the memory region. Attachments, for shared regions, may come from several processes sharing the same memory, a single process with multiple attachments, or combinations of these. This value is only updated when a process uses CPU. Thus, under memory pressure, this value may be higher than the actual amount of resident memory for processes which are idle because their memory pages may no longer be resident or the reference count for shared segments may have changed. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. A value of “na” is displayed when this information is unobtainable. This information may not be obtainable for some system (kernel) processes. It may also not be available for defunct processes. On AIX, this is the same as the RSS value shown by “ps v”. On Windows, this is the number of KBs in the working set of this process. The working set includes the memory pages touched recently by the threads of the process.
If free memory in the system is above a threshold, then pages are left in the working set even if they are not in use. When free memory falls below a threshold, pages are trimmed from the working set, but not necessarily paged out to disk from memory. If those pages are subsequently referenced, they will be page faulted back into the working set. Therefore, the working set is a general indicator of the memory resident set size of this process, but it will vary depending on the overall status of memory on the system. Note that the size of the working set is often larger than the amount of pagefile space consumed (PROC_MEM_VIRT). PROC_MEM_RES_HIGH -------------------- The largest value of resident memory (in KB) during its lifetime. See the description for PROC_MEM_RES for details about how resident memory is determined. A value of “na” is displayed when this information is unobtainable. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_MEM_STACK_VIRT -------------------- Size (in KB) of the stack for this process. On SUN, the stack is initialized to 8K bytes. PROC_MEM_VIRT -------------------- The size (in KB) of virtual memory allocated for the process. On HP-UX, this consists of the sum of the virtual set size of all private memory regions used by this process, plus this process' share of memory regions which are shared by multiple processes. For processes that use CPU time, the value is divided by the reference count for those regions which are shared. On HP-UX, this metric is less accurate and does not reflect the reference count for shared regions for processes that were started prior to the midaemon process and have not used any CPU time since the midaemon was started. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. On all other Unix systems, this consists of private text, private data, private stack and shared memory. The reference count for shared memory is not taken into account, so the value of this metric represents the total virtual size of all regions regardless of the number of processes sharing access. Note also that lazy swap algorithms, sparse address space malloc calls, and memory-mapped file access can result in large VSS values. On systems that provide Glance memory regions detail reports, the drilldown detail per memory region is useful to understand the nature of memory allocations for the process. A value of “na” is displayed when this information is unobtainable. This information may not be obtainable for some system (kernel) processes. It may also not be available for processes. On Windows, this is the number of KBs the process has used in the paging file(s). Paging files are used to store pages of memory used by the process, such as local data, that are not contained in other files. Examples of memory pages which are contained in other files include pages storing a program's .EXE and .DLL files. These would not be kept in pagefile space. Thus, often programs will have a memory working set size (PROC_MEM_RES) larger than the size of its pagefile space. PROC_MINOR_FAULT -------------------- Number of minor page faults for this process (or kernel thread, if HP-UX) during the interval. On HP-UX, major page faults and minor page faults are a subset of vfaults (virtual faults). 
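The resident and virtual size calculations described above for PROC_MEM_RES and PROC_MEM_VIRT count private regions in full and divide shared regions by their reference count. The following is a minimal arithmetic sketch of that calculation; the region sizes and reference counts are invented for illustration and are not values reported by any tool.

    /* Sketch of the calculation described for PROC_MEM_RES:
     *   RSS = sum of private region pages +
     *         (sum of shared region pages / number of references)
     * All region sizes and reference counts below are hypothetical. */
    #include <stdio.h>

    struct region {
        const char *name;
        long        kb;      /* size of the region in KB */
        int         refs;    /* attachment (reference) count */
        int         shared;  /* nonzero if the region is shared */
    };

    int main(void)
    {
        struct region r[] = {
            { "TEXT (shared)",  1024, 4, 1 },  /* program text shared by 4 processes */
            { "DATA (private)",  512, 1, 0 },
            { "STACK (private)",  64, 1, 0 },
            { "SHMEM (shared)", 2048, 8, 1 },
        };
        size_t i;
        double rss_kb = 0.0;

        for (i = 0; i < sizeof(r) / sizeof(r[0]); i++) {
            if (r[i].shared)
                rss_kb += (double)r[i].kb / r[i].refs;  /* shared: divide by refs */
            else
                rss_kb += r[i].kb;                      /* private: counted in full */
        }
        printf("approximate resident size: %.0f KB\n", rss_kb);
        return 0;
    }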
Stack and heap accesses can cause vfaults, but do not result in a disk page having to be loaded into memory. PROC_MINOR_FAULT_CUM -------------------- Number of minor page faults for this process (or kernel thread, if HP-UX) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, major page faults and minor page faults are a subset of vfaults (virtual faults). Stack and heap accesses can cause vfaults, but do not result in a disk page having to be loaded into memory. PROC_NICE_PRI -------------------- The nice priority for the process (or kernel thread, if HP-UX) when it was last dispatched. The value is a bias used to adjust the priority for the process. On AIX, the nice user value, makes a process less favored than it otherwise would be, has a range of 0-40 with a default value of 20. The value of PUSER is always added to the value of nice to weight the user process down below the range of priorities expected to be in use by system jobs like the scheduler and special wait queues. On all other Unix systems, the value ranges from 0 to 39. A higher value causes a process (or kernel thread, if HP-UX) to be dispatched less. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_PAGEFAULT -------------------- The number of page faults that occurred during the interval for the process. PROC_PAGEFAULT_RATE -------------------- The number of page faults per second that occurred during the interval for the process. PROC_PAGEFAULT_RATE_CUM -------------------- The average number of page faults per second that occurred over the cumulative collection time for the process. PROC_PARENT_PROC_ID -------------------- The parent process' PID number. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_PRI -------------------- On Unix systems, this is the dispatch priority of a process (or kernel thread, if HP-UX) at the end of the interval. The lower the value, the more likely the process is to be dispatched. On Windows, this is the current base priority of this process. On HP-UX, whenever the priority is changed for the selected process or kernel thread, the new value will not be reflected until the process or kernel thread is reactivated if it is currently idle (for example, SLEEPing). On HP-UX, the lower the value, the more the process or kernel thread is likely to be dispatched. Values between zero and 127 are considered to be “real-time” priorities, which the kernel does not adjust. Values above 127 are normal priorities and are modified by the kernel for load balancing. Some special priorities are used in the HP-UX kernel and subsystems for different activities. These values are described in /usr/include/sys/param.h. Priorities less than PZERO 153 are not signalable. Note that on HP-UX, many network-related programs such as inetd, biod, and rlogind run at priority 154 which is PPIPE. Just because they run at this priority does not mean they are using pipes. By examining the open files, you can determine if a process or kernel thread is using pipes. 
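As a rough illustration of the nice bias described under PROC_NICE_PRI, the following sketch reads the calling process's nice value with getpriority(2). The conversion to a 0-39 style value by adding NZERO (typically 20) is an assumption about how the two conventions line up, not a statement of how the collector derives the metric.

    /* Sketch: reading the nice bias of the calling process with getpriority(2).
     * getpriority() uses the -NZERO..NZERO-1 convention; NZERO (typically 20)
     * is added below to approximate the 0-39 scale described above. */
    #include <stdio.h>
    #include <errno.h>
    #include <limits.h>
    #include <sys/resource.h>

    #ifndef NZERO
    #define NZERO 20
    #endif

    int main(void)
    {
        int nice_val;

        errno = 0;
        nice_val = getpriority(PRIO_PROCESS, 0);   /* 0 = the calling process */
        if (nice_val == -1 && errno != 0) {
            perror("getpriority");
            return 1;
        }
        printf("getpriority value: %d (0-39 scale: %d)\n",
               nice_val, nice_val + NZERO);
        return 0;
    }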
For HP-UX 10.0 and later releases, priorities between -32 and - 1 can be seen for processes or kernel threads using the Posix Real-time Schedulers. When specifying a Posix priority, the value entered must be in the range from 0 through 31, which the system then remaps to a negative number in the range of -1 through -32. Refer to the rtsched man pages for more information. On a threaded operating system, such as HP-UX 11.0 and beyond, this metric represents a kernel thread characteristic. If this metric is reported for a process, the value for its last executing kernel thread is given. For example, if a process has multiple kernel threads and kernel thread one is the last to execute during the interval, the metric value for kernel thread one is assigned to the process. On AIX, values for priority range from 0 to 127. Processes running at priorities less than PZERO (40) are not signalable. On Windows, the higher the value the more likely the process or thread is to be dispatched. Values for priority range from 0 to 31. Values of 16 and above are considered to be “realtime” priorities. Threads within a process can raise and lower their own base priorities relative to the process's base priority. On Sun Systems this metric is only available on 4.1.X. PROC_PROC_ARGV1 -------------------- The first argument (argv[1]) of the process argument list or the second word of the command line, if present. The OV Performance Agent logs the first 32 characters of this metric. For releases that support the parm file javaarg flag, this metric may not be the first argument. When javaarg=true, the value of this metric is replaced (for java processes only) by the java class or jar name. This can then be useful to construct parm file java application definitions using the argv1= keyword. PROC_PROC_CMD -------------------- The full command line with which the process was initiated. On HP-UX, the maximum length returned depends upon the version of the OS, but typically up to 1020 characters are available. On other Unix systems, the maximum length is 4095 characters. On Linux, if the command string exceeds 4096 characters, the kernel instrumentation may not report any value. If the command line contains special characters, such as carriage return and tab, these characters will be converted to , , and so on. PROC_PROC_ID -------------------- The process ID number (or PID) of this process that is used by the kernel to uniquely identify this process. Process numbers are reused, so they only identify a process for its lifetime. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_PROC_NAME -------------------- The process program name. It is limited to 16 characters. On Unix systems, this is derived from the 1st parameter to the exec(2) system call. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. On Windows, the “System Idle Process” is not reported by OVPA since Idle is a process that runs to occupy the processors when they are not executing other threads. Idle has one thread per processor. PROC_REGION_FILENAME -------------------- The file path that corresponds to the front store file of a memory region. For text and data regions, this is the name of the program; for shared libraries it is the library name. Certain “special” names are displayed if there is no actual “front store” for a memory region. 
These special names correspond to the region type. If no file name is shown for the region, then this is a memory region without “front store,” created by the system call mmap(2). If the file format includes an inode number, use the ncheck(1M) program to display the filename relative to the mount point. Sometimes files may be deleted before they are closed. In these cases, the process file table may still have the inode even though the file is not actually present and as a result, ncheck will fail. In the following example, note that the file system name has been included to avoid the overhead of searching all of the file systems for the inode number. If a file name showing inode number 2266 on the ufs file system /dev/root was displayed, then from that display, the following ncheck command could be entered:
ncheck -F ufs -i 2266
An output like the following would be generated:
/dev/root: 2266 /lib/libXm.so.5.0
The string displayed for an inode includes a file type (xxx) and a file domain (yyy), where: xxx: Is the file type: blk - Block device chr - Character device dir - Directory file fifo - FIFO (pipes have a “fifo” label) lnk - Soft file link reg - Regular file yyy: Is the file domain. Some examples are ufs (Unix file system), nfs (NFS), proc (process file system) and tmpfs (memory based file system). If a program is “hard linked” (that is, two files pointing to the same inode), then a different name may be reported for the text and data regions than is actually running. Use the “-i” option of the “ls” command to see the inode numbers. PROC_REGION_PRIVATE_SHARED_FLAG -------------------- A text indicator of either private memory (Priv) or shared (Shared) for this memory region. Private memory is only being used by the current process. Shared memory is mapped into the address space of other processes. PROC_REGION_PROT_FLAG -------------------- The protection mode of the process memory segment. It represents Read/Write/eXecute permissions in the same way as ls(1) does for files. This metric is available only for regions that have global protection mode. It is not available (“na”) for regions that use per-page protection. PROC_REGION_REF_COUNT -------------------- The number of processes sharing this memory region. For private regions this value is 1. For shared regions, this value is the number of processes sharing the region. This metric is currently unavailable on HP-UX 11.0. PROC_REGION_TYPE -------------------- A text name for the type of this memory region. It can be one of the following: DATA Data region LIBDAT Shared Library data LIBTXT Shared Library text STACK Stack region TEXT Text (that is, code) On HP-UX, it can also be one of the following: GRAPH Frame buffer lock page IOMAP IO region (iomap) MEMMAP Memory-mapped file, which includes shared libraries (text and data), or memory created by calls to mmap(2) NULLDR Null pointer dereference shared page (see below) RSESTA Itanium Registered stack engine region SIGSTK Signal stack region UAREA User Area region UNKNWN Region of unknown type On HP-UX, a whole page is allocated for NULL pointer dereferencing, which is reported as the NULLDR area. If the program is compiled with the “-z” option (which disallows NULL dereferencing), this area is missing. Shared libraries are accessed as memory mapped files, so that the code will show up as “MEMMAP/Shared” and data will show up as “MEMMAP/Priv”. On SUN, it can also be one of the following: BSS Static initialized data MEMMAP Memory mapped files NULLDR Null pointer dereference shared page (see below). 
SHMEM Shared memory UNKNWN Region of unknown type On SUN, programs might have an area for NULL pointer dereferencing, which is reported as the NULLDR area. Special segment types that are supported by the kernel that are used for frame buffer devices or other purposes are typed as UNKNWN. The following kernel processes are examples of this: sched, pageout, and fsflush. PROC_REGION_VIRT -------------------- The size (in KBs unless otherwise indicated) of the virtual memory occupied by this memory region. This value is not affected by the reference count. The number of references is a count of the number of attachments to the memory region. Attachments, for shared regions, may come from several processes sharing the same memory, a single process with multiple attachments, or combinations of these. PROC_REGION_VIRT_ADDRS -------------------- The virtual address of this memory region displayed in hexadecimal showing the space and offset of the region. On HP-UX, this is a 64-bit (96-bit on a 64-bit OS) hexadecimal value indicating the space and space offset of the region. PROC_REGION_VIRT_DATA -------------------- The size (in KBs unless otherwise indicated) of the total virtual memory occupied by data regions of this process. This value is not affected by the reference count since all data regions are private. This metric is specific to the process as a whole and will not change its value. If this metric is used in a glance adviser script, only pick up one value. Do not sum the values since the same value is shown for all regions. PROC_REGION_VIRT_OTHER -------------------- The size (in KBs unless otherwise indicated) of the total virtual memory occupied by regions of this process that are not text, data, stack, or shared memory. This value is not affected by the reference count. This metric is specific to the process as a whole and will not change its value. If this metric is used in a glance adviser script, only pick up one value. Do not sum the values since the same value is shown for all regions. The number of references is a count of the number of attachments to the memory region. Attachments, for shared regions, may come from several processes sharing the same memory, a single process with multiple attachments, or combinations of these. PROC_REGION_VIRT_SHMEM -------------------- The size (in KBs unless otherwise indicated) of the total virtual memory occupied by shared memory regions of this process. Note that this memory is shared by other processes and this figure is reported in their metrics also. This value is not affected by the reference count. This metric is specific to the process as a whole and will not change its value. If this metric is used in a glance adviser script, only pick up one value. Do not sum the values since the same value is shown for all regions. The number of references is a count of the number of attachments to the memory region. Attachments, for shared regions, may come from several processes sharing the same memory, a single process with multiple attachments, or combinations of these. PROC_REGION_VIRT_STACK -------------------- The size (in KBs unless otherwise indicated) of the total virtual memory occupied by stack regions of this process. Stack regions are always private and will have a reference count of one. This metric is specific to the process as a whole and will not change its value. If this metric is used in a glance adviser script, only pick up one value. Do not sum the values since the same value is shown for all regions. 
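Memory-mapped regions without “front store,” as discussed under PROC_REGION_FILENAME and the MEMMAP region type, are typically created with mmap(2). The following sketch creates such an anonymous mapping; the size is arbitrary, and on older releases that lack MAP_ANON an equivalent mapping of /dev/zero would be needed instead.

    /* Sketch: creating an anonymous memory-mapped region (no "front store"),
     * the kind reported as a MEMMAP region with no file name. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 64 * 1024;   /* 64 KB region, illustrative size */
        void  *p;

        /* MAP_ANON requests memory backed by swap only, not by a file. */
        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANON, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        memset(p, 0, len);        /* touch the pages */
        printf("anonymous region of %lu KB mapped at %p\n",
               (unsigned long)(len / 1024), p);
        sleep(30);                /* keep it mapped long enough to observe */
        munmap(p, len);
        return 0;
    }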
PROC_REGION_VIRT_TEXT -------------------- The size (in KBs unless otherwise indicated) of the total virtual memory occupied by text regions of this process. This value is not affected by the reference count. This metric is specific to the process as a whole and will not change its value. If this metric is used in a glance adviser script, only pick up one value. Do not sum the values since the same value is shown for all regions. PROC_REVERSE_PRI -------------------- The process priority in a range of 0 to 127, with a lower value interpreted as a higher priority. Since priority ranges can be customized, this metric provides a standardized way of interpreting priority that is consistent with other versions of Unix. This is the same value as reported in the PRI field by the ps command when the -c option is not used. PROC_RUN_TIME -------------------- The elapsed time since a process (or kernel thread, if HP-UX) started, in seconds. This metric is less than the interval time if the process (or kernel thread, if HP-UX) was not alive during the entire first or last interval. On a threaded operating system such as HP-UX 11.0 and beyond, this metric is available for a process or kernel thread. PROC_SIGNAL -------------------- Number of signals seen by the current process (or kernel thread, if HP-UX) during the lifetime of the process or kernel thread. PROC_SIGNAL_CUM -------------------- Number of signals seen by the current process (or kernel thread, if HP-UX) over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. PROC_STARTTIME -------------------- The creation date and time of the process (or kernel thread, if HP-UX). PROC_STATE -------------------- A text string summarizing the current state of a process (or kernel thread, if HP-UX), either:
new      This is the first interval the process has been displayed.
active   Process is continuing.
died     Process expired during the interval.
PROC_STATE_FLAG -------------------- The Unix STATE flag of the process during the interval. PROC_STOP_REASON -------------------- A text string describing what caused the process (or kernel thread, if HP-UX) to stop executing. For example, if the process is waiting for a CPU while higher priority processes are executing, then its block reason is PRI. A complete list of block reasons follows:
SunOS 5.X
String   Reason for Process Block
------------------------------------
died     Process terminated during the interval.
new      Process was created (via the exec() system call) during the interval.
NONE     Process is ready to run. It is not apparent that the process is blocked.
OTHER    Waiting for a reason not decipherable by the measurement software.
PMEM     Waiting for more primary memory.
PRI      Process is on the run queue.
SLEEP    Waiting for an event to complete.
TRACE    Received a signal to stop because parent is tracing this process.
ZOMB     Process has terminated and the parent is not waiting.
On SunOS 5.X, instead of putting the scheduler to sleep and waking it up, the kernel just stops and continues the scheduler as needed. This is done by changing the state of the scheduler to ws_stop, which is when you see the TRACE state. This is for efficiency and happens every clock tick so the “sched” process will always appear to be in a “TRACE” state. 
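The block reasons listed above are logged by scopeux as a compact numeric code rather than a string (see PROC_STOP_REASON_FLAG, next). The sketch below shows one way such a string-to-code mapping could be represented; the numeric values are hypothetical and are not the codes actually used.

    /* Sketch: mapping block-reason strings to compact numeric codes, in the
     * spirit of PROC_STOP_REASON_FLAG. The numeric values are hypothetical. */
    #include <stdio.h>
    #include <string.h>

    struct stop_reason { const char *name; int flag; };

    static const struct stop_reason reasons[] = {
        { "NONE",  0 }, { "SLEEP", 1 }, { "PRI",   2 }, { "PMEM",  3 },
        { "TRACE", 4 }, { "ZOMB",  5 }, { "OTHER", 6 }, { "died",  7 },
        { "new",   8 },
    };

    static int stop_reason_flag(const char *name)
    {
        size_t i;
        for (i = 0; i < sizeof(reasons) / sizeof(reasons[0]); i++)
            if (strcmp(reasons[i].name, name) == 0)
                return reasons[i].flag;
        return -1;   /* unknown reason */
    }

    int main(void)
    {
        printf("SLEEP -> %d\n", stop_reason_flag("SLEEP"));
        printf("ZOMB  -> %d\n", stop_reason_flag("ZOMB"));
        return 0;
    }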
PROC_STOP_REASON_FLAG -------------------- A numeric value for the stop reason. This is used by scopeux instead of the ASCII string returned by PROC_STOP_REASON in order to conserve space in the log file. On a threaded operating system, such as HP-UX 11.0 and beyond, this metric represents a kernel thread characteristic. If this metric is reported for a process, the value for its last executing kernel thread is given. For example, if a process has multiple kernel threads and kernel thread one is the last to execute during the interval, the metric value for kernel thread one is assigned to the process. PROC_SYSCALL -------------------- The number of system calls this process executed during the interval. PROC_SYSCALL_CUM -------------------- The number of system calls this process has executed over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. PROC_THREAD_COUNT -------------------- The total number of kernel threads for the current process. On Linux systems, every thread has its own process ID so this metric will always be 1. On Solaris systems, this metric reflects the total number of Light Weight Processes (LWPs) associated with the process. PROC_TOP_CPU_INDEX -------------------- The index of the process which consumed the most CPU during the interval. From this index, the process PID, process name, and CPU utilization can be obtained. This metric is used by the Performance Tools to index into the Data collection interface's internal table. This is not a metric that will be interesting to Tool users. PROC_TTY -------------------- The controlling terminal for a process. This field is blank if there is no controlling terminal. On HP-UX, Linux, and AIX, this is the same as the “TTY” field of the ps command. On all other Unix systems, the controlling terminal name is found by searching the directories provided in the /etc/ttysrch file. See man page ttysrch(4) for details. The matching criteria field (“M”, “F” or “I” values) of the ttysrch file is ignored. If a terminal is not found in one of the ttysrch file directories, the following directories are searched in the order here: “/dev”, “/dev/pts”, “/dev/term” and “dev/xt”. When a match is found in one of the “/dev” subdirectories, “/dev/” is not displayed as part of the terminal name. If no match is found in the directory searches, the major and minor numbers of the controlling terminal are displayed. In most cases, this value is the same as the “TTY” field of the ps command. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_TTY_DEV -------------------- The device number of the controlling terminal for a process. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_UID -------------------- The real UID (user ID number) of a process. This is the UID returned from the getuid system call. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_USER_NAME -------------------- On Unix systems, this is the login account of a process (from /etc/passwd). 
If more than one account is listed in /etc/passwd with the same user ID (uid) field, the first one is used. If an account cannot be found that matches the uid field, then the uid number is returned. This would occur if the account was removed after a process was started. On Windows, this is the process owner account name, without the domain name this account resides in. On HP-UX, this metric is specific to a process. If this metric is reported for a kernel thread, the value for its associated process is given. PROC_VOLUNTARY_CSWITCH -------------------- The number of times a process (or kernel thread, if HP-UX) has given up the CPU before an external event preempted it during the interval. Examples of voluntary switches include calls to sleep(2) and select(2). On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. PROC_VOLUNTARY_CSWITCH_CUM -------------------- The number of times a process (or kernel thread, if HP-UX) has given up the CPU before an external event preempted it over the cumulative collection time. Examples of voluntary switches include calls to sleep(2) and select(2). The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On a threaded operating system, such as HP-UX 11.0 and beyond, process usage of a resource is calculated by summing the usage of that resource by its kernel threads. If this metric is reported for a kernel thread, the value is the resource usage by that single kernel thread. If this metric is reported for a process, the value is the sum of the resource usage by all of its kernel threads. Alive kernel threads and kernel threads that have died during the interval are included in the summation. TBL_BUFFER_CACHE_AVAIL -------------------- The size (in KBs unless otherwise specified) of the file system buffer cache on the system. On HP-UX, these buffers are used for all file system IO operations, as well as all other block IO operations in the system (exec, mount, inode reading, and some device drivers). On HP-UX, if dynamic buffer cache is enabled, the system allocates a percentage of available memory not less than dbc_min_pct nor more than dbc_max_pct, depending on the system needs at any given time. On systems with a static buffer cache, this value will remain equal to bufpages, or not less than dbc_min_pct nor more than dbc_max_pct. On SUN, this value is obtained by multiplying the system page size times the number of buffer headers (nbuf). For example, on a SPARCstation 10 the buffer size is usually (200 (page size buffers) * 4096 (bytes/page) = 800 KB). NOTE: (For SUN systems with VERITAS File System installed) Veritas implemented their Direct I/O feature in their file system to provide mechanism for bypassing the Unix system buffer cache while retaining the on disk structure of a file system. 
The way in which Direct I/O works involves the way the system buffer cache is handled by the Unix OS. Once the VERITAS file system returns with the requested block, instead of copying the content to a system buffer page, it copies the block into the application's buffer space. That's why if you have installed vxfs on your system, the TBL_BUFFER_CACHE_AVAIL can exceed the TBL_BUFFER_CACHE_HWM metric. On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The “nbuf” value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On AIX, this cache is used for all block IO. TBL_BUFFER_CACHE_HWM -------------------- The value of the system configurable parameter “bufhwm”. This is the maximum amount of memory that can be allocated to the buffer cache. Unless otherwise set in the /etc/system file, the default is 2 percent of system memory. TBL_BUFFER_HEADER_AVAIL -------------------- This is the maximum number of headers pointing to buffers in the file system buffer cache. On HP-UX, this is the configured number, not the maximum number. This can be set by the “nbuf” kernel configuration parameter. nbuf is used to determine the maximum total number of buffers on the system. On HP-UX, these are used to manage the buffer cache, which is used for all block IO operations. When nbuf is zero, this value depends on the “bufpages” size of memory (see System Administration Tasks manual). A value of “na” indicates either a dynamic buffer cache configuration, or the nbuf kernel parameter has been left unconfigured and allowed to “float” with the bufpages parameter. This is not a maximum available value in a fixed buffer cache configuration. Instead, it is the initial configured value. The actual number of used buffer headers can grow beyond this initial value. On SUN, this value is “nbuf”. On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The “nbuf” value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. TBL_BUFFER_HEADER_USED -------------------- The number of buffer headers currently in use. On HP-UX, this dynamic value will rarely change once the system boots. During the system bootup, the kernel allocates a large number of buffer headers and the count is likely to stay at that value after the bootup completes. If the value increases beyond the initial boot value, it will not decrease. 
Buffer headers are allocated in kernel memory, not user memory, and therefore, will not decrease. This value can exceed the available or configured number of buffer headers in a fixed buffer cache configuration. On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The “nbuf” value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_BUFFER_HEADER_USED_HIGH -------------------- The largest number of buffer headers used in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The “nbuf” value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_BUFFER_HEADER_UTIL -------------------- The percentage of buffer headers currently used. On HP-UX, a value of “na” indicates either a dynamic buffer cache configuration, or the nbuf kernel parameter has been left unconfigured and allowed to “float” with the bufpages parameter. On SUN, the buffer cache is a memory pool used by the system to cache inode, indirect block and cylinder group related disk accesses. This is different from the traditional concept of a buffer cache that also holds file system data. On Solaris 5.X, as file data is cached, accesses to it show up as virtual memory IOs. File data caching occurs through memory mapping managed by the virtual memory system, not through the buffer cache. The “nbuf” value is dynamic, but it is very hard to create a situation where the memory cache metrics change, since most systems have more than adequate space for inode, indirect block, and cylinder group data caching. This cache is more heavily utilized on NFS file servers. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. 
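As described above for the SUN buffer cache, TBL_BUFFER_CACHE_AVAIL is the system page size multiplied by the number of buffer headers (nbuf), and TBL_BUFFER_HEADER_UTIL is the percentage of those headers in use. The sketch below reproduces that arithmetic; the nbuf and in-use counts are invented, since those kernel values are not exposed through a portable interface.

    /* Sketch of the SUN buffer cache arithmetic described above:
     *   TBL_BUFFER_CACHE_AVAIL = page size * nbuf
     *   TBL_BUFFER_HEADER_UTIL = 100 * headers in use / nbuf
     * nbuf and headers_used are invented values for illustration. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long page_bytes   = sysconf(_SC_PAGESIZE);  /* e.g. 4096 on a SPARCstation 10 */
        long nbuf         = 200;                    /* hypothetical buffer header count */
        long headers_used = 150;                    /* hypothetical headers in use */

        long cache_kb = (page_bytes * nbuf) / 1024; /* 200 * 4096 bytes = 800 KB */
        double util   = 100.0 * headers_used / nbuf;

        printf("buffer cache available: %ld KB\n", cache_kb);
        printf("buffer header utilization: %.1f%%\n", util);
        return 0;
    }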
TBL_BUFFER_HEADER_UTIL_HIGH -------------------- The highest percentage of buffer header used in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On HP-UX, a value of “na” indicates either a dynamic buffer cache configuration, or the nbuf kernel parameter has been left unconfigured and allowed to “float” with the bufpages parameter. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_FILE_LOCK_USED -------------------- The number of file or record locks currently in use. One file can have multiple locks. Files and/or records are locked by calls to lockf(2). On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_FILE_LOCK_USED_HIGH -------------------- The highest number of file locks used by the file system in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_FILE_TABLE_AVAIL -------------------- The number of entries in the file table. On HP-UX and AIX, this is the configured maximum number of the file table entries used by the kernel to manage open file descriptors. On HP-UX, this is the sum of the “nfile” and “file_pad” values used in kernel generation. On SUN, this is the number of entries in the file cache. This is a size. All entries are not always in use. The cache size is dynamic. Entries in this cache are used to manage open file descriptors. They are reused as files are closed and new ones are opened. The size of the cache will go up or down in chunks as more or less space is required in the cache. On AIX, the file table entries are dynamically allocated by the kernel if there is no entry available. These entries are allocated in chunks. TBL_FILE_TABLE_USED -------------------- The number of entries in the file table currently used by file descriptors. On SUN, this is the number of file cache entries currently used by file descriptors. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_FILE_TABLE_USED_HIGH -------------------- The highest number of entries in the file table that is used by file descriptors in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_FILE_TABLE_UTIL -------------------- The percentage of file table entries currently used by file descriptors. 
On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_FILE_TABLE_UTIL_HIGH -------------------- The highest percentage of entries in the file table used by file descriptors in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_INODE_CACHE_AVAIL -------------------- On HP-UX, this is the configured total number of entries for the incore inode tables on the system. For HP-UX releases prior to 11.2x, this value reflects only the HFS inode table. For subsequent HP-UX releases, this value is the sum of inode tables for both HFS and VxFS file systems (ninode plus vxfs_ninode). On HP-UX, file system directory activity is done through inodes that are stored on disk. The kernel keeps a memory cache of active and recently accessed inodes to reduce disk IOs. When a file is opened through a pathname, the kernel converts the pathname to an inode number and attempts to obtain the inode information from the cache based on the filesystem type. If the inode entry is not in the cache, the inode is read from disk into the inode cache. On HP-UX, the number of used entries in the inode caches are usually at or near the capacity. This does not necessarily indicate that the configured sizes are too small because the tables may contain recently used inodes and inodes referenced by entries in the directory name lookup cache. When a new inode cache entry is required and a free entry does not exist, inactive entries referenced by the directory name cache are used. If after freeing inode entries only referenced by the directory name cache does not create enough free space, the message “inode: table is full” message may appear on the console. If this occurs, increase the size of the kernel parameter, ninode. Low directory name cache hit ratios may also indicate an underconfigured inode cache. On HP-UX, the default formula for the ninode size is: ninode = ((nproc+16+maxusers)+32+ (2*npty)+(4*num_clients)) On all other Unix systems, this is the number of entries in the inode cache. This is a size. All entries are not always in use. The cache size is dynamic. Entries in this cache are reused as files are closed and new ones are opened. The size of the cache will go up or down in chunks as more or less space is required in the cache. Inodes are used to store information about files within the file system. Every file has at least two inodes associated with it (one for the directory and one for the file itself). The information stored in an inode includes the owners, timestamps, size, and an array of indices used to translate logical block numbers to physical sector numbers. There is a separate inode maintained for every view of a file, so if two processes have the same file open, they both use the same directory inode, but separate inodes for the file. TBL_INODE_CACHE_HIGH -------------------- On HP-UX and OSF1, this is the highest number of inodes that have been used in any one interval over the cumulative collection time. On HP-UX, file system directory activity is done through inodes that are stored on disk. 
The kernel keeps a memory cache of active and recently accessed inodes to reduce disk IOs. When a file is opened through a pathname, the kernel converts the pathname to an inode number and attempts to obtain the inode information from the cache based on the filesystem type. If the inode entry is not in the cache, the inode is read from disk into the inode cache. On HP-UX, the number of used entries in the inode caches are usually at or near the capacity. This does not necessarily indicate that the configured sizes are too small because the tables may contain recently used inodes and inodes referenced by entries in the directory name lookup cache. When a new inode cache entry is required and a free entry does not exist, inactive entries referenced by the directory name cache are used. If after freeing inode entries only referenced by the directory name cache does not create enough free space, the message “inode: table is full” message may appear on the console. If this occurs, increase the size of the kernel parameter, ninode. Low directory name cache hit ratios may also indicate an underconfigured inode cache. On HP-UX, the default formula for the ninode size is: ninode = ((nproc+16+maxusers)+32+ (2*npty)+(4*num_clients)) On all other Unix systems, this is the largest size of the inode cache in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_INODE_CACHE_USED -------------------- The number of inode cache entries currently in use. On HP-UX, this is the number of “non-free” inodes currently used. Since the inode table contains recently closed inodes as well as open inodes, the table often appears to be fully utilized. When a new entry is needed, one can usually be found by reusing one of the recently closed inode entries. On HP-UX, file system directory activity is done through inodes that are stored on disk. The kernel keeps a memory cache of active and recently accessed inodes to reduce disk IOs. When a file is opened through a pathname, the kernel converts the pathname to an inode number and attempts to obtain the inode information from the cache based on the filesystem type. If the inode entry is not in the cache, the inode is read from disk into the inode cache. On HP-UX, the number of used entries in the inode caches are usually at or near the capacity. This does not necessarily indicate that the configured sizes are too small because the tables may contain recently used inodes and inodes referenced by entries in the directory name lookup cache. When a new inode cache entry is required and a free entry does not exist, inactive entries referenced by the directory name cache are used. If after freeing inode entries only referenced by the directory name cache does not create enough free space, the message “inode: table is full” message may appear on the console. If this occurs, increase the size of the kernel parameter, ninode. Low directory name cache hit ratios may also indicate an underconfigured inode cache. 
On HP-UX, the default formula for the ninode size is: ninode = ((nproc+16+maxusers)+32+ (2*npty)+(4*num_clients)) On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_MAX_USERS -------------------- The value of the system configurable parameter “maxusers”. This value signifies the approximate number of users on a system. Note, changing this value can significantly affect the performance of a system because memory allocation calculations are based on it. This value can be set in the /etc/system file. TBL_MSG_BUFFER_ACTIVE -------------------- The current active total size (in KBs unless otherwise specified) of all IPC message buffers. These buffers are created by msgsnd(2) calls and released by msgrcv(2) calls. This metric only counts the active message queue buffers, which means that a msgsnd(2) call has been made and the msgrcv(2) has not yet been done on the queue entry or a msgrcv(2) call is waiting on a message queue entry. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_MSG_BUFFER_AVAIL -------------------- The maximum achievable size (in KBs unless otherwise specified) of the message queue buffer pool on the system. Each message queue can contain many buffers which are created whenever a program issues a msgsnd(2) call. Each of these buffers is allocated from this buffer pool. Refer to the ipcs(1) man page for more information. This value is determined by taking the product of the three kernel configuration variables “msgseg”, “msgssz” and “msgmni”. On SUN, the InterProcess Communication facilities are dynamically loadable. If the amount available is zero, this facility was not loaded when data collection began, and its data is not obtainable. The data collector is unable to determine that a facility has been loaded once data collection has started. If you know a new facility has been loaded, restart the data collection, and the data for that facility will be collected. See ipcs(1) to report on interprocess communication resources. TBL_MSG_BUFFER_HIGH -------------------- The largest size (in KBs unless otherwise specified) of the message queues in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_MSG_BUFFER_USED -------------------- The current total size (in KBs unless otherwise specified) of all IPC message buffers. These buffers are created by msgsnd(2) calls and released by msgrcv(2) calls. On HP-UX and OSF1, this field corresponds to the CBYTES field of the “ipcs -qo” command. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_MSG_TABLE_ACTIVE -------------------- The number of message queues currently active. A message queue is allocated by a program using the msgget(2) call. This metric returns only the entries in the message queue currently active. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_MSG_TABLE_AVAIL -------------------- The configured maximum number of message queues that can be allocated on the system. 
A message queue is allocated by a program using the msgget(2) call. Refer to the ipcs(1) man page for more information. On SUN, the InterProcess Communication facilities are dynamically loadable. If the amount available is zero, this facility was not loaded when data collection began, and its data is not obtainable. The data collector is unable to determine that a facility has been loaded once data collection has started. If you know a new facility has been loaded, restart the data collection, and the data for that facility will be collected. See ipcs(1) to report on interprocess communication resources. TBL_MSG_TABLE_USED -------------------- On HP-UX, this is the number of message queues currently in use. On all other Unix systems, this is the number of message queues that have been built. A message queue is allocated by a program using the msgget(2) call. See ipcs(1) to list the message queues. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_MSG_TABLE_UTIL -------------------- The percentage of configured message queues currently in use. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_MSG_TABLE_UTIL_HIGH -------------------- The highest percentage of configured message queues that have been in use during any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_NUM_NFSDS -------------------- The number of NFS servers configured. This is the value “nservers” passed to nfsd (the NFS daemon) upon startup. If no value is specified, the default is one. This value determines the maximum number of concurrent NFS requests that the server can handle. See man page for “nfsd”. TBL_PROC_TABLE_AVAIL -------------------- The configured maximum number of the proc table entries used by the kernel to manage processes. This number includes both free and used entries. On HP-UX, this is set by the NPROC value during system generation. AIX has a “dynamic” proc table, which means that AVAIL has been set higher than should ever be needed. TBL_PROC_TABLE_USED -------------------- The number of entries in the proc table currently used by processes. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_PROC_TABLE_UTIL -------------------- The percentage of proc table entries currently used by processes. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_PROC_TABLE_UTIL_HIGH -------------------- The highest percentage of entries in the proc table used by processes in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. 
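The TBL_MSG_* metrics above track System V message queues: msgget(2) allocates a queue entry, and message text sent with msgsnd(2) occupies buffer space until it is collected by msgrcv(2), out of a pool bounded by the product of msgseg, msgssz and msgmni. The following sketch creates a private queue and leaves one message outstanding for a short time; the key, message size and sleep period are illustrative.

    /* Sketch: allocating a System V message queue and sending one message,
     * the activity reflected in the TBL_MSG_TABLE_* and TBL_MSG_BUFFER_*
     * metrics. The queue is removed before exit. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct msgtext { long mtype; char mtext[64]; };

    int main(void)
    {
        int msqid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);  /* new queue entry */
        struct msgtext m;

        if (msqid == -1) {
            perror("msgget");
            return 1;
        }
        m.mtype = 1;
        strcpy(m.mtext, "hello");
        /* Until a msgrcv(2) collects this message, its bytes count toward
         * the active message buffer space reported by ipcs(1). */
        if (msgsnd(msqid, &m, strlen(m.mtext) + 1, 0) == -1)
            perror("msgsnd");

        sleep(30);                        /* leave the message queued for a while */
        msgctl(msqid, IPC_RMID, NULL);    /* remove the queue */
        return 0;
    }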
TBL_PTY_AVAIL -------------------- The configured number of entries used by the pseudo-teletype driver on the system. This limits the number of pty logins possible. For HP-UX, both telnet and rlogin use streams devices. Note: On Solaris 8, by default, the number of ptys is unlimited but restricted by the size of RAM. If the number of ptys is unlimited, this metric is reported as “na”. TBL_PTY_USED -------------------- The number of pseudo-teletype driver (pty) entries currently in use. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_PTY_UTIL -------------------- The percentage of configured pseudo-teletype driver (pty) entries currently in use. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_PTY_UTIL_HIGH -------------------- The highest percentage of configured pseudo-teletype driver (pty) entries in use during any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SEM_TABLE_ACTIVE -------------------- The number of semaphore identifiers currently active. This means that the semaphores are currently locked by processes. Any new process requesting this semaphore is blocked if IPC_NOWAIT flag is not set. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SEM_TABLE_AVAIL -------------------- The configured number of semaphore identifiers (sets) that can be allocated on the system. On SUN, the InterProcess Communication facilities are dynamically loadable. If the amount available is zero, this facility was not loaded when data collection began, and its data is not obtainable. The data collector is unable to determine that a facility has been loaded once data collection has started. If you know a new facility has been loaded, restart the data collection, and the data for that facility will be collected. See ipcs(1) to report on interprocess communication resources. TBL_SEM_TABLE_USED -------------------- On HP-UX, this is the number of semaphore identifiers currently in use. On all other Unix systems, this is the number of semaphore identifiers that have been built. A semaphore identifier is allocated by a program using the semget(2) call. See ipcs(1) to list semaphores. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SEM_TABLE_UTIL -------------------- The percentage of configured semaphores identifiers currently in use. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SEM_TABLE_UTIL_HIGH -------------------- The highest percentage of configured semaphore identifiers that have been in use during any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. 
On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_ACTIVE -------------------- The size (in KBs unless otherwise specified) of the shared memory segments that have running processes attached to them. This may be less than the amount of shared memory used on the system because a shared memory segment may exist and not have any process attached to it. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_AVAIL -------------------- The maximum achievable size (in MB unless otherwise specified) of the shared memory pool on the system. This is a theoretical maximum determined by multiplying the configured maximum number of shared memory entries (shmmni) by the maximum size of each shared memory segment (shmmax). Your system may not have enough virtual memory to actually reach this theoretical limit - one cannot allocate more shared memory than the available reserved space configured for virtual memory. It should be noted that this value does not include any architectural limitations. (For example, on a 32-bit kernel, there is an addressing limit of 1.75 GB.) On SUN, the InterProcess Communication facilities are dynamically loadable. If the amount available is zero, this facility was not loaded when data collection began, and its data is not obtainable. The data collector is unable to determine that a facility has been loaded once data collection has started. If you know a new facility has been loaded, restart the data collection, and the data for that facility will be collected. See ipcs(1) to report on interprocess communication resources. TBL_SHMEM_HIGH -------------------- The highest size (in KBs unless otherwise specified) of shared memory used in any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_TABLE_ACTIVE -------------------- The number of shared memory segments that have running processes attached to them. This may be less than the number of shared memory segments that have been allocated. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_TABLE_AVAIL -------------------- The configured number of shared memory segments that can be allocated on the system. On SUN, the InterProcess Communication facilities are dynamically loadable. If the amount available is zero, this facility was not loaded when data collection began, and its data is not obtainable. The data collector is unable to determine that a facility has been loaded once data collection has started. If you know a new facility has been loaded, restart the data collection, and the data for that facility will be collected. See ipcs(1) to report on interprocess communication resources. TBL_SHMEM_TABLE_USED -------------------- On HP-UX, this is the number of shared memory segments currently in use. On all other Unix systems, this is the number of shared memory segments that have been built. This includes shared memory segments with no processes attached to them. 
A shared memory segment is allocated by a program using the shmget(2) call. Also refer to ipcs(1). On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_TABLE_UTIL -------------------- The percentage of configured shared memory segments currently in use. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_TABLE_UTIL_HIGH -------------------- The highest percentage of configured shared memory segments that have been in use during any one interval over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TBL_SHMEM_USED -------------------- The size (in KBs unless otherwise specified) of the shared memory segments. Additionally, it includes memory segments to which no processes are attached. If a shared memory segment has zero attachments, the space may not always be allocated in memory. See ipcs(1) to list shared memory segments. On Unix systems, this metric is updated every 30 seconds or the sampling interval, whichever is greater. TTBIN_TRANS_COUNT TT_CLIENT_BIN_TRANS_COUNT -------------------- The number of completed transactions in this range during the last interval. TTBIN_TRANS_COUNT_CUM TT_CLIENT_BIN_TRANS_COUNT_CUM -------------------- The number of completed transactions in this range over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. TTBIN_UPPER_RANGE -------------------- The upper range (transaction time) for this TT bin. There are a maximum of nine user-defined transaction response time bins (TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is the overflow bin and will always have a value of -2 (overflow). Note that the values specified in the transaction configuration file cannot exceed 2147483.6, which is the number of seconds in 24.85 days. If the user specifies any values greater than 2147483.6, the numbers reported for those bins or Service Level Objectives (SLO) will be -2. TT_ABORT TT_CLIENT_ABORT -------------------- The number of aborted transactions during the last interval for this transaction. TT_ABORT_CUM TT_CLIENT_ABORT_CUM -------------------- The number of aborted transactions over the cumulative collection time for this transaction. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. TT_ABORT_WALL_TIME TT_CLIENT_ABORT_WALL_TIME -------------------- The total time, in seconds, of all aborted transactions during the last interval for this transaction. 
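The transaction response time bins (TTBIN_UPPER_RANGE) and the Service Level Objectives mentioned above are taken from the transaction configuration file (ttd.conf on UNIX platforms). The following is a rough sketch only: the range= and slo= keywords are described in the Glossary under performance distribution range and service level objective, while the tran= keyword, the one-line layout, and the transaction name order_entry are assumptions to check against the ttd.conf delivered with your system.

    tran=order_entry range=0.5,1,2,5,10 slo=2

With these five range values, six bins are reported: upper ranges of 0.5, 1, 2, 5, and 10 seconds, plus the overflow bin (shown as -2). A completed order_entry transaction that takes longer than 2 seconds would also be counted against the SLO.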
TT_ABORT_WALL_TIME_CUM TT_CLIENT_ABORT_WALL_TIME_CUM -------------------- The total time, in seconds, of all aborted transactions over the cumulative collection time for this transaction class. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. TT_APPNO -------------------- The registered ARM Application/User ID for this transaction class. TT_APP_NAME -------------------- The registered ARM Application name. TT_CLIENT_ADDRESS TT_INSTANCE_CLIENT_ADDRESS -------------------- The correlator address. This is the address where the child transaction originated. TT_CLIENT_ADDRESS_FORMAT TT_INSTANCE_CLIENT_ADDRESS_FORMAT -------------------- The correlator address format. This shows the protocol family for the client network address. Refer to the ARM API Guide for the list and description of supported address formats. TT_CLIENT_CORRELATOR_COUNT -------------------- The number of client or child transaction correlators this transaction has started over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. TT_CLIENT_TRAN_ID TT_INSTANCE_CLIENT_TRAN_ID -------------------- A numerical ID that uniquely identifies the transaction class in this correlator. TT_COUNT TT_CLIENT_COUNT -------------------- The number of completed transactions during the last interval for this transaction. TT_COUNT_CUM TT_CLIENT_COUNT_CUM -------------------- The number of completed transactions over the cumulative collection time for this transaction. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. TT_FAILED TT_CLIENT_FAILED -------------------- The number of Failed transactions during the last interval for this transaction name. TT_FAILED_CUM TT_CLIENT_FAILED_CUM -------------------- The number of failed transactions over the cumulative collection time for this transaction name. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. TT_FAILED_WALL_TIME TT_CLIENT_FAILED_WALL_TIME -------------------- The total time, in seconds, of all failed transactions during the last interval for this transaction name. TT_FAILED_WALL_TIME_CUM TT_CLIENT_FAILED_WALL_TIME_CUM -------------------- The total time, in seconds, of all failed transactions over the cumulative collection time for this transaction name. 
The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. TT_INFO -------------------- The registered ARM Transaction Information for this transaction. TT_INPROGRESS_COUNT -------------------- The number of transactions in progress (started, but not stopped) at the end of the interval for this transaction class. TT_INSTANCE_ID -------------------- A numerical ID that uniquely identifies this transaction instance at the end of the interval. TT_INSTANCE_PROC_ID -------------------- The ID of the process that started or last updated the transaction instance. TT_INSTANCE_START_TIME -------------------- The time this transaction instance started. TT_INSTANCE_STOP_TIME -------------------- The time this transaction instance stopped. If the transaction instance is currently active, the value returned will be -1. It will be shown as “na” in Glance and GPM to indicate that the transaction instance did not stop during the interval. TT_INSTANCE_THREAD_ID -------------------- The ID of the kernel thread that started or last updated the transaction instance. TT_INSTANCE_UPDATE_COUNT -------------------- The number of times this transaction instance called update since the start of this transaction instance. TT_INSTANCE_UPDATE_TIME -------------------- The time this transaction instance last called update. If the transaction instance is currently active, the value returned will be -1. It will be shown as “na” in Glance and GPM to indicate that a call to update did not occur during the interval. TT_INSTANCE_WALL_TIME -------------------- The elapsed time since this transaction instance was started. TT_INTERVAL TT_CLIENT_INTERVAL -------------------- The amount of time in the collection interval. TT_INTERVAL_CUM TT_CLIENT_INTERVAL_CUM -------------------- The amount of time over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. TT_MEASUREMENT_COUNT -------------------- The number of user defined measurements for this transaction class. TT_NAME -------------------- The registered transaction name for this transaction. TT_SLO_COUNT TT_CLIENT_SLO_COUNT -------------------- The number of completed transactions that violated the defined Service Level Objective (SLO) by exceeding the SLO threshold time during the interval. TT_SLO_COUNT_CUM TT_CLIENT_SLO_COUNT_CUM -------------------- The number of completed transactions that violated the defined Service Level Objective by exceeding the SLO threshold time over the cumulative collection time. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. TT_SLO_PERCENT -------------------- The percentage of transactions which violate service level objectives. 
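The TT_* metrics above are populated by ARM API calls made from instrumented applications. The following minimal C sketch uses the ARM 2.0 call sequence referred to in these definitions (the five-argument arm_init form shown under TT_UNAME below, plus arm_getid, arm_start, and arm_stop). The arm_int32_t type and the ARM_GOOD, ARM_FAILED, and ARM_ABORT constants are assumed to come from the arm.h header on your system, so treat this as an outline rather than a verified program.

    #include <arm.h>   /* ARM 2.0 API declarations */

    int main(void)
    {
        arm_int32_t appl_id, tran_id, handle;

        /* Register the application; "*" records the caller's user name (see TT_UNAME). */
        appl_id = arm_init("armsample1", "*", 0, 0, 0);

        /* Register the transaction class; the returned ID is reported as TT_TRAN_ID. */
        tran_id = arm_getid(appl_id, "order_entry", "example transaction", 0, 0, 0);

        /* Each start/stop pair is one transaction instance. */
        handle = arm_start(tran_id, 0, 0, 0);

        /* ... work being measured; an arm_update() call here would be
           counted in TT_UPDATE and TT_INSTANCE_UPDATE_COUNT ... */

        /* Stopping with ARM_GOOD adds to TT_COUNT (and to TT_SLO_COUNT if the
           elapsed time exceeds the SLO threshold). Stopping with ARM_FAILED or
           ARM_ABORT would add to TT_FAILED or TT_ABORT instead. */
        arm_stop(handle, ARM_GOOD, 0, 0, 0);

        arm_end(appl_id, 0, 0, 0);
        return 0;
    }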
TT_SLO_THRESHOLD -------------------- The upper range (transaction time) of the Service Level Objective (SLO) threshold value. This value is used to count the number of transactions that exceed this user-supplied transaction time value. TT_TRAN_1_MIN_RATE -------------------- For this transaction name, the number of completed transactions calculated to a 1 minute rate. For example, if you completed five of these transactions in a 5 minute window, the rate is one transaction per minute. TT_TRAN_ID -------------------- The registered ARM Transaction ID for this transaction class as returned by arm_getid(). A unique transaction id is returned for a unique application id (returned by arm_init), tran name, and meta data buffer contents. TT_UID -------------------- The registered ARM Transaction User ID for this transaction name. TT_UNAME -------------------- The registered ARM Transaction User Name for this transaction. If the arm_init function has NULL for the appl_user_id field, then the user name is blank. Otherwise, if “*” was specified, then the user name is displayed. For example, to show the user name for the armsample1 program, use: appl_id = arm_init(“armsample1”,“*”,0,0,0); To ignore the user name for the armsample1 program, use: appl_id = arm_init(“armsample1”,NULL,0,0,0); TT_UPDATE TT_CLIENT_UPDATE -------------------- The number of updates during the last interval for this transaction class. This count includes update calls for completed and in progress transactions. TT_UPDATE_CUM TT_CLIENT_UPDATE_CUM -------------------- The number of updates over the cumulative collection time for this transaction class. This count includes update calls for completed and in progress transactions. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. TT_USER_MEASUREMENT_AVG TT_INSTANCE_USER_MEASUREMENT_AVG TT_CLIENT_USER_MEASUREMENT_AVG -------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the average counter differences of the transaction or transaction instance during the last interval. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this returns the average of the values passed on any ARM call for the transaction or transaction instance during the last interval. TT_USER_MEASUREMENT_MAX TT_INSTANCE_USER_MEASUREMENT_MAX TT_CLIENT_USER_MEASUREMENT_MAX -------------------- If the measurement type is a numeric or a string, this metric returns “na”. If the measurement type is a counter, this metric returns the highest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this metric returns the highest value passed on any ARM call over the life of the transaction or transaction instance. TT_USER_MEASUREMENT_MIN TT_INSTANCE_USER_MEASUREMENT_MIN TT_CLIENT_USER_MEASUREMENT_MIN -------------------- If the measurement type is a numeric or a string, this metric returns “na”. 
If the measurement type is a counter, this metric returns the lowest measured counter value over the life of the transaction or transaction instance. The counter value is the difference observed from a counter between the start and the stop (or last update) of a transaction. If the measurement type is a gauge, this metric returns the lowest value passed on any ARM call over the life of the transaction or transaction instance. TT_USER_MEASUREMENT_NAME TT_INSTANCE_USER_MEASUREMENT_NAME TT_CLIENT_USER_MEASUREMENT_NAME -------------------- The name of the user defined transactional measurement. The length of the string complies with the ARM 2.0 standard, which is 44 characters long (there are 43 usable characters since this is a NULL terminated character string). TT_USER_MEASUREMENT_STRING1024_VALUE TT_INSTANCE_USER_MEASUREMENT_STRING1024_VALUE TT_CLIENT_USER_MEASUREMENT_STRING1024_VALUE -------------------- The last value of the user defined measurement of type string 1024. TT_USER_MEASUREMENT_STRING32_VALUE TT_INSTANCE_USER_MEASUREMENT_STRING32_VALUE TT_CLIENT_USER_MEASUREMENT_STRING32_VALUE -------------------- The last value of the user defined measurement of type string 32. TT_USER_MEASUREMENT_TYPE TT_INSTANCE_USER_MEASUREMENT_TYPE TT_CLIENT_USER_MEASUREMENT_TYPE -------------------- The type of the user defined transactional measurement. 1 = ARM_COUNTER32 2 = ARM_COUNTER64 3 = ARM_CNTRDIVR32 4 = ARM_GAUGE32 5 = ARM_GAUGE64 6 = ARM_GAUGEDIVR32 7 = ARM_NUMERICID32 8 = ARM_NUMERICID64 9 = ARM_STRING8 (max 8 chars) 10 = ARM_STRING32 (max 32 chars) 11 = ARM_STRING1024 (max 1024 char) TT_USER_MEASUREMENT_VALUE TT_INSTANCE_USER_MEASUREMENT_VALUE TT_CLIENT_USER_MEASUREMENT_VALUE -------------------- The last value of the user defined measurement of type counter, gauge, numeric ID, or string 8. Both 32 and 64 bit numeric types are returned as 64 bit values. TT_WALL_TIME TT_CLIENT_WALL_TIME -------------------- The total time, in seconds, of all transactions completed during the last interval for this transaction. TT_WALL_TIME_CUM TT_CLIENT_WALL_TIME_CUM -------------------- The total time, in seconds, of all transactions completed over the cumulative collection time for this transaction. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. TT_WALL_TIME_PER_TRAN TT_CLIENT_WALL_TIME_PER_TRAN -------------------- The average transaction time, in seconds, during the last interval for this transaction. TT_WALL_TIME_PER_TRAN_CUM TT_CLIENT_WALL_TIME_PER_TRAN_CUM -------------------- The average transaction time, in seconds, over the cumulative collection time for this transaction. The cumulative collection time is defined from the point in time when either: a) the process (or kernel thread, if HP-UX) was first started, or b) the performance tool was first started, or c) the cumulative counters were reset (relevant only to GlancePlus, if available for the given platform), whichever occurred last. GLOSSARY ============ alarm -------------------- A signal that an event has occurred. The signal can be either a notification or an automatically triggered action. The event can be a pre-defined threshold that is exceeded, a network node in trouble, and so on. Alarm information can be sent to Network Node Manager (NNM) and OpenView Operations (OVO). 
Alarms can also be identified in historical log file data. alarm generator -------------------- The service that handles the communication of alarm information. It consists of the perfalarm process and the agdb database that it manages. The agdb database contains a list of nodes (if any) to which alarms are communicated, and various on/off flags that are set to define when and where the alarm information is sent. alarmdef file -------------------- The file containing the alarm definitions in which alarm conditions are specified. alert -------------------- A message sent when alarm conditions or conditions in an IF statement have been met. analysis software -------------------- Analysis software analyzes system performance data. The optional OV Performance Manager product provides a central window from which you can monitor, manage, and troubleshoot the performance of all networked systems in your computing environment, as well as analyze historical data from OV Performance Agent systems. With OV Performance Manager, you can view graphs of a system's performance data to help you diagnose and resolve performance problems quickly. application -------------------- A user-defined group of related processes or program files. Applications are defined so that performance software can collect performance metrics for and report on the combined activities of the processes and programs. available memory -------------------- Available memory is that part of physical memory not allocated by the kernel. This includes the buffer cache, user allocated memory, and free memory. backtrack -------------------- Backtracking allows the large data structures used by the Virtual Memory Manager (VMM) to be pageable. It is a method of safely allowing the VMM to handle page faults within its own critical sections of code. Examples of backtracking are: * A process page faults. * The VMM attempts to locate the missing page via its External Page table (XPT). * The VMM page faults due to the required XPT itself having been paged out. * The VMM safely saves enough information on the stack to restart the process at its first fault. * Normal VMM pagein/out routines are used to recover the missing XPT. * The required XPT is now present, so the missing page is located and paged-in. * The process continues normal execution at the original page fault. bad call -------------------- A failed NFS server call. Calls fail due to lack of system resources (lack of virtual memory) and network errors. biod -------------------- A daemon process responsible for asynchronous block IO on the NFS client. It is used to buffer read-ahead and write-behind IOs. block IO -------------------- Buffered reads and writes. Data is held in the buffer cache, then transferred in fixed-size blocks. Any hardware device that transmits and receives data in blocks is a block-mode device. Compare with character mode. block IO buffer -------------------- A buffer used to store data being transferred to or from a block-mode device through file system input and output, as opposed to character-mode or raw-mode devices. block IO operation -------------------- Any operation being carried out on a block-mode device (such as read, write, or mount). block size -------------------- The size of the primary unit of information used for a file system. It is set when a file system is created. blocked state -------------------- The reason for the last recorded process block. Also called blocked-on state.
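Referring back to the alarm, alarmdef file, and alert entries above: an alarm definition compares a metric against a threshold for a period of time and describes the alert or action to take when the condition starts, repeats, or ends. The following is a schematic illustration only, written in the general style of the alarmdef syntax; treat the exact keywords as assumptions and verify them with the utility program's alarmdef syntax check before relying on them.

    ALARM GBL_CPU_TOTAL_UTIL > 90 FOR 5 MINUTES
      START RED ALERT "CPU bottleneck probable"
      END   RESET ALERT "CPU utilization back to normal"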
bottleneck -------------------- A situation that occurs when a system resource is constrained by demand that exceeds its capability. The resource is said to be “bottlenecked.” A bottleneck causes system performance to degrade. A primary characteristic of a bottleneck is that it does not occur in all resources at the same time; other resources may instead be underutilized. buffer -------------------- A memory storage area used to temporarily hold code or data until used for input/output operations. buffer cache -------------------- An area of memory that mediates between application programs and disk drives. When a program writes data, it is first placed in the buffer cache, then delivered to the disk at a later time. This allows the disk driver to perform IO operations in batches, minimizing seek time. buffer header -------------------- Entries used by all block IO operations to point to buffers in the file system buffer cache. buffer pool -------------------- See buffer cache. cache -------------------- See buffer cache. cache efficiency -------------------- The extent to which buffered read and read-ahead requests can be satisfied by data already in the cache. cache hit -------------------- Read requests that are satisfied by data already in the buffer cache. See also cache efficiency. character mode -------------------- The mode in which data transfers are accomplished byte-by-byte, rather than in blocks. Printers, plotters, and terminals are examples of character-mode devices. Also known as raw mode. Compare with block IO. child process -------------------- A new process created at another active process' request through a fork or vfork system call. The process making the request becomes the parent process. client -------------------- A system that requests a service from a server. In the context of diskless clusters, a client uses the server's disks and has none of its own. In the context of NFS, a client mounts file systems that physically reside on another system (the Network File System server). clock hand algorithm -------------------- The algorithm used by the page daemon to scan pages. clock hand cycle -------------------- The clock hand algorithm used to control paging and to select pages for removal from system memory. When page faults and/or system demands cause the free list size to fall below a certain level, the page replacement algorithm starts the clock hand and it cycles through the page table. cluster -------------------- One or more workstations linked by a local area network (LAN) but having only one root file system. cluster server process -------------------- (CSP). A special kernel process that runs in a cluster and handles requests from remote cnodes. coda -------------------- A daemon that provides data to the alarm generator and to the OVPM and OV Reporter products from all the data sources configured in the datasources configuration file. By default, a data source for the scopeux log file set is defined. cnode -------------------- The client on a diskless system. The term cnode is derived from “client node.” collision -------------------- Occurs when the system attempts to send a packet at the same time that another system is attempting a send on the same LAN. The result is garbled transmissions and both sides have to resubmit the packet. Some collisions occur during normal operation. context switch -------------------- The action of the dispatcher (scheduler) changing from running one process to another.
The scheduler maintains algorithms for managing process switching, mostly directed by process priorities. CPU -------------------- Central Processing Unit. The part of a computer that executes program instructions. CPU queue -------------------- The average number of processes in the “run” state awaiting CPU scheduling, which includes processes in short waits for IO. This is calculated from GBL_RUN_QUEUE and the number of times this metric is updated. This is also a measure of how busy the system's CPU resource is. cyclical redundancy check -------------------- (CRC). A networking checksum protocol used to detect transmission errors. cylinder -------------------- The tracks of a disk accessible from one position of the head assembly. cylinder group -------------------- In the filesystem, a collection of cylinders on a disk drive grouped together for the purpose of localizing information. The filesystem allocates inodes and data blocks on a per-cylinder-group basis. daemon -------------------- A process that runs continuously in the background and provides important system services. data class -------------------- A particular category of data collected by a data collection process. Single-instance data classes, such as the global class, contain a single set of metrics that appear only once in any data source. Multiple-instance classes, such as the application class, may have many occurrences in a single data source, with the same set of metrics collected for each occurrence of the class. data locality -------------------- The location of data relative to associated data. Associated data has good data locality if related items are located near one another, because accesses are limited to a small number of pages and the data is more likely to be in memory. Poor data locality means associated data must be obtained from different data pages. data point -------------------- A specific point in time displayed on a performance graph where data has been summarized every five, fifteen, or thirty minutes, or every hour, two hours, or one day. data segment -------------------- A section of memory reserved for storing a process' static and dynamic data. data source -------------------- A data source consists of one or more classes of data in a single scopeux or DSI log file set. For example, the default OV Performance Agent data source, SCOPE, is a scopeux log file set consisting of global data. See also repository server. data source integration (DSI) -------------------- Enables OV Performance Server Agent to receive, log, and detect alarms on data from external sources such as applications, databases, networks, and other operating systems. datasources configuration file -------------------- A configuration file residing in the /var/opt/newconfig/ directory. Each entry in the file represents a scopeux or DSI data source consisting of a single log file set. deactivated/reactivated pages out -------------------- Pages from deactivated process regions that are moved from memory to the swap area. These pages are swapped out only when the memory is needed by another active process. When a process becomes reactivated, the pages are moved from the swap area back to memory. default -------------------- An option that is automatically selected or chosen by the system. deferred packet -------------------- A deferred packet occurs when the network hardware detects that the LAN is already in use. Rather than incur a collision, the outbound packet transmission is delayed until the LAN is available.
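As a companion to the daemon entry above: a daemon is ordinarily a process that has detached itself from any controlling terminal so that it can keep running in the background. A minimal, generic C sketch of that detachment (not taken from any HP collector source; shown only to illustrate the term) is:

    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>

    int main(void)
    {
        pid_t pid = fork();        /* parent exits, child keeps running       */
        if (pid < 0) exit(1);
        if (pid > 0) exit(0);

        setsid();                  /* start a new session, no controlling tty */
        chdir("/");                /* do not hold any mounted file system busy */

        for (;;) {
            /* ... perform the background service ... */
            sleep(30);             /* wake up periodically, like a collector  */
        }
    }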
device driver -------------------- A collection of kernel routines and data structures that handle the lowest levels of input and output between a peripheral device and executing processes. Device drivers are part of the UNIX kernel. device file -------------------- A special file that permits direct access to a hardware device. device swap space -------------------- Space devoted to swapping. directory name lookup cache -------------------- The directory name lookup cache (DNLC) is used to cache directory and file names. When a file is referenced by name, the name must be broken into its components and each component's inode must be looked up. By caching the component names, disk IOs are reduced. diskless cluster server -------------------- A system that supports disk activity for diskless client nodes. diskless file system buffer -------------------- A buffer pool that is used only by the diskless server for diskless cluster traffic. dispatcher -------------------- A module of the kernel responsible for allocating CPU resources among several competing processes. DSI log file -------------------- A log file, created by OV Performance Server Agent’s DSI (data source integration) programs, that contains self-describing data. empty space -------------------- The difference between the maximum size of a log file and its current size. error (LAN) -------------------- Unsuccessful transmission of a packet over a local area network (LAN). Inbound errors are typically checksum errors. Outbound errors are typically local hardware problems. exec fill page -------------------- When a process is 'execed', the working segments of the process are marked as copy on write. Only when segments change are they copied into a separate segment private to the process that is modifying the page. extract program -------------------- An OV Performance Agent program that helps you manage your data. In extract mode, raw or previously extracted log files can be read in, reorganized or filtered as desired, and the results are combined into a single, easy-to-manage extracted log file. In export mode, raw or previously extracted log files can be read in, reorganized or filtered as desired, and the results are written to class-specific exported data files for use in spreadsheets and analysis programs. extracted log file -------------------- An OV Performance Server Agent log file containing a user-defined subset of data extracted (copied) from raw or previously extracted log files. It is formatted for optimal access by OV Performance Manager. Extracted log files are also used for archiving performance data. file IO -------------------- IO activity to a physical disk. It includes file system IOs, system IOs to manage the file system, both raw and block activity, and excludes virtual memory management IOs. file lock -------------------- A file lock guarantees exclusive access to an entire file, or parts of a file. file system -------------------- The organization and placement of files and directories on a hard disk. The file system includes the operating system software's facilities for naming the files and controlling access to these files. file system activity -------------------- Access calls (read, write, control) of file system block IO files contained on disk. file system swap -------------------- File system space identified as available to be used as swap. This is a lower-performance method of swapping as its operations are processed through the file system.
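The file lock entry above says a lock can cover an entire file or only part of it. A minimal C sketch using the standard fcntl(2) record-locking interface (one common way such locks are taken on UNIX systems; shown only as an illustration, with "datafile" as a placeholder name) is:

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("datafile", O_RDWR);
        struct flock lk;

        if (fd < 0) return 1;

        lk.l_type   = F_WRLCK;      /* exclusive (write) lock          */
        lk.l_whence = SEEK_SET;
        lk.l_start  = 0;            /* from the start of the file ...  */
        lk.l_len    = 0;            /* ... to the end: the whole file  */

        fcntl(fd, F_SETLKW, &lk);   /* block until the lock is granted */
        /* ... update the file ... */
        lk.l_type = F_UNLCK;
        fcntl(fd, F_SETLK, &lk);    /* release the lock                */

        close(fd);
        return 0;
    }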
file table -------------------- The table containing the inode descriptors used by the user file descriptors for all open files. It is set to the maximum number of files the system can have open at any one time. fork -------------------- A system call that enables a process to duplicate itself into two identical processes - a parent and a child process. Unlike the vfork system call, the child process produced does not have access to the parent process' memory and control. free list -------------------- The system keeps a list of free pages on the system. The free list points to all the pages that are marked free. free memory -------------------- Memory not currently allocated to any user process or to the kernel. GlancePlus -------------------- An online diagnostic tool that displays current performance data directly to a user terminal or workstation. It is designed to assist you in identifying and troubleshooting system performance problems as they occur. global -------------------- A qualifier implying the whole system. Thus "global metrics" are metrics that describe the activities and states of the entire system. Similarly, application metrics describe application activity; process metrics describe process activity. global log file -------------------- The raw log file, logglob, where the scopeux collector places summarized measurements of the system-wide workload. idle -------------------- The state in which the CPU is waiting for the dispatcher (scheduler) to provide processes to execute. idle biod -------------------- The number of inactive NFS daemons on a client. inode -------------------- A reference pointer to a file. This reference pointer contains a description of the disk layout of the file data and other information, such as the file owner, access permissions, and access times. Inode is a contraction of the term 'index node'. inode cache -------------------- An in-memory table containing up-to-date information on the state of a currently referenced file. interesting process -------------------- A filter mechanism that allows the user to limit the number of process entries to view. A process becomes interesting when it is first created, when it ends, and when it exceeds user-defined thresholds for CPU use, disk use, response time, and so on. interrupt -------------------- High-priority interruptions of the CPU to notify it that something has happened. For example, a disk IO completion is an interrupt. intervals -------------------- Specific time periods during which performance data is gathered. ioctl -------------------- A system call that provides an interface to allow processes to control IO or pseudo devices. IO done -------------------- The Virtual Memory Management (VMM) system reads and writes from the disk and keeps track of how many IOs are completed by the system. Since IOs are asynchronous, they are not completed immediately. Sometimes IOs done can be higher than IO starts, since some of the IOs that were started in the previous interval can be completed in the current one. IO start -------------------- The Virtual Memory Management (VMM) system reads and writes from the disk and keeps track of how many IOs are started by the system. Since IOs are asynchronous, they are not completed immediately. InterProcess Communication (IPC) -------------------- Communication protocols used between processes. kernel -------------------- The core of the UNIX operating system. It is the code responsible for managing the computer's resources and performing functions such as allocating memory.
The kernel also performs administrative functions required for overall system performance. kernel table -------------------- An internal system table such as the Process Table or Text Table. A table's configured size can affect system behavior. last measurement reset -------------------- When you run a performance product, it starts collecting performance data. Cumulative metrics begin to accumulate at this time. When you reset measurements to zero, all cumulative metrics are set to zero and averages are reset so their values are calculated beginning with the next interval. load average -------------------- A measure of the CPU load on the system. The load average is defined as an average of the number of processes running and ready to run, as sampled over the previous one-minute interval of system operation. The kernel maintains this data. lock miss -------------------- The Virtual Memory Management (VMM) system locks pages for synchronization purposes. If the lock has to be broken for any reason, that is considered a lock miss. Usually this is a very small number. logappl (application log file) -------------------- The raw log file that contains summary measurements of processes in each user-defined application. logdev (device log file) -------------------- The raw log file that contains measurements of individual device (disk, logical volume, network interface) performance. logglob (global log file) -------------------- The raw log file that contains measurements of the system-wide, or global, workload. logindex -------------------- The raw log file that contains information required for accessing data in the other log files. logproc (process log file) -------------------- The raw log file that contains measurements of selected interesting processes. logtran (transaction log file) -------------------- The raw log file that contains measurements of transaction tracking data. log files -------------------- Performance measurement files that contain either raw or extracted log file data. logical IO -------------------- A read or write system call to a file system to obtain data. Because of the effects of buffer caching, this operation may not require a physical access to the disk if the buffer is located in the buffer cache. macro -------------------- A group of instructions that you can combine into a single instruction for the application to execute. major fault -------------------- A page fault requiring an access to disk to retrieve the page. measurement interface -------------------- A set of proprietary library calls used by the performance applications to obtain performance data. memory pressure -------------------- A situation that occurs when processes are requesting more memory space than is available. memory swap space -------------------- The part of physical memory allocated for swapping. memory thrashing -------------------- See thrashing. message buffer pool -------------------- A cache used to store all used message queue buffers on the system. message queue -------------------- The messaging mechanism allows processes to send formatted data streams to arbitrary processes. A message queue holds the buffers from which processes read the data. message table -------------------- A table that shows the maximum number of message queues allowed for the system. metric -------------------- A specific measurement that defines performance characteristics.
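The message queue and message table entries above describe the System V messaging mechanism that ipcs(1) reports on. A minimal C sketch (illustrative only; error handling is omitted, and the key and message text are placeholders) is:

    #include <string.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct note { long mtype; char mtext[64]; };

    int main(void)
    {
        /* Creating the queue consumes one entry in the message table. */
        int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);

        struct note out, in;
        out.mtype = 1;
        strcpy(out.mtext, "formatted data stream");

        msgsnd(qid, &out, sizeof(out.mtext), 0);    /* writer side */
        msgrcv(qid, &in, sizeof(in.mtext), 1, 0);   /* reader side */

        msgctl(qid, IPC_RMID, 0);   /* remove the queue when finished */
        return 0;
    }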
midaemon -------------------- A process that monitors system performance and creates counters from system event traces that are read and displayed by performance applications. minor fault -------------------- A page fault that is satisfied by a memory access (the page was not yet released from memory). mount/unmount -------------------- The process of adding or removing additional, functionally- independent file systems to or from the pool of available file systems. Network Node Manager (NNM) -------------------- A network management application that provides the network map used by OV Performance Manager. network time -------------------- The amount of time required for a particular network request to be completed. NFS call -------------------- A physical Network File System (NFS) operation a system has received or processed. NFS client -------------------- A node that requests data or services from other nodes on the network. NFS IO -------------------- A system count of the NFS calls. NFS Logical IO -------------------- A logical I/O request made to an NFS mounted file system. NFS-mounted -------------------- A file system connected by software to one system but physically residing on another system's disk. NFS server -------------------- A node that provides data or services to other nodes on the network. NFS transfer -------------------- Transfer of data packets across a local area network (LAN) to support Network File System (NFS) services. nice -------------------- Altering the priority of a time-share process, using either the nice/renice command or the nice system call. High nice values lessen the priority; low nice values increase the priority. node -------------------- A computing resource on a network, such as a networked computer system, hub, or bridge. normal CPU -------------------- CPU time spent processing user applications which have not been real-time dispatched or niced. OpenView (OV) Performance Manager -------------------- A tool that provides integrated performance management for multi-vendor distributed networks. Uses a single workstation to monitor environment performance on networks that range in size from tens to thousands of nodes. outbound read/write -------------------- The designation used when a local process requests a read from or write to a remote system via NFS. o/f (overflow) -------------------- This designates that the measurement software has detected a number that is too large to fit in the available space. packet -------------------- A unit of information that is transferred between a server and a client over the LAN. packet in/out -------------------- A request sent to the server by a client is an "in" packet. A request sent to a client by the server is an "out" packet. page -------------------- A basic unit of memory. A process is accessed in pages (demand paging) during execution. pagedaemon -------------------- A system daemon responsible for writing parts of a process' address space to secondary storage (disk) to support the paging capability of the virtual memory system. page fault -------------------- An event recorded when a process tries to execute code instructions or to reference a data page not resident in a process' mapped physical memory. The system must page-in the missing code or data to allow execution to continue. page freed -------------------- When a paging daemon puts a page in the free list, it is considered as page freed. 
page in/page out -------------------- Moving pages of data from virtual memory (disk) to physical memory (page in) or vice versa (page out). page reclaim -------------------- Virtual address space is partitioned into segments, which are then partitioned into fixed-size units called pages. There are usually two kinds of segments: persistent segments and working segments. Files containing data or executable programs are mapped into persistent segments. A persistent segment (text) has a permanent storage location on disk so the Virtual Memory Manager writes the page back to that location when the page has been modified and it is no longer kept in real memory. If the page has not changed, its frame is simply reclaimed. page request -------------------- A page fault that has to be satisfied by accessing virtual memory. page scan -------------------- The clock hand algorithm used to control paging and to select pages for removal from system memory. It scans pages to select pages for possible removal. page space -------------------- The area of a disk or memory reserved for paging out portions of processes or swapping out entire processes. Also known as swap space. page steal -------------------- Occurs when a page used by a process is taken away by the Virtual Memory Management system. pagein routine -------------------- A kernel routine that brings pages of a process' address space into physical memory. pageout routine -------------------- A kernel routine that executes when physical memory space is scarce, and the pagedaemon is activated to remove the least-needed pages from memory by writing them to swap space or to the file system. parm file -------------------- The file containing the parameters used by OV Performance Server Agent’s scopeux data collector to customize data collection. Also used to define your applications. perflbd.rc -------------------- The configuration file that contains entries for one or more data sources, each of which represents a scopeux or DSI data source. See also repository server. performance distribution range -------------------- An amount of time that you define with the range= keyword in the transaction tracking configuration file, ttd.conf. perfstat -------------------- The script used for viewing the status of all Hewlett-Packard performance products on your system. pfaults -------------------- Most resolvable pfaults (protection faults) are caused by copy-on-writes (for example, writing to private memory segments). Most other pfaults are protection violations (for example, writing to a read-only region) and result in SIGBUS. See mprotect(2). physical IO -------------------- An input/output operation where data is transferred from memory to disk or vice versa. Physical IO includes file system IO, raw IO, system IO, and virtual memory IO. physical memory -------------------- The actual hardware memory components contained within your computer system. PID -------------------- A process identifier - a process' unique identification number that distinguishes it from all other processes on the system. PPID is a parent process identifier - the process identifier of a process that forked or vforked another process. pipe -------------------- A mechanism that allows a stream of data to be passed between read and write processes. priority -------------------- The number assigned to a PID that determines its importance to the CPU scheduler. proc table -------------------- The process table that holds information for every process on the system.
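The pipe entry above describes a one-way data stream between a reading process and a writing process. A minimal C sketch (illustrative only) of a parent writing to a child through a pipe:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        char buf[32];

        pipe(fds);                 /* fds[0] = read end, fds[1] = write end */

        if (fork() == 0) {         /* child: the reading process  */
            close(fds[1]);
            read(fds[0], buf, sizeof(buf));
            printf("child read: %s\n", buf);
            _exit(0);
        }

        close(fds[0]);             /* parent: the writing process */
        write(fds[1], "hello", 6);
        close(fds[1]);
        return 0;
    }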
process -------------------- The execution of a program file. This execution can represent an interactive user (processes running at normal, nice, or real-time priorities) or an operating system process. process block -------------------- A process block occurs when a process is not executing because it is waiting for a resource or IO completion. process deactivation/reactivation -------------------- A technique used for memory management. Process deactivation marks pages of memory within a process as available for use by other more active processes. A process becomes a candidate for deactivation when physical memory becomes scarce or when a system starts thrashing. Processes are reactivated when they become ready to run. process state -------------------- Different types of tasks executed by a CPU on behalf of a process. For example: user, nice, system, and interrupt. pseudo terminal (pty) -------------------- A software device that operates in pairs. Output directed to one member of the pair is sent to the input of the other member. Input is sent to the upstream module. queue -------------------- A waiting line in which unsatisfied requests are placed until a resource becomes available. raw IO -------------------- Unbuffered input/output that transfers data directly between a disk device and the user program requesting the data. It bypasses the file system's buffer cache. Also known as character mode. Compare with block mode. raw log file -------------------- An OV Performance Server Agent file into which scopeux (on UNIX systems) or scopeNT (on Windows systems) logs collected data. It contains summarized measurements of system data. See logglob, logappl, logproc, logdev, logtran, and logindx. read byte rate -------------------- The rate of kilobytes per second the system sent or received while doing read operations. read rate -------------------- The number of NFS and local read operations per second a system has processed. Read operations consist of getattr, lookup, readlink, readdir, null, root, statfs, and read. Read/write Qlen -------------------- The number of pending NFS operations. read/write system call -------------------- A request that a program uses to tell the kernel to perform a specific service on the program's behalf. When the user requests a read, a read system call is activated. When the user requests a write, a write system call is activated. real time -------------------- The actual time in which an event takes place. real time cpu -------------------- Time the CPU spent executing processes that have a real-time priority. remote swapping -------------------- Swapping that uses swap space from a pool located on a different system's swap device. This type of swapping is often used by diskless systems that swap on a server machine. repeat time -------------------- An action that can be selected for performance alarms. Repeat time designates the amount of time that must pass before an activated and continuing alarm condition triggers another alarm signal. repository server -------------------- A server that provides data to the OVPM and OV Reporter products. There is one repository for each data source configured in the perflbd.rc configuration file. A default repository server, provided at startup, contains a single data source consisting of a scopeux log file set. reserved swap space -------------------- An area set aside on your disk for virtual memory. resident buffer -------------------- Data stored in physical memory.
resident memory -------------------- Information currently loaded into memory for the execution of a process. resident set size -------------------- The amount of physical memory a process is using. It includes memory allocated for the process' data, stack, and text segments. resize -------------------- Changing the overall size of a raw log file. response time -------------------- The time spent to service all NFS operations. roll back -------------------- Deleting one or more days' worth of data from a raw log file with the oldest data deleted first. Roll backs are performed when a raw log file exceeds its maximum size parameter. rxlog -------------------- The default extract log file created when data is extracted from raw log files. SCOPE -------------------- The OV Performance default data source that contains a scopeux (on UNIX systems) or scopeNT (on Windows systems) global log file set. scopeux -------------------- On UNIX systems, the OV Performance Server Agent data collector program that collects performance data and writes (logs) it to raw log files for later analysis or archiving. scopeNT -------------------- On Windows systems, the OV Performance Server Agent data collector program that collects performance data and writes (logs) it to raw log files for later analysis or archiving. scopeux log files -------------------- On UNIX systems, the raw log files that are created by the scopeux collector: logglob, logappl, logproc, logdev, logtran, and logindx. scopeNT log files -------------------- On Windows systems, the raw log files that are created by the scopeNT collector: logglob, logappl, logproc, logdev, logtran, and logindx. semaphore -------------------- Special types of flags used for signaling between two cooperating processes. They are typically used to guard critical sections of code that modify shared data structures. semaphore table -------------------- The maximum number of semaphores currently allowed for the system. service level agreement -------------------- A document prepared for a business-critical application that explicitly defines the service level objectives that IT (Information Technology) is expected to deliver to users. It specifies what the users can expect in terms of system response, quantities of work, and system availability. service level objective -------------------- A definable level of responsiveness for a transaction. For example, if you decide that all database updates must occur within 2 seconds, set the Service Level Objective (SLO) for that transaction as slo=2. shared memory -------------------- System memory allocated for sharing data among processes. It includes shared text, data, and stack. shared memory pool -------------------- The cache in which shared memory segments are stored. shared memory segment -------------------- A portion of a system's memory dedicated to sharing data for several processes. shared memory table -------------------- A list of entries that identifies shared memory segments currently allocated on your system. shared text segment -------------------- Code shared between several processes. signal -------------------- A software event to notify a process of a change. Similar to a hardware interrupt. sleeping process -------------------- A process that either has blocked itself or that has been blocked, and is placed in a waiting state. socket operation -------------------- A process that creates an endpoint for communication and returns a descriptor for use in all subsequent socket-related system calls.
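The semaphore and shared memory entries above correspond to the TBL_SEM_* and TBL_SHMEM_* metrics defined earlier: TBL_SEM_TABLE_USED and TBL_SHMEM_TABLE_USED count identifiers created with semget(2) and shmget(2), and TBL_SHMEM_TABLE_ACTIVE counts segments that currently have processes attached. A minimal C sketch (illustrative only; the key, size, and permissions are placeholders and error handling is omitted) is:

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/sem.h>

    int main(void)
    {
        /* One new shared memory identifier: counted in TBL_SHMEM_TABLE_USED. */
        int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

        /* Attaching makes the segment active (TBL_SHMEM_TABLE_ACTIVE). */
        char *addr = (char *) shmat(shmid, 0, 0);

        /* One new semaphore identifier (a set of one semaphore):
           counted in TBL_SEM_TABLE_USED.                            */
        int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);

        /* ... share data and synchronize here; ipcs(1) would list both ... */

        shmdt(addr);                    /* detach the segment          */
        shmctl(shmid, IPC_RMID, 0);     /* remove the shared memory id */
        semctl(semid, 0, IPC_RMID);     /* remove the semaphore id     */
        return 0;
    }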
start of collection -------------------- When you run a performance product, it starts collecting performance data. summary data -------------------- The time period represented in one data point of a performance measurement. Summary levels can be five minutes, one hour, and one day. swap -------------------- A memory management technique used to shuttle information between the main memory and a dedicated area on a disk (swap space). Swapping allows the system to run more processes than could otherwise fit into the main memory at a given time. swap in/out -------------------- Moving information between the main memory and a dedicated (reserved) area on a disk. ''Swapping in'' is reading in to virtual memory; ''swapping out'' is reading out from virtual memory. swap space -------------------- The area of a disk or memory reserved for swapping out entire processes or paging out portions of processes. Also known as page space. system call -------------------- A command that a program uses to tell the kernel to perform a specific service on the program's behalf. This is the user's and application programmer's interface to the UNIX kernel. system code -------------------- Kernel code that is executed through system calls. system CPU -------------------- Time that the CPU was busy executing kernel code. Also called kernel mode. system disk -------------------- Physical disk IO generated for file system management. These include inode access, super block access and cylinder group access. system interrupt handling code -------------------- Kernel code that processes interrupts. terminal transaction -------------------- A terminal transaction occurs whenever a read is completed to a terminal device or MPE message file. On a terminal device, a read is normally completed when the user presses the return or the enter key. Some devices such as serial printers may satisfy terminal reads by returning hardware status information. Several metrics are collected to characterize terminal transactions. The FIRST_RESPONSE_TIME metric measures the time between the completion of the read and the completion of the first write back to that device. This metric is most often quoted in bench marks as it yields the quickest response time. For transactions which return a large amount of data to the terminal, such as reading an electronic mail message, the time to first response may be the best indicator of overall system responsiveness. The RESPONSE_TIME_TO_PROMPT metric measures the time between the completion of the read and the posting of the next read. It is the amount of time that a user must wait before being able to enter the next transaction. This response time includes the amount of time it took to write data back to the terminal as a result of the transaction. The response time to prompt is the best metric for determining the limits of transaction throughput. The THINK_TIME metric measures the time between posting a read and its completion. It is a measure of how much time the user took to examine the results of the transaction and then complete entering the next transaction. Transaction metrics are expressed as average times per transaction and as total times in seconds. Total times are calculated by multiplying the average time per transaction times the number of transactions completed. Terminal transactions can be created by interactive or batch processes that do reads to terminal devices or message files. 
Reads to terminal devices or message files done by system processes will not be counted as transactions. text segment -------------------- A memory segment that holds executable program code. thrashing -------------------- A condition in which a system is spending too much time swapping data in and out, and too little time doing useful work. This is characteristic of situations in which either too many page faults are being created or too much swapping is occurring. Thrashing causes the system's performance to degrade and the response time for interactive users to increase. threadpool queue -------------------- A queue of requests waiting for an available server thread. threshold -------------------- Numerical values that can be set to define alarm conditions. When a threshold is surpassed, an alarm is triggered. tooltip -------------------- Display of the full text of a truncated data string in a row-column formatted GlancePlus report window. Tooltips are enabled and disabled by choosing Tooltips from the window's Configure menu or by clicking the "T" button in the upper right corner of the window. transaction -------------------- Some amount of work performed by a computer system on behalf of a user. The boundaries of this work are defined by the user. transaction tracking -------------------- The technology used by OV Performance Agent and GlancePlus that lets information technology (IT) managers measure end-to-end response time of business application transactions. trap -------------------- A software interrupt that requires service from a trap handler routine. An example would be a floating point exception on a system that does not have floating point hardware support. This requires the floating point operations to be emulated in the software trap handler code. trap handler code -------------------- Traps are measured when the kernel executes the code in the trap handler routine. For a list of trap types, refer to the file /usr/include/machine/trap.h. ttd.conf -------------------- The transaction tracking configuration file where you define each transaction and the information to be tracked for each transaction, such as transaction name, performance distribution range, and service level objective. unmount/mount -------------------- The process of removing or adding functionally-independent file systems from or to the root file system. update interval -------------------- The interval of time between updates of the metrics that display in a report window or graph. user code -------------------- Code that does not perform system calls. user CPU -------------------- Time that the CPU was busy executing user code. This includes time spent executing non-kernel code by daemon processes. It does not include CPU time spent executing system calls, context switching, or interrupt handling. user disk -------------------- Physical disk IO generated by accessing the file system. utility program -------------------- An OV Performance Server Agent program that lets you check parm file and alarmdef file syntax, resize log files, scan log files for information, and obtain alarm information from historical log file data. vfault CPU -------------------- CPU time spent handling page faults. vfaults -------------------- A vfault (virtual fault) is the mechanism that causes paging. Accessing an unmapped valid page causes a resolvable vfault. Accessing an illegal address results in a SIGSEGV.
vfork -------------------- A version of the fork system call that spawns a child process that is capable of sharing code and data with its parent process. virtual memory -------------------- Secondary memory that exists on a portion of a disk or other storage device. It is used as an extension of the primary physical memory. virtual memory IO -------------------- The virtual memory reads or writes from the disk for memory-mapped files, and for paging out pages from the paging area (swap area). Since all the files are memory mapped, all the reads or writes are virtual memory reads or writes as well. The computational (changing) memory of processes is paged out to the swap area if necessary and is read or written from there as well. write byte rate -------------------- The rate of kilobytes per second the system sent or received during write operations. write rate -------------------- The number of NFS and local write operations the local machine has processed per second. Write operations include setattr, writecache, create, remove, rename, link, symlink, mkdir, rmdir, and write. X-Axis -------------------- The horizontal scale on a graph. Y-Axis -------------------- The vertical scale on a graph. zero fill page -------------------- When pages are requested by processes, they are usually allocated by the Virtual Memory Management system and filled with zeros.