HP Operations Agent - Performance Collection Component for Windows
Dictionary of Operating System Performance Metrics
Print Date 12/2012
HP Operations Agent for Windows Release 11.11
*************************************************************
Legal Notices
=============
Warranty
--------
The only warranties for HP products and services are set forth in the express warranty
statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. HP shall not be liable for technical or editorial errors or
omissions contained herein.
The information contained herein is subject to change without notice.
Restricted Rights Legend
------------------------
Confidential computer software. Valid license from HP required for possession, use or copying.
Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software
Documentation, and Technical Data for Commercial Items are licensed to the U.S.
Government under vendor's standard commercial license.
Copyright Notices
-----------------
©Copyright 2010-2012 Hewlett-Packard Development Company, L.P. All
rights reserved.
*****************************************************************
Introduction
============
This dictionary contains definitions of the Windows operating
system performance metrics for the Performance Collection Component.
This document is divided into the following sections:
* "Metric Names by Data Class," which lists the metrics
alphabetically by data class. Use these metric names for
exporting data with the extract utility. You can also use
these metric names in defining alarm conditions in your
alarmdef file.
* "Metric Definitions," which describes each metric in
alphabetical order.
Note that the metric help text has been written in a more generic format, and
references are made to the other platforms that also support each metric.
Metric Names by Data Class
==========================
Windows Global Metrics
----------------------------------
BLANK
DATE
DATE_SECONDS
DAY
INTERVAL
RECORD_TYPE
TIME
YEAR
GBL_ACTIVE_CPU
GBL_ACTIVE_CPU_CORE
GBL_ACTIVE_PROC
GBL_ALIVE_PROC
GBL_COMPLETED_PROC
GBL_CPU_CLOCK
GBL_CPU_ENTL_UTIL
GBL_CPU_HISTOGRAM
GBL_CPU_IDLE_TIME
GBL_CPU_IDLE_UTIL
GBL_CPU_INTERRUPT_TIME
GBL_CPU_INTERRUPT_UTIL
GBL_CPU_MT_ENABLED
GBL_CPU_PHYSC
GBL_CPU_PHYS_TOTAL_UTIL
GBL_CPU_SYS_MODE_TIME
GBL_CPU_SYS_MODE_UTIL
GBL_CPU_TOTAL_TIME
GBL_CPU_TOTAL_UTIL
GBL_CPU_USER_MODE_TIME
GBL_CPU_USER_MODE_UTIL
GBL_CSWITCH_RATE
GBL_DISK_CACHE_READ
GBL_DISK_CACHE_READ_RATE
GBL_DISK_HISTOGRAM
GBL_DISK_LOGL_READ
GBL_DISK_LOGL_READ_RATE
GBL_DISK_PHYS_BYTE
GBL_DISK_PHYS_BYTE_RATE
GBL_DISK_PHYS_IO
GBL_DISK_PHYS_IO_RATE
GBL_DISK_PHYS_READ
GBL_DISK_PHYS_READ_BYTE_RATE
GBL_DISK_PHYS_READ_PCT
GBL_DISK_PHYS_READ_RATE
GBL_DISK_PHYS_WRITE
GBL_DISK_PHYS_WRITE_BYTE_RATE
GBL_DISK_PHYS_WRITE_RATE
GBL_DISK_REQUEST_QUEUE
GBL_DISK_TIME_PEAK
GBL_DISK_UTIL_PEAK
GBL_FS_SPACE_UTIL_PEAK
GBL_INTERRUPT
GBL_INTERRUPT_RATE
GBL_INTERVAL
GBL_LOADAVG
GBL_MACHINE_MEM_USED
GBL_MEM_CACHE
GBL_MEM_CACHE_FLUSH_RATE
GBL_MEM_CACHE_HIT_PCT
GBL_MEM_CACHE_UTIL
GBL_MEM_DATAMAP_HIT_PCT
GBL_MEM_FREE
GBL_MEM_FREE_UTIL
GBL_MEM_LOCKED
GBL_MEM_LOCKED_UTIL
GBL_MEM_OVERHEAD
GBL_MEM_PAGEIN
GBL_MEM_PAGEIN_RATE
GBL_MEM_PAGEOUT
GBL_MEM_PAGEOUT_RATE
GBL_MEM_PAGE_FAULT
GBL_MEM_PAGE_FAULT_RATE
GBL_MEM_PAGE_REQUEST
GBL_MEM_PAGE_REQUEST_RATE
GBL_MEM_PHYS_SWAPPED
GBL_MEM_SYS
GBL_MEM_SYS_AND_CACHE_UTIL
GBL_MEM_SYS_UTIL
GBL_MEM_USER
GBL_MEM_USER_UTIL
GBL_MEM_UTIL
GBL_NET_DEFERRED_PCT
GBL_NET_ERROR
GBL_NET_ERROR_1_MIN_RATE
GBL_NET_ERROR_RATE
GBL_NET_IN_ERROR_PCT
GBL_NET_IN_ERROR_RATE
GBL_NET_IN_PACKET
GBL_NET_IN_PACKET_RATE
GBL_NET_OUTQUEUE
GBL_NET_OUT_ERROR_PCT
GBL_NET_OUT_ERROR_RATE
GBL_NET_OUT_PACKET
GBL_NET_OUT_PACKET_RATE
GBL_NET_PACKET_RATE
GBL_NET_UTIL_PEAK
GBL_NUM_NETWORK
GBL_NUM_USER
GBL_PROC_RUN_TIME
GBL_PROC_SAMPLE
GBL_RUN_QUEUE
GBL_SRV_WRKITM_SHORTAGES
GBL_STARTED_PROC
GBL_STATTIME
GBL_SWAP_SPACE_USED
GBL_SWAP_SPACE_UTIL
GBL_SYSCALL
GBL_SYSCALL_RATE
GBL_SYSTEM_UPTIME_HOURS
GBL_SYSTEM_UPTIME_SECONDS
GBL_TT_OVERFLOW_COUNT
GBL_WEB_CACHE_HIT_PCT
GBL_WEB_CGI_REQUEST_RATE
GBL_WEB_CONNECTION_RATE
GBL_WEB_FILES_RECEIVED_RATE
GBL_WEB_FILES_SENT_RATE
GBL_WEB_FTP_READ_BYTE_RATE
GBL_WEB_FTP_WRITE_BYTE_RATE
GBL_WEB_GET_REQUEST_RATE
GBL_WEB_GOPHER_READ_BYTE_RATE
GBL_WEB_GOPHER_WRITE_BYTE_RATE
GBL_WEB_HEAD_REQUEST_RATE
GBL_WEB_HTTP_READ_BYTE_RATE
GBL_WEB_HTTP_WRITE_BYTE_RATE
GBL_WEB_ISAPI_REQUEST_RATE
GBL_WEB_LOGON_FAILURES
GBL_WEB_NOT_FOUND_ERRORS
GBL_WEB_OTHER_REQUEST_RATE
GBL_WEB_POST_REQUEST_RATE
GBL_WEB_READ_BYTE_RATE
GBL_WEB_WRITE_BYTE_RATE
STATDATE
STATTIME
Windows Application Metrics
----------------------------------
BLANK
DATE
DATE_SECONDS
DAY
INTERVAL
RECORD_TYPE
TIME
YEAR
APP_ACTIVE_PROC
APP_ALIVE_PROC
APP_COMPLETED_PROC
APP_CPU_SYS_MODE_TIME
APP_CPU_SYS_MODE_UTIL
APP_CPU_TOTAL_TIME
APP_CPU_TOTAL_UTIL
APP_CPU_USER_MODE_TIME
APP_CPU_USER_MODE_UTIL
APP_IO_BYTE
APP_IO_BYTE_RATE
APP_MEM_RES
APP_MEM_UTIL
APP_MEM_VIRT
APP_MINOR_FAULT_RATE
APP_NAME
APP_NUM
APP_PRI
APP_PROC_RUN_TIME
APP_SAMPLE
Windows Process Metrics
----------------------------------
BLANK
DATE
DATE_SECONDS
DAY
INTERVAL
RECORD_TYPE
TIME
YEAR
PROC_APP_ID
PROC_CPU_ALIVE_SYS_MODE_UTIL
PROC_CPU_ALIVE_TOTAL_UTIL
PROC_CPU_ALIVE_USER_MODE_UTIL
PROC_CPU_SYS_MODE_TIME
PROC_CPU_SYS_MODE_UTIL
PROC_CPU_TOTAL_TIME
PROC_CPU_TOTAL_TIME_CUM
PROC_CPU_TOTAL_UTIL
PROC_CPU_TOTAL_UTIL_CUM
PROC_CPU_USER_MODE_TIME
PROC_CPU_USER_MODE_UTIL
PROC_INTEREST
PROC_INTERVAL_ALIVE
PROC_IO_BYTE
PROC_IO_BYTE_CUM
PROC_IO_BYTE_RATE
PROC_IO_BYTE_RATE_CUM
PROC_MEM_LOCKED
PROC_MEM_RES
PROC_MEM_VIRT
PROC_MINOR_FAULT
PROC_PARENT_PROC_ID
PROC_PRI
PROC_PROC_ID
PROC_PROC_NAME
PROC_RUN_TIME
PROC_STARTTIME
PROC_THREAD_COUNT
PROC_USER_NAME
Windows Transaction Metrics
----------------------------------
BLANK
DATE
DATE_SECONDS
DAY
INTERVAL
RECORD_TYPE
TIME
YEAR
TTBIN_TRANS_COUNT_1
TTBIN_TRANS_COUNT_10
TTBIN_TRANS_COUNT_2
TTBIN_TRANS_COUNT_3
TTBIN_TRANS_COUNT_4
TTBIN_TRANS_COUNT_5
TTBIN_TRANS_COUNT_6
TTBIN_TRANS_COUNT_7
TTBIN_TRANS_COUNT_8
TTBIN_TRANS_COUNT_9
TTBIN_UPPER_RANGE_1
TTBIN_UPPER_RANGE_10
TTBIN_UPPER_RANGE_2
TTBIN_UPPER_RANGE_3
TTBIN_UPPER_RANGE_4
TTBIN_UPPER_RANGE_5
TTBIN_UPPER_RANGE_6
TTBIN_UPPER_RANGE_7
TTBIN_UPPER_RANGE_8
TTBIN_UPPER_RANGE_9
TT_ABORT
TT_ABORT_WALL_TIME_PER_TRAN
TT_APP_NAME
TT_APP_TRAN_NAME
TT_CLIENT_ADDRESS
TT_CLIENT_ADDRESS_FORMAT
TT_CLIENT_TRAN_ID
TT_COUNT
TT_FAILED
TT_INFO
TT_NAME
TT_NUM_BINS
TT_SLO_COUNT
TT_SLO_PERCENT
TT_SLO_THRESHOLD
TT_TERM_TRAN_1_HR_RATE
TT_TRAN_1_MIN_RATE
TT_TRAN_ID
TT_UNAME
TT_USER_MEASUREMENT_AVG
TT_USER_MEASUREMENT_AVG_2
TT_USER_MEASUREMENT_AVG_3
TT_USER_MEASUREMENT_AVG_4
TT_USER_MEASUREMENT_AVG_5
TT_USER_MEASUREMENT_AVG_6
TT_USER_MEASUREMENT_COUNT
TT_USER_MEASUREMENT_COUNT_2
TT_USER_MEASUREMENT_COUNT_3
TT_USER_MEASUREMENT_COUNT_4
TT_USER_MEASUREMENT_COUNT_5
TT_USER_MEASUREMENT_COUNT_6
TT_USER_MEASUREMENT_MAX
TT_USER_MEASUREMENT_MAX_2
TT_USER_MEASUREMENT_MAX_3
TT_USER_MEASUREMENT_MAX_4
TT_USER_MEASUREMENT_MAX_5
TT_USER_MEASUREMENT_MAX_6
TT_USER_MEASUREMENT_MIN
TT_USER_MEASUREMENT_MIN_2
TT_USER_MEASUREMENT_MIN_3
TT_USER_MEASUREMENT_MIN_4
TT_USER_MEASUREMENT_MIN_5
TT_USER_MEASUREMENT_MIN_6
TT_USER_MEASUREMENT_NAME
TT_USER_MEASUREMENT_NAME_2
TT_USER_MEASUREMENT_NAME_3
TT_USER_MEASUREMENT_NAME_4
TT_USER_MEASUREMENT_NAME_5
TT_USER_MEASUREMENT_NAME_6
TT_WALL_TIME_PER_TRAN
Windows Disk Metrics
----------------------------------
BLANK
DATE
DATE_SECONDS
DAY
INTERVAL
RECORD_TYPE
TIME
YEAR
BYDSK_AVG_SERVICE_TIME
BYDSK_BUSY_TIME
BYDSK_DEVNAME
BYDSK_HISTOGRAM
BYDSK_ID
BYDSK_PHYS_BYTE
BYDSK_PHYS_BYTE_RATE
BYDSK_PHYS_IO
BYDSK_PHYS_IO_RATE
BYDSK_PHYS_READ
BYDSK_PHYS_READ_BYTE
BYDSK_PHYS_READ_BYTE_RATE
BYDSK_PHYS_READ_RATE
BYDSK_PHYS_WRITE
BYDSK_PHYS_WRITE_BYTE
BYDSK_PHYS_WRITE_BYTE_RATE
BYDSK_PHYS_WRITE_RATE
BYDSK_REQUEST_QUEUE
BYDSK_UTIL
Windows Network Interface Metrics
----------------------------------
BLANK
DATE
DATE_SECONDS
DAY
INTERVAL
RECORD_TYPE
TIME
YEAR
BYNETIF_ERROR
BYNETIF_ERROR_RATE
BYNETIF_ID
BYNETIF_IN_BYTE
BYNETIF_IN_BYTE_RATE
BYNETIF_IN_PACKET
BYNETIF_IN_PACKET_RATE
BYNETIF_NAME
BYNETIF_NET_SPEED
BYNETIF_OUT_BYTE
BYNETIF_OUT_BYTE_RATE
BYNETIF_OUT_PACKET
BYNETIF_OUT_PACKET_RATE
BYNETIF_PACKET_RATE
BYNETIF_QUEUE
BYNETIF_UTIL
BYPROTOCOL_IN_PACKET
BYPROTOCOL_IN_PACKET_RATE
BYPROTOCOL_OUT_PACKET
BYPROTOCOL_OUT_PACKET_RATE
Windows CPU Metrics
----------------------------------
BLANK
DATE
DATE_SECONDS
DAY
INTERVAL
RECORD_TYPE
TIME
YEAR
BYCPU_CPU_CLOCK
BYCPU_CPU_SYS_MODE_TIME
BYCPU_CPU_SYS_MODE_UTIL
BYCPU_CPU_TOTAL_TIME
BYCPU_CPU_TOTAL_UTIL
BYCPU_CPU_USER_MODE_TIME
BYCPU_CPU_USER_MODE_UTIL
BYCPU_ID
BYCPU_INTERRUPT
BYCPU_INTERRUPT_RATE
BYCPU_STATE
Windows Filesystem Metrics
----------------------------------
BLANK
DATE
DATE_SECONDS
DAY
INTERVAL
RECORD_TYPE
TIME
YEAR
FS_BLOCK_SIZE
FS_DEVNAME
FS_DEVNO
FS_DIRNAME
FS_MAX_SIZE
FS_REQUEST_QUEUE
FS_SPACE_RESERVED
FS_SPACE_USED
FS_SPACE_UTIL
FS_TYPE
Windows Configuration Metrics
----------------------------------
BLANK
DATE
DATE_SECONDS
DAY
INTERVAL
RECORD_TYPE
TIME
YEAR
GBL_APP_THRESHOLD
GBL_BOOT_TIME
GBL_BYCPU_THRESHOLD
GBL_BYDSK_THRESHOLD
GBL_BYFS_THRESHOLD
GBL_BYNETIF_THRESHOLD
GBL_COLLECTOR
GBL_COLLECT_INTERVAL
GBL_COLLECT_INTERVAL_PROC
GBL_CPU_CYCLE_ENTL_MAX
GBL_CPU_CYCLE_ENTL_MIN
GBL_CPU_ENTL_MAX
GBL_CPU_ENTL_MIN
GBL_CPU_SHARES_PRIO
GBL_FLUSH
GBL_GMTOFFSET
GBL_IGNORE_MT
GBL_LOGFILE_VERSION
GBL_LOGGING_TYPES
GBL_LS_MODE
GBL_LS_ROLE
GBL_LS_SHARED
GBL_LS_TYPE
GBL_MACHINE
GBL_MEM_AVAIL
GBL_MEM_ENTL_MAX
GBL_MEM_ENTL_MIN
GBL_MEM_PHYS
GBL_MEM_SHARES_PRIO
GBL_NUM_ACTIVE_LS
GBL_NUM_CPU
GBL_NUM_CPU_CORE
GBL_NUM_DISK
GBL_NUM_LS
GBL_NUM_SOCKET
GBL_OSNAME
GBL_OSRELEASE
GBL_OSVERSION
GBL_SWAP_SPACE_AVAIL
GBL_SWAP_SPACE_AVAIL_KB
GBL_SYSTEM_ID
GBL_THRESHOLD_CPU
GBL_THRESHOLD_NOKILLED
GBL_THRESHOLD_NONEW
GBL_THRESHOLD_PROCMEM
Windows Logical System Metrics
----------------------------------
BLANK
DATE
DATE_SECONDS
DAY
INTERVAL
RECORD_TYPE
TIME
YEAR
BYLS_CPU_ENTL_MAX
BYLS_CPU_ENTL_MIN
BYLS_CPU_ENTL_UTIL
BYLS_CPU_PHYSC
BYLS_CPU_PHYS_SYS_MODE_UTIL
BYLS_CPU_PHYS_TOTAL_UTIL
BYLS_CPU_PHYS_USER_MODE_UTIL
BYLS_CPU_SHARES_PRIO
BYLS_DISPLAY_NAME
BYLS_HYPCALL
BYLS_HYP_UTIL
BYLS_LS_HOSTNAME
BYLS_LS_ID
BYLS_LS_MODE
BYLS_LS_NAME
BYLS_LS_OSTYPE
BYLS_LS_PATH
BYLS_LS_PROC_ID
BYLS_LS_SHARED
BYLS_LS_STATE
BYLS_LS_UUID
BYLS_MEM_ENTL
BYLS_NUM_CPU
BYLS_NUM_DISK
BYLS_NUM_NETIF
BYLS_UPTIME_SECONDS
Metric Definitions
==================
APP_ACTIVE_PROC
----------------------------------
An active process is one that exists and consumes some CPU time.
APP_ACTIVE_PROC is the sum of the alive-process-time/interval-time ratios of
every process belonging to an application that is active (uses any CPU time)
during an interval.
The following diagram of a four-second interval, showing two processes (A and
B) belonging to an application, illustrates the definition above. Note the
difference between active processes, which consume CPU time, and alive
processes, which merely exist on the system.
           ----------- Seconds -----------
  Proc         1         2        3       4
  ----     --------  --------  ------  ------
   A          live      live     live    live
   B        live/CPU  live/CPU   live    dead
Process A is alive for the entire four-second interval, but consumes no CPU.
A’s contribution to APP_ALIVE_PROC is 4*1/4 (= 1.0); it contributes 0*1/4
(= 0.0) to APP_ACTIVE_PROC. B’s contribution to APP_ALIVE_PROC is 3*1/4
(= 0.75); it contributes 2*1/4 (= 0.5) to APP_ACTIVE_PROC. Thus, for this
interval, APP_ACTIVE_PROC equals 0.5 and APP_ALIVE_PROC equals 1.75.
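The arithmetic above can be reproduced with the following Python sketch
(illustrative only; the per-second state strings and the interval data are
hypothetical, not structures used by the agent):

    # Sketch: APP_ALIVE_PROC and APP_ACTIVE_PROC for one interval, from
    # per-second process states matching the diagram above.
    INTERVAL_SECONDS = 4

    states = {
        "A": ["live", "live", "live", "live"],
        "B": ["live/CPU", "live/CPU", "live", "dead"],
    }

    alive_proc = sum(
        sum(s != "dead" for s in secs) / INTERVAL_SECONDS
        for secs in states.values()
    )
    active_proc = sum(
        sum(s == "live/CPU" for s in secs) / INTERVAL_SECONDS
        for secs in states.values()
    )

    print(alive_proc)   # 1.75 -> APP_ALIVE_PROC
    print(active_proc)  # 0.5  -> APP_ACTIVE_PROC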
Because a process may be alive but not active, APP_ACTIVE_PROC will always be
less than or equal to APP_ALIVE_PROC.
This metric indicates the number of processes in an application group that are
competing for the CPU. This metric is useful, along with other metrics, for
comparing loads placed on the system by different groups of processes.
On non-HP-UX systems, this metric is derived from sampled process data.
Because the data for a process is not available after the process has died on
these operating systems, a process whose life is shorter than the sampling
interval may not be seen when the samples are taken. Thus, this metric may be
slightly less than the actual value. Increasing the sampling frequency
captures a more accurate count, but the overhead of collection may also rise.
APP_ALIVE_PROC
----------------------------------
An alive process is one that exists on the system. APP_ALIVE_PROC is the sum
of the alive-process-time/interval-time ratios for every process belonging to
a given application.
The following diagram of a four-second interval, showing two processes (A and
B) belonging to an application, illustrates the definition above. Note the
difference between active processes, which consume CPU time, and alive
processes, which merely exist on the system.
           ----------- Seconds -----------
  Proc         1         2        3       4
  ----     --------  --------  ------  ------
   A          live      live     live    live
   B        live/CPU  live/CPU   live    dead
Process A is alive for the entire four-second interval, but consumes no CPU.
A’s contribution to APP_ALIVE_PROC is 4*1/4 (= 1.0); it contributes 0*1/4
(= 0.0) to APP_ACTIVE_PROC. B’s contribution to APP_ALIVE_PROC is 3*1/4
(= 0.75); it contributes 2*1/4 (= 0.5) to APP_ACTIVE_PROC. Thus, for this
interval, APP_ACTIVE_PROC equals 0.5 and APP_ALIVE_PROC equals 1.75.
Because a process may be alive but not active, APP_ACTIVE_PROC will always be
less than or equal to APP_ALIVE_PROC.
On non-HP-UX systems, this metric is derived from sampled process data.
Because the data for a process is not available after the process has died on
these operating systems, a process whose life is shorter than the sampling
interval may not be seen when the samples are taken. Thus, this metric may be
slightly less than the actual value. Increasing the sampling frequency
captures a more accurate count, but the overhead of collection may also rise.
APP_COMPLETED_PROC
----------------------------------
The number of processes in this group that completed during the interval.
On non-HP-UX systems, this metric is derived from sampled process data.
Because the data for a process is not available after the process has died on
these operating systems, a process whose life is shorter than the sampling
interval may not be seen when the samples are taken. Thus, this metric may be
slightly less than the actual value. Increasing the sampling frequency
captures a more accurate count, but the overhead of collection may also rise.
APP_CPU_SYS_MODE_TIME
----------------------------------
The time, in seconds, during the interval that the CPU was in system mode for
processes in this group.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HP-UX, if the ignore_mt flag is set (true) in the parm
file, this metric reports values normalized against the number of active cores
in the system. If the ignore_mt flag is not set (false) in the parm file, this
metric reports values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HP-UX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, add this option to the
/etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa
startup. Note that on HP-UX, unlike other platforms, specifying core-based
normalization affects CPU, application, process, and thread metrics.
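As a rough illustration of how the ignore_mt setting changes the normalization
divisor, consider the following Python sketch (the function and its inputs are
hypothetical; the agent performs this normalization internally):

    # Sketch: normalize accumulated CPU seconds by the effective processor
    # count, as selected by the ignore_mt flag.
    def normalized_cpu_time(cpu_seconds, num_cores, num_threads,
                            ignore_mt, multithreading_on):
        if not multithreading_on:
            divisor = num_cores        # the flag is a no-op here
        elif ignore_mt:
            divisor = num_cores        # normalize against active cores
        else:
            divisor = num_threads      # normalize against hardware threads
        return cpu_seconds / divisor

    # 8 CPU-seconds accumulated on a 4-core, 8-thread system:
    print(normalized_cpu_time(8.0, 4, 8, True, True))    # 2.0
    print(normalized_cpu_time(8.0, 4, 8, False, True))   # 1.0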
APP_CPU_SYS_MODE_UTIL
----------------------------------
The percentage of time during the interval that the CPU was used in system
mode for processes in this group.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
High system CPU utilization is normal for IO-intensive groups. Abnormally
high system CPU utilization can indicate that a hardware problem is causing a
high interrupt rate. It can also indicate programs that are not making
efficient system calls.
On platforms other than HP-UX, if the ignore_mt flag is set (true) in the parm
file, this metric reports values normalized against the number of active cores
in the system. If the ignore_mt flag is not set (false) in the parm file, this
metric reports values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HP-UX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, add this option to the
/etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa
startup. Note that on HP-UX, unlike other platforms, specifying core-based
normalization affects CPU, application, process, and thread metrics.
APP_CPU_TOTAL_TIME
----------------------------------
The total CPU time, in seconds, devoted to processes in this group during the
interval.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HP-UX, if the ignore_mt flag is set (true) in the parm
file, this metric reports values normalized against the number of active cores
in the system. If the ignore_mt flag is not set (false) in the parm file, this
metric reports values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HP-UX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, add this option to the
/etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa
startup. Note that on HP-UX, unlike other platforms, specifying core-based
normalization affects CPU, application, process, and thread metrics.
APP_CPU_TOTAL_UTIL
----------------------------------
The percentage of the total CPU time devoted to processes in this group during
the interval. This indicates the relative CPU load placed on the system by
processes in this group.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
Large values for this metric may indicate that this group is causing a CPU
bottleneck. This would be normal in a computation-bound workload, but might
mean that processes are using excessive CPU time and perhaps looping.
If the “other” application shows significant amounts of CPU, you may want to
consider tuning your parm file so that process activity is accounted for in
known applications.
APP_CPU_TOTAL_UTIL =
APP_CPU_SYS_MODE_UTIL +
APP_CPU_USER_MODE_UTIL
NOTE: On Windows, the sum of the APP_CPU_TOTAL_UTIL metrics may not equal
GBL_CPU_TOTAL_UTIL. Microsoft states that “this is expected behavior” because
the GBL_CPU_TOTAL_UTIL metric is taken from the NT performance library
Processor objects, while the APP_CPU_TOTAL_UTIL metrics are taken from the
Process objects. Microsoft states that there can be CPU time accounted for in
the Processor system objects that may not be seen in the Process objects.
On platforms other than HP-UX, if the ignore_mt flag is set (true) in the parm
file, this metric reports values normalized against the number of active cores
in the system. If the ignore_mt flag is not set (false) in the parm file, this
metric reports values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HP-UX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, add this option to the
/etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa
startup. Note that on HP-UX, unlike other platforms, specifying core-based
normalization affects CPU, application, process, and thread metrics.
APP_CPU_USER_MODE_TIME
----------------------------------
The time, in seconds, that processes in this group were in user mode during
the interval.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HP-UX, if the ignore_mt flag is set (true) in the parm
file, this metric reports values normalized against the number of active cores
in the system. If the ignore_mt flag is not set (false) in the parm file, this
metric reports values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HP-UX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, add this option to the
/etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa
startup. Note that on HP-UX, unlike other platforms, specifying core-based
normalization affects CPU, application, process, and thread metrics.
APP_CPU_USER_MODE_UTIL
----------------------------------
The percentage of time that processes in this group were using the CPU in user
mode during the interval.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
High user mode CPU percentages are normal for computation-intensive groups.
Low values of user CPU utilization compared to relatively high values for
APP_CPU_SYS_MODE_UTIL can indicate a hardware problem or improperly tuned
programs in this group.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HP-UX, if the ignore_mt flag is set (true) in the parm
file, this metric reports values normalized against the number of active cores
in the system. If the ignore_mt flag is not set (false) in the parm file, this
metric reports values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HP-UX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, add this option to the
/etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa
startup. Note that on HP-UX, unlike other platforms, specifying core-based
normalization affects CPU, application, process, and thread metrics.
APP_IO_BYTE
----------------------------------
The number of characters (in KB) transferred for processes in this group to
all devices during the interval. This includes IO to disk, terminal, tape and
printers.
APP_IO_BYTE_RATE
----------------------------------
The number of characters (in KB) per second transferred for processes in this
group to all devices during the interval. This includes IO to disk, terminal,
tape and printers.
APP_MEM_RES
----------------------------------
On Unix systems, this is the sum of the size (in MB) of resident memory for
processes in this group that were alive at the end of the interval. This
consists of text, data, stack, and shared memory regions.
On HP-UX, since PROC_MEM_RES typically takes shared region references into
account, this approximates the total resident (physical) memory consumed by
all processes in this group.
On all other Unix systems, this is the sum of the resident memory region sizes
for all processes in this group. When the resident memory size for processes
includes shared regions, such as shared memory and library text and data, the
shared regions are counted multiple times in this sum. For example, if the
application contains four processes that are attached to a 500MB shared memory
region that is all resident in physical memory, then 2000MB is contributed
towards the sum in this metric. As such, this metric can overestimate the
resident memory being used by processes in this group when they share memory
regions.
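The overcounting described above is easy to see in a small Python sketch (all
sizes are hypothetical):

    # Sketch: summing per-process resident sizes counts a shared region once
    # per attached process. Four processes share one 500 MB resident region.
    SHARED_MB = 500
    private_mb = [30, 25, 40, 20]               # private resident sizes

    per_process_res = [p + SHARED_MB for p in private_mb]
    app_mem_res = sum(per_process_res)          # 2115: shared 500 counted 4x

    true_resident = sum(private_mb) + SHARED_MB # 615: shared counted once
    print(app_mem_res, true_resident)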
Refer to the help text for PROC_MEM_RES for additional information.
On Windows, this is the sum of the size (in MB) of the working sets for
processes in this group during the interval. The working set counts memory
pages referenced recently by the threads making up this group. Note that the
size of the working set is often larger than the amount of pagefile space
consumed.
APP_MEM_UTIL
----------------------------------
On Unix systems, this is the approximate percentage of the system’s physical
memory used as resident memory by processes in this group that were alive at
the end of the interval. This metric summarizes process private and shared
memory in each application.
On Windows, this is an estimate of the percentage of the system’s physical
memory allocated for working set memory by processes in this group during the
interval.
On HP-UX, this consists of text, data, and stack, as well as the process’s
portion of shared memory regions (such as shared libraries, text segments, and
shared data). The sum of the shared region pages is typically divided by the
number of references.
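A minimal sketch of this apportioning rule, with hypothetical sizes:

    # Sketch: HP-UX-style accounting divides each shared region's resident
    # pages by its reference count before summing per process.
    phys_mem_mb = 4096
    private_mb = [30, 25, 40, 20]          # per-process private memory
    shared_mb, references = 500, 4         # one region, four referencing procs

    per_proc = [p + shared_mb / references for p in private_mb]
    resident_mb = sum(per_proc)            # shared region counted once: 615
    app_mem_util = resident_mb / phys_mem_mb * 100
    print(round(app_mem_util, 1))          # ~15.0 (%)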
APP_MEM_VIRT
----------------------------------
On Unix systems, this is the sum (in MB) of virtual memory for processes in
this group that were alive at the end of the interval. This consists of text,
data, stack, and shared memory regions.
On HP-UX, since PROC_MEM_VIRT typically takes shared region references into
account, this approximates the total virtual memory consumed by all processes
in this group.
On all other Unix systems, this is the sum of the virtual memory region sizes
for all processes in this group. When the virtual memory size for processes
includes shared regions, such as shared memory and library text and data, the
shared regions are counted multiple times in this sum. For example, if the
application contains four processes that are attached to a 500MB shared memory
region, then 2000MB is reported in this metric. As such, this metric can
overestimate the virtual memory being used by processes in this group when
they share memory regions.
On Windows, this is the sum (in MB) of paging file space used for all
processes in this group during the interval. Groups of processes may have
working set sizes (APP_MEM_RES) larger than the size of their pagefile space.
APP_MINOR_FAULT_RATE
----------------------------------
The number of minor page faults per second satisfied in memory (pages were
reclaimed from one of the free lists) for processes in this group during the
interval.
APP_NAME
----------------------------------
The name of the application (up to 20 characters). This comes from the parm
file where the applications are defined.
The application called “other” captures all processes not aggregated into
applications specifically defined in the parm file. In other words, if no
applications are defined in the parm file, then all process data is reflected
in the “other” application. On Windows, this may also be the name of the
Windows module for this application.
APP_NUM
----------------------------------
The sequentially assigned number of this application or, on Solaris, the
project ID when application grouping by project is enabled.
APP_PRI
----------------------------------
On Unix systems, this is the average priority of the processes in this group
during the interval.
On Windows, this is the average base priority of the processes in this group
during the interval.
APP_PROC_RUN_TIME
----------------------------------
The average run time for processes in this group that completed during the
interval.
On non-HP-UX systems, this metric is derived from sampled process data.
Because the data for a process is not available after the process has died on
these operating systems, a process whose life is shorter than the sampling
interval may not be seen when the samples are taken. Thus, this metric may be
slightly less than the actual value. Increasing the sampling frequency
captures a more accurate count, but the overhead of collection may also rise.
APP_SAMPLE
----------------------------------
The number of samples of process data that have been averaged or accumulated
during this interval.
BLANK
----------------------------------
An empty field used for spacing reports. For example, this field can be used
to create a blank column in a spreadsheet that may be used to sum several
items.
BYCPU_CPU_CLOCK
----------------------------------
The clock speed, in MHz, of the CPU in the current slot.
The Linux kernel currently does not provide any metadata for disabled CPUs:
there is no way to determine their type, speed, hardware ID, or any of the
other information used to derive the number of cores, the number of threads,
the HyperThreading state, and so on. If the agent (or Glance) is started while
some of the CPUs are disabled, some of these metrics will be “na” and some
will be based on what is visible at startup time. All information is updated
if and when additional CPUs are enabled and information about them becomes
available. The configuration counts remain at the highest discovered level
(that is, if CPUs are later disabled, the maximum number of CPUs, cores, and
so on remains at the highest observed level). It is recommended that the agent
be started with all CPUs enabled.
On Linux, this value is always rounded up to the next MHz.
BYCPU_CPU_SYS_MODE_TIME
----------------------------------
The time, in seconds, that this CPU (or logical processor) was in system mode
during the interval.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
On platforms other than HP-UX, if the ignore_mt flag is set (true) in the parm
file, this metric reports values normalized against the number of active cores
in the system. If the ignore_mt flag is not set (false) in the parm file, this
metric reports values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HP-UX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, add this option to the
/etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa
startup. Note that on HP-UX, unlike other platforms, specifying core-based
normalization affects CPU, application, process, and thread metrics.
BYCPU_CPU_SYS_MODE_UTIL
----------------------------------
The percentage of time that this CPU (or logical processor) was in system mode
during the interval.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
On platforms other than HP-UX, if the ignore_mt flag is set (true) in the parm
file, this metric reports values normalized against the number of active cores
in the system. If the ignore_mt flag is not set (false) in the parm file, this
metric reports values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HP-UX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, add this option to the
/etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa
startup. Note that on HP-UX, unlike other platforms, specifying core-based
normalization affects CPU, application, process, and thread metrics.
BYCPU_CPU_TOTAL_TIME
----------------------------------
The total time, in seconds, that this CPU (or logical processor) was not idle
during the interval.
On platforms other than HP-UX, if the ignore_mt flag is set (true) in the parm
file, this metric reports values normalized against the number of active cores
in the system. If the ignore_mt flag is not set (false) in the parm file, this
metric reports values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HP-UX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, add this option to the
/etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa
startup. Note that on HP-UX, unlike other platforms, specifying core-based
normalization affects CPU, application, process, and thread metrics.
BYCPU_CPU_TOTAL_UTIL
----------------------------------
The percentage of time that this CPU (or logical processor) was not idle
during the interval.
On platforms other than HP-UX, if the ignore_mt flag is set (true) in the parm
file, this metric reports values normalized against the number of active cores
in the system. If the ignore_mt flag is not set (false) in the parm file, this
metric reports values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HP-UX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, add this option to the
/etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa
startup. Note that on HP-UX, unlike other platforms, specifying core-based
normalization affects CPU, application, process, and thread metrics.
BYCPU_CPU_USER_MODE_TIME
----------------------------------
The time, in seconds, during the interval that this CPU (or logical processor)
was in user mode.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
On platforms other than HP-UX, if the ignore_mt flag is set (true) in the parm
file, this metric reports values normalized against the number of active cores
in the system. If the ignore_mt flag is not set (false) in the parm file, this
metric reports values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HP-UX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, add this option to the
/etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa
startup. Note that on HP-UX, unlike other platforms, specifying core-based
normalization affects CPU, application, process, and thread metrics.
BYCPU_CPU_USER_MODE_UTIL
----------------------------------
The percentage of time that this CPU (or logical processor) was in user mode
during the interval.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
On platforms other than HP-UX, if the ignore_mt flag is set (true) in the parm
file, this metric reports values normalized against the number of active cores
in the system. If the ignore_mt flag is not set (false) in the parm file, this
metric reports values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HP-UX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, add this option to the
/etc/rc.config.d/ovpa control file. Refer to the documentation regarding ovpa
startup. Note that on HP-UX, unlike other platforms, specifying core-based
normalization affects CPU, application, process, and thread metrics.
BYCPU_ID
----------------------------------
The ID number of this CPU. On some Unix systems, such as SUN, CPUs are not
sequentially numbered.
BYCPU_INTERRUPT
----------------------------------
The number of device interrupts for this CPU during the interval.
On HP-UX, a value of “na” is displayed on a system with multiple CPUs.
BYCPU_INTERRUPT_RATE
----------------------------------
The average number of device interrupts per second for this CPU during the
interval.
On HP-UX, a value of “na” is displayed on a system with multiple CPUs.
BYCPU_STATE
----------------------------------
A text string indicating the current state of a processor.
On HP-UX, this is either “Enabled”, “Disabled” or “Unknown”. On AIX, this is
either “Idle/Offline” or “Online”. On all other systems, this is either
“Offline”, “Online” or “Unknown”.
BYDSK_AVG_SERVICE_TIME
----------------------------------
The average time, in milliseconds, that this disk device spent processing each
disk request during the interval. For example, a value of 5.14 would indicate
that disk requests during the last interval took on average slightly longer
than five one-thousandths of a second to complete for this device.
Some Linux kernels, typically 2.2 and older kernels, do not support the
instrumentation needed to provide values for this metric. This metric will be
“na” on the affected kernels. The “sar -d” command will also not be present
on these systems. Distributions and OS releases that are known to be affected
include: TurboLinux 7, SuSE 7.2, and Debian 3.0.
This is a measure of the speed of the disk, because slower disk devices
typically show a larger average service time. Average service time is also
dependent on factors such as the distribution of I/O requests over the
interval and their locality. It can also be influenced by disk driver and
controller features such as I/O merging and command queueing. Note that this
service time is measured from the perspective of the kernel, not the disk
device itself. For example, if a disk device can find the requested data in
its cache, the average service time could be quicker than the speed of the
physical disk hardware.
This metric can be used to help determine which disk devices are taking more
time than usual to process requests.
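As a back-of-the-envelope check (an approximation from related metrics, not
the agent’s exact derivation), average service time relates busy time to the
request count; the interval values below are hypothetical:

    # Sketch: approximate average service time in milliseconds.
    busy_time_seconds = 2.57    # cf. BYDSK_BUSY_TIME
    phys_io_count = 500         # cf. BYDSK_PHYS_IO

    avg_service_ms = busy_time_seconds / phys_io_count * 1000
    print(round(avg_service_ms, 2))   # 5.14, matching the example above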
BYDSK_BUSY_TIME
----------------------------------
The time, in seconds, that this disk device was busy transferring data during
the interval.
On HP-UX, this is the time, in seconds, during the interval that the disk
device had IO in progress from the point of view of the Operating System. In
other words, the time, in seconds, the disk was busy servicing requests for
this device.
BYDSK_DEVNAME
----------------------------------
The name of this disk device.
On HP-UX, the name identifying the specific disk spindle is the hardware path
which specifies the address of the hardware components leading to the disk
device.
On SUN, these names are the same disk names displayed by “iostat”.
On AIX, this is the path name string of this disk device. This is the fsname
parameter in the mount(1M) command. If more than one file system is contained
on a device (that is, the device is partitioned), this is indicated by an
asterisk (“*”) at the end of the path name.
On OSF1, this is the path name string of this disk device. This is the
filesystem parameter in the mount(1M) command.
On Windows, this is the unit number of this disk device.
BYDSK_HISTOGRAM
----------------------------------
A bar chart of the disk IO, showing a breakout of the total rate:
Disk IO Rate = BYDSK_PHYS_READ_RATE
+ BYDSK_PHYS_WRITE_RATE
ASCII and binary files contain a line of ASCII characters that make up one row
of a printed histogram. This can be a quick way to get a graphical view of
Disk IO on a character mode terminal display.
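A toy Python sketch of one such row (the rates and scaling are hypothetical;
the real histogram is produced by the collection tools):

    # Sketch: render one ASCII histogram row for the disk IO rate, broken
    # out into reads (R) and writes (W).
    read_rate, write_rate = 42.0, 18.0   # cf. BYDSK_PHYS_READ/WRITE_RATE
    ios_per_char = 2.0                   # one character per 2 IOs/second

    row = ("R" * round(read_rate / ios_per_char)
           + "W" * round(write_rate / ios_per_char))
    print(f"{read_rate + write_rate:6.1f} |{row}")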
BYDSK_ID
----------------------------------
The ID of the current disk device.
BYDSK_PHYS_BYTE
----------------------------------
The number of KBs of physical IOs transferred to or from this disk device
during the interval.
On Unix systems, all types of physical disk IOs are counted, including file
system, virtual memory, and raw IO.
BYDSK_PHYS_BYTE_RATE
----------------------------------
The average KBs per second transferred to or from this disk device during the
interval.
On Unix systems, all types of physical disk IOs are counted, including file
system, virtual memory, and raw IO.
BYDSK_PHYS_IO
----------------------------------
The number of physical IOs for this disk device during the interval.
On Unix systems, all types of physical disk IOs are counted, including file
system, virtual memory, and raw IO.
BYDSK_PHYS_IO_RATE
----------------------------------
The average number of physical IO requests per second for this disk device
during the interval.
On Unix systems, all types of physical disk IOs are counted, including file
system IO, virtual memory and raw IO.
BYDSK_PHYS_READ
----------------------------------
The number of physical reads for this disk device during the interval.
On Unix systems, all types of physical disk reads are counted, including file
system, virtual memory, and raw reads.
On AIX, this is an estimated value based on the ratio of read bytes to total
bytes transferred. The actual number of reads is not tracked by the kernel.
This is calculated as
BYDSK_PHYS_READ =
BYDSK_PHYS_IO *
(BYDSK_PHYS_READ_BYTE /
BYDSK_PHYS_IO_BYTE)
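A sketch of this estimate with hypothetical counter values (the write-side
estimates in BYDSK_PHYS_WRITE and BYDSK_PHYS_WRITE_RATE are symmetric):

    # Sketch: AIX estimates reads from byte ratios because the kernel tracks
    # total IOs and bytes, but not separate read/write counts.
    phys_io = 1000                   # BYDSK_PHYS_IO
    read_kb, write_kb = 6000, 2000
    io_kb = read_kb + write_kb       # BYDSK_PHYS_IO_BYTE

    est_reads = phys_io * (read_kb / io_kb)    # 750.0 -> BYDSK_PHYS_READ
    est_writes = phys_io * (write_kb / io_kb)  # 250.0 -> BYDSK_PHYS_WRITE
    print(est_reads, est_writes)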
BYDSK_PHYS_READ_BYTE
----------------------------------
The KBs transferred from this disk device during the interval.
On Unix systems, all types of physical disk reads are counted, including file
system, virtual memory, and raw IO.
BYDSK_PHYS_READ_BYTE_RATE
----------------------------------
The average KBs per second transferred from this disk device during the
interval.
On Unix systems, all types of physical disk reads are counted, including file
system, virtual memory, and raw IO.
BYDSK_PHYS_READ_RATE
----------------------------------
The average number of physical reads per second for this disk device during
the interval.
On Unix systems, all types of physical disk reads are counted, including file
system, virtual memory, and raw reads.
On AIX, this is an estimated value based on the ratio of read bytes to total
bytes transferred. The actual number of reads is not tracked by the kernel.
This is calculated as
BYDSK_PHYS_READ_RATE =
BYDSK_PHYS_IO_RATE *
(BYDSK_PHYS_READ_BYTE /
BYDSK_PHYS_IO_BYTE)
BYDSK_PHYS_WRITE
----------------------------------
The number of physical writes for this disk device during the interval.
On Unix systems, all types of physical disk writes are counted, including file
system IO, virtual memory IO, and raw writes.
On AIX, this is an estimated value based on the ratio of write bytes to total
bytes transferred because the actual number of writes is not tracked by the
kernel. This is calculated as
BYDSK_PHYS_WRITE =
BYDSK_PHYS_IO *
(BYDSK_PHYS_WRITE_BYTE /
BYDSK_PHYS_IO_BYTE)
BYDSK_PHYS_WRITE_BYTE
----------------------------------
The KBs transferred to this disk device during the interval.
On Unix systems, all types of physical disk writes are counted, including file
system, virtual memory, and raw IO.
BYDSK_PHYS_WRITE_BYTE_RATE
----------------------------------
The average KBs per second transferred to this disk device during the
interval.
On Unix systems, all types of physical disk writes are counted, including file
system, virtual memory, and raw IO.
BYDSK_PHYS_WRITE_RATE
----------------------------------
The average number of physical writes per second for this disk device during
the interval.
On Unix systems, all types of physical disk writes are counted, including file
system IO, virtual memory IO, and raw writes.
On AIX, this is an estimated value based on the ratio of write bytes to total
bytes transferred. The actual number of writes is not tracked by the kernel.
This is calculated as
BYDSK_PHYS_WRITE_RATE =
BYDSK_PHYS_IO_RATE *
(BYDSK_PHYS_WRITE_BYTE /
BYDSK_PHYS_IO_BYTE)
BYDSK_REQUEST_QUEUE
----------------------------------
The average number of IO requests that were in the wait queue for this disk
device during the interval. These requests are the physical requests (as
opposed to logical IO requests).
Some Linux kernels, typically 2.2 and older kernels, do not support the
instrumentation needed to provide values for this metric. This metric will be
“na” on the affected kernels. The “sar -d” command will also not be present
on these systems. Distributions and OS releases that are known to be affected
include: TurboLinux 7, SuSE 7.2, and Debian 3.0.
BYDSK_UTIL
----------------------------------
On HP-UX, this is the percentage of the time during the interval that the disk
device had IO in progress from the point of view of the Operating System. In
other words, the utilization or percentage of time busy servicing requests for
this device.
On the non-HP-UX systems, this is the percentage of the time that this disk
device was busy transferring data during the interval.
Some Linux kernels, typically 2.2 and older kernels, do not support the
instrumentation needed to provide values for this metric. This metric will be
“na” on the affected kernels. The “sar -d” command will also not be present
on these systems. Distributions and OS releases that are known to be affected
include: TurboLinux 7, SuSE 7.2, and Debian 3.0.
This is a measure of the ability of the IO path to meet the transfer demands
being placed on it. Slower disk devices may show a higher utilization with
lower IO rates than faster disk devices such as disk arrays. A value of
greater than 50% utilization over time may indicate that this device or its IO
path is a bottleneck, and the access pattern of the workload, database, or
files may need reorganizing for better balance of disk IO load.
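For example, a post-processing script over exported data might apply that rule
of thumb (a sketch only; the device values shown are hypothetical):

    # Sketch: flag disk devices whose utilization suggests an IO bottleneck.
    UTIL_THRESHOLD = 50.0

    devices = {"0": 12.3, "1": 71.8, "2": 48.9}  # BYDSK_ID -> BYDSK_UTIL (%)

    for dev_id, util in devices.items():
        if util > UTIL_THRESHOLD:
            print(f"disk {dev_id}: {util:.1f}% busy - possible bottleneck")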
BYLS_CPU_ENTL_MAX
----------------------------------
The maximum CPU units configured for a logical system.
On HP-UX HPVM, this metric indicates the maximum percentage of physical CPU
that a virtual CPU of this logical system can get.
On AIX SPLPAR, this metric is equivalent to “Maximum Capacity” field of
‘lparstat -i’ command.
For WPARs, it is the maximum percentage of CPU that a WPAR can have even if
there is no contention for CPU. A WPAR shares the CPU units of its global
environment.
On Hyper-V host, for Root partition, this metric is NA.
On vMA, for a host, the metric is equivalent to the total number of cores on
the host. For a resource pool and a logical system, this metric indicates the
maximum CPU units configured for it.
BYLS_CPU_ENTL_MIN
----------------------------------
The minimum CPU units configured for this logical system.
On HP-UX HPVM, this metric indicates the minimum percentage of physical CPU
that a virtual CPU of this logical system is guaranteed.
On AIX SPLPAR, this metric is equivalent to “Minimum Capacity” field of
‘lparstat -i’ command.
For WPARs, it is the minimum CPU share assigned to a WPAR that is guaranteed.
A WPAR shares the CPU units of its global environment.
On Hyper-V host, for Root partition, this metric is NA.
On vMA, for a host, the metric is equivalent to the total number of cores on
the host. For a resource pool and a logical system, this metric indicates the
guaranteed minimum CPU units configured for it.
On Solaris Zones, this metric indicates the configured minimum CPU percentage
reserved for a logical system.
For Solaris Zones, this metric is calculated as:
BYLS_CPU_ENTL_MIN = ( BYLS_CPU_SHARES_PRIO / Pool-Cpu-Shares )
where Pool-Cpu-Shares is the total CPU shares available in the CPU pool the
zone is associated with; it is the sum of the BYLS_CPU_SHARES_PRIO values for
all active zones associated with this pool.
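A sketch of this calculation for a set of zones sharing one pool (the share
values are hypothetical):

    # Sketch: each zone's minimum CPU entitlement fraction is its FSS shares
    # divided by the pool's total shares across active zones.
    zone_shares = {"zoneA": 20, "zoneB": 30, "zoneC": 50}  # BYLS_CPU_SHARES_PRIO
    pool_cpu_shares = sum(zone_shares.values())

    for zone, shares in zone_shares.items():
        print(zone, shares / pool_cpu_shares)   # BYLS_CPU_ENTL_MIN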
BYLS_CPU_ENTL_UTIL
----------------------------------
Percentage of entitled processing units (guaranteed processing units allocated
to this logical system) consumed by the logical system.
On an HP-UX HPVM host, the metric indicates the logical system’s CPU
utilization with respect to the minimum CPU entitlement. It is calculated as:
BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC / (BYLS_CPU_ENTL_MIN * BYLS_NUM_CPU)) * 100
On AIX, this metric is calculated as: BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC /
BYLS_CPU_ENTL) * 100
On WPAR, this metric is calculated as: BYLS_CPU_ENTL_UTIL = (BYLS_CPU_PHYSC /
BYLS_CPU_ENTL_MAX) * 100. This metric matches the “%Resc” field of the topas
command (inside the WPAR).
On Solaris Zones, the metric indicates the logical system’s CPU utilization
with respect to minimum CPU entitlement. This metric is calculated as:
BYLS_CPU_ENTL_UTIL = (BYLS_CPU_TOTAL_UTIL / BYLS_CPU_SHARES_PRIO) * 100
If a Solaris zone is not assigned a CPU entitlement value, a CPU entitlement
value is derived for the zone from the total CPU entitlement associated with
the CPU pool this zone is attached to.
On Hyper-V host, for Root partition, this metric is NA.
On vMA, for a host, the value is the same as BYLS_CPU_PHYS_TOTAL_UTIL; for a
logical system or resource pool, the value is the percentage of processing
units consumed with respect to the minimum CPU entitlement.
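The platform formulas above share one shape: physical CPU consumed divided by
an entitlement basis. The following Python sketch restates them (all input
values are hypothetical):

    # Sketch: BYLS_CPU_ENTL_UTIL as physical CPU over the entitlement basis.
    def entl_util(physc, entitlement):
        return physc / entitlement * 100

    print(entl_util(1.2, 0.5 * 4))  # HP-UX HPVM: ENTL_MIN * NUM_CPU -> 60.0
    print(entl_util(1.2, 2.0))      # AIX LPAR: BYLS_CPU_ENTL        -> 60.0
    print(entl_util(1.2, 3.0))      # AIX WPAR: BYLS_CPU_ENTL_MAX    -> 40.0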
BYLS_CPU_PHYSC
----------------------------------
This metric indicates the number of CPU units utilized by the logical system.
On an Uncapped logical system, this value equals the CPU capacity, in CPU
units, used by the logical system during the interval; this can exceed the
value entitled for the logical system.
BYLS_CPU_PHYS_SYS_MODE_UTIL
----------------------------------
The percentage of time the physical CPUs were in system mode (kernel mode) for
the logical system during the interval.
On AIX LPAR, this value is equivalent to “%sys” field reported by the
“lparstat” command.
On Hyper-V host, this metric indicates the percentage of time spent in
Hypervisor code.
On vMA, the metric indicates the percentage of time the physical CPUs were in
system mode during the interval for the host or logical system. On vMA, for a
resource pool, this metric is “na”.
BYLS_CPU_PHYS_TOTAL_UTIL
----------------------------------
Percentage of total time the physical CPUs were utilized by this logical
system during the interval.
On HP-UX, this information is updated internally every 10 seconds, so it may
take that long for these values to be updated in PA/Glance.
On Solaris, this metric is calculated with respect to the available active
physical CPUs on the system.
On AIX, this metric is equivalent to sum of BYLS_CPU_PHYS_USER_MODE_UTIL and
BYLS_CPU_PHYS_SYS_MODE_UTIL.
For AIX LPARs, the metric is calculated with respect to the available
physical CPUs in the pool to which this LPAR belongs.
For AIX WPARs, the metric is calculated with respect to the available physical
CPUs in the resource set or Global Environment.
On vMA, the value indicates the percentage of total time the physical CPUs
were utilized by the logical system, host, or resource pool.
On KVM/Xen, this value is core-normalized if GBL_IGNORE_MT is enabled on the
server.
BYLS_CPU_PHYS_USER_MODE_UTIL
----------------------------------
The percentage of time the physical CPUs were in user mode for the logical
system during the interval.
On AIX LPAR, this value is equivalent to “%user” field reported by the
“lparstat” command.
On Hyper-V host, this metric indicates the percentage of time spent in guest
code.
On vMA, the metric indicates the percentage of time the physical CPUs were in
user mode during the interval for the host or logical system. On vMA, for a
resource pool, this metric is “na”.
BYLS_CPU_SHARES_PRIO
----------------------------------
This metric indicates the weight/priority assigned to an Uncapped logical
system. This value determines the minimum share of unutilized processing units
that this logical system can utilize.
The value of this metric will be “-3” in PA and “ul” in other clients if the
CPU shares value is ‘Unlimited’ for a logical system.
On AIX SPLPAR, this value depends on the available processing units in the
pool and can range from 0 to 255.
For WPARs, this metric represents how much of a particular resource a WPAR
receives relative to the other WPARs.
On vMA, for a logical system or resource pool, this value can range from 1 to
1000000; for a host, the value is NA.
On Solaris Zones, this metric sets a limit on the number of fair share
scheduler (FSS) CPU shares for a zone.
On Hyper-V host, this metric specifies allocation of CPU resources when more
than one virtual machine is running and competing for resources. This value
can range from 0 to 10000. For Root partition, this metric is NA.
BYLS_DISPLAY_NAME
----------------------------------
On vMA, this metric indicates the name of the host or logical system or
resource pool.
On HPVM, this metric indicates the Virtual Machine name of the logical
system and is equivalent to the “Virtual Machine Name” field of the
‘hpvmstatus’ command.
On AIX the value is as returned by the command “uname -n” (that is, the string
returned from the “hostname” program).
On Solaris Zones, this metric indicates the zone name and is equivalent to
‘NAME’ field of ‘zoneadm list -vc’ command.
On Hyper-V host, this metric indicates the Virtual Machine name of the
logical system and is equivalent to the Name displayed in Hyper-V Manager. For
Root partition, the value is always “Root”.
BYLS_HYPCALL
----------------------------------
The number of Hypervisor calls made by a logical system during the interval.
A higher number of calls results in higher BYLS_CPU_PHYS_SYS_MODE_UTIL,
BYLS_CPU_PHYS_WAIT_MODE_UTIL, GBL_CPU_SYS_MODE_UTIL, and GBL_CPU_WAIT_UTIL.
For AIX wpars, the metric will be “na”.
BYLS_HYP_UTIL
----------------------------------
Percentage of time spent in Hypervisor by a logical system during the
interval.
Higher hypervisor utilization results in higher BYLS_CPU_PHYS_SYS_MODE_UTIL,
BYLS_CPU_PHYS_WAIT_MODE_UTIL, GBL_CPU_SYS_MODE_UTIL, and GBL_CPU_WAIT_UTIL.
For AIX wpars, the metric will be “na”.
BYLS_LS_HOSTNAME
----------------------------------
This is the DNS registered name of the system.
On Hyper-V host, this metric is NA if the logical system is not active or
Hyper-V Integration Components are not installed on it.
On vMA, for a host and logical system the metric is the Fully Qualified Domain
Name, while for a resource pool the value is NA.
BYLS_LS_ID
----------------------------------
A unique identifier of the logical system.
On HPVM, this metric is a numeric id and is equivalent to the “VM #” field of
the ‘hpvmstatus’ command.
On AIX LPAR, this metric indicates the partition number and is equivalent to
the “Partition Number” field of the ‘lparstat -i’ command. For AIX WPARs, this
metric represents the partition number and is equivalent to the output of
“uname -W” run inside the WPAR.
On Solaris Zones, this metric indicates the zone id and is equivalent to ‘ID’
field of ‘zoneadm list -vc’ command.
On Hyper-V host, this metric indicates the PID of the process corresponding to
this logical system. For Root partition, this metric is NA.
On vMA, this metric is a unique identifier for a host, resource pool and a
logical system. The value of this metric may change for an instance across
collection intervals.
BYLS_LS_MODE
----------------------------------
This metric indicates whether the CPU entitlement for the logical system is
Capped or Uncapped.
On AIX SPLPAR, this metric is the same as the “Mode” field of the ‘lparstat -i’ command.
For WPARs, this metric is always CAPPED.
On vMA, the value is Capped for a host and Uncapped for a logical system. For
resource pool, the value is Uncapped or Capped depending on whether the
reservation is expandable or not for it.
On Solaris Zones, this metric is “Capped” when the zone is assigned CPU shares
and is attached to a valid CPU pool.
BYLS_LS_NAME
----------------------------------
This is the name of the computer.
On HPVM, this metric indicates the Virtual Machine name of the logical
system and is equivalent to the “Virtual Machine Name” field of the
‘hpvmstatus’ command.
On AIX the value is as returned by the command “uname -n” (that is, the string
returned from the “hostname” program).
On vMA, this metric is a unique identifier for host, resource pool and a
logical system. The value of this metric remains the same, for an instance,
across collection intervals.
On Solaris Zones, this metric indicates the zone name and is equivalent to
‘NAME’ field of ‘zoneadm list -vc’ command.
On Hyper-V host, this metric indicates the name of the XML file which has
configuration information of the logical system. This file will be present
under the logical system’s installation directory indicated by BYLS_LS_PATH.
For Root partition, the value is always “Root”.
BYLS_LS_OSTYPE
----------------------------------
The Guest OS this logical system is hosting.
On HPVM, the metric can have the following values: HP-UX, Linux, Windows,
OpenVMS, Other, Unknown.
On Hyper-V host, the metric can have the following values: Windows, Other.
On Hyper-V host, this metric is NA if the logical system is not active or
Hyper-V Integration Components are not installed on it.
On vMA, the metric can have the following values for a host or logical system:
ESX/ESXi followed by the version, or ESX-Serv (applicable only for a host),
Linux, Windows, Solaris, Unknown. The value is NA for a resource pool.
BYLS_LS_PATH
----------------------------------
This metric indicates the installation path for the logical system.
On Hyper-V host, for Root partition, this metric is NA.
On vMA, the metric indicates the installation path for a logical system; for a
resource pool and a host, this metric is “na”.
BYLS_LS_PROC_ID
----------------------------------
On HPVM host and Hyper-V host, each VM is manifested as a process. These
processes have the executable name hpvmapp for HPVM and vmwp.exe for Hyper-V
host. This metric will have the PID of the process corresponding to this
logical system.
On HPVM, typically hpvmapp has the option -d whose argument is the name of the
VM.
On Hyper-V host, for Root partition, this metric is NA.
BYLS_LS_SHARED
----------------------------------
This metric indicates whether the physical CPUs are dedicated to this logical
system or shared.
On HP-UX HPVM and Hyper-V hosts, this metric is always “Shared”.
On vMA, the value is “Dedicated” for host, and “Shared” for logical system and
resource pool.
On AIX SPLPAR, this metric is equivalent to the “Type” field of the ‘lparstat
-i’ command. For AIX WPARs, this metric will always be “Shared”.
On Solaris Zones, this metric is “Dedicated” when this zone is attached to a
CPU pool not shared by any other zone.
BYLS_LS_STATE
----------------------------------
The state of this logical system.
On HPVM, the logical systems can have one of the following states: Unknown,
Other, invalid, Up, Down, Boot, Crash, Shutdown, Hung.
On vMA, this metric can have one of the following states for a host: on, off,
unknown. The values for a logical system can be one of the following: on, off,
suspended, unknown. The value is NA for a resource pool.
On Solaris Zones, the logical systems can have one of the following states:
configured, incomplete, installed, ready, running, shutting down, mounted.
On AIX LPARs, the logical system will always be active. On AIX WPARs, the
logical systems can have one of the following states: Broken, Transitional,
Defined, Active, Loaded, Paused, Frozen, Error.
A logical system on a Hyper-V host can have the following states: unknown,
enabled, disabled, paused, suspended, starting, snapshtng, migrating, saving,
stopping, deleted, pausing, resuming.
BYLS_LS_UUID
----------------------------------
UUID of this logical system. This Id uniquely identifies this logical system
across multiple hosts.
On Hyper-V host, for Root partition, this metric is NA.
On vMA, for a logical system or a host, the value is the UUID appended to the
display_name of the system. For a resource pool, the value is the hostname of
the host where the resource pool is hosted, followed by the unique id of the
resource pool.
For an AIX frame, the value is the display name appended with the serial
number. For an LPAR, this value is the frame’s name appended with the serial
number.
BYLS_MEM_ENTL
----------------------------------
The entitled memory configured for this logical system (in MB).
On Hyper-V host, for Root partition, this metric is NA.
On vMA, for a host the value is the physical memory available in the system;
for a logical system, this metric indicates the minimum memory configured; for
a resource pool, the value is NA.
For an AIX frame, this value is obtained from the command “lshwres -m
-r mem --level sys “.
BYLS_NUM_CPU
----------------------------------
The number of virtual CPUs configured for this logical system. This metric is
equivalent to GBL_NUM_CPU on the corresponding logical system.
On HPVM 3.x, the maximum number of virtual CPUs a logical system can have is
4.
On AIX SPLPAR, the number of CPUs can be configured irrespective of the
available physical CPUs in the pool this logical system belongs to. For AIX
WPARs, this metric represents the logical CPUs of the global environment.
On vMA, for a host the metric is the number of physical CPU threads on the
host. For a logical system, the metric is the number of virtual CPUs
configured. For a resource pool, the metric is NA.
On Solaris Zones, this metric represents the number of CPUs in the CPU pool this
zone is attached to. This metric value is equivalent to GBL_NUM_CPU inside
corresponding non-global zone.
BYLS_NUM_DISK
----------------------------------
The number of disks configured for this logical system. Only local disk
devices and optical devices present on the system are counted in this metric.
On vMA, for a host the metric is the number of disks configured for the host.
For a logical system, the metric is the number of logical disk devices present
on the logical system. For a resource pool the metric is NA.
For AIX WPARs, this metric will be “na”.
On Hyper-V host, this metric value is equivalent to GBL_NUM_DISK inside
corresponding Hyper-V guest.
On Hyper-V host, this metric is NA if the logical system is not active.
BYLS_NUM_NETIF
----------------------------------
The number of network interfaces configured for this logical system.
On LPAR, this metric includes the loopback interface.
On Hyper-V host, this metric value is equivalent to GBL_NUM_NETWORK inside
corresponding Hyper-V guest.
On Solaris Zones, this metric value is equivalent to GBL_NUM_NETWORK inside
corresponding non-global zone.
On Hyper-V host, this metric is NA if the logical system is not active.
On vMA, for a host the metric is the number of network adapters on the host.
For a logical system, the metric is the number of network interfaces
configured for the logical system. For a resource pool the metric is NA.
BYLS_UPTIME_SECONDS
----------------------------------
The uptime of this logical system in seconds.
On AIX LPARs, this metric will be “na”.
On vMA, for a host and logical system the metric is the uptime in seconds
while for a resource pool the metric is NA.
BYNETIF_ERROR
----------------------------------
The number of physical errors that occurred on the network interface during
the interval. An increasing number of errors may indicate a hardware problem
in the network.
On Unix systems, this data is not available for loop-back (lo) devices and is
always zero.
For HP-UX, this will be the same as the sum of the “Inbound Errors” and
“Outbound Errors” values from the output of the “lanadmin” utility for the
network interface. Remember that “lanadmin” reports cumulative counts. As of
the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the
logical level (IP) only.
For all other Unix systems, this is the same as the sum of “Ierrs” (RX-ERR on
Linux) and “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a
network device. See also netstat(1).
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric will be N/A.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
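As a rough illustration only (not the agent's implementation), the following
Python sketch sums the “Ierrs” and “Oerrs” columns of “netstat -i” on a
BSD-style system. Column names vary by platform (RX-ERR/TX-ERR on Linux), and
netstat reports cumulative counts, so an interval value is the difference
between two such samples.

    import subprocess

    def interface_errors():
        # Parse "netstat -i" output and sum the input and output error
        # counters for each interface (cumulative since boot).
        lines = subprocess.run(["netstat", "-i"], capture_output=True,
                               text=True, check=True).stdout.splitlines()
        header = lines[0].split()
        ierr, oerr = header.index("Ierrs"), header.index("Oerrs")
        errors = {}
        for line in lines[1:]:
            cols = line.split()
            if len(cols) <= max(ierr, oerr):
                continue
            try:
                errors[cols[0]] = int(cols[ierr]) + int(cols[oerr])
            except ValueError:
                continue   # non-numeric row (for example, address-only lines)
        return errors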
BYNETIF_ERROR_RATE
----------------------------------
The number of physical errors per second on the network interface during the
interval.
On Unix systems, this data is not available for loop-back (lo) devices and is
always zero.
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric will be N/A.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
BYNETIF_ID
----------------------------------
The ID number of the network interface.
BYNETIF_IN_BYTE
----------------------------------
The number of KBs received from the network via this interface during the
interval. Only the bytes in packets that carry data are included in this
count.
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
BYNETIF_IN_BYTE_RATE
----------------------------------
The number of KBs per second received from the network via this interface
during the interval. Only the bytes in packets that carry data are included
in this rate.
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
BYNETIF_IN_PACKET
----------------------------------
The number of successful physical packets received through the network
interface during the interval. Successful packets are those that have been
processed without errors or collisions.
For HP-UX, this will be the same as the sum of the “Inbound Unicast Packets”
and “Inbound Non-Unicast Packets” values from the output of the “lanadmin”
utility for the network interface. Remember that “lanadmin” reports
cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i”
shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the “Ipkts” column
(RX-OK on Linux) from the “netstat -i” command for a network device. See also
netstat(1).
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
BYNETIF_IN_PACKET_RATE
----------------------------------
The number of successful physical packets per second received through the
network interface during the interval. Successful packets are those that have
been processed without errors or collisions.
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
BYNETIF_NAME
----------------------------------
The name of the network interface.
For HP-UX 11.0 and beyond, these are the same names that appear in the
“Description” field of the “lanadmin” command output.
On all other Unix systems, these are the same names that appear in the “Name”
column of the “netstat -i” command.
Some examples of device names are:
lo - loop-back driver
ln - Standard Ethernet driver
en - Standard Ethernet driver
le - Lance Ethernet driver
ie - Intel Ethernet driver
tr - Token-Ring driver
et - Ether Twist driver
bf - fiber optic driver
All of the device names will have the unit number appended to the name. For
example, a loop-back device in unit 0 will be “lo0”.
On vMA, for Lan cards of type ESXVLan, the first half of this metric is the
vmnic name and the second half is the ESX host name.
BYNETIF_NET_SPEED
----------------------------------
The speed of this interface, that is, its bandwidth in megabits per second.
BYNETIF_OUT_BYTE
----------------------------------
The number of KBs sent to the network via this interface during the interval.
Only the bytes in packets that carry data are included in this count.
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
BYNETIF_OUT_BYTE_RATE
----------------------------------
The number of KBs per second sent to the network via this interface during the
interval. Only the bytes in packets that carry data are included in this
rate.
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
BYNETIF_OUT_PACKET
----------------------------------
The number of successful physical packets sent through the network interface
during the interval. Successful packets are those that have been processed
without errors or collisions.
For HP-UX, this will be the same as the sum of the “Outbound Unicast Packets”
and “Outbound Non-Unicast Packets” values from the output of the “lanadmin”
utility for the network interface. Remember that “lanadmin” reports
cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i”
shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the “Opkts” column
(TX-OK on Linux) from the “netstat -i” command for a network device. See also
netstat(1).
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
BYNETIF_OUT_PACKET_RATE
----------------------------------
The number of successful physical packets per second sent through the network
interface during the interval. Successful packets are those that have been
processed without errors or collisions.
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
BYNETIF_PACKET_RATE
----------------------------------
The number of successful physical packets per second sent and received through
the network interface during the interval. Successful packets are those that
have been processed without errors or collisions.
If BYNETIF_NET_TYPE is “ESXVLan”, then this metric shows the values for the
Lan card in the host.
Physical statistics are packets recorded by the network drivers. These
numbers most likely will not be the same as the logical statistics. The
values returned for the loopback interface will show “na” for the physical
statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer
of the networking subsystem. Not all packets seen by IP will go out and come
in through a network driver. An example is the loopback interface
(127.0.0.1). Pings or other network generating commands (ftp, rlogin, and so
forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP
addresses on remote systems will change physical driver statistics.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
BYNETIF_QUEUE
----------------------------------
The length of the outbound queue at the time of the last sample. This metric
will be the same as the “Outbound Queue Length” value from the output of the
“lanadmin” utility.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On HP-UX, this metric is only available for LAN interfaces. For WAN (Wide-
Area Network) interfaces such as ATM and X.25, with interface names such as
el, cip/ixe, and netisdn, this metric returns “na”.
BYNETIF_UTIL
----------------------------------
The percentage of bandwidth used with respect to the total available bandwidth
on a given network interface at the end of the interval.
On vMA, this value will be N/A for Lan cards of type ESXVLan.
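The exact formula used by the agent is not documented here. As a purely
hypothetical Python illustration, utilization could be estimated as byte
throughput relative to BYNETIF_NET_SPEED (assuming 1 KB = 1024 bytes):

    def netif_util(in_kb_per_sec, out_kb_per_sec, speed_mbit):
        # Total traffic in bits per second divided by the bandwidth of the
        # interface in bits per second, as a percentage.
        bits_per_sec = (in_kb_per_sec + out_kb_per_sec) * 1024 * 8
        return 100.0 * bits_per_sec / (speed_mbit * 1000000.0)

    # Example: 2500 KB/s in plus 1500 KB/s out on a 100 Mbit interface.
    print(netif_util(2500, 1500, 100))   # about 32.8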
BYPROTOCOL_IN_PACKET
----------------------------------
The number of successful packets received via this protocol during the
interval. Successful packets are those that have been processed without
errors or collisions.
On Windows systems, the packet size for NBT connections is defined as 1 Kbyte.
BYPROTOCOL_IN_PACKET_RATE
----------------------------------
The number of successful packets per second received via this protocol during
the interval. Successful packets are those that have been processed without
errors or collisions.
On Windows systems, the packet size for NBT connections is defined as 1 Kbyte.
BYPROTOCOL_OUT_PACKET
----------------------------------
The number of successful packets sent via this protocol during the interval.
Successful packets are those that have been processed without errors or
collisions.
On Windows systems, the packet size for NBT connections is defined as 1 Kbyte.
BYPROTOCOL_OUT_PACKET_RATE
----------------------------------
The number of successful packets per second sent via this protocol during the
interval. Successful packets are those that have been processed without
errors or collisions.
On Windows systems, the packet size for NBT connections is defined as 1 Kbyte.
DATE
----------------------------------
The date the information in this record was captured, based on local time.
The date is an ASCII field in mm/dd/yyyy format unless localized. If
localized, the separators may be different and the subfields may be in a
different sequence. In ASCII files this field will always contain 10
characters. Each subfield (mm, dd, yyyy) will contain a leading zero if the
value is less than 10. This metric is extracted from GBL_STATTIME, which is
obtained using the time() system call at the time of data collection.
This field responds to language localization. For example, in Italy the field
would appear as dd/mm/yyyy and in Japan it would be yyyy/mm/dd.
In binary files this field is in MPE CALENDAR format in the least significant
16 bits of the field. The most significant 16 bits should all be zero.
Dividing the field by 512 will isolate the year (that is, 94). This field MOD
512 will isolate the day of the year.
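For illustration, a minimal Python sketch of the binary decoding described
above:

    def decode_mpe_calendar(field):
        # The most significant 16 bits are zero; year = field / 512 and
        # day of year = field MOD 512, as described above.
        year = (field & 0xFFFF) // 512
        day_of_year = field % 512
        return year, day_of_year

    # Example: (94 * 512) + 32 decodes to year 94, day 32.
    print(decode_mpe_calendar(94 * 512 + 32))   # (94, 32)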
DATE_SECONDS
----------------------------------
The time that the data in this record was captured, expressed in seconds since
January 1, 1970, based on local time. This is related to the standard time-
stamp returned by the Unix system call time(), but has had the local time zone
correction applied.
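For illustration, a Python sketch that reproduces this value, assuming the
correction simply re-expresses the local broken-down time as seconds since the
epoch:

    import calendar
    import time

    # time() returns UTC-based epoch seconds; treating the local
    # broken-down time as if it were UTC applies the local time zone
    # correction described above.
    now = time.time()
    date_seconds = calendar.timegm(time.localtime(now))
    print(date_seconds)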
DAY
----------------------------------
The Julian day of the year that the data in this record was captured. This
metric is extracted from GBL_STATTIME.
FS_BLOCK_SIZE
----------------------------------
The maximum block size of this file system, in bytes.
A value of “na” may be displayed if the file system is not mounted. If the
product is restarted, these unmounted file systems are not displayed until
remounted.
FS_DEVNAME
----------------------------------
On Unix systems, this is the path name string of the current device.
On Windows, this is the disk drive string of the current device.
On HP-UX, this is the “fsname” parameter in the mount(1M) command. For NFS
devices, this includes the name of the node exporting the file system. It is
possible that a process may mount a device using the mount(2) system call.
This call does not update “/etc/mnttab”, so the device name is blank. This
situation is rare, and should be corrected by syncer(1M). Note that once a
device is mounted, its entry is displayed, even after the device is unmounted,
until the midaemon process terminates.
On SUN, this is the path name string of the current device, or “tmpfs” for
memory based file systems. See tmpfs(7).
FS_DEVNO
----------------------------------
On Unix systems, this is the major and minor number of the file system.
On Windows, this is the unit number of the disk device on which the logical
disk resides.
The scope collector logs the value of this metric in decimal format.
FS_DIRNAME
----------------------------------
On Unix systems, this is the path name of the mount point of the file system.
On Windows, this is the drive letter associated with the selected disk
partition.
On HP-UX, this is the path name of the mount point of the file system if the
logical volume has a mounted file system. This is the directory parameter of
the mount(1M) command for most entries. Exceptions are:
* For lvm swap areas, this field contains “lvm swap device”.
* For logical volumes with no mounted file systems, this field contains
  “Raw Logical Volume” (relevant only to Perf Agent).
On HP-UX, the file names are in the same order as shown in the
“/usr/sbin/mount -p” command. File systems are not displayed until they
exhibit IO activity once the midaemon has been started. Also, once a device
is displayed, it continues to be displayed (even after the device is
unmounted) until the midaemon process terminates.
On SUN, only “UFS”, “HSFS” and “TMPFS” file systems are listed. See mount(1M)
and mnttab(4). “TMPFS” file systems are memory based filesystems and are
listed here for convenience. See tmpfs(7).
On AIX, see mount(1M) and filesystems(4). On OSF1, see mount(2).
FS_MAX_SIZE
----------------------------------
The maximum size, in MB, that this file system could reach if full.
Note that this is the user space capacity - it is the file system space
accessible to non-root users. On most Unix systems, the df command shows the
total file system capacity which includes the extra file system space
accessible to root users only.
In the df output, the equivalent fields to look at are “used” and “avail”. To
calculate the maximum size in MB for the target file system, use:
FS Max Size = (used + avail)/1024
A value of “na” may be displayed if the file system is not mounted. If the
product is restarted, these unmounted file systems are not displayed until
remounted.
On HP-UX, this metric is updated at 4 minute intervals to minimize collection
overhead.
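For example, a small Python sketch that applies the formula above to the
“used” and “avail” columns of “df -k” (1-KB blocks); the column positions
assume a typical Unix df layout and may differ by platform:

    import subprocess

    def fs_max_size_mb(mount_point):
        # FS Max Size = (used + avail) / 1024, using the 1-KB block counts
        # reported by "df -k" for the target file system.
        out = subprocess.run(["df", "-k", mount_point], capture_output=True,
                             text=True, check=True).stdout.splitlines()
        cols = out[1].split()
        used, avail = int(cols[2]), int(cols[3])
        return (used + avail) / 1024.0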
FS_REQUEST_QUEUE
----------------------------------
The average number of read and write I/O requests that were queued for the
selected filesystem during the interval.
FS_SPACE_RESERVED
----------------------------------
The amount of file system space in MBs reserved for superuser allocation.
On AIX, this metric is typically zero for local filesystems because by default
AIX does not reserve any file system space for the superuser.
FS_SPACE_USED
----------------------------------
The amount of file system space in MBs that is being used.
FS_SPACE_UTIL
----------------------------------
Percentage of the file system space in use during the interval.
Note that this is the user space capacity - it is the file system space
accessible to non-root users. On most Unix systems, the df command shows the
total file system capacity which includes the extra file system space
accessible to root users only.
A value of “na” may be displayed if the file system is not mounted. If the
product is restarted, these unmounted file systems are not displayed until
remounted.
On HP-UX, this metric is updated at 4 minute intervals to minimize collection
overhead.
FS_TYPE
----------------------------------
A string indicating the file system type. On Unix systems, some of the
possible types are:
hfs - user file system
ufs - user file system
ext2 - user file system
cdfs - CD-ROM file system
vxfs - Veritas (vxfs) file system
nfs - network file system
nfs3 - network file system Version 3
On Windows, some of the possible types are:
NTFS - New Technology File System
FAT - 16-bit File Allocation Table
FAT32 - 32-bit File Allocation Table
FAT uses a 16-bit file allocation table entry (2^16 clusters).
FAT32 uses a 32-bit file allocation table entry. However, Windows 2000
reserves the first 4 bits of a FAT32 file allocation table entry, which means
FAT32 has a theoretical maximum of 2^28 clusters. NTFS is the native file
system of Windows NT and beyond.
GBL_ACTIVE_CPU
----------------------------------
The number of CPUs online on the system.
For HP-UX and certain versions of Linux, the sar(1M) command allows you to
check the status of the system CPUs.
For SUN and DEC, the commands psrinfo(1M) and psradm(1M) allow you to check or
change the status of the system CPUs.
For AIX, the pstat(1) command allows you to check the status of the system
CPUs.
On AIX System WPARs, this metric value is identical to the value on AIX Global
Environment if RSET is not configured for the System WPAR. If RSET is
configured for the System WPAR, this metric value will report the number of
CPUs in the RSET.
On Solaris non-global zones with Uncapped CPUs, this metric shows data from
the global zone.
GBL_ACTIVE_CPU_CORE
----------------------------------
This metric provides the total number of active CPU cores on a physical
system.
GBL_ACTIVE_PROC
----------------------------------
An active process is one that exists and consumes some CPU time.
GBL_ACTIVE_PROC is the sum of the alive-process-time/interval-time ratios of
every process that is active (uses any CPU time) during an interval.
The following diagram of a four second interval during which two processes
exist on the system should be used to understand the above definition. Note
the difference between active processes, which consume CPU time, and alive
processes which merely exist on the system.
           ----------- Seconds -----------
   Proc      1          2          3          4
   ----    ----       ----       ----       ----
    A      live       live       live       live
    B      live/CPU   live/CPU   live       dead
Process A is alive for the entire four second interval but consumes no CPU.
A’s contribution to GBL_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to
GBL_ACTIVE_PROC. B’s contribution to GBL_ALIVE_PROC is 3*1/4. B contributes
2*1/4 to GBL_ACTIVE_PROC. Thus, for this interval, GBL_ACTIVE_PROC equals 0.5
and GBL_ALIVE_PROC equals 1.75.
Because a process may be alive but not active, GBL_ACTIVE_PROC will always be
less than or equal to GBL_ALIVE_PROC.
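The example can be reproduced with a short Python calculation: each process
contributes alive-seconds/interval to GBL_ALIVE_PROC and
active-seconds/interval to GBL_ACTIVE_PROC.

    interval = 4.0   # seconds in the interval

    # Seconds alive and seconds consuming CPU, taken from the diagram above.
    processes = {"A": {"alive": 4, "active": 0},
                 "B": {"alive": 3, "active": 2}}

    gbl_alive_proc = sum(p["alive"] / interval for p in processes.values())
    gbl_active_proc = sum(p["active"] / interval for p in processes.values())
    print(gbl_active_proc, gbl_alive_proc)   # 0.5 1.75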
This metric is a good overall indicator of the workload of the system. An
unusually large number of active processes could indicate a CPU bottleneck.
To determine if the CPU is a bottleneck, compare this metric with
GBL_CPU_TOTAL_UTIL and GBL_RUN_QUEUE. If GBL_CPU_TOTAL_UTIL is near 100
percent and GBL_RUN_QUEUE is greater than one, there is a bottleneck.
On non-HP-UX systems, this metric is derived from sampled process data.
Since the data for a process is not available after the process has died on
these operating systems, a process whose life is shorter than the sampling
interval may not be seen when the samples are taken. Thus this metric may be
slightly less than the actual value. Increasing the sampling frequency
captures a more accurate count, but the overhead of collection may also rise.
GBL_ALIVE_PROC
----------------------------------
An alive process is one that exists on the system. GBL_ALIVE_PROC is the sum
of the alive-process-time/interval-time ratios for every process.
The following diagram of a four second interval during which two processes
exist on the system should be used to understand the above definition. Note
the difference between active processes, which consume CPU time, and alive
processes which merely exist on the system.
           ----------- Seconds -----------
   Proc      1          2          3          4
   ----    ----       ----       ----       ----
    A      live       live       live       live
    B      live/CPU   live/CPU   live       dead
Process A is alive for the entire four second interval but consumes no CPU.
A’s contribution to GBL_ALIVE_PROC is 4*1/4. A contributes 0*1/4 to
GBL_ACTIVE_PROC. B’s contribution to GBL_ALIVE_PROC is 3*1/4. B contributes
2*1/4 to GBL_ACTIVE_PROC. Thus, for this interval, GBL_ACTIVE_PROC equals 0.5
and GBL_ALIVE_PROC equals 1.75.
Because a process may be alive but not active, GBL_ACTIVE_PROC will always be
less than or equal to GBL_ALIVE_PROC.
On non-HP-UX systems, this metric is derived from sampled process data.
Since the data for a process is not available after the process has died on
these operating systems, a process whose life is shorter than the sampling
interval may not be seen when the samples are taken. Thus this metric may be
slightly less than the actual value. Increasing the sampling frequency
captures a more accurate count, but the overhead of collection may also rise.
GBL_APP_THRESHOLD
----------------------------------
appthreshold specifies the threshold for the APPLICATION class. This is the
percentage of CPU utilized by an application (APP_CPU_TOTAL_UTIL) during the
interval.
This threshold value is supplied by the parm file. An application must exceed
this threshold value in any given interval before it is considered interesting
enough to be logged.
GBL_BOOT_TIME
----------------------------------
The date and time when the system was last booted.
GBL_BYCPU_THRESHOLD
----------------------------------
bycputhreshold specifies the threshold for the CPU class. This is the
percentage of time a CPU was busy (BYCPU_CPU_TOTAL_UTIL) during the interval.
This threshold value is supplied by the parm file. A CPU must exceed this
threshold value in any given interval before it is considered interesting
enough to be logged.
GBL_BYDSK_THRESHOLD
----------------------------------
diskthreshold specifies the threshold for the DISK class. This is the
percentage of time that a disk was busy performing I/O (BYDSK_UTIL) during the
interval.
This threshold value is supplied by the parm file. A disk must exceed this
threshold value in any given interval before it is considered interesting
enough to be logged.
GBL_BYFS_THRESHOLD
----------------------------------
fsthreshold specifies the threshold for the FILESYSTEM class. This is the
percentage of space used (FS_SPACE_UTIL) on the filesystem.
This threshold value is supplied by the parm file. A filesystem must exceed
this threshold value in any given interval before it is considered interesting
enough to be logged.
GBL_BYNETIF_THRESHOLD
----------------------------------
bynetifthreshold specifies the threshold for the NETIF class. This is the
number of packets transferred per second during the interval
(BYNETIF_PACKET_RATE).
This threshold value is supplied by the parm file. A network interface must
exceed this threshold value in any given interval before it is considered
interesting enough to be logged.
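Conceptually, each of these parm file thresholds acts as a logging filter. The
following Python sketch is illustrative only; the threshold values shown are
hypothetical, not shipped defaults.

    # Class -> (parm keyword, key metric, hypothetical threshold value).
    thresholds = {
        "APPLICATION": ("appthreshold", "APP_CPU_TOTAL_UTIL", 10.0),
        "CPU": ("bycputhreshold", "BYCPU_CPU_TOTAL_UTIL", 90.0),
        "DISK": ("diskthreshold", "BYDSK_UTIL", 10.0),
        "FILESYSTEM": ("fsthreshold", "FS_SPACE_UTIL", 70.0),
        "NETIF": ("bynetifthreshold", "BYNETIF_PACKET_RATE", 60.0),
    }

    def is_logged(klass, metric_value):
        # An instance is considered interesting for an interval only when
        # its key metric exceeds the class threshold.
        return metric_value > thresholds[klass][2]

    print(is_logged("DISK", 12.5))   # True: this disk would be logged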
GBL_COLLECTOR
----------------------------------
An ASCII field containing the collector name and version. The collector name will
appear as either “SCOPE/xx V.UU.FF.LF” or “Coda RV.UU.FF.LF”. xx identifies
the platform; V = version, UU = update level, FF = fix level, and LF = lab fix
id. For example, SCOPE/UX C.04.00.00; or Coda A.07.10.04.
GBL_COLLECT_INTERVAL
----------------------------------
The interval, in seconds, at which non-process metrics are collected.
Collection intervals are set in the parm file.
GBL_COLLECT_INTERVAL_PROC
----------------------------------
The interval, in seconds, at which process metrics are collected. Collection
intervals are set in the parm file.
GBL_COMPLETED_PROC
----------------------------------
The number of processes that terminated during the interval.
On non-HP-UX systems, this metric is derived from sampled process data.
Since the data for a process is not available after the process has died on
these operating systems, a process whose life is shorter than the sampling
interval may not be seen when the samples are taken. Thus this metric may be
slightly less than the actual value. Increasing the sampling frequency
captures a more accurate count, but the overhead of collection may also rise.
GBL_CPU_CLOCK
----------------------------------
The clock speed of the CPUs, in MHz, if all of the processors have the same
clock speed. Otherwise, “na” is shown because the processors have different
clock speeds. Note that Linux supports dynamic frequency scaling; if it is
enabled, the CPU speed can change with varying load.
GBL_CPU_CYCLE_ENTL_MAX
----------------------------------
On a recognized VMware ESX guest, where VMware guest SDK is enabled, this
value indicates the maximum processor capacity, in MHz, configured for this
logical system. The value is -3 if entitlement is ‘Unlimited’ for this
logical system.
On a recognized VMware ESX guest, where VMware guest SDK is disabled, the
value is “na”.
On a standalone system, the value is the sum of the clock speeds of the
individual CPUs.
GBL_CPU_CYCLE_ENTL_MIN
----------------------------------
On a recognized VMware ESX guest, where VMware guest SDK is enabled, this
value indicates the minimum processor capacity, in MHz, configured for this
logical system.
On a recognized VMware ESX guest, where VMware guest SDK is disabled, the
value is “na”.
On a standalone system, the value is the sum of the clock speeds of the
individual CPUs.
GBL_CPU_ENTL_MAX
----------------------------------
In a virtual environment, this metric indicates the maximum number of
processing units configured for this logical system.
On AIX SPLPAR, this metric is equivalent to “Maximum Capacity” field of
‘lparstat -i’ command.
On a recognized VMware ESX guest the value is equivalent to
GBL_CPU_CYCLE_ENTL_MAX represented in CPU units.
On a recognized VMware ESX guest, where VMware guest SDK is disabled, the
value is “na”.
On a standalone system, the value is the same as GBL_NUM_CPU.
GBL_CPU_ENTL_MIN
----------------------------------
In a virtual environment, this metric indicates the minimum number of
processing units configured for this Logical system.
On AIX SPLPAR, this metric is equivalent to “Minimum Capacity” field of
‘lparstat -i’ command.
On a recognized VMware ESX guest, where VMware guest SDK is enabled, the value
is equivalent to GBL_CPU_CYCLE_ENTL_MIN represented in CPU units.
On a recognized VMware ESX guest, where VMware guest SDK is disabled, the
value is “na”.
On a standalone system, the value is the same as GBL_NUM_CPU.
GBL_CPU_ENTL_UTIL
----------------------------------
Percentage of entitled processing units (guaranteed processing units allocated
to this logical system) consumed by the logical system.
On AIX, this metric is calculated as:
GBL_CPU_ENTL_UTIL = (GBL_CPU_PHYSC / GBL_CPU_ENTL) * 100
On a recognized VMware ESX guest, where VMware guest SDK is enabled, this
metric is calculated as:
GBL_CPU_ENTL_UTIL = (GBL_CPU_PHYSC / GBL_CPU_ENTL_MIN) * 100
On a recognized VMware ESX guest, where VMware guest SDK is disabled, the
value is “na”.
On a standalone system, the value is the same as GBL_CPU_TOTAL_UTIL.
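A minimal Python rendering of the formulas above. Note that on an Uncapped
partition the consumed capacity (GBL_CPU_PHYSC) can exceed the entitlement, so
the result can exceed 100 percent.

    def cpu_entl_util(physc, entitlement):
        # GBL_CPU_ENTL_UTIL = (GBL_CPU_PHYSC / entitlement) * 100
        return (physc / entitlement) * 100.0

    # Example: 0.75 physical processors consumed against an entitlement of
    # 0.5 processing units, possible on an Uncapped partition.
    print(cpu_entl_util(0.75, 0.5))   # 150.0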
GBL_CPU_HISTOGRAM
----------------------------------
Histogram of CPU utilization components.
Shows breakout:
GBL_CPU_TOTAL_UTIL = GBL_CPU_SYS_MODE_UTIL
+ GBL_CPU_USER_MODE_UTIL
+ GBL_CPU_INTERRUPT_UTIL
ASCII and BINARY files contain a line of ASCII characters that make up one row
of a printed histogram. This can be a quick way to get a graphical view of
CPU usage on a character-mode terminal display.
GBL_CPU_IDLE_TIME
----------------------------------
The time, in seconds, that the CPU was idle during the interval. This is the
total idle time, including waiting for I/O.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online.
On AIX System WPARs, this metric value is calculated against physical cpu
time.
On Solaris non-global zones, this metric is N/A.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system. If the ignore_mt flag is not set (false) in the parm
file, this metric will report values normalized against the number of threads
in the system. This flag will be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
GBL_CPU_IDLE_UTIL
----------------------------------
The percentage of time that the CPU was idle during the interval. This is the
total idle time, including waiting for I/O.
On Unix systems, this is the same as the sum of the “%idle” and “%wio” fields
reported by the “sar -u” command.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online.
On Solaris non-global zones, this metric is N/A.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system. If the ignore_mt flag is not set (false) in the parm
file, this metric will report values normalized against the number of threads
in the system. This flag will be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
GBL_CPU_INTERRUPT_TIME
----------------------------------
The time, in seconds, that the CPU spent processing interrupts during the
interval.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On Hyper-V host, this metric is NA.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system. If the ignore_mt flag is not set (false) in the parm
file, this metric will report values normalized against the number of threads
in the system. This flag will be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
GBL_CPU_INTERRUPT_UTIL
----------------------------------
The percentage of time that the CPU spent processing interrupts during the
interval.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On Hyper-V host, this metric is NA.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system. If the ignore_mt flag is not set (false) in the parm
file, this metric will report values normalized against the number of threads
in the system. This flag will be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
GBL_CPU_MT_ENABLED
----------------------------------
On AIX, this metric indicates whether this (logical) system has SMT enabled.
On other platforms, this metric shows whether HyperThreading (HT) is Enabled
or Disabled/Not Supported.
On Linux, this state is dynamic: if HyperThreading is enabled but all the CPUs
have only one logical processor enabled, this metric will report that HT is
disabled.
On AIX System WPARs, this metric is NA.
On Windows, this metric will be “na” on Windows Server 2003 Itanium systems.
GBL_CPU_PHYSC
----------------------------------
The number of physical processors utilized by the logical system.
On an Uncapped logical system (partition), this value will be equal to the
physical processor capacity used by the logical system during the interval.
This can be more than the value entitled for a logical system.
On a standalone system, the value is calculated based on GBL_CPU_TOTAL_UTIL.
GBL_CPU_PHYS_TOTAL_UTIL
----------------------------------
The percentage of time the available physical CPUs were not idle for this
logical system during the interval.
On AIX, this metric is calculated as:
GBL_CPU_PHYS_TOTAL_UTIL = GBL_CPU_PHYS_USER_MODE_UTIL + GBL_CPU_PHYS_SYS_MODE_UTIL
GBL_CPU_PHYS_TOTAL_UTIL + GBL_CPU_PHYS_WAIT_UTIL + GBL_CPU_PHYS_IDLE_UTIL = 100%
On POWER5-based systems, traditional sample-based calculations cannot be made
because the dispatch cycle for each of the virtual CPUs is not the same. The
POWER5 processor therefore maintains a per-thread register, PURR. At every
processor clock cycle, the PURR of the thread that is dispatching instructions
(or that last dispatched an instruction) is incremented, so each cycle is
attributed to one of the two threads. The POWER5 processor also maintains two
more registers: the timebase, which is incremented at every tick, and the
decrementer, which provides periodic interrupts.
In a Shared LPAR environment, the PURR equals the time that a virtual
processor has spent on a physical processor. The Hypervisor maintains a
virtual timebase, which is the same as the sum of the two PURRs.
On a Capped Shared logical system (partition), GBL_CPU_PHYS_USER_MODE_UTIL is
calculated as:
(delta PURR in user mode / entitlement) * 100
On an Uncapped Shared logical system (partition):
(delta PURR in user mode / entitlement consumed) * 100
The calculations for the other utilizations, such as
GBL_CPU_PHYS_SYS_MODE_UTIL and GBL_CPU_PHYS_WAIT_UTIL, are similar.
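A schematic Python sketch of the Capped and Uncapped calculations above, with
delta-PURR and entitlement expressed in the same units for the interval; the
function and parameter names are illustrative.

    def phys_user_mode_util(delta_purr_user, entitlement, consumed=None):
        # Capped:   (delta PURR in user mode / entitlement) * 100
        # Uncapped: divide by the entitlement actually consumed instead.
        base = consumed if consumed is not None else entitlement
        return (delta_purr_user / base) * 100.0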
On a standalone system, the value will be equivalent to GBL_CPU_TOTAL_UTIL.
On AIX System WPARs, this metric value is calculated against physical cpu
time.
GBL_CPU_SHARES_PRIO
----------------------------------
The weightage/priority assigned to an Uncapped logical system. This value
determines the minimum share of unutilized processing units that this logical
system can utilize.
On AIX SPLPAR, this value is dependent on the available processing units in
the pool and can range from 0 to 255.
On a recognized VMware ESX guest, this value can range from 1 to 100000.
On a standalone system the value will be “na”.
GBL_CPU_SYS_MODE_TIME
----------------------------------
The time, in seconds, that the CPU was in system mode during the interval.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system. If the ignore_mt flag is not set (false) in the parm
file, this metric will report values normalized against the number of threads
in the system. This flag will be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
On AIX System WPARs, this metric value is calculated against physical cpu
time.
On Hyper-V host, this metric indicates the time spent in Hypervisor code.
GBL_CPU_SYS_MODE_UTIL
----------------------------------
Percentage of time the CPU was in system mode during the interval.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
This metric is a subset of the GBL_CPU_TOTAL_UTIL percentage.
This is NOT a measure of the amount of time used by system daemon processes,
since most system daemons spend part of their time in user mode and part in
system calls, like any other process.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system. If the ignore_mt flag is not set (false) in the parm
file, this metric will report values normalized against the number of threads
in the system. This flag will be a no-op if Multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
High system mode CPU percentages are normal for IO intensive applications.
Abnormally high system mode CPU percentages can indicate that a hardware
problem is causing a high interrupt rate. It can also indicate programs that
are not making system calls efficiently. On a logical system, this metric
indicates the percentage of time the logical processor was in kernel mode
during this interval.
On Hyper-V host, this metric indicates the percentage of time spent in
Hypervisor code.
GBL_CPU_TOTAL_TIME
----------------------------------
The total time, in seconds, that the CPU was not idle in the interval.
This is calculated as
GBL_CPU_TOTAL_TIME =
GBL_CPU_USER_MODE_TIME +
GBL_CPU_SYS_MODE_TIME
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
On AIX System WPARs, this metric value is calculated against physical CPU
time.
GBL_CPU_TOTAL_UTIL
----------------------------------
Percentage of time the CPU was not idle during the interval.
This is calculated as
GBL_CPU_TOTAL_UTIL =
GBL_CPU_USER_MODE_UTIL +
GBL_CPU_SYS_MODE_UTIL
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
GBL_CPU_TOTAL_UTIL +
GBL_CPU_IDLE_UTIL = 100%
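As a worked example, the following Python sketch (hypothetical values)
derives the total from its components and checks the idle identity:

    user_mode_util = 22.5    # GBL_CPU_USER_MODE_UTIL
    sys_mode_util = 7.5      # GBL_CPU_SYS_MODE_UTIL

    total_util = user_mode_util + sys_mode_util   # GBL_CPU_TOTAL_UTIL = 30.0
    idle_util = 100.0 - total_util                # GBL_CPU_IDLE_UTIL  = 70.0
    assert abs(total_util + idle_util - 100.0) < 1e-9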
This metric varies widely on most systems, depending on the workload. A
consistently high CPU utilization can indicate a CPU bottleneck, especially
when other indicators such as GBL_RUN_QUEUE and GBL_ACTIVE_PROC are also high.
High CPU utilization can also occur on systems that are bottlenecked on
memory, because the CPU spends more time paging and swapping.
NOTE: On Windows, this metric may not equal the sum of the APP_CPU_TOTAL_UTIL
metrics. Microsoft states that “this is expected behavior” because this
GBL_CPU_TOTAL_UTIL metric is taken from the performance library Processor
objects while the APP_CPU_TOTAL_UTIL metrics are taken from the Process
objects. Microsoft states that there can be CPU time accounted for in the
Processor system objects that may not be seen in the Process objects.
On a logical system, this metric indicates the logical utilization with
respect to the number of processors available to the logical system
(GBL_NUM_CPU).
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
GBL_CPU_USER_MODE_TIME
----------------------------------
The time, in seconds, that the CPU was in user mode during the interval.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
On AIX System WPARs, this metric value is calculated against physical CPU
time.
On Hyper-V host, this metric indicates the time spent in guest code.
GBL_CPU_USER_MODE_UTIL
----------------------------------
The percentage of time the CPU was in user mode during the interval.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
This metric is a subset of the GBL_CPU_TOTAL_UTIL percentage.
On a system with multiple CPUs, this metric is normalized. That is, the CPU
used over all processors is divided by the number of processors online. This
represents the usage of the total processing capacity available.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system.
If the ignore_mt flag is not set (false) in the parm file, this metric will
report values normalized against the number of threads in the system.
This flag is a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
High user mode CPU percentages are normal for computation-intensive
applications. Low values of user CPU utilization compared to relatively high
values for GBL_CPU_SYS_MODE_UTIL can indicate an application or hardware
problem.
On a logical system, this metric indicates the percentage of time the logical
processor was in user mode during this interval.
On Hyper-V host, this metric indicates the percentage of time spent in guest
code.
GBL_CSWITCH_RATE
----------------------------------
The average number of context switches per second during the interval.
On HP-UX, this includes context switches that result in the execution of a
different process and those caused by a process stopping, then resuming, with
no other process running in the meantime.
On Windows, this includes switches from one thread to another either inside a
single process or across processes. A thread switch can be caused either by
one thread asking another for information or by a thread being preempted by
another higher priority thread becoming ready to run.
On Solaris non-global zones with Uncapped CPUs, this metric shows data from
the global zone.
GBL_DISK_CACHE_READ
----------------------------------
The number of cached reads made during the interval.
GBL_DISK_CACHE_READ_RATE
----------------------------------
The number of cached reads per second made during the interval.
GBL_DISK_HISTOGRAM
----------------------------------
Histogram of physical Disk IO rate components.
On HP-UX, this shows a breakout of:
GBL_DISK_PHYS_IO_RATE =
GBL_DISK_VM_IO_RATE + GBL_DISK_SYSTEM_IO_RATE +
GBL_DISK_FS_IO_RATE + GBL_DISK_RAW_IO_RATE
On SUN systems, this shows a breakout of:
GBL_DISK_PHYS_IO_RATE =
GBL_DISK_BLOCK_READ_RATE + GBL_DISK_BLOCK_WRITE_RATE +
GBL_DISK_RAW_READ_RATE + GBL_DISK_RAW_WRITE_RATE +
GBL_DISK_VM_IO_RATE
On the remaining Unix systems, this shows a breakout of:
GBL_DISK_PHYS_IO_RATE =
GBL_DISK_BLOCK_IO_RATE + GBL_DISK_VM_IO_RATE +
GBL_DISK_RAW_IO_RATE
On Windows, this shows a breakout of:
GBL_DISK_PHYS_IO_RATE =
GBL_DISK_PHYS_READ_RATE + GBL_DISK_PHYS_WRITE_RATE
ASCII and BINARY files contain a line of ASCII characters that make up one row
of a printed histogram. This can be a quick way to get a graphical view of
Disk usage on a character-mode terminal display.
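As an illustration only (not the product's actual layout), one row of such a
character histogram could be rendered from the Windows breakout with a Python
sketch like this, one character per IO per second:

    phys_read_rate = 12.0    # GBL_DISK_PHYS_READ_RATE, hypothetical
    phys_write_rate = 5.0    # GBL_DISK_PHYS_WRITE_RATE, hypothetical

    row = "R" * int(round(phys_read_rate)) + "W" * int(round(phys_write_rate))
    print(f"Disk IO |{row:<40}| {phys_read_rate + phys_write_rate:.1f}/s")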
GBL_DISK_LOGL_READ
----------------------------------
On most systems, this is the number of logical reads made during the interval.
On SUN, this is the number of logical block reads made during the interval. On
Windows, this includes both buffered (cached) read requests and unbuffered
reads.
Only local disks are counted in this measurement. NFS devices are excluded.
On many Unix systems, logical disk IOs are measured by counting the read
system calls that are directed to disk devices. Also counted are read system
calls made indirectly through other system calls, including readv, recvfrom,
recv, recvmsg, ipcrecvcn, send, sendto, sendmsg, and ipcsend.
On many Unix systems, there are several reasons why logical IOs may not
correspond with physical IOs. Logical IOs may not always result in a physical
disk access, since the data may already reside in memory -- either in the
buffer cache, or in virtual memory if the IO is to a memory mapped file.
Several logical IOs may all map to the same physical page or block. In these
two cases, logical IOs are greater than physical IOs.
The reverse can also happen. A single logical write can cause a physical read
to fetch the block to be updated from disk, and then cause a physical write to
put it back on disk. A single logical IO can require more than one physical
page or block, and these can be found on different disks. Mirrored disks
further distort the relationship between logical and physical IO, since
physical writes are doubled.
GBL_DISK_LOGL_READ_RATE
----------------------------------
On most systems, this is the average number of logical reads per second made
during the interval. On SUN, this is the average number of logical block
reads per second made during the interval. On Windows, this includes both
buffered (cached) read requests and unbuffered reads.
Only local disks are counted in this measurement. NFS devices are excluded.
On many Unix systems, logical disk IOs are measured by counting the read
system calls that are directed to disk devices. Also counted are read system
calls made indirectly through other system calls, including readv, recvfrom,
recv, recvmsg, ipcrecvcn, send, sendto, sendmsg, and ipcsend.
On many Unix systems, there are several reasons why logical IOs may not
correspond with physical IOs. Logical IOs may not always result in a physical
disk access, since the data may already reside in memory -- either in the
buffer cache, or in virtual memory if the IO is to a memory mapped file.
Several logical IOs may all map to the same physical page or block. In these
two cases, logical IOs are greater than physical IOs.
The reverse can also happen. A single logical write can cause a physical read
to fetch the block to be updated from disk, and then cause a physical write to
put it back on disk. A single logical IO can require more than one physical
page or block, and these can be found on different disks. Mirrored disks
further distort the relationship between logical and physical IO, since
physical writes are doubled.
On Solaris non-global zones with Uncapped CPUs, this metric shows data from
the global zone.
GBL_DISK_PHYS_BYTE
----------------------------------
The number of KBs transferred to and from disks during the interval. The
bytes for all types of physical IOs are counted. Only local disks are counted
in this measurement. NFS devices are excluded.
It is not directly related to the number of IOs, since IO requests can be of
differing lengths.
On Unix systems, this includes file system IO, virtual memory IO, and raw IO.
On Windows, all types of physical IOs are counted.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at
boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_BYTE_RATE
----------------------------------
The average number of KBs per second at which data was transferred to and from
disks during the interval. The bytes for all types of physical IOs are counted.
Only local disks are counted in this measurement. NFS devices are excluded.
This is a measure of the physical data transfer rate. It is not directly
related to the number of IOs, since IO requests can be of differing lengths.
This is an indicator of how much data is being transferred to and from disk
devices. Large spikes in this metric can indicate a disk bottleneck.
On Unix systems, all types of physical disk IOs are counted, including file
system, virtual memory, and raw reads.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at
boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_IO
----------------------------------
The number of physical IOs during the interval. Only local disks are counted
in this measurement. NFS devices are excluded.
On Unix systems, all types of physical disk IOs are counted, including file
system IO, virtual memory IO and raw IO.
On HP-UX, this is calculated as
GBL_DISK_PHYS_IO =
GBL_DISK_FS_IO +
GBL_DISK_VM_IO +
GBL_DISK_SYSTEM_IO +
GBL_DISK_RAW_IO
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at
boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_IO_RATE
----------------------------------
The number of physical IOs per second during the interval. Only local disks
are counted in this measurement. NFS devices are excluded.
On Unix systems, all types of physical disk IOs are counted, including file
system IO, virtual memory IO and raw IO.
On HP-UX, this is calculated as
GBL_DISK_PHYS_IO_RATE =
GBL_DISK_FS_IO_RATE +
GBL_DISK_VM_IO_RATE +
GBL_DISK_SYSTEM_IO_RATE +
GBL_DISK_RAW_IO_RATE
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at
boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_READ
----------------------------------
The number of physical reads during the interval. Only local disks are
counted in this measurement. NFS devices are excluded.
On Unix systems, all types of physical disk reads are counted, including file
system, virtual memory, and raw reads.
On HP-UX, there are many reasons why there is not a direct correlation between
the number of logical IOs and physical IOs. For example, small sequential
logical reads may be satisfied from the buffer cache, resulting in fewer
physical IOs than logical IOs. Conversely, large logical IOs or small random
IOs may result in more physical than logical IOs. Logical volume mappings,
logical disk mirroring, and disk striping also tend to remove any correlation.
On HP-UX, this is calculated as
GBL_DISK_PHYS_READ =
GBL_DISK_FS_READ +
GBL_DISK_VM_READ +
GBL_DISK_SYSTEM_READ +
GBL_DISK_RAW_READ
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at
boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_READ_BYTE_RATE
----------------------------------
The average number of KBs transferred from the disk per second during the
interval. Only local disks are counted in this measurement. NFS devices are
excluded.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at
boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_READ_PCT
----------------------------------
The percentage of physical reads of total physical IO during the interval.
Only local disks are counted in this measurement. NFS devices are excluded.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at
boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
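A minimal Python sketch of the underlying ratio, using hypothetical interval
counts:

    phys_reads = 420         # GBL_DISK_PHYS_READ
    phys_writes = 180        # GBL_DISK_PHYS_WRITE

    total_io = phys_reads + phys_writes           # GBL_DISK_PHYS_IO
    read_pct = 100.0 * phys_reads / total_io if total_io else 0.0
    print(read_pct)                               # 70.0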
GBL_DISK_PHYS_READ_RATE
----------------------------------
The number of physical reads per second during the interval. Only local disks
are counted in this measurement. NFS devices are excluded.
On Unix systems, all types of physical disk reads are counted, including file
system, virtual memory, and raw reads.
On HP-UX, this is calculated as
GBL_DISK_PHYS_READ_RATE =
GBL_DISK_FS_READ_RATE +
GBL_DISK_VM_READ_RATE +
GBL_DISK_SYSTEM_READ_RATE +
GBL_DISK_RAW_READ_RATE
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at
boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_WRITE
----------------------------------
The number of physical writes during the interval. Only local disks are
counted in this measurement. NFS devices are excluded.
On Unix systems, all types of physical disk writes are counted, including file
system IO, virtual memory IO, and raw writes.
On HP-UX, since this value is reported by the drivers, multiple physical
requests that have been collapsed to a single physical operation (due to
driver IO merging) are only counted once.
On HP-UX, there are many reasons why there is not a direct correlation between
logical IOs and physical IOs. For example, small logical writes may end up
entirely in the buffer cache, and later generate fewer physical IOs when
written to disk due to the larger IO size. Or conversely, small logical
writes may require physical prefetching of the corresponding disk blocks
before the data is merged and posted to disk. Logical volume mappings,
logical disk mirroring, and disk striping also tend to remove any correlation.
On HP-UX, this is calculated as
GBL_DISK_PHYS_WRITE =
GBL_DISK_FS_WRITE +
GBL_DISK_VM_WRITE +
GBL_DISK_SYSTEM_WRITE +
GBL_DISK_RAW_WRITE
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at
boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_WRITE_BYTE_RATE
----------------------------------
The average number of KBs transferred to the disk per second during the
interval. Only local disks are counted in this measurement. NFS devices are
excluded.
On Unix systems, all types of physical disk writes are counted, including file
system IO, virtual memory IO, and raw writes.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at
boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_PHYS_WRITE_RATE
----------------------------------
The number of physical writes per second during the interval. Only local
disks are counted in this measurement. NFS devices are excluded.
On Unix systems, all types of physical disk writes are counted, including file
system IO, virtual memory IO, and raw writes.
On HP-UX, since this value is reported by the drivers, multiple physical
requests that have been collapsed to a single physical operation (due to
driver IO merging) are only counted once.
On HP-UX, this is calculated as
GBL_DISK_PHYS_WRITE_RATE =
GBL_DISK_FS_WRITE_RATE +
GBL_DISK_VM_WRITE_RATE +
GBL_DISK_SYSTEM_WRITE_RATE +
GBL_DISK_RAW_WRITE_RATE
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at
boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_REQUEST_QUEUE
----------------------------------
The total length of all of the disk queues at the end of the interval.
Some Linux kernels, typically 2.2 and older kernels, do not support the
instrumentation needed to provide values for this metric. This metric will be
“na” on the affected kernels. The “sar -d” command will also not be present
on these systems. Distributions and OS releases that are known to be affected
include: TurboLinux 7, SuSE 7.2, and Debian 3.0.
On SUN, if a CD drive is powered off, or no CD is inserted in the CD drive at
boottime, the operating system does not provide performance data for that
device. This can be determined by checking the “by-disk” data when provided
in a product. If the CD drive has an entry in the list of active disks on a
system, then data for that device is being collected.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_TIME_PEAK
----------------------------------
The time, in seconds, during the interval that the busiest disk was performing
IO transfers. This is for the busiest disk only, not all disk devices. This
counter is based on an end-to-end measurement for each IO transfer updated at
queue entry and exit points.
Only local disks are counted in this measurement. NFS devices are excluded.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_DISK_UTIL_PEAK
----------------------------------
The utilization of the busiest disk during the interval.
On HP-UX, this is the percentage of time during the interval that the busiest
disk device had IO in progress from the point of view of the Operating System.
On all other systems, this is the percentage of time during the interval that
the busiest disk was performing IO transfers.
It is not an average utilization over all the disk devices. Only local disks
are counted in this measurement. NFS devices are excluded.
Some Linux kernels, typically 2.2 and older kernels, do not support the
instrumentation needed to provide values for this metric. This metric will be
“na” on the affected kernels. The “sar -d” command will also not be present
on these systems. Distributions and OS releases that are known to be affected
include: TurboLinux 7, SuSE 7.2, and Debian 3.0.
A peak disk utilization of more than 50 percent often indicates a disk IO
subsystem bottleneck situation. A bottleneck may not be in the physical disk
drive itself, but elsewhere in the IO path.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
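A minimal Python sketch of the peak semantics, using hypothetical per-disk
busy times; note that the peak is selected across devices, not averaged:

    interval = 60.0                               # seconds
    busy_seconds = {"disk0": 9.0, "disk1": 42.0, "disk2": 3.0}

    utils = {d: 100.0 * t / interval for d, t in busy_seconds.items()}
    print(max(utils.values()))                    # 70.0 -- above the 50% guideline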
GBL_FLUSH
----------------------------------
The flush parameter specifies the interval, in seconds, at which scope logs
the application and device data classes even when the data does not meet the
configured threshold conditions.
The flush parameter is set in the parm file.
GBL_FS_SPACE_UTIL_PEAK
----------------------------------
The percentage of occupied disk space to total disk space for the fullest file
system found during the interval. Only locally mounted file systems are
counted in this metric.
This metric can be used as an indicator that at least one file system on the
system is running out of disk space.
On Unix systems, CDROM and PC file systems are also excluded. This metric can
exceed 100 percent. This is because a portion of the file system space is
reserved as a buffer and can only be used by root. If the root user has
filled the file system beyond the reserved buffer, the utilization will be
greater than 100 percent. This is a dangerous situation since if the root
user totally fills the file system, the system may crash.
On Windows, CDROM file systems are also excluded.
On Solaris non-global zones, this metric shows data from the global zone.
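A hedged Python sketch of the "fullest file system" idea using statvfs; the
mount list is hypothetical, and the product's exclusion rules (NFS, CDROM, PC
file systems) and reserved-buffer accounting are not reproduced exactly:

    import os

    mount_points = ["/", "/var", "/home"]         # hypothetical local mounts

    def space_util_pct(path):
        st = os.statvfs(path)
        used = st.f_blocks - st.f_bfree           # occupied blocks
        # The product excludes the root-only reserve from its total, which
        # is how its value can exceed 100 percent; this approximation uses
        # occupied / (occupied + blocks available to non-root users).
        denom = used + st.f_bavail
        return 100.0 * used / denom if denom else 0.0

    print(max(space_util_pct(m) for m in mount_points))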
GBL_GMTOFFSET
----------------------------------
The difference, in minutes, between local time and GMT (Greenwich Mean Time).
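A minimal Python sketch of the same quantity (local time minus GMT, in
minutes):

    from datetime import datetime, timezone

    offset = datetime.now(timezone.utc).astimezone().utcoffset()
    print(int(offset.total_seconds()) // 60)      # e.g. -420 on UTC-7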
GBL_IGNORE_MT
----------------------------------
This boolean value indicates whether the CPU normalization is on or off. If
the metric value is “true”, CPU related metrics in the global class will
report values which are normalized against the number of active cores on the
system.
If the metric value is “false”, CPU related metrics in the global class will
report values which are normalized against the number of CPU threads on the
system.
If CPU multithreading is turned off, this configuration option is a no-op and
the metric value will be “true”.
On Linux, this metric will only report “true” if this configuration is on and
if the kernel provides enough information to determine whether MultiThreading
is turned on.
On HPUX, this metric will report “na” if the processor doesn’t support the
feature.
GBL_INTERRUPT
----------------------------------
The number of IO interrupts during the interval.
On Solaris non-global zones with Uncapped CPUs, this metric shows data from
the global zone.
GBL_INTERRUPT_RATE
----------------------------------
The average number of IO interrupts per second during the interval.
On HPUX and SUN, this value includes clock interrupts. To get non-clock
device interrupts, subtract the clock interrupt rate from this value.
On Solaris non-global zones with Uncapped CPUs, this metric shows data from
the global zone.
GBL_INTERVAL
----------------------------------
The amount of time in the interval.
This measured interval is slightly larger than the desired or configured
interval if the collection program is delayed by a higher priority process and
cannot sample the data immediately.
GBL_LOADAVG
----------------------------------
The 1 minute load average of the system obtained at the time of logging.
On Windows, this is the load average of the system over the interval, that
is, the average number of threads that have been waiting in the ready state
during the interval. It is obtained by sampling the number of threads in the
ready state at every sub proc interval, accumulating the samples, and
averaging them over the interval.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_LOGFILE_VERSION
----------------------------------
Three byte ASCII field containing the log file version number. The log file
version is assigned by scopeux and is incremented when a change to the log
file causes the layout to be different from previous versions. The current version
is “ D”. Every effort is made to protect the information investment
maintained in historical log files by providing forward compatibility and/or
conversion utilities when log files change.
GBL_LOGGING_TYPES
----------------------------------
A 13-byte field indicating the types of data logged by the collector. This is
controlled by the LOG statement in the parm file. Each position will contain
either a space or the characters as shown below. Note that positions two (all
applications) and four (all processes) were implemented for HP internal use
only and are not normally used outside of HP. An @ in position two indicates
that all applications are logged each five minute interval even if they had no
activity during the interval. An @ in position four indicates that all
processes, not just the interesting ones, are logged each one minute interval.
This can result in very large log files. An @ in position 6 indicates that
all devices (File System Device, Disk, CPU, LAN, Logical Volume) are logged.
Position Char Meaning
1 G Global data
2 @ All applications
3 A Applications
4 @ All processes
5 P Interesting processes
6 @ All Devices
7 F File System Device
8 D Disk
9 C CPU
10 L LAN
11 V Logical Volume
12 T Transaction data
13 space Not used
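A hedged Python sketch of decoding this field against the position table
above; the sample value is hypothetical:

    logging_types = "G A P@FDCLVT "               # 13 characters, one per position

    positions = {1: "Global data", 2: "All applications", 3: "Applications",
                 4: "All processes", 5: "Interesting processes",
                 6: "All devices", 7: "File system device", 8: "Disk",
                 9: "CPU", 10: "LAN", 11: "Logical volume",
                 12: "Transaction data"}

    assert len(logging_types) == 13
    for pos, meaning in positions.items():
        if logging_types[pos - 1] != " ":
            print(f"position {pos}: {meaning}")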
GBL_LS_MODE
----------------------------------
Indicates whether the CPU entitlement for the logical system is Capped or
Uncapped.
On a recognized VMware ESX guest, where VMware guest SDK is enabled, the value
is “Uncapped” if the maximum CPU entitlement (GBL_CPU_ENTL_MAX) is unlimited.
Otherwise, the value is “Capped”.
GBL_LS_ROLE
----------------------------------
Indicates whether Perf Agent is installed on a logical system, a host, or a
standalone system. This metric will be either “GUEST”, “HOST” or “STAND”.
GBL_LS_SHARED
----------------------------------
In a virtual environment, this metric indicates whether the physical CPUs are
dedicated to this Logical system or shared.
On AIX SPLPAR, this metric is equivalent to the “Type” field of the
‘lparstat -i’ command.
On a recognized VMware ESX guest, where VMware guest SDK is enabled, the value
is “Shared”.
On a standalone system, the value of this metric is “Dedicated”.
On AIX System WPARs, this metric is NA.
GBL_LS_TYPE
----------------------------------
The virtualization technology, if applicable. The value of this metric is “HPVM”
on HP-UX host, “LPAR” on AIX LPAR, “Sys WPAR” on system WPAR, “Zone” on
Solaris Zones, “VMware” on recognized VMware ESX guest and VMware ESX Server
console, “Hyper-V” on Hyper-V host, else “NoVM”.
In conjunction with GBL_LS_ROLE this metric could be used to identify the
environment in which Perf Agent/Glance is running. For example, if
GBL_LS_ROLE is “Guest” and GBL_LS_TYPE is “VMware” then PA/Glance is running
on a VMware Guest.
GBL_MACHINE
----------------------------------
An ASCII string representing the processor architecture. The machine hardware
model is represented by the GBL_MACHINE_MODEL metric.
GBL_MACHINE_MEM_USED
----------------------------------
The amount of physical host memory currently consumed for this logical
system’s physical memory. On a standalone system, the value will be
(GBL_MEM_UTIL * GBL_MEM_PHYS) / 100
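For example, in Python (hypothetical values):

    gbl_mem_util = 62.5      # percent of physical memory in use
    gbl_mem_phys = 8192.0    # physical memory, in MB

    print(gbl_mem_util * gbl_mem_phys / 100)      # 5120.0 MB consumed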
GBL_MEM_AVAIL
----------------------------------
The amount of available physical memory in the system (in MBs unless
otherwise specified).
On Windows, memory resident operating system code and data is not included as
available memory.
On Solaris non-global zones with uncapped memory, this metric value is the
same as seen in the global zone.
GBL_MEM_CACHE
----------------------------------
The amount of physical memory (in MBs unless otherwise specified) used by the
buffer cache during the interval.
On HP-UX 11i v2 and below, the buffer cache is a memory pool used by the
system to stage disk IO data for the driver.
On HP-UX 11i v3 and above this metric value represents the usage of the file
system buffer cache which is still being used for file system metadata.
On SUN, this value is obtained by multiplying the system page size times the
number of buffer headers (nbuf). For example, on a SPARCstation 10 the buffer
size is usually (200 (page size buffers) * 4096 (bytes/page) = 800 KB).
On SUN, the buffer cache is a memory pool used by the system to cache inode,
indirect block and cylinder group related disk accesses. This is different
from the traditional concept of a buffer cache that also holds file system
data. On Solaris 5.X, as file data is cached, accesses to it show up as
virtual memory IOs. File data caching occurs through memory mapping managed
by the virtual memory system, not through the buffer cache. The “nbuf” value
is dynamic, but it is very hard to create a situation where the memory cache
metrics change, since most systems have more than adequate space for inode,
indirect block, and cylinder group data caching. This cache is more heavily
utilized on NFS file servers.
On AIX, this value should be minimal since most disk IOs are done through
memory mapped files.
GBL_MEM_CACHE_FLUSH_RATE
----------------------------------
The rate at which the file system cache has flushed its contents to disk as
the result of a request to flush or to satisfy a write-through file write
request.
GBL_MEM_CACHE_HIT_PCT
----------------------------------
On HP-UX, the percentage of buffer cache reads resolved from the buffer cache
(rather than going to disk) during the interval. Buffer cache reads can occur
as a result of a logical read (for example, file read system call), a read
generated by a client, a read-ahead on behalf of a logical read or a system
procedure.
On HP-UX, this metric is obtained by measuring the number of buffered read
calls that were satisfied by the data that was in the file system buffer
cache. Reads to filesystem file buffers that are not in the buffer cache
result in disk IO. Reads to raw IO and virtual memory IO (including memory
mapped files), do not go through the filesystem buffer cache, and so are not
relevant to this metric.
On HP-UX, a low cache hit rate may indicate low efficiency of the buffer
cache, either because applications have poor data locality or because the
buffer cache is too small. Overly large buffer cache sizes can lead to a
memory bottleneck. The buffer cache should be sized small enough so that
pageouts do not occur even when the system is busy. However, in the case of
VxFS, all memory-mapped IOs show up as page ins/page outs and are not a result
of memory pressure.
On AIX, the percentage of disk reads that were satisfied in the file system
buffer cache (rather than going to disk) during the interval.
On AIX, the traditional file system buffer cache is not normally used, since
files are implicitly memory mapped and the access is through the virtual
memory system rather than the buffer cache. However, if a file is read as a
block device (e.g., /dev/hdisk1), the file system buffer cache is used, making
this metric meaningful in that situation. If no IO through the buffer cache
occurs during the interval, this metric is 0.
On the remaining Unix systems, this is the percentage of logical reads
satisfied in memory (rather than going to disk) during the interval. This
includes inode, indirect block and cylinder group related disk reads, plus
file reads from files memory mapped by the virtual memory IO system.
On Windows, this is the percentage of buffered reads satisfied in the buffer
cache (rather than going to disk) during the interval. This metric is
obtained by measuring the number of buffered read calls that were satisfied by
the data that was in the system buffer cache. Reads that are not in the
buffer cache result in disk IO. Unbuffered IO and virtual memory IO
(including memory mapped files), are not counted in this metric.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_MEM_CACHE_UTIL
----------------------------------
The percentage of physical memory used by the buffer cache during the
interval.
On HP-UX 11i v2 and below, the buffer cache is a memory pool used by the
system to stage disk IO data for the driver.
On HP-UX 11i v3 and above this metric value represents the usage of the file
system buffer cache which is still being used for file system metadata.
On SUN, this percentage is based on calculating the buffer cache size by
multiplying the system page size times the number of buffer headers (nbuf).
For example, on a SPARCstation 10 the buffer size is usually (200 (page size
buffers) * 4096 (bytes/page) = 800 KB).
On SUN, the buffer cache is a memory pool used by the system to cache inode,
indirect block and cylinder group related disk accesses. This is different
from the traditional concept of a buffer cache that also holds file system
data. On Solaris 5.X, as file data is cached, accesses to it show up as
virtual memory IOs. File data caching occurs through memory mapping managed
by the virtual memory system, not through the buffer cache. The “nbuf” value
is dynamic, but it is very hard to create a situation where the memory cache
metrics change, since most systems have more than adequate space for inode,
indirect block, and cylinder group data caching. This cache is more heavily
utilized on NFS file servers.
On AIX, this value should be minimal since most disk IOs are done through
memory mapped files.
On Windows, the value reports the ‘copy read hit %’ and ‘Pin read hit %’
counters.
GBL_MEM_DATAMAP_HIT_PCT
----------------------------------
The percentage of data maps in the file system cache that could be resolved
without having to retrieve a page from the disk, because the page was already
in physical memory.
GBL_MEM_ENTL_MAX
----------------------------------
In a virtual environment, this metric indicates the maximum amount of memory
configured for this logical system. The value is -3 if entitlement is
‘Unlimited’ for this logical system.
On a recognized VMware ESX guest, where VMware guest SDK is disabled, the
value is “na”.
On Solaris non-global zones, this metric value is equivalent to the
‘capped-memory’ value of the ‘zonecfg -z zonename info’ command.
On a standalone system, this metric is equivalent to GBL_MEM_PHYS.
GBL_MEM_ENTL_MIN
----------------------------------
In a virtual environment, this metric indicates the minimum amount of memory
configured for this logical system.
On a recognized VMware ESX guest, where VMware guest SDK is disabled, the
value is “na”.
On a standalone system, this metric is equivalent to GBL_MEM_PHYS.
GBL_MEM_FREE
----------------------------------
The amount of memory not allocated (in MBs unless otherwise specified). As
this value drops, the likelihood increases that swapping or paging out to disk
may occur to satisfy new memory requests.
On SUN, low values for this metric may not indicate a true memory shortage.
This metric can be influenced by the VMM (Virtual Memory Management) system.
On uncapped Solaris zones, the metric indicates the amount of memory across
the whole system that is not consumed by the global zone and the other
non-global zones. On capped Solaris zones, the metric indicates the amount of
memory, out of the configured memory cap, that is not consumed by this zone.
On Linux, this metric is the sum of ‘free’ and ‘cached’ memory.
On Solaris non-global zones with uncapped memory, this metric value is the
same as seen in the global zone.
Locality Domain metrics are available on HP-UX 11iv2 and above. GBL_MEM_FREE
and LDOM_MEM_FREE, as well as the memory utilization metrics derived from
them, may not always fully match. GBL_MEM_FREE represents free memory in the
kernel’s reservation layer while LDOM_MEM_FREE shows actual free pages. If
memory has been reserved but not actually consumed from the Locality Domains,
the two values won’t match. Because GBL_MEM_FREE includes pre-reserved memory,
the GBL_MEM_* metrics are a better indicator of actual memory consumption in
most situations.
GBL_MEM_FREE_UTIL
----------------------------------
The percentage of physical memory that was free at the end of the interval.
On Solaris non-global zones with uncapped memory, this metric value is the
same as seen in the global zone.
GBL_MEM_LOCKED
----------------------------------
The amount of physical memory (in KBs unless otherwise specified) marked as
locked memory at the end of the interval. This includes memory locked by
processes, kernel and driver code, and cannot exceed the available physical
memory on the system.
This is the total non-paged pool memory usage. This memory is allocated from
the system-wide non-paged pool, and is not affected by the pageout process.
The kernel and driver code use the non-paged pool for data that should always
be in physical memory. The size of the non-paged pool is limited to
approximately 128 MB on Windows NT systems and to 256 MB on Windows 2000
systems. A failure to allocate memory from the non-paged pool can cause a
system crash.
GBL_MEM_LOCKED_UTIL
----------------------------------
The percentage of physical memory marked as locked memory at the end of the
interval. This includes memory locked by processes, kernel and driver code.
This is the total non-paged pool memory usage. This memory is allocated from
the system-wide non-paged pool, and is not affected by the pageout process.
The kernel and driver code use the non-paged pool for data that should always
be in physical memory. The size of the non-paged pool is limited to
approximately 128 MB on Windows NT systems and to 256 MB on Windows 2000
systems. A failure to allocate memory from the non-paged pool can cause a
system crash.
GBL_MEM_OVERHEAD
----------------------------------
The amount of “overhead” memory associated with this logical system that is
currently consumed on the host system. On VMware ESX Server console, the
value is equivalent to the sum of the current overhead memory for all running
virtual machines. On a standalone system, the value will be 0. On a
recognized VMware ESX guest, where VMware guest SDK is disabled, the value is
“na”.
GBL_MEM_PAGEIN
----------------------------------
The total number of page ins from the disk during the interval.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and file
systems.
On Windows, this includes paging activity for both file systems and paging
space.
On HP-UX, this is the same as the “page ins” value from the “vmstat -s”
command. On AIX, this is the same as the “paging space page ins” value.
Remember that “vmstat -s” reports cumulative counts.
On Solaris non-global zones with uncapped memory, this metric value is the
same as seen in the global zone.
GBL_MEM_PAGEIN_RATE
----------------------------------
The total number of page ins per second from the disk during the interval.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and file
systems.
On Windows, this includes paging activity for both file systems and paging
space.
On HP-UX and AIX, this is the same as the “pi” value from the vmstat command.
On Solaris, this is the same as the sum of the “epi” and “api” values from the
“vmstat -p” command, divided by the page size in KB.
On Solaris non-global zones with uncapped memory, this metric value is the
same as seen in the global zone.
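A minimal Python sketch of the Solaris conversion described above; the
figures are hypothetical, and “vmstat -p” reports epi and api in KB per
second:

    epi_kb_per_sec = 96.0    # executable page ins, KB/s
    api_kb_per_sec = 32.0    # anonymous page ins, KB/s
    page_size_kb = 8.0       # e.g. 8 KB pages on SPARC

    print((epi_kb_per_sec + api_kb_per_sec) / page_size_kb)   # 16.0 page ins/s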
GBL_MEM_PAGEOUT
----------------------------------
The total number of page outs to the disk during the interval.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and file
systems.
On Windows, this includes paging activity for both file systems and paging
space.
On HP-UX, this is the same as the “page outs” value from the “vmstat -s”
command. On HP-UX 11iv3 and above, this also includes file cache page outs.
On AIX, this is the same as the “paging space page outs” value. Remember that
“vmstat -s” reports cumulative counts.
On Solaris non-global zones with uncapped memory, this metric value is the
same as seen in the global zone.
GBL_MEM_PAGEOUT_RATE
----------------------------------
The total number of page outs to the disk per second during the interval.
On HP-UX, Solaris, Linux and AIX, this reflects paging activity between
memory and paging space. It does not include activity between memory and file
systems.
On Windows, this includes paging activity for both file systems and paging
space.
On HP-UX and AIX, this is the same as the “po” value from the vmstat command.
On Solaris, this is the same as the sum of the “epo” and “apo” values from the
“vmstat -p” command, divided by the page size in KB.
On Windows, this counter also includes paging traffic on behalf of the system
cache to access file data for applications and so may be high when there is no
memory pressure.
On Solaris non-global zones with uncapped memory, this metric value is the
same as seen in the global zone.
GBL_MEM_PAGE_FAULT
----------------------------------
The number of page faults that occurred during the interval.
On Linux, this metric is available only on kernel versions 2.6 and above.
GBL_MEM_PAGE_FAULT_RATE
----------------------------------
The number of page faults per second during the interval.
On Solaris non-global zones with uncapped memory, this metric value is the
same as seen in the global zone.
GBL_MEM_PAGE_REQUEST
----------------------------------
The number of page requests to or from the disk during the interval.
On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging
space and not to the file system.
On Windows, this includes pages paged to or from both paging space and the
file system.
On HP-UX, this is the same as the sum of the “page ins” and “page outs” values
from the “vmstat -s” command. On AIX, this is the same as the sum of the
“paging space page ins” and “paging space page outs” values. Remember that
“vmstat -s” reports cumulative counts.
On Windows, this counter also includes paging traffic on behalf of the system
cache to access file data for applications and so may be high when there is no
memory pressure.
On Solaris non-global zones with uncapped memory, this metric value is the
same as seen in the global zone.
GBL_MEM_PAGE_REQUEST_RATE
----------------------------------
The number of page requests to or from the disk per second during the
interval.
On HP-UX, Solaris, and AIX, this includes pages paged to or from the paging
space and not to or from the file system.
On Windows, this includes pages paged to or from both paging space and the
file system.
On HP-UX and AIX, this is the same as the sum of the “pi” and “po” values from
the vmstat command.
On Solaris, this is the same as the sum of the “epi”, “epo”, “api”, and “apo”
values from the “vmstat -p” command, divided by the page size in KB.
Higher than normal rates can indicate either a memory or a disk bottleneck.
Compare GBL_DISK_UTIL_PEAK and GBL_MEM_UTIL to determine which resource is
more constrained. High rates may also indicate memory thrashing caused by a
particular application or set of applications. Look for processes with high
major fault rates to identify the culprits.
On Solaris non-global zones with uncapped memory, this metric value is the
same as seen in the global zone.
GBL_MEM_PHYS
----------------------------------
The amount of physical memory in the system (in MBs unless otherwise
specified).
On HP-UX, banks with bad memory are not counted. Note that on some machines,
the Processor Dependent Code (PDC) uses the upper 1MB of memory and thus
reports less than the actual physical memory of the system. Thus, on a system
with 256MB of physical memory, this metric and dmesg(1M) might only report
267,386,880 bytes (255MB). This is all the physical memory that software on
the machine can access.
On Windows, this is the total memory available, which may be slightly less
than the total amount of physical memory present in the system. This value is
also reported in the Control Panel’s About Windows NT help topic.
On Linux, this is the amount of memory reported by dmesg(1M). If the value is
not available in the kernel ring buffer, then the sum of system memory and
available memory is reported as physical memory.
On Solaris non-global zones with uncapped memory, this metric value is the
same as seen in the global zone.
GBL_MEM_PHYS_SWAPPED
----------------------------------
On a recognized VMware ESX guest, where VMware guest SDK is enabled, this
metric indicates the amount of memory that has been reclaimed by the ESX
Server from this logical system by transparently swapping the logical
system’s memory to disk. The value is “na” otherwise.
GBL_MEM_SHARES_PRIO
----------------------------------
The weight (priority) for memory assigned to this logical system. This value
influences the share of unutilized physical memory that this logical system
can utilize. On a recognized VMware ESX guest, where VMware guest SDK is
enabled, this value can range from 0 to 100000. The value will be “na”
otherwise.
GBL_MEM_SYS
----------------------------------
The amount of physical memory (in MBs unless otherwise specified) used by the
system (kernel) during the interval. System memory does not include the
buffer cache. On HP-UX and Linux, this also excludes the file cache.
On HP-UX 11.0, this metric does not include some kinds of dynamically
allocated kernel memory. This has always been reported in the GBL_MEM_USER*
metrics.
On HP-UX 11.11 and beyond, this metric includes some kinds of dynamically
allocated kernel memory.
On Solaris non-global zones, this metric shows value as 0.
GBL_MEM_SYS_AND_CACHE_UTIL
----------------------------------
The percentage of physical memory used by the system (kernel) and the buffer
cache at the end of the interval.
On HP-UX 11iv3, this also includes the file cache.
On HP-UX 11.0, this metric does not include some kinds of dynamically
allocated kernel memory. This has always been reported in the GBL_MEM_USER*
metrics.
On HP-UX 11.11 and beyond, this metric includes some kinds of dynamically
allocated kernel memory.
On Solaris non-global zones, this metric is N/A.
GBL_MEM_SYS_UTIL
----------------------------------
The percentage of physical memory used by the system during the interval.
System memory does not include the buffer cache. On HP-UX and Linux, this
also excludes the file cache.
On HP-UX 11.0, this metric does not include some kinds of dynamically
allocated kernel memory. This has always been reported in the GBL_MEM_USER*
metrics.
On HP-UX 11.11 and beyond, this metric includes some kinds of dynamically
allocated kernel memory.
On Solaris non-global zones, this metric shows value as 0.
GBL_MEM_USER
----------------------------------
The amount of physical memory (in MBs unless otherwise specified) allocated to
user code and data at the end of the interval. User memory regions include
code, heap, stack, and other data areas including shared memory. This does
not include memory for the buffer cache. On HP-UX and Linux, this also
excludes the file cache.
On HP-UX 11.0, this metric includes some kinds of dynamically allocated
kernel memory.
On HP-UX 11.11 and beyond, this metric does not include some kinds of
dynamically allocated kernel memory. This is now reported in the GBL_MEM_SYS*
metrics.
Large fluctuations in this metric can be caused by programs which allocate
large amounts of memory and then either release the memory or terminate. A
slow continual increase in this metric may indicate a program with a memory
leak.
GBL_MEM_USER_UTIL
----------------------------------
The percent of physical memory allocated to user code and data at the end of
the interval. This metric shows the percent of memory owned by user memory
regions such as user code, heap, stack and other data areas including shared
memory. This does not include memory for the buffer cache. On HP-UX and
Linux, this also excludes the file cache.
On HP-UX 11.0, this metric includes some kinds of dynamically allocated
kernel memory.
On HP-UX 11.11 and beyond, this metric does not include some kinds of
dynamically allocated kernel memory. This is now reported in the GBL_MEM_SYS*
metrics.
Large fluctuations in this metric can be caused by programs which allocate
large amounts of memory and then either release the memory or terminate. A
slow continual increase in this metric may indicate a program with a memory
leak.
GBL_MEM_UTIL
----------------------------------
The percentage of physical memory in use during the interval. This includes
system memory (occupied by the kernel), buffer cache and user memory.
On HP-UX 11iv3 and above, this includes the file cache. The file cache is
excluded when the cachemem parameter in the parm file is set to free.
On HP-UX, this calculation is done using the byte values for physical memory
and used memory, and is therefore more accurate than comparing the reported
kilobyte values for physical memory and used memory.
On Linux, the value of this metric includes the file cache when the cachemem
parameter in the parm file is set to user.
On SUN, high values for this metric may not indicate a true memory shortage.
This metric can be influenced by the VMM (Virtual Memory Management) system.
This excludes the ZFS ARC cache when the cachemem parameter in the parm file
is set to free.
On AIX, this excludes the file cache when the cachemem parameter in the parm
file is set to free.
Locality Domain metrics are available on HP-UX 11iv2 and above. GBL_MEM_FREE
and LDOM_MEM_FREE, as well as the memory utilization metrics derived from
them, may not always fully match. GBL_MEM_FREE represents free memory in the
kernel’s reservation layer while LDOM_MEM_FREE shows actual free pages. If
memory has been reserved but not actually consumed from the Locality Domains,
the two values won’t match. Because GBL_MEM_FREE includes pre-reserved memory,
the GBL_MEM_* metrics are a better indicator of actual memory consumption in
most situations.
GBL_NET_DEFERRED_PCT
----------------------------------
The percentage of deferred packets to total outbound packet attempts during
the interval. Outbound packet attempts include both packets successfully
transmitted and those that were deferred.
This does not include data for the loopback interface.
On AIX System WPARs, this metric value is identical to the value in the AIX
Global Environment.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_ERROR
----------------------------------
The number of errors that occurred on all network interfaces during the
interval.
This does not include data for the loopback interface.
For HP-UX, this will be the same as the sum of the “Inbound Errors” and
“Outbound Errors” values from the output of the “lanadmin” utility for the
network interface. Remember that “lanadmin” reports cumulative counts. As of
the HP-UX 11.0 release and beyond, “netstat -i” shows network activity on the
logical level (IP) only.
For all other Unix systems, this is the same as the sum of “Ierrs” (RX-ERR on
Linux) and “Oerrs” (TX-ERR on Linux) from the “netstat -i” command for a
network device. See also netstat(1).
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
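A hedged Python sketch of the “netstat -i” summation described above; the
sample output and column layout are illustrative (Linux-style RX-ERR/TX-ERR
columns), and the loopback interface is skipped as noted:

    sample = """\
    Iface   MTU  RX-OK RX-ERR RX-DRP RX-OVR  TX-OK TX-ERR TX-DRP TX-OVR Flg
    eth0   1500  91234      3      0      0  80111      1      0      0 BMRU
    lo    65536   5120      0      0      0   5120      0      0      0 LRU
    """

    total_errors = 0
    for line in sample.splitlines()[1:]:
        fields = line.split()
        if not fields or fields[0] == "lo":
            continue
        total_errors += int(fields[3]) + int(fields[7])   # RX-ERR + TX-ERR
    print(total_errors)                                   # 4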
GBL_NET_ERROR_1_MIN_RATE
----------------------------------
The number of errors per minute on all network interfaces during the interval.
This rate should normally be zero or very small. A large error rate can
indicate a hardware or software problem.
This does not include data for the loopback interface.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
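The conversion from an interval count to a per-minute rate is shown in this
sketch with a hypothetical count; GBL_NET_ERROR_RATE (below) is the same
calculation scaled to one second:
    # Convert an interval error count to per-minute and per-second rates.
    errors_in_interval = 12
    interval_seconds = 300                 # a 5-minute logging interval
    errors_per_minute = errors_in_interval / interval_seconds * 60
    errors_per_second = errors_in_interval / interval_seconds
    print(errors_per_minute, errors_per_second)   # 2.4 0.04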
GBL_NET_ERROR_RATE
----------------------------------
The number of errors per second on all network interfaces during the interval.
This does not include data for loopback interface.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_IN_ERROR_PCT
----------------------------------
The percentage of inbound network errors to total inbound packet attempts
during the interval. Inbound packet attempts include both packets
successfully received and those that encountered errors.
This does not include data for loopback interface.
A large number of errors may indicate a hardware problem on the network. The
percentage of inbound errors to total packets attempted should remain low.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_IN_ERROR_RATE
----------------------------------
The number of inbound errors per second on all network interfaces during the
interval.
This does not include data for loopback interface.
A large number of errors may indicate a hardware problem on the network. The
percentage of inbound errors to total packets attempted should remain low.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_IN_PACKET
----------------------------------
The number of successful packets received through all network interfaces
during the interval. Successful packets are those that have been processed
without errors or collisions.
This does not include data for loopback interface.
For HP-UX, this will be the same as the sum of the “Inbound Unicast Packets”
and “Inbound Non-Unicast Packets” values from the output of the “lanadmin”
utility for the network interface. Remember that “lanadmin” reports
cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i”
shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the sum of the “Ipkts”
(RX-OK on Linux) values from the “netstat -i” command across all network
devices. See also netstat(1).
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On Windows systems, the packet size for NBT connections is defined as 1 Kbyte.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_IN_PACKET_RATE
----------------------------------
The number of successful packets per second received through all network
interfaces during the interval. Successful packets are those that have been
processed without errors or collisions.
This does not include data for loopback interface.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On Windows systems, the packet size for NBT connections is defined as 1 Kbyte.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_OUTQUEUE
----------------------------------
The sum of the outbound queue lengths for all network interfaces
(BYNETIF_QUEUE). This metric is derived from the same source as the Outbound
Queue Length shown in the lanadmin(1M) program.
This does not include data for loopback interface.
For most interfaces, the outbound queue is usually zero. When the value is
non-zero over a period of time, the network may be experiencing a bottleneck.
Determine which network interface has a non-zero queue and compare its traffic
levels to normal. Also see if processes are blocking on network wait states.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
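One way to apply this advice is to flag interfaces whose queue stays non-zero
across several consecutive samples. A sketch using hypothetical per-interface
samples (not a real agent API):
    # Flag interfaces whose outbound queue (BYNETIF_QUEUE) was non-zero in
    # every one of the last few samples.
    samples = {
        "lan0": [0, 0, 1, 0],
        "lan1": [3, 5, 2, 4],    # persistently backed up
    }
    suspects = [nic for nic, q in samples.items() if all(v > 0 for v in q)]
    print(suspects)              # ['lan1']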
GBL_NET_OUT_ERROR_PCT
----------------------------------
The percentage of outbound network errors to total outbound packet attempts
during the interval. Outbound packet attempts include both packets
successfully sent and those that encountered errors.
This does not include data for loopback interface.
The percentage of outbound errors to total packets attempted to be transmitted
should remain low.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_OUT_ERROR_RATE
----------------------------------
The number of outbound errors per second on all network interfaces during the
interval.
This does not include data for loopback interface.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_OUT_PACKET
----------------------------------
The number of successful packets sent through all network interfaces during
the last interval. Successful packets are those that have been processed
without errors or collisions.
This does not include data for loopback interface.
For HP-UX, this will be the same as the sum of the “Outbound Unicast Packets”
and “Outbound Non-Unicast Packets” values from the output of the “lanadmin”
utility for the network interface. Remember that “lanadmin” reports
cumulative counts. As of the HP-UX 11.0 release and beyond, “netstat -i”
shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the sum of the “Opkts”
(TX-OK on Linux) values from the “netstat -i” command across all network
devices. See also netstat(1).
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On Windows systems, the packet size for NBT connections is defined as 1 Kbyte.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_OUT_PACKET_RATE
----------------------------------
The number of successful packets per second sent through the network
interfaces during the interval. Successful packets are those that have been
processed without errors or collisions.
This does not include data for loopback interface.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On Windows systems, the packet size for NBT connections is defined as 1 Kbyte.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_PACKET_RATE
----------------------------------
The number of successful packets per second (both inbound and outbound) for
all network interfaces during the interval. Successful packets are those that
have been processed without errors or collisions.
This does not include data for loopback interface.
This metric is updated at the sampling interval, regardless of the number of
IP addresses on the system.
On Windows systems, the packet size for NBT connections is defined as 1 Kbyte.
On Solaris non-global zones, this metric shows data from the global zone.
GBL_NET_UTIL_PEAK
----------------------------------
The utilization of the busiest (most heavily used) network interface at the
end of the interval.
GBL_NUM_ACTIVE_LS
----------------------------------
The number of active logical systems (LS) hosted on a system. If the
Performance Agent is installed in a guest or on a standalone system, this
value will be 0.
On Solaris non-global zones, this metric shows value as 0.
GBL_NUM_CPU
----------------------------------
The number of physical CPUs on the system. This includes all CPUs, either
online or offline. For HP-UX and certain versions of Linux, the sar(1M)
command allows you to check the status of the system CPUs. For SUN and DEC,
the commands psrinfo(1M) and psradm(1M) allow you to check or change the
status of the system CPUs. For AIX, this metric indicates the maximum number
of CPUs the system ever had.
On a logical system, this metric indicates the number of virtual CPUs
configured. When hardware threads are enabled, this metric indicates the
number of logical processors.
On Solaris non-global zones with Uncapped CPUs, this metric shows data from
the global zone.
On AIX System WPARs, this metric value is identical to the value on AIX
Global Environment.
The Linux kernel currently does not provide any metadata for disabled CPUs.
This means that there is no way to find out the types, speeds, hardware IDs,
or any other information used to determine the number of cores, the number of
threads, the HyperThreading state, and so on. If the agent (or Glance) is
started while some of the CPUs are disabled, some of these metrics will be
“na” and some will be based on what is visible at startup time. All
information will be updated if and when additional CPUs are enabled and
information about them becomes available. The configuration counts will remain
at the highest discovered level (that is, if CPUs are later disabled, the
maximum number of CPUs, cores, and so on will remain at the highest observed
level). It is recommended that the agent be started with all CPUs enabled.
GBL_NUM_CPU_CORE
----------------------------------
This metric provides the total number of CPU cores on a physical system. On
VMs, this metric shows information according to the resources available on
that VM. On non-HP-UX systems, this metric is equivalent to the number of
active CPU cores. On AIX System WPARs, this metric value is identical to the
value on AIX Global Environment. On Windows, this metric will be “na” on
Windows Server 2003 Itanium systems.
The Linux kernel currently does not provide any metadata for disabled CPUs.
This means that there is no way to find out the types, speeds, hardware IDs,
or any other information used to determine the number of cores, the number of
threads, the HyperThreading state, and so on. If the agent (or Glance) is
started while some of the CPUs are disabled, some of these metrics will be
“na” and some will be based on what is visible at startup time. All
information will be updated if and when additional CPUs are enabled and
information about them becomes available. The configuration counts will remain
at the highest discovered level (that is, if CPUs are later disabled, the
maximum number of CPUs, cores, and so on will remain at the highest observed
level). It is recommended that the agent be started with all CPUs enabled.
GBL_NUM_DISK
----------------------------------
The number of disks on the system. Only local disk devices are counted in
this metric.
On HP-UX, this is a count of the number of disks on the system that have ever
had activity over the cumulative collection time.
On Solaris non-global zones, this metric shows value as 0.
On AIX System WPARs, this metric shows value as 0.
GBL_NUM_LS
----------------------------------
The number of logical systems (LS) hosted on a system. If the Performance
Agent is installed in a guest or on a standalone system, this value will be 0.
On Solaris non-global zones, this metric shows value as 0.
GBL_NUM_NETWORK
----------------------------------
The number of network interfaces on the system. This includes the loopback
interface. On certain platforms, this also includes FDDI, Hyperfabric, ATM,
serial software interfaces such as SLIP or PPP, and Wide Area Network (WAN)
interfaces such as ISDN or X.25. The “netstat -i” command also displays the
list of network interfaces on the system.
GBL_NUM_SOCKET
----------------------------------
The number of physical CPU sockets on the system. On VMs, this metric shows
information according to the resources available on that VM.
On Windows, this metric will be “na” on Windows Server 2003 Itanium systems.
GBL_NUM_USER
----------------------------------
The number of users logged in at the time of the interval sample. This is the
same as the command “who | wc -l”.
For Unix systems, the information for this metric comes from the utmp file
which is updated by the login command. For more information, read the man
page for utmp. Some applications may create users on the system without using
login and updating the utmp file. These users are not reflected in this
count.
This metric can be a general indicator of system usage. In a networked
environment, however, users may maintain inactive logins on several systems.
On Windows, the information for this metric comes from the Server Sessions
counter in the Performance Libraries Server object. It is a count of the
number of users using this machine as a file server.
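On a Unix system, the documented equivalence can be reproduced directly; this
sketch simply counts the lines printed by the “who” command:
    # Count logged-in users the same way "who | wc -l" does (Unix only).
    import subprocess
    out = subprocess.run(["who"], capture_output=True, text=True).stdout
    num_users = len(out.splitlines())
    print(num_users)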
GBL_OSNAME
----------------------------------
A string representing the name of the operating system. On Unix systems, this
is the same as the output from the “uname -s” command.
GBL_OSRELEASE
----------------------------------
The current release of the operating system.
On most Unix systems, this is the same as the output from the “uname -r”
command.
On AIX, this is the actual patch level of the operating system. This is
similar to what is returned by the command “lslpp -l bos.rte” as the most
recent level of the COMMITTED Base OS Runtime. For example, “5.2.0”.
GBL_OSVERSION
----------------------------------
A string representing the version of the operating system. This is the same
as the output from the “uname -v” command. This string is limited to 20
characters, and as a result, the complete version name might be truncated.
On Windows, this is a string representing the service pack installed on the
operating system.
GBL_PROC_RUN_TIME
----------------------------------
The average run time, in seconds, for processes that terminated during the
interval.
GBL_PROC_SAMPLE
----------------------------------
The number of process data samples that have been averaged into global metrics
(such as GBL_ACTIVE_PROC) that are based on process samples.
GBL_RUN_QUEUE
----------------------------------
On UNIX systems except Linux, this is the average number of threads waiting in
the runqueue over the interval. The average is computed against the number of
times the run queue is occupied instead of time. The average is updated by the
kernel at a fine grain interval, only when the run queue is occupied. It is
not averaged against the interval and can therefore be misleading for long
intervals when the run queue is empty most or part of the time. This value
matches runq-sz reported by the “sar -q” command. The GBL_LOADAVG* metrics are
better indicators of run queue pressure.
On Linux and Windows, this is an instantaneous value obtained at the time of
logging. On Linux, it shows the number of threads waiting in the runqueue. On
Windows, it shows the Processor Queue Length.
On Unix systems, GBL_RUN_QUEUE will typically be a small number. Larger than
normal values for this metric indicate CPU contention among threads. This CPU
bottleneck is also normally indicated by 100 percent GBL_CPU_TOTAL_UTIL. It
may be OK to have GBL_CPU_TOTAL_UTIL be 100 percent if no other threads are
waiting for the CPU. However, if GBL_CPU_TOTAL_UTIL is 100 percent and
GBL_RUN_QUEUE is greater than the number of processors, it indicates a CPU
bottleneck.
On Windows, the Processor Queue reflects a count of process threads which are
ready to execute. A thread is ready to execute (in the Ready state) when the
only resource it is waiting on is the processor. The Windows operating system
itself has many system threads which intermittently use small amounts of
processor time. Several low priority threads intermittently wake up and
execute for very short intervals. Depending on when the collection process
samples this queue, there may be none or several of these low-priority threads
trying to execute. Therefore, even on an otherwise quiescent system, the
Processor Queue Length can be high. High values for this metric during
intervals where the overall CPU utilization (GBL_CPU_TOTAL_UTIL) is low do not
indicate a performance bottleneck. Relatively high values for this metric
during intervals where the overall CPU utilization is near 100% can indicate a
CPU performance bottleneck.
HP-UX RUN/PRI/CPU Queue differences for multi-CPU systems:
For example, let’s assume we’re using a system with eight processors. We
start eight CPU intensive threads that consume almost all of the CPU
resources. The approximate values shown for the CPU related queue metrics
would be:
GBL_RUN_QUEUE = 1.0
GBL_PRI_QUEUE = 0.1
GBL_CPU_QUEUE = 1.0
Assume we start an additional eight CPU intensive threads. The approximate
values now shown are:
GBL_RUN_QUEUE = 2.0
GBL_PRI_QUEUE = 8.0
GBL_CPU_QUEUE = 16.0
At this point, we have sixteen CPU intensive threads running on the eight
processors. Keeping the definitions of the three queue metrics in mind, the
run queue is 2 (that is, 16 / 8); the pri queue is 8 (only half of the threads
can be active at any given time); and the cpu queue is 16 (half of the threads
waiting in the cpu queue that are ready to run, plus one for each active
thread).
This illustrates that the run queue is the average of number of threads
waiting in the runqueue for all processors; the pri queue is the number of
threads that are blocked on “PRI” (priority); and the cpu queue is the number
of threads in the cpu queue that are ready to run, including the threads using
the CPU.
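The arithmetic behind this example can be restated compactly; the following
sketch just applies the definitions above to the sixteen-thread case:
    # Eight processors running sixteen CPU-intensive threads (example above).
    cpus, threads = 8, 16
    running = min(threads, cpus)     # threads actually on a CPU
    waiting = threads - running      # threads blocked on priority
    run_queue = threads / cpus       # 2.0
    pri_queue = waiting              # 8
    cpu_queue = waiting + running    # 16 (ready threads plus running threads)
    print(run_queue, pri_queue, cpu_queue)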
On Solaris non-global zones, this metric shows data from the global zone.
GBL_SRV_WRKITM_SHORTAGES
----------------------------------
The number of times STATUS_DATA_NOT_ACCEPTED was returned at receive
indication time. This occurs when no work item is available or can be
allocated to service the incoming request.
GBL_STARTED_PROC
----------------------------------
The number of processes that started during the interval.
GBL_STATTIME
----------------------------------
An ASCII string representing the time at the end of the interval, based on
local time.
GBL_SWAP_SPACE_AVAIL
----------------------------------
The total amount of potential swap space, in MB.
On HP-UX, this is the sum of the device swap areas enabled by the swapon
command, the allocated size of any file system swap areas, and the allocated
size of pseudo swap in memory if enabled. Note that this is potential swap
space. This is the same as (AVAIL: total) as reported by the “swapinfo -mt”
command.
On SUN, this is the total amount of swap space available from the physical
backing store devices (disks) plus the amount currently available from main
memory. This is the same as (used + available)/1024, reported by the
“swap -s” command.
On Linux, this is same as (Swap: total) as reported by the “free -m” command.
On Unix systems, this metric is updated every 30 seconds or the sampling
interval, whichever is greater.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_SWAP_SPACE_AVAIL_KB
----------------------------------
The total amount of potential swap space, in KB.
On HP-UX, this is the sum of the device swap areas enabled by the swapon
command, the allocated size of any file system swap areas, and the allocated
size of pseudo swap in memory if enabled. Note that this is potential swap
space. Since swap is allocated in fixed (SWCHUNK) sizes, not all of this
space may actually be usable. For example, on a 61MB disk using 2 MB swap
size allocations, 1 MB remains unusable and is considered wasted space.
On HP-UX, this is the same as (AVAIL: total) as reported by the “swapinfo -t”
command.
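The wasted-space example above follows from simple chunk arithmetic; a sketch
with the same hypothetical numbers:
    # Usable swap when space is allocated in fixed SWCHUNK-sized pieces.
    disk_mb, chunk_mb = 61, 2
    usable_mb = (disk_mb // chunk_mb) * chunk_mb   # 60 MB usable
    wasted_mb = disk_mb - usable_mb                # 1 MB unusable
    print(usable_mb, wasted_mb)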
On SUN, this is the total amount of swap space available from the physical
backing store devices (disks) plus the amount currently available from main
memory. This is the same as (used + available)/1024, reported by the
“swap -s” command.
On Unix systems, this metric is updated every 30 seconds or the sampling
interval, whichever is greater.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_SWAP_SPACE_USED
----------------------------------
The amount of swap space used, in MB.
On HP-UX, “Used” indicates written to disk (or locked in memory), rather than
reserved. This is the same as (USED: total - reserve) as reported by the
“swapinfo -mt” command.
On SUN, “Used” indicates amount written to disk (or locked in memory), rather
than reserved. Swap space is reserved (by decrementing a counter) when
virtual memory for a program is created. This is the same as (bytes
allocated)/1024, reported by the “swap -s” command.
On Linux, this is same as (Swap: used) as reported by the “free -m” command.
On AIX System WPARs, this metric is NA.
On Solaris non-global zones, this metric is N/A.
On Unix systems, this metric is updated every 30 seconds or the sampling
interval, whichever is greater.
GBL_SWAP_SPACE_UTIL
----------------------------------
The percent of available swap space that was being used by running processes
in the interval.
On Windows, this is the percentage of virtual memory, which is available to
user processes, that is in use at the end of the interval. It is not an
average over the entire interval. It reflects the ratio of committed memory
to the current commit limit. The limit may be increased by the operating
system if the paging file is extended. This is the same as (Committed Bytes /
Commit Limit) * 100 when comparing the results to Performance Monitor.
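A minimal sketch of the Windows calculation, assuming hypothetical counter
values in bytes:
    # Windows: utilization = Committed Bytes / Commit Limit * 100.
    committed_bytes = 6_442_450_944     # example: 6 GB committed
    commit_limit = 17_179_869_184       # example: 16 GB commit limit
    util_pct = committed_bytes / commit_limit * 100
    print(round(util_pct, 1))           # 37.5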
On HP-UX, swap space must be reserved (but not allocated) before virtual
memory can be created. If all of available swap is reserved, then no new
processes or virtual memory can be created. Swap space locations are actually
assigned (used) when a page is actually written to disk or locked in memory
(pseudo swap in memory). This is the same as (PCT USED: total) as reported by
the “swapinfo -mt” command.
On Unix systems, this metric is a measure of capacity rather than performance.
As this metric nears 100 percent, processes are not able to allocate any more
memory and new processes may not be able to run. Very low swap utilization
values may indicate that too much area has been allocated to swap, and better
use of disk space could be made by reallocating some swap partitions to be
user filesystems.
On Unix systems, this metric is updated every 30 seconds or the sampling
interval, whichever is greater.
On Solaris non-global zones, this metric is N/A.
On AIX System WPARs, this metric is NA.
GBL_SYSCALL
----------------------------------
The number of system calls during the interval.
High system call rates are normal on busy systems, especially with IO
intensive applications. Abnormally high system call rates may indicate
problems such as a “hung” terminal that is stuck in a loop generating read
system calls.
GBL_SYSCALL_RATE
----------------------------------
The average number of system calls per second during the interval.
High system call rates are normal on busy systems, especially with IO
intensive applications. Abnormally high system call rates may indicate
problems such as a “hung” terminal that is stuck in a loop generating read
system calls.
On HP-UX, system call rates affect the overhead of the midaemon.
Due to the system call instrumentation on HP-UX, the fork and vfork system
calls are double counted. In the case of fork and vfork, one process starts
the system call, but two processes exit.
HP-UX lightweight system calls, such as umask, do not show up in the Glance
System Calls display, but will get added to the global system call rates. If
a process is being traced (debugged) using standard debugging tools (such as
adb or xdb), all system calls used by that process will show up in the System
Calls display while being traced.
On HP-UX, compare this metric to GBL_DISK_LOGL_IO_RATE to see if high system
call rates correspond to high disk IO. GBL_CPU_SYSCALL_UTIL shows the CPU
utilization due to processing system calls.
GBL_SYSTEM_ID
----------------------------------
The network node hostname of the system. This is the same as the output from
the “uname -n” command.
On Windows, this is the name obtained from GetComputerName.
GBL_SYSTEM_UPTIME_HOURS
----------------------------------
The time, in hours, since the last system reboot.
GBL_SYSTEM_UPTIME_SECONDS
----------------------------------
The time, in seconds, since the last system reboot.
GBL_THRESHOLD_CPU
----------------------------------
The percent of CPU that a process must use to become interesting during an
interval. The default for this threshold is “5.0”, which means a process must
have a PROC_CPU_TOTAL_UTIL value of at least 5.0% to exceed this threshold.
All threshold values are supplied by the parm file. A process must exceed at
least one threshold value in any given interval before it will be considered
interesting and be logged.
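Conceptually, the test is an OR across the configured thresholds, as in this
sketch; the threshold names and values here are illustrative only, not actual
parm file keywords:
    # A process is "interesting" (and logged) if it exceeds at least one
    # configured threshold during the interval.
    thresholds = {"cpu": 5.0, "disk": 5.0, "memory": 100.0}   # illustrative
    process = {"cpu": 7.2, "disk": 0.4, "memory": 12.0}
    interesting = any(process[k] > thresholds[k] for k in thresholds)
    print(interesting)   # True: the CPU threshold is exceeded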
GBL_THRESHOLD_NOKILLED
----------------------------------
This is a flag specifying that terminating processes are not interesting. The
flag is set by the THRESHOLD NOKILLED statement in the parm file. If this
flag is set, then the process will be logged only if it exceeds at least one
of the thresholds. The default (blank) is for the flag to be turned off,
which means a terminating process will be logged in the interval it exits even
if it did not exceed any thresholds during that interval. This is so that the
death of a process is recorded even if it does not exceed any of the
thresholds.
On HP-UX, an exception to this is short-lived processes that are alive for
less than one second. By default, short-lived processes are not considered
interesting. However, there is a flag (THRESHOLD_SHORTLIVED) to turn on the
logging of short-lived processes.
GBL_THRESHOLD_NONEW
----------------------------------
This is a flag specifying that newly created processes are not interesting.
The flag is set by the THRESHOLD NONEW statement in the parm file. If this
flag is set, then the process will be logged only if it exceeds at least one
of the thresholds. The default (blank) is for the flag to be turned off,
which means a new process will be logged in the interval it was created even
if it did not exceed any thresholds during that interval. This is so that the
existence of a process is recorded even if it does not exceed any of the
thresholds.
On HP-UX, an exception to this is short-lived processes that are alive for
less than one second. By default, short-lived processes are not considered
interesting. However, there is a flag (THRESHOLD_SHORTLIVED) to turn on the
logging of short-lived processes.
GBL_THRESHOLD_PROCMEM
----------------------------------
The process memory threshold specified in the parm file.
GBL_TT_OVERFLOW_COUNT
----------------------------------
The number of new transactions that could not be measured because the
Measurement Processing Daemon’s (midaemon) Measurement Performance Database is
full. If this happens, the default Measurement Performance Database size is
not large enough to hold all of the registered transactions on this system.
This can be remedied by stopping and restarting the midaemon process using the
-smdvss option to specify a larger Measurement Performance Database size. The
current Measurement Performance Database size can be checked using the
midaemon -sizes option.
GBL_WEB_CACHE_HIT_PCT
----------------------------------
The ratio of cache hits to all cache requests during the interval. Cache hits
occur when a file open, directory listing or service specific object request
is found in the cache.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_CGI_REQUEST_RATE
----------------------------------
The number of CGI requests being processed per second.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_CONNECTION_RATE
----------------------------------
The total number of simultaneous connections to the HTTP, FTP, or gopher
servers during the interval.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_FILES_RECEIVED_RATE
----------------------------------
The rate, in files per second, at which files are received by the HTTP or
FTP servers during the interval.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_FILES_SENT_RATE
----------------------------------
The rate, in files per second, at which files are sent by the HTTP, FTP, or
gopher servers during the interval.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_FTP_READ_BYTE_RATE
----------------------------------
The rate, in KB per second, at which data bytes are received by FTP servers
during the interval.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_FTP_WRITE_BYTE_RATE
----------------------------------
The rate, in KB per second, at which data bytes are sent by FTP servers
during the interval.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_GET_REQUEST_RATE
----------------------------------
The number of GET requests being processed per second.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_GOPHER_READ_BYTE_RATE
----------------------------------
The rate, in KB per second, at which data bytes are received by gopher
servers during the interval.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_GOPHER_WRITE_BYTE_RATE
----------------------------------
The rate, in KB per second, at which data bytes are sent by gopher servers
during the interval.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_HEAD_REQUEST_RATE
----------------------------------
The number of HEAD requests being processed per second.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_HTTP_READ_BYTE_RATE
----------------------------------
The rate, in KB per second, at which data bytes are received by HTTP servers
during the interval.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_HTTP_WRITE_BYTE_RATE
----------------------------------
The rate, in KB per second, at which data bytes are sent by HTTP servers
during the interval.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_ISAPI_REQUEST_RATE
----------------------------------
The number of ISAPI requests being processed per second.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_LOGON_FAILURES
----------------------------------
The number of logon failures on the HTTP, FTP, or gopher servers during the
interval.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_NOT_FOUND_ERRORS
----------------------------------
The number of requests that could not be satisfied by the service because the
requested documents could not be found. These are typically reported to the
client as an HTTP 404 error code.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_OTHER_REQUEST_RATE
----------------------------------
The number of OTHER requests being processed per second.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_POST_REQUEST_RATE
----------------------------------
The number of POST requests being processed per second.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_READ_BYTE_RATE
----------------------------------
The rate, in KB per second, at which data bytes are received by the HTTP,
FTP, or gopher servers during the interval.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
GBL_WEB_WRITE_BYTE_RATE
----------------------------------
The rate, in KB per second, at which data bytes are sent by the HTTP, FTP, or
gopher servers during the interval.
This metric is available only for Internet Information Server (IIS) 3.0
because IIS 3.0 uses the HTTP object. The GBL_WEB_* metrics are not available
for IIS 4.0 because IIS 4.0 uses the Web Service object, not the HTTP object.
There is a sample Extended Collection Builder policy that uses selected
metrics from the Web Service object. This policy is provided with the
MeasureWare Agent product.
INTERVAL
----------------------------------
The number of seconds in the measurement interval.
For the process data class, this is the number of seconds the process was
alive during the interval.
PROC_APP_ID
----------------------------------
The ID number of the application to which the process (or kernel thread, if
HP-UX/Linux Kernel 2.6 and above) belonged during the interval.
Application “other” always has an ID of 1. There can be up to 999 user-
defined applications, which are defined in the parm file.
PROC_CPU_ALIVE_SYS_MODE_UTIL
----------------------------------
The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) in system mode as a percentage of the time it is alive
during the interval. On platforms other than HPUX, if the ignore_mt flag is
set (true) in the parm file, this metric will report values normalized against
the number of active cores in the system. If the ignore_mt flag is not set
(false) in the parm file, this metric will report values normalized against
the number of threads in the system. This flag is a no-op if multithreading is
turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
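The effect of the ignore_mt setting amounts to a choice of denominator, as in
this sketch with hypothetical numbers (this is not agent code):
    # CPU utilization normalization with hyperthreading (2 threads per core).
    cpu_seconds, alive_seconds = 4.0, 10.0
    cores, threads_per_core = 4, 2
    core_based = cpu_seconds / (alive_seconds * cores) * 100
    thread_based = cpu_seconds / (alive_seconds * cores * threads_per_core) * 100
    print(core_based)     # 10.0 - ignore_mt set (true): normalized to cores
    print(thread_based)   # 5.0 - ignore_mt not set: normalized to threads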
PROC_CPU_ALIVE_TOTAL_UTIL
----------------------------------
The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) as a percentage of the time it is alive during the
interval. On platforms other than HPUX, if the ignore_mt flag is set (true)
in the parm file, this metric will report values normalized against the number
of active cores in the system. If the ignore_mt flag is not set (false) in the
parm file, this metric will report values normalized against the number of
threads in the system. This flag is a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
PROC_CPU_ALIVE_USER_MODE_UTIL
----------------------------------
The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) in user mode as a percentage of the time it is alive
during the interval. On platforms other than HPUX, if the ignore_mt flag is
set (true) in the parm file, this metric will report values normalized against
the number of active cores in the system. If the ignore_mt flag is not set
(false) in the parm file, this metric will report values normalized against
the number of threads in the system. This flag is a no-op if multithreading is
turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
PROC_CPU_SYS_MODE_TIME
----------------------------------
The CPU time in system mode in the context of the process (or kernel thread,
if HP-UX/Linux Kernel 2.6 and above) during the interval.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
On a threaded operating system, such as HP-UX 11.0 and beyond, process usage
of a resource is calculated by summing the usage of that resource by its
kernel threads. If this metric is reported for a kernel thread, the value is
the resource usage by that single kernel thread. If this metric is reported
for a process, the value is the sum of the resource usage by all of its kernel
threads. Alive kernel threads and kernel threads that have died during the
interval are included in the summation. On platforms other than HPUX, if the
ignore_mt flag is set (true) in the parm file, this metric will report values
normalized against the number of active cores in the system. If the ignore_mt
flag is not set (false) in the parm file, this metric will report values
normalized against the number of threads in the system. This flag is a no-op
if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
PROC_CPU_SYS_MODE_UTIL
----------------------------------
The percentage of time that the CPU was in system mode in the context of the
process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) during the
interval.
A process operates in either system mode (also called kernel mode on Unix or
privileged mode on Windows) or user mode. When a process requests services
from the operating system with a system call, it switches into the machine’s
privileged protection mode and runs in system mode.
Unlike the global and application CPU metrics, process CPU is not averaged
over the number of processors on systems with multiple CPUs. Single-threaded
processes can use only one CPU at a time and never exceed 100% CPU
utilization.
High system mode CPU utilizations are normal for IO intensive programs.
Abnormally high system CPU utilization can indicate that a hardware problem is
causing a high interrupt rate. It can also indicate programs that are not
using system calls efficiently.
A classic “hung shell” shows up with very high system mode CPU because it gets
stuck in a loop doing terminal reads (a system call) to a device that never
responds.
On a threaded operating system, such as HP-UX 11.0 and beyond, process usage
of a resource is calculated by summing the usage of that resource by its
kernel threads. If this metric is reported for a kernel thread, the value is
the resource usage by that single kernel thread. If this metric is reported
for a process, the value is the sum of the resource usage by all of its kernel
threads. Alive kernel threads and kernel threads that have died during the
interval are included in the summation.
On multi-processor HP-UX systems, processes which have component kernel
threads executing simultaneously on different processors could have resource
utilization sums over 100%. The maximum percentage is 100% times the number
of CPUs online. On platforms other than HPUX, if the ignore_mt flag is set
(true) in the parm file, this metric will report values normalized against the
number of active cores in the system. If the ignore_mt flag is not set (false)
in the parm file, this metric will report values normalized against the number
of threads in the system. This flag is a no-op if multithreading is turned
off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
PROC_CPU_TOTAL_TIME
----------------------------------
The total CPU time, in seconds, consumed by a process (or kernel thread, if
HP-UX/Linux Kernel 2.6 and above) during the interval.
Unlike the global and application CPU metrics, process CPU is not averaged
over the number of processors on systems with multiple CPUs. Single-threaded
processes can use only one CPU at a time and never exceed 100% CPU
utilization.
On HP-UX, the total CPU time is the sum of the CPU time components for a
process or kernel thread, including system, user, context switch, interrupts
processing, realtime, and nice utilization values.
On a threaded operating system, such as HP-UX 11.0 and beyond, process usage
of a resource is calculated by summing the usage of that resource by its
kernel threads. If this metric is reported for a kernel thread, the value is
the resource usage by that single kernel thread. If this metric is reported
for a process, the value is the sum of the resource usage by all of its kernel
threads. Alive kernel threads and kernel threads that have died during the
interval are included in the summation.
On multi-processor HP-UX systems, processes which have component kernel
threads executing simultaneously on different processors could have resource
utilization sums over 100%. The maximum percentage is 100% times the number
of CPUs online. On platforms other than HPUX, if the ignore_mt flag is set
(true) in the parm file, this metric will report values normalized against the
number of active cores in the system. If the ignore_mt flag is not set (false)
in the parm file, this metric will report values normalized against the number
of threads in the system. This flag is a no-op if multithreading is turned
off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
PROC_CPU_TOTAL_TIME_CUM
----------------------------------
The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) over the cumulative collection time. CPU time is in
seconds unless otherwise specified.
The cumulative collection time is defined from the point in time when either:
a) the process (or thread) was first started, or b) the performance tool was
first started, or c) the cumulative counters were reset (relevant only to
Glance, if available for the given platform), whichever occurred last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after the
system has been up for more than 466 days, cumulative process CPU data won’t
include times accumulated prior to the performance tool’s start and a message
will be logged to indicate this.
This is calculated as
PROC_CPU_TOTAL_TIME_CUM =
PROC_CPU_SYS_MODE_TIME_CUM +
PROC_CPU_USER_MODE_TIME_CUM
On a threaded operating system, such as HP-UX 11.0 and beyond, process usage
of a resource is calculated by summing the usage of that resource by its
kernel threads. If this metric is reported for a kernel thread, the value is
the resource usage by that single kernel thread. If this metric is reported
for a process, the value is the sum of the resource usage by all of its kernel
threads. Alive kernel threads and kernel threads that have died during the
interval are included in the summation. On platforms other than HPUX, if the
ignore_mt flag is set (true) in the parm file, this metric will report values
normalized against the number of active cores in the system. If the ignore_mt
flag is not set (false) in the parm file, this metric will report values
normalized against the number of threads in the system. This flag is a no-op
if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
PROC_CPU_TOTAL_UTIL
----------------------------------
The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) as a percentage of the total CPU time available during
the interval.
Unlike the global and application CPU metrics, process CPU is not averaged
over the number of processors on systems with multiple CPUs. Single-threaded
processes can use only one CPU at a time and never exceed 100% CPU
utilization.
On HP-UX, the total CPU utilization is the sum of the CPU utilization
components for a process or kernel thread, including system, user, context
switch, interrupts processing, realtime, and nice utilization values.
On a threaded operating system, such as HP-UX 11.0 and beyond, process usage
of a resource is calculated by summing the usage of that resource by its
kernel threads. If this metric is reported for a kernel thread, the value is
the resource usage by that single kernel thread. If this metric is reported
for a process, the value is the sum of the resource usage by all of its kernel
threads. Alive kernel threads and kernel threads that have died during the
interval are included in the summation.
On multi-processor HP-UX systems, processes which have component kernel
threads executing simultaneously on different processors could have resource
utilization sums over 100%. The maximum percentage is 100% times the number
of CPUs online.
On platforms other than HPUX, if the ignore_mt flag is set (true) in the parm
file, this metric will report values normalized against the number of active
cores in the system. If the ignore_mt flag is not set (false) in the parm
file, this metric will report values normalized against the number of threads
in the system. This flag is a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
PROC_CPU_TOTAL_UTIL_CUM
----------------------------------
The total CPU time consumed by a process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) as a percentage of the total CPU time available over the
cumulative collection time.
The cumulative collection time is defined from the point in time when either:
a) the process (or thread) was first started, or b) the performance tool was
first started, or c) the cumulative counters were reset (relevant only to
Glance, if available for the given platform), whichever occurred last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool, process collection time starts from
the start time of the process or measurement start time, which ever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after the
system has been up for more than 466 days, cumulative process CPU data won’t
include times accumulated prior to the performance tool’s start and a message
will be logged to indicate this.
Unlike the global and application CPU metrics, process CPU is not averaged
over the number of processors on systems with multiple CPUs. Single-threaded
processes can use only one CPU at a time and never exceed 100% CPU
utilization.
On HP-UX, the total CPU utilization is the sum of the CPU utilization
components for a process or kernel thread, including system, user, context
switch, interrupts processing, realtime, and nice utilization values.
On a threaded operating system, such as HP-UX 11.0 and beyond, process usage
of a resource is calculated by summing the usage of that resource by its
kernel threads. If this metric is reported for a kernel thread, the value is
the resource usage by that single kernel thread. If this metric is reported
for a process, the value is the sum of the resource usage by all of its kernel
threads. Alive kernel threads and kernel threads that have died during the
interval are included in the summation.
On multi-processor HP-UX systems, processes which have component kernel
threads executing simultaneously on different processors could have resource
utilization sums over 100%. The maximum percentage is 100% times the number
of CPUs online. On platforms other than HPUX, if the ignore_mt flag is
set (true) in the parm file, this metric reports values normalized against
the number of active cores in the system. If the ignore_mt flag is not set
(false) in the parm file, this metric reports values normalized against the
number of threads in the system. This flag is a no-op if multithreading is
turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
PROC_CPU_USER_MODE_TIME
----------------------------------
The time, in seconds, the process (or kernel thread, if HP-UX/Linux Kernel
2.6 and above) was using the CPU in user mode during the interval.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
On a threaded operating system, such as HP-UX 11.0 and beyond, process usage
of a resource is calculated by summing the usage of that resource by its
kernel threads. If this metric is reported for a kernel thread, the value is
the resource usage by that single kernel thread. If this metric is reported
for a process, the value is the sum of the resource usage by all of its kernel
threads. Alive kernel threads and kernel threads that have died during the
interval are included in the summation. On platforms other than HPUX, if
the ignore_mt flag is set (true) in the parm file, this metric reports values
normalized against the number of active cores in the system. If the
ignore_mt flag is not set (false) in the parm file, this metric reports
values normalized against the number of threads in the system. This flag is
a no-op if multithreading is turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
PROC_CPU_USER_MODE_UTIL
----------------------------------
The percentage of time the process (or kernel thread, if HP-UX/Linux Kernel
2.6 and above) was using the CPU in user mode during the interval.
User CPU is the time spent in user mode at a normal priority, at real-time
priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
Unlike the global and application CPU metrics, process CPU is not averaged
over the number of processors on systems with multiple CPUs. Single-threaded
processes can use only one CPU at a time and never exceed 100% CPU
utilization.
On a threaded operating system, such as HP-UX 11.0 and beyond, process usage
of a resource is calculated by summing the usage of that resource by its
kernel threads. If this metric is reported for a kernel thread, the value is
the resource usage by that single kernel thread. If this metric is reported
for a process, the value is the sum of the resource usage by all of its kernel
threads. Alive kernel threads and kernel threads that have died during the
interval are included in the summation.
On multi-processor HP-UX systems, processes which have component kernel
threads executing simultaneously on different processors could have resource
utilization sums over 100%. The maximum percentage is 100% times the number
of CPUs online. On platforms other than HPUX, if the ignore_mt flag is
set (true) in the parm file, this metric reports values normalized against
the number of active cores in the system. If the ignore_mt flag is not set
(false) in the parm file, this metric reports values normalized against the
number of threads in the system. This flag is a no-op if multithreading is
turned off.
On HPUX, CPU utilization normalization is controlled by the “-ignore_mt”
option of the midaemon(1m). To change normalization from core-based to
logical-cpu-based, or vice-versa, all performance components (scopeux, glance,
perfd) must be shut down and the midaemon restarted in the desired mode. To
start the midaemon with “-ignore_mt” by default, this option should be added
in the /etc/rc.config.d/ovpa control file. Refer to the documentation
regarding ovpa startup. Note that, on HPUX, unlike other platforms, specifying
core-based normalization affects CPU, application, process and thread metrics.
PROC_INTEREST
----------------------------------
A string containing the reason(s) why the process or thread is of interest,
based on the thresholds specified in the parm file.
An ‘A’ indicates that the process or thread exceeds the process CPU threshold,
computed using the actual time the process or thread was alive during the
interval.
A ‘C’ indicates that the process or thread exceeds the process CPU threshold,
computed using the collection interval. Currently, the same CPU threshold is
used for both CPU interest reasons.
A ‘D’ indicates that the process or thread exceeds the process disk IO
threshold.
An ‘I’ indicates that the process or thread exceeds the IO threshold.
An ‘M’ indicates that the process exceeds the process memory threshold. This
interest reason is only meaningful for processes and therefore not shown for
threads.
New processes or threads are identified with an ‘N’; terminated processes or
threads are identified with a ‘K’.
Note that the parm file ‘nonew’, ‘nokill’ and ‘shortlived’ settings are
logging-only options and are therefore ignored in Glance components.
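The thresholds behind these interest flags are set in the parm file. A
minimal sketch of a threshold entry (values and option spelling are
illustrative; consult the parm file documentation for your release for the
exact syntax):

    threshold cpu = 5.0, disk = 5.0, memory = 100, nonew, nokill, shortlived

With such an entry, a process exceeding 5 percent CPU during an interval
would be flagged with ‘A’ or ‘C’, and one exceeding the disk IO threshold
would be flagged with ‘D’.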
PROC_INTERVAL_ALIVE
----------------------------------
The number of seconds that the process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) was alive during the interval. This may be less than
the time of the interval if the process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) was new or died during the interval.
PROC_IO_BYTE
----------------------------------
On HP-UX, this is the total number of physical IO KBs (unless otherwise
specified) used by this process or kernel thread, either directly or
indirectly, during the interval.
On all other systems, this is the total number of physical IO KBs (unless
otherwise specified) used by this process during the interval. IOs
include disk, terminal, tape and network IO.
On HP-UX, indirect IOs include paging and deactivation/reactivation activity
done by the kernel on behalf of the process or kernel thread. Direct IOs
include disk, terminal, tape, and network IO, but exclude all NFS traffic.
On a threaded operating system, such as HP-UX 11.0 and beyond, process usage
of a resource is calculated by summing the usage of that resource by its
kernel threads. If this metric is reported for a kernel thread, the value is
the resource usage by that single kernel thread. If this metric is reported
for a process, the value is the sum of the resource usage by all of its kernel
threads. Alive kernel threads and kernel threads that have died during the
interval are included in the summation.
On SUN, counts in the MB range can generally be attributed to disk accesses
and counts in the KB range to terminal IO. This is useful
when looking for processes with heavy disk IO activity. This may vary
depending on the sample interval length.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices together (including tty and other
devices with disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO including disk IO, in Kilobytes or KB rates.
These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels only report IO statistics for
the main thread. In that case, patches are available that will allow the
process instrumentation to report the sum of all threads’ IOs, and will also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleadingly
inflated IO metrics for processes that fork many children, such as shells,
or the init(1m) process.
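On Linux, one place recent kernels expose the per-process IO counters
discussed above is /proc/<pid>/io; the fields present depend on the kernel
configuration (CONFIG_TASK_IO_ACCOUNTING). A minimal C sketch that dumps the
counters for the current process (an illustration of the kernel
instrumentation, not of how the agent collects data):

    #include <stdio.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/proc/self/io", "r");  /* or /proc/<pid>/io */
        if (f == NULL) {
            perror("per-process IO accounting unavailable");
            return 1;
        }
        while (fgets(line, sizeof line, f) != NULL)
            fputs(line, stdout);  /* rchar, wchar, read_bytes, ... */
        fclose(f);
        return 0;
    }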
PROC_IO_BYTE_CUM
----------------------------------
On HP-UX, this is the total number of physical IO KBs (unless otherwise
specified) used by this process or kernel thread, either directly or
indirectly, over the cumulative collection time.
On all other systems, this is the total number of physical IO KBs (unless
otherwise specified) used by this process over the cumulative
collection time. IOs include disk, terminal, tape and network IO.
The cumulative collection time is defined from the point in time when either:
a) the process (or thread) was first started, or b) the performance tool was
first started, or c) the cumulative counters were reset (relevant only to
Glance, if available for the given platform), whichever occurred last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the process start time or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after the
system has been up for more than 466 days, cumulative process CPU data won’t
include times accumulated prior to the performance tool’s start and a message
will be logged to indicate this.
On HP-UX, indirect IOs include paging and deactivation/reactivation activity
done by the kernel on behalf of the process or kernel thread. Direct IOs
include disk, terminal, tape, and network IO, but exclude all NFS traffic.
On a threaded operating system, such as HP-UX 11.0 and beyond, process usage
of a resource is calculated by summing the usage of that resource by its
kernel threads. If this metric is reported for a kernel thread, the value is
the resource usage by that single kernel thread. If this metric is reported
for a process, the value is the sum of the resource usage by all of its kernel
threads. Alive kernel threads and kernel threads that have died during the
interval are included in the summation.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices together (including tty and other
devices with disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO including disk IO, in Kilobytes or KB rates.
These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels only report IO statistics for
the main thread. In that case, patches are available that will allow the
process instrumentation to report the sum of all threads’ IOs, and will also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleadingly
inflated IO metrics for processes that fork many children, such as shells,
or the init(1m) process.
PROC_IO_BYTE_RATE
----------------------------------
On HP-UX, this is the number of physical IO KBs per second used by
this process or kernel thread, either directly or indirectly, during the
interval.
On all other systems, this is the number of physical IO KBs per second
used by this process during the interval. IOs include disk, terminal,
tape and network IO.
On HP-UX, indirect IOs include paging and deactivation/reactivation activity
done by the kernel on behalf of the process or kernel thread. Direct IOs
include disk, terminal, tape, and network IO, but exclude all NFS traffic.
On a threaded operating system, such as HP-UX 11.0 and beyond, process usage
of a resource is calculated by summing the usage of that resource by its
kernel threads. If this metric is reported for a kernel thread, the value is
the resource usage by that single kernel thread. If this metric is reported
for a process, the value is the sum of the resource usage by all of its kernel
threads. Alive kernel threads and kernel threads that have died during the
interval are included in the summation.
On SUN, counts in the MB range can generally be attributed to disk accesses
and counts in the KB range to terminal IO. This is useful
when looking for processes with heavy disk IO activity. This may vary
depending on the sample interval length.
Certain types of disk IOs are not counted by AIX at the process level, so they
are excluded from this metric.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices together (including tty and other
devices with disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO including disk IO, in Kilobytes or KB rates.
These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels only report IO statistics for
the main thread. In that case, patches are available that will allow the
process instrumentation to report the sum of all threads’ IOs, and will also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleadingly
inflated IO metrics for processes that fork many children, such as shells,
or the init(1m) process.
PROC_IO_BYTE_RATE_CUM
----------------------------------
On HP-UX, this is the average number of physical IO KBs per second used
by this process or kernel thread, either directly or indirectly, over the
cumulative collection time.
On all other systems, this is the average number of physical IO KBs per
second used by this process over the cumulative collection time. IOs
include disk, terminal, tape and network IO.
The cumulative collection time is defined from the point in time when either:
a) the process (or thread) was first started, or b) the performance tool was
first started, or c) the cumulative counters were reset (relevant only to
Glance, if available for the given platform), whichever occurred last.
On HP-UX, all cumulative collection times and intervals start when the
midaemon starts. On other Unix systems, non-process collection time starts
from the start of the performance tool; process collection time starts from
the process start time or the measurement start time, whichever is older.
Regardless of the process start time, application cumulative intervals start
from the time the performance tool is started.
On systems where the performance components are 32-bit or where the 64-bit
model is LLP64 (Windows), all INTERVAL_CUM metrics will start reporting “o/f”
(overflow) after the performance agent (or the midaemon on HPUX) has been up
for 466 days and the cumulative metrics will fail to report accurate data
after 497 days. On Linux, Solaris and AIX, if measurement is started after the
system has been up for more than 466 days, cumulative process CPU data won’t
include times accumulated prior to the performance tool’s start and a message
will be logged to indicate this.
On HP-UX, indirect IOs include paging and deactivation/reactivation activity
done by the kernel on behalf of the process or kernel thread. Direct IOs
include disk, terminal, tape, and network IO, but exclude all NFS traffic.
On a threaded operating system, such as HP-UX 11.0 and beyond, process usage
of a resource is calculated by summing the usage of that resource by its
kernel threads. If this metric is reported for a kernel thread, the value is
the resource usage by that single kernel thread. If this metric is reported
for a process, the value is the sum of the resource usage by all of its kernel
threads. Alive kernel threads and kernel threads that have died during the
interval are included in the summation.
On SUN, counts in the MB range can generally be attributed to disk accesses
and counts in the KB range to terminal IO. This is useful
when looking for processes with heavy disk IO activity. This may vary
depending on the sample interval length.
Linux release versions vary with regard to the amount of process-level IO
statistics that are available. Some kernels instrument only disk IO, while
some provide statistics for all devices together (including tty and other
devices with disk IO).
When it is available from your specific release of Linux, the PROC_DISK_PHYS*
metrics will report pages of disk IO specifically. The PROC_IO* metrics will
report the sum of all types of IO including disk IO, in Kilobytes or KB rates.
These metrics will have “na” values on kernels that do not support the
instrumentation.
For multi-threaded processes, some Linux kernels only report IO statistics for
the main thread. In that case, patches are available that will allow the
process instrumentation to report the sum of all threads’ IOs, and will also
enable per-thread reporting.
Starting with 2.6.3X, at least some kernels will include IO data from the
children of the process in the process data. This results in misleadingly
inflated IO metrics for processes that fork many children, such as shells,
or the init(1m) process.
PROC_MEM_LOCKED
----------------------------------
The number of KBs of virtual memory allocated by the process, marked as locked
memory.
On Windows, this is the non-paged pool memory of the process. This memory is
allocated from the system-wide non-paged pool, and is not affected by the
pageout process. Device drivers may allocate memory from the non-paged pool,
charging quota against the current (caller) thread.
The kernel and driver code use the non-paged pool for data that should always
be in the physical memory. The size of the non-paged pool is limited to
approximately 128 MB on Windows NT systems and to 256 MB on Windows 2000
systems. The failure to allocate memory from the non-paged pool can cause a
system crash.
PROC_MEM_RES
----------------------------------
The size (in KB) of resident memory allocated for the process (or kernel
thread, if HP-UX/Linux Kernel 2.6 and above).
On HP-UX, the calculation of this metric differs depending on whether this
process has used any CPU time since the midaemon process was started. This
metric is less accurate and does not include shared memory regions in its
calculation when the process has been idle since the midaemon was started.
On HP-UX, for processes that use CPU time subsequent to midaemon startup, the
resident memory is calculated as
    RSS = (sum of private region pages) +
          (sum of shared region pages / number of references)
The number of references is a count of the number of attachments to the
memory region. Attachments, for shared regions, may come from several
processes sharing the same memory, a single process with multiple attachments,
or combinations of these.
This value is only updated when a process uses CPU. Thus, under memory
pressure, this value may be higher than the actual amount of resident memory
for processes which are idle because their memory pages may no longer be
resident or the reference count for shared segments may have changed.
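A literal transcription of the formula above into C (the struct and field
names are hypothetical; the agent’s real data structures differ):

    typedef struct {
        long pages;       /* resident pages in this memory region */
        int  shared;      /* nonzero if the region is shared */
        int  references;  /* attachment count for shared regions */
    } region_t;

    long rss_pages(const region_t *r, int nregions)
    {
        long rss = 0;
        for (int i = 0; i < nregions; i++)
            rss += r[i].shared ? r[i].pages / r[i].references
                               : r[i].pages;
        return rss;
    }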
On HP-UX, this metric is specific to a process. If this metric is reported
for a kernel thread, the value for its associated process is given.
A value of “na” is displayed when this information is unobtainable. This
information may not be obtainable for some system (kernel) processes, and
may also be unavailable for some other processes.
On AIX, this is the same as the RSS value shown by “ps v”.
On Windows, this is the number of KBs in the working set of this process. The
working set includes the memory pages touched recently by the threads of the
process. If free memory in the system is above a threshold, then pages are
left in the working set even if they are not in use. When free memory falls
below a threshold, pages are trimmed from the working set, but not necessarily
paged out to disk from memory. If those pages are subsequently referenced,
they will be page faulted back into the working set. Therefore, the working
set is a general indicator of the memory resident set size of this process,
but it will vary depending on the overall status of memory on the system.
Note that the size of the working set is often larger than the amount of
pagefile space consumed (PROC_MEM_VIRT).
PROC_MEM_VIRT
----------------------------------
The size (in KB) of virtual memory allocated for the process (or kernel thread,
if HP-UX/Linux Kernel 2.6 and above).
On HP-UX, this consists of the sum of the virtual set size of all private
memory regions used by this process, plus this process’ share of memory
regions which are shared by multiple processes. For processes that use CPU
time, the value is divided by the reference count for those regions which are
shared.
On HP-UX, this metric is less accurate and does not reflect the reference
count for shared regions for processes that were started prior to the midaemon
process and have not used any CPU time since the midaemon was started.
On HP-UX, this metric is specific to a process. If this metric is reported
for a kernel thread, the value for its associated process is given.
On all other Unix systems, this consists of private text, private data,
private stack and shared memory. The reference count for shared memory is not
taken into account, so the value of this metric represents the total virtual
size of all regions regardless of the number of processes sharing access.
Note also that lazy swap algorithms, sparse address space malloc calls, and
memory-mapped file access can result in large VSS values. On systems that
provide Glance memory regions detail reports, the drilldown detail per memory
region is useful to understand the nature of memory allocations for the
process.
A value of “na” is displayed when this information is unobtainable. This
information may not be obtainable for some system (kernel) processes, and
may also be unavailable for some other processes.
On Windows, this is the number of KBs the process has used in the paging
file(s). Paging files are used to store pages of memory used by the process,
such as local data, that are not contained in other files. Examples of memory
pages which are contained in other files include pages storing a program’s
.EXE and .DLL files. These would not be kept in pagefile space. Thus,
programs often have a memory working set size (PROC_MEM_RES) larger than the
size of their pagefile space.
On Linux, this value is rounded to a multiple of PAGESIZE.
PROC_MINOR_FAULT
----------------------------------
Number of minor page faults for this process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above) during the interval.
On HP-UX, major page faults and minor page faults are a subset of vfaults
(virtual faults). Stack and heap accesses can cause vfaults, but do not
result in a disk page having to be loaded into memory.
PROC_PARENT_PROC_ID
----------------------------------
The parent process’ PID number.
On HP-UX, this metric is specific to a process. If this metric is reported
for a kernel thread, the value for its associated process is given.
PROC_PRI
----------------------------------
On Unix systems, this is the dispatch priority of a process (or kernel thread,
if HP-UX/Linux Kernel 2.6 and above) at the end of the interval. The lower
the value, the more likely the process is to be dispatched.
On Windows, this is the current base priority of this process.
On HP-UX, whenever the priority is changed for the selected process or kernel
thread, the new value will not be reflected until the process or kernel thread
is reactivated if it is currently idle (for example, SLEEPing).
On HP-UX, the lower the value, the more likely the process or kernel thread
is to be dispatched. Values between zero and 127 are considered to be
“real-time” priorities, which the kernel does not adjust. Values above 127 are
normal priorities and are modified by the kernel for load balancing. Some
special priorities are used in the HP-UX kernel and subsystems for different
activities. These values are described in /usr/include/sys/param.h.
Priorities less than PZERO (153) are not signalable.
Note that on HP-UX, many network-related programs such as inetd, biod, and
rlogind run at priority 154 which is PPIPE. Just because they run at this
priority does not mean they are using pipes. By examining the open files, you
can determine if a process or kernel thread is using pipes.
For HP-UX 10.0 and later releases, priorities between -32 and -1 can be seen
for processes or kernel threads using the Posix Real-time Schedulers. When
specifying a Posix priority, the value entered must be in the range from 0
through 31, which the system then remaps to a negative number in the range of
-1 through -32. Refer to the rtsched man pages for more information.
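For example (the mapping is inferred from the ranges stated above): a Posix
priority p entered in the range 0 through 31 is remapped to -(p + 1), so 0
appears as -1 and 31 appears as -32.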
On a threaded operating system, such as HP-UX 11.0 and beyond, this metric
represents a kernel thread characteristic. If this metric is reported for a
process, the value for its last executing kernel thread is given. For
example, if a process has multiple kernel threads and kernel thread one is the
last to execute during the interval, the metric value for kernel thread one is
assigned to the process.
On AIX, values for priority range from 0 to 127. Processes running at
priorities less than PZERO (40) are not signalable.
On Windows, the higher the value the more likely the process or thread is to
be dispatched. Values for priority range from 0 to 31. Values of 16 and
above are considered to be “realtime” priorities. Threads within a process
can raise and lower their own base priorities relative to the process’s base
priority.
PROC_PROC_ID
----------------------------------
The process ID number (or PID) of this process (or associated process for
kernel threads, if HP-UX/Linux Kernel 2.6 and above) that is used by the kernel
to uniquely identify the process. Process numbers are reused, so they only
identify a process for its lifetime.
On HP-UX, this metric is specific to a process. If this metric is reported
for a kernel thread, the value for its associated process is given.
PROC_PROC_NAME
----------------------------------
The process (or kernel thread, if HP-UX/Linux Kernel 2.6 and above) program
name. It is limited to 16 characters.
On Unix systems, this is derived from the first parameter to the exec(2) system
call.
On HP-UX, this metric is specific to a process. If this metric is reported
for a kernel thread, the value for its associated process is given.
On Windows, the “System Idle Process” is not reported by Perf Agent since Idle
is a process that runs to occupy the processors when they are not executing
other threads. Idle has one thread per processor.
PROC_RUN_TIME
----------------------------------
The elapsed time since a process (or kernel thread, if HP-UX/Linux Kernel 2.6
and above) started, in seconds.
This metric is less than the interval time if the process (or kernel thread,
if HP-UX/Linux Kernel 2.6 and above) was not alive during the entire first or
last interval.
On a threaded operating system such as HP-UX 11.0 and beyond, this metric is
available for a process or kernel thread.
PROC_STARTTIME
----------------------------------
The creation date and time of the process (or kernel thread, if HP-UX/Linux
Kernel 2.6 and above).
PROC_THREAD_COUNT
----------------------------------
The total number of kernel threads for the current process.
On Linux systems with Kernel 2.5 and below, every thread has its own process
ID so this metric will always be 1.
On Solaris systems, this metric reflects the total number of Light Weight
Processes (LWPs) associated with the process.
PROC_USER_NAME
----------------------------------
On Unix systems, this is the real user name, that is, the login account
(from /etc/passwd) of a process (or kernel thread, if HP-UX/Linux Kernel 2.6
and above). If more than one account is listed in /etc/passwd with the same
user ID (uid) field, the first one is used. If an account cannot be found
that matches the uid field, then the uid number is returned. This would occur
if the account was removed after a process was started.
On Windows, this is the process owner account name, without the domain name
this account resides in.
On HP-UX, this metric is specific to a process. If this metric is reported
for a kernel thread, the value for its associated process is given.
RECORD_TYPE
----------------------------------
ASCII string that identifies the record. Possibilities include:
GLOB for global 5 minute detail
GSUM for global hourly summary
APPL for application 5 minute detail
ASUM for application hourly summary
CONF for configuration
TRAN for transaction tracker detail
TSUM for transaction tracker summary
Except for Windows Desktop, this also includes:
PROC for process 1 minute detail
DISK for disk device 5 minute detail
DSUM for disk device summary
On HP-UX, this also includes:
VOLS for logical volume disk detail
VSUM for logical volume disk summary
STATDATE
----------------------------------
The end date timestamp of the interval for which the information in this
record was captured, based on local time.
The date is an ASCII field in mm/dd/yyyy format unless localized. If
localized, the separators may be different and the subfields may be in a
different sequence. In ASCII files this field will always contain 10
characters. Each subfield (mm, dd, yyyy) will contain a leading zero if the
value is less than 10. This metric is extracted from GBL_STATTIME, which is
obtained using the time() system call at the time of data collection.
This field responds to language localization. For example, in Italy the field
would appear as dd/mm/yyyy and in Japan it would be yyyy/mm/dd.
In binary files this field is in MPE CALENDAR format in the least significant
16 bits of the field. The most significant 16 bits should all be zero.
Dividing the field by 512 will isolate the year (for example, 94 for 1994).
This field MOD
512 will isolate the day of the year.
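The decode described above, written out in C (statdate holds the least
significant 16 bits of the binary field):

    /* Decode the binary STATDATE field per the MPE CALENDAR layout. */
    void decode_statdate(unsigned statdate,
                         unsigned *year, unsigned *day_of_year)
    {
        *year        = statdate / 512;  /* e.g., 94 for 1994 */
        *day_of_year = statdate % 512;  /* 1..366 */
    }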
STATTIME
----------------------------------
The local time of day for the end of the interval. The time is an ASCII field
in hh:mm:ss 24-hour format. This field will always contain 8 characters in
ASCII files. The three subfields (hh, mm, ss) will contain a leading zero if
the value is less than 10. This metric is extracted from GBL_STATTIME, which
is obtained using the time() system call at the end of the interval.
This field responds to language localization.
In binary files this field contains four one-byte subfields. The most
significant byte contains the hour, the next most significant byte contains
the minute, then the seconds and finally the tenths of a second. The left two
bytes can be isolated by dividing by 65536. HHMM = TIME/65536. Then HOUR =
HHMM/256 and MINUTE = HHMM mod 256. SSTS = TIME mod 65536. Then SECOND =
SSTS/256.
TIME
----------------------------------
The local time of day for the start of the interval. The time is an ASCII
field in hh:mm:ss 24-hour format. This field will always contain 8 characters
in ASCII files. The three subfields (hh, mm, ss) will contain a leading zero
if the value is less than 10. This metric is extracted from GBL_STATTIME,
which is obtained using the time() system call at the start of the interval.
This field responds to language localization.
In binary files this field contains four one-byte subfields. The most
significant byte contains the hour, the next most significant byte contains
the minute, then the seconds and finally the tenths of a second. The left two
bytes can be isolated by dividing by 65536. HHMM = TIME/65536. Then HOUR =
HHMM/256 and MINUTE = HHMM mod 256. SSTS = TIME mod 65536. Then SECOND =
SSTS/256.
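STATTIME and TIME share this packed layout; the decode described above,
written out in C for a 32-bit field t:

    /* Decode the packed binary time field per the layout above. */
    void decode_time(unsigned t, unsigned *hour, unsigned *minute,
                     unsigned *second, unsigned *tenths)
    {
        unsigned hhmm = t / 65536;  /* high 16 bits: hour and minute */
        unsigned ssts = t % 65536;  /* low 16 bits: second and tenths */
        *hour   = hhmm / 256;
        *minute = hhmm % 256;
        *second = ssts / 256;
        *tenths = ssts % 256;
    }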
TTBIN_TRANS_COUNT_1
----------------------------------
The number of completed transactions in this range during the last interval.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_TRANS_COUNT_10
----------------------------------
The number of completed transactions in this range during the last interval.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_TRANS_COUNT_2
----------------------------------
The number of completed transactions in this range during the last interval.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_TRANS_COUNT_3
----------------------------------
The number of completed transactions in this range during the last interval.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_TRANS_COUNT_4
----------------------------------
The number of completed transactions in this range during the last interval.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_TRANS_COUNT_5
----------------------------------
The number of completed transactions in this range during the last interval.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_TRANS_COUNT_6
----------------------------------
The number of completed transactions in this range during the last interval.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_TRANS_COUNT_7
----------------------------------
The number of completed transactions in this range during the last interval.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_TRANS_COUNT_8
----------------------------------
The number of completed transactions in this range during the last interval.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_TRANS_COUNT_9
----------------------------------
The number of completed transactions in this range during the last interval.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_UPPER_RANGE_1
----------------------------------
The upper range (transaction time) for this bin.
There are a maximum of nine user-defined transaction response time bins
(TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction
configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is
the overflow bin and will always have a value of -2 (overflow). Note that the
values specified in the transaction configuration file cannot exceed
2147483.6, which is the number of seconds in 24.85 days. If the user
specifies any values greater than 2147483.6, the numbers reported for those
bins or Service Level Objectives (SLO) will be -2.
On SUN systems, this metric is only available on 5.X or later.
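A minimal sketch of a transaction configuration entry defining the nine
user-defined ranges and an SLO (the transaction name and values are
illustrative; the exact syntax is documented in the comments of the shipped
ttd.conf or ttdconf.mwc file):

    tran=order_entry range=0.5,1,2,3,5,10,30,60,120 slo=5

The nine range values populate TTBIN_UPPER_RANGE_1 through
TTBIN_UPPER_RANGE_9; the tenth bin is the overflow bin and reports -2.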
TTBIN_UPPER_RANGE_10
----------------------------------
The upper range (transaction time) for this bin.
There are a maximum of nine user-defined transaction response time bins
(TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction
configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is
the overflow bin and will always have a value of -2 (overflow). Note that the
values specified in the transaction configuration file cannot exceed
2147483.6, which is the number of seconds in 24.85 days. If the user
specifies any values greater than 2147483.6, the numbers reported for those
bins or Service Level Objectives (SLO) will be -2.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_UPPER_RANGE_2
----------------------------------
The upper range (transaction time) for this bin.
There are a maximum of nine user-defined transaction response time bins
(TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction
configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is
the overflow bin and will always have a value of -2 (overflow). Note that the
values specified in the transaction configuration file cannot exceed
2147483.6, which is the number of seconds in 24.85 days. If the user
specifies any values greater than 2147483.6, the numbers reported for those
bins or Service Level Objectives (SLO) will be -2.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_UPPER_RANGE_3
----------------------------------
The upper range (transaction time) for this bin.
There are a maximum of nine user-defined transaction response time bins
(TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction
configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is
the overflow bin and will always have a value of -2 (overflow). Note that the
values specified in the transaction configuration file cannot exceed
2147483.6, which is the number of seconds in 24.85 days. If the user
specifies any values greater than 2147483.6, the numbers reported for those
bins or Service Level Objectives (SLO) will be -2.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_UPPER_RANGE_4
----------------------------------
The upper range (transaction time) for this bin.
There are a maximum of nine user-defined transaction response time bins
(TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction
configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is
the overflow bin and will always have a value of -2 (overflow). Note that the
values specified in the transaction configuration file cannot exceed
2147483.6, which is the number of seconds in 24.85 days. If the user
specifies any values greater than 2147483.6, the numbers reported for those
bins or Service Level Objectives (SLO) will be -2.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_UPPER_RANGE_5
----------------------------------
The upper range (transaction time) for this bin.
There are a maximum of nine user-defined transaction response time bins
(TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction
configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is
the overflow bin and will always have a value of -2 (overflow). Note that the
values specified in the transaction configuration file cannot exceed
2147483.6, which is the number of seconds in 24.85 days. If the user
specifies any values greater than 2147483.6, the numbers reported for those
bins or Service Level Objectives (SLO) will be -2.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_UPPER_RANGE_6
----------------------------------
The upper range (transaction time) for this bin.
There are a maximum of nine user-defined transaction response time bins
(TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction
configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is
the overflow bin and will always have a value of -2 (overflow). Note that the
values specified in the transaction configuration file cannot exceed
2147483.6, which is the number of seconds in 24.85 days. If the user
specifies any values greater than 2147483.6, the numbers reported for those
bins or Service Level Objectives (SLO) will be -2.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_UPPER_RANGE_7
----------------------------------
The upper range (transaction time) for this bin.
There are a maximum of nine user-defined transaction response time bins
(TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction
configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is
the overflow bin and will always have a value of -2 (overflow). Note that the
values specified in the transaction configuration file cannot exceed
2147483.6, which is the number of seconds in 24.85 days. If the user
specifies any values greater than 2147483.6, the numbers reported for those
bins or Service Level Objectives (SLO) will be -2.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_UPPER_RANGE_8
----------------------------------
The upper range (transaction time) for this bin.
There are a maximum of nine user-defined transaction response time bins
(TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction
configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is
the overflow bin and will always have a value of -2 (overflow). Note that the
values specified in the transaction configuration file cannot exceed
2147483.6, which is the number of seconds in 24.85 days. If the user
specifies any values greater than 2147483.6, the numbers reported for those
bins or Service Level Objectives (SLO) will be -2.
On SUN systems, this metric is only available on 5.X or later.
TTBIN_UPPER_RANGE_9
----------------------------------
The upper range (transaction time) for this bin.
There are a maximum of nine user-defined transaction response time bins
(TTBIN_UPPER_RANGE). The last bin, which is not specified in the transaction
configuration file (ttdconf.mwc on Windows or ttd.conf on UNIX platforms), is
the overflow bin and will always have a value of -2 (overflow). Note that the
values specified in the transaction configuration file cannot exceed
2147483.6, which is the number of seconds in 24.85 days. If the user
specifies any values greater than 2147483.6, the numbers reported for those
bins or Service Level Objectives (SLO) will be -2.
On SUN systems, this metric is only available on 5.X or later.
TT_ABORT
----------------------------------
The number of aborted transactions during the last interval for this
transaction.
TT_ABORT_WALL_TIME_PER_TRAN
----------------------------------
The average time, in seconds, per aborted transaction during the last
interval.
On SUN systems, this metric is only available on 5.X or later.
TT_APP_NAME
----------------------------------
The registered ARM Application name.
TT_APP_TRAN_NAME
----------------------------------
A concatenation of TT_APP_NAME and TT_NAME. This provides a way to uniquely
identify a specific transaction. The field is limited to 60 characters.
TT_CLIENT_ADDRESS
----------------------------------
The correlator address. This is the address where the child transaction
originated.
TT_CLIENT_ADDRESS_FORMAT
----------------------------------
The correlator address format. This shows the protocol family for the client
network address. Refer to the ARM API Guide for the list and description of
supported address formats.
TT_CLIENT_TRAN_ID
----------------------------------
A numerical ID that uniquely identifies the transaction class in this
correlator.
TT_COUNT
----------------------------------
The number of completed transactions during the last interval for this
transaction.
TT_FAILED
----------------------------------
The number of failed transactions during the last interval for this
transaction name.
TT_INFO
----------------------------------
The registered ARM Transaction Information for this transaction.
TT_NAME
----------------------------------
The registered transaction name for this transaction.
TT_NUM_BINS
----------------------------------
The number of distribution ranges.
On SUN systems, this metric is only available on 5.X or later.
TT_SLO_COUNT
----------------------------------
The number of completed transactions that violated the defined Service Level
Objective (SLO) by exceeding the SLO threshold time during the interval.
TT_SLO_PERCENT
----------------------------------
The percentage of transactions which violate service level objectives.
TT_SLO_THRESHOLD
----------------------------------
The upper range (transaction time) of the Service Level Objective (SLO)
threshold value. This value is used to count the number of transactions that
exceed this user-supplied transaction time value.
TT_TERM_TRAN_1_HR_RATE
----------------------------------
For this transaction name, the number of completed transactions calculated to
a 1 hour rate. For example, if you completed five of these transactions in a
5 minute window, the rate is 60 transactions per hour.
On SUN systems, this metric is only available on 5.X or later.
TT_TRAN_1_MIN_RATE
----------------------------------
For this transaction name, the number of completed transactions calculated to
a 1 minute rate. For example, if you completed five of these transactions in
a 5 minute window, the rate is one transaction per minute.
TT_TRAN_ID
----------------------------------
The registered ARM Transaction ID for this transaction class as returned by
arm_getid(). A unique transaction id is returned for a unique application id
(returned by arm_init), tran name, and meta data buffer contents.
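A minimal sketch of the ARM 2.0 call sequence that produces these IDs and
counts (simplified, error handling omitted; type names follow the ARM 2.0 C
binding, and the application and transaction names are illustrative):

    #include <arm.h>

    void one_transaction(void)
    {
        arm_int32_t appl_id, tran_id, handle;

        appl_id = arm_init("armsample1", "*", 0, 0, 0);
        tran_id = arm_getid(appl_id, "order_entry", "detail", 0, 0, 0);

        handle = arm_start(tran_id, 0, 0, 0);
        /* ... the work being measured ... */
        arm_stop(handle, ARM_GOOD, 0, 0, 0);  /* completed: TT_COUNT */

        arm_end(appl_id, 0, 0, 0);
    }

Stopping with ARM_ABORT or ARM_FAILED instead of ARM_GOOD increments
TT_ABORT or TT_FAILED for the transaction class.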
TT_UNAME
----------------------------------
The registered ARM Transaction User Name for this transaction.
If the arm_init function has NULL for the appl_user_id field, then the user
name is blank. Otherwise, if “*” was specified, then the user name is
displayed.
For example, to show the user name for the armsample1 program, use:
appl_id = arm_init("armsample1","*",0,0,0);
To ignore the user name for the armsample1 program, use:
appl_id = arm_init("armsample1",NULL,0,0,0);
TT_USER_MEASUREMENT_AVG
----------------------------------
If the measurement type is a numeric or a string, this metric returns “na”.
If the measurement type is a counter, this metric returns the average counter
differences of the transaction or transaction instance during the last
interval. The counter value is the difference observed from a counter between
the start and the stop (or last update) of a transaction.
If the measurement type is a gauge, this returns the average of the values
passed on any ARM call for the transaction or transaction instance during the
last interval.
TT_USER_MEASUREMENT_AVG_2
----------------------------------
If the measurement type is a numeric or a string, this metric returns “na”.
If the measurement type is a counter, this metric returns the average counter
differences of the transaction or transaction instance during the last
interval. The counter value is the difference observed from a counter between
the start and the stop (or last update) of a transaction.
If the measurement type is a gauge, this returns the average of the values
passed on any ARM call for the transaction or transaction instance during the
last interval.
TT_USER_MEASUREMENT_AVG_3
----------------------------------
If the measurement type is a numeric or a string, this metric returns “na”.
If the measurement type is a counter, this metric returns the average counter
differences of the transaction or transaction instance during the last
interval. The counter value is the difference observed from a counter between
the start and the stop (or last update) of a transaction.
If the measurement type is a gauge, this returns the average of the values
passed on any ARM call for the transaction or transaction instance during the
last interval.
TT_USER_MEASUREMENT_AVG_4
----------------------------------
If the measurement type is a numeric or a string, this metric returns “na”.
If the measurement type is a counter, this metric returns the average counter
differences of the transaction or transaction instance during the last
interval. The counter value is the difference observed from a counter between
the start and the stop (or last update) of a transaction.
If the measurement type is a gauge, this returns the average of the values
passed on any ARM call for the transaction or transaction instance during the
last interval.
TT_USER_MEASUREMENT_AVG_5
----------------------------------
If the measurement type is a numeric or a string, this metric returns “na”.
If the measurement type is a counter, this metric returns the average counter
differences of the transaction or transaction instance during the last
interval. The counter value is the difference observed from a counter between
the start and the stop (or last update) of a transaction.
If the measurement type is a gauge, this returns the average of the values
passed on any ARM call for the transaction or transaction instance during the
last interval.
TT_USER_MEASUREMENT_AVG_6
----------------------------------
If the measurement type is a numeric or a string, this metric returns “na”.
If the measurement type is a counter, this metric returns the average counter
differences of the transaction or transaction instance during the last
interval. The counter value is the difference observed from a counter between
the start and the stop (or last update) of a transaction.
If the measurement type is a gauge, this returns the average of the values
passed on any ARM call for the transaction or transaction instance during the
last interval.
TT_USER_MEASUREMENT_COUNT
----------------------------------
This returns the total number of times the associated user defined metric
(UDM) was sampled during the last interval.
TT_USER_MEASUREMENT_COUNT_2
----------------------------------
This returns the total number of times the associated user defined metric
(UDM) was sampled during the last interval.
TT_USER_MEASUREMENT_COUNT_3
----------------------------------
This returns the total number of times the associated user defined metric
(UDM) was sampled during the last interval.
TT_USER_MEASUREMENT_COUNT_4
----------------------------------
This returns the total number of times the associated user defined metric
(UDM) was sampled during the last interval.
TT_USER_MEASUREMENT_COUNT_5
----------------------------------
This returns the total number of times the associated user defined metric
(UDM) was sampled during the last interval.
TT_USER_MEASUREMENT_COUNT_6
----------------------------------
This returns the total number of times the associated user defined metric
(UDM) was sampled during the last interval.
TT_USER_MEASUREMENT_MAX
----------------------------------
If the measurement type is a numeric or a string, this metric returns “na”.
If the measurement type is a counter, this metric returns the highest measured
counter value over the life of the transaction or transaction instance. The
counter value is the difference observed from a counter between the start and
the stop (or last update) of a transaction.
If the measurement type is a gauge, this metric returns the highest value
passed on any ARM call over the life of the transaction or transaction
instance.
TT_USER_MEASUREMENT_MAX_2
----------------------------------
If the measurement type is a numeric or a string, this metric returns “na”.
If the measurement type is a counter, this metric returns the highest measured
counter value over the life of the transaction or transaction instance. The
counter value is the difference observed from a counter between the start and
the stop (or last update) of a transaction.
If the measurement type is a gauge, this metric returns the highest value
passed on any ARM call over the life of the transaction or transaction
instance.
TT_USER_MEASUREMENT_MAX_3
----------------------------------
If the measurement type is a numeric or a string, this metric returns “na”.
If the measurement type is a counter, this metric returns the highest measured
counter value over the life of the transaction or transaction instance. The
counter value is the difference observed from a counter between the start and
the stop (or last update) of a transaction.
If the measurement type is a gauge, this metric returns the highest value
passed on any ARM call over the life of the transaction or transaction
instance.
TT_USER_MEASUREMENT_MAX_4
----------------------------------
If the measurement type is a numeric or a string, this metric returns “na”.
If the measurement type is a counter, this metric returns the highest measured
counter value over the life of the transaction or transaction instance. The
counter value is the difference observed from a counter between the start and
the stop (or last update) of a transaction.
If the measurement type is a gauge, this metric returns the highest value
passed on any ARM call over the life of the transaction or transaction
instance.
TT_USER_MEASUREMENT_MAX_5
----------------------------------
If the measurement type is a numeric or a string, this metric returns “na”.
If the measurement type is a counter, this metric returns the highest measured
counter value over the life of the transaction or transaction instance. The
counter value is the difference observed from a counter between the start and
the stop (or last update) of a transaction.
If the measurement type is a gauge, this metric returns the highest value
passed on any ARM call over the life of the transaction or transaction
instance.
TT_USER_MEASUREMENT_MAX_6
----------------------------------
If the measurement type is a numeric or a string, this metric returns “na”.
If the measurement type is a counter, this metric returns the highest measured
counter value over the life of the transaction or transaction instance. The
counter value is the difference observed from a counter between the start and
the stop (or last update) of a transaction.
If the measurement type is a gauge, this metric returns the highest value
passed on any ARM call over the life of the transaction or transaction
instance.
TT_USER_MEASUREMENT_MIN
----------------------------------
If the measurement type is a numeric or a string, this metric returns “na”.
If the measurement type is a counter, this metric returns the lowest measured
counter value over the life of the transaction or transaction instance. The
counter value is the difference observed from a counter between the start and
the stop (or last update) of a transaction.
If the measurement type is a gauge, this metric returns the lowest value
passed on any ARM call over the life of the transaction or transaction
instance.
TT_USER_MEASUREMENT_MIN_2
----------------------------------
If the measurement type is a numeric or a string, this metric returns “na”.
If the measurement type is a counter, this metric returns the lowest measured
counter value over the life of the transaction or transaction instance. The
counter value is the difference observed from a counter between the start and
the stop (or last update) of a transaction.
If the measurement type is a gauge, this metric returns the lowest value
passed on any ARM call over the life of the transaction or transaction
instance.
TT_USER_MEASUREMENT_MIN_3
----------------------------------
If the measurement type is a numeric or a string, this metric returns "na".
If the measurement type is a counter, this metric returns the lowest measured
counter value over the life of the transaction or transaction instance. The
counter value is the difference between the counter's value at the start and
at the stop (or last update) of the transaction.
If the measurement type is a gauge, this metric returns the lowest value
passed on any ARM call over the life of the transaction or transaction
instance.
TT_USER_MEASUREMENT_MIN_4
----------------------------------
If the measurement type is a numeric or a string, this metric returns "na".
If the measurement type is a counter, this metric returns the lowest measured
counter value over the life of the transaction or transaction instance. The
counter value is the difference between the counter's value at the start and
at the stop (or last update) of the transaction.
If the measurement type is a gauge, this metric returns the lowest value
passed on any ARM call over the life of the transaction or transaction
instance.
TT_USER_MEASUREMENT_MIN_5
----------------------------------
If the measurement type is a numeric or a string, this metric returns "na".
If the measurement type is a counter, this metric returns the lowest measured
counter value over the life of the transaction or transaction instance. The
counter value is the difference between the counter's value at the start and
at the stop (or last update) of the transaction.
If the measurement type is a gauge, this metric returns the lowest value
passed on any ARM call over the life of the transaction or transaction
instance.
TT_USER_MEASUREMENT_MIN_6
----------------------------------
If the measurement type is a numeric or a string, this metric returns "na".
If the measurement type is a counter, this metric returns the lowest measured
counter value over the life of the transaction or transaction instance. The
counter value is the difference between the counter's value at the start and
at the stop (or last update) of the transaction.
If the measurement type is a gauge, this metric returns the lowest value
passed on any ARM call over the life of the transaction or transaction
instance.
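The counter and gauge rules shared by the TT_USER_MEASUREMENT_MAX_* and
TT_USER_MEASUREMENT_MIN_* metrics above can be summarized in a short C
sketch. This is an illustration only, not the Performance Collection
Component's implementation; every type and function name in it is invented
for the example.

    /* Hypothetical sketch of the MIN/MAX derivation described above. */
    typedef enum { MEAS_COUNTER, MEAS_GAUGE,
                   MEAS_NUMERIC, MEAS_STRING } meas_type_t;

    typedef struct {
        meas_type_t type;
        int         seen;      /* 0 until the first value is folded in */
        double      min, max;  /* TT_USER_MEASUREMENT_MIN / _MAX       */
    } meas_agg_t;

    /* Fold one derived value into the running MIN/MAX aggregate. */
    static void fold(meas_agg_t *a, double v)
    {
        if (!a->seen) { a->min = a->max = v; a->seen = 1; return; }
        if (v < a->min) a->min = v;
        if (v > a->max) a->max = v;
    }

    /* Counter: each transaction instance contributes one value, the
     * difference between the counter at arm_start() and at arm_stop()
     * (or at the last arm_update()). */
    static void counter_instance_done(meas_agg_t *a,
                                      double start_val, double stop_val)
    {
        if (a->type == MEAS_COUNTER)
            fold(a, stop_val - start_val);
    }

    /* Gauge: every value passed on any ARM call is a candidate. */
    static void gauge_value_passed(meas_agg_t *a, double value)
    {
        if (a->type == MEAS_GAUGE)
            fold(a, value);
    }

Numeric and string measurements never feed the aggregate, which corresponds
to the "na" reported for them above.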
TT_USER_MEASUREMENT_NAME
----------------------------------
The name of the user-defined transactional measurement. The string length
complies with the ARM 2.0 standard limit of 44 characters, of which 43 are
usable because the string is NULL-terminated.
TT_USER_MEASUREMENT_NAME_2
----------------------------------
The name of the user-defined transactional measurement. The string length
complies with the ARM 2.0 standard limit of 44 characters, of which 43 are
usable because the string is NULL-terminated.
TT_USER_MEASUREMENT_NAME_3
----------------------------------
The name of the user-defined transactional measurement. The string length
complies with the ARM 2.0 standard limit of 44 characters, of which 43 are
usable because the string is NULL-terminated.
TT_USER_MEASUREMENT_NAME_4
----------------------------------
The name of the user-defined transactional measurement. The string length
complies with the ARM 2.0 standard limit of 44 characters, of which 43 are
usable because the string is NULL-terminated.
TT_USER_MEASUREMENT_NAME_5
----------------------------------
The name of the user-defined transactional measurement. The string length
complies with the ARM 2.0 standard limit of 44 characters, of which 43 are
usable because the string is NULL-terminated.
TT_USER_MEASUREMENT_NAME_6
----------------------------------
The name of the user-defined transactional measurement. The string length
complies with the ARM 2.0 standard limit of 44 characters, of which 43 are
usable because the string is NULL-terminated.
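Because the ARM 2.0 name field is a fixed 44 bytes including the NUL
terminator, an instrumented application must truncate longer names. A
minimal sketch in C follows; the constant and function names are invented
for this example.

    #include <string.h>

    #define ARM_MEAS_NAME_LEN 44   /* 43 usable characters + NUL */

    /* Copy a measurement name into an ARM 2.0-sized field,
     * truncating and guaranteeing NUL termination. */
    static void set_meas_name(char dest[ARM_MEAS_NAME_LEN],
                              const char *src)
    {
        strncpy(dest, src, ARM_MEAS_NAME_LEN - 1);
        dest[ARM_MEAS_NAME_LEN - 1] = '\0';
    }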
TT_WALL_TIME_PER_TRAN
----------------------------------
The average wall-clock time, in seconds, for this transaction during the
last interval.
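The document does not spell out the formula, but the presumed relationship
is total wall time divided by the number of transactions completed in the
interval. A one-line C sketch with invented names:

    /* Average seconds per completed transaction over the interval. */
    double wall_time_per_tran(double total_wall_seconds, long completed)
    {
        return (completed > 0) ? total_wall_seconds / completed : 0.0;
    }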
YEAR
----------------------------------
The year, including the century, in which the data in this record was
captured. This metric contains 4 digits, such as 2002.
----------------------------------