HP Network Node Manager iSPI Performance for Metrics Software
Support Matrix
for the Windows® and Linux® operating systems
Software Version: 10.00 / July 2014
Publication Date: March 2016
For the latest additions to the system requirements and support
matrix, see http://support.openview.hp.com/selfsolve/document/KM00940851/binary/nnm_ispi_Metrics_SupportMatrix_10.00.html.
Contents
- Introduction
- Installation Guide
- Hardware and Software Requirements
- Same Server Install
- Dedicated Server Install
- Virtualization
- Hardware
- Networking Configuration
- Disk and Storage Considerations
- Operating Systems
- NNMi and NPS OS Combinations
- Web Browsers
- Integration and Coexistence With Other Products
- Legal Notices
Introduction
This document describes system
requirements for the HP Network Node Manager iSPI Performance for
Metrics (iSPI Performance for Metrics) version 10.00 and Network
Performance Server (NPS) version 10.00.
NPS stores, aggregates, and provides reports and analytics for
collected performance
data. It is the foundation for performance reports and dashboards on
NNMi. Other iSPIs
rely
on NPS for data storage and report presentation functionality. The NNMi
iSPI
Performance for
Metrics adds the performance management capability to NNMi by
analyzing, processing,
and aggregating a range of standard metrics collected by NNMi from
different
network elements.
This document provides information about supported deployment
configurations that is not found in the Release
Notes, the Installation
Guide or the
Deployment Guide. When
you are
ready to install the product, refer to the Installation
Guide. The Installation
Guide is provided on the
installation media and online. The Deployment Reference is revised regularly and can be found only online. Download the latest version from:
http://support.openview.hp.com/selfsolve/document/KM00940852/binary/nnm_ispi_Metrics_Deployment_10.00.pdf
To obtain an electronic copy of
the most current version of any
product documentation, go to http://h20230.www2.hp.com/selfsolve/manuals.
Installation Guide
Installation requirements, as
well as instructions for installing,
are
documented in an interactive version of the Installation
Guide. The
Installation Guide is
included on the product installation media as interactive_installation_guide_en.htm.
Hardware and Software Requirements
You can install NPS on a
dedicated system, separate from NNMi, or you can install it on the same
system as NNMi. In addition, you can now split apart the roles within
NPS and install them on independent hardware - allowing for better
scalability and more predictable performance for your reporting
solution. Before installing NPS and your iSPIs, make sure that
your system meets the minimum requirements. Be aware of the
following:
- If you are upgrading from a
version 9.10 or 9.20 installation of iSPI Performance for Metrics,
check the hardware tables below for the latest requirements as these
change from release to release. See the Installation Guide
and Deployment
Reference for
instructions and available upgrade paths.
- The data directory
(NPSDataDir) will contain the bulk of the collected and aggregated data
and should be on the fastest and largest disk partition. On Linux, this
will be /var/opt/OV. The Deployment Reference offers database sizing and tuning advice, including placement of the database files.
- By default, NPS stores daily aggregated data for 800 days, hourly aggregated data for 70 days, and fine-grained (as collected) data for 14 days. The retention periods for hourly and fine-grained data can be increased to a maximum of 400 days. Most of the required disk space for the iSPI is consumed by this fine-grained data storage. To change retention periods from the default, use the Configuration utility (runConfigurationGUI.ovpl). You can modify the settings for each of the storage areas independently, but fine-grained data cannot be stored for longer than hourly data, which in turn cannot be stored for longer than daily data (a small sketch at the end of this list illustrates these constraints). Monitor the disk space usage on the NPS database system after changing these parameters. The tables below quote disk space requirements for an out-of-the-box system, as well as for two further retention settings.
- In version 10.00, NPS
supports a distributed architecture. The distributed
deployment of NPS enables
you to distribute the computing load
across multiple systems and designate each system to perform a specific
operation determined by the role assigned to the system. The three
roles within NPS are:
- Extract, Transform, and Load Server (ETL Server) role (1 or more required)
- Database Server (DB Server) role for storage and aggregation (1 required)
- User Interface and Business Intelligence Server (UiBi Server) role to provide the user interface for business intelligence reporting (1 required)
- A single hardware system
can support any combination of roles,
or each role can be split onto separate hardware platforms.
- There can be only one UiBi Server and only one Database Server. However, the ETL Server role can be split across multiple individual servers, each supporting one or more Extension Packs.
- The decision to split NPS roles across multiple systems should ideally be made in advance of installation. Guidance on when it might be necessary and the options involved is provided in the Deployment Reference.
- NPS 10.00 does not support
the Application Failover feature. However, you can use NPS with an NNMi
management server that has been configured for Application Failover. If
the NNMi management server is configured for Application Failover, you
must install NPS on a dedicated server (and not on the NNMi management
server).
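The retention rules described in the list above (fine-grained retention cannot exceed hourly retention, which cannot exceed daily retention, and fine-grained and hourly retention are capped at 400 days) can be sanity-checked before applying changes with the Configuration utility. The following minimal Python sketch encodes only those documented rules; the function and variable names are illustrative and are not part of the NPS product.

```python
# Minimal sketch: validate proposed NPS retention periods against the documented
# rules (defaults 14/70/800 days; as-polled <= hourly <= daily; as-polled and
# hourly capped at 400 days). Names are illustrative only.

DEFAULTS = {"as_polled": 14, "hourly": 70, "daily": 800}
MAX_AS_POLLED = 400   # documented maximum for fine-grained (as collected) data
MAX_HOURLY = 400      # documented maximum for hourly aggregated data

def validate_retention(as_polled_days, hourly_days, daily_days):
    """Return a list of problems with the proposed retention settings."""
    problems = []
    if as_polled_days > MAX_AS_POLLED:
        problems.append("as-polled retention exceeds the 400-day maximum")
    if hourly_days > MAX_HOURLY:
        problems.append("hourly retention exceeds the 400-day maximum")
    if as_polled_days > hourly_days:
        problems.append("as-polled retention cannot exceed hourly retention")
    if hourly_days > daily_days:
        problems.append("hourly retention cannot exceed daily retention")
    return problems

if __name__ == "__main__":
    # Example: the R70/H400/D800 retention level quoted in the tables below.
    for message in validate_retention(70, 400, 800) or ["settings are consistent"]:
        print(message)
```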
Hardware Sizing Assumptions
Each user environment is unique, as is the use of performance reporting in that environment.
Some installations will have many operational users who frequently use
the Performance Graphing feature and dashboards. Others
will
rarely
use this functionality, but will have very high scheduled reporting
loads. Some have high frequency collections against a small number of
nodes, others have a wide range of custom collections running against a
large number of nodes. In defining hardware levels, HP is providing a
guideline only, with many assumptions in place.
Discovered vs. Performance Polled
NNMi has the ability to
discover topology elements such as nodes and interfaces, but not all
discovered elements will have performance data collected and stored for
reporting. The figures quoted below relate only to performance-polled
elements.
Polling Frequency
- The figures in the tables
below quote hardware requirements for an NPS running with a 5-minute
polling frequency.
- The number of polled
interfaces and components is
stated for each scale. Example: 130k/130k means 130,000 interfaces and
130,000 component sensors simultaneously.
- Higher polling frequencies result in more data being stored. For example, 400K interfaces polled at a 5-minute frequency produce the same number of collected data points as 80K interfaces polled at a 1-minute frequency (see the sketch after this list).
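The equivalence quoted above is simple arithmetic on data-point rates. The following Python sketch is an illustration of that scaling rule only, not an NPS sizing tool.

```python
# Rough arithmetic: data points collected per day for a given number of polled
# elements and polling interval. Illustrates the scaling rule quoted above.

def data_points_per_day(polled_elements, poll_interval_minutes):
    polls_per_day = 24 * 60 / poll_interval_minutes
    return polled_elements * polls_per_day

print(data_points_per_day(400_000, 5))  # 400K interfaces at 5-minute polling
print(data_points_per_day(80_000, 1))   # 80K interfaces at 1-minute polling
# Both print 115200000.0 -- the same collected data volume per day.
```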
Factors Affecting Performance
- These hardware
recommendations are based on the use of the iSPI
Performance for Metrics package. They do not take account of additional
factors that may affect the sizing of your application, such as the
installation of other iSPI products.
- Adding other iSPIs to
your platform, such as NNM iSPI Performance for Quality Assurance, NNM
iSPI for
IP Telephony, or NNM iSPI for MPLS, will result in additional storage
and
processing requirements for NPS. Be sure to carefully examine the
Support Matrix documents of each iSPI product to take account of
additional NPS requirements from those installations.
- Custom, user-defined collections will also result in additional storage and processing requirements for NPS. It is assumed that most customers will configure a small number of custom collections with report generation enabled. The load from these collections is typically too small to consider when sizing hardware for a full NNMi iSPI Performance installation. However, if the total number of managed elements being collected from is greater than approximately 10% of the total interface count managed by the system, then the load should be considered when sizing the system. As a rule, consider that one custom collection running at a 5-minute interval that retrieves 10 OIDs is equivalent to one interface in the Interface Health package, or one Sensor in the Component Health package (see the sketch after this list).
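One way to apply the rule of thumb above is to convert custom collections into "interface equivalents" and compare the collected element count against the roughly 10% guideline. The Python sketch below does exactly that; the function names, example figures, and threshold handling are illustrative assumptions, not an HP-supplied utility.

```python
# Rough sizing helper based on the rule of thumb above: a custom collection that
# retrieves 10 OIDs from an element at a 5-minute interval is roughly equivalent
# to one interface in the Interface Health package. Illustrative names only.

BASELINE_OIDS = 10
BASELINE_INTERVAL_MIN = 5

def equivalent_interfaces(elements_collected, oids_per_element, interval_minutes):
    """Scale the custom-collection load to Interface Health 'interface equivalents'."""
    per_element = (oids_per_element / BASELINE_OIDS) * (BASELINE_INTERVAL_MIN / interval_minutes)
    return elements_collected * per_element

def needs_explicit_sizing(elements_collected, managed_interfaces, threshold=0.10):
    """Apply the ~10% guideline: include the custom load in sizing if exceeded."""
    return elements_collected > threshold * managed_interfaces

# Example: 6,000 elements, 20 OIDs each, polled every 5 minutes, in a 50K-interface environment.
print(equivalent_interfaces(6_000, 20, 5))   # 12000.0 interface equivalents
print(needs_explicit_sizing(6_000, 50_000))  # True -> consider when sizing the system
```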
Post Installation and Maintenance
- HP recommends that administrators monitor the disk usage, memory consumption, and processing performance of the system on a daily basis after installation or when introducing a new custom collection or iSPI.
- The installer should consult the NNM iSPI Performance for Metrics Deployment Reference for pre-install and post-install tuning tips.
Storage Capacity
- The size of the disk
required to store collected data will vary
dramatically depending upon the required retention periods for each
type of data.
- NPS records collected data
in three granularities:
- As polled
- 1 Hour grain
- 1 Day grain
- The tables below provide
data for three different levels of
retention:
- 14 days of as polled, 70 days of hourly grain, 800 days of daily grain (R14/H70/D800)
- 70 days of as polled, 70 days of hourly grain, 800 days of daily grain (R70/H70/D800)
- 70 days of as polled, 400 days of hourly grain, 800 days of daily grain (R70/H400/D800)
- While overall disk size guidance is provided in this document, specifics regarding disk speeds and file layouts are covered in the Deployment Reference. Fast I/O in support of database loading and queries is critical to the performance of the NPS server.
Same Server Install (NNMi and NPS on the Same System)
The following table outlines the requirements for CPU, RAM and disk
space when NNMi and
NPS (all roles) are on the same system. The CPU, RAM, and Disk figures
represent the total requirements for the system and include the
combined
capacity required for NNMi and NPS. You must make sure that this host
also
meets any additional criteria defined in the NNMi support matrix.
Be aware of the following:
- When NPS is installed on a
system that hosts NNMi, half of the system RAM will be consumed by NPS.
The figures below for RAM resources take account of the combined needs
of NNMi and NPS, and are especially important when running NPS with
several iSPIs providing data.
- These figures do not take
account of additional requirements as a result of other iSPI products
or custom, user-defined, collections.
- Although it is a supported
option to have NNMi and NPS on the same system for "Large" scale
environments, HP recommends that NPS be installed on a separate,
standalone system.
- HP does not support NPS
installed on the same server as NNMi for "Very Large" scale
environments.
NNMi + iSPI Minimum System Requirements by Management Environment Size

| Tier | Number of Interfaces/Components | Concurrent Users | CPUs (cores)* | RAM | Disk space in NNMInstallDir | Disk hardware for NNMDataDir | Additional disk space, Retention = R14/H70/D800 | Additional disk space, Retention = R70/H70/D800 | Additional disk space, Retention = R70/H400/D800 |
|---|---|---|---|---|---|---|---|---|---|
| Entry | Up to 2500/2500 | 5 | 8 | 16 GB | 15 GB | 1 SCSI or SATA disk drive | 200 GB | 300 GB | 300 GB |
| Small | Up to 10K/10K | 10 | 8 | 24 GB | 15 GB | 1 SCSI or SATA disk drive | 300 GB | 400 GB | 1 TB |
| Medium | Up to 50K/50K | 25 | 12 | 48 GB | 15 GB | RAID 1+0 or 5/6 with write cache recommended | 800 GB | 1.5 TB | 4 TB |
| Large | Up to 130K/130K | 40 | 24 | 96 GB | 15 GB | High performance SAN storage | 2 TB | 3 TB | 10 TB |
| Very Large | Same server install not supported for this scale | -- | -- | -- | -- | -- | -- | -- | -- |

*Recommended clock speed for each CPU: 2.5 GHz.
For optimal performance, follow the instructions to configure tuning parameters provided in the NNM iSPI Performance for Metrics Deployment Reference.
Dedicated Server Install
If
you plan on installing NPS on a separate machine from NNMi, make sure
that the following criteria are met. Note that NPS sizing requirements
are
driven by the amount of data generated by the custom collections and
iSPIs
that will supply it. The references provided in this document for
sizing and
scalability are a guideline only and are based on the iSPI Performance
for Metrics using the
number of interfaces and node components that NNMi is polling for
performance
data. The total number of interfaces or components discovered by NNMi
is not
relevant for this system.
- These figures do not take account of additional requirements as a result of other iSPI products or custom, user-defined collections.
- Once NPS is installed, you
can use the Managed Inventory report to see the variety of collected
elements for each iSPI. For example, looking at the count of distinct Qualified
Interface Names in the Interface
Health Managed Inventory report will tell you how many unique
interfaces have had performance data collected within the selected time
period.
iSPI Minimum System Requirements by Management Environment Size

| Tier | Number of Interfaces/Components | Concurrent Users | CPUs (cores)* | RAM | Disk space in NPSInstallDir | Disk hardware for NPSDataDir | Additional disk space, Retention = R14/H70/D800 | Additional disk space, Retention = R70/H70/D800 | Additional disk space, Retention = R70/H400/D800 |
|---|---|---|---|---|---|---|---|---|---|
| Small | Up to 5K/5K | 10 | 8 | 16 GB | 10 GB | 1 SCSI or SATA disk drive | 300 GB | 400 GB | 1 TB |
| Medium | Up to 60K/60K | 25 | 8 | 32 GB | 10 GB | RAID 1+0 or 5/6 with write cache recommended | 800 GB | 1.5 TB | 4 TB |
| Large | Up to 130K/130K | 40 | 16 | 64 GB | 10 GB | RAID 1+0 or 5/6 with write cache recommended | 2 TB | 3 TB | 10 TB |
| Very Large | Up to 400K/400K | 40 | 32 | 160 GB | 10 GB | High performance SAN storage | 4 TB | 8 TB | 20 TB |

*Recommended clock speed for each CPU: 2.5 GHz.
For optimal performance, follow the instructions to configure tuning parameters provided in the NNM iSPI Performance for Metrics Deployment Reference.
Distributed Deployment of NPS
Overview
Consider creating a distributed deployment of NPS if any of the
following statements are true:
- Your system tier is large or
above
- You plan to install
multiple iSPI products
- You plan to create multiple
custom collections
- You have a high number of
concurrent NNMi users
- You plan on scheduling
a large number of reports for
automatic creation and delivery
A distributed installation has
certain benefits as
outlined below:
- Isolates the functionality
of each role onto separate hardware,
making it easier to upgrade and expand in that one area if a bottleneck
is reached.
- Allows system
capacity to easily be grown to match your business needs in the future.
- On Linux only: allows for a higher management capacity than a standalone NPS system. Distributing the roles of an NPS system across multiple machines is supported on both Windows and Linux, but using this approach to scale beyond the 'Very Large' tier boundaries is supported only on Linux.
If this is applicable to your installation, we recommend you discuss
the installation requirements with your HP representative and examine
the Deployment Reference.
Virtualization
Note: The virtual environment must meet the x86-64 or AMD64 hardware requirements listed below.
-
VMware ESX Server 4.0 or later minor version, ESXi
4.1 or later minor version, ESXi 5.0 or later minor version.
Supported only for the
Windows or Linux operating systems.
- Red Hat Enterprise Virtualization 3.5 or later minor version.
  - Supported only for the Entry, Small, and Medium tiers.
  - The guest operating system must be included in the list of supported operating systems in the Operating Systems section.
Hardware
- Intel® 64-bit (x86-64) or AMD
64-bit (AMD64)
- For Intel 64-bit
(x86-64), the following Xeon processor
families are recommended:
- Penryn, Nehalem,
Westmere, Sandy Bridge, Ivy Bridge, Haswell or later (recommended processor speed: 2.5 GHz) for up to Medium
tier
- Sandy Bridge, Ivy
Bridge, Haswell or later (recommended processor speed: 2.5 GHz) for Large or Very Large tier
- Virtual Memory/Swap Space
  - Recommended: 2 times physical memory, and at least 32 GB (a sketch after this list shows an automated check on Linux)
  - Windows: Verify virtual memory via System Properties
  - Linux: To verify, use the cat /proc/meminfo | grep Swap command. To adjust, use the parted and mkswap commands.
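On Linux, the swap recommendation above (at least twice physical memory and no less than 32 GB) can also be checked programmatically. The following Python sketch assumes a standard /proc/meminfo layout and is an illustration only, not an HP-supplied check.

```python
# Minimal sketch for Linux: read /proc/meminfo and compare swap against the
# recommendation of 2x physical memory and at least 32 GB. Illustrative only.

def read_meminfo_kib(key):
    """Return the value (in KiB) for a /proc/meminfo field such as MemTotal or SwapTotal."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(key + ":"):
                return int(line.split()[1])
    raise KeyError(key)

mem_gib = read_meminfo_kib("MemTotal") / (1024 * 1024)
swap_gib = read_meminfo_kib("SwapTotal") / (1024 * 1024)
recommended = max(2 * mem_gib, 32)

print(f"RAM: {mem_gib:.1f} GiB, swap: {swap_gib:.1f} GiB, recommended swap: {recommended:.1f} GiB")
if swap_gib < recommended:
    print("Swap is below the recommended size; adjust with parted and mkswap.")
```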
Networking Configuration of the NPS Server
- Pure IPv6 is not
supported, but dual stack IPv6
and IPv4 combined is supported.
- NPS systems must be served
by Gigabit Ethernet LAN interfaces.
Disk and Storage Considerations
NPS requires high-speed disk access. This is particularly relevant in large-scale, very large-scale, and distributed environments. You can use a benchmarking tool such as bonnie++ (Linux only) to assess the performance of the proposed storage system (see the sketch at the end of this section for a rough complementary check). Read the Installation Guide and the Deployment Reference for disk and database file layout suggestions as well as post-install tuning guidelines.
NPS 10.00 is tested with the following file systems:
- For Windows: NTFS
- For Linux: ext4
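Before committing to a storage layout, a coarse sequential-write test of the candidate NPSDataDir volume can complement a full benchmark such as bonnie++. The Python sketch below is a rough sanity check only; the target path and test size are placeholder assumptions, and the result is not a substitute for the benchmarking and tuning guidance in the Deployment Reference.

```python
# Coarse sequential-write check of a candidate NPS data volume. Rough sanity
# test only; use a real benchmark such as bonnie++ (Linux) for proper
# assessment. The target path and test size below are placeholders.
import os
import time

def sequential_write_mb_per_s(target_dir, size_mb=1024, chunk_mb=8):
    """Write size_mb of data to a temporary file in target_dir and report MB/s."""
    path = os.path.join(target_dir, "nps_io_probe.tmp")
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # include the time to reach stable storage
    elapsed = time.time() - start
    os.remove(path)
    return size_mb / elapsed

if __name__ == "__main__":
    print(f"{sequential_write_mb_per_s('/var/opt/OV'):.0f} MB/s")  # placeholder path
```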
Operating Systems
Windows
Supported Versions
- Windows Server 2008 R2 x64
Datacenter Edition with Service Pack 1 (or later service pack)
- Windows Server 2008 R2 x64
Enterprise Edition with Service Pack 1 (or later service pack)
- Windows Server 2008 R2 x64
Standard Edition with Service Pack 1 (or later service pack)
Notes:
- Windows 32-bit operating
systems are not supported
- Windows operating systems on
Itanium Processor Family (IPF) are
not supported
- Anti-virus and backup
software can interfere with the operation
of NPS if the software locks files while NPS is running. Any
application that locks files should be configured to exclude the NPS
install and data directories (C:\ProgramData\HP\HP BTO
Software\NNMPerformanceSPI).
- Windows Server 2008 includes the concept of User Account Control (UAC). Users who are part of the Administrators group may not have full Administrator privileges. All scripts and commands associated with NPS will detect and warn if the user is not elevated; they should be run with full Administrator access (see the sketch after these notes). To achieve this, right-click the Command Tool icon and choose 'Run as Administrator.'
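The detect-and-warn behavior described in the note above can be reproduced in a few lines. The following Python sketch assumes a Windows host and uses the standard ctypes binding to shell32; it is an illustration only and is not the actual check used by the NPS scripts.

```python
# Windows-only sketch: detect whether the current process is elevated and warn
# if not, mirroring the detect-and-warn behavior described above. Illustrative
# only; not the logic used by the NPS scripts.
import ctypes
import sys

def is_elevated():
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except (AttributeError, OSError):
        return False  # not on Windows, or the check is unavailable

if not is_elevated():
    print("Warning: not running with full Administrator access. "
          "Right-click the command prompt and choose 'Run as Administrator'.",
          file=sys.stderr)
```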
Linux
Supported Versions
- Red Hat
- Red Hat Enterprise Linux Server 6.4 (or later minor version)
Tip: Red Hat does not support direct upgrades from Red Hat Enterprise Linux Server 5.x to 6.0.
- SUSE Linux Enterprise Server 11 SP3
NNMi and NPS OS Combinations
This
table illustrates the combinations of operating systems that are
supported
with NPS in a distributed installation environment. Note that NPS
version
10.00 should always be used at the same version level as NNMi.
| NNMi \ NPS | NPS on Windows | NPS on Linux |
|---|---|---|
| NNMi on Windows | Supported | Not Supported |
| NNMi on Linux | Supported | Supported |
High Availability
NNMi iSPI Performance can run on certain high availability (HA)
systems with additional configuration. See the Installation Guide for information
on how to install and configure the application with high availability
systems.
NOTE: NPS supports only a 1+1 configuration model for
high availability.
The following configurations are supported:
- Microsoft Windows: Microsoft Failover Clustering for Windows
Server 2008 R2
- Red Hat Linux:
NOTE:
- The NNMi iSPI Performance for Metrics and the foundation NPS
component are not supported on Red Hat Clustering Service (RHCS).
- HA is not supported when running NPS in a distributed, flexible-scale model. It is supported only when NPS is installed with all roles on the same system.
- HA is not supported when running NPS on SUSE Linux.
Web Browsers
- General Web Browser Requirements
- The resolution of the
client display should be at least
1024x768.
- Caution: The following
browsers are not
supported:
- Microsoft Internet Explorer version 9 or version 10 when
running in Compatibility View mode
Be sure to disable Compatibility View in Internet Explorer using Tools → Compatibility
View Settings (clear all check boxes).
- Microsoft Internet Explorer version 6, version 7, and
version 8
- Mozilla Firefox 3.6.x through 23.x
- Mozilla Firefox 25.0 and other non-ESR versions of
Firefox.
- Apple Safari (all versions)
- Opera (all versions)
- Google™ Chrome (all versions)
- Supported Web Browsers on a Remote Client System (for operational use)
- Microsoft Internet Explorer version 9
(not running in Compatibility View mode).
- Microsoft Internet Explorer version 10.
- Microsoft Internet Explorer version 11 (requires hotfix for enhancement request QCCR1B130802; to obtain the hotfix, contact HP Software Support).
- Mozilla Firefox version 24.x on a Windows or Linux
client
- Mozilla Firefox version 31.2.0 ESR on a Windows or Linux
client (requires hotfix for enhancement request QCCR1B130802; to obtain the hotfix, contact HP Software Support).
Integration and Coexistence With Other Products
The following products have been tested to coexist on the same system as NPS 10.00:
- HP Network Node Manager i 10.00 Software
- HP Network Node Manager iSPI Performance for Traffic 10.00
Software
- HP Network Node Manager iSPI Performance for Quality Assurance
10.00 Software
- HP Network Node Manager iSPI Performance for Metrics 10.00
Software
- HP Network Node Manager iSPI for IP Multicast 10.00 Software
- HP Network Node Manager iSPI for IP Telephony Software version
10.00
- HP Network Node Manager iSPI for MPLS Software version 10.00
- HP Operations agent version 11.13 (or higher)
Note: Do not install NPS 10.00 on a system where
the HP Operations agent is already installed. You can, however, install
the HP Operations agent on a system where NPS 10.00 is successfully
installed.
Legal Notices
© Copyright 2009-2015 Hewlett-Packard Development Company, L.P.
Confidential
computer software. Valid license from HP required for possession, use
or
copying. Consistent with FAR 12.211 and 12.212, Commercial Computer
Software,
Computer Software Documentation, and Technical Data for Commercial
Items are
licensed to the U.S. Government under vendor's standard commercial
license.
The
only warranties for HP products and services are set forth in the
express
warranty statements accompanying such products and services. Nothing
herein
should be construed as constituting an additional warranty. HP shall
not be
liable for technical or editorial errors or omissions contained herein.
The
information contained herein is subject to change without notice.
For
information about third-party license agreements, see the
license-agreements directory
on the product installation DVD-ROM.
To view open source code, see the license-agreements/source and
license-agreements/CygwinSources
directories on the product installation media.
Trademark Notices
AMD is a trademark of Advanced Micro Devices, Inc.
© 2012 Google Inc. All rights reserved. Google™ is a trademark of Google Inc.
Intel® is a trademark of Intel Corporation in the U.S. and other countries.
Java is a registered trademark of Oracle and/or its affiliates.
Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.
Microsoft® and Windows® are U.S.
registered trademarks of Microsoft Corporation.
Red Hat® is a registered trademark of Red Hat, Inc. in the United States and other countries.
Acknowledgments
This product includes:
- Apache software, version
1.1, copyright© 2000 The Apache Software Foundation. All
rights reserved.
- Apache, version 2.0, January
2004. The Apache Software Foundation.
- GNU Lesser General Public
License, version 2.1, copyright© 1991, 1999 Free Software
Foundation, Inc.
- GNU Lesser General Public License, version 3, copyright© 2007 Free Software Foundation, Inc.
- IBM Cognos Business
Intelligence 10.1.1. Copyright© International Business
Machines Corporation 2010. All rights reserved.
- libjpeg library,
copyright© 1991-1998, Thomas G. Lane.
- libpng versions 1.2.5
through 1.2.10, copyright 2004, 2006© Glenn Randers-Pehrson.
- libxml2 library,
copyright© 1998-2003 Daniel Veillard. All Rights Reserved.
- libxp library,
copyright© 2001,2003 Keith Packard.
- Sybase IQ16 SP4, copyright © 2014 by SAP AG or an SAP
affiliate company. All rights reserved.
- The
“New” BSD License, copyright© 2005-2008,
The Dojo Foundation. All rights reserved.
- PacketProxy, copyright© 2002-2010, Daniel Stoedle, Yellow Lemon Software. All rights reserved.
- 7-Zip,
copyright©
1999-2011 Igor Pavlov.