Last Updated: 24 January 2014
This document contains updates to the release notes for HP Network Node Manager i-series (NNMi) 8.10, 8.11, 8.12, and 8.13. You can also check online for NNMi 8.1x support matrix updates.
Other useful links include:
Important Information about NNMi 8.1x Patch 7 (and Higher Patches) and NNM iSPI jboss Compatibility:
The following list contains NNMi 8.1x update information:
This section applies to the following HP Network Node Manager i Software Smart Plug-ins (NNM iSPIs):
If any of the NNM iSPIs listed above is (or will be) installed in a network management environment that includes NNMi 8.1x Patch 7 (or a newer patch), the NNM iSPI will not work correctly until you replace the NNM iSPI jboss-ejb3.jar file with that installed by the NNMi patch. Follow the appropriate instructions in this section.
NOTE: The following NNM iSPIs do not run separate instances of jboss, so this workaround does not apply to them:
For NNM iSPIs that are already installed in the network management environment, follow these steps while (or after) installing NNMi 8.1x Patch 7:
NNM iSPI for MPLS: ovstop mplsjboss
NNM iSPI for IP Multicast: ovstop mcastjboss
NNM iSPI for IP Telephony: ovstop iptjboss
NNM iSPI for Traffic:
Windows:
%NnmInstallDir%\nonOV\traffic-master\bin\nmstrafficmasterstop.ovpl
%NnmInstallDir%\nonOV\traffic-leaf\bin\nmstrafficleafstop.ovpl
UNIX:
$NnmInstallDir/nonOV/traffic-master/bin/nmstrafficmasterstop.ovpl
$NnmInstallDir/nonOV/traffic-leaf/bin/nmstrafficleafstop.ovpl
For each installed NNM iSPI, copy the jboss-ejb3.jar file from the NNMi directory structure to the NNM iSPI directory structure:
NOTE: For the jboss-ejb3.jar file installed with NNMi 8.1x Patch 7, the UNIX command
cksum $NnmInstallDir/nonOV/jboss/nms/server/nms/deploy/ejb3.deployer/jboss-ejb3.jar
returns the following output:
2838858037 819099 /opt/OV/nonOV/jboss/nms/server/nms/deploy/ejb3.deployer/jboss-ejb3.jar
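The checksum comparison in the note above can be scripted. The following is a sketch, not part of the documented procedure; the /opt/OV default for NnmInstallDir is an assumption.

```shell
# Sketch: confirm the NNMi-installed jboss-ejb3.jar matches the checksum
# documented for Patch 7 before copying it into an iSPI directory tree.
# The /opt/OV default for NnmInstallDir is an assumption.
verify_cksum() {
  # compare "checksum size" from cksum(1) against an expected value
  actual=$(cksum "$1" | awk '{print $1, $2}')
  [ "$actual" = "$2" ]
}

JAR="${NnmInstallDir:-/opt/OV}/nonOV/jboss/nms/server/nms/deploy/ejb3.deployer/jboss-ejb3.jar"
if [ -f "$JAR" ] && verify_cksum "$JAR" "2838858037 819099"; then
  echo "jboss-ejb3.jar matches the Patch 7 checksum"
fi
```

If the checksums differ, the NNMi-side jar may not be the Patch 7 version, and the copy steps below should not be run until that is resolved.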
NNM iSPI for MPLS:
Windows:
copy %NnmInstallDir%\nonOV\jboss\nms\server\nms\deploy\ejb3.deployer\jboss-ejb3.jar %NnmInstallDir%\nonOV\mpls\jboss\server\nms\deploy\ejb3.deployer\jboss-ejb3.jar
UNIX:
cp $NnmInstallDir/nonOV/jboss/nms/server/nms/deploy/ejb3.deployer/jboss-ejb3.jar $NnmInstallDir/nonOV/mpls/jboss/server/nms/deploy/ejb3.deployer/jboss-ejb3.jar
NNM iSPI for IP Multicast:
Windows:
copy %NnmInstallDir%\nonOV\jboss\nms\server\nms\deploy\ejb3.deployer\jboss-ejb3.jar %NnmInstallDir%\nonOV\multicast\jboss\server\nms\deploy\ejb3.deployer\jboss-ejb3.jar
UNIX:
cp $NnmInstallDir/nonOV/jboss/nms/server/nms/deploy/ejb3.deployer/jboss-ejb3.jar $NnmInstallDir/nonOV/multicast/jboss/server/nms/deploy/ejb3.deployer/jboss-ejb3.jar
NNM iSPI for IP Telephony:
Windows:
copy %NnmInstallDir%\nonOV\jboss\nms\server\nms\deploy\ejb3.deployer\jboss-ejb3.jar %NnmInstallDir%\nonOV\ipt\jboss\server\nms\deploy\ejb3.deployer\jboss-ejb3.jar
UNIX:
cp $NnmInstallDir/nonOV/jboss/nms/server/nms/deploy/ejb3.deployer/jboss-ejb3.jar $NnmInstallDir/nonOV/ipt/jboss/server/nms/deploy/ejb3.deployer/jboss-ejb3.jar
NNM iSPI for Traffic:
If the NNM iSPI for Traffic is running on a standalone system, copy the
file from the NNMi management server to the two locations on the
NNM iSPI for Traffic server.
Windows:
copy %NnmInstallDir%\nonOV\jboss\nms\server\nms\deploy\ejb3.deployer\jboss-ejb3.jar %NnmInstallDir%\nonOV\traffic-master\jboss\server\nms\deploy\ejb3.deployer\jboss-ejb3.jar
copy %NnmInstallDir%\nonOV\jboss\nms\server\nms\deploy\ejb3.deployer\jboss-ejb3.jar %NnmInstallDir%\nonOV\traffic-leaf\jboss\server\nms\deploy\ejb3.deployer\jboss-ejb3.jar
UNIX:
cp $NnmInstallDir/nonOV/jboss/nms/server/nms/deploy/ejb3.deployer/jboss-ejb3.jar $NnmInstallDir/nonOV/traffic-master/jboss/server/nms/deploy/ejb3.deployer/jboss-ejb3.jar
cp $NnmInstallDir/nonOV/jboss/nms/server/nms/deploy/ejb3.deployer/jboss-ejb3.jar $NnmInstallDir/nonOV/traffic-leaf/jboss/server/nms/deploy/ejb3.deployer/jboss-ejb3.jar
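The per-iSPI copy commands above can be wrapped in one loop that also preserves the jar being replaced, which is useful if NNMi is later reverted to Patch 6. This is a sketch under the assumption that NnmInstallDir defaults to /opt/OV; the .pre-patch7 backup suffix is invented for illustration and is not part of the official procedure.

```shell
# Sketch: replace each installed iSPI's jboss-ejb3.jar with the NNMi copy,
# keeping a backup of the old jar. The .pre-patch7 suffix and the /opt/OV
# default are assumptions, not part of the documented steps.
NNM_ROOT="${NnmInstallDir:-/opt/OV}"
SRC="$NNM_ROOT/nonOV/jboss/nms/server/nms/deploy/ejb3.deployer/jboss-ejb3.jar"

replace_jar() {
  dest="$1"
  [ -f "$dest" ] && cp -p "$dest" "$dest.pre-patch7"  # keep the old jar
  cp -p "$SRC" "$dest"
}

# mpls, multicast, ipt, traffic-master, traffic-leaf mirror the paths above
for ispi in mpls multicast ipt traffic-master traffic-leaf; do
  dest="$NNM_ROOT/nonOV/$ispi/jboss/server/nms/deploy/ejb3.deployer/jboss-ejb3.jar"
  if [ -f "$dest" ]; then
    replace_jar "$dest"
  fi
done
```

Only iSPIs whose jar already exists are touched, so the loop is safe to run even when not all iSPIs are installed.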
NNM iSPI for MPLS: ovstart mplsjboss
NNM iSPI for IP Multicast: ovstart mcastjboss
NNM iSPI for IP Telephony: ovstart iptjboss
NNM iSPI for Traffic:
Windows:
%NnmInstallDir%\nonOV\traffic-master\bin\nmstrafficmasterstart.ovpl
%NnmInstallDir%\nonOV\traffic-leaf\bin\nmstrafficleafstart.ovpl
UNIX:
$NnmInstallDir/nonOV/traffic-master/bin/nmstrafficmasterstart.ovpl
$NnmInstallDir/nonOV/traffic-leaf/bin/nmstrafficleafstart.ovpl
For NNM iSPIs that are not yet installed, follow these steps while installing the NNM iSPI:
If you revert NNMi to NNMi 8.1x Patch 6 (or an older version), replace the iSPI jboss-ejb3.jar file with the version that NNMi is currently using, as described here:
When installing NNMi on the Windows operating system, if NNMi will use an Oracle database instance and the tablespace quota is not large enough, NNMi might install correctly but fail to create its tables. To prevent this situation, set the quota to unlimited, or at minimum to a value no smaller than 1 MB, before installing NNMi.
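For example, a DBA might set the quota with a statement like the following. The user and tablespace names (nnmi_user, nnmi_ts) are placeholders, not names from the release notes.

```shell
# Hypothetical example only: nnmi_user and nnmi_ts are placeholder names.
SQL='ALTER USER nnmi_user QUOTA UNLIMITED ON nnmi_ts;'
echo "$SQL"
# A DBA could apply it before the NNMi installation, for example:
#   echo "$SQL" | sqlplus / as sysdba
```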
NNMi command-line tools and other product integrations use the HTTP protocol to communicate with NNMi and will not work with HTTP disabled. To restrict outside HTTP access, use the system firewall rather than disabling HTTP.
Some networks use routing protocols such as HSRP (Hot Standby Router Protocol) to provide router redundancy. When routers are configured in an RRG (router redundancy group), as they are when using HSRP, the routers configured in the RRG share a protected IP address (one active and one standby). NNMi does not support the discovery and management of multiple RRGs configured with the same protected IP address. Each RRG must have a unique protected IP address.
NNMi 8.1x Patch 8 adds an option to prefer the IP address in an SNMPv1 trap's UDP header over the contents of the SNMPv1 trap's agent_addr field.
To use the IP address in an SNMPv1 trap's UDP header instead of the contents of the SNMPv1 trap PDU's agent_addr field, follow these steps:
Open the ovjboss.jvm.properties file:
Windows:
%NnmDataDir%\shared\nnm\conf\ovjboss\ovjboss.jvm.properties
UNIX:
/var/opt/OV/shared/nnm/conf/ovjboss/ovjboss.jvm.properties
In the ovjboss.jvm.properties file, set the useUdpHeaderIpAddress property to true:
com.hp.nnm.trapd.useUdpHeaderIpAddress=true
ovstop ovjboss
ovstart ovjboss
Windows:
nnmconfigimport.ovpl -f %NnmInstallDir%\newconfig\HPOvNmsEvent\nnm-sim-incidentConfig.xml
UNIX:
nnmconfigimport.ovpl -f /opt/OV/newconfig/HPOvNmsEvent/nnm-sim-incidentConfig.xml
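On UNIX, the steps above can be sketched as the following shell fragment. The set_prop helper and the file-existence guard are safety additions for illustration, not part of the documented procedure.

```shell
# Sketch of the steps above (UNIX paths). The set_prop helper is an
# assumption added so the property is not appended twice.
set_prop() {
  grep -q "^$1\$" "$2" 2>/dev/null || echo "$1" >> "$2"
}

PROPS=/var/opt/OV/shared/nnm/conf/ovjboss/ovjboss.jvm.properties
if [ -f "$PROPS" ]; then
  set_prop 'com.hp.nnm.trapd.useUdpHeaderIpAddress=true' "$PROPS"
  ovstop ovjboss
  ovstart ovjboss
  nnmconfigimport.ovpl -f /opt/OV/newconfig/HPOvNmsEvent/nnm-sim-incidentConfig.xml
fi
```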
Name | Variable |
Category | $c |
Severity | $s |
Count | $# (number of incident attributes) |
Name | $N (event name) |
Oid | $O or $o |
Uuid | $U |
SourceNode | $r |
NNMi 8.1x Patch 6 adds support for the following additional device types:
See the Device Support Matrix for more details.
A new NNM iSPI NET hotfix for QCCR1B40711 (iSPI NET Diagnostics Server does not detect that .NET V3.5 is installed and attempts to install .NET V3.0 which fails) is now available. This hotfix updates the installer for the NNM iSPI NET diagnostics server. The hotfix is beneficial for new installations of the NNM iSPI NET diagnostics server but does not change the diagnostics server runtime functionality.
Contact HP support to obtain this hotfix. For information about applying the hotfix to your system, see the readme_patch.txt file included in the hotfix package.
If any HP products that use the ovc process are installed on the NNMi management server, restart the ovc process after installing the NNMi 8.13 patch.
If a connection has been deliberately altered such that its interfaces are mismatched (that is, the "stipulated" duplex values are different even though the "configured" values are still the same, and the vendor is the same), NA correctly detects the mismatch, but NNMi does not.
To work around this problem, run the Diagnostics and Snapshot actions on the two devices in NA immediately after the FastEthernet0/24 interface on the VWAN_switch-2 device has been changed to half duplex.
In Help topic "Node Diagnostic Results Form (Flow Run Result) (NNM iSPI NET)", the code "Undefined variable: nmVariables.nnmtoolset-short" should be replaced with "NNM iSPI NET".
NNMi application failover is not supported on systems with Symantec Endpoint Protection installed.
NNM 8.1x Patch 4 introduces device extensions that support Component Health monitoring. For nodes that already exist in the NNMi topology, Component Health monitoring does not take effect until after discovery and NNMi restart. NNMi automatically discovers the Component Health metrics during the next rediscovery of each affected node. Therefore, if the rediscovery interval has not been changed from the default value of one day, the Component Health metrics should be fully discovered within one day of applying the patch and starting NNMi. For these nodes, the State Poller does not begin collecting the status of the discovered Component Health metrics until after the next restart of NNMi. Therefore, HP suggests the following sequence to turn on the monitoring of the Component Health metrics for all affected nodes:
Alternatively, you can force a configuration poll of each affected node to turn on the monitoring of the Component Health metrics for those nodes.
The HPOM web service integration with NNMi can send incidents to multiple HPOM servers. The HPOM agent integration with NNMi uses the NNMi northbound interface and is currently a one-to-one relationship. Only one northbound receiver (HPOM management server) can be configured at a time.
When enabled, the northbound interface forwards an EventLifecycleStateClosed event to the trap receiver whenever a management event or a third-party SNMP trap is closed by an NNMi user or process. The Send 3rd Party Traps check box on the HP NNMi-Northbound Interface Configuration form does not affect this behavior.
The HPOM web service integration with NNMi is designed for the typical use case of forwarding management events from NNMi to HPOM. Because large numbers of traps can overwhelm the forwarding mechanism of the NNMi-HPOM web service integration, it is not recommended to forward all traps received by NNMi to HPOM through this integration. The recommended approach for forwarding received traps from NNMi to HPOM is to use the NNMi trap forwarding capability.
These two commands require an absolute path for the -f option; relative paths do not work.
For NNMi systems running with the embedded Postgres database, NNMi 8.1x Patch 3 provides an automatic database backup and restore during patch installation and removal. Before updating the NNMi system, Patch 3 backs up the existing database. If Patch 3 is later removed, that backup is automatically restored so that the NNMi system continues working without having to reinstall NNMi.
In certain cases, the patch pre-installation database backup does not occur:
If Patch 3 is uninstalled and the system is running with the embedded Postgres database, the database backup taken during Patch 3 installation is restored. Note that all database changes that occurred after the Patch 3 installation are discarded. The database restore does not happen if the following conditions occur:
If the database was not restored and Patch 2 was not previously installed, NNMi is left in an unrunnable state. At this point, you have two options:
NNMi 8.1x Patch 3 supports Red Hat Linux 5.2 after you complete the following configuration steps. The configuration includes steps before NNMi 8.10 product installation, after installation, and before patch installation. Firewall configuration might be required.
Do the following steps:
Edit the /etc/sysconfig/selinux file as required, and then run:
chcon -t textrel_shlib_t /opt/OV/lib/*
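The library relabeling step above can be expressed as the following sketch. Running it requires root; the /opt/OV/lib path is taken from the command in the step, and the dry-run echo is an addition for illustration.

```shell
# Sketch: print (and then run, as root) the SELinux relabel command from
# the step above so NNMi's shared libraries may use text relocations.
LIBDIR=/opt/OV/lib
CMD="chcon -t textrel_shlib_t $LIBDIR/*"
echo "$CMD"   # review, then run as root on the NNMi management server
```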
A firewall may block access to the NNMi Console, NNMi SPIs, and Application Failover. If you have problems accessing these functions, the firewall might be active and blocking access. Ensure that the following ports are opened up to gain full access to NNMi and NNMi SPI functionality:
Port Name | Default Value | Comments |
jboss.http.port | 80/tcp | If this port is blocked by the firewall, the NNMi Console will not work. If a different port number was specified during installation, open that port instead of 80. |
jboss.ejb3.port | 3873/tcp | If iSPIs are being used, this port must be made accessible to remote systems running iSPIs. |
Application Failover Port | 45588/udp | This port must be unblocked if Application Failover is used. |
SNMP Ports | 161/udp, 162/udp | These ports must be unblocked for NNMi to receive traps. |
After completing these configuration steps, install NNMi Patch 3.
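As a sketch of the firewall side, the ports in the table above can be turned into iptables rules. iptables is an assumption here (any firewall works); the fragment only prints the rules so they can be reviewed before being applied as root.

```shell
# Sketch (iptables assumed as the firewall): generate ACCEPT rules for the
# NNMi ports listed in the table above. Printed only; apply as root.
RULES=$(for p in 80/tcp 3873/tcp 45588/udp 161/udp 162/udp; do
  port=${p%/*}; proto=${p#*/}
  echo "iptables -A INPUT -p $proto --dport $port -j ACCEPT"
done)
echo "$RULES"
```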
SYMPTOM DESCRIPTION:
After successfully loading a number of MIBs using:
nnmloadmib.ovpl -u <user> -p <password> -load <mib>
it is not possible to load any more MIBs.
The command fails each time with the following error: A fatal error has occurred. Please run the command with the resynch option.
RESOLUTION DESCRIPTION:
The synchronization logic was in error. The logic has now been corrected.
Management Event Configuration | SNMP Trap Configuration |
InterfaceInputUtilizationNone, InterfaceOutputUtilizationNone | IetfVrrpStateChange, RcVrrpStateChange |
RrgNoPrimary, RrgMultiplePrimary, RrgNoSecondary, RrgMultipleSecondary, RrgDegraded, RrgFailover, RrgSecondaryChanged | RcnChasPowerSupplyDown, RcnChasPowerSupplyUp, RcChasPowerSupplyDown, RcChasPowerSupplyUp, RcnChasFanDown, RcnChasFanUp, RcChasFanDown, RcChasFanUp, Rcn2kTemperature, Rc2kTemperature |
Trapstorm | CempMemBufferNotify, CiscoEnvMonVoltageNotify, CiscoEnvMonTemperatureNotify, CiscoEnvMonFanNotify, CiscoEnvMonRedundantSupplyNotify, CiscoEnvMonVoltStatusChangeNotify, CiscoEnvMonTempStatusChangeNotify, CiscoEnvMonFanStatusChangeNotify, CiscoEnvMonSupplyStatusChangeNotify |
CpuOutOfRangeOrMalfunctioning, MemoryOutOfRangeOrMalfunctioning, BufferOutOfRangeOrMalfunctioning, TemperatureOutOfRangeOrMalfunctioning, FanOutOfRangeOrMalfunctioning, PowerSupplyOutOfRangeOrMalfunctioning, VoltageOutOfRangeOrMalfunctioning | RcnAggLinkDown, RcnAggLinkUp, RcAggLinkDown, RcAggLinkUp, RcnSmltIstLinkDown, RcnSmltIstLinkUp, RcSmltIstLinkDown, RcSmltIstLinkUp |
AggregatorDown, AggregatorDegraded, AggregatorLinkDown, AggregatorLinkDegraded | |
IslandGroupDown | |
NnmclusterFailover, NnmclusterTransfer, NnmclusterStartup | |
SnmpTrapLimitWarning, SnmpTrapLimitMajor, SnmpTrapLimitCritical | |
NNMi 8.1x Patch 2 (also called NNMi 8.11) introduces changes to the NNMi database schema to support product enhancements in this patch. These changes cannot be reverted after they have been applied during the NNMi 8.1x Patch 2 installation. Therefore, unless special steps are taken before applying NNMi 8.1x Patch 2, NNMi cannot be reverted to any previous version after the patch has been installed. Attempting to roll back to a previous version results in the ovjboss process being unable to start because of these database schema changes. To revert to a prior version, NNMi must be completely removed from the system, reinstalled, and then restored from backup.
If you need to remove NNMi 8.1x Patch 2, you must take special steps to uninstall NNMi 8.1x Patch 2, all NNMi patches, any NNM iSPIs and patches, and the NNMi 8.10 product. Then you can reinstall NNMi 8.10 and any NNM iSPIs, and restore your database from backup.
NNMi Application Level Failover enables you to specify an alternate directory to hold the database files that are being exchanged between the systems participating in a failover cluster. This alternate directory is specified with the NNMCLUSTER_DB_ARCHIVE_DIR configuration parameter. (See the “Configuring NNMi for Application Level Failover” chapter in the NNMi Deployment Guide for further details.)
NNMi Application Failover has the following issue on Windows when %NnmDataDir% is set at install time to a directory name that includes parentheses characters (e.g. "C:\Program Files(x86)\"). The parentheses in "(x86)" cause the JGroups protocol stack to fail to initialize.
The "nnmcluster" command will use the alternate keystore file you have defined.
System reboots or other issues might cause the psql command to fail, generating dialogs to the Windows desktop and the event viewer. These errors do not affect operation and can be ignored.
If you want to uninstall the NNMi 8.11 patch and have previously enabled NNMi Application Failover, there is an additional cleanup step to perform on Windows platforms to remove the HP NNM Cluster Manager service:
The Quick Find "Make Empty" button from the “Assigned To” attribute in an Incident form does not work. To unassign an incident, use the "Unassign Incident" option from the actions menu.
The “Assigned To” attribute in the Incident form does not update if the Incident was unassigned when opened. To assign the incident, use the "Assign Incident" or "Own Incident" action. Then reopen or refresh the form.
If using LDAP to access your environment's directory services, sign in to NNMi using the same case as shown when you perform "Assign Incidents" from the Actions menu. This is because the case of your NNMi user name must match the case of the user name reported by the directory service. For example, if your directory service is case-insensitive and reports your login as "jane.smith", only sign in to NNMi as "jane.smith". If you sign in as "Jane.Smith", your sign in succeeds without errors or warnings; however, Actions->Assign Incident and the My Incidents view will not work.
Due to a timing issue, it is possible that File->Sign Out might not redirect your browser to the sign in screen. If this happens, select File->Sign Out a second time.
When specifying a MIB Filter Variable, the "begins with" wildcard (for example, vlan*) and the "contains" wildcard (for example, *vlan*) return the same results. In both cases, NNMi matches any value that contains the provided string (for example, vlan). To filter the appropriate instances, you can specifically list the instances to collect or use a list of wildcards.
Attempting to delete a Collection or Policy with a large number of Polled Instances can fail. In such cases, the user interface displays the "busy circle" for a few minutes, followed by an error dialog indicating that the batch update has failed. This failure is more likely to happen when NNMi collects data from a MIB table with multiple instances being polled for a given node. It is highly recommended that you filter only the instances that you want to poll, to help minimize this issue and the load on NNMi.
When you are unable to delete a Custom Poller Collection or Policy, try the following sequence of steps:
1) Delete the Collection. If that fails...
2) Delete each Policy on the Collection individually. For each Policy that fails to delete:
a) If the Policy has a MIB Filter value, change its value to a pattern that does not match any MIB Filter Variable value. Check the Custom Node Collection table to ensure that all nodes for that Policy have completed discovery. All Polled Instances for this Policy should then be removed.
b) If the Policy does not have a MIB Filter value, change the Policy to Inactive. This action should cause all Polled Instances associated with the Policy to be deleted. If it does not, try editing the associated Node Group to remove nodes from the group, which causes the Custom Node Collections and their Polled Instances to be deleted.
3) It should now be possible to delete the Policy successfully.
4) When all Policies for a Collection have been deleted, it should be possible to delete the Collection as well.
The information about the new Custom Poller feature in the Japanese online help has not been translated and is, therefore, available only in English at this time.
Some pre-existing features have been enhanced with the NNMi 8.11 release, but the Japanese version of the online help has not been updated. For example:
For the most recent information, please do one of the following: