HP Server Automation 7.86 Release Notes

for the HP-UX, IBM AIX, Red Hat Enterprise Linux, Solaris, SUSE Linux Enterprise Server, VMware, and Windows® Operating Systems

Software version: 7.86

Publication date: March 2011

This document is an overview of the changes made to Server Automation. It contains important information that is not included in books or Help.

 

Table of Contents

You can find information about the following in this document:

What's New in This Release?

What Was New in Previous Releases?

Installation

Known Issues

Fixed Issues for This Release

Fixed Issues for 7.85

Documentation Errata

HP Software Support

Legal Notices

 

What's New in This Release?

These release notes contain information about changes to Server Automation, Storage Visibility and Automation, SE Connector, and related components.

Fixed and Known Issues

Critical fixes and known issues each have their own tables: see Fixed Issues for This Release, Fixed Issues for 7.85, and Known Issues. Within those tables, defects are listed alphabetically by subsystem, and then numerically within each subsystem.

Supported Operating Systems and Platforms

For a list of supported operating systems and platforms for Server Automation Cores, Agents, clients, and Satellites, see the Server Automation Support and Compatibility Matrix.

For a list of supported operating systems and platforms for Storage Visibility and Automation Managed Servers, SE Connector, SAN Arrays, Fibre Channel Adapters, SAN Switches, File System Software, Database Support, and Storage Essentials Compatibility, see the Storage Visibility and Automation Support and Compatibility Matrix.

To check for updates to these documents, go to:
http://support.openview.hp.com/selfsolve/manuals

The HP Software Product Manuals site requires that you register for an HP Passport and sign in.
To register for an HP Passport, select the New users - please register link on the HP Passport login page.

 

What Was New in Previous Releases?

This section describes what was new in previous 7.8x releases.


Microsoft Hyper-V Virtual Machine Enhancements (7.81)

Please see the Support and Compatibility Matrix for the most updated information on this topic.

This version of HP Server Automation significantly improves and expands your ability to manage Hyper-V hypervisors and virtual machines (partitions). With this version of SA you can create and provision Hyper-V virtual machines with these operating systems:

Windows Server 2008 x64 and x86
Windows Server 2003 x86
Windows Server 2000
Windows XP Professional x86 SP 2 or SP 3
SUSE Linux Enterprise Server 10 with Service Pack 2 (x86 or x64 Edition)
SUSE Linux Enterprise Server 10 with Service Pack 1 (x86 or x64 Edition)


You can also modify and delete Hyper-V VMs. You can add, delete and modify the following on Hyper-V VMs:

Legacy Network Adapters
Network Adapters
SCSI Controllers
Virtual Hard Disks
DVDs

You can also modify the following on Hyper-V VMs:

Memory size
The number of virtual processors
BIOS order
VLAN and MAC address configuration of network adapters
Media specification for DVDs
Controller and location for virtual hard disks


For complete information, see "Microsoft Hyper-V Partition Management" in the SA User Guide: Server Automation.

Solaris Patching Enhancements (7.81)

This version of HP Server Automation significantly improves the process of keeping your Sun Solaris servers running with current patches.

With this version of SA you can:

For complete information, see "Patch Management for Solaris" in the SA User Guide: Application Automation.

Revised Sizing Guidelines (7.81, 7.82)

SA 7.80 and later have increased memory demands on the Slice Component bundle host(s). Table 3 and Table 4 provide the revised sizing guidelines:

Table 3: Small-to-Medium SA Deployment (SA 7.80 and later)

Managed Servers   SA Component Distribution by Server
                  Server 1*                     Server 2*                     Server 3**
500               MR, Infra, Slice 0, OS Prov   N/A                           N/A
1000              MR                            Infra, Slice 0, OS Prov       N/A
1000              N/A                           N/A                           MR, Infra, Slice 0, OS Prov

* Server Configuration: 4 CPU cores, 8 GB RAM, 1 GB/s network
** Server Configuration: 8 CPU cores, 16 GB RAM, 1 GB/s network

Table 4: Medium-to-Large SA Deployment (SA 7.80 and later)

Managed Servers   SA Component Distribution by Server
                  Server 1*   Server 2*                     Server 3*   Server 4*   Server 5*
2000              MR          Infra, Slice 0, OS Prov       N/A         N/A         N/A
4000              MR          Infra, Slice 0, OS Prov       Slice 1     N/A         N/A
6000              MR          Infra, Slice 0, OS Prov       Slice 1     Slice 2     N/A
8000              MR          Infra, Slice 0, OS Prov       Slice 1     Slice 2     Slice 3

* Server Configuration: 8 CPU Cores, 8 GB RAM, 1 GB/s network

Windows Agent Deployment Helper Obsolete (7.81)

In SA 7.81, the Windows Agent Deployment Helper (WADH) is no longer required to manage Windows servers with SA and has been removed from the SA distribution. The process of bringing Windows servers under SA management is now the same as for any other platform.

Note: After you install this patch on all your core and satellite servers and are certain that you will not need to roll back the 7.81 patch, you can redeploy the Windows server that hosted the WADH.

The removal of WADH obsoletes the following sections in the SA 7.80 documentation set:

The folder contains the tools required to install the Windows Agent Deployment Helper and upload ISMs to SA.

See the SA Planning and Installation Guide for more information about Windows Agent Deployment Helper. See the SA Content Utilities Guide for more information about ISMs.

- Read access to facilities where you will scan for servers and manage servers.
- Features > Managed Servers and Groups must be enabled.
- Client Features > Unmanaged Servers > Allow Manage Server set to Yes.
- Client Features > Unmanaged Servers > Allow Scan Network set to Yes.
- Read access must be set to customer Opsware.


Agent Deployment Tool (ADT) Behavior in a Mixed-SA Version Environment (7.81)

When you run the Agent Deployment Tool (ADT) from an SA Client session logged in to a 7.81 core, Windows agent deployment from that session is supported only to realms that are also running SA 7.81; deployment to realms running earlier SA versions is not supported.
If the SA Client session is logged in to a pre-7.81 core (for example, 7.80 or 7.50.03), and that core has a properly configured Windows Agent Deployment Helper server, you can deploy Windows agents from that session to realms running SA 7.81 as well as earlier versions.


Veritas File System Support - Red Hat Enterprise Linux SA Cores (7.81)

As of SA 7.81, the Veritas File System (VxFS) is supported for SA Cores on Red Hat Enterprise Linux; it is not supported on Solaris systems. For more information, see the SA Supported Platforms document in the documentation directory of your SA installation.


Storage Visibility and Automation Feature (7.81)

For Server Automation 7.81, the following changes were made to the Storage Visibility and Automation feature:

See the Storage Visibility and Automation 7.81 Release Notes for detailed information about these changes.

Oracle Real Application Clusters (RAC) Support (7.82)

Concurrent with the release of version 7.82, SA provides support for Oracle RAC. To configure SA for Oracle RAC support, you must perform a fresh installation of SA 7.80, configured for Oracle RAC, and then upgrade to SA 7.82. For more information about configuring SA for Oracle RAC support, see "Oracle RAC Support" in Oracle Setup for the Model Repository (SA Planning and Installation Guide, Appendix A), on page 89 of these release notes.

Solaris Patching Supports Patch Bundles (7.82)

Version 7.82 of HP Server Automation adds support for Solaris patch bundles.

- Reboot Required: Yes – This setting indicates the managed server will be rebooted when the patch bundle is successfully installed.
- Install Mode: Single User Mode – This setting indicates that the patch bundle will be installed in single user mode. Note that the Solaris system is rebooted to single user mode, then the patch bundle is installed, then the system is rebooted to multiuser mode.
- Reboot Type: Reconfiguration – This setting indicates that a reconfiguration reboot will be performed after installing the patch bundle.
- Reboot Time: Immediate – This setting indicates that the server will be rebooted immediately after installing the patch bundle.

A software compliance scan will similarly indicate the server is out of compliance if the patch bundle is included in the software policy and the same scenario occurs.

To bring the server into compliance, place the relevant patches into a patch policy, resolve the dependencies on the policy to place all required patches in the policy and remediate the policy on the server.

Solaris Patching and Benign Error Codes

Installing Solaris patches sometimes results in benign error codes. A benign error code is an error code that does not reflect a true error situation. For example, a patch installation may fail because the patch is already installed or because a superseding patch is installed, resulting in a benign error code. The exit code from the Solaris patchadd command would indicate an error, when in reality the patch was not installed for a valid reason.
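The benign-versus-real distinction can be sketched in shell. The exit-code mapping below is an assumption for illustration only; consult the patchadd man page on your Solaris release for the authoritative codes:

```shell
# Hypothetical sketch: classify a patchadd exit status as ok, benign,
# or a real error. The specific code meanings here are assumed, not
# taken from Solaris documentation.
classify_patchadd_exit() {
  case "$1" in
    0) echo "ok"     ;;  # patch installed successfully
    2) echo "benign" ;;  # assumed: patch (or a superseding patch) already applied
    *) echo "error"  ;;  # real failure, e.g. out of disk space
  esac
}

classify_patchadd_exit 2   # prints "benign"
```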

When a patch does not install because of a true error situation, such as the server being out of disk space, SA reports the error and the valid error code.
SA detects benign error codes and reports success in most cases. In the following two cases, however, SA cannot detect benign error codes:

You can configure SA to detect benign error codes in these cases by performing the following steps:

  1. Install the following patches on all your servers running Solaris 10:
    - 119254-36 (sparc)
    - 119255-36 (i386)
  2. Run the SAS Web Client and log in as a user with "Configure Opsware" permission.
    The Configure Opsware permission is given by default to the "SA/Opsware System Administrators" group. You can locate and set it in the SAS Web Client by selecting Administration > Users & Groups, selecting the Groups tab, selecting the "SA/Opsware System Administrators" group, and then selecting the Features tab.
  3. Under the Administration node, select System Configuration.
  4. Select Command Engine.
  5. In the configuration parameters table, locate the line "way.remediate.sol_parse_patchadd_output".
  6. Select "Use value:".
  7. Enter the number 1 in the edit field.
  8. Select the Save button.
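As a quick sanity check after step 1, you can look for the patch in the server's installed-patch list. The sketch below parses a sample line of showrev -p style output; the sample line is illustrative, not real command output:

```shell
# Sketch: check whether patch 119254 (any revision) appears in
# showrev -p style output on a Solaris 10 sparc server. sample_line
# is a stand-in for the real command output.
sample_line='Patch: 119254-36 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWswmt'
if printf '%s\n' "$sample_line" | grep -q '^Patch: 119254-'; then
  echo "patch 119254 present"
fi
```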

Behavior when a Pre-Install Script Fails (7.82)

You can specify pre-install scripts in patches, packages, and software; these scripts run before the patch, package, or software is installed on a server. For each pre-install script, you can specify the behavior if the pre-install script fails. The following shows a pre-install script and the error setting.

RMFOLDER=/opt/opsware/dbfile.cb
if [ -d "$RMFOLDER" ]; then
  rm -rf "$RMFOLDER"
fi

When you initiate a remediate job or an install job that installs patches, packages or software, you can specify the behavior if any part of the job fails. The following shows an Install Patch job and the setting that controls the behavior when any part of the job fails.

In the Install Patch window:

All Steps > 3. Install Options

Install Options > Staged Install Options > Continuous: Run all phases as an uninterrupted operation.

Error Options > Attempt to continue running if an error occurs.

Before SA 7.82, if the job's error setting was "Attempt to continue running if an error occurs", the pre-install script's error setting was "Stop Install", and an error occurred in the pre-install script, the job ignored the script's error setting and continued running.

As of SA 7.82, if this situation occurs, the error setting for the pre-install script applies and the patch or package or software will not be installed. The job will continue running and attempt to install the remaining patches, packages or software.

To retain the pre-SA 7.82 behavior, change the error setting on the pre-install script to "Continue".

Approving Blocked Jobs that Run SA Extensions (7.82)

Job Approval Integration in SA allows you to block certain SA jobs from running until they are verified and unblocked. The typical method of unblocking these blocked jobs is by using HP Operations Orchestration (OO). SA 7.82 provides a way to unblock jobs that run program APXs (Automation Platform Extensions) without requiring HP Operations Orchestration. For more information on Job Approval Integration, see the SA Platform Developer's Guide.

In releases prior to SA 7.82, the only way to unblock blocked jobs was by calling an OO flow. The OO flow performed the appropriate checks and unblocked the job, allowing it to run.

SA 7.82 adds the ability to verify and unblock jobs that run program APXs without requiring HP Operations Orchestration.

Note: This applies only to "Run Program Extension" jobs, which are jobs that run a program APX. APXs are extensions to SA. For more information on APXs, see "Creating Automation Platform Extensions (APX)" in the SA Platform Developer's Guide.

Configuration Parameter                  Value
approval_integration.apx.enabled         0 (default): disables the ability to unblock jobs with an APX.
                                         1: enables unblocking jobs with an APX.
approval_integration.apx.uniquename      Specifies the name of the program APX that will handle blocked jobs.

Enabling Job Approval for APXs (7.82)

You must have the appropriate permissions to make changes to System Configuration parameters. For more information on permissions, see the SA Administration Guide.

To create this type of APX, perform the following steps:

  1. Write a program APX that checks the blocked jobs and unblocks them using the SA API methods approveBlockedJob(), updateBlockedJob(), cancelScheduledJob() and findJobRefs(). These methods are the callbacks into SA that enable job approval integration. For details on writing APXs see "Creating Automation Platform Extensions (APX)" in the SA Platform Developer's Guide.
  2. Log in to the SAS Web Client. For more information on the SAS Web Client, see the SA User's Guide: Server Automation.
  3. In the navigation pane, select Administration > System Configuration. This displays the subcomponents of the SA platform.
  4. Under "Select a Product:", select Opsware. This displays the system configurations you can modify.
  5. Locate the entry for approval_integration.apx.enabled.
  6. Under the Value column, select the "Use value" button.
  7. In the text box next to the "Use value" button, enter a 1.
  8. Locate the entry for approval_integration.apx.uniquename.
  9. Under the Value column, select the "Use value" button.
  10. In the text box next to the "Use value" button, enter the unique name of your program APX that checks and unblocks blocked jobs.
  11. Select the Save button at the bottom of the page.
  12. Set up a mechanism to run this APX. For example, you could schedule the APX to run periodically to check for blocked jobs.

Disabling Job Approval for APXs (7.82)

You must have the appropriate permissions to make changes to System Configuration parameters. For more information on permissions, see the SA Administration Guide.

To disable the unblocking APX, set the value of approval_integration.apx.enabled to 0. For details on setting this system configuration value, see Enabling Job Approval for APXs on page 22.

Restricting Access to RPM Folders (7.82)

In SA 7.82, you can ensure that your Linux managed servers only have access to the set of RPMs in the SA Library that apply to each server. You simply specify in a custom attribute the folders in the SA Library that the server has access to. All other folders will be inaccessible to the server. This section describes how to set up these restrictions.

With this new mechanism, you can mimic the common Red Hat systems administration paradigm of having multiple, distinct yum (Yellowdog Updater, Modified) repositories. This gives you folder-level control over which versions of RPMs can be applied to a given server, allowing you to precisely manage platform update versions, for example, Red Hat Advanced Server AS4 Update 5 versus Update 6.

This is not intended as a user-level access control mechanism, but rather to restrict the library and folder view of a managed server from access to the full set of RPMs in the SA Library. For information on user level folder access controls and folder permissions in the SA Library, see the SA Administration Guide.

How the RPM Folder Restrictions Work

During remediation, if a server has one or more of these custom attributes defined, SA reads the custom attribute values and only allows the managed server access to the RPMs in the SA Library folders specified in the custom attributes and their subfolders. Subfolders of all the specified folders are recursively searched for RPMs. All other folders are not accessible to the server.

Enabling RPM Folder Restrictions

To restrict a server or group of servers to a subset of RPMs in the SA Library, set a custom attribute in the format described below on your managed server or at a location that will be inherited by the server such as a device group, a software policy, a customer, a facility and so forth.

These custom attributes follow the custom attribute inheritance rules. For example, if you set a custom attribute at the facility level, the servers in that facility will inherit the custom attributes.

SA does not validate the SA Library folder paths you specify in these custom attributes so make sure the folder paths you specify are correct.

For more information about custom attributes, see the SA User's Guide: Application Automation.

Custom Attribute Format (7.82)

The custom attributes that restrict access to RPMs must be in the following format:

repo.restrict.<name>

Where <name> is any user-defined alphanumeric string. Specify a <name> that is descriptive and helps you remember the purpose of the custom attribute. You can define multiple custom attributes as long as each <name> is unique.

Examples

The following defines custom attributes that grant access only to the SA Library directories /Redhat/AS4/en/x86_64/U5 and /Oracle/10/AS4/x86_64:

repo.restrict.as4u5=/Redhat/AS4/en/x86_64/U5
repo.restrict.oracle_updates=/Oracle/10/AS4/x86_64

The custom attribute value can be multiple lines. The following defines custom attributes that grant access only to the SA Library directories listed:

repo.restrict.as4u5=/Redhat/AS4/en/x86_64/U5
/Redhat/AS4/en/x86_64/U5-extras
repo.restrict.s5u3=/Redhat/5Server/en/x86_64/U3
/Redhat/5Server/en/x86_64/U3-extras
/Redhat/5Server/en/x86_64/U3-VT
/Redhat/5Server/en/x86_64/U3-Cluster

Troubleshooting Errors

If you attempt to remediate a software policy that contains RPMs that are not accessible to the server, the following error message is displayed:

The metadata needed to install this package is missing.

This indicates that SA was unable to access the RPM because the server does not have access to the RPM in the SA Library. To resolve this error, check the folder locations you have set in your custom attributes to ensure they are correct.

 

New Memory Requirement for Solaris x86_64 VM PXE Booting (7.83)

In order to PXE boot a Solaris x86_64 VM, you must assign the Solaris VM one gigabyte of memory or more.

SA Cores on VMs Support (7.83)

SA 7.80 added support for running SA Core Components within VMware ESX Virtual Machine (VM) environments.

In the release notes, HP recommended that the Model Repository not be deployed to a VM and that the only supported installations were those in which the VM which supports the SA Core infrastructure was the sole VM.

The intent was to ensure that in conditions where overall performance was potentially a factor, SA performance could be isolated from the environmental impact of resource contention caused by other VMs. However, it was not intended to require that all SA Core on VM installations would always have exclusive domain of the ESX container.

To clarify:

Red Hat Enterprise Linux 4.x PPC64 and 5.x PPC64 OS Provisioning (7.83 and 7.84)

While most OS Provisioning procedures are the same for Red Hat Enterprise Linux 4.x PPC64 and 5.x PPC64 as documented in the SA Policy Setter's Guide and the SA User's Guide: Server Automation, there are certain differences.

Red Hat Enterprise Linux 4.x PPC64 and 5.x PPC64 Kickstart files should be specified similarly to that shown in the sample files below:

Red Hat Enterprise Linux 4.x PPC64 Sample Kickstart File

lang en_US.UTF-8
timezone --utc US/Pacific
reboot
text
install
bootloader --location=partition --driveorder=sda,sdb --append="console=hvsi0 rhgb quiet"
#zerombr yes

clearpart --drives=sda --initlabel
part prepboot --fstype "PPC PReP Boot" --size=4 --ondisk=sda
part /boot --fstype ext3 --size=100 --ondisk=sda
part pv.3 --size=0 --grow --ondisk=sda
volgroup VolGroup00 --pesize=32768 pv.3
logvol / --fstype ext3 --name=LogVol00 --vgname=VolGroup00 --size=1024 --grow
logvol swap --fstype swap --name=LogVol01 --vgname=VolGroup00 --size=1000 --grow --maxsize=5888

authconfig --enableshadow --enablemd5
rootpw opsware

firewall --disabled
selinux --disabled

skipx

%packages
@Base

Red Hat Enterprise Linux 5.x PPC64 Sample Kickstart File

lang en_US.UTF-8
timezone --utc US/Pacific
reboot
text
install
bootloader --location=partition --driveorder=sda,sdb --append="console=hvsi0 rhgb quiet"
#zerombr yes

clearpart --drives=sda --initlabel
part prepboot --fstype "PPC PReP Boot" --size=4 --ondisk=sda
part /boot --fstype ext3 --size=100 --ondisk=sda
part pv.3 --size=0 --grow --ondisk=sda
volgroup VolGroup00 --pesize=32768 pv.3
logvol / --fstype ext3 --name=LogVol00 --vgname=VolGroup00 --size=1024 --grow
logvol swap --fstype swap --name=LogVol01 --vgname=VolGroup00 --size=1000 --grow --maxsize=5888

authconfig --enableshadow --enablemd5
rootpw opsware

firewall --disabled
key --skip
selinux --disabled

skipx

%packages
@Base

DHCP Configuration For PowerPC

PowerPC machines must be booted using BOOTP, which requires that the dynamic-bootp flag be enabled (using the dhcpdtool) in each range statement in the dhcpd_subnets.conf file.

The dynamic-bootp usage is:

range [ dynamic-bootp ] low-address [ high-address ];

For more information about dhcpd.conf statement usage, see:

http://www.daemon-systems.org/man/dhcpd.conf.5.html
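For illustration, a range statement with the dynamic-bootp flag enabled might look like the following in dhcpd_subnets.conf (the subnet and address values here are hypothetical):

```
subnet 192.168.157.0 netmask 255.255.255.0 {
    option routers 192.168.157.1;
    range dynamic-bootp 192.168.157.100 192.168.157.200;
}
```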

Network Booting Red Hat Enterprise Linux 4.x or 5.x PPC64 Servers

  1. Mount the new PowerPC server in a rack and connect it to the network. The installation client on this network must be able to communicate with the SA DHCP server on the SA Core network. If the installation client is running on a different network than the SA Core network, your environment must have a DHCP proxy (IP helper).
  2. Use the SMS menu to configure the server to boot from the hard disk on which the operating system will be installed because the OS Provisioning process requires several reboots that default to the local disk.
  3. Start Open Firmware.
  4. Use the boot command to boot the server over the network. This command requires the Open Firmware path to the device you are booting from. You can specify device aliases to a device's Open Firmware path. If you configured the boot order with the SMS menu you can use the printenv and devalias commands to create the alias.
    For example:

    printenv boot-device
    boot-device /pci@800000020000002/pci@2,4/pci1069,b166@1/scsi@1/sd@5,0
    /pci@800000020000002/pci@2/ethernet@1:speed=auto,duplex=auto,
    192.168.157.2,,192.168.157.25,192.168.157.1
    devalias net /pci@800000020000002/pci@2/ethernet@1


  5. After you have set the net device alias, issue the following command:

    boot net:[SERVER_IP],[IMAGE_FILE],[CLIENT_IP],[GW_IP] [ARGUMENTS]

    You need only specify the IMAGE_FILE argument, for example:

    boot net:,yaboot,,

    Executing this command retrieves the bootloader (yaboot) and displays the server boot options. Press Enter to boot the default option (linux5) or wait for the boot to occur automatically.

  6. The Red Hat Anaconda installer starts. If your server has multiple network interfaces, the installer may prompt you to specify the interface to use.
  7. After the booting process finishes successfully, a message appears on the console indicating that the server is ready for OS provisioning. Because the OS Build Agent has been installed, the server now appears in the SAS Web Client Server Pool list.
  8. (Optional) Record the MAC address and/or the serial number of the server so that you can locate the server in the SAS Web Client Server Pool list or in the SA Client Unprovisioned Servers list.
  9. Verify that the server appears in the SA Client Unprovisioned Server list and is ready for OS installation. For more information, see the SA User's Guide: Application Automation.

SuSE Linux Kernels and PPC Architecture (7.83 and 7.84)

When installing SUSE Enterprise Linux on PPC architectures, consoles may not work after the operating system loads during the boot process. Therefore, when provisioning or reprovisioning a server with SUSE Enterprise Linux on PPC architectures, you could lose console access to the server being provisioned.


New JDK Version Required for the DCML Exchange Tool (DET) (7.83)

As of SA 7.80, the DCML Exchange Tool (DET) requires JDK 1.6.

Support for Multiple Database Instances on the Model Repository Host (7.84)

SA now supports multiple database instances on the database server where one of the instances is the SA Model Repository instance. The requirements, configuration, and procedures for adding instances are the same as those shown in the SA Simple/Advanced Installation Guide, Appendix A: Oracle Setup for the Model Repository.

 

HP-UX Patch Management and OS Provisioning Support (7.84)

This release provides support for HP-UX Patch Management and for OS Provisioning. The documentation for these features is contained in separate white papers that you can download from the HP Self Solve web site:

http://support.openview.hp.com/selfsolve/manuals

This site requires that you register for an HP Passport and sign in. To register for an HP Passport ID, go to:

http://h20229.www2.hp.com/passport-registration.html

Or click the New users - please register link on the HP Passport login page.

The documents are titled:

Windows Server 2008 R2 Support (7.84)

This release adds support for Windows Server 2008 R2. If you plan to provision or manage Windows Server 2008 R2 hosts, there are certain additional steps you must take during the patch installation process to ensure full compatibility and support for existing configurations (application configuration, software policies, and so on). See Chapter 2, Installing SA 7.84, on page 43 of this guide.

Model Repository Database on HP-UX and IBM AIX Supported (7.84)

As of this release, SA supports installation of the Oracle database for the Model Repository on the HP-UX and IBM AIX platforms supported by Oracle. The installation procedure is the same as that described for remote databases in the SA Simple/Advanced Installation Guide, Appendix A: Oracle Setup for the Model Repository.

Veritas File System Support (7.84)

As of SA 7.84, the Veritas File System (VxFS) is supported for SA Cores on Red Hat Enterprise Linux and Solaris 10 U6. For more information, see the SA Supported Platforms in the documentation directory of your SA installation.

Simplified Database Schema Update Script (7.84)

SA 7.84 provides a new, simplified database schema update script, patch_database.sh, that combines the multiple scripts that were previously required to be run before patch installation.


Sunsolve Website Rebranding and the solpatch_import Script (7.84)

Oracle Corp. has rebranded the SunSolve website. Therefore, before running solpatch_import -action=create_db as described in the SA User's Guide: Server Automation, you must log in to your SunSolve account and subscribe to patch download automation. For more information, see:

http://support.openview.hp.com/selfsolve/document/KM961930

 

New Guide (7.85)

APX for Configuring LVM and MPIO: User Guide.

Fujitsu Clusters (7.85)

Support for Fujitsu Clusters

As of 7.85, SA supports Fujitsu clusters. A Fujitsu cluster is a cluster designed for a Solaris system that runs on Fujitsu hardware. This section contains information pertaining to these clusters. For more information on clusters, see the SA User Guide: Application Automation.

Issuing SA Commands

You can use the same cluster commands for Fujitsu clusters as you do for standard Solaris clusters. Use the following command to display more information on cluster commands:

/opt/opsware/solpatch_import --manual

Fujitsu clusters can only be imported using the solpatch_import command.

Special Considerations for Downloading Fujitsu Clusters

If you use a single solpatch_import command to download both a Fujitsu cluster and a Solaris Recommended cluster file, both files are downloaded to the same location but are not imported into the SA core. The first downloaded cluster is overwritten by the second, because both clusters have the same file name (for example, 10_Recommended.zip). To avoid overwriting one file with the other, do not use a single solpatch_import command to download the two clusters. Instead, download the first cluster, move it to a different location, and then download the second one.
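The overwrite hazard and its workaround can be illustrated generically. The sketch below uses plain shell file operations as stand-ins for the actual solpatch_import downloads; the paths are hypothetical:

```shell
# Both clusters download under the same name (10_Recommended.zip), so
# move the first file aside before fetching the second. The echo
# commands stand in for the real solpatch_import downloads.
mkdir -p /tmp/sol_dl /tmp/sol_standard
echo "standard cluster" > /tmp/sol_dl/10_Recommended.zip   # first download
mv /tmp/sol_dl/10_Recommended.zip /tmp/sol_standard/       # move it aside
echo "fujitsu cluster"  > /tmp/sol_dl/10_Recommended.zip   # second download no longer clobbers the first
```

After the move, both cluster files survive under distinct directories and can be imported separately.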

Note: You can still use a single solpatch_import command to import Fujitsu clusters and standard Solaris Recommended clusters for the same platform. When SA imports a file, it downloads and then immediately imports it into the core, so no file overwriting can occur.

Special Considerations When Creating Patch Policies for Fujitsu With the -policy Option

You can create patch policies for any cluster from the command line or in the SA Client.
When you create a patch policy for a Fujitsu cluster using the -policy option from the command line, all applicable patches included in the cluster are applied, regardless of whether Fujitsu intended them to be installed on your hardware model using the cluster install. These extra patches do not cause harm. However, if you would rather apply only the patches that Fujitsu has designated for your hardware model, use the SA Client to create a new policy and include the Fujitsu cluster. When you remediate the policy, SA correctly applies only the relevant patches.

 

Sun Solaris Moves to Oracle (7.85)

As of the 7.85 release, Oracle and Sun were in the process of retiring the SunSolve web site and establishing a new Oracle support web site for Solaris patches and patch information. Once this transition is complete, HP will provide information about how to modify your solpatch_import.conf file to ensure that Solaris patching with SA continues to work correctly.

This information will be posted on the standard HP support channels.
For more information on Solaris patching and the solpatch_import.conf file, see the SA User Guide: Application Automation.

For more information, see the Knowledge Base Article KM1032711 at: (http://support.openview.hp.com/selfsolve).

 

New Command Engine System Configuration Parameter (7.85)

A new system configuration parameter, way.remediate.yum, was added under Command Engine to solve an issue where a Red Hat Enterprise Linux Server 5 x64 OS update software policy did not honor kernel RPM dependencies.

This parameter enables you to use YUM (Yellowdog Updater, Modified) for install and remove transactions, which greatly simplifies installation when a package has many dependencies. YUM is a front end that can be used with RPM (Red Hat Package Manager) to perform install and remove transactions. Unlike RPM, YUM tracks the dependencies of a package and installs them before installing the package that the user wants to install.

You can use the way.remediate.yum parameter to specify which tool will be used to install RPM packages. Possible values:
  0 = only use RPM.
  1 = use YUM when available; otherwise use RPM. (default)
  2 = only use YUM.

YUM versions 2.4.3 and later are supported.
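The value-to-behavior mapping above can be sketched in a few lines. This is illustrative Python only; the function name is hypothetical, and the actual selection is performed internally by the Command Engine during remediation:

```python
def choose_install_tool(way_remediate_yum, yum_available):
    """Illustrates how way.remediate.yum values map to a package tool.

    0 = only use RPM
    1 = use YUM when available; otherwise use RPM (the default)
    2 = only use YUM
    """
    if way_remediate_yum == 0:
        return "rpm"
    if way_remediate_yum == 2:
        return "yum"
    # Default behavior (1): prefer YUM so dependencies are resolved
    # automatically; fall back to RPM when YUM is unavailable.
    return "yum" if yum_available else "rpm"
```

For example, on a managed server without YUM installed, the default setting (1) behaves like plain RPM.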

Faster Application Configuration Pushes (7.85)

You can speed up your application configuration pushes by disabling the restore capability, as described in this section.

For complete details on application configurations, see the SA Application Configuration User Guide.

Whenever you push an application configuration to a server, SA creates a snapshot of the server's configuration files. If you later restore the configuration file on the server, SA uses the snapshot to restore the configurations. One way to speed up application configuration pushes is to disable the snapshot creation. However, when you disable the snapshot creation, you also disable the ability to restore previous application configurations. Only disable snapshot creation if you are sure you will not need to restore your application configurations to a previous state.

You can disable snapshot creation in one of two ways:

  1. Disable snapshot creation for all servers: Set the global configuration parameter appconfig.disable_snapshots to 2, as described below.
  2. Disable snapshot creation for selected servers: Set the global configuration parameter appconfig.disable_snapshots to 1, then create the appconfig.enable_snapshots custom attribute (with a value of 1) only where snapshots should still be taken, as described below.

Disabling Snapshots for Application Configuration Pushes

To disable snapshot creation and the ability to restore previous application configurations, perform the following steps.

  1. Log in to the SAS Web Client as a user that is a member of the Opsware System Administrator user group.
  2. In the SAS Web Client, select System Configuration, under the Administration section in the navigation pane. This displays the System Configuration screen.
  3. In the System Configuration screen, select Web Services Data Access Engine. This displays configuration settings for the Web Services Data Access Engine.
  4. Locate the configuration parameter named "appconfig.disable_snapshots".
  5. Set the value of appconfig.disable_snapshots to one of the following values:
       0 = create snapshots for all application configuration pushes on all servers (the default).
       1 = create snapshots only on servers where the appconfig.enable_snapshots custom attribute is set to 1.
       2 = do not create snapshots on any servers.
  6. Scroll down to the bottom of the screen and select the Save button.
  7. Run the command below on each server hosting a Slice Component bundle in your SA Core to restart the Web Services Data Access Engine. For detailed instructions, see the SA Administration Guide.

    /etc/init.d/opsware-sas restart twist
  8. If you set the value of appconfig.disable_snapshots to 0 or 2, skip the remaining steps.
  9. If you set the value of appconfig.disable_snapshots to 1, create a custom attribute named appconfig.enable_snapshots where you want snapshots to be taken and set its value to 1. Custom attributes can be created on device groups, customers, and facilities, and they apply to all servers in the device group, customer, or facility, respectively. You can also create custom attributes on software policies; all servers with the software policy attached inherit the custom attribute.

    For more information about custom attributes, see the SA User Guide: Server Automation.
  10. Optionally, on the servers where you do not want snapshots created, create a custom attribute named appconfig.enable_snapshots and set its value to 0. This step is optional because the snapshot will not be taken on any servers that do not define the custom attribute.

The following table shows when a snapshot will be created when you push application configurations depending on the value of the global configuration parameter appconfig.disable_snapshots and the value of the appconfig.enable_snapshots custom attribute on individual servers:

When Snapshots are Created

  Value of the appconfig.enable_snapshots    Value of the global appconfig.disable_snapshots parameter:
  custom attribute on a server               0 (default)         1      2 (or any other value)
  -----------------------------------------  ------------------  -----  ----------------------
  Not defined                                Yes (all servers)   No     No
  0 (or any value other than 1)              Yes (all servers)   No     No
  1                                          Yes (all servers)   Yes    No

  When appconfig.disable_snapshots is set to 2 (or any value other than 0 or 1), no snapshots are created on any servers.

 
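The table reduces to a simple decision rule, sketched here in illustrative Python (the helper is hypothetical, not SA code):

```python
def snapshot_created(disable_snapshots, enable_snapshots=None):
    """True if SA would snapshot a server's configuration files on a push.

    disable_snapshots: value of the global appconfig.disable_snapshots
                       system configuration parameter.
    enable_snapshots:  value of the server's appconfig.enable_snapshots
                       custom attribute, or None if it is not defined.
    """
    if disable_snapshots == 0:
        return True                   # default: snapshots on all servers
    if disable_snapshots == 1:
        return enable_snapshots == 1  # only where the attribute is set to 1
    return False                      # 2 (or any other value): no snapshots
```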

Enabling Snapshots for All Application Configuration Pushes on All Servers

To enable snapshot creation and enable the ability to restore previous application configurations on all your managed servers, perform the following steps.

  1. Log in to the SAS Web Client with a user that is a member of the Opsware System Administrator user group.
  2. In the SAS Web Client, select System Configuration, under the Administration section in the navigation pane. This displays the System Configuration screen.
  3. In the System Configuration screen, select Web Services Data Access Engine. This displays configuration settings for the Web Services Data Access Engine.
  4. Locate the global configuration parameter named "appconfig.disable_snapshots".
  5. Set the value of appconfig.disable_snapshots to 0.
  6. Scroll down to the bottom of the screen and select the Save button.
  7. Run the command below on each server hosting a Slice Component bundle in your SA Core to restart the Web Services Data Access Engine. For detailed instructions, see the SA Administration Guide.

    /etc/init.d/opsware-sas restart twist

This creates snapshots for all application configuration pushes on all managed servers.

 

Storage Visibility and Automation (7.85)

APX Created to Configure Volume Manager

A WAPX (Web Application Programming eXtension) solution that automates the configuration of the native Linux Logical Volume Manager (LVM) is new in this release. This feature streamlines the configuration of volume managers on SA managed servers.

MPIO configuration is NOT supported in 7.85.

See the APX for Configuring LVM and MPIO: User Guide for more information.

 

SE Connector (7.85)

Attaching and Remediating the SE Storage Scanner and SE Connector Update Policies

This section describes the steps to follow when you attach and remediate the SE Storage Scanner and SE Connector Update policies.

To attach and remediate:

  1. Attach the software policy SE Storage Scanner to the managed server.
  2. Remediate the server.
  3. If your HP Storage Essentials management server is version 6.1.1, you do not need to follow any more steps.
  4. If the HP Storage Essentials management server is version 6.2 or later, attach the software policy SE Connector Update for your version to the managed server.

Note: The version of the SE Connector Update must be compatible with the version of the Storage Essentials server; that is, the version numbers of the SE Connector Update libraries must match the Storage Essentials version. For example, if you have SE 6.2 installed, install the SE Storage Scanner first, then install the SE Connector Update for 6.2.

Back to the Table of Contents


Installation

This section describes the SA 7.86 installation procedure.

General Information

If any installed SA components (other than a previously installed patch) have a build ID that differs from this patch's build ID, you will not be allowed to install this patch.

To determine the build ID for a core machine, open the file:

/var/opt/opsware/install_opsware/inv/install.inv

and find the section beginning with %basics_. Under this line, find the build_id.

For example:

%basics_linux

build_id: opsware_37.0.3006.*

When you install an SA patch, the patch installation updates the install.inv file to record the patch installation and the patch build ID. For example:

%opsware_patch

build_id: opsware_37.0.3826.0
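As an illustration of the install.inv layout just shown, the following Python sketch (a hypothetical parser, not an SA tool) extracts the build_id recorded under each %-prefixed section:

```python
def build_ids(install_inv_text):
    """Maps each %-prefixed section of install.inv-style text to its build_id.

    Sections begin with a line such as %basics_linux or %opsware_patch;
    a section may contain a 'build_id: ...' line.
    """
    ids = {}
    section = None
    for raw in install_inv_text.splitlines():
        line = raw.strip()
        if line.startswith("%"):
            section = line          # start of a new section
        elif section and line.startswith("build_id:"):
            ids[section] = line.split(":", 1)[1].strip()
    return ids
```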

If you must roll back this patch in a Multi-master Mesh, HP recommends that you roll back the secondary cores and satellites first, then the primary core.

Could not find spog.pkcs8 /var/opt/opsware/crypto

If this error is encountered, copy the certificate from another core machine (for example, occ ) to

/var/opt/opsware/crypto/oi

and retry the operation.

MBSA 2.1.1 Supported for SA 7.84 and Later

Obtain the required Windows patch management files by performing the following tasks:

  1. Obtain the following files from Microsoft: mbsacli.exe and wusscan.dll.

    These files are packaged with the MBSA 2.1.1 setup file, MBSASetup-x86-EN.msi, which you must download from:

    http://www.microsoft.com/downloads/en/details.aspx?FamilyID=B1E76BBE-71DF-41E8-8B52-C871D012BA78

    After the download, run MBSASetup-x86-EN.msi on a Windows machine to install MBSA 2.1.1. In the directory where you installed MBSA 2.1.1, locate the mbsacli.exe and wusscan.dll files. By default, mbsacli.exe is installed here:
    %program files%\Microsoft Baseline Security Analyzer 2\mbsacli.exe

  2. Import the files you just downloaded into SA:
    1. Log in to the SA Client.
    2. Navigate to Administration > Patch Settings > Windows Patch Utilities.
    3. Select the Windows Patch Utility.
    4. Select the Utility name in the table.
    5. Select the Import Utility Update button to open the Import Patch Utility file picker.
    6. Select one of the files (mbsacli.exe or wusscan.dll) that you downloaded from Microsoft.
    7. Click the Import button.
    8. Repeat steps 4 through 7 for the second file.

    These patch management files will be copied to all managed Windows servers during software registration.
    For more information on Windows Patch Management, see the SA User Guide: Application Automation.

Script Running Order

The pre-patch, database update and patch install scripts must be run in the following order:

Table 2 - SA Script Running Order - Upgrade

Upgrade from 7.8x (7.81, 7.82, etc.) to 7.85:
  1. patch_database.sh
  2. patch_opsware.sh
  3. patch_contents.sh

Upgrade from 7.80 to 7.86:
  1. prepatch.sh
  2. patch_database.sh
  3. patch_opsware.sh
  4. patch_contents.sh

Upgrade from 7.8x (7.81, 7.82, etc.) to 7.86:
  1. patch_database.sh
  2. patch_opsware.sh
  3. patch_contents.sh
Table 3 - SA Script Running Order - Rollback

Rollback from 7.86 to 7.80:
  1. patch_opsware.sh
  2. patch_database.sh

Note: When you upgrade from 7.8x to 7.86, you do not need to run the prepatch.sh script, since it should already have been applied during the upgrade to 7.8x.
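The ordering rules in Tables 2 and 3 can be encoded as a quick cross-check (illustrative Python only; the function is hypothetical and not part of SA):

```python
def patch_script_order(from_version, to_version):
    """Returns the script running order per Tables 2 and 3.

    from_version and to_version are strings such as "7.80", "7.81", "7.86".
    """
    base = ["patch_database.sh", "patch_opsware.sh", "patch_contents.sh"]
    if to_version == "7.80":
        # Rollback (Table 3): patch_opsware.sh first, then the database script.
        return ["patch_opsware.sh", "patch_database.sh"]
    if from_version == "7.80":
        # Upgrading from 7.80 requires prepatch.sh first.
        return ["prepatch.sh"] + base
    # 7.8x sources (7.81, 7.82, ...): prepatch.sh was already applied.
    return base
```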

Pre-Patch Procedure

You must complete the following pre-patch procedure before applying the SA 7.86 patch.

Managed Platform Update

You must install an SA update on 7.80 cores before installing the SA 7.86 patch. If you are upgrading from SA 7.81 or 7.82, this update will have already been installed. This update enables the SA Core to handle new supported managed platforms introduced in CORD patch releases by ensuring mesh compatibility between a First Core patched with SA 7.84 and unpatched Secondary Cores.
The update should be applied to each Slice Component bundle host in all secondary cores and only needs to be applied once during the lifetime of the SA 7.80 server. If for some reason you have not applied the update, the CORD installation will automatically install the update before installing the CORD release.

Note: This update cannot be rolled back.

To install the pre-patch update, run the following script: <distro>/opsware_installer/tools/prepatch.sh

If the patch has not previously been applied, the following is displayed:

Patching /opt/opsware/occclient/ngui.jar

If the patch has been previously applied, the following will be displayed:

/opt/opsware/occclient/ngui.jar checksum = <current MD5 checksum>

Patch not applicable

 

Database Schema Update Procedure

The script run during this procedure makes required changes to the Model Repository including adding required tables and objects. Perform the following tasks to install database updates:

  1. Mount the distribution. Invoke patch_database.sh on the Model Repository host:

    <distro>/opsware_installer/patch_database.sh --verbose -r <response file>

    Where <response file> is the response file last used to install/upgrade the system.

    Usage: patch_database.sh [--verbose] -r <response file>

    patch_database.sh automatically detects if a database update is already
    installed and presents a corresponding menu:

    1. If the database update has not been previously applied, you see the following:
      Welcome to the Opsware Installer.
      It appears that you do not have a database update
      installed on this system.
      Press 'i' to proceed with patch installation.
      Press 's' to show patch contents.
      Press 'q' to quit.
      Selection: i

      Enter i at the prompt to begin the database update.

    2. If the database update has previously been applied, you see the following:

      Welcome to the Opsware Installer.
      It appears that you have installed or attempted
      to install a previous version of the database
      update on this system.
      Press 'u' to upgrade the patch to the current version.
      Press 'r' to remove this patch.
      Press 's' to show patch contents.
      Press 'q' to quit.
      Selection: u
      You chose to upgrade the patch. Continue? [y/n]: y

      Enter u at the prompt then Y to begin the database update.

  2. After you make your selection, the installer completes the new (or interrupted) installation.
    On completion, you see a screen similar to the following:

    [timestamp] Done with component Opsware SQL patches.

    [timestamp] ########################################################

    [timestamp] Opsware Installer ran successfully.

    [timestamp] ########################################################

Note: After running the patch_database.sh script, you may see the following error when running the System Diagnostic test on your core:

Test Name: Model Repository Schema
Description: Verifies that the Data Access Engine's version of the schema matches
the Model Repository's version.
Component device: Data Access Engine (spin)
Test Results: The following tables differ between the Data Access Engine and the
Model Repository: local_data_centers, role_class_bridge.


This error is invalid and you can disregard it.

Patch Installation Procedure

Note: Before performing the tasks in this section ensure that you have completed the tasks listed in the following sections: MBSA 2.1.1 Supported for SA 7.84 and Later, Managed Platform Update, and Database Schema Update Procedure.

Perform the following tasks to install SA:

  1. Mount the SA distribution. Invoke patch_opsware.sh on every host in the
    core/satellite facility:

    <distro>/opsware_installer/patch_opsware.sh --verbose

    Usage: patch_opsware.sh [--verbose]

    patch_opsware.sh automatically detects whether or not there is a patch
    already installed and presents a corresponding menu:

    1. Non-upgraded System: If your system has not been upgraded, you see the following menu:
      Welcome to the Opsware Installer. It appears that
      you do not have any patches installed on this system.
      Press 'i' to proceed with patch installation.
      Press 's' to show patch contents.
      Press 'q' to quit.
      Selection: i

      Enter i at the prompt to begin the installation.

    2. Previously Upgraded System: If an SA patch has already been installed successfully, when patch_opsware.sh is invoked from a newer patch release, you see the following menu:
      Welcome to the Opsware Installer. It appears that you have
      installed or attempted to install a previous version of
      the patch on this system.
      Press 'u' to upgrade the patch to the current version.
      Press 'r' to remove this patch.

      Press 's' to show patch contents.
      Press 'q' to quit.
      Selection: u

      Enter u at the prompt to begin the upgrade.

  2. After you make your selection, the installer completes the new (or interrupted) installation.

    The installer displays the following upon completion:

    [<timestamp>] Done with component Opsware Patch.

    [<timestamp>]

    ########################################################

    [<timestamp>] Opsware Installer ran successfully.

    [<timestamp>]

    ########################################################

Software Repository Content Upgrade

This section details upgrades to the software repository content on the upload distribution (such as agent packages to be reconciled to managed servers).

General Information

If you are upgrading a core hosted on multiple servers, the Software Repository content patch must be applied to the server hosting the Software Repository Store (word store).

If you are upgrading a Multimaster Mesh, the Software Repository content upgrade should only be applied to the First Core (the upgraded content will automatically be propagated to other cores in the mesh).

 Note: Unlike core patches, Software Repository content upgrades cannot be rolled back.

Upgrading the First Core Content

  1. On the First Core Software Repository store (word store) host, invoke the upgrade script:

    <distro>/opsware_installer/patch_contents.sh --verbose -r <response file>

    where <response file> is the response file last used to install/upgrade the SA Core.

    The following menu is displayed:

    Welcome to the Opsware Installer. Please select the components
    to install.
    1 ( ) Software Repository - Content (install once per mesh)
    Enter a component number to toggle ('a' for all, 'n' for none).
    When ready, press 'c' to continue, or 'q' to quit.


    Enter either 1 or a and press c to begin the installation.

  2. If the Software Repository content image is not installed on the server, the following message will be displayed:

    [<timestamp>] There are no components to upgrade.
    [<timestamp>] Exiting Opsware Installer.

 

Rolling Back the Patch

To roll back SA 7.86 to SA 7.80, invoke the script:

<distro>/opsware_installer/patch_opsware.sh --verbose

If this is a patched system, the following will be displayed:

Welcome to the Opsware Installer. It appears that you have previously
completed installation of this patch on this system.
Press 'r' to remove this patch.
Press 's' to show patch contents.
Press 'q' to quit.
Selection:

Enter r at the prompt to remove the patch.

Note: Rolling back SA 7.86 does not:

Rolling Back the Database Schema Update

To roll back the database schema update, enter this command:

<distro>/opsware_installer/patch_database.sh --verbose -r <response file>

Where <response file> is the response file last used to install/upgrade the system.

If the database has been updated, you see the following:

Welcome to the Opsware Installer. It appears that you have previously
completed the installation of this database update on this system.
Press 'r' to remove this patch.
Press 's' to show patch contents.
Press 'q' to quit.
Selection: r

Enter r at the prompt to begin the database schema update rollback.

 

Post-Patch Installation Tasks

Completing the Update to the Waypurge Garbage Collection Procedure

When you ran the SA 7.86 patch_database.sh script, Garbage Collection was modified so that, during its next run, old child records are completely deleted from the SESSION_SERVICE_INSTANCES table to improve performance.

After you have upgraded to SA 7.86, perform the following tasks to delete any existing old child records from your SESSION_SERVICE_INSTANCES table, which reduces the size of the table.

Note: The following steps are optional, but HP highly recommends performing them, especially for large databases. If you do not perform them, the nightly Waypurge Garbage Collection job will run automatically and delete the old, unwanted records.

Changes Made by patch_database.sh

When you ran patch_database.sh, it updated the Waypurge garbage collection PL/SQL and added a new WAY_GC_SESSIONTREES_DELETE_MAX row to the lcrep.audit_params table.

To view the new row you can use the following SQL*Plus command:
SQLPLUS> col NAME format a30
SQLPLUS> col AUDIT_PARAM_ID format a15
SQLPLUS> col VALUE format a30
SQLPLUS> set line 100
SQLPLUS> select AUDIT_PARAM_ID, NAME, VALUE from audit_params;


Sample output:

AUDIT_PARAM_ID  NAME                            VALUE
--------------  ------------------------------  ------------------------------
            68  DAYS_WAY                        30
            69  DAYS_CHANGE_LOG                 180
            70  LAST_DATE_WAY                   20-FEB-10
            71  LAST_DATE_CHANGE_LOG            23-SEP-09
            72  DAYS_AUDIT_LOG                  180
            73  LAST_DATE_AUDIT_LOG             23-SEP-09
            74  WAY_GC_SESSIONTREES_DELETE_MAX  100   <-- new row

Steps to Complete the Waypurge Garbage Collection Update

The following steps must be performed on all Model Repository hosts after patch_database.sh has been run and the patch is installed.

  1. Verify how many records are expected to be deleted.

    SQLPLUS> SELECT count(session_id) FROM sessions
    WHERE (parent_session_id IS NULL OR
    parent_session_id IN (SELECT session_id FROM sessions WHERE
    parent_session_id IS NULL AND status = 'RECURRING')) AND
    status <> 'PENDING' AND status <> 'RECURRING' AND
    trunc(nvl(signoff_dt, nvl(end_dt,start_dt))) <
    (trunc(sysdate) - (SELECT value FROM audit_params WHERE name = 'DAYS_WAY'))
    AND NOT EXISTS (SELECT reconcile_session_id FROM device_role_classes
    WHERE reconcile_session_id IS NOT NULL AND
    reconcile_session_id = sessions.session_id);

  2. Run the WAYPURGE.GC_SESSIONS dba_job manually.

    sqlplus "/ as sysdba"
    SQLPLUS> grant create session to gcadmin;
    SQLPLUS> connect gcadmin/<password_for_gcadmin>
    SQLPLUS> col schema_user format a10
    SQLPLUS> col what format a50
    SQLPLUS> set line 200
    SQLPLUS> select job, schema_user, last_date, this_date, next_date, broken, what from user_jobs where what LIKE '%WAYPURGE%';

    Sample output:

    JOB SCHEMA_USE LAST_DATE THIS_DATE NEXT_DATE BRO WHAT
    ---------- ---------- --------------- --------------- --------------- -
    189 GCADMIN 14-APR-11 15-APR-11 N WAYPURGE.GC_SESSIONS;----> note job number

    SQLPLUS> exec dbms_job.run(189);

    Note the time taken by the manual job run and increase the value of WAY_GC_SESSIONTREES_DELETE_MAX accordingly. Increase the value gradually (for example, to 300, 500, 1000, or 3000) and monitor the time the job takes to run.

    sqlplus "/ as sysdba"
    SQLPLUS> grant create session to lcrep;
    SQLPLUS> connect lcrep/<password for lcrep>
    SQLPLUS> UPDATE audit_params SET value = 1000 WHERE name = 'WAY_GC_SESSIONTREES_DELETE_MAX';
    SQLPLUS> commit;

    Repeat the query in step 1 to monitor the number of records that remain to be cleaned up.

  3. The Waypurge job can be run manually, or the nightly dba_job can delete the child records. Note that the nightly GC DBA job runs only once a day, so it may take several days to delete all the child records. A combination of manual runs and the nightly job is recommended.


  4. After all child records are removed, delete the WAY_GC_SESSIONTREES_DELETE_MAX value from the AUDIT_PARAMS table.

    sqlplus "/ as sysdba"
    SQLPLUS> grant create session to lcrep;
    SQLPLUS> connect lcrep/<password for lcrep>
    SQLPLUS> DELETE FROM audit_params WHERE name = 'WAY_GC_SESSIONTREES_DELETE_MAX';
    SQLPLUS> Commit;
    SQLPLUS> select AUDIT_PARAM_ID, NAME, VALUE from audit_params;  -- verify that the row was removed

Windows Server 2008 R2 x64

SA 7.86 and later provides improved support for Windows Server 2008 R2 x64. Windows Server 2008 R2 x64 now appears with its own entries in the SA Client rather than as a subset of Windows Server 2008.

However, there are some tasks you must perform in order to migrate any Software Policies,
Application Configurations, packages (units), Patch Policies and/or OS Provisioning objects
you may already have set up for your server(s).

This section describes how to set up SA support for Windows Server 2008 R2 x64.

Migrating Software Policies, Application Configurations, and/or Patch Policies is handled by running a script, windows_2008_R2_fix_script.pyc, provided with SA 7.86 and later in the directory:

<distro>/opsware_installer/tools

The script is invoked as follows:

/opt/opsware/bin/python2 windows_2008_R2_fix_script.pyc [--mrl=<MRL_ID>|--listmrls|--help]

The script has the following options:

Table 4: Windows Server 2008 R2 Migration Script Options

--force_bs_hardware Force Windows Server 2008 R2 servers to perform hardware registration. You must also specify at least one of the following options: --swPolicy, --patchPolicy, --appConfig, --osProv, or --all.
--all, -a Process Application Configurations, Software Policies, Patch Policies, and OS Provisioning objects.
--appConfig Enable processing of Application Configurations.

--swPolicy Enable processing of Software Policies.
--patchPolicy Enable processing of Patch Policies.
--unit Enable processing of Units (Packages). Works only if the --swPolicy option is also specified.
--osProv Enable processing of OS Provisioning MRLs (not including WIM-based), and link existing Installation Profiles and OS Sequences to the new MRL with updated platform.
--smbPassword=<SMB Password> Specify a Windows Share (SMB) password (if not provided, the script will prompt for it).
--wim=<MRL ID> Force processing of the specified WIM-based MRL ID. Works only when the --osProv option is also specified. Can be used multiple times. Note: If a server was provisioned using a WIM, you must run the script with the --osProv --wim options to avoid data integrity errors.
--username=<SA username> SA username (if not provided, the script will prompt for it).
--password=<SA password> SA password (if not provided, the script will prompt for it).
--debug, -d Print more information.
--help, -h Display this help and exit.

Note: Migrated objects other than Patch Policies are not copied; they are attached to the new Windows Server 2008 R2 x64 configuration.

For Patch Policies, the script creates Windows Server 2008 R2 x64 copies of the Windows Server 2008 x64 Patch Policies that contain R2 patches (from the x64 patch library).

The Windows Server 2008 x64 Patch Policies are then detached from the Windows Server 2008 R2 x64 servers and the equivalent Windows Server 2008 R2 x64 Patch Policy copies are attached to the Windows Server 2008 R2 x64 servers.

Note: You can run windows_2008_R2_fix_script.pyc multiple times without issue. The changes made by the script cannot be rolled back.

Requirements

 

Software Policies

After migration completes, the Software Policy appears in the SA Client Navigation pane under Library/By Type/Software Policies/Windows/Windows Server 2008 R2 x64 and Windows Server 2008 x64.

During migration, Software Policies are modified only if:

When processing policy items the script looks for the following types of objects:

If the script finds a policy item that has Windows Server 2008 x64 in its platform list, it migrates that policy item to Windows Server 2008 R2 x64. However, the item's type must be included in the list of types that the script processes. For example, if the script is run with --swPolicy and --unit, it will process Software Policies and packages included as policy items. The script will not process any application configurations (even if they are included in the policy items of a Software Policy that will be migrated).

Similarly, if the script is run only with the --swPolicy option, it will only process Software Policies and any policy items that are Software Policies, but policy items that are packages or application configurations will not be processed.

The order of items in the Software Policy is retained and remediation status remains unchanged.

If the script identifies an existing Software Policy as a Windows Server 2008 R2 x64 policy, it does not modify it during processing.

 

Packages

To migrate packages, the script must be run with --swPolicy and --unit, or with the --all option. The script migrates only packages that have Windows Server 2008 x64 in their platform list and are included as policy items inside a Software Policy that is migrated by the script.

After migration, the package will appear in the SA Client under both the Windows Server 2008 x64 and Windows Server 2008 R2 x64 folders in
Library/By Type/Packages/Windows.

The script does not take into account the package type. It looks for packages included in migrated Software Policies that are attached to Windows Server 2008 x64. Server Module Result objects, Windows Registry objects and Windows Services objects cannot be migrated by the script because their platform associations cannot be changed.

Properties settings (including general, archived scripts, install parameters, install scripts, uninstall parameters, uninstall scripts) are preserved.

Application Configurations

To migrate application configurations, the script must be run with the --appConfig or --all option.

The migration script migrates an application configuration if:

During migration the script adds Windows Server 2008 R2 x64 to the application configuration's platform list. The script also inspects all application configurations' associated templates (CML templates) and if a template has Windows Server 2008 x64 in the platform list it is also migrated.

The Compliant/Non Compliant/Scan Failed compliance status is changed to Scan Needed after running the script.

There is no undo option.

Patch Policies

To migrate SA Patch Policies, patch metadata, and patch exceptions, the script must be run with the --patchPolicy or the --all option.

During migration, the script appends R2 to the Patch Policy name. For example, for a patch policy named 2008 XYZ Policy, the migration script creates a new Windows Server 2008 R2 x64 policy named 2008 XYZ Policy R2 if:

Note: If a Windows Server 2008 R2 x64 policy named 2008 XYZ Policy R2 already exists, the applicable patches will be added to it.

If Windows Server 2008 R2 x64 servers, or device groups containing Windows Server 2008 R2 x64 servers, are attached to Windows Server 2008 x64 patch policies, the migration script will detach these policies and attach the newly created or updated equivalent Windows Server 2008 R2 x64 policies. Applicable Patch Policy exceptions are also migrated.

If metadata associated with Windows Server 2008 R2 x64 patches has been modified (for example: install/uninstall flags, pre/post install/uninstall scripts), that metadata will be migrated.
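The naming and copy-or-update behavior just described can be sketched as follows (illustrative Python; both helpers are hypothetical, not part of the migration script):

```python
def r2_policy_name(policy_name):
    """The migration script appends ' R2' to the source policy name."""
    return policy_name + " R2"

def migrate_patch_policy(policy_name, existing_r2_names):
    """Create a new R2 policy copy, or add patches to an existing one."""
    new_name = r2_policy_name(policy_name)
    # If an R2 copy with this name already exists, patches are added to it.
    action = "update" if new_name in existing_r2_names else "create"
    return new_name, action
```

For a policy named 2008 XYZ Policy, this yields the copy 2008 XYZ Policy R2 described above.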

OS Provisioning

To migrate OS Provisioning MRLs, OS Sequences and Installation Profiles, the script must be run with the --osProv or the --all option. During migration, the script runs import_media for all detected Windows Server 2008 x64 MRLs. When it detects Windows Server 2008 R2 x64, it deletes the old MRL and creates a new one with the same configuration (same MRL ID) but with the Windows Server 2008 R2 x64 platform associated.

Since import_media cannot detect a WIM image's platform, it cannot automatically migrate MRLs that point to this type of media. However, you can force the migration of a specific MRL that contains a WIM image by providing its MRL ID using the --wim=<MRL ID> option. This option can be used multiple times so multiple WIM MRLs can be migrated. If the script finds an MRL that points to a WIM image and its MRL ID was not specified using the --wim=<MRL ID> option, it will display a warning message and skip processing of that MRL.

If an MRL that points to a WIM image was previously used to provision a Windows Server 2008 R2 x64 machine, the script must be run with --wim=<MRL ID> to avoid data integrity errors. Supply the IDs of all WIM MRLs that were already used for provisioning by specifying the --wim option multiple times.
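The WIM-handling rules above amount to a small decision, sketched here in illustrative Python (the helper and its arguments are hypothetical):

```python
def mrl_action(is_wim, mrl_id, wim_ids):
    """Decide how the migration script treats an MRL.

    Non-WIM MRLs are migrated automatically via import_media; WIM-based
    MRLs are migrated only when their ID was passed with --wim=<MRL ID>,
    otherwise the script warns and skips them.
    """
    if not is_wim:
        return "migrate"
    return "migrate" if mrl_id in wim_ids else "warn-and-skip"
```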

If you use dynamic MRLs (the MRL path contains script weaver tokens such as @mediaserver@), the script cannot migrate them because they cannot be mounted. You can temporarily specify a full URL in the MRL using the SA Client before running the migration script, then restore the dynamic MRL specification after migration if needed.

Existing Installation Profiles attached to the old (migrated) MRL are linked with the newly created MRL and the platform is changed to Windows Server 2008 R2 x64.

Any OS Sequences linked to a migrated Installation Profile will also be updated to show the Windows Server 2008 R2 x64 platform.

Software Policies attached to a migrated OS Sequence are not modified. However, if a migrated OS Sequence is used to provision a server, that server becomes identified as a Windows Server 2008 R2 x64 server and the Software Policies attached to the migrated OS Sequence are attached to the newly created Windows Server 2008 R2 x64 server. When you run the migration script again, these software policies are then migrated.

The populate-opsware-update-library Script

A new option, --no_w2k8r2, is provided for the populate-opsware-update-library script; it specifies that Windows Server 2008 R2 x64 patch binaries should not be uploaded. For more information about the populate-opsware-update-library script, see the SA User Guide: Application Automation.

Windows Server CLI Installation

If you plan to install the SA Command-line Interface (OCLI) on a Windows Server after upgrade to SA 7.86, you must update the Agent on that server to the latest version. Errors occur during OCLI installation on Windows servers with earlier Agent versions.

Back to the Table of Contents


Known Issues

This section describes known issues for SA, Storage Visibility and Automation (Storage), and SE Connector (SE) for this release. The table lists issues first by subsystem, then numerically within each subsystem. All issues are for SA unless otherwise designated as Storage or SE Connector.

Table 5: Known Issues

 

QCCRID Symptom/Description Platform Workaround

Agent

120597 Synchronization fails if the live or update directories have Unicode names.

All platforms where Unicode is used

Unicode directory names are not supported for the root-level directory assigned to a synchronization service. Use ASCII characters only for this folder/directory name.
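Because only ASCII names are supported for the synchronization root, a quick pre-check of a candidate directory name can avoid the failure. This is an illustrative sketch, not part of SA; the function name and sample names are ours:

```shell
# is_ascii: succeed (exit 0) only if the argument contains printable
# ASCII characters exclusively. Sample names below are illustrative.
is_ascii() {
    # With LC_ALL=C, the class [^ -~] matches any byte outside the
    # printable ASCII range (0x20-0x7E); grep -q exits 0 on a match.
    ! printf '%s' "$1" | LC_ALL=C grep -q '[^ -~]'
}

is_ascii "sync_updates" && echo "sync_updates: acceptable for a sync root"
is_ascii "übung"        || echo "übung: rejected (non-ASCII)"
```

Any directory that fails this check should be renamed before it is assigned to a synchronization service.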

Application Configuration

119419 Pushing application configurations to a large number of servers can take a long time. This is a request to make application configuration pushes faster by disabling the ability to restore previous application configurations. Independent To speed up application configuration pushes, disable the ability to restore saved application configurations. See Faster Application Configuration Pushes.

Core

120158

During patch_contents installation following a core recertification, the process may fail with this error:

Verifying OCC available: FAILURE (Certificate file /var/opt/opsware/crypto/word_uploads/wordbot.srv does not exist)

Linux,
Solaris

Use the following procedure to update the crypto files for word_uploads, then restart the patch_contents script:

cp /var/opt/opsware/crypto/spin/admin-ca.crt /var/opt/opsware/crypto/word_uploads/
cp /var/opt/opsware/crypto/spin/opsware-ca.crt /var/opt/opsware/crypto/word_uploads/
cp /var/opt/opsware/crypto/spin/agent-ca.crt /var/opt/opsware/crypto/word_uploads/
cp /var/opt/opsware/crypto/wordbot/wordbot.srv /var/opt/opsware/crypto/word_uploads/


Database Scanner for Oracle (Storage)

91143 The status of an Automatic Storage Management (ASM) disk group shown in the Properties view differs from the status shown in the Database Configuration Assistant (DBCA) view. In the Properties view, the status is CONNECTED; in the DBCA, the status is MOUNTED. By definition, the status of an ASM disk group is relative to the database instance, so what is reported in the Properties view matches the status for one database instance only.
Independent
None.
93690 The content pane for Relationships (SAN Switches and SAN Fabrics) on a virtual server is empty ("No items found"). The Server > Relationships > SAN Switches panel displays only SAN switches to which the given server is directly connected. In some cases, a server may depend on SAN switches that are not displayed in this panel. For example, a virtual server may be using storage allocated from a hypervisor that was itself allocated storage from a SAN.
Independent None
Bug ID: 156909 /QCCR1D 68263
The tablespace free space view does not match the Oracle Enterprise Manager (OEM) view.

Independent

None

Note: There is an OEM bug that causes some tablespaces to show an incorrect used size. The Database Scanner for Oracle gets the tablespace used size directly from all of its data files, which avoids the OEM bug.

123008

To monitor an Oracle 11g database with the SA Oracle Database Scanner, the XML DB and DBMS_NETWORK_ACL_ADMIN package must exist in the database. The SA Oracle Database Scanner needs access to these objects in order to grant privileges and access for itself. If the objects do not exist, the "pamuserprivilege.sql" script will fail and the Database Scanner cannot be run. An application may or may not install these objects in its Oracle 11g database.

The following error might be displayed under these circumstances:

"PLS-00905: object SYS.DBMS_NETWORK_ACL_ADMIN is invalid".

Independent

Before executing the SA Database Scanner script "pamuserprivilege.sql" in the Oracle database, first perform the following steps to install the XML DB and DBMS_NETWORK_ACL_ADMIN package in the Oracle 11g database.

1) cd $ORACLE_HOME/rdbms/admin
2) sqlplus /nolog
3) SQL> connect <sys_user>/<password> as sysdba
4) SQL> spool install_xml.log
5) SQL> @catqm xdb sysaux temp NO
6) SQL> @dbmsnacl.sql
7) SQL> spool off;

OS Provisioning
129493 The WinPE image that is shipped does not contain vmxnet3 drivers. Windows None
129581

Reprovisioning for Red Hat Enterprise Linux 6 with the ext4 file system is not yet supported.

Linux None

Patch Management - Solaris

114146/QCCR1D 114153

The solpatch_import --filter option does not display recommended and/or security patches if they had previously been marked obsolete. This became an issue on June 4, 2010, when Oracle changed the criteria for recommended and security patches (described here: http://blogs.sun.com/patch/entry/merging_the_solaris_recommended_and).

Users with an existing metadata database (solpatchdb) must delete the solpatchdb.zip, solpatchdb-old.zip and solpatchdb_supplement.zip files and run create_db to have support for recommended obsolete patches.

Solaris

You must recreate the Solaris patch metadata database (solpatchdb) if the following are true:

  • You use the solpatch_import --filter option.
  • You have run solpatch_import --update_db on June 4, 2010 or later.

After you have installed the patch, perform these tasks to recreate the metadata database (solpatchdb):

  1. Log in to the SA Client.
  2. Select Library in the Navigation pane.
  3. Select By Folder.
  4. Navigate to /Opsware/Tools/Solaris Patching.
  5. Delete the following files:

    solpatchdb.zip
    solpatchdb-old.zip
    solpatchdb_supplement.zip

  6. Follow the steps to create a new metadata database (solpatchdb) as described in the SA User Guide: Application Automation, Patch Management for Solaris.

SA Installer

113995

After rolling back the SA 7.83 patch, the contents of /etc/opt/opsware/dhcpd/dhcpd_subnets.conf may not reflect the latest modifications done with dhcpdtool while SA 7.83 was installed.


Independent

Note: This workaround will not work if SA 7.83 has already been rolled back.

Before rolling back SA 7.83, perform the following tasks:

  1. Remove /etc/opt/opsware/dhcpd/dhcpd_subnets.conf.CORD_BACKUP.
  2. Issue the following command to replace range dynamic-bootp with range in /etc/opt/opsware/dhcpd/dhcpd_subnets.conf:
    perl -pi -e 's/range dynamic-bootp/range/g' /etc/opt/opsware/dhcpd/dhcpd_subnets.conf

Search (Storage)

Bug ID: 155094 /QCCR1D 66448 If the user profile setting on the SAS Web Client is UTC, all discovered dates will display as expected. If the user profile setting is set to a timezone other than UTC, some discovery dates for SAN arrays, NAS filers, and switches will not display as expected, although they are technically correct. Independent Set the user profile to UTC.

SE Connector

88755 No Target and Target Volume information is displayed for a LUN. Target and Target Volume display "-" for a LUN in the storage volume access path view.
Independent None
91582 When you perform a provisioning operation for an HP EVA array (such as creating, deleting, or modifying a volume or pool), the changes for the volume or pool might not be immediately available in SA after running the "Update from Storage Essentials" process.
Independent After 30 minutes have elapsed, run the "Update from Storage Essentials" process again. See the Storage Essentials SRM Software User Guide for information about provisioning EVA arrays.
103996

When a managed server on which SE Connector is running is directly deactivated and deleted, stale Storage Scanner entries display in the Storage Scanner panel. The stale entry count increases depending on how many times the managed server is deactivated and deleted from the SA core.

OSs supported for SA core
Manually delete the inactive Storage Scanner entries from the Storage Scanner panel by using the Remove menu option for each entry.
105953

An EMC Symmetrix array that is discovered through SE Connector can report
more than one storage volume with the same LUN number presented to a managed server.

Running the storage snapshot specification on the managed server will succeed; however, the Inventory > Storage > File Systems and Inventory > Storage > Managed Software panels will be empty.

Also, some host volumes with a LUN service type will not be displayed in the Storage > Inventory > Volumes panel. For the EMC storage array in the Relationships > Storage Initiators panel for this managed server, there will be more than one volume that has the same LUN number.

Independent None

Storage Host Agent Extension (SA and Storage)

113782

Server Automation:

A host operating system may report a stale LUN as having a Root service type because the system could not detect storage changes.

Independent After a system reboot, the host OS detects the configuration changes correctly.
93630

On Windows servers that have EMC PowerPath installed as the multipathing software, the SCSI bus number provided by PowerPath (using the powermt command) does not match the bus number of the disks (LUNs). In these cases, LUNs are displayed as ROOT alongside LUNs that are correctly displayed in the Inventory > Storage > Volumes panel.

Windows None
104960 On some Windows servers, after you install multi-pathing software, the server Disk Management panel displays the software disks as foreign disks. In addition, volumes are not displayed in the Disk Management panel, but they are displayed in the Inventory > Storage > Volumes panel when you run a storage snapshot specification.
Windows

Log on to the Windows server. Open the Disk Management panel and import
the disks that are categorized as Foreign. Run the storage snapshot specification from the SA Web client for this server.

For more information on importing disks on Windows servers, see the relevant Microsoft documentation at: http://www.microsoft.com.

105382 On Windows 2008, if the disk information is changed, such as presenting new LUNs or removing existing LUNs, running the storage snapshot specification results in incorrect capacity values shown in the Inventory > Storage > Disk panel. This occurs if there is a mismatch in the disk names, as reported by the hardware registration script and the storage snapshot specification.
Windows 2008 To resolve this issue, after changing disk information (such as installing or
uninstalling multipathing software, presenting new LUNs, deleting LUNs, and so on) on the Windows managed server, the server must be rebooted. Run the hardware registration before running the storage snapshot specification.
105953 An EMC Symmetrix array that is discovered through SE Connector can report
more than one storage volume with the same LUN number presented to a managed server. Running the storage snapshot specification on the managed server will succeed; however, the
Inventory > Storage > File Systems and Inventory > Storage > Managed Software panels will be empty.
Independent None
106699 On managed servers with mirrored volumes, if one of the disks that is part of a mirrored volume fails or is removed, the state of the volume is Failed Redundancy in the Disk Management panel on the Windows server. However, in the Inventory > Storage > Volumes panel for the managed server, the status of this volume is OK, even after running storage snapshot specifications.
Windows 2008 None
111724 The Storage Host Agent Extension does not support virtual servers that have
VMDK created on NFS datastore.
All VMware servers None
112902 On Linux Power PC servers, host bus adapters (HBAs) are not listed in the
Inventory > Hardware panel because the required RPM Package Managers (RPMs) are missing.
Linux on Power PC Install the following RPMs on the Power PC host.
libnl-1.0-0.10.pre5.5.ppc64.rpm
libdfc-64bit-3.0.17-1.ppc64.rpm
QCCRID Symptom/Description Platform Workaround
113035 The host bus adapter (HBA) port number is always zero (0) in the Relationships panel.
Independent None
Bug ID: 149406 /QCCR1D 60760 Solaris LVM RAID on Soft Partition on slices stops responding.
This configuration produces a defective storage supply chain.
Independent None
Bug ID: 149707 /QCCR1D 61061 The Storage Host Agent Extension reports two single-port cards when a single dual-port card is present. Some vendors may model dual-port cards as two single-port cards. This is the information that ASAS reports on: output that shows a single dual-port card with a single serial number, where each adapter has its own unique node WWN.
Independent None
Bug ID: 151921 /QCCR1D 63275

When you add a mirror to a concatenated or striped volume, the volume display labels both as "Mirrored" and does not distinguish between concatenated and striped in the label. Note that "Mirrored Concatenated" and "Mirror Striped" are distinct in the volume manager on the host, such as Veritas Volume Manager. The type of the volume manager might not match the native tool, such as Veritas Volume Manager, because the STORAGE_TYPE value is the immediate node in the supply graph, which is the storage type of the most descendant volume.

Independent

None

 

Bug ID: 152016 /QCCR1D 63370 The STORAGE_DRIVE value is incorrectly formatted for SunOS 5.10 disks. The different format causes a broken storage supply chain on affected servers. Unix If the version number in the /etc/format.dat file on the server is
less than 1.28, update the file.
Bug ID: 152942 /QCCR1D 64296

On a Windows 2003 server with the SNIA library from QLogic, Fibre Channel Adapter and storage volume information might not be discovered by the Storage Host Agent Extension, causing fibreproxy.exe to stop responding.

Windows

For Windows Server 2003 and Microsoft Windows 2000 operating systems, use the native Microsoft SNIA library instead of the SNIA library that is provided by the QLogic driver. Download the Fibre Channel Information Tool to add Microsoft HBAAPI support to the operating system. For Windows 2003 SP1 or later, Microsoft HBAAPI support is built in. If the SNIA's version of hbaapi.dll is installed on the operating system, remove it.
Bug ID: 154418 /QCCR1D 65772
The Unix QLogic snapshot is missing FC adapter information in the Hardware view, and composition and connectivity information for any SAN in the Volumes pane.
Unix Install patches 108434 and 108435 on Solaris 8 SPARC servers. The Storage Host Agent Extension on Solaris 8 SPARC requires these patches.
Note: There is no known workaround for Red Hat 3 or Red Hat 4 servers using QLogic controllers.
Bug ID: 154971 /QCCR1D 66325 Veritas Storage Foundation 4.3 with QLogic 9.1.4.15 results in invalid fibre
proxy SCSI addresses.
Independent None
Bug ID: 155476 /QCCR1D 66830

The file system is not shown on the server storage file system panel when the partition and format on the Windows server is mounted to an empty NTFS folder.


Windows

None

Note: The Storage Host Agent Extension does not report file systems that have non-drive letter mount points. The Storage Host Agent Extension does not report file systems that have multiple mount points.

Bug ID: 157044 /QCCR1D 68398 Fibreproxy is broken on Windows 2000 SP4 server with a QLA2310 HBA and
vendor driver version 9.1.4.10. A storage inventory snapshot does not gather and supply complete data, including storage volume and FCA information.
Windows None
Bug ID: 157579 /QCCR1D 68933 When you take a Storage Host Agent Extension snapshot by running fibreproxy on a Windows server where an Emulex LP850, LP952, LP9002, or LP9402 is installed, three FibreChannelTargetMappings are returned, two of which are duplicates. This symptom does not occur with Emulex driver 1.30a9.
Windows None
Bug ID: 158923 /QCCR1D 70277 If you run the chpath command as shown below to take a Storage Host Agent Extension snapshot for each available path to the device, all the MPIO paths to a logical device become disabled. In this state, the system calls used by the diskproxy and mpioproxy will stop responding.
chpath -l hdisk2 -p fscsi0 -s disable xx
AIX None
Bug ID: 159156 /QCCR1D 70510

After you remove a LUN mapping, the old LUN mapping information still
displays in the SAN array volume view and in the server storage volume view. An additional access path is displayed in the SAN array volume view (Access Path subview) for the volume for which LUN mapping was removed. The access path that shows no initiator device and/or initiator port information is the correct one.


Independent Take a snapshot of the server to which the volume was mapped or partitioned.
Bug ID: 159580 /QCCR1D 70934 SAV displays incorrect information after adding a zone to a fabric. A fabric zone defined on a card WWN does not correlate to the server, but a zone defined on a port WWN correlates correctly. The zone is not associated with the correct server/port/WWN.
Independent None
Bug ID: 164951 /QCCR1D 76305 Multipath information is not reported correctly in the SA Client for a server that has the HP-UX 11i v2 OS installed and Veritas DMP managing the multipathing. The SNIA library does not support HBA_GetFcpTargetMappingsV2r.
Independent None
Bug ID: 168716 /QCCR1D 80070 On servers running AIX 5.2 with PCI-X Fibre Channel Adapters, the supply chain
does not display after taking an inventory snapshot.
Independent None
Bug ID: 167103 /QCCR1D 78457 If you perform a core upgrade to SA 7.50 and ASAS 7.50 and then run the customer extension to upgrade a Storage Host Agent Extension on the host, the host disappears from the INTERFACE table and the host's STORAGE_DRIVE does not appear in the STORAGE_COMPONENT table.
Independent It may take one to two hours for the host and drives to repopulate their tables.
Verify that the host is present in the INTERFACE table and that the STORAGE_DRIVE element is
present in the STORAGE_COMPONENT table.
Bug ID:168889 /QCCR1D 80243 After a Storage Host Agent Extension snapshot is run, logical volume devices
appear to be still under Veritas DMP control, even after disabling volumes in Veritas DMP.
Independent When constructing LVM modules on the HP-UX 11.31 platform, use agile DSF devices. There is no workaround for other platforms.

Software Repository

114135 When you import the Software Repository (found in /var/opt/opsware/word/mmword_local/packages/any/nt/$OS_VER$) to a non-local disk, the import takes longer than expected to complete for all mounted binaries. All Core Platforms None.

Virtualization

114273 A Hyper-V VM running the Windows Server 2008 R2 x64 OS displays in the Virtual Servers view as a hypervisor. Windows None.

Volume Manager (Storage)

119932

In the APX to configure LVM, creating multilevel RAID volumes (RAID 10) makes the complete wizard unusable, displaying the error: "Error: Linux_LVM.vg3./dev/vg3/mrlv02"

Linux None
120128

In the APX to configure LVM, if a firewall is enabled on the managed server, none of the options operate, and errors are displayed in the status fields.

Linux

Disable the firewall and click Refresh.

120307

In the APX to configure LVM, if the user does not have the required permissions, the embedded browser does not display a proper message in the pop-up.

Linux None

 

Back to the Table of Contents


 

Fixed Issues for This Release

This section describes fixed issues for this release.
The table lists issues first by subsystem, then numerically within each subsystem.

Table 6: Fixed Issues for This Release

QCCRID Symptom/Description Platform

Agent

120879 The agent crypto directory should not be readable by all users. Windows
121501 Support was added for new Solaris agents that do not use PAM. UNIX/Solaris
129669

Running scripts returns exit code 128.

Windows

Agent Deployment Tool (ADT)

129400

Agent installation fails for the Japanese (JP) locale when you install as a non-root user.

Core Supported Platforms (UNIX)

OS Provisioning (Backend)

119962 Red Hat Enterprise Linux Server 6 reprovisioning fails with the following error:
"That directory does not seem to contain a Red Hat Enterprise Linux Server installation tree"
Red Hat Enterprise
Linux Server 6 
122170 VMware ESX 3.x provisioning breaks and displays an error in the buildmgr log (the "ks_mandatory" parameter is missing). VMware ESX
122505 Cannot provision Windows on HP hardware, although Linux provisioning works on the same machine. Windows

Server Module (SMO) User Interface

114782 Discovered software reports display server owners twice. For example, if a server owner has changed, the display shows the server as belonging to both the present and previous owner. Independent
121526 Red Hat Network import fails when it encounters a permissions error for a channel. Linux

Back to the Table of Contents


Fixed Issues for 7.85

This section describes fixed issues for SA 7.85.
The table lists issues first by subsystem, then numerically within each subsystem.

Table 7: Fixed Issues for 7.85

QCCRID Symptom/Description Platform

AAA

115019 Enhancement: A Manage Customers feature is needed to reassign a server from one customer to another.
Without this feature, the following error is displayed in the Java client:
com.opsware.fido.AuthorizationDeniedException: Access is denied to performing the
operation DefaultOperations.writeCustomer against the object(s) [{type=customer,id=10001}].
Core Platforms
116479 Enhancement: Support requested for  redundant LDAP servers configured in SA for external authentication. Core Platforms

Agent

110366

SUSE agent scripts are missing Required-Stop fields, so the remediation jobs display errors.

Linux
115903 If you try to install an agent after OS Provisioning, you receive a TIME_WAIT socket error connected to port 2049 (NFS). UNIX/Linux
116158 The agent does not collect manufacturer information on Xen 5.6. XEN 5.6 guest VM (Linux)
117357 Agents may fail to install on some Linux variants when the environment variables LANG or LC_ALL are unset. Linux
117372 Manually installing the Opsware agent with the -t option on a XEN server Linux guest or a paravirtualized OVM Linux guest results in a console error. Linux
119311 Enhancement: Support for HPSA clients under Citrix. Citrix Xen Server
119370 Enhancement: Enable the Opsware agent to properly manage a RHEL 6 i386 machine. RHEL 6 i386
119422 All CDS (Code Deployment System) synchronizations fail due to Unicode characters in the filename and/or directory.

Only Agents running Python 2.4 can take advantage of Unicode characters; therefore, where CDR is used on Python 1.5 Agents, ASCII is the only option.

All platforms where Unicode is used

Agent Deployment

108469 Enhancement: The ability to deploy 1000 agents in one hour is needed for disaster recovery scenarios. The current, non-serial method takes approximately 3 minutes per agent. Independent
118170 Agent installation to a WAN via an SA push hangs.
Installing the SA client on a single machine works fine. When you install the SA client on multiple machines, either selected by IP or provided in a list of server names, the installation hangs and is cancelled.
Windows

Agent Installer

101066 Hardware registration times out during the agent install, so the agent cannot register itself with the core. Unix
110462 Enhancement: Recompile the agent installers with large file support so that the agents install successfully on servers with more than 2 terabytes of disk space. UNIX
114546 A newly installed agent that is initially unable to connect to the core and start itself should become dormant until it is able to connect. Instead, the agent fails to connect to the core and start. Windows

APX

111653 Issue with the session management provided by the embedded Apache/PHP/JREX browser combination. Linux
111795 An imported APX is not visible when you choose Run Extension from the right-click menu.
Linux and Solaris
114974 When specifying multiple resource types for a given feature in APX.perm, the APX permission does not get created correctly when the APX is imported. Core Platforms
116868 The case-sensitive comparison HTTP header field Content-Length causes Mozilla-based web browsers to hang. Core Platforms
117618 The APX session timeout specified in /etc/opt/opsware/apxproxy/apxproxy.conf is incorrectly implemented. Core Platforms

Audit and Compliance

112684 Users within the same group (which has audit permissions) cannot change audit-task schedule times if the task was created by another user in the group. Linux
116095 If an audit fails in the compliance phase, temporary snapshot directories are not deleted from the stempcache directory. Independent
116203 Checks in policies imported from HPLN do not show the policy reference in the check name when viewed in the audit policy.  Independent
116513 Audit results are not available for inclusion in a compliance report if there is at least one rule whose operator is set to:

Does not match Regular Expression (ignore case)

An ORACLE error appears in the Web Services Data Access Engine log files.
Independent
119222 Software remediations hang and can only be restored if the Command Engine is restarted. Linux, Solaris

Command Center

116712 The server history in the UI shows a list of jobs, but does not show all server events seen on the Web page.  For example, SA does not log moving a managed server to a new customer. Independent

Command Engine

116768 Supportability issue: Sessions stuck in the "Active" state do not report the underlying cause. The Way log should show the session_id in the error message. Independent
118524 CX reports failed for all servers even if only a few servers actually failed. Linux; Solaris; VMware; All Agent Platforms

Core

116143 At least two python C modules are not compiled for reentrancy. Solaris
116144 Error handling in iconvmodule.c causes core dumps to be corrupted. Solaris

Data Access Engine and Web Data Access Engine

100264 During LDAP integration, the twistOverrides.conf file does not get overwritten with the encrypted password, and displays the following kind of error:
server.log.2:2009-09-15 13:59:05,872 WARNING Thread-12 [com.opsware.twist.utils.Configurator] [<init>] Twist Error:  failed to overwrite twistOverrides.conf with /tmp/twistOverrides8708072345993369552.conf.tmp
Linux
113874 Dropped exception occurs while loading SSL certificates for LDAP integration. Independent
114689 Incorrect custom attribute inheritance occurs if a Windows host inherits a custom attribute from a Linux software policy, or the other way around.   Independent
116729 Installation fails while attempting to install a Satellite on a Linux system when any Slice systems are running SunOS 5.10. Solaris
117296 Under certain circumstances the automatic communication test fails with a Connection Reset by Peer error. Independent
119165 Various HPSA activities (such as audits) fail when intensive Data Access Engine operations are performed. Independent
119390 Unable to reliably submit new jobs, and received notifications of a RemoteException, followed by a CacheFullException.  Independent
120340 Private device groups associated with a deleted user ID are still refreshed. Independent

Device Groups

118105 Read access is denied for values in dynamic rules. Either values have been deleted or appropriate permissions were not set. Independent
119919 When you add a rule to a Dynamic Device Group in the Device Membership page, and the customer name includes a comma (for example: CustomerA,INC), the name is split into two (for example: CustomerA and INC.) Windows

DSE

82104 When running a server script against a device group, the following message appears in the log: SEVERE ### UNKNOWN OS TYPE IN addScriptRefToPreferences. Independent

Global File System/ Shell Backend

112971 sshd fails to start on Japanese language core. Linux
116993 Editing files with filenames longer than 1024 characters in an OGFS session causes a kernel panic and failure. Linux

Installer

114377 The following type of warning message appears when you run the CORD patch_contents installer: "Platform XXXXX already exists with a different os_version" Core Platforms

Jobs and Sessions

112246 The "Custom Range" filter of a managed server does not work correctly (omits date) in the "Jobs and Sessions" page. In the "History" page, the filter does not work at all. Independent

Model Repository

117543 Cannot select Windows 2008 R2 as VMware Guest OS if you create a virtual machine on ESX Server and provision Windows 2008 R2. Windows 2008
118695 The following error is displayed when running compliance tests on all servers:
Error Details:
A Persistence layer error occurred while trying to write an object.
Details: ORA-02292: integrity constraint (TRUTH.COMP_SUMM_COMP_DETAIL_FK) violated - child record found
Linux, Solaris

OS Provisioning

55969 OS Sequence Remediation: Changes to the script timeout were not saved after reopening the sequence.
Unable to save changes to an OS Sequence when editing the script timeout and selecting the Close ("X") window control.
Independent
115535 Unable to set options for a software remediate job that was spawned through OS Provisioning. Independent
116490 Details for a selected subnet option do not display correctly. Core Platforms
116656 Enhancement: Allow a text box in the UI for flexible output size. Script output to export is limited to 10K for pre/post-remediate scripts attached to an OS Sequence. Independent
116659 When attempting to upload a new kickstart file, the upload dialog box stays in the Initializing state even though the file is uploaded. Independent
117872/117881 If the driver is not present, a WinPE image used for Windows provisioning fails to recognize the NIC card. Windows
118810 Enhancement: New ability to use -opts options to specify NFS options in the ks.cfg profile. Linux
119781 OS Provisioning configuration APX displays a "file or directory not found" exception when there is an ignite "boot" directory but no ignite binary.  HP-UX

Patch Management

110198 The message in the Install Patch window with a job schedule is truncated. Windows
114773 SA fails to recognize an installed Windows patch and displays the following error:
This software install was attempted and appeared successful, but after verification, HP Server Automation determined that it was not actually installed.
Windows
115490 During remediation, SA should report the correct number of patches that are out of compliance. HP-UX
118053 Even though patches are installed successfully on managed servers, compliance scans fail. Windows
119334 An error occurs when the metadata script imports the swa_catalog (new) file. HP-UX
119776 A compliance calculation scan error occurs when the repository has a deleted catalog file as well as an active catalog file. The Command Engine cannot connect to the web services data engine. HP-UX

Search

111048 Malformed/incomplete query error is displayed if you run two advanced searches concurrently, and you open the second search from within the first using File > New Window. Independent
119386 Enhancement: Provide Custom Field columns in Advanced Search. Independent

Server Management - Managed Servers

77075 IIS server browsing on Korean servers does not properly display multibyte characters. Windows
80816 UI issue: The "Servers and Device Groups" list on the "Reboot Server" task window lists all reachable Windows servers on a scheduled/completed job, instead of only the Windows server that was selected when the task was created. Windows

Server Management - UI

77428 When listing servers in Manage Servers with the Server Use column displayed, the values in Server Use are not showing the display name for the Server Use, but instead are showing internal names, such as SAMPLE_USE. Independent

Server Module (SMO)

113541 The Server ID field in the Runtime State SMO is empty. Independent
114437 Added Windows Server 2008 R2 x64 support for SA SMOs. Windows
114782 SAR Discovered Software Reports incorrectly displays a server twice if the server has changed ownership - the reports show the server listed under the present and the previous owner. Independent
115281 Upgrading or removing SMO package displays incorrect error message:
Found resource unit 'rsrc.zip' (25490001)
Traceback (most recent call last):
  File "/opt/opsware/bin/smtool", line 12, in ?
    smtool.smtool (sys.argv)
  File "src/smtool/smtool.py", line 28, in smtool
  File "src/smtool/ServerModule.py", line 680, in create
  File "src/smtool/ServerModule.py", line 147, in findExistingServerModule
  File "src/smtool/ServerModule.py", line 46, in refName
AttributeError: 'NoneType' object has no attribute 'name'
Removing temporary directory: /var/tmp/smtool.29805
Solaris
119372 Enhancement: Enable Server Visualizer component to properly display RHEL 6 i386 details. RHEL 6 i386

Software Management - Backend

92553 Remediation of Unix User object on Solaris zone fails when trying to create a directory in a read-only file system (where /usr/local is read-only). Solaris
96790 Email notification for a remediation job erroneously displays 'passed' instead of 'compliant'. Independent
112752 Enhancement: Policy Custom Attribute Changes are now recorded in the Software Policy History View. Independent
115190 RPM install failed during SW Policy Remediation and Install Software job. SunOS 5.10 X86
116122 A colon at the end of a variable has the same effect as adding the current directory. Windows
116918 RPM remediation (gather/order) fails. Linux
119371 Enhancement: Enable RHEL6 i386 Software Management support. RHEL 6 i386
119642 Remediating large numbers of RPMs causes system to hang, while consuming large amounts of core CPU. Linux

Web Client (OCC)

118126 An Access Denied error occurs when you download a file that was previously uploaded as a custom field of type file. Linux

For information on fixed issues from earlier releases, see the individual release notes for that release.

Back to the Table of Contents


Documentation Errata


This section describes documentation errata for release notes and product manuals.

Location / Change/Addition / Notes

SA 7.80 Administration Guide

Software Repository URL

Remove the following section:

Software Repository URL
https://theword.<data_center>:1003

 

None

7.82 SA Release Notes

Installation Procedure -
Pre-Patch step, pre-patch script

Replace path 1 (wrong) with path 2 (correct):

1. ./create_local_dc_table.sh <oracle_home> <oracle_sid>

REPLACE WITH:

2. ./truth_create_local_dc_table.sh <oracle_home> <oracle_sid>

None
Installation Procedure -
Pre-Patch step, rollback script

Replace path 1 (wrong) with path 2 (correct):

1. ./create_local_dc_table_rollback.sh <oracle_home> <oracle_sid>

REPLACE WITH:

2. ./truth_create_local_dc_table_rollback.sh <oracle_home> <oracle_sid>

None

7.83 SA Release Notes

Chapter 1, section:
Red Hat Enterprise Linux 4.x PPC64 and 5.x PPC64 OS Provisioning, Kickstart file

Replace entry 1 (wrong) with entry 2 (correct):

1.
%packages
@Base

REPLACE WITH:

2.
%packages --resolvedeps
@Base

The corrected entry ensures that all prerequisite packages are available for provisioning.

SA Planning and Installation Guide

Chapter 1, section:
SA Core Component Bundling

First paragraph

Replace sentence 1 (wrong) with sentence 2 (correct):

1. During a Custom installation, certain components can be broken out of their bundles (such as the Command Engine, the OS Provisioning Boot Server and Media Server, among others) and installed on separate servers.

REPLACE WITH:

2. During a Custom installation, certain components can be broken out of their bundles (such as the Software Repository Store, Slice Component bundle, OS Provisioning Media Server, OS Provisioning Boot Server etc.) and installed on separate servers.

 

Chapter 2, section: Pre-Installation Requirements, sub-section: SUSE Linux Enterprise Server 10 Package Requirements

Add the following to the section:

The following packages must not be installed on a SUSE Linux Enterprise Server 10 hosting an SA Core:
yast2-dhcp-server
rsync
samba
samba-32bit
yast2-samba-server
yast2-tftp-server

These packages are reinstalled during an operating system upgrade from SP2 to SP3 and therefore must be removed for proper SA Core operation.

Chapter 3, section:
Solaris Requirements

Table 11: Packages Required for Solaris

Ignore the packages marked with double asterisks (indicating that the packages are required for Solaris 8 or 9). Solaris 8 and 9 are not supported.

Chapter 3, section:
Pre-Installation Requirements

Table 18: Pre-Installation Requirements

Add the following at the bottom of the table, under the heading: Firewall Considerations:

  • Port 1521 is the default Oracle listener (listener.ora) port, but you can specify a different port in your Oracle configuration. In case your installation has been modified to use a port other than 1521, you should verify the port number from the Oracle listener status and ensure that your firewall is configured to allow the correct port to be open for the Oracle listener.
  • SA's data access layers (infrastructure) use connection pooling to the database. The connections between the database and the infrastructure layer must be maintained as long as SA is up and running. Ensure that your firewall is configured so that these connections do not time out and terminate the connections between the database and the infrastructure layers.

 

None

Chapter 3, section:
SUSE Enterprise Server 10 Requirements

 

 

Add the following requirements to this section:

binutils
cpp
desktop-file-utils
expat
gcc-c++
gcc
glibc
glibc-32bit
glibc-devel
glibc-devel-32bit
iptables
kernel-smp
kernel-source
libaio
libaio-32bit
libaio-devel
libcap
libcap-32bit
libelf
libgcc
libstdc++
libstdc++-devel
libpng
libpng-32bit
libxml2

 

libxml2-32bit
libxml2-python
make
mDNSResponder-lib
mkisofs
ncompress
nfs-utils
patch
popt
popt-32bit
readline
readline-32bit
rpm
sharutils
strace
sysstat
termcap
unzip
vim
xinetd
xntp
xorg-x11-libs
xorg-x11-libs-32bit
xorg-x11
xterm
zip
zlib
zlib-32bit

Add the package requirements for SUSE Enterprise Server 10 64-bit x86_64, an SA Core Server (Linux), to the section.
All packages have the x86_64 architecture.

Chapter 3, section:
SUSE Enterprise Server 10 Requirements

Add the following table and its contents:
Packages that are Not Supported on SUSE Linux Enterprise Server 10 Core Hosts

rsync
samba
samba-32bit
yast2-dhcp-server
yast2-samba-server
yast2-tftp-server

These packages are reinstalled during an operating system upgrade from SUSE Enterprise Linux SP2 to SP3 and therefore must be removed for proper SA Core operation after upgrade.
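One way to confirm that none of the unsupported packages remain after an SP2-to-SP3 upgrade is a small loop over the list above; rpm -q exits 0 when a package is installed. A minimal sketch (run on the core host itself; on a system without rpm the loop simply reports nothing):

```shell
# Report any package from the unsupported list that rpm says is installed.
for pkg in rsync samba samba-32bit yast2-dhcp-server \
           yast2-samba-server yast2-tftp-server; do
  if rpm -q "$pkg" >/dev/null 2>&1; then
    echo "remove: $pkg"
  fi
done
```

Any package the loop reports must be removed before the SA Core will operate properly.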

Appendix A

Table 42: Supported Operating Systems and Oracle Versions

Replace the first entry with the second one:

SunOS 10 x86_64

REPLACE WITH:

SunOS 10 (SPARC)-64 bit.

 
Appendix A, section: Solaris Requirements

In the bulleted entry that reads:
"Free /tmp space should be 400MB or more
You can use the following command to check /tmp space:"

INCORRECT COMMAND:
df -k /tmp | grep / | awk '{ print $3 }'

REPLACE WITH:
df -k /tmp | grep / | awk '{ print $4 }'
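The correction matters because df -k prints columns in the order Filesystem, 1K-blocks, Used, Available, Capacity, Mounted on: $4 is the free space, while the old command's $3 printed the space already used. A quick check of the column choice, using a hypothetical df output line in place of a live filesystem:

```shell
# Hypothetical sample line standing in for real `df -k /tmp` output:
# columns are Filesystem, 1K-blocks, Used, Available, Capacity, Mounted on.
sample="/dev/dsk/c0t0d0s0 8254420 1860330 6311546 23% /tmp"
echo "$sample" | awk '{ print $4 }'   # free KB (correct column)
echo "$sample" | awk '{ print $3 }'   # used KB (the old, incorrect column)
```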

 

 
Appendix A, section: Required and Suggested Parameters for init.ora

The following init.ora parameters should have the specified required values:

  • Both Oracle 10g and 11g
    optimizer_mode=all_rows
    session_cached_cursors>=50
  • Oracle 10g only
    open_cursors>=300
    remote_login_passwordfile=EXCLUSIVE
  • Oracle 11g only
    open_cursors>=1000
    memory_target=1616M
 
Appendix A, section: Changing Kernel Parameter Values for Linux

Add the following note:

For Oracle 11g, the typical number of open file descriptors used under normal usage has increased. For larger systems, HP recommends that you increase the value of fs.file-max. The recommended value is fs.file-max = 681574.

 
Appendix A, section: Changing Kernel Parameter Values for Linux

Change the following entry in the Oracle parameter list:

fs.file-max=65536

REPLACE WITH:

fs.file-max = 681574
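To make the corrected value persistent across reboots, the parameter is normally added to /etc/sysctl.conf and applied with sysctl -p. A minimal sketch against a scratch file, so it runs unprivileged; on a real core host you would edit /etc/sysctl.conf itself as root:

```shell
# Scratch stand-in for /etc/sysctl.conf; a real host edits that file directly
# and then applies it with `sysctl -p` as root.
conf=$(mktemp)
echo "fs.file-max = 681574" >> "$conf"
# Verify the entry is present before applying it.
grep '^fs.file-max' "$conf"
rm -f "$conf"
```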

 
Oracle RAC Support: Oracle Setup for the Model Repository

The following information is in addition to that found in the SA Planning and Installation Guide: Appendix A and the document, Oracle Setup for the Model Repository.

Concurrent with the SA 7.82 patch release, SA adds support for Oracle Real Application Clusters (RAC).

Note: Oracle RAC support requires a new installation of both Oracle and SA. Therefore, in order to enable Oracle RAC support in SA, you must first install SA 7.80 and Oracle 10.2.0.4 or 11.1.0.7 configured as described in the following sections.

Supported Oracle Versions Matrix

Supported Oracle Enterprise Edition Versions: 10.2.0.4, 11.1.0.7

Supported Operating Systems: Red Hat Enterprise Linux AS 4 x86_64 and 5 x86_64

Set up the Oracle RAC Database/Instances

SA supports any valid Oracle RAC configuration, such as any number of nodes, ASM or regular disks, and so on.
However, SA requires that the Oracle database be configured for use with SA. You will require your Oracle DBA's help to configure the Oracle RAC/instances, the required initialization parameters, the required tablespaces, the opsware_admin database user, and the listener.ora and tnsnames.ora files.
You can also run the truth_oracle_state_checker script to check if the initialization parameters are set correctly. The truth_oracle_state_checker file is located in the distribution /tools directory.

  1. Create the Database with the Required Initialization Parameters
    Before installing Oracle, the following scripts must be run and init.ora must have certain parameter values edited or added as shown in Required and Suggested Parameters for init.ora on page 143.
    • Create a database with the UTF8 character set (as required by SA), the data and index files, the default temporary tablespace, the undo tablespace, and the log files.
  2. Create the required table spaces
    • Create the following tablespaces that are required by SA:
      LCREP_DATA
      LCREP_INDX
      TRUTH_DATA
      TRUTH_INDX
      AAA_DATA
      AAA_INDX
      AUDIT_DATA
      AUDIT_INDX
      STRG_DATA
      STRG_INDX

    See "Tablespace Sizes" in the SA Planning and Installation Guide for additional tablespace sizing information.

  3. Required and Suggested Parameters for init.ora.
    • The file init.ora must be edited as follows:
      (Both Oracle 10g and 11g) For SA, the following init.ora entries are either suggested or required:

    log_buffer>=1048576
    db_block_size>=8192
    session_cached_cursors>=50
    nls_length_semantics=CHAR
    nls_sort=GENERIC_M
    processes >=1024
    undo_management=AUTO (Suggested)
    undo_tablespace=UNDO (Suggested)
    query_rewrite_integrity=TRUSTED
    query_rewrite_enabled=true
    optimizer_mode=all_rows
    optimizer_index_cost_adj=20
    optimizer_index_caching=80
    cursor_sharing=SIMILAR (SIMILAR is preferred; set EXACT only if you encounter an Oracle error)
    recyclebin=OFF
    event="12099 trace name context forever, level 1"
    _complex_view_merging=false

    (Oracle 10g only) For SA, the following init.ora entries are either suggested or required:

    open_cursors >=300
    sga_max_size >=1GB
    db_cache_size>=629145600
    shared_pool_size>=262144000
    java_pool_size>=52428800
    large_pool_size>=52428800
    job_queue_processes>=10
    sessions >=1152
    pga_aggregate_target >=104857600
    workarea_size_policy=auto
    remote_login_passwordfile=EXCLUSIVE

    (Oracle 11g only) For SA, the following init.ora entries are either suggested or required:

    memory_target=1616M
    job_queue_processes>=1000 (default)
    remote_login_passwordfile=EXCLUSIVE

  4. Create the User opsware_admin.

    You can use the script, CreateUserOpsware_Admin.sql, to create the opsware_admin database user and grant permissions (privileges) to the user (required by SA) or create the user manually.
    If you plan to create the opsware_admin user manually, follow the procedure below:

    Manual Creation of the User Opsware_Admin
    To create the opsware_admin user after a manual Oracle installation, log in to SQL*Plus and enter the following:
    # su - oracle
    # sqlplus "/ as sysdba"

    SQL> create user opsware_admin identified by opsware_admin
    default tablespace truth_data
    temporary tablespace temp
    quota unlimited on truth_data;

    SQL> grant alter session to opsware_admin with admin option;
    grant create procedure to opsware_admin with admin option;
    grant create public synonym to opsware_admin with admin option;
    grant create sequence to opsware_admin with admin option;
    grant create session to opsware_admin with admin option;
    grant create table to opsware_admin with admin option;
    grant create trigger to opsware_admin with admin option;
    grant create type to opsware_admin with admin option;
    grant create view to opsware_admin with admin option;
    grant delete any table to opsware_admin with admin option;
    grant drop public synonym to opsware_admin with admin option;
    grant select any table to opsware_admin with admin option;
    grant select_catalog_role to opsware_admin with admin option;
    grant query rewrite to opsware_admin with admin option;
    grant restricted session to opsware_admin with admin option;

    grant execute on dbms_utility to opsware_admin with grant option;
    grant analyze any to opsware_admin;
    grant insert, update, delete, select on sys.aux_stats$ to opsware_admin;
    grant gather_system_statistics to opsware_admin;
    grant create job to opsware_admin;

    grant alter system to opsware_admin;
    grant create role to opsware_admin;
    grant create user to opsware_admin;
    grant alter user to opsware_admin;
    grant drop user to opsware_admin;
    grant create profile to opsware_admin;
    grant alter profile to opsware_admin;
    grant drop profile to opsware_admin;

Installing the Model Repository

In most production environments with Oracle RAC, the Model Repository installation can be done from any SA server. The database server or RAC nodes in this case are considered to be remote.

The examples used in the following sections assume this configuration:

Two (active-active) Node RAC environment:

# Public Network
192.168.173.210 rac1pub rac1pub.dev.opsware.com (instance_name=truth1,
db name=truth)
192.168.173.211 rac2pub rac2pub.dev.opsware.com (instance_name=truth2,
db name=truth)

# Private network
172.16.1.100 rac1prv rac1prv.dev.opsware.com
172.16.1.101 rac2prv rac2prv.dev.opsware.com

# Public Virtual IP (VIP)
192.168.173.212 rac1-vip rac1-vip.dev.opsware.com
192.168.173.213 rac2-vip rac2-vip.dev.opsware.com

SA server:
192.168.173.214 rac1sa.dev.opsware.com

Model Repository Installation on a Remote Database (truth) RAC Server

In an Oracle RAC environment, only one of the RAC nodes is used during the SA installation/upgrade process. The SA Installer connects to only one Oracle RAC instance to install/modify the Model Repository. During the regular SA operations, all RAC nodes are used.
Perform the following tasks on the SA server on which you will run the SA Installer, for example rac1sa.dev.opsware.com.

  1. Model Repository Hostname Resolution

    On the server where you will run the SA Installer, ensure that the Model Repository hostname truth resolves to the remote database server, not to the server on which you will be running the SA Installer:

    In /etc/hosts, enter the public IP address of one of the RAC nodes/instances. For example the
    /etc/hosts file on rac1sa.dev.opsware.com would have the following entry:

    192.168.173.210 truth rac1pub rac1pub.dev.opsware.com

  2. Install the Oracle 11g Full Client on the SA server
    1. The SA Installer uses the Oracle Full Client to connect to the remote database server and install the Model Repository. Below are sample commands for installing the Oracle Full Client.

      Create user oracle for the Oracle Full Client installation:
      [root@rac1sa ~]# mkdir -p /u01/app/oracle
      [root@rac1sa ~]# mkdir -p /u01/app/oraInventory
      [root@rac1sa ~]# groupadd oinstall
      [root@rac1sa ~]# groupadd dba
      [root@rac1sa ~]# useradd -c "Oracle Client software owner" -g oinstall -G
      dba -d /u01/app/oracle -s /bin/bash oracle
      [root@rac1sa ~]# chown -R oracle:oinstall /u01/app
      [root@rac1sa ~]# chmod -R 775 /u01/app
      [root@rac1sa ~]# passwd oracle (change the oracle user password)

    2. Create the .bash_profile file.
      In /u01/app/oracle create the .bash_profile file.

      Note: Temporarily comment out ORACLE_HOME and ORACLE_PATH. You will uncomment these entries after the Oracle client installation is complete.

      Sample .bash_profile file
      # .bash_profile

      # Get the aliases and functions
      if [ -f ~/.bashrc ]; then
      . ~/.bashrc
      fi

      # User specific environment and startup programs
      PATH=$PATH:$HOME/bin
      export PATH

      #SA-OracleRAC parameters begin
      #unset USERNAME
      export ORACLE_BASE=/u01/app/oracle
      #export ORACLE_HOME=$ORACLE_BASE/product/11.1.0/client_1
      #PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
      export PATH

      if [ -t ]; then
      stty intr ^C
      fi

      umask 022
      #SA-OracleRAC parameters end
    3. Install the Oracle Full Client
      Install the Oracle Full Client as described in your Oracle documentation. You can create a share to access the Oracle Full Client binaries.
    4. Set Up Terminals
      You will need two X window terminals to install the Oracle Full Client:

      Terminal 1: log in as root and enter the commands:

      Terminal 1> xhost +
      Terminal 2: ssh -X oracle@<new_oracle_full_client_host>

    5. Start Oracle Full Client installation
      From Terminal 2 run the Oracle Universal Installer (OUI) installer. The Oracle Full Client is installed in:

      /u01/app/oracle/product/11.1.0/client_1

    6. Run the Oracle Universal Installer to install Oracle Full Client. The directories in this example assume an Oracle 11g Full Client on Linux.

      1. cd /location_of_oracle_full_client
      2. ./runInstaller
      3. At the Welcome Screen, click Next.
      4. Specify the Inventory Directory and Credentials (/u01/app/oraInventory and /u01/app/oinstall)
      5. For Select Installation Type, choose Administrator, click Next.
      6. For ORACLE_BASE select: /u01/app/oracle, click Next.
      7. The Oracle Universal Installer performs some checks. If the checks are not successful, fix the issue and re-run this step. If the checks are successful, click Next.
      8. OUI lists the products that will be installed. Click Install.
      9. OUI displays a progress bar during the installation.
      10. In the 'Welcome to Oracle Net Configuration Assistant' window click Next.
      11. Click Finish once the installation is complete.
      12. The following two configuration scripts need to be executed as "root" when the installation is complete:
        /u01/app/oraInventory/orainstRoot.sh
        /u01/app/oracle/product/11.1.0/client_1/root.sh
    7. Verify that the .bash_profile file for user oracle is correct.
    8. Uncomment $ORACLE_HOME and $ORACLE_PATH.
  3. Making changes to tnsnames.ora on SA server
    By default the tnsnames.ora file is located in /var/opt/oracle.
    1. Login as root on the SA Server.
    2. Enter the command:

      mkdir -p /var/opt/oracle

    3. Copy tnsnames.ora from the remote database server to the directory you created above. For the RAC environment, copy it from RAC Node 1 (for example, rac1pub.dev.opsware.com).

      The SA Installer puts the database in a restricted mode during the Model Repository installation. The database is removed from the restricted mode after successful installation/upgrade of the Model Repository. When the database is in restricted mode, only certain privileged users are allowed to connect to the database.

      To accommodate the remote Model Repository installation process, two sets of tnsnames.ora are required on the SA server.

      • tnsnames.ora-install_upgrade – this copy of tnsnames.ora is used during SA installation/upgrade. The file can be renamed.
      • tnsnames.ora-operational – this copy of tnsnames.ora is used during normal SA operation. The file can be renamed.

        You can use softlinks to point tnsnames.ora to either tnsnames.ora-install_upgrade or tnsnames.ora-operational.

      ln -s tnsnames.ora-install_upgrade tnsnames.ora

  4. Changes to the tnsnames.ora-install_upgrade sample file.

    Make a note of the text that is in BOLD letters. The tnsnames.ora file should contain the SID and not the service name. These examples use TRUTH as the truth.servicename. Ensure that the HOST references the same server as the truth entry in the /etc/hosts file. truth.servicename is case sensitive.

    # Generated by Oracle configuration tools.

    TRUTH =
    (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1pub.dev.opsware.com)(PORT = 1521))
    (CONNECT_DATA =
    (SID = truth1)
    )
    )

    LISTENER_TRUTH =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1pub.dev.opsware.com)(PORT = 1521))
    (CONNECT_DATA =
    (SID = truth1)
    )
    )

    Use softlinks to link the file to the tnsnames.ora file. Do this before you start the SA Model Repository installation or upgrade:

    ln -s tnsnames.ora-install_upgrade tnsnames.ora

    Note: During installation the SA Installer adds an SA Gateway entry into the tnsnames.ora file (linked to tnsnames.ora-install_upgrade) on the primary SA Core. When installation completes, copy this entry into the tnsnames.ora-operational file. If this entry is not present in tnsnames.ora-operational, Multimaster Mesh transactions will not flow. Below is a sample gateway entry from tnsnames.ora:

    Rac2sa_truth=(DESCRIPTION=(ADDRESS=(HOST=192.168.173.214)(PORT=20002)
    (PROTOCOL=tcp))(CONNECT_DATA=(SERVICE_NAME=truth)))

  5. Making changes to listener.ora on one of the RAC node server (instance)

    In an Oracle RAC environment, only one of the RAC nodes or instances is used during installation/upgrade process. The SA Installer connects to only one Oracle instance to modify the Model Repository. During the regular SA operations, all the RAC nodes are used.

    The SA Installer puts the database in a restricted mode during the Model Repository installation. The database is removed from the restricted mode after successful installation/upgrade of the Model Repository. When the database is in restricted mode, only certain privileged users are allowed to connect. To accommodate the remote truth installation process, two sets of listener.ora files are required on the SA server. The files can be given any name. By default the listener.ora files are located in $ORACLE_HOME/network/admin.

    • listener.ora-install_upgrade – this copy of listener.ora is used during SA install/upgrade
    • listener.ora-operational – this copy of listener.ora is used during normal SA operation.

    You can use softlinks to point listener.ora to either listener.ora-install_upgrade or listener.ora-operational:

    ln -s listener.ora-install_upgrade listener.ora

  6. Changes to the sample listener.ora-install_upgrade.
    This file is used during the SA installation/upgrade process. Make a note of the text that is in BOLD letters. The listener.ora file should contain the SID_NAME and not the service name. The SID_NAME is case sensitive. Ensure that the listener.ora changes are made on the same server that is referenced in the SA server's /etc/hosts file.

    This example uses LISTENER_RAC1PUB as the listener name.

    # Generated by Oracle configuration tools.
    LISTENER_RAC1PUB =
    (DESCRIPTION_LIST =
    (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.dev.opsware.com)(PORT = 1521)(IP = FIRST))
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.173.210)(PORT = 1521)(IP = FIRST))
    )
    )

    SID_LIST_LISTENER_RAC1PUB =
    (SID_LIST =
    (SID_DESC=
    (SID_NAME=truth1)
    (ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_2)
    )
    (SID_DESC =
    (SID_NAME = PLSExtProc)
    (ORACLE_HOME = /u01/app/oracle/product/11.1.0/db_2)
    (PROGRAM = extproc)
    )
    )

    You can use softlinks to link the file to listener.ora:

    ln -s listener.ora-install_upgrade listener.ora

    Ensure that you start the listener as follows:

    > lsnrctl start LISTENER_RAC1PUB

  7. Testing the connection from SA machine to database.

    Before starting the Model Repository installation/upgrade, you can perform the following tests to verify that your tnsnames.ora and listener.ora files are configured correctly and that the SA Installer can connect to the database in restricted mode.

    1. Verify that the SA server's /var/opt/oracle/tnsnames.ora file is configured correctly as described in Making changes to tnsnames.ora on SA server on page 147.
    2. Verify that the database servers or RAC node's $ORACLE_HOME/network/admin/listener.ora file is configured correctly as described in Making changes to listener.ora on one of the RAC node server (instance) on page 148.
    3. On the SA server:
      1. Log in as oracle or root, or su - twist/spin if these users exist.
      2. export ORACLE_HOME=/u01/app/oracle/product/11.1.0/client_1 (or the directory where you installed the Oracle Full Client).
      3. export LD_LIBRARY_PATH=$ORACLE_HOME/lib
      4. export TNS_ADMIN=/var/opt/oracle
      5. Add $ORACLE_HOME/bin to $PATH.
      6. sqlplus sys/password@truth as sysdba;
        where truth is the service_name or entry from the tnsnames.ora file
      7. Select logins from v$instance;
      8. Alter system enable restricted session;
      9. Select logins from v$instance;
        The database should now report that it is in restricted mode.
      10. Connect opsware_admin/<password>@truth.

        If you are able to logon to the database then all files are configured correctly.

      11. sqlplus sys/password@truth as sysdba.

      12. Alter system disable restricted session;
  8. Changes to SA Installer Response File
    You can now start the installation of the SA Model Repository. Ensure that you have the correct parameter values for the installation interview or that you have a previous response file.

    Verify the paths to the client's tnsnames.ora file (%truth.tnsdir), oracle client home (%truth.orahome), listener port (%truth.port), and so on.

    • %truth.tnsdir=/var/opt/oracle
    • %truth.orahome=/u01/app/oracle/product/11.1.0/client_1
    • %truth.port=1521

    You can now install the SA Core as described in the SA Planning and Installation Guide.
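Before launching the installer, the three keys called out above can be confirmed with a quick grep over the response file. A sketch, with a scratch file standing in for a real response file (the key names and values are the examples from this section):

```shell
# Scratch response file with the three keys this section calls out.
rf=$(mktemp)
cat > "$rf" <<'EOF'
%truth.tnsdir=/var/opt/oracle
%truth.orahome=/u01/app/oracle/product/11.1.0/client_1
%truth.port=1521
EOF
# Print each expected key, or flag it as missing, before running the SA Installer.
for key in truth.tnsdir truth.orahome truth.port; do
  grep "^%$key=" "$rf" || echo "missing: $key"
done
rm -f "$rf"
```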

Post SA Installation Process

After you install the SA Core, perform the following tasks in order to use all the nodes in the Oracle RAC environment.

Making changes to tnsnames.ora on the SA server

After SA install is complete, the tnsnames.ora file should point/link to the tnsnames.ora-operational file.

The SA Installer puts the database in a restricted mode during the Model Repository installation. The database is removed from the restricted mode after successful installation/upgrade of the Model Repository. When the database is in restricted mode, only certain privileged users are allowed to connect to the database. To accommodate the remote truth installation process, two sets of tnsnames.ora are required on the SA server.

  • tnsnames.ora-install_upgrade – this copy of tnsnames.ora is used during SA installation/upgrade. You can rename the file.
  • tnsnames.ora-operational – this copy of tnsnames.ora is used during normal SA operation. You can rename the file.

You can use softlinks to point tnsnames.ora to either tnsnames.ora-install_upgrade or tnsnames.ora-operational:

ln -s tnsnames.ora-operational tnsnames.ora
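The swap between the two copies can be rehearsed in a scratch directory before touching the real files under /var/opt/oracle; the file names below are the two copies described in this section:

```shell
# Rehearse the tnsnames.ora link swap in a throwaway directory.
dir=$(mktemp -d)
touch "$dir/tnsnames.ora-install_upgrade" "$dir/tnsnames.ora-operational"
ln -s tnsnames.ora-install_upgrade "$dir/tnsnames.ora"   # during install/upgrade
readlink "$dir/tnsnames.ora"                             # -> tnsnames.ora-install_upgrade
ln -sf tnsnames.ora-operational "$dir/tnsnames.ora"      # after install completes
readlink "$dir/tnsnames.ora"                             # -> tnsnames.ora-operational
rm -rf "$dir"
```

The -f flag lets ln replace the existing link in place, which is exactly the install-to-operational handover this section describes.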

tnsnames.ora-operational sample file
Make a note of the text that is in BOLD letters. This tnsnames.ora file is used during normal SA operation and contains the RAC parameters.

#This entry is for connecting to RAC virtual machines.
TRUTH =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.dev.opsware.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.dev.opsware.com)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = truth)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = Preconnect)
(RETRIES = 180)
(DELAY = 5))

)
)

LISTENERS_TRUTH =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.dev.opsware.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.dev.opsware.com)(PORT = 1521))
)

#This entry is for connecting to node2 via service_name. This entry is optional
TRUTH2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.dev.opsware.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = truth)
(INSTANCE_NAME = truth2)
)
)
LISTENER_TRUTH2 =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.dev.opsware.com)(PORT = 1521))

#This entry is for connecting to node1 via service_name. This entry is optional
TRUTH1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.dev.opsware.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = truth)
(INSTANCE_NAME = truth1)
)
)
LISTENER_TRUTH1 =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.dev.opsware.com)(PORT = 1521))

Use softlinks to link the file to the tnsnames.ora file after SA installation is complete and you are ready to start SA in operational mode:

ln -s tnsnames.ora-operational tnsnames.ora

Note: During installation the SA Installer adds an SA Gateway entry into the tnsnames.ora file (linked to tnsnames.ora-install_upgrade) on the primary SA Core. When installation is complete, copy that entry into tnsnames.ora-operational. If this entry is not present in tnsnames.ora-operational, Multimaster Mesh transactions will not flow. The following is a sample gateway entry from tnsnames.ora:

Rac2sa_truth=(DESCRIPTION=(ADDRESS=(HOST=192.168.173.214)(PORT=20002)(PROTOCOL=tcp))
(CONNECT_DATA=(SERVICE_NAME=truth)))

 

Making changes to listener.ora on one of the RAC node server (instance)

After SA installation is complete, the listener.ora file should point/link to the listener.ora-operational file.

In an Oracle RAC environment, only one of the RAC nodes or instances is used during installation/upgrade process. The SA Installer connects to only one Oracle instance to modify the Model Repository. During the normal SA operations, all the RAC nodes are used.

The SA Installer puts the database in a restricted mode during the Model Repository installation. The database is removed from the restricted mode after successful installation/upgrade of the Model Repository. When the database is in restricted mode, only certain privileged users are allowed to connect. To accommodate the remote truth installation process, two sets of listener.ora files are required on the SA server. The files can be given any name. By default the listener.ora files can be found in $ORACLE_HOME/network/admin.

Listener.ora-operational – this copy of listener.ora is used during normal SA operation.

You can use softlinks to point listener.ora to either listener.ora-install_upgrade or listener.ora-operational.

ln -s listener.ora-operational listener.ora

(before starting regular SA operations)

listener.ora-operational – this file is used to start the listener when SA is running in normal operational mode. Note the listener name and the host entries in the following example.

# listener.ora.rac1pub Network Configuration File: /u01/app/asm/product/11.1.0/db_1/network/admin/listener.ora.rac1pub
# Generated by Oracle configuration tools.

LISTENER_RAC1PUB =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.dev.opsware.com)(PORT = 1521)(IP = FIRST))
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.173.210)(PORT = 1521)(IP = FIRST))
)
)

SID_LIST_LISTENER_RAC1PUB =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u01/app/oracle/product/11.1.0/db_2)
(PROGRAM = extproc)
)
)

Use a softlink to link the file to listener.ora:

ln -s listener.ora-operational listener.ora
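Because both tnsnames.ora and listener.ora are managed through symlinks in this setup, it is worth confirming that each link resolves to the intended -operational file before starting SA. A minimal sketch (the check_link helper is illustrative only, not an SA tool):

```python
import os

def check_link(link_path: str, expected_target: str) -> bool:
    """Return True if link_path is a symlink whose target string
    equals expected_target (e.g. 'listener.ora-operational')."""
    return os.path.islink(link_path) and os.readlink(link_path) == expected_target

# Example usage, run from $ORACLE_HOME/network/admin:
#   check_link("listener.ora", "listener.ora-operational")
#   check_link("tnsnames.ora", "tnsnames.ora-operational")
```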

Ensure that you start the listener as follows:

> lsnrctl start LISTENER_RAC1PUB

Vault.conf File Changes

In an Oracle RAC environment, the vault.conf file must be modified after SA installation is complete. Modify /etc/opt/opsware/vault/vault.conf to specify the complete tnsname definition instead of the SID. For example:

  • Before:

    truth.sid: truth

  • After:

    truth.sid: (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.dev.opsware.com)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)
    (HOST = rac2-vip.dev.opsware.com)(PORT = 1521)) (LOAD_BALANCE = yes) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = truth)
    (FAILOVER_MODE = (TYPE = SELECT) (METHOD = Preconnect) (RETRIES = 180) (DELAY = 5))))
    truth.port: 1521

Restart the vaultdaemon:

/etc/init.d/opsware-sas restart vaultdaemon

Upgrading the Model Repository

To upgrade the Model Repository in an Oracle RAC environment, follow the same procedure as Installing the Model Repository on page 145. If you are doing a remote database installation, make sure that you modify listener.ora on one of the RAC instances and tnsnames.ora on the server where the SA Installer is run. It is recommended that you test the connection as described in the section Testing connection from SA machine to database on page 149.

 

SA Upgrade Guide

Chapter 1, section: OS Provisioning Stage 2 Image Upload No Longer Required

The following sentence is not valid:
However, due to this change, any Satellites in an SA 7.80 Core must also be upgraded to release 7.80 in order to provision servers. In other words an SA 7.80 Satellite can perform OS Provisioning in an SA 7.80 Core but an SA 7.50 Satellite cannot.

You can perform OS Provisioning in a mixed version SA Core/Satellite environment.
Chapter 3, section: Phase 1, Step 3b

This step should read:

3b. Select Multimaster Opsware Core - Subsequent Core
 
Chapter 3: Phase 6

Add the following step (Step 4) after step e3:

Step 4: Log on to the Slice Component bundle host, select Slice from the Upgrade Component menu, and press c to continue.

The existing Step 4 should be renumbered Step 5.
 

SA Policy Setter Guide

Operating System Provisioning Setup Chapter, section: Solaris Provisioning from a Boot Server on a Red Hat/SLES 10 Linux Server — Disabling NFS v3 or NFS v4

Replace erroneous instructions:

INCORRECT INSTRUCTIONS:

To disable NFS v4 on an SLES 10 Boot Server host:

  1. On the Boot Server host, create the following file:
    /etc/sysconfig/nfs
  2. In the newly created NFS file, add the following line:
    NFS4_SUPPORT="no"
    Restart NFS:
    /etc/init.d/nfs stop
    /etc/init.d/nfs start

REPLACE WITH THESE CORRECT INSTRUCTIONS:

To disable NFS v4 on an SLES 10 Boot Server host:

  1. On the Boot Server host, create the following file:
    /etc/sysconfig/nfs
  2. In the newly created NFS file, add the following line:
    NFS4_SUPPORT="no"
    Restart NFS:
    /etc/init.d/nfsserver restart
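After restarting NFS, you can confirm that the sysconfig file actually carries the setting. A minimal sketch (the nfs4_disabled helper is illustrative only, not part of SA; it inspects the file, not the running NFS service):

```python
def nfs4_disabled(path: str = "/etc/sysconfig/nfs") -> bool:
    """Return True if the sysconfig file sets NFS4_SUPPORT="no"."""
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line.startswith("NFS4_SUPPORT"):
                    # Accept NFS4_SUPPORT="no" or NFS4_SUPPORT=no
                    value = line.split("=", 1)[1].strip().strip('"')
                    return value.lower() == "no"
    except FileNotFoundError:
        pass
    return False
```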
 

 

Custom Attributes for Linux or VMware ESX, p 137

 

Add the following note above Table 11:

Note: Although custom attributes are provided with a default value, you must ensure that the values are valid for your system before proceeding. (QCCR1D 103293)

 

SA Users Guide: Application Automation

OS Provisioning section, Manage Boot Clients (MBC) sub-section, Required Permissions: Add the following permission: Read & Write permission to the customer Not Assigned.

Planning and Installation Guide

First Core Post-Installation Tasks Chapter

Replace the incorrect section below:

Edit the jboss_wrapper.conf File

Comment out (or delete) the following three lines in the server/ext/wrapper/conf/jboss_wrapper.conf file below:

#Following are added for bug 150387

#wrapper.java.additional.6=-Dorg.omg.CORBA.ORBClass=com.sun.corba.se.internal.Interceptors.PIORB

#wrapper.java.additional.7=-Dorg.omg.CORBA.ORBSingletonClass=com.sun.corba.se.internal.corba.ORBSingleton

#wrapper.java.additional.8=-Xbootclasspath/p:/opt/NA/server/ext/wrapper/lib/CORBA_1.4.2_13.jar

Since SA 7.80 does not use Java 1.4.2, these lines are no longer required.

REPLACE THE SECTION ABOVE WITH THIS SECTION:

After commenting out entries 6 through 8, renumber the remaining wrapper.java.additional.x entries so that the index values stay consecutive.
For example:

Change this:
wrapper.java.additional.1=-DTCMgmtEngine=1
wrapper.java.additional.2=-Duser.dir=/opt/NA750/server/ext/jboss/bin
wrapper.java.additional.3=-Xmn170m
wrapper.java.additional.4=-Djava.awt.headless=true
wrapper.java.additional.5=-Dfile.encoding=UTF8

#Following are added for bug 150387
wrapper.java.additional.6=-Dorg.omg.CORBA.ORBClass=com.sun.corba.se.internal.Interceptors.PIORB
wrapper.java.additional.7=-Dorg.omg.CORBA.ORBSingletonClass=com.sun.corba.se.internal.corba.ORBSingleton
wrapper.java.additional.8=-Xbootclasspath/p:/opt/NA750/server/ext/wrapper/lib/CORBA_1.4.2_13.jar

#Add location of keystore. This is used to make SSL request.
wrapper.java.additional.9=-Djavax.net.ssl.trustStore=/opt/NA750/server/ext/jboss/server/default/conf/truecontrol.keystore

#Bug 171948 - Need more PermGen
wrapper.java.additional.10=-XX:MaxPermSize=80m

To this:
wrapper.java.additional.1=-DTCMgmtEngine=1
wrapper.java.additional.2=-Duser.dir=/opt/NA750/server/ext/jboss/bin
wrapper.java.additional.3=-Xmn170m
wrapper.java.additional.4=-Djava.awt.headless=true
wrapper.java.additional.5=-Dfile.encoding=UTF8

#Following are added for bug 150387
#wrapper.java.additional.6=-Dorg.omg.CORBA.ORBClass=com.sun.corba.se.internal.Interceptors.PIORB
#wrapper.java.additional.7=-Dorg.omg.CORBA.ORBSingletonClass=com.sun.corba.se.internal.corba.ORBSingleton
#wrapper.java.additional.8=-Xbootclasspath/p:/opt/NA750/server/ext/wrapper/lib/CORBA_1.4.2_13.jar

#Add location of keystore. This is used to make SSL request.
wrapper.java.additional.6=-Djavax.net.ssl.trustStore=/opt/NA750/server/ext/jboss/server/default/conf/truecontrol.keystore

# Bug 171948 - Need more PermGen
wrapper.java.additional.7=-XX:MaxPermSize=80m
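Renumbering the remaining entries by hand is error-prone: the active indexes must stay consecutive while commented-out lines are left alone. A minimal sketch of that renumbering (illustrative only, not an SA tool):

```python
import re

def renumber_wrapper_args(lines):
    """Renumber active wrapper.java.additional.<n> entries so the
    indexes run 1, 2, 3, ...; commented-out lines are untouched."""
    pat = re.compile(r"^(wrapper\.java\.additional\.)\d+(=.*)$")
    out, n = [], 0
    for line in lines:
        m = pat.match(line)
        if m:
            n += 1
            out.append("%s%d%s" % (m.group(1), n, m.group(2)))
        else:
            out.append(line)  # comments, blanks, other settings
    return out
```

Applied to the "Change this" listing above, entries 1-5 keep their numbers, the commented entries 6-8 are skipped, and the trustStore and MaxPermSize entries become 6 and 7, matching the "To this" listing.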

 

Storage Visibility and Automation User Guide

Chapter 2, Asset Discovery, section: Viewing Volume Properties

On some Windows servers, the disks of newly installed multipathing software are identified as foreign disks, and their disk volumes are not displayed in the server's Disk Management panel. However, if you run the storage snapshot specification on the server from the SA Client, you can display the missing volumes by choosing Inventory > Storage > Volumes.
Chapter 2, Audit Rules, Schedule, and Results

To illustrate the new Unreplicated LUN Count storage audit rule, the following screens have been updated:

  • Audit Browser
    The new screen shows an unreplicated LUN count choice in the Source list under Rules > Storage Compliance Checks. For more information on these screens, see the 7.84 Storage Release Notes.
  • Select Server
    The new screen shows how you specify which servers are used as targets in the audit: In the Views tree, choose Targets. In the Source window, choose the Servers and Device Groups tag. In the Select Server window-Select Server tree, choose All Managed Servers.
  • Audit Schedule
    The new screen shows the new Schedule option under the View tree.
  • Audit Summary
    The new screen shows a summary of the audit results. Choose the Summary option under the View tree.
  • Storage Compliance
    The new screen shows checks in an example audit. Choose Views > <managed server> > Storage Compliance Checks.
 

Storage Visibility and Automation Release Notes

Context-Sensitive (F1) Help

A new Replication Pairs panel was added in the 7.82 release. This window provides the following information:

  • Copy Type—The type of association between source and target, such as Async, Sync, UnSyncAssoc, UnSync, UnAssoc, and Migrate.
  • Replica Type—The type of replication, such as Full Copy, Before Delta, After Delta, Log, and Not specified.
  • Source Device—The name of the source device.
  • Source Volume—The name of the source volume.
  • Status—The state of the association between source and target, such as Initialized, PrepareInProgress, Prepared, ResyncInProgress, Synchronized, FractureInProgress, QuiesceInProgress, Quiesced, RestoreInProgress, Idle, Broken, Fractured, Frozen, and CopyInProgress.
  • Target Device—The name of the target device.
  • Target Volume—The name of the target volume.

There is no online Help available for this new panel. When you press F1, an empty page displays.

 
Replication

The following replication functionality was added to SE Connector:

  • A replication pair consists of the source volume and the target (or copy) volume, including properties that describe the type of replication used to back up or copy the source volume. Replication can be either local (where source and target volumes are on the same array) or remote (where source and target volumes are on different arrays).
  • A new Replication tree control is available on the Inventory > Storage panel. To find detailed information on replication pairs, perform the following steps:
  1. From the Navigation pane, select Devices > Storage > SAN Arrays.
    Or
    Select Devices > Storage > NAS Filers.
  2. In the content pane, select a storage system and then open it.
  3. In the SAN Array or NAS Filer browser, select Inventory > Storage > Replication.

The following table describes the storage array models and replication types that SE Connector supports.

Array (Model) — Replication Technology Name, by Replication Pair Type

EVA (6200/4100)
  Local: Business Copy (BC), Snapshots, Snapclones
  Remote: Continuous Access (CA)

HP XP (XP24k/XP12K)
  Local: Business Copy (BC), Snapshots
  Remote: Continuous Access (CA), Continuous Access Journal (CA Journal)

HDS (HDS990V/USB)
  Local: ShadowImage, C.O.W. Snapshot
  Remote: TrueCopy, Universal Replicator

EMC Symmetrix (Symm48:3830, SymmDMX800)
  Local: Business Continuous Volumes (BCV)
  Remote: RDF

NetApp (FAS270)
  Local: Snapshot, SyncMirror
  Remote: Snapshot

 


Back to the Table of Contents


 

HP Software Support

The HP Software Support web site provides contact information and details about the products, services, and support that HP Software offers. Visit it at: HP Software Support Online.

HP Software support provides customer self-solve capabilities, giving you a fast and efficient way to access the interactive technical support tools needed to manage your business.

To access the Self-Solve knowledge base, visit the Self-Solve knowledge search home page.

Note: Most of the support areas require that you register as an HP Passport user and sign in. Many also require an active support contract. To find more information about support access levels, go to: Access levels.

To register for an HP Passport ID, go to: HP Passport Registration.


Legal Notices

Warranty

The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

The information contained herein is subject to change without notice.

Restricted Rights Legend

Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Copyright Notices

© Copyright 2000-2011 Hewlett-Packard Development Company, L.P.

Trademark Notices

Adobe® is a trademark of Adobe Systems Incorporated.
Intel® and Itanium® are trademarks of Intel Corporation in the U.S. and other countries.
Microsoft®, Windows®, and Windows® XP are U.S. registered trademarks of Microsoft Corporation.
Oracle and Java are registered trademarks of Oracle and/or its affiliates.
UNIX® is a registered trademark of The Open Group.

Documentation Updates

To check for recent updates or to verify that you are using the most recent edition of a document, go to:
http://support.openview.hp.com/selfsolve/manuals
This site requires that you register for an HP Passport and sign in.
Or click the New users - please register link on the HP Passport login page.
You will also receive updated or new editions if you subscribe to the appropriate product support service.
Contact your HP sales representative for details.

Back to the Table of Contents