Databridge 6.2 Service Pack 2 Hotfix 4 (SP2 HF4) is available to maintained customers and applies to an existing installation of Databridge 6.2. This technical note describes fixes included in this release.
Note: Version 6.2 SP2 Update 1 is available beginning in February 2016. For more information, see KB 7021926.
Obtaining the Hotfix
Maintained customers are eligible to download the latest product releases from https://download.attachmate.com/Upgrades/.
You will be prompted to log in and accept the Software License Agreement before you can download a file. For more information about using the Downloads website, see KB 7021965.
Databridge components and utilities that have been updated since version 6.2 are listed below with their current version number.
All host programs have been recompiled with MCP Level 55.1 software.
Service Pack 2 Hotfix 4:
- UserData Reader version
- SUMLOG Reader version
This hotfix uses the same directory structure as the installation image of the original release to help you locate the appropriate update file for each product.
Please note the following:
- This version uses a single patch that updates the Client, the Console, and DBEnterprise Server.
- You cannot update a 32-bit installation with a 64-bit patch or vice versa. To switch, uninstall the 32-bit 6.2 release software and install the 64-bit 6.2 release software before updating it with the 64-bit patch.
Before you install the hotfix, quit all Databridge applications including the Console, and then terminate the service/daemon. After the installation is complete, restart the service/daemon manually.
IMPORTANT: To avoid potential problems, we strongly recommend that you upgrade the Host and Enterprise Server software simultaneously.
We also recommend that you update the Client Console if you're updating the Client software, particularly if you use the configuration aspect of the Client Console (that is, Client Configurator). This will ensure that your data is interpreted correctly, as the lengths of some fields have changed.
- On the MCP Server, upload DB62HOTFIX.con using binary or image file transfer.
put DB62HOTFIX.con DB62HOTFIX
- Log on to the Databridge usercode on the host and go to CANDE.
- To unwrap encapsulated files in the DB62HOTFIX file, use the following command:
WFL UNWRAP *= AS = OUTOF DB62HOTFIX TO DISK (RESTRICTED=FALSE)
Databridge Client, Client Console, and Enterprise Server
- On Windows, open the Windows32 or Windows64 folder of the hotfix and double-click the file databridge.62xxxx.Wnn.exe. All installed components, such as the Client, the Console, and Enterprise Server, will be updated.
- On UNIX, upload the appropriate tar files for the Client and Console from the hotfix to the directories where these components are installed. (Optimally, the Client and Console are installed in separate directories to facilitate maintenance.) If you use Windows to extract the tar file from the zip file, you must transfer the tar file to UNIX using binary FTP.
Then, use the following command:
tar -xvf <filename>
where <filename> is the full name of the tar file. This command replaces the files in the Databridge install directory with updated files from the tar file.
Note: To avoid accidentally deleting the Databridge applications, we recommend that you always keep the install directory and the working directory separate.
Issues Resolved by Service Pack 2 Hotfix 4
In addition to the previous Issues Resolved by Service Pack 2 Hotfix 3, this hotfix resolves the following issues:
- If the available audit ends with a halt/load recovery record, DBEngine updates the client audit location with a SEG of 0, which results in error DBM0097 on the next process.
- An embedded remap causes an INVALID INDEX when used with a non-tailored DBSupport and a 57.1 database.
- Aborted COMPACT creates are sometimes sent to the caller.
- Processor utilization increases starting in DBEngine 62.004.0034 when replicating many overlapping transactions.
- Transactions that start in the middle of an aborted transaction could result in extra updates being sent to the caller.
- Recloning data sets that have resident history records and involve virtual data sets results in the history data sets always being recloned.
- The 64-bit SQL Server client does not use COUNT_BIG in the select statement used to get the row counts. This causes tables that have more rows than a 32-bit integer can hold to get an overflow error.
- An error in the editing of the row counts at the end of the data extraction of a data set results in a negative number when the count overflows a 32-bit integer.
- The error recovery from failed updates to COMPACT data sets with items that have OCCURS DEPENDING ON clauses fails when RSNs (or AA Values) are used as the source of the index. The SQL to do a delete fails to enclose the RSN (or AA Value) in quotes, when the data type is CHAR(12), resulting in a SQL error. A similar error occurs when using binary AA Values (or RSNs).
- The SQL Server client does not prevent the user from setting the configuration file parameter bcp_delim to the empty string, which causes the bcp to fail.
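Two of the fixes above (the missing COUNT_BIG and the row-count editing error) stem from holding row counts in 32-bit signed integers. A minimal Python sketch, purely illustrative and not Databridge code, of how a count past 2,147,483,647 wraps negative:

```python
def to_int32(n: int) -> int:
    """Simulate storing a row count in a 32-bit signed integer."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

print(to_int32(2_147_483_647))   # largest count that still fits: 2147483647
print(to_int32(3_000_000_000))   # a 3-billion-row table wraps to a negative count
```

This is why a SELECT built with plain COUNT (a 32-bit result) fails on very large tables, while COUNT_BIG (a 64-bit result) does not.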
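The failed-delete fix above is a quoting problem: when the index source is an RSN or AA Value stored as CHAR(12), the value must be enclosed in quotes in the generated SQL, while a binary/numeric value must not be. A hedged sketch of the distinction (the table and column names are hypothetical, not Databridge-generated SQL):

```python
def delete_by_index(table: str, value, char_index: bool) -> str:
    """Build a DELETE statement for an RSN/AA Value key.

    CHAR(12) values need surrounding quotes; numeric or binary
    values must be emitted without them.
    """
    literal = f"'{value}'" if char_index else str(value)
    return f"DELETE FROM {table} WHERE rsn = {literal}"

# CHAR(12) RSN: omitting the quotes is exactly what caused the SQL error
print(delete_by_index("orders", "00000001AB2C", char_index=True))
# Binary AA Value / numeric RSN: no quotes
print(delete_by_index("orders", 123456, char_index=False))
```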
Issues Resolved by Service Pack 2 Hotfix 3
In addition to the previous Issues Resolved by Service Pack 2 Hotfix 2, this hotfix resolves the following issues:
- A halt/load during a Long Transaction causes DBEngine not to recognize quiet points for committing if COMMIT DURING LONG TRANSACTIONS was false.
- The Worker terminates if a valid request is not received within 30 seconds after a connection is made.
- No records of DIRECT data sets are extracted during a clone.
- DBEnterprise doesn't switch to host audit if caching is set to "if available" and Cacher is not running.
- DBEnterprise faults on some systems when processing a Single Abort Assign audit record.
- The 6.2 SP2 HF2 client gets a SQL error when an INSERT for a table that has nothing but keys results in a duplicate record error. The standard recovery from this situation is to try an UPDATE instead, but because no update stored procedure was created, the result is a SQL error.
- The global StateInfo is not propagated following the receipt of a GC reorg DOC record, when the auto_reclone parameter is set to true. This causes the client to select the data sets using an older quiet point resulting in some audit information being reprocessed after the data set in question is recloned.
- The service fails to create a log file entry when a client that it launched terminates abnormally.
- The service now logs the launching of BCNOTIFY initiated scripts in the Windows Application Event Log.
Issues Resolved by Service Pack 2 Hotfix 2
In addition to the previous Issues Resolved by Service Pack 2 Hotfix 1, this hotfix resolves the following issues:
- Record types with no items in the variable format portion are being discarded.
- DBTwin faults with a resize error when using a non-tailored DBSupport; with a tailored DBSupport it gets:
DATAERROR : ... : READONLY ITEMS HAVE CHANGED - CANNOT STORE
- If DMUtility fails, a message is now displayed and the program waits for an AX response of Abort, Retry, or Ignore.
- Cloning a MODEL database uses the AFN from the original database instead.
- Retrieving a structure name overwrites the information in DATASET_INFO, which might be in use by the caller.
- DBTwin terminates with a "no more audit available" error when loading a dump created on a different day.
- DBPRIMARYKEY corrupts the record type in the DatasetInfo array if a variable format data set does not have a primary key.
- Some aborted updates are being sent if a data set was expanding.
- Reversals of link updates are not being detected in the audit trail and dummy creates for an expanding STANDARD or COMPACT data set do not have reversals. This causes the algorithm for matching reversals with updates to fail.
- One-word variable format record updates are sent as link items.
- Clones that start in the middle of an aborted transaction can result in extra updates being sent to the caller. (We must see real BTR to honor reversals.)
- The SAVETRPOINT construct is now supported, which marks the rollback point for aborts.
- The SQL Server Client creates a stored procedure with an update statement that has an empty SET list when faced with a data set that has nothing but keys and an identity column added as a user column. As a result, the creation of the stored procedure fails.
- The Client cannot retrieve the NULL RECORD data from the NULL RECORD file for a variable format data set whose record type is greater than 127.
- MISER database updates to the virtual data set GL-HISTORY-REMAP2 are sometimes being processed out of turn when using multi-threaded updates.
- The service, when not run as the built-in SYSTEM account, cannot access the file DATABridge_Messages.dat. This causes the console to fail to display the text associated with error codes.
Issues Resolved by Service Pack 2 Hotfix 1
In addition to the previous 6.2 updates (described in KB 7021919), this hotfix resolves the following issues:
- The Engine does not flag items that are visible RSNs as needing to be cloned as RSNs, using the item_format field of the DBS_Layout record. This results in the visible RSN being cloned as a float.
In order to make this patch take effect, you will need to recompile the support library.
- DBEnterprise gets Address Check errors when reading the end of an expanding data set. DBEngine now uses the DMUtility DMINQ function to determine the DataEOFs of data sets to be cloned by DBEnterprise.
- DBSupport and GenFormat were adjusted to handle DMSII 57.1 databases that can contain more than 4K data sets and 16K structures.
- Visible RSNs and links in AA format are now labelled "UID" in Properties dialogs.
- Removing a remap from a DMSII database causes updates for the data set to be discarded.
- Data in Remote Records mode for EXTENDED data sets is misaligned in local filtered sources.
- Cache files get corrupted when Cacher runs out of disk space.
- Empty data sets or empty sections of data sets cause an I/O error on a clone; DBEnterprise now switches to Remote Records mode for cloning them.
- Changed the order of the "Incremental Statistics:" and the "Processing from AFN=afn, ABSN, ..." messages to make the log file easier to read. Also added the old AFN to the incremental statistics line to make things even clearer.
- The preservation of deleted records, when used in conjunction with data sets that use a SET with the KEYCHANGEOK attribute as the index, occasionally causes duplicate record errors. The solution to this problem is not to preserve fake deleted records that are actually modifies in DMSII.
- The host variable length of an ALPHA item that is being cloned as a binary type is not adjusted to compensate for the lack of a closing NUL character. This results in SQL errors when it makes the host variable longer than the maximum size for a binary column.
- Included the getlogtail utility in UNIX clients, as this is useful when generating e-mails about an abnormal run exit code.
- Made the client ignore duplicate deleted records when miser_database is true and the delete_seqno column is not being used.
- Tables that have more rows than would fit in a 32-bit signed integer cause the Oracle client to get an overflow error. This in turn causes the client to loop while fetching the record count during bulk load verification.
- The client is using 32-bit signed integers to hold the record counts during data extraction. Large data sets cause the bulk load verification to fail because of overflow errors. This was corrected by using 64-bit integers whenever possible, and otherwise using 32-bit unsigned integers to hold the record counts.
- The warnings and error messages written by the timer thread are being directed to stdout rather than stderr; this results in the log buffer not being flushed after these messages are issued. This is a problem when running dbutility as a background run, since the log file is the only place where you can see these messages.
- The client does not recover from an insert that results in duplicate record error after an update gets a row count of 0. This normally happens when the updated record is not present. However, in some cases where triggers are used, the count gets corrupted, resulting in the client stopping because of the error in the insert. We have made the client attempt to recover from this situation by doing a delete/insert operation, which ensures that the update gets done.
- Implemented a new DMS_ITEMS di_options bit that indicates that the item should be stored as a data type UNIQUEIDENTIFIER. This is only valid for SQL Server databases. The bit mask is 0x8000000 and it is only valid for ALPHA(36) items.
- The lexical scanner in the service and the batch console does not handle data source names that contain dashes.
- When "Flatten to Secondary Table" is selected for an item with an OCCURS clause, the selection cannot be undone after you commit the changes.
- The Client Configurator was enhanced to allow you to set the bit DIOPT_Clone_as_GUID (0x8000000) in the pop-up menu that appears when you right-click the item in the DMSII view. The pop-up menu only shows the option if the item is an ALPHA(36).
- The Client Configurator was enhanced to allow you to set values for the dms_scale and dms_signed columns in DMS_ITEMS when an ALPHA item is marked as "Clone As Number". This is done using the properties page of the item that appears below the DMSII view.
- DBTwin with multiple source hosts did not terminate on a DASDL update or reorg of the primary database. It now displays the following error message before terminating:
>>>  Database update level changed from <old> to <new> <<<
You should then perform the procedures for updating/reorganizing the secondary database before restarting DBTwin.
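The DIOPT_Clone_as_GUID option described above is a single bit (mask 0x8000000) in the DMS_ITEMS di_options column, valid only for ALPHA(36) items replicated to SQL Server. An illustrative Python sketch of setting and testing the bit (the helper names are ours, not a Databridge API):

```python
# Bit mask from the fix above; valid only for ALPHA(36) items on SQL Server.
DIOPT_Clone_as_GUID = 0x8000000

def set_clone_as_guid(di_options: int) -> int:
    """Turn the bit on in a DMS_ITEMS di_options value."""
    return di_options | DIOPT_Clone_as_GUID

def clones_as_guid(di_options: int) -> bool:
    """Test whether an item is flagged to be stored as UNIQUEIDENTIFIER."""
    return bool(di_options & DIOPT_Clone_as_GUID)
```

Because this is an OR of a single bit, setting it leaves any other di_options bits untouched: set_clone_as_guid(0x4) yields 0x8000004.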
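The duplicate-insert recovery described above follows an UPDATE, then INSERT, then DELETE + INSERT sequence. A self-contained Python sketch of that control flow against a toy in-memory table (FakeDb, CorruptedCountDb, and apply_update are invented for illustration and are not Databridge code):

```python
class DuplicateKeyError(Exception):
    """Raised when an INSERT hits an existing key (duplicate record error)."""

class FakeDb:
    """Toy in-memory table keyed by a single value (illustration only)."""
    def __init__(self):
        self.rows = {}

    def update(self, key, row):
        """Return the affected row count, as a SQL UPDATE would."""
        if key in self.rows:
            self.rows[key] = row
            return 1
        return 0

    def insert(self, key, row):
        if key in self.rows:
            raise DuplicateKeyError(key)
        self.rows[key] = row

    def delete(self, key):
        self.rows.pop(key, None)

class CorruptedCountDb(FakeDb):
    """Simulates triggers corrupting the row count: UPDATE always reports 0."""
    def update(self, key, row):
        return 0

def apply_update(db, key, row):
    """UPDATE first; on a row count of 0 try INSERT; if that raises a
    duplicate record error, fall back to DELETE + INSERT so the update
    is guaranteed to take effect."""
    if db.update(key, row) == 0:
        try:
            db.insert(key, row)
        except DuplicateKeyError:
            db.delete(key)
            db.insert(key, row)
```

With a healthy table the UPDATE or INSERT path suffices; with a corrupted count the DELETE + INSERT fallback still leaves the row holding the updated value.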
Additional technical resources are available from https://support.microfocus.com/product/?prod=DB.