Databridge 6.3 New Features and Release Notes

  • 7021921
  • 14-May-2015
  • 01-Apr-2018


Databridge version 6.3


This technical note includes new features and release notes for Databridge 6.3, available May 2015.

Note: Version 6.3 Hotfix 2 is available beginning in August 2015. For more information, see KB 7021924.


Obtaining the Product

Maintained customers are eligible to download the latest product releases from Downloads:

You will be prompted to login and accept the Software License Agreement before you can select and download a file. For more information on using Attachmate Downloads, see KB 7021965.

Version Information

Databridge components that have been updated in this release are listed below with their current version number:


Databridge 6.3
DBEngine version
DBServer version
DBSupport version
DBGenFormat version
DBPlus version
DBSpan version
DBSnapshot version
DBTwin version
DMSIIClient version
DMSIISupport version
DBInfo version
DBLister version
DBTanker version
DBAuditTimer version
DBChangeUser version
DBAuditMirror version
DBCobolSupport version


Databridge 6.3
Initialize version
PatchDASDL version
COBOLtoDASDL version
UserData Reader version
SUMLOG Reader version
COMS Reader version
Text Reader version
BICSS Reader version
TTrail Reader version
LINCLog Reader version
BankFile Reader version
DiskFile Reader version
PrintFile Reader version

Enterprise Server

Databridge 6.3
DBEnterprise version
DBDirector version
LINCLog version


Databridge 6.3
bconsole version
dbutility version
DBClient version
DBClntCfgServer version
dbscriptfixup version
DBClntControl version
dbfixup version
migrate version
dbpwenc version

Installation Instructions

Before you install version 6.3, quit all Databridge applications including the Console, and then terminate the service/daemon. After the installation is complete, restart the service/daemon manually.

Note: To avoid potential problems, we strongly recommend that you upgrade the Host and Enterprise Server software simultaneously.

We also recommend that you update the Client Console if you're updating the Client software, particularly if you use the configuration aspect of the Client Console (that is, Client Configurator). This will ensure that your data is interpreted correctly, as the lengths of some fields have changed.

For further installation instructions, see the Installation Guide.

Databridge 6.3 uses the same directory structure introduced in version 6.2 (see KB 7021915).

Supported Platforms

For information about supported platforms, including hardware and software requirements, see KB 7021922.

New Features and Resolved Issues

Databridge 6.3 introduces new features and resolves several issues, including those which were fixed in version 6.2 Service Pack 2. For a detailed description of those previous fixes, see KB 7021919.

Significant Features

  • This version is qualified to run on DMSII level 58.1 (MCP 17). New DMSII features introduced in 58.1 are not supported.
  • DMSII level 57.1 (MCP 16.0) support. This version supports XL databases, which contain up to 16000 structures and 4000 data sets.
  • Oracle 12c support
  • Microsoft SQL Server 2014 support


  • XL databases are now supported.
  • Databridge has been decoupled from the ACCESSROUTINES. This makes Databridge less sensitive to changes to DMSII internals. A new library called DMSIISUPPORT has been added to facilitate this change. This library is specifically tailored to each database and is compiled automatically.
  • The cloning process has changed. During a clone, Databridge accesses the DMSII database only in inquiry mode, except under the following conditions:
    • The clone is OFFLINE
    • READ ACTIVE AUDIT = FALSE (in the DBEngine parameter file)
    In addition, clones no longer mark the audit trail with begin- and end-transactions.
  • Unsafe constructs have been removed from DBEngine. Previously, an LFILES OBJECT/DATABRIDGE/ENGINE would report that it was NON-EXECUTABLE:UNSAFE.
  • A missing audit file with the options DBPLUS = FALSE and AUDIT NO FILE no longer causes DBEngine to go into an infinite loop until the file is restored.
  • If a transaction was aborted while a data set was simultaneously expanding, updates and their reversals could become mismatched, resulting in the reversals not being fully applied. This issue has been addressed in Databridge 6.3.
  • If the DBServer SOURCE references a logical database of a 57.1 database, it will no longer get an INVALID INDEX.
  • DBTwin with multiple source hosts no longer terminates abruptly on a DASDL update or reorganization of the primary database. It now displays the following error message before terminating:
>>> [0105] Database update level changed from <old> to <new> <<<

The customer should then perform the procedures for updating/reorganizing the secondary database.

  • The Engine previously failed to use the item_format field of the DBS_Layout record to flag visible RSN items as needing to be cloned as RSNs. As a result, visible RSNs were cloned as floats.

For this change to take effect, you must recompile the support library.

  • Some aborted updates were being sent if a data set was expanding.
  • Reversals of link updates were not being detected in the audit trail, and dummy creates for an expanding STANDARD or COMPACT data set did not have reversals. This caused the algorithm for matching reversals with updates to fail.
  • One-word variable format record updates were sent as link items.
  • Clones that started in the middle of an aborted transaction could result in extra updates being sent to the caller.
  • Record types with no items in the variable format portion are no longer discarded.
  • DBTwin faulted with a resize error when using a non-tailored DBSupport, and encountered an error with a tailored DBSupport as well. This issue has been addressed in Databridge 6.3.

  • If DMUtility fails, a message is now displayed, and Databridge waits for an AX response of Abort, Retry, or Ignore.
  • Cloning a MODEL database now uses the AFN from the original database.
  • Retrieving a structure name will no longer overwrite the information in DATASET_INFO, which might be in use by the caller.
  • DBTwin no longer terminates with a no more audit available error when loading a dump created on a different day.
  • The new parameter file option DYNAMIC NAMES determines whether the names of the Extract Workers include the database name and the current data set name. Syntax:

If the option is true, the Extract Workers will be named


where <n> is the Worker number. If the option is false, they will be named

  • A halt/load during a long transaction no longer prevents DBEngine from recognizing quiet points for committing when COMMIT DURING LONG TRANSACTIONS is set to false.


  • The new parameter file option AFTER COPY JOB specifies a WFL job that DBServer will run after every file that is transferred using DBEnterprise. The option should appear with the other “global” options before the sources are declared.

Since file transfers run under the usercode alias associated with the Windows user name, the title of the WFL job should include a usercode and family name. The job will run under the usercode specified in the file title. For example:


will run under the ADMIN usercode.

The WFL job must take two parameters. The first is the title of the MCP file that was copied. The second is a boolean that is true if the file was uploaded to the MCP environment and false if it was downloaded from the MCP environment.

  • The new AUTHORIZED USERCODE source option specifies which usercodes are allowed to access the source. For Windows clients, the user name of the person or process running the client must be a valid usercode on the MCP system or mapped to a usercode using the REMOTEUSER construct:
AUTHORIZED USERCODE [=] usercode1 [OR usercode2 ...]

Usercodes should be in quotes if they could be confused with reserved words. For example:

source bank:
authorized usercode = BILLSMITH or "SUPPORT";

If an unauthorized user attempts to use a source, DBServer will return the new error message:

[1003] <username> at <clienthost> not authorized for
source <sourcename>

If the client program did not send the user name to DBServer, "(old client)" will appear in place of <username>.

  • If a valid request is not received within 30 seconds after a connection is made, the Worker will terminate.
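
The 30-second request window works like an ordinary receive timeout on a newly accepted connection. A minimal sketch in Python, purely illustrative (the actual Worker is MCP host software; the function and constant names below are invented):

```python
import socket

HANDSHAKE_TIMEOUT = 30  # seconds; mirrors the Worker's request window

def await_request(conn: socket.socket, timeout: float = HANDSHAKE_TIMEOUT) -> bytes:
    """Wait for the first request on a new connection; give up after `timeout`."""
    conn.settimeout(timeout)
    try:
        data = conn.recv(4096)
    except socket.timeout:
        conn.close()          # no valid request arrived in time: terminate
        raise
    if not data:
        conn.close()          # peer closed without sending a request
        raise ConnectionError("connection closed before a request was sent")
    return data
```

In a real listener this would be called once per accepted connection; a `socket.timeout` here corresponds to the Worker terminating the connection.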


  • DBSupport and GenFormat were adjusted to handle DMSII 57.1 databases that can contain more than 4K data sets and 16K structures.
  • DBPRIMARYKEY no longer corrupts the record type in the DatasetInfo array if a variable format data set does not have a primary key.
  • The formatting routines no longer erroneously check FIELD items for NULL values.


  • For extremely large databases, the DBSupport compile no longer fails with PROGRAM SEGMENT TOO LARGE.


  • The compile of DMSII Client Lib no longer fails with a syntax error when the database has more than 1,023 data sets as allowed with the XL DASDL option starting in 57.1.
  • When compiled with DMALGOL 56.1 (rather than 57.1), the clone of a DMSII 57.1 database no longer fails because the AUDITLOCATION data set has no records.
  • Structure numbers over 9,999 no longer cause the DMSII Client to attempt to clone data sets each time it runs.
  • The report now shows duplicate records and keychange errors.
  • An AX command before replication started no longer causes the client to use excessive machine cycles.



  • DBTwin with multiple source hosts no longer terminates abruptly on a DASDL update or reorg of the primary database. It now displays the following error message before terminating:
>>> [0105] Database update level changed from <old> to <new> <<<

You should then perform the procedures for updating/reorganizing the secondary database before restarting DBTwin.


  • Implemented support for unaudited databases. The DMSIITranstate entry point will be a no-op and always return false (no error) for unaudited databases. Likewise, the InTranState flag will always be false for unaudited databases.


  • Visible RSNs and links in AA format are now labelled "UID" in Properties dialogs.
  • Removing a remap from a DMSII database caused updates for the data set to be discarded.
  • Data in Remote Records mode for EXTENDED data sets was misaligned in local filtered sources.
  • DBEnterprise got Address Check errors when reading the end of an expanding data set.
  • Cache files got corrupted when Cacher ran out of disk space.
  • Empty data sets or empty sections of data sets caused an I/O error during a clone, and DBEnterprise switched to Remote Record mode to clone them.
  • Some aborted updates were being sent if a data set was expanding.

Reversals of link updates were not being detected in the audit trail, and dummy creates for an expanding STANDARD or COMPACT data set did not have reversals. This caused the algorithm for matching reversals with updates to fail.

One-word variable format record updates were sent as link items.

Clones that started in the middle of an aborted transaction could result in extra updates being sent to the caller. (A real begin-transaction record must be seen before reversals are honored.)

The SAVETRPOINT construct is now supported, which marks the rollback point for aborts.
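
The update/reversal matching that these fixes restore can be modeled abstractly. A minimal sketch in Python (the record shapes and function name are invented for illustration and are not Databridge structures): every aborted update must be cancelled by a later reversal, so a reversal with no matching update, such as a dummy create that carries no reversal, breaks the matching algorithm:

```python
def apply_with_reversals(records):
    """Replay a stream of (op, key) audit records, cancelling aborted work.

    'upd' - an update to key
    'rev' - a reversal cancelling the most recent un-reversed update to key
    Returns the set of keys whose updates survived (were never reversed).
    """
    pending = {}              # key -> count of un-reversed updates
    for op, key in records:
        if op == "upd":
            pending[key] = pending.get(key, 0) + 1
        elif op == "rev":
            if pending.get(key, 0) == 0:
                # A reversal with no matching update: the failure mode
                # described above for dummy creates without reversals.
                raise ValueError(f"unmatched reversal for {key}")
            pending[key] -= 1
    return {k for k, n in pending.items() if n > 0}
```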

  • The COPY command line can now specify the HIDE option to prevent displaying the progress dialog. Copies from the MCP environment to the Windows environment can also specify the OVERWRITE option, which will automatically overwrite an existing Windows file. If HIDE is specified without OVERWRITE, existing files will be unchanged.
COPY "localname" { TO | AS } (usercode)MCPName [ ON family ]
[ { FROM | VIA } ipnameoraddress ] [ PORT portnbr ]
[ HIDE ]
COPY (usercode)MCPName [ ON family ] { TO | AS } "localname"
[ { FROM | VIA } ipnameoraddress ] [ PORT portnbr ]
[ HIDE ]
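
As a concrete instance of the second form above (the usercode, file names, family, address, and port here are invented for illustration):

```
COPY (ADMIN)DATA/BANKFILE ON PACK TO "C:\Extracts\bankfile.dat"
     VIA 192.168.1.10 PORT 5000
     HIDE
```
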
  • No records of DIRECT data sets were extracted during a clone.
  • DBEnterprise was faulting on some systems when processing a Single Abort Assign audit record.
  • When multiple file transfers were run simultaneously, it was possible for one or more of them to terminate without opening a log file, due to collisions on the log file titles, which include the date and time.

The list of files copied is now consolidated in a single file called "Files copied.txt" in the same subdirectory as the file transfer log files for a particular host (either TO or FROM).

DBEnterprise now retries the log open every second for 50 seconds and then displays a popup asking whether to retry or cancel the command.

  • DBEnterprise couldn't find files specified with a relative directory in a copy command.
  • Cacher returned error [0033] Audit location ... is wrong if a record was split across a block boundary and the continuation block hadn't yet been written. This issue has been addressed in Databridge 6.3.
  • DBEnterprise now switches to host audit files if the caching option is set to "if available" and Cacher is not running.
  • Copying an MCP directory no longer ignores the OVERWRITE option.


  • Enhancements have been made to the timer thread to optionally implement timeouts to stop the Client after stuck queries or a lack of data from the server.
  • Automated handling of non-US Oracle databases and Oracle databases whose character set is UTF8.
  • 64-bit counters are now used for statistics. This eliminates any possibility of overflow during full clones.
  • The updatepath utility, which allows you to include the Databridge Client directory in the system or user PATHs, was modified to also include the install directory in the PATH.
  • The order of the "Incremental Statistics:" and "Processing from AFN=afn, ABSN, ..." messages was changed to make the log file easier to read. For additional clarity, the old AFN was added to the incremental statistics line.
  • The preservation of deleted records, when used in conjunction with data sets that use a SET with KEYCHANGEOK attribute as the index, occasionally caused duplicate record errors. This issue is resolved by not preserving fake deleted records that are actually modified in DMSII.
  • The host variable length of an ALPHA item that was being cloned as a binary type was not adjusted to compensate for the lack of a closing NULL character. This resulted in SQL errors when it made the host variable longer than the maximum size for a binary column.
  • Included the getlogtail utility in UNIX clients, as this is useful when generating e-mails about an abnormal run exit code.
  • Made the client ignore duplicate deleted records when miser_database is true and the delete_seqno column is not being used.
  • Tables with more rows than fit in a 32-bit signed integer caused the Oracle client to get an overflow error, which in turn caused the client to loop while fetching the record count during bulk load verification. The client was using 32-bit signed integers to hold record counts during data extraction, so large data sets caused bulk load verification to fail with overflow errors. This was corrected by using 64-bit integers whenever possible, and 32-bit unsigned integers otherwise.
  • The warnings and error messages written by the timer thread were being directed to stdout rather than stderr, so the log buffer was not flushed after these messages were issued. This was problematic when running dbutility in the background, as the log file is the only place where these messages can be seen.
  • The client did not recover from an insert that resulted in a duplicate record error after an update got a row count of 0. Normally this happens when the updated record is not present; in some cases where triggers are used, the row count gets corrupted, causing the client to stop on the insert error. The client now recovers from this situation by performing a delete/insert operation, which ensures that the update is applied.
  • Implemented a new DMS_ITEMS di_options bit that indicates that the item should be stored as a data type UNIQUEIDENTIFIER. This is only valid for SQL Server databases. The bit mask is 0x8000000 and it is only valid for ALPHA(36) items.
  • When the SQL Server Client handled a data set with only keys and adding an identity column as a user column, it created a stored procedure with an UPDATE statement and empty SET list. As a result, the creation of the stored procedure failed. This issue has been resolved.
  • The Client could not retrieve the NULL RECORD data from the NULL RECORD file for a variable format data set whose record type was greater than 127.
  • MISER database updates to the virtual data set GL-HISTORY-REMAP2 were sometimes being processed out of turn when using multi-threaded updates.
  • The global StateInfo was not being propagated following the receipt of a GC reorg DOC record, when the auto_reclone parameter is set to true. This caused the client to select the data sets using an older quiet point causing some audit information to be reprocessed after the data set in question was recloned.
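
Several of the fixes above (the 64-bit statistics counters and the bulk-load record counts) concern 32-bit signed arithmetic wrapping past 2^31 - 1. A minimal demonstration in Python; `to_int32` is an invented helper that emulates two's-complement storage and is not Databridge code:

```python
def to_int32(n: int) -> int:
    """Emulate storing n in a 32-bit signed integer (two's complement)."""
    n &= 0xFFFFFFFF                 # keep the low 32 bits
    return n - 0x100000000 if n >= 0x80000000 else n

# A row count one past the 32-bit signed maximum goes negative when held
# in a 32-bit counter -- the overflow the 6.3 Client avoids by using
# 64-bit (or at least unsigned 32-bit) counters.
rows = 2_147_483_648                # 2**31: one too many for int32
assert to_int32(rows) == -2_147_483_648
assert to_int32(2_147_483_647) == 2_147_483_647   # max int32 is still fine
```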

Client Configurator

  • When "Flatten to Secondary Table" was selected for an item with an OCCURS clause, this could not be undone after you committed the changes.
  • The Client Configurator was enhanced to allow you to set the bit DIOPT_Clone_as_GUID (0x8000000) in the pop-up menu that appears when you right-click the item in the DMSII view. The pop-up menu only shows this option if the item is an ALPHA(36).
  • The Client Configurator was enhanced to allow you to set values for the dms_scale and dms_signed columns in DMS_ITEMS when an ALPHA item is marked as "Clone As Number". This is done using the properties page of the item that appears below the DMSII view.
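
The DIOPT_Clone_as_GUID bit mentioned above is an ordinary bit flag in di_options. Setting and testing it looks like this (Python sketch; the constant's value 0x8000000 comes from these release notes, while the helper names are invented for illustration):

```python
DIOPT_CLONE_AS_GUID = 0x8000000   # bit value documented in these notes

def set_clone_as_guid(di_options: int) -> int:
    """Return di_options with the Clone-as-GUID bit set."""
    return di_options | DIOPT_CLONE_AS_GUID

def clones_as_guid(di_options: int) -> bool:
    """True if the Clone-as-GUID bit is set in di_options."""
    return bool(di_options & DIOPT_CLONE_AS_GUID)
```

In practice the bit is set through the Client Configurator's pop-up menu (or a script that updates the DMS_ITEMS control table), and it is honored only for ALPHA(36) items on SQL Server databases.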


  • The lexical scanner in the service and the batch console did not handle data source names that contained dashes.
  • The service, when not run as the built-in SYSTEM account, could not access the file DATABridge_Messages.dat. This caused the console to fail to display the text associated with error codes.
  • The service failed to create a log file entry when a client that it launched terminated abnormally.
  • The service now logs the launching of BCNOTIFY initiated scripts in the Windows Application Event Log.

Additional Information

Documentation and additional technical resources are available from

Legacy KB ID

This document was originally published as Attachmate Technical Note 2784.