Environment
DATABridge 6.2 Service Pack 1
Situation
DATABridge 6.2 Service Pack 1 (SP1) is available to maintained users who already have DATABridge 6.2 installed. This technical note provides a list of important changes and fixes included in DATABridge 6.2 SP1.
Other DATABridge Releases
For information about new features and release notes for
- DATABridge 6.2 SP2, released May 2014, see KB 7021919.
- DATABridge 6.2, released January 2013, see KB 7021915.
Resolution
Obtaining the Product
Maintained customers are eligible to download the latest product releases from the Attachmate Download Library web site: https://download.attachmate.com/Upgrades/.
You will be prompted to log in and accept the Software License Agreement before you can select and download a file. For more information on using the Download Library web site, see KB 7021965.
Version Information
DATABridge components that have been updated in this Service Pack are listed below with their current version number. (Components that aren't listed have not been updated since version 6.1.)
Host

| Component | Service Pack 1 |
| DBEngine version | 62.003.0033 |
| DBServer version | 62.000.0001 |
| DBSupport version | 62.001.0005 |
| DBGenFormat version | 62.001.0002 |
| DBPlus version | 62.003.0057 |
| UserDataFile version | 62.001.0002 |
| SUMLOG Reader version | 62.002.0002 |
Enterprise Server

| Component | Service Pack 1 |
| DBEnterprise version | 62.xxxx.0109 * |
| DBDirector version | 62.xxxx.0006 * |
| LINCLog version | 62.0000.0001 |
* Where xxxx is the final build number of the Service Pack.
Client

| Component | Service Pack 1 |
| dbutility version | 6.2.3.030 |
| DBClient version | 6.2.3.030 |
| DBClntCfgServer version | 6.2.3.030 |
| dbscriptfixup version | 6.2.3.030 |
| DBClntControl version | 6.2.3.030 |
| dbctrlconfigure version | 6.2.3.030 |
| dbfixup version | 6.2.3.030 |
| migrate version | 6.2.3.030 |
| dbpwenc version | 6.2.3.030 |
File Structure
The Attachmate DATABridge Service Pack uses the same directory structure as the release DVD to help you locate the appropriate update file for each product.
Please note the following:
- This version uses a single patch that updates the Client, the Console, and Enterprise Server.
- You cannot update a 32-bit install with a 64-bit patch or vice versa. Uninstall the 32-bit 6.2 release software and install the 64-bit 6.2 release software before updating it with the 64-bit patch.
Installation Instructions
Before you install the service pack, quit all DATABridge applications including the Console, and then terminate the service/daemon. After the installation is complete, restart the service/daemon manually.
IMPORTANT: To avoid potential problems, we strongly recommend that you upgrade the Host and Enterprise Server software simultaneously.
We also recommend that you update the Client Console if you're updating the Client software, particularly if you use the configuration aspect of the Client Console (that is, Client Configurator). This will ensure that your data is interpreted correctly, as the lengths of some fields have changed.
DATABridge Host
- On the MCP Server, upload DB62ServicePack.con using binary or image file transfer.
ftp my_aseries_host
<login>
bin
put DB62ServicePack.con DB62SVCPACK
- Log on to the DATABridge usercode on the host and go to CANDE.
- To unwrap the encapsulated files in the DB62SVCPACK file, use the following command:
WFL UNWRAP *= AS = OUTOF DB62SVCPACK TO DISK (RESTRICTED=FALSE)
DATABridge Client, Client Console, and Enterprise Server
- On Windows, open the Windows32 or Windows64 folder of the Service Pack and double-click the file databridge.D62xxxx.Wnn.exe (where xxxx is the build number and nn is 32 or 64). All installed components, such as the Client, the Console, and Enterprise Server, will be updated.
- On UNIX, upload the appropriate tar files for the Client and Console from the Service Pack to the directories where these components are installed. (Optimally, the Client and Console are installed in separate directories to facilitate maintenance.) If you use Windows to extract the tar file from the zip file, you must transfer the tar file to UNIX using binary FTP, as in the example session below.
Then, use the following command:
tar -xvf <filename>
where <filename> is the full name of the tar file. This command replaces the Client (or Console) files in the DATABridge install directory with updated files from the tar file.
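For example, here is a minimal sketch of the full sequence, assuming a hypothetical tar file name (databridge-client.tar) and install directory (/opt/dbridge):

cd /opt/dbridge                  # the DATABridge install directory (hypothetical)
ftp mywindowspc                  # the Windows host where the tar file was extracted (hypothetical)
bin                              # binary mode prevents corruption of the tar file
get databridge-client.tar
quit
tar -xvf databridge-client.tar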
Note: To avoid accidentally deleting the DATABridge applications, we recommend that you always keep the install directory and the working directory separate.
Supported Platforms
For information about supported platforms, including hardware and software requirements, see KB 7021917.
Issues Resolved by Service Pack 1
DBEngine
- Using DMAuditLib with audit blocks larger than 9000 words caused DBTwin to fault with SEG ARRAY ERROR @ (81610100).
- DBTwin opened the remote audit file on every read, causing poor performance.
- The DBInitialValues entry point returned incorrect values for COMPACT data sets and the global data set.
- DBEngine was not always honoring the STOP BEFORE condition.
- In fixup mode, if there were no quiet points between the end of the extract and when an abort was detected, DBEngine was returning the error:
[0097] Seg (0) must be from 1 to 4294967295
- Visible RSNs caused DBTwin to fault with a SEG ARRAY ERROR or DIMENSION SIZE ERROR ON RESIZE or the DMSII error READONLY ITEMS HAVE CHANGED.
- Visible RSNs with a tailored DBSupport and $ PRIMARYFILTER caused records to be not found in the DBTwin database.
- Visible RSNs with a tailored DBSupport on the secondary side using the non-PRIMARYFILTER DBEngine caused DBTwin to fault with DIMENSION SIZE ERROR ON RESIZE.
- If an audit prefix was specified, such as AUDIT "(DB)SAVE/MYDB", DATABridge was using that prefix when trying to access the active audit file such as during a clone.
- If the AUDIT NO FILE option was set, DBEnterprise was returning error 141.
- The block split flag was sometimes set when it shouldn't be, resulting in PRINTAUDIT going into an infinite loop and DATABridge errors such as:
[0097] AudRecSize (0) must be from 3 to 65535
- If the final commit of a tracking run was after an End Transaction at the end of an audit block and more than 2 seconds elapsed before the next block was written, DBEngine sent the client an obsolete timestamp in the StateInfo. This resulted in the following error on the next tracking run:
[0033] Audit location timestamp = <timestamp> for ABSN <absn> in AUDITnnnn is wrong. Check for DMS rollback.
DBSupport
- Primary key lists in tailored libraries ended with the data set name as a key item.
DBGenFormat
- The primary key list for a tailored DBSupport library contained the elementary items of a group key and the name of the data set.
SUMLOG Reader
- Sumlog files containing a truncated record always returned the error:
[0045] Error reading file: SUMLOG
The Reader will now report such errors to the TASKFILE but allow the record to be sent to the caller.
DBEnterprise
- If no source options were declared between the local source name and the filter specification, DBEnterprise reported a syntax error when importing a configuration text file (for example, Samples\Config\BankDB.cfg).
- Since DMSII currently changes the RSN of an aborted delete, DBEnterprise now sends the original delete and then sends the reversal as a create with the new RSN.
- If the COPY command specified a relative path for the destination Windows filename (for example, host\daily.rpt), DBEnterprise would return an error, such as:
[1101] Open error on host\daily.rpt: No such process
- If the WHERE clause of the SELECT statement in GenFormat contained an arithmetic expression like DATA-NAME MOD 4 = 0, DBEnterprise would return the error:
[1110] Filter routine failed.
- If audit files were cached, the starting audit file still had to be present on the host system to establish the correct DESCRIPTION file. When the client requests the starting audit file to be opened, DBEnterprise now opens the cached audit file (if available) instead of the host audit file.
- The client would occasionally stall reading cached audit files if they were cached individually.
- When reading the end of some inactive cached audit files, DBEnterprise would return DBM0009 instead of reading the next audit file.
- If caching was enabled and a database reorg occurred, DBEnterprise failed with a "Permission denied" error when trying to create the cachelist file for the new update level.
- When processing from a cached audit file at one update level to one at the next update level, DBEnterprise was always returning the error saying the database was reorganized. Cacher caused this problem by storing the wrong format levels in the updates of the new cached audit file. These files will need to be re-cached.
- The client was getting data errors such as "control characters in data" when it switched to reading from the host after it had read the cache for a local filtered source.
- If a dataset had a remap, Cacher was caching the remap image instead of the original dataset image.
- If DBEnterprise switched to using the base source cached files from the filtered source, it would remove the base cache files instead of the filtered source cache files when it had finished reading them. It will now remove cached files for the invoked (filtered) source instead of the cached files it is actually reading from.
- The error message in the log for an invalid update type contained incorrect values.
- The message logged when a cache file is opened now shows the source name to more easily distinguish between files from the base source and files from a filtered source.
- The Cacher log was always reporting the error message:
DBEnterprise: [1118] Source <sourcename> does not have a cache
- In some situations, Cacher was creating cached files that contained garbage at the end of the file. This caused errors such as "invalid audit location." Cacher will also now open and close adjacent audit files less often.
- The source configuration settings were reverting to default values when the database update level changed and the configuration came from importing a text file.
- DBEnterprise was faulting if a configuration text file was imported after adding another source.
- In certain situations when reading a cache file with no commits DBEnterprise would return the error:
[0008] Missing DESCRIPTION file for Audit file update level 0.
- When you right-click an imported source that has caching enabled, the "Start caching updates" option did not appear in the context menu.
- When caching audit files with transaction aborts, DBEnterprise was writing the first update after the abort twice, which caused clients to encounter either missing records for deletes, missing creates for existing records, or the error:
[1114] Audit file <filename> corrupted: modify AI expected
Additionally, performance has been improved when DBEnterprise encounters aborts while writing a cache file.
- Cache files did not contain natural quiet points because Begin Transaction records were not being written. They also did not contain pseudo quiet points and therefore did not support the COMMIT DURING LONG TRANSACTIONS option.
"Doc Grow" records will no longer be written to the cache files.
- If the client closed the connection while DBEnterprise was waiting for more audit, it would continue waiting rather than terminating.
- Cacher was not properly discarding updates for aborted transaction groups that spanned multiple audit files.
- If the audit file number in LastAFN.txt was for a missing or truncated cache file, Cacher was returning
[1129] Errors in configuration file <source>.cachelist
Now it will create a new file to replace it.
- On the first tracking run after a clone with READ ACTIVE AUDIT = FALSE, DBEnterprise was still accessing the active audit file.
- If a cached audit file was missing from a local source but the base source had the audit file in its cache, DBEnterprise was still trying to open the audit on the mainframe to determine its update level.
- DBEnterprise was using the base source cache file to determine the update level for a local (filtered) cached source. If the base cached file was missing, DBEnterprise would use the mainframe audit file. It will now use the local cached file to determine the update level.
- If a cache file was less than 713 bytes, such as when it contained only the header, DBEnterprise was treating it as an invalid cache file and not processing it. In particular, Cacher was not updating the file's header when it received a commit on a subsequent file, which left the file marked as "active" and made it the de facto end of the cached audit.
- If the READ ACTIVE AUDIT option was set to FALSE, clones would immediately return a "no more audit" message and not send any records.
- If a base source had the option "When tracking, read cached files" set to Only, DBEnterprise was not sending the fixup records immediately after the clone.
- Files larger than 4 GB were truncated when copied from the mainframe.
- The local source filter was not applied when reading from the base source cache.
- A COMPACT data record was corrupted if declared as STORED OPTIONALLY WITH or DEPENDING ON.
- The values specified in an imported configuration file for retry and maxwait for caching were initially ignored.
The values for retry, maxwait, and modifies will now be taken from the .source file rather than the .cachelist file.
- Tracing will be enabled by default in the debug version to facilitate debugging access to the registry.
- If the log file could not be opened due to security restrictions, the program was faulting.
- Errors from DBServer were not being logged.
- The block split flag was sometimes set when it should not be, causing PRINTAUDIT to go into an infinite loop and errors such as DBM0011 "invalid audit location".
- After exhausting the cache and switching to the host audit, if Cacher replenished the cache, DBEnterprise would switch back to the cache and encounter error conditions if it was in the middle of an aborted transaction.
If the option "When tracking, read cached files" is set to If available, DBEnterprise will switch to reading the host audit only if Cacher is not running and the database is active.
- If an audit file was empty except for block 0, Cacher was returning:
[0009] Unexpected EOF in audit file
- Cacher returned the error "Cacher already running" if it had to generate a new filter due to a reorg.
- If a configuration text file is imported without a cache directory option but the source already exists with a cache directory specified in the GUI, the .cachelist file is truncated to contain only the REMOTE SOURCE declaration and no local SOURCEs. When Cacher is run it returns a permissions error or the error:
[1129] Errors in configuration file <basesource>.cachelist
The .cachelist file will no longer be used.
- If the LastAfn.txt file contained an invalid or missing AFN, Cacher was terminating without providing an error message. Now it will correct the file.
- If DBServer initially linked to an obsolete DBSupport, DBEnterprise did not retrieve the new version when it linked to the correct DBSupport, which caused DBEnterprise to regenerate the filters when using an imported configuration text file.
- If a client was reading with the option "When tracking, read cached files" set to Only, and the predicted AFN was incorrect due to a switch audit, DBEnterprise was returning DBM0009.
- Following a reorg, DBEnterprise was accessing the host audit file even with the option "When tracking, read cached files" set to Only.
Client
- The SQL time in the data extraction statistics was occasionally corrupted and displayed as an implausibly large number.
- The dbfixup program crashed when run against tables that had an older version number but already included all of the new columns. This could happen when an upgrade failed: reloading the control tables effectively reverted them to the older version, but if the "alter table" commands had all completed in the aborted upgrade, running dbfixup again caused a crash.
- A new configuration parameter, max_discards, has been implemented; it can include two numeric values. The first value is the maximum number of discards after which the run is aborted. The second value is the maximum number of discards for a table, after which additional discards are not written to the discard file, as this adds too much overhead when the file gets big.
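For example, a hypothetical entry in the Client configuration file (both values are illustrative):

max_discards = 10000,1000

Here the run would abort after 10,000 total discards, and additional discards for a table would stop being written to its discard file after 1,000.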
- Open and write errors for the discard file now display only once to avoid filling the log file.
- The client was not stopping when it got an I/O error when writing to a discard file.
- Data errors that caused a discard were generating two messages, both of which were being written to the log file. Now, the first message displays onscreen and the second message, which includes the keys, is written to the log file.
- The bulk loader thread and index thread statistics now display at the end of the data extraction phase.
- To reduce screen and log file clutter, log messages such as "Creating temporary file..." and "Starting bcp for..." are only logged if the /v option is enabled, except for the first temporary file for a primary table, which indicates the start of data extraction for the corresponding data set.
- A message will now appear when bcp operations are initiated because the "max temp storage" threshold was reached.
- The Oracle Client now retrieves the database name when the signon to the database works with a NULL database name. The database name is required to generate a lock filename.
- For the SQL Server Client, ODBC format dates were replaced with ISO format dates, which work in non-U.S. environments. The conversion from string to date implicitly assumed that these were datetime values, which caused good dates (datetime2 or date data types) to be rejected. Host variable data already worked this way.
- Replaced all CPU-type-dependent floating-point statistics counters with the stats_counter data type. This eliminates counters overflowing and the inaccurate statistics that result when small values are added to large floating-point numbers and the result is truncated.
- Defined a new trace bit TR_DOC_RECS (8192) that traces only DOC records to help when diagnosing problems with COMMITS.
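For example, assuming the trace mask is passed with the command-line client's -t option (the data source name BANKDB is hypothetical):

dbutility -t 8192 process BANKDB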
- Incremental statistics following the initial clone are cumulative but were being labeled incremental.
- In the unlikely situation where the service crashed, the spawned client runs did not always terminate. Now, if this situation occurs, one of the following happens:
- The run stops at the next quiet point.
- If a clone is in progress, the run continues up to the fixup phase.
- If the run is stuck waiting for updates long enough that closing the connection cannot unintentionally abort a clone, the watchdog timer closes the connection to the server.
Similarly, DBClntCfgServer shuts down if it receives a reset for the IPC connection.
- If a column in the index for a table was a date whose corresponding DMSII data item was a NUMBER that was NULL, the Client got into a tight loop that caused a stack overflow.
- The client's handling of ALPHA items that are treated as numeric was extended to handle ALPHA items generated by COBOL, where the sign byte is 0xC0 through 0xC9 ("{", "A" ... "I") or 0xD0 through 0xD9 ("}", "J" ... "R").
To enable this feature, use a layout script to set dms_signed to 1 in DMS_ITEMS. You must also set di_options to 0x4200 (the 0x200 bit indicates that the ALPHA item should be stored as a number, while the 0x4000 bit indicates that this is a COBOL display item). Finally, if a scale is needed, set dms_scale appropriately in the same user script, as in the sketch below.
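For example, a user layout script might contain an update along these lines. The data set and item names are made up, and the column names (dataset_name, dms_item_name) are assumed here; verify them against the control tables in your release:

update DMS_ITEMS
   set dms_signed = 1,
       di_options = 16896,  -- decimal for 0x4200
       dms_scale = 2        -- hypothetical: two implied decimal places
 where dataset_name = 'SALES'
   and dms_item_name = 'INV-AMOUNT';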
- FileXtract data sources failed to track when there was no index for the tables. Also, a bad index caused the clear duplicate records script to be run when the error should have been ignored.
- When encountering a SET with KEYCHANGEOK that had too many keys, the Client was not clearing the misc_flags bit indicating that key changes were allowed. This caused the update to fail with the following error:
Engine did not send BI/AI pair for update to DataSet <name> which allows key changes
- The client failed to detect that the SET selected by the Engine had too many keys to be usable as an index when there were items with OCCURS clauses that were mapped to secondary tables.
- The configuration file parameter "set_blanks_to_null" was implemented. This makes the Client store a zero length character string as NULL, rather than a single space.
- Implemented a new DMS_ITEMS di_options bit (0x2000000) that allows the Client to properly handle unsigned NUMBER(n) and ALPHA items that use high values to force left justification.
- The configuration file parameter "enable_ff_padding" was implemented. This makes the Client Configurator allow the user to customize DMS items that use high value padding by adding a menu item to the context menus. If this option is not set to true, these menu items are not created.
- The host_info field of StateInfo records was getting corrupted when the parameter use_latest_si was set to true.
- The value of "min_sql_ops" in the STMT statistics was wrong.
- The unload command did not fix up the state info when the client was improperly terminated. This caused a problem when updating the software without first making sure that the global state info had been properly propagated. The command now fixes up the state info before writing the unload file.
- The Client was not propagating the global state info or displaying the ending state info following most host errors.
- The redefine command was setting added columns to a simulated NULL value instead of leaving them as NULL when their INITIALVALUE was NULL or the Engine passed a bad value to the client.
- The redefine command and the Client Configurator were causing an access violation when they got bad data for the INITIALVALUE from the Engine. The error reporting code was trying to print the value of the key, which is meaningless for an INITIALVALUE record.
- The concatenation code was extended to work with merged items regardless of which item comes first. Until now only the case where the merged item was first was being handled correctly.
Service
- Console users can now be assigned customized roles. Custom roles are configured as "custom = 0xhhhhhhhh, 0xhhhhhhhh" (versus "administrator", "operator" and "user" for non-customized roles). The two words of bits represent the various menu items in the Client Console. For details, see "Managing User Permissions" in the Managing Operations section of the Client Console Help.
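For example, a customized role assignment might look like the following, where both mask values are made up for illustration; consult the Client Console Help for the meaning of each bit:

custom = 0x0001FFFF, 0x00000003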
- The maximum number of userids supported by the service has been increased from 10 to 30. To maintain backward compatibility, the binary configuration file now stores the user ids in a new section; the format of the text configuration file remains unchanged.
Newer clients can read older binary files, but if you need to revert to an older service/daemon, you must export the service configuration file with the new software and then import it using the older software, unless the file was never updated.
- Console passwords can now be up to 30 characters in length.
- If you tried to change a password that had a zero length, the service would crash in certain situations, all of which were caused by a failure to detect that the old password was not an allocated string.
- The service/daemon would sometimes crash when the console password was changed.
- All operator actions that affect operations are now logged to the service's log file, which allows operator actions to be audited.
Client Console
- The Manage User command now allows user permissions to be customized. For details, see "Managing User Permissions" in the Managing Operations section of the Client Console Help.
- Command-line switches have been implemented for the Process, Redefine and Create User Scripts commands in the Client Console.
- Two commands, similar to the command-line client STATUS and PSTAT commands, have been added. These commands cause status and statistics information to be displayed in the Console view.
- Password handling is fixed so that passwords between 17 and 30 characters in length work correctly; previously, these passwords were encoded incorrectly. If you currently use a password longer than 16 characters, it will not work; you must delete the user id to which it corresponds and recreate it.
- To prevent cached UI definitions from masking the new commands that were added to the Client Console and Client Configurator, the working directory name was changed from databridge/6.2 in the 6.2 release to databridge/6.2.1. This directory resides in the user’s home directory.
- Executing the "Export Config" or the "Create User Scripts" menu items in the Client Configurator caused the menu items to be grayed out.
- Added an edit box for the "Maximum discards" parameter to the end of the "DMSII Data Error Handling" page in the "Processing" section.
- Added the parameter "Set blank column to NULL" to the Customizing pane. This parameter corresponds to "set_blanks_to_null" in the Client Configuration file.
- Extended the concatenation code to work with merged items regardless of which item comes first. The Client Configurator was previously only allowing the case where the merged item came first.
- Implemented a new Client Configurator pop-up menu item that allows the user to mark an unsigned NUMBER(n) or ALPHA item as "Clone as High Value Padded," as described in the previous Client section.
- Added the parameter "Enable High Value padding" to the "DMSII Data Error Handling" section of the Client Configuration. This parameter corresponds to "enable_ff_padding" in the Client Configuration file.
Additional Information
Product documentation for DATABridge can be found at https://support.microfocus.com/manuals/databridge.html.