Environment
DATABridge 6.2
Situation
DATABridge 6.2 Service Pack 2 (SP2) is available to maintained users who already have DATABridge 6.2 installed. This technical note provides a list of important changes and fixes included in DATABridge 6.2 SP2.
Note: Version 6.3 became available in May 2015. For more information, see KB 7021921.
Resolution
Obtaining the Product
Maintained customers are eligible to download the latest product releases from the Attachmate Download Library web site: https://download.attachmate.com/Upgrades/.
You will be prompted to log in and accept the Software License Agreement before you can select and download a file. For more information about using the Download Library web site, see KB 7021965.
Version Information
DATABridge components that have been updated in this Service Pack are listed below with their current version number. (Components that aren't listed have not been updated since version 6.2.)
Host
Component             | Service Pack 2
DBEngine version      | 62.8.0049
DBServer version      | 62.6.0002
DBSupport version     | 62.1.0005
DBGenFormat version   | 62.1.0002
DBPlus version        | 62.8.0059
DBSPAN version        | 62.0.0000
UserDataFile version  | 62.1.0002
SUMLOG Reader version | 62.7.0003
Enterprise Server
Component            | Service Pack 2
DBEnterprise version | 6.2.8.121
DBDirector version   | 6.2.8.7
LINCLog version      | 6.2.8.2
Client
Component               | Service Pack 2
dbutility version       | 6.2.8.070
DBClient version        | 6.2.8.070
DBClntCfgServer version | 6.2.8.070
dbscriptfixup version   | 6.2.8.070
DBClntControl version   | 6.2.8.070
dbctrlconfigure version | 6.2.8.070
dbfixup version         | 6.2.8.070
migrate version         | 6.2.8.070
dbpwenc version         | 6.2.8.070
File Structure
The Attachmate DATABridge Service Pack uses the same directory structure as the installation image of the original release to help you locate the appropriate update file for each product.
Please note the following:
- This version uses a single patch that updates the Client, the Console, and DBEnterprise Server.
- You cannot update a 32-bit installation with a 64-bit patch, or vice versa. Uninstall the 32-bit 6.2 release software and install the 64-bit 6.2 release software before updating it with the 64-bit patch.
Installation Instructions
Before you install the service pack, quit all DATABridge applications including the Console, and then terminate the service/daemon. After the installation is complete, restart the service/daemon manually.
IMPORTANT: To avoid potential problems, we strongly recommend that you upgrade the Host and Enterprise Server software simultaneously.
We also recommend that you update the Client Console whenever you update the Client software, particularly if you use the configuration aspect of the Client Console (that is, the Client Configurator). This ensures that your data is interpreted correctly, as the lengths of some fields have changed.
DATABridge Host
- On the MCP Server, upload DB62SERVICEPACK.con using binary or image file transfer. For example:
ftp my_aseries_host
<login>
bin
put DB62SERVICEPACK.con DB62SERVICEPACK
- Log on to the DATABridge usercode on the host and go to CANDE.
- To unwrap encapsulated files in the DB62SERVICEPACK file, use the following command:
WFL UNWRAP *= AS = OUTOF DB62SERVICEPACK TO DISK (RESTRICTED=FALSE)
DATABridge Client, Client Console, and Enterprise Server
- On Windows, open the Windows32 or Windows64 folder of the Service Pack and double-click the file databridge.621802-servicepack-Wxx.exe (where xx is 32 or 64). All installed components, such as the Client, the Console, and Enterprise Server, will be updated.
- On UNIX, upload the appropriate tar files for the Client and Console from the Service Pack to the directories where these components are installed. (Optimally, the Client and Console are installed in separate directories to facilitate maintenance.) If you use Windows to extract the tar file from the zip file, you must transfer the tar file to UNIX using binary FTP.
Then, use the following command:
tar -xvf <filename>
where <filename> is the full name of the tar file. This command replaces the files in the DATABridge install directory with updated files from the tar file.
Note: To avoid accidentally deleting the DATABridge applications, we recommend that you always keep the install directory and the working directory separate.
Supported Platforms
For information about supported platforms, including hardware and software requirements, see KB 7021920.
New Features
DATABridge 6.2 Service Pack 2 contains the following features:
All-Product Features
- DMSII level 57.1 (MCP 16.0) support, except for XL
- Oracle Database 12c support
- Microsoft SQL Server 2014 support
DATABridge Client
- Enhancements to the watchdog timer that optionally implement timeouts, stopping the client after stuck queries or a lack of data from the server.
- Automated handling of non-US Oracle databases and Oracle databases whose character set is UTF8.
- 64-bit counters are now used for statistics. This eliminates any possibility of overflow during full clones.
Issues Resolved by Service Pack 2
DATABridge 6.2 SP2 includes the previous SP1 fixes and changes, as described in KB 7021916, and resolves the following additional issues for these DATABridge components:
DBEngine
- Because AUDMIS (type 90) audit records don't contain the stack number like other audit records, DBEngine faults with DIMENSION SIZE ERROR ON RESIZE.
- When DEBUG is enabled, DBEngine logs the parameter file settings.
- In some situations, DBEngine uses the primary audit file even though the Accessory specified using only the secondary audit.
- If a STOP AFTER <task> location occurs in the middle of an aborted transaction group, the client gets stuck at the beginning of that transaction group.
- If the database contains records longer than 32K words, the Accessory faults with MEMORY SIZE GREATER THAN 65535 WORDS.
- If any of the following errors occur on the active audit file and the last commit is at least one audit record prior to the error location, DBEngine retries the read continuously:
DBM_LOC_MISMATCH
DBM_BAD_RANGE
DBM_CHECKSUM
DBM_AUD_CORRUPT
DBM_BAD_AUDBLKSZ
DBM_WRONG_ABSN
DBM_ABSN_SEQ
- If an audit record is longer than 65,535 words, as allowed by DMSII 56.1, DBEngine gets the error "AudRecSize (<auditrecordsize>) must be from 3 to 65535" and then continuously retries the read after each RETRYSECONDS delay. This prevents the caller from advancing through the audit trail.
- When multiple Extract Workers cloning an RDB secondary database finish at the same time, I/O faults occur while reading the DMSII Control file.
- In certain situations, DBTwin was not switching to an available alternate host when the current host failed.
- Since DMSII (currently) changes the RSN of an aborted delete, DBEngine now sends the original delete and then sends the reversal as a create with the new RSN.
- Using DMAuditLib with audit blocks larger than 9000 words causes DBTwin to fault with SEG ARRAY ERROR @ (81610100).
- DBTwin opens the remote audit file on every read, causing poor performance.
- The DBInitialValues entry point returns incorrect values for COMPACT data sets and the global data set.
- DBEngine doesn't always honor the STOP BEFORE condition.
- In fixup mode, if there are no quiet points between the end of the extract and when an abort is detected, DBEngine returns the error:
[0097] Seg (0) must be from 1 to 4294967295
- Visible RSNs caused DBTwin to fault with a SEG ARRAY ERROR, a DIMENSION SIZE ERROR ON RESIZE, or the DMSII error READONLY ITEMS HAVE CHANGED.
- Visible RSNs with a tailored DBSupport and $ PRIMARYFILTER caused records not to be found in the DBTwin database.
- Visible RSNs with a tailored DBSupport on the secondary side using the non-PRIMARYFILTER DBEngine caused DBTwin to fault with DIMENSION SIZE ERROR ON RESIZE.
- If an audit prefix is specified, such as AUDIT "(DB)SAVE/MYDB", DATABridge uses that prefix when trying to access the active audit file, such as during a clone.
- If the AUDIT NO FILE option is set, DBEnterprise returns error 141.
- The block split flag in the DMSII audit block is sometimes set when it shouldn't be, causing PRINTAUDIT to go into an infinite loop and DATABridge errors such as:
[0097] AudRecSize (0) must be from 3 to 65535
- If the final commit of a tracking run is after an End Transaction at the end of an audit block and more than 2 seconds elapse before the next block is written, DBEngine sends the client an obsolete timestamp in the StateInfo. This results in the following error on the next tracking run:
[0033] Audit location timestamp = <timestamp> for ABSN <absn> in AUDITnnnn is wrong. Check for DMS rollback.
DBServer
- If DEBUG is enabled, DBServer logs the parameter file settings to the DBEngine trace log. Also, the DBS_GetInfo RPC returns incorrect values for source string-valued options that are not specified in the parameter file.
DBSupport
- Primary key lists in tailored libraries end with the data set name as a key item.
DBGenFormat
- The primary key list for a tailored DBSupport library contains the elementary items of a group key and the name of the data set.
BCNotify
- Displays that referred to DBNotify now refer to BCNotify.
- An error in WFL BCNotify was corrected: the default IPADDR needed QUOTE around it.
DBPlus
- An integer overflow was occurring when using extremely large sectioned audit files.
DBInfo
- When using the "first quiet point" command, the reported audit location will now contain the hex value of the timestamp to facilitate entering that location in the client tables.
DBEnterprise
- Processing an audit block that contains a time change to an earlier time, such as a change from Daylight Saving time to Standard Time, causes DBEnterprise to return the error:
[0011] Invalid audit location ...
LINCLog.DLL
- LINCLOG records were corrupted during tracking.
Client
- The parameter max_srv_idle_time was added to the [params] section of the Client Configuration file. This parameter represents a timeout value (in minutes) that allows the watchdog timer to time out a server connection after several warnings of inactivity. It also provides an alternative to relying on TCP keep-alive to detect a dead connection, such as when the MCP is HALT LOADED.
The value range for this parameter is 15 to 600 minutes. The default is 0, which indicates that the connection should never time out. When you set it to a nonzero value and the timeout expires, the Client stops with an exit code of 2059. If you use the service, the Client restarts after a brief delay. (See the configuration sketch at the end of this list.)
- The parameter sql_exec_timeout, which applies to update processing, was added to the [params] section of the Client Configuration file. The first value determines when the watchdog timer should issue a WARNING about a query taking too long to complete; it allows the user to override the default setting of 180 seconds.
The optional second value, which defaults to 0 when omitted, sets a secondary timeout for long-running queries. A value of 0 disables this timeout. The second value must be greater than the first, unless it is 0.
- The watchdog timer and the status command now report the name of the table involved in the stuck SQL operation.
- The following exit status codes were added to the Client:
2053 - An I/O error has occurred while writing a discard file.
2054 - The client is stopping because the total discards threshold has been reached. This value is specified as the first value of the max_discards parameter.
2057 - The value specified for the bcp_decimal_char parameter is incorrect. This error is limited to dbutility for Oracle.
2058 - A SQL update took longer than the maximum allowable time specified by the second value of the sql_exec_timeout parameter. The query was cancelled and the Client was stopped after rolling back the last transaction group.
2059 - No data was received from the DATABridge Server for the amount of time specified by the parameter max_srv_idle_time. The connection to the server was reset and the client was stopped after rolling back the last transaction group.
3060 - The effective CHECKPOINT FREQUENCY parameters are all 0, possibly as a result of the parameters specified in the Client Configuration file. The Client will not continue processing under these conditions, as the Engine is effectively being told never to commit. The Engine will still commit at the start of an audit file, but this leads to huge transactions that could run the database out of log space and would cause extremely poor performance. Correct the Client Configuration file or the Engine Control File, depending on where these values are being set to 0.
- When a virtual data set has more than 4 real data sets in its base list (DERIVED FROM in GenFormat), the client crashes when executing a define/redefine command. This does not apply to MISER databases where the linking is done using user scripts.
- When the index creation for a table fails, the SQL Server client occasionally gets a SQL Server native error 539 which indicates the schema had changed after the target table was created.
- The length calculation for the host variable data is wrong when storing a DMSII ALPHA item as a number on SQL Server when the data type is tinyint, smallint, int, or bigint. This only happens if the scale is 0.
- The configuration file parameter set_lincday0_to_null was implemented to make the client treat a LINC date of 0 as NULL, instead of January 1 of the base year. The default value of this parameter is false.
- When running a 64-bit Windows client on Windows Server 2003 or 64-bit Windows XP, the following error is returned:
Entry point GetTickCount64 was not found in KERNEL32.dll
- The dbutility "rem" command was implemented to allow users to inject comments into the log file from the command line (see the usage sketch at the end of this list). Double quotation marks around the comment are only required if the comment contains special characters (such as exclamation points) that would cause a syntax error. The client starts normally, echoes the command line to the log file, and exits as soon as it determines that this is a rem command.
- The SQL time in the data extraction statistics occasionally gets corrupted and displays as a huge number.
- The dbfixup program crashes if you try to run it with tables that have an older version but include all of the new columns. This can happen if an upgrade failed: the program effectively reverts to the older version by simply reloading the control tables. Even though the "alter table" commands work correctly in the aborted upgrade, dbfixup crashes if you try to run the commands a second time.
- A new configuration parameter, max_discards, has been implemented. This parameter can include two numeric values (see the configuration sketch at the end of this list). The first value is the maximum number of discards, after which the run is aborted; a value of 0 (the default) indicates never. The second value is the maximum number of discards for a table, after which additional discards are not written to the discard file, to avoid overhead as the file grows; a value of 0 (the default) indicates that the client should write all discards.
- The bulk loader thread and index thread statistics now correctly display at the end of the data extraction phase.
- To reduce screen and log file clutter, log messages such as "Creating temporary file..." and "Starting bcp for..." are only logged if the /v option is enabled, with the exception of the first temporary file for a primary table. This file indicates the start of data extraction for the corresponding data set.
- When bcp operations are initiated because the "max temp storage" threshold has been reached, an informational message now appears.
- ODBC format dates have been replaced with ISO format dates on the SQL Server Client to work correctly in non-U.S. environments. Host variable data already does this.
- All 32-bit integer and floating point statistics counters have been replaced by the data type stats_counter, which uses a 64-bit integer on 64-bit machines and double precision floating point numbers on 32-bit machines. This prevents counters from overflowing and causing inaccurate statistics when small values are added to large floating point numbers, truncating the result.
- A new trace bit, TR_DOC_RECS (8192), has been defined; it traces only DOC records, to help diagnose problems with COMMITs.
- In the unlikely situation where the service crashes, the spawned client runs do not always terminate. To remedy this situation, one of the following occurs:
- The run stops at the next quiet point.
- If a clone is in progress, the run continues up to the fixup phase.
- If the run is stuck waiting for updates long enough to avoid unintentionally aborting a clone, the watchdog timer closes the connection to the server.
Similarly, DBClntCfgServer shuts down if it receives a reset for the IPC connection.
- If a column in the index for a table is a date and the corresponding data item in DMSII is a NUMBER that is NULL, the Client gets into a loop that results in a stack overflow.
- The handling of ALPHA items that are treated as numeric has been extended to handle ALPHA items that are generated by COBOL, where the sign byte is 0xC0 through 0xC9 ("{", "A" ... "I") or 0xD0 through 0xD9 ("}", "J" ... "R").
To enable this feature, create a user script that sets dms_signed to 1 and di_options to 0x4200 in DMS_ITEMS. (The 0x200 bit indicates that the ALPHA item should be stored as a number, while the 0x4000 bit indicates that this is a COBOL display item.) If a scale is required, set dms_scale appropriately in the same user script. (A sketch of such a script appears at the end of this list.)
- FileXtract data sources failed to track when there was no index for the tables. Also, a bad index caused the script that clears duplicate records to run when the error should have been ignored.
- When encountering a SET with KEYCHANGEOK that had too many keys, the Client was not clearing the misc_flags bit (indicating that key changes were allowed). This caused the update to fail with the following error:
Engine did not send BI/AI pair for update to DataSet <name> which allows key changes
- The Client failed to detect that the SET selected by the Engine had too many keys to be usable as an index when there were items with OCCURS clauses that were mapped to secondary tables.
- The Configuration file parameter "set_blanks_to_null" was implemented. This makes the Client store a zero-length character string as NULL instead of a single space. The default value of this parameter is false.
- Implemented a new DMS_ITEMS di_options bit (0x2000000) which allows the Client to properly handle unsigned NUMBER(n) and ALPHA items that use high values to force left justification.
- The Configuration file parameter "enable_ff_padding" was implemented to allow the user to customize DMS items that use high value padding in the Client Configurator by adding a context menu item. If this option is set to False, the menu item is not created.
- The host_info field of StateInfo records is getting corrupted when the parameter use_latest_si is set to true.
- The value of "min_sql_ops" in the STMT statistics is wrong.
- The unload command does not fix up the state info when the Client was improperly terminated. This causes a problem when the software is updated before checking whether the global state info has propagated correctly. The command now fixes up the state info before writing the unload file.
- The Client does not propagate the global state info or display the ending state info that follows most host errors.
- When newly added columns have an INITIALVALUE of NULL or when the Engine passes a bad value to the Client, the redefine command sets those columns to a simulated NULL value.
- The redefine command and the Client Configurator cause an access violation when they get bad data for the INITIALVALUE from the Engine. The error reporting code tries to print the value of the key, which is meaningless for an INITIALVALUE record.
- The concatenation code has been extended to work with merged items regardless of which item comes first. Previously, merged items were handled correctly only when the merged item came first.
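The following sketch shows how the new [params] entries described in this list might appear in the Client Configuration file. It is illustrative only: the values are arbitrary, and the comma-separated two-value syntax and the semicolon comment style are assumptions; consult the Client Administrator's Guide for the exact syntax in your release.
[params]
max_srv_idle_time    = 60        ; minutes (15-600); 0 = never time out
sql_exec_timeout     = 300, 900  ; warn after 300 seconds; cancel the query after 900
max_discards         = 1000, 100 ; abort the run after 1000 discards; stop writing a table's discards after 100
set_lincday0_to_null = true      ; treat a LINC date of 0 as NULL
set_blanks_to_null   = true      ; store zero-length strings as NULL
enable_ff_padding    = true      ; offer the high-value padding item in the Client Configurator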
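A usage sketch for the rem command follows; the comment text is arbitrary:
dbutility rem "nightly run resumed after DMSII reorganization"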
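Finally, a sketch of the kind of user script described above for COBOL sign-byte handling. The data set and item names are hypothetical, and the script assumes the standard dataset_name and item_name columns of the DMS_ITEMS control table:
update DMS_ITEMS
   set dms_signed = 1,
       di_options = 16896,  -- 0x4200: store as a number (0x200) plus COBOL display item (0x4000)
       dms_scale  = 2       -- only if a scale is required
 where dataset_name = 'CUSTOMER' and item_name = 'ACCT-BALANCE';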
Service
- An exit code of 1167 (DBEnterprise: Network Read Error) causes the service to disable the data source. The service now retries the process command instead, as the run is likely to recover.
- When the service gets bad data from a connection initiated by a port-checker program, it hangs until the connection is killed. This problem was solved by making the service validate the first input it receives from the console and terminate the connection if it is not a valid message.
- The end-of-run script for redefine and generate commands initiated by the service gets the wrong values for the exit code and the run type.
Additional Information
Product documentation for DATABridge can be found at https://support.microfocus.com/manuals/databridge.html.
Additional technical resources are available from https://support.microfocus.com/product/?prod=DB.