IBM(R) DB2(R) Universal Database
Release Notes
Version 7.1
(C) Copyright International Business Machines Corporation 2000. All rights
reserved.
U.S. Government Users Restricted Rights -- Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corp.
------------------------------------------------------------------------
Table of Contents
Welcome to DB2 Universal Database Version 7.1!
Special Notes
* 1.1 DB2 Universal Database Business Intelligence Quick Tour
* 1.2 Downloading Installation Packages for All Supported DB2 Clients
* 1.3 Installing DB2 on Windows 2000
* 1.4 Notes on Greater Than 8-Character User IDs and Schema Names
* 1.5 National Language Versions of DB2 Version 7.1
o 1.5.1 Control Center and Documentation Filesets
* 1.6 Accessibility Features of DB2 UDB Version 7.1
o 1.6.1 Keyboard Input and Navigation
+ 1.6.1.1 Keyboard Input
+ 1.6.1.2 Keyboard Focus
o 1.6.2 Features for Accessible Display
+ 1.6.2.1 High-Contrast Mode
+ 1.6.2.2 Font Settings
+ 1.6.2.3 Non-dependence on Color
o 1.6.3 Alternative Alert Cues
o 1.6.4 Compatibility with Assistive Technologies
o 1.6.5 Accessible Documentation
* 1.7 DB2 Everywhere is Now DB2 Everyplace
* 1.8 Error Messages when Attempting to Launch Netscape
* 1.9 Mouse Required
* 1.10 Supported Web Browsers on the Windows 2000 Operating System
* 1.11 Opening External Web Links in Netscape Navigator From The
Information Center when Netscape is Already Open (UNIX Based Systems)
* 1.12 Problems Starting the Information Center
* 1.13 Configuration Requirement for Adobe Acrobat Reader on UNIX Based
Systems
* 1.14 Attempting to Bind from the DB2 Run-time Client Results in a
"Bind files not found" Error
* 1.15 Additional Required Solaris Patch Level
* 1.16 Supported CPUs on DB2 Version 7.1 for Solaris
* 1.17 Searching the DB2 Online Information on Solaris
* 1.18 Java Control Center on OS/2
* 1.19 Search Discovery
* 1.20 Problems When Adding Nodes to a Partitioned Database
* 1.21 Errors During Migration
* 1.22 Memory Windows for HP-UX 11
* 1.23 Spatial Extender is Unavailable
* 1.24 SQL Reference is Provided in One PDF File
* 1.25 Migration Issue Regarding Views Defined with Special Registers
* 1.26 User Action for dlfm client_conf Failure
* 1.27 In the Rare Event that the Copy Daemon Does Not Stop on dlfm stop
* 1.28 Chinese Locale Fix on Red Flag Linux
* 1.29 Uninstalling DB2 DFS Client Enabler
* 1.30 DB2 Install May Hang if a Removable Drive is Not Attached
* 1.31 Client Authentication on Windows NT
* 1.32 AutoLoader May Hang During a Fork
* 1.33 DATALINK Restore
* 1.34 Define User ID and Password in IBM Communications Server for
Windows NT (CS/NT)
o 1.34.1 Node Definition
* 1.35 Federated Systems Restrictions
* 1.36 DataJoiner Restriction
* 1.37 IPX/SPX Protocol Support on Windows 2000
* 1.38 Stopping DB2 Processes Before Upgrading a Previous Version of DB2
* 1.39 Run db2iupdt After Installing DB2 If Another DB2 Product is
Already Installed
* 1.40 JDK Level on OS/2
* 1.41 Setting up the Linux Environment to Run DB2
* 1.42 Hebrew Information Catalog Manager for Windows NT
* 1.43 Error While Creating an SQL Stored Procedure on the Server
* 1.44 Microsoft SNA Server and SNA Multisite Update (Two Phase Commit)
Support
* 1.45 DB2's SNA SPM Fails to Start After Booting Windows
* 1.46 Additional Locale Setting for DB2 for Linux in a Japanese and
Simplified Chinese Linux Environment
* 1.47 Locale Setting for the DB2 Administration Server
* 1.48 Java Method Signature in PARAMETER STYLE JAVA Procedures and
Functions
* 1.49 Shortcuts Not Working
* 1.50 Service Account Requirements for DB2 on Windows NT and Windows
2000
Administration Guide: Planning
* 2.1 Chapter 14. DB2 and High Availability on Sun Cluster 2.2
* 2.2 Appendix E. National Language Support
Administration Guide: Implementation
* 3.1 Chapter 4. Altering a Database
o 3.1.1 Adding a Container to an SMS Table Space on a Partition
o 3.1.2 Switching the State of a Table Space
* 3.2 Chapter 8. Recovering a Database
* 3.3 Appendix C. User Exit for Database Recovery
* 3.4 Appendix I. High Speed Inter-node Communications
o 3.4.1 Enabling DB2 to Run Using VI
Administration Guide: Performance
* 4.1 Chapter 5. System Catalog Statistics
o 4.1.1 Collecting and Using Distribution Statistics
* 4.2 Chapter 6. Understanding the SQL Compiler
o 4.2.1 Replicated Summary Tables
o 4.2.2 Data Access Concepts and Optimization
* 4.3 Chapter 13. Configuring DB2
o 4.3.1 Sort Heap Size (sortheap)
o 4.3.2 Sort Heap Threshold (sheapthres)
* 4.4 Appendix A. DB2 Registry and Environment Variables
o 4.4.1 Table of New and Changed Registry Variables
* 4.5 Appendix C. SQL Explain Tools
Administrative API Reference
* 5.1 db2ConvMonStream
* 5.2 db2XaGetInfo (new API)
o db2XaGetInfo - Get Information for Resource Manager
* 5.3 db2XaListIndTrans (new API that supersedes sqlxphqr)
o db2XaListIndTrans - List Indoubt Transactions
* 5.4 sqlaintp - Get Error Message
Application Building Guide
* 6.1 Chapter 1. Introduction
o 6.1.1 Supported Software
o 6.1.2 Sample Programs
* 6.2 Chapter 3. General Information for Building DB2 Applications
o 6.2.1 Build Files, Makefiles, and Error-checking Utilities
* 6.3 Chapter 4. Building Java Applets and Applications
o 6.3.1 Setting the Environment
* 6.4 Chapter 5. Building SQL Procedures.
o 6.4.1 Setting the SQL Procedures Environment
+ 6.4.1.1 Configuring the Compiler Environment
+ 6.4.1.2 Customizing Compiler Options
+ 6.4.1.3 Retaining Intermediate Files
+ 6.4.1.4 Backup and Restore
o 6.4.2 Creating SQL Procedures
o 6.4.3 Calling Stored Procedures
* 6.5 Chapter 7. Building HP-UX Applications.
o 6.5.1 HP-UX C
o 6.5.2 HP-UX C++
* 6.6 Chapter 10. Building PTX Applications
o 6.6.1 ptx/C++
* 6.7 Chapter 12. Building Solaris Applications
o 6.7.1 SPARCompiler C++
* 6.8 VisualAge C++ Version 4.0 on OS/2 and Windows
Application Development Guide
* 7.1 Writing OLE Automation Stored Procedures
* 7.2 Chapter 7. Stored Procedures
o 7.2.1 DECIMAL Type Not Supported in Linux Java Routines
* 7.3 Chapter 12. Working with Complex Objects: User-Defined Structured
Types
o 7.3.1 Inserting Structured Type Attributes Into Columns
* 7.4 Chapter 20. Programming in C and C++
o 7.4.1 C/C++ Types for Stored Procedures, Functions, and Methods
* 7.5 Appendix B. Sample Programs
* 7.6 Activating the IBM DB2 Universal Database Project and Tool Add-ins
for Microsoft Visual C++
CLI Guide and Reference
* 8.1 CLI Unicode Functions and SQL_C_WCHAR Support on AIX Only
* 8.2 Binding Database Utilities Using the Run-Time Client
* 8.3 Addition to the "Using Compound SQL" Section
* 8.4 Writing a Stored Procedure in CLI
* 8.5 Appendix K. Using the DB2 CLI/ODBC/JDBC Trace Facility
* 8.6 Using Static SQL in CLI Applications
* 8.7 Limitations of JDBC/ODBC/CLI Static Profiling
* 8.8 Parameter Correction for SQLBindFileToParam() CLI Function
* 8.9 ADT Transforms
Command Reference
* 9.1 db2batch - Benchmark Tool
* 9.2 db2cap (new command)
o db2cap - CLI/ODBC Static Package Binding Tool
* 9.3 db2gncol (new command)
o db2gncol - Update Generated Column Values
* 9.4 db2look - DB2 Statistics Extraction Tool
* 9.5 db2updv6 (new command)
* 9.6 New Command Line Processor Option (-x, Suppress printing of column
headings)
* 9.7 True Type Font Requirement for DB2 CLP
* 9.8 CALL
* 9.9 EXPORT
* 9.10 GET DATABASE CONFIGURATION
* 9.11 IMPORT
* 9.12 LOAD
Connectivity Supplement
* 10.1 Setting Up the Application Server in a VM Environment
* 10.2 CLI/ODBC/JDBC Configuration PATCH1 and PATCH2 Settings
Data Links Manager Quick Beginnings
* 11.1 Dlfm start Fails with Message: "Error in getting the afsfid for
prefix"
* 11.2 Setting Tivoli Storage Manager Class for Archive Files
* 11.3 Disk Space Requirements for DFS Client Enabler
* 11.4 Monitoring the Data Links File Manager Back-end Processes on AIX
* 11.5 Installing and Configuring DB2 Data Links Manager for AIX:
Additional Installation Considerations in DCE-DFS Environments
* 11.6 Failed "dlfm add_prefix" Command
* 11.7 Installing and Configuring DB2 Data Links Manager for AIX:
Installing DB2 Data Links Manager on AIX Using the db2setup Utility
* 11.8 Installing and Configuring DB2 Data Links Manager for AIX:
DCE-DFS Post-Installation Task
* 11.9 Installing and Configuring DB2 Data Links Manager for AIX:
Manually Installing DB2 Data Links Manager Using Smit
* 11.10 Installing and Configuring DB2 Data Links DFS Client Enabler
* 11.11 Choosing a Backup Method for DB2 Data Links Manager on AIX
* 11.12 Choosing a Backup Method for DB2 Data Links Manager on Windows
NT
* 11.13 Backing up a Journalized File System on AIX
Data Movement Utilities Guide and Reference
* 12.1 Pending States After a Load Operation
* 12.2 Load Restrictions and Limitations
Installation and Configuration Supplement
* 13.1 Binding Database Utilities Using the Run-Time Client
* 13.2 UNIX Client Access to DB2 Using ODBC
* 13.3 Switching NetQuestion for OS/2 to Use TCP/IP
Message Reference
* 14.1 SQL0270N (New Reason Code 40)
* 14.2 SQL0301N (New Explanation Text)
* 14.3 SQL0303N (New Text)
* 14.4 SQL0358N (New User Response 26)
* 14.5 SQL0408N (New Text)
* 14.6 SQL0423N (Revised Text)
* 14.7 SQL0670N (Revised Text)
* 14.8 SQL1704N (New Reason Codes)
* 14.9 SQL4942N (New Text)
* 14.10 SQL20117N (Changed Reason Code 1)
Replication Guide and Reference
* 15.1 Replication on Windows 2000
* 15.2 DATALINK Replication
* 15.3 LOB Restrictions
* 15.4 Replication and Non-IBM Servers
* 15.5 Update-anywhere Prerequisite
* 15.6 Planning for Replication
* 15.7 Setting Up Your Replication Environment
* 15.8 Problem Determination
* 15.9 Capture and Apply for AS/400
* 15.10 Table Structures
* 15.11 Capture and Apply Messages
* 15.12 Starting the Capture and Apply Programs from Within an
Application
SQL Reference
* 16.1 IDENTITY_VAL_LOCAL
* 16.2 OLAP Functions
* 16.3 SQL Procedures/Compound Statement
* 16.4 LCASE and UCASE (Unicode)
* 16.5 WEEK_ISO
* 16.6 Naming Conventions and Implicit Object Name Qualifications
* 16.7 Queries (select-statement/fetch-first-clause)
* 16.8 Libraries Used by the CREATE WRAPPER Statement on Linux
System Monitor Guide and Reference
* 17.1 db2ConvMonStream
Troubleshooting Guide
* 18.1 Starting DB2 on Windows 95 and Windows 98 When the User Is Not
Logged On
Control Center
* 19.1 Ability to Administer DB2 Server for VSE and VM Servers
* 19.2 Java 1.2 Support for the Control Center
* 19.3 "Invalid shortcut" Error when Using the Online Help on the
Windows Operating System
* 19.4 "File access denied" Error when Attempting to View a Completed
Job in the Journal on the Windows Operating System
* 19.5 Multisite Update Test Connect
* 19.6 Control Center for DB2 for OS/390
* 19.7 Required Fix for Control Center for OS/390
* 19.8 Change to the Create Spatial Layer Dialog
* 19.9 Troubleshooting Information for the DB2 Control Center
* 19.10 Control Center Troubleshooting on UNIX Based Systems
* 19.11 Possible Infopops Problem on OS/2
* 19.12 Launching More Than One Control Center Applet
* 19.13 Help for the jdk11_path Configuration Parameter
* 19.14 Solaris System Error (SQL10012N) when Using the Script Center or
the Journal
* 19.15 Help for the DPREPL.DFT File
* 19.16 Online Help for the Control Center Running as an Applet
* 19.17 Running the Control Center in Applet Mode (Windows 95)
Data Warehouse Center
* 20.1 Data Warehouse Center Publications
o 20.1.1 Data Warehouse Center Application Integration Guide
o 20.1.2 Data Warehouse Center Administration Guide
o 20.1.3 Data Warehouse Center Messages
o 20.1.4 Data Warehouse Center Online Help
* 20.2 Warehouse Control Database
o 20.2.1 The default warehouse control database
o 20.2.2 The Warehouse Control Database Management window
o 20.2.3 Changing the active warehouse control database
o 20.2.4 Creating and initializing a warehouse control database
o 20.2.5 Migrating IBM Visual Warehouse control databases
* 20.3 Setting up and running replication with Data Warehouse Center
* 20.4 Troubleshooting tips
* 20.5 Correction to RUNSTATS and REORGANIZE TABLE Online Help
* 20.6 Notification Page (Warehouse Properties Notebook and Schedule
Notebook)
* 20.7 Agent Module Field in the Agent Sites Notebook
* 20.8 Accessing DB2 Version 5 data with the DB2 Version 7.1 warehouse
agent
o 20.8.1 Migrating DB2 Version 5 servers
o 20.8.2 Changing the agent configuration
+ 20.8.2.1 UNIX warehouse agents
+ 20.8.2.2 Microsoft Windows NT, Windows 2000, and OS/2
warehouse agents
* 20.9 Accessing warehouse control databases
* 20.10 Accessing sources and targets
* 20.11 Accessing DB2 Version 5 information catalogs with the DB2
Version 7.1 Information Catalog Manager
* 20.12 Additions to supported non-IBM database sources
* 20.13 Information Catalog Manager Initialization Utility
* 20.14 Information Catalog Manager Administration Guide
DB2 Stored Procedure Builder
* 21.1 Java 1.2 Support for the DB2 Stored Procedure Builder
* 21.2 Remote Debugging of DB2 Stored Procedures
* 21.3 Building SQL Procedures on the Intel and UNIX Platforms
* 21.4 Using the DB2 Stored Procedure Builder on the Solaris Platform
* 21.5 Known Problems and Limitations
* 21.6 Using DB2 Stored Procedure Builder with Traditional Chinese
Locale
* 21.7 UNIX (AIX, Sun Solaris, Linux) Installations and the Stored
Procedure Builder
DB2 Warehouse Manager
* 22.1 "Warehouse Manager" Should Be "DB2 Warehouse Manager"
* 22.2 DB2 Warehouse Manager Publications
o 22.2.1 Information Catalog Manager Administration Guide
o 22.2.2 Information Catalog Manager Programming Guide and
Reference
+ 22.2.2.1 Information Catalog Manager Reason Codes
o 22.2.3 Information Catalog Manager: Online Messages
o 22.2.4 Information Catalog Manager: Online Help
o 22.2.5 Query Patroller Administration Guide
+ 22.2.5.1 DB2 Query Patroller Client is a Separate Component
+ 22.2.5.2 Manual Installation of Query Patroller on AIX and
Solaris
+ 22.2.5.3 Enabling Query Management
+ 22.2.5.4 Starting Query Administrator
+ 22.2.5.5 User Administration
+ 22.2.5.6 Creating a Job Queue
+ 22.2.5.7 Using the Command Line Interface
+ 22.2.5.8 Query Enabler Notes
Information Center
* 23.1 "Invalid shortcut" Error on the Windows Operating System
OLAP Starter Kit
* 24.1 Completing the DB2 OLAP Starter Kit Setup on AIX and Solaris
* 24.2 Logging in from OLAP Integration Server Desktop
o 24.2.1 Starter Kit Login Example
* 24.3 Manually creating and configuring the sample databases for OLAP
Integration Server
* 24.4 Known problems and limitations
o 24.4.1 DB2 OLAP Starter Kit
o 24.4.2 DB2 OLAP Desktop Client
o 24.4.3 Spreadsheet Clients
o 24.4.4 DB2 OLAP Integration Server
* 24.5 OLAP Starter Kit Spreadsheet Needs Current Windows svc.pack
* 24.6 OLAP Spreadsheet Add-in EQD Files Missing
Wizards
* 25.1 Setting Extent Size in the Create Database Wizard
Additional Information
* 26.1 DB2 Universal Database and DB2 Connect Online Support
* 26.2 DB2 magazine
Appendix A. Notices
* A.1 Trademarks
------------------------------------------------------------------------
Welcome to DB2 Universal Database Version 7.1!
This file contains information about the following products that was not
available when the DB2 manuals were printed:
IBM DB2 Universal Database Personal Edition, Version 7.1
IBM DB2 Universal Database Workgroup Edition, Version 7.1
IBM DB2 Universal Database Enterprise Edition, Version 7.1
IBM DB2 Data Links Manager, Version 7.1
IBM DB2 Universal Database Enterprise - Extended Edition, Version 7.1
IBM DB2 Query Patroller, Version 7.1
IBM DB2 Personal Developer's Edition, Version 7.1
IBM DB2 Universal Developer's Edition, Version 7.1
IBM DB2 Data Warehouse Manager, Version 7.1
IBM DB2 Relational Connect, Version 7.1
A separate Release Notes file, installed as READCON.TXT, is provided for
the following products:
IBM DB2 Connect Personal Edition, Version 7.1
IBM DB2 Connect Enterprise Edition, Version 7.1
The What's New book contains both an overview of some of the major DB2
enhancements for Version 7.1, and a detailed description of these new
features and enhancements.
------------------------------------------------------------------------
Special Notes
------------------------------------------------------------------------
1.1 DB2 Universal Database Business Intelligence Quick Tour
The Quick Tour is not available on DB2 for Linux.
The Quick Tour is optimized to run with small system fonts. You may have to
adjust your Web browser's font size to correctly view the Quick Tour on
OS/2. Refer to your Web browser's help for information on adjusting font
size. To view the Quick Tour correctly (SBCS only), it is recommended that
you use an 8-point Helv font. For Japanese and Korean customers, it is
recommended that you use an 8-point Mincho font. When you set font
preferences, be sure to select the "Use my default fonts, overriding
document-specified fonts" option in the Fonts page of the Preference
window.
In some cases the Quick Tour may launch behind a secondary browser window.
To correct this problem, close the Quick Tour, and follow the steps in 1.8,
Error Messages when Attempting to Launch Netscape.
When launching the Quick Tour, you may receive a JavaScript error similar
to the following:
JavaScript Error: file:/C/Program Files/SQLLIB/doc/html/db2qt/index4e.htm, line 65:
Window is not defined.
This JavaScript error prevents the Quick Tour launch page, index4e.htm,
from closing automatically after the Quick Tour is launched. You can close
the Quick Tour launch page by closing the browser window in which
index4e.htm is displayed.
------------------------------------------------------------------------
1.2 Downloading Installation Packages for All Supported DB2 Clients
To download installation packages for all supported DB2 clients, which
include all the pre-Version 7.1 clients, connect to the IBM DB2 Client
Application Enabler Pack Web site at
http://www.ibm.com/software/data/db2/db2tech/clientpak.html
------------------------------------------------------------------------
1.3 Installing DB2 on Windows 2000
On Windows 2000, when installing over a previous version of DB2 or when
reinstalling the current version, ensure that the recovery options for all
of the DB2 services are set to "Take No Action".
------------------------------------------------------------------------
1.4 Notes on Greater Than 8-Character User IDs and Schema Names
* DB2 Version 7.1 products on Windows 32-bit platforms support user IDs
that are up to 30 characters long. However, because of native support
of Windows NT and Windows 2000, the practical limit for user ID is 20
characters.
* DB2 Version 7.1 supports non-Windows 32-bit clients connecting to
Windows NT and Windows 2000 with user IDs longer than 8 characters
when user ID and password are being specified explicitly. This
excludes connections using Client or DCE authentication.
* DCE authentication on all platforms continues to have the 8-character
user ID limit.
* The authid returned in the SQLCA from a successful CONNECT or ATTACH
is truncated to 8 characters. The SQLWARN fields contain warnings when
truncation occurs. For more information, refer to the description of
the CONNECT statement in the SQL Reference.
* The authid returned by the command line processor (CLP) from a
successful CONNECT or ATTACH is truncated to 8 characters. An ellipsis
(...) is appended to the authid to indicate truncation.
* DB2 Version 7.1 supports schema names with length up to 30 bytes, with
the following exceptions:
o Tables with schema names longer than 18 bytes cannot be
replicated.
o User defined types (UDTs) cannot have schema names longer than 8
bytes.
------------------------------------------------------------------------
1.5 National Language Versions of DB2 Version 7.1
DB2 Version 7.1 is available in English, French, German, Italian, Spanish,
Brazilian Portuguese, Japanese, Korean, Simplified Chinese, Traditional
Chinese, Danish, Finnish, Norwegian, Swedish, Czech, Dutch, Hungarian,
Polish, Turkish, Russian, Bulgarian and Slovenian.
On UNIX-based platforms, the DB2 product messages and library can be
installed in several different languages. The DB2 installation utility
installs the message catalog filesets into the most commonly used locale
directory for each platform, as shown in the following table.
Operating    AIX             HP-UX                 Solaris       Linux           SGI         Dynix
System
Language     Locale  Code    Locale          Code  Locale  Code  Locale    Code  Locale Code Locale Code
                     Page                    Page          Page            Page         Page        Page
French fr_FR 819 fr_FR.iso88591 819 fr 819 fr 819 fr 819
Fr_FR 850 fr_FR.roman8 1051
German de_DE 819 de_DE.iso88591 819 de 819 de 819 de 819
De_DE 850 de_DE.roman8 1051
Italian it_IT 819 it_IT.iso88591 819 it 819 it 819 es 819
It_IT 850 it_IT.roman8 1051
Spanish es_ES 819 es_ES.iso88591 819 es 819 es 819
Es_ES 850 es_ES.roman8 1051
Brazilian    pt_BR 819       pt_BR 819             pt_BR 819
Portuguese
Japanese ja_JP 924 ja_JP.eucJP 954 ja 954 ja_JP.ujis 954 ja_JP.EUC 954
Ja_JP 932
Korean ko_KR 970 ko_KR.eucKR 970 ko 970 ko_KO.euc 970
Simplified   zh_CN 1383      zh_CN.hp15CN 1383     zh 1383       zh 1386
Chinese      Zh_CN.GBK 1386  zh_CN.GBK
Traditional  zh_TW 964       zh_TW.eucTW 964       zh_TW 964
Chinese      Zh_TW 950       zh_TW.big5 950        zh_TW.BIG5 950
Danish da_DK 819 da_DK.iso88591 819 da 819
Da_DK 850 da_DK.roman8 1051
Finnish fi_FI 819 fi_FI.iso88591 819 fi 819
Fi_FI 850 fi_FI.roman8 1051
Norwegian no_NO 819 no_NO.iso88591 819 no 819
No_NO 850 no_NO.roman8 1051
Swedish      sv_SE 819       sv_SE.iso88591 819    sv 819
             Sv_SE 850       sv_SE.roman8 1051
Czech cs_CZ 912
Hungarian hu_HU 912
Polish pl_PL 912
Dutch nl_NL 819 nl 819
Nl_NL 850
Turkish tr_TR 920
Russian ru_RU 915
Bulgarian bg_BG 915 bg_BG.iso88595 915
Slovenian sl_SI 912 sl_SI.iso88592 912 sl_SI 912
If your system uses the same code pages but different locale names than
those provided above, you can still see the translated messages by creating
a link to the appropriate message directory.
For example, if your AIX machine default locale is ja_JP.IBM-eucJP and the
codepage of ja_JP.IBM-eucJP is 954, you can create a link from
/usr/lpp/db2_07_01/msg/ja_JP.IBM-eucJP to /usr/lpp/db2_07_01/msg/ja_JP by
issuing the following command:
ln -s /usr/lpp/db2_07_01/msg/ja_JP /usr/lpp/db2_07_01/msg/ja_JP.IBM-eucJP
After the execution of this command, all DB2 messages will come up in
Japanese.
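The link step above can be sketched as a small guard script. This is an
illustration only, run in a scratch directory; on a real AIX system the
message root would be /usr/lpp/db2_07_01/msg and you would run the command
as root:

```shell
# Alias an unlisted system locale name to an installed DB2 message
# directory that uses the same code page. The scratch directory stands
# in for /usr/lpp/db2_07_01/msg on a real system.
MSGROOT=$(mktemp -d)
mkdir -p "$MSGROOT/ja_JP"         # stands in for the installed catalog
SYSLOCALE=ja_JP.IBM-eucJP         # locale name your system actually uses

# Create the alias only if the installed directory exists and the
# alias does not already exist.
if [ -d "$MSGROOT/ja_JP" ] && [ ! -e "$MSGROOT/$SYSLOCALE" ]; then
    ln -s "$MSGROOT/ja_JP" "$MSGROOT/$SYSLOCALE"
fi
ls -l "$MSGROOT"
```

The existence checks make the script safe to rerun: a second invocation
finds the alias already present and does nothing.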
1.5.1 Control Center and Documentation Filesets
The Control Center, Control Center Help and documentation filesets are
placed in the following directories on the target workstation:
* DB2 for AIX:
o /usr/lpp/db2_07_01/cc/%L
o /usr/lpp/db2_07_01/java/%L
o /usr/lpp/db2_07_01/doc/%L
o /usr/lpp/db2_07_01/qp/%L
o /usr/lpp/db2_07_01/spb/%L
* DB2 for HP-UX:
o /opt/IBMdb2/V7.1/cc/%L
o /opt/IBMdb2/V7.1/java/%L
o /opt/IBMdb2/V7.1/doc/%L
* DB2 for Linux:
o /usr/IBMdb2/V7.1/cc/%L
o /usr/IBMdb2/V7.1/java/%L
o /usr/IBMdb2/V7.1/doc/%L
* DB2 for Solaris:
o /opt/IBMdb2/V7.1/cc/%L
o /usr/IBMdb2/V7.1/java/%L
o /opt/IBMdb2/V7.1/doc/%L
Control Center filesets are in Unicode codepage. Documentation and Control
Center help filesets are in a browser-recognized codeset. If your system
uses a different locale name than the one provided, you can still run the
translated version of Control Center and see the translated version of help
by creating links to the appropriate language directories.
For example, if your AIX machine default locale is ja_JP.IBM-eucJP, you can
create links from /usr/lpp/db2_07_01/cc/ja_JP.IBM-eucJP to
/usr/lpp/db2_07_01/cc/ja_JP and from /usr/lpp/db2_07_01/doc/ja_JP.IBM-eucJP
to /usr/lpp/db2_07_01/doc/ja_JP by issuing the following commands:
* ln -s /usr/lpp/db2_07_01/cc/ja_JP
/usr/lpp/db2_07_01/cc/ja_JP.IBM-eucJP
* ln -s /usr/lpp/db2_07_01/doc/ja_JP
/usr/lpp/db2_07_01/doc/ja_JP.IBM-eucJP
After the execution of these commands, the Control Center and help text
will come up in Japanese.
------------------------------------------------------------------------
1.6 Accessibility Features of DB2 UDB Version 7.1
The DB2 UDB family of products includes a number of features that make the
products more accessible for people with disabilities. These features
include:
* Features that facilitate keyboard input and navigation
* Features that enhance display properties
* Options for audio and visual alert cues
* Compatibility with assistive technologies
* Compatibility with accessibility features of the operating system
* Accessible documentation formats
1.6.1 Keyboard Input and Navigation
1.6.1.1 Keyboard Input
The DB2 Control Center can be operated using only the keyboard. Menu items
and controls provide access keys that allow users to activate a control or
select a menu item directly from the keyboard. These keys are
self-documenting, in that the access keys are underlined on the control or
menu where they appear.
1.6.1.2 Keyboard Focus
In UNIX-based systems, the position of the keyboard focus is highlighted,
indicating which area of the window is active and where the user's
keystrokes will have an effect.
1.6.2 Features for Accessible Display
The DB2 Control Center has a number of features that enhance the user
interface and improve accessibility for users with low vision. These
accessibility enhancements include support for high-contrast settings and
customizable font properties.
1.6.2.1 High-Contrast Mode
The Control Center interface supports the high-contrast-mode option
provided by the operating system. This feature assists users who require a
higher degree of contrast between background and foreground colors.
1.6.2.2 Font Settings
The Control Center interface allows users to select the color, size, and
font for the text in menus and dialog windows.
1.6.2.3 Non-dependence on Color
Users do not need to distinguish between colors in order to use any of the
functions in this product.
1.6.3 Alternative Alert Cues
The user can opt to receive alerts through audio or visual cues.
1.6.4 Compatibility with Assistive Technologies
The DB2 Control Center interface is compatible with screen reader
applications such as Via Voice. When in application mode, the Control
Center interface has the properties required for these accessibility
applications to make onscreen information available to blind users.
1.6.5 Accessible Documentation
Documentation for the DB2 family of products is available in HTML format.
This allows users to view documentation according to the display
preferences set in their browsers. It also allows the use of screen readers
and other assistive technologies.
------------------------------------------------------------------------
1.7 DB2 Everywhere is Now DB2 Everyplace
The name of DB2 Everywhere has changed to DB2 Everyplace.
------------------------------------------------------------------------
1.8 Error Messages when Attempting to Launch Netscape
If you encounter the following error messages when attempting to launch
Netscape:
Cannot find file <file path> (or one of its components).
Check to ensure the path and filename are correct and that all
required libraries are available.
Unable to open "D:\Program Files\SQLLIB\CC\..\doc\html\db2help\XXXXX.htm"
you should take the following steps to correct this problem on Windows NT,
95, or 98 (see below for what to do on Windows 2000):
1. From the Start menu, select Programs --> Windows Explorer. Windows
Explorer opens.
2. From Windows Explorer, select View --> Options. The Options Notebook
opens.
3. Click the File types tab. The File types page opens.
4. Highlight Netscape Hypertext Document in the Registered file types
field and click Edit. The Edit file type window opens.
5. Highlight "Open" in the Actions field.
6. Click the Edit button. The Editing action for type window opens.
7. Uncheck the Use DDE check box.
8. In the Application used to perform action field, make sure that "%1"
appears at the very end of the string (include the quotation marks,
and a blank space before the first quotation mark).
If you encounter the messages on Windows 2000, you should take the
following steps:
1. From the Start menu, select Windows Explorer. Windows Explorer opens.
2. From Windows Explorer, select Tools --> Folder Options. The Folder
Options notebook opens.
3. Click the File Types tab.
4. On the File Types page, in the Registered file types field, highlight:
HTM Netscape Hypertext Document and click Advanced. The Edit File Type
window opens.
5. Highlight "open" in the Actions field.
6. Click the Edit button. The Editing Action for Type window opens.
7. Uncheck the Use DDE check box.
8. In the Application used to perform action field, make sure that "%1"
appears at the very end of the string (include the quotation marks,
and a blank space before the first quotation mark).
9. Click OK.
10. Repeat steps 4 through 8 for the HTML Netscape Hypertext Document and
SHTML Netscape Hypertext Document file types.
------------------------------------------------------------------------
1.9 Mouse Required
For all platforms except Windows, a mouse is required to use the tools.
------------------------------------------------------------------------
1.10 Supported Web Browsers on the Windows 2000 Operating System
We recommend that you use Microsoft Internet Explorer on Windows 2000.
If you use Netscape, please be aware of the following:
* DB2 online information searches may take a long time to complete on
Windows 2000 using Netscape. Netscape will use all available CPU
resources and appear to run indefinitely. While the search results may
eventually return, we recommend that you change focus by clicking on
another window after submitting the search. The search results will
then return in a reasonable amount of time.
* You may also run into this problem: in the Control Center, when you
request help it is displayed correctly in a Netscape browser window,
but if you leave the browser window open and request help later on
from a different part of the Control Center, nothing changes in the
browser. If you close the browser window and request help again, the
correct help comes up. You may be able to fix this problem by
following the steps in 1.8, Error Messages when Attempting to Launch
Netscape. You can also get around the problem by closing the browser
window before requesting help for the Control Center.
* When you request Control Center help, or a topic from the Information
Center, you may get an error message. To fix this, follow the steps in
1.8, Error Messages when Attempting to Launch Netscape.
------------------------------------------------------------------------
1.11 Opening External Web Links in Netscape Navigator From The Information
Center when Netscape is Already Open (UNIX Based Systems)
If Netscape Navigator is already open and displaying either a local DB2
HTML document or an external Web site, an attempt to open an external Web
site from the Information Center will result in a Netscape error. The error
will state that "Netscape is unable to find the file or directory named
<external site>."
To work around this problem, close the open Netscape browser before opening
the external Web site. Netscape will restart and bring up the external Web
site.
Note that this error does not occur when attempting to open a local DB2
HTML document with Netscape already open.
------------------------------------------------------------------------
1.12 Problems Starting the Information Center
On some systems, the Information Center can be slow to start if you invoke
it using the Start Menu, First Steps, or the db2ic command. If you
experience this problem, start the Control Center, then select Help -->
Information Center.
------------------------------------------------------------------------
1.13 Configuration Requirement for Adobe Acrobat Reader on UNIX Based
Systems
Acrobat Reader is offered only in English on UNIX based platforms, so
errors may be returned when you attempt to open PDF files in a language
locale other than English. These errors suggest font access or extraction
problems with the PDF file, but are actually caused by the English
Acrobat Reader being unable to function correctly within a non-English
UNIX language locale.
To view such PDF files, switch to the English locale by performing one of
the following steps before launching the English Acrobat Reader:
* Edit the Acrobat Reader's launch script, by adding the following line
after the #!/bin/sh statement in the launch script file:
LANG=C;export LANG
This approach will ensure correct behavior when Acrobat Reader is
launched by other applications, such as Netscape Navigator, or an
application help menu.
* Enter LANG=C at the command prompt to set the Acrobat Reader's
application environment to English.
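The launch-script approach can be sketched as follows. This is a minimal
illustration, assuming the Reader's launch script is a Bourne shell script
named acroread; the path shown is an assumption, so adjust names and paths
for your installation:

```shell
#!/bin/sh
# Force the English locale before the Reader starts, as described above.
# (The acroread path below is an assumption; substitute your install path.)
LANG=C
export LANG
echo "LANG is now: $LANG"    # prints: LANG is now: C
# exec /usr/lib/Acrobat4/bin/acroread "$@"
```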
For further information, contact Adobe Systems (http://www.Adobe.com).
------------------------------------------------------------------------
1.14 Attempting to Bind from the DB2 Run-time Client Results in a "Bind
files not found" Error
Because the DB2 Run-time Client does not include the full set of bind
files, GUI tools cannot be bound from the DB2 Run-time Client; they can
only be bound from the DB2 Administration Client.
------------------------------------------------------------------------
1.15 Additional Required Solaris Patch Level
DB2 Universal Database Version 7.1 for Solaris Version 2.6 requires patch
106285-02 or higher, in addition to the patches listed in the DB2 for UNIX
Quick Beginnings manual.
------------------------------------------------------------------------
1.16 Supported CPUs on DB2 Version 7.1 for Solaris
CPU versions previous to UltraSparc are not supported.
------------------------------------------------------------------------
1.17 Searching the DB2 Online Information on Solaris
If you are having problems searching the DB2 online information on your
Solaris system, check your system's kernel parameters in /etc/system. Here
are the minimum kernel parameters required by DB2's search system,
NetQuestion:
semsys:seminfo_semmni 256
semsys:seminfo_semmap 258
semsys:seminfo_semmns 512
semsys:seminfo_semmnu 512
semsys:seminfo_semmsl 50
shmsys:shminfo_shmmax 6291456
shmsys:shminfo_shmseg 16
shmsys:shminfo_shmmni 300
To set a kernel parameter, add a line at the end of /etc/system as follows:
set <semaphore_name> = value
You must reboot your system for any new or changed values to take effect.
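For example, setting the parameters above (assuming none are already
present in your /etc/system) would mean appending lines like these:

```
set semsys:seminfo_semmni = 256
set semsys:seminfo_semmap = 258
set semsys:seminfo_semmns = 512
set semsys:seminfo_semmnu = 512
set semsys:seminfo_semmsl = 50
set shmsys:shminfo_shmmax = 6291456
set shmsys:shminfo_shmseg = 16
set shmsys:shminfo_shmmni = 300
```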
------------------------------------------------------------------------
1.18 Java Control Center on OS/2
The Control Center must be installed on an HPFS-formatted drive.
------------------------------------------------------------------------
1.19 Search Discovery
Search discovery is only supported on broadcast media. For example, search
discovery will not function through an ATM adapter. However, this
restriction does not apply to known discovery.
------------------------------------------------------------------------
1.20 Problems When Adding Nodes to a Partitioned Database
When adding nodes to a partitioned database that has one or more system
temporary table spaces with a page size that is different from the default
page size (4 KB), you may encounter the error message: "SQL6073N Add Node
operation failed" and an SQLCODE. This occurs because only the IBMDEFAULTBP
buffer pool exists with a page size of 4 KB when the node is created.
For example, you can use the db2start command to add a node to the current
partitioned database:
DB2START NODENUM 2 ADDNODE HOSTNAME newhost PORT 2
If the partitioned database has system temporary table spaces with the
default page size, the following message is returned:
SQL6075W The Start Database Manager operation successfully added the node.
The node is not active until all nodes are stopped and started again.
However, if the partitioned database has system temporary table spaces that
are not the default page size, the returned message is:
SQL6073N Add Node operation failed. SQLCODE = "<-902>"
In a similar example, you can use the ADD NODE command after manually
updating the db2nodes.cfg file with the new node description. After editing
the file and running the ADD NODE command with a partitioned database that
has system temporary table spaces with the default page size, the following
message is returned:
DB20000I The ADD NODE command completed successfully.
However, if the partitioned database has system temporary table spaces that
are not the default page size, the returned message is:
SQL6073N Add Node operation failed. SQLCODE = "<-902>"
One way to prevent the problems outlined above is to run:
DB2SET DB2_HIDDENBP=16
before issuing db2start or the ADD NODE command. This registry variable
enables DB2 to allocate hidden buffer pools of 16 pages each using a page
size different from the default. This enables the ADD NODE operation to
complete successfully.
Another way to prevent these problems is to specify the WITHOUT TABLESPACES
clause on the ADD NODE or the db2start command. After doing this, you will
have to create the buffer pools using the CREATE BUFFERPOOL statement, and
associate the system temporary table spaces to the buffer pool using the
ALTER TABLESPACE statement.
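The WITHOUT TABLESPACES approach described above might look like the
following sketch. The buffer pool and table space names (bp8k, tmp8k) are
hypothetical, and the exact clause order of the db2start command should be
verified against the Command Reference:

```
DB2START NODENUM 2 ADDNODE HOSTNAME newhost PORT 2 WITHOUT TABLESPACES
CONNECT TO mpp1
CREATE BUFFERPOOL bp8k SIZE 1000 PAGESIZE 8192
ALTER TABLESPACE tmp8k BUFFERPOOL bp8k
CONNECT RESET
```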
When adding nodes to an existing nodegroup that has one or more table
spaces with a page size that is different from the default page size (4
KB), you may encounter the error message: "SQL0647N Bufferpool "" is
currently not active." This occurs because the non-default page size
buffer pools created on the new node have not been activated for the table
spaces.
For example, you can use the ALTER NODEGROUP statement to add a node to a
nodegroup:
DB2START
CONNECT TO mpp1
ALTER NODEGROUP ng1 ADD NODE (2)
If the nodegroup has table spaces with the default page size, the following
message is returned:
SQL1759W Redistribute nodegroup is required to change data positioning for
objects in nodegroup "<ng1>" to include some added nodes or exclude
some drop nodes.
However, if the nodegroup has table spaces that are not the default page
size, the returned message is:
SQL0647N Bufferpool "" is currently not active.
One way to prevent this problem is to create buffer pools for each page
size and then to reconnect to the database before issuing the ALTER
NODEGROUP statement:
DB2START
CONNECT TO mpp1
CREATE BUFFERPOOL bp1 SIZE 1000 PAGESIZE 8192
CONNECT RESET
CONNECT TO mpp1
ALTER NODEGROUP ng1 ADD NODE (2)
A second way to prevent the problem is to run:
DB2SET DB2_HIDDENBP=16
before issuing the db2start command, and the CONNECT and ALTER NODEGROUP
statements.
Another problem can occur when the ALTER TABLESPACE statement is used to
add a table space to a node. For example:
DB2START
CONNECT TO mpp1
ALTER NODEGROUP ng1 ADD NODE (2) WITHOUT TABLESPACES
ALTER TABLESPACE ts1 ADD ('ts1') ON NODE (2)
This series of commands and statements generates the error message SQL0647N
(not the expected message SQL1759W).
To complete this change correctly, you should reconnect to the database
after the ALTER NODEGROUP... WITHOUT TABLESPACES statement.
DB2START
CONNECT TO mpp1
ALTER NODEGROUP ng1 ADD NODE (2) WITHOUT TABLESPACES
CONNECT RESET
CONNECT TO mpp1
ALTER TABLESPACE ts1 ADD ('ts1') ON NODE (2)
Another way to prevent the problem is to run:
DB2SET DB2_HIDDENBP=16
before issuing the db2start command, and the CONNECT, ALTER NODEGROUP, and
ALTER TABLESPACE statements.
------------------------------------------------------------------------
1.21 Errors During Migration
During migration, error entries in the db2diag.log file (database not
migrated) appear even when migration is successful, and can be ignored.
------------------------------------------------------------------------
1.22 Memory Windows for HP-UX 11
Memory windows is intended for users on large 64-bit HP machines who want
to take advantage of more than 1.75 GB of shared memory for 32-bit
applications. Memory windows makes a separate 1 GB of shared memory
available per process or group of processes. This allows an instance to
have its own 1 GB of shared memory, plus the 0.75 GB of global shared
memory. If users want to
take advantage of this, they can run multiple instances, each in its own
window. Following are prerequisites and conditions for using memory
windows:
* DB2 EE environment
o Patches: Extension Software 12/98, and PHKL_17795.
o The $DB2INSTANCE variable must be set for the instance.
o There must be an entry in the /etc/services.window file for each
DB2 instance that you want to run under memory windows. For
example:
db2instance1 50
db2instance2 60
Note: There can only be a single space between the name and the ID.
o Any DB2 commands that you want to run on the server, and that
require more than a single statement, must be run using a TCP/IP
loopback method. This is because the shell will terminate when
memory windows finishes processing the first statement. DB2 Service can
advise on how to accomplish this.
o Any DB2 command that you want to run against an instance that is
running in memory windows must be prefaced with db2win (located
in sqllib/bin). For example:
db2win db2start
db2win db2stop
o Any DB2 command that is run outside of memory windows (while
memory windows is running) will return SQL1042. For example:
db2win db2start <== OK
db2 connect to db <==SQL1042
db2stop <==SQL1042
db2win db2stop <== OK
* DB2 EEE environment
o Patches: Extension Software 12/98, and PHKL_17795.
o The $DB2INSTANCE variable must be set for the instance.
o The DB2_ENABLE_MEM_WINDOWS registry variable must be set to TRUE.
o There must be an entry in the /etc/services.window file for each
logical node of each instance that you want to run under memory
windows. The first field of each entry should be the instance
name concatenated with the port number. For example:
=== $HOME/sqllib/db2nodes.cfg for db2instance1 ===
5 host1 0
7 host1 1
9 host2 0
=== $HOME/sqllib/db2nodes.cfg for db2instance2 ===
1 host1 0
2 host2 0
3 host2 1
=== /etc/services.window on host1 ===
db2instance10 50
db2instance11 55
db2instance20 60
=== /etc/services.window on host2 ===
db2instance10 30
db2instance20 32
db2instance21 34
o You must not preface any DB2 command with db2win, which is to be
used in an EE environment only.
------------------------------------------------------------------------
1.23 Spatial Extender is Unavailable
DB2 Spatial Extender functionality will be available in a future release.
Updates to DB2 Spatial Extender information will be included in the Release
Notes on the DB2 Spatial Extender CD-ROM.
------------------------------------------------------------------------
1.24 SQL Reference is Provided in One PDF File
The "Using the DB2 Library" appendix in each book indicates that the SQL
Reference is available in PDF format as two separate volumes. This is
incorrect.
Although the printed book appears in two volumes, and the two corresponding
form numbers are correct, there is only one PDF file, and it contains both
volumes. The PDF file name is db2s0x70.
------------------------------------------------------------------------
1.25 Migration Issue Regarding Views Defined with Special Registers
Views become unusable after database migration if the special register USER
or CURRENT SCHEMA is used to define a view column. For example:
create view v1 (c1) as values user
In Version 5, USER and CURRENT SCHEMA were of data type CHAR(8), but since
Version 6, they have been defined as VARCHAR(128). In this example, the
data type of column c1 is CHAR if the view was created in Version 5, and
it remains CHAR after database migration. When such a view is used after
migration, it is recompiled at run time and then fails because of the
data type mismatch.
The solution is to drop and then recreate the view.
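For the example view above, the fix (run from the command line processor
after migration) can be sketched as follows, where <dbname> is your
database:

```
db2 connect to <dbname>
db2 "DROP VIEW v1"
db2 "CREATE VIEW v1 (c1) AS VALUES USER"
db2 connect reset
```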
------------------------------------------------------------------------
1.26 User Action for dlfm client_conf Failure
If, on a DLFM client, dlfm client_conf fails, "stale" entries in the DB2
catalogs may be the cause. The solution is to issue the following
commands:
db2 uncatalog db <dbname>
db2 uncatalog node <node alias>
db2 terminate
Then try dlfm client_conf again.
------------------------------------------------------------------------
1.27 In the Rare Event that the Copy Daemon Does Not Stop on dlfm stop
In very rare situations, dlfm_copyd (the copy daemon) may not stop when a
user issues dlfm stop, or after an abnormal shutdown. If this happens,
issue a dlfm shutdown before trying to restart dlfm.
------------------------------------------------------------------------
1.28 Chinese Locale Fix on Red Flag Linux
If you are using Simplified Chinese Red Flag Linux Server Version 1.1,
visit http://www.redflag-linux.com.cn/downloads.htm to download the
Simplified Chinese locale fix.
------------------------------------------------------------------------
1.29 Uninstalling DB2 DFS Client Enabler
Before the DB2 DFS Client Enabler is uninstalled, root should ensure that
no DFS file is in use, and that no user has a shell open in DFS file space.
As root, issue the command:
stop.dfs dfs_cl
Check that /... is no longer mounted:
mount | grep -i dfs
If this is not done, and DB2 DFS Client Enabler is uninstalled, the machine
will need to be rebooted.
------------------------------------------------------------------------
1.30 DB2 Install May Hang if a Removable Drive is Not Attached
During DB2 installation on a computer with a removable drive that is not
attached, the install may hang after you select the install type. To
solve this problem, run setup, specifying the -a option:
setup.exe -a
------------------------------------------------------------------------
1.31 Client Authentication on Windows NT
A new DB2 registry variable DB2DOMAINLIST is introduced to complement the
existing client authentication mechanism in the Windows NT environment.
This variable is used on the DB2 for Windows NT server to define one or
more Windows NT domains. Only connection or attachment requests from users
belonging to the domains defined in this list will be accepted.
This registry variable should only be used under a pure Windows NT domain
environment with DB2 servers and clients running at Version 7.1 (or
higher).
For information about setting this registry variable, refer to the "DB2
Registry and Environment Variables" section in the Administration Guide:
Performance.
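For example, the variable could be set on the server with the db2set
command. The domain names here are hypothetical, and the exact list
separator should be confirmed in the section cited above:

```
db2set DB2DOMAINLIST=DOMAIN1,DOMAIN2
```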
------------------------------------------------------------------------
1.32 AutoLoader May Hang During a Fork
AIX 4.3.3 contains a fix for a libc problem that could cause the AutoLoader
to hang during a fork. The AutoLoader is a multithreaded program. One of
the threads forks() off another process. Forking off a child process
creates an image of the parent's memory in the child.
It is possible that locks used by libc.a to manage multiple threads
allocating memory from the heap within the same process are held by a
non-forking thread at the time of the fork. Since the non-forking thread
does not exist in the child process, these locks are never released in
the child, sometimes causing the parent to hang.
------------------------------------------------------------------------
1.33 DATALINK Restore
Restoring an offline backup that was taken after a database restore, with
or without rollforward, will not involve fast reconcile processing. In such
cases, all tables with DATALINK columns under file link control will be put
in datalink reconcile pending (DRP) state.
------------------------------------------------------------------------
1.34 Define User ID and Password in IBM Communications Server for Windows
NT (CS/NT)
If you are using APPC as the communication protocol for remote DB2 clients
to connect to your DB2 server and if you use CS/NT as the SNA product, make
sure that the following keyword is set correctly in the CS/NT configuration
file. This file is commonly found in the x:\ibmcs\private directory.
1.34.1 Node Definition
TG_SECURITY_BEHAVIOR
This parameter determines how the node handles security information
present in the ATTACH when the TP is not configured for security. It can
take the following values:
IGNORE_IF_NOT_DEFINED
This value causes security parameters present in the ATTACH to be ignored
if the TP is not configured for security.
If you use IGNORE_IF_NOT_DEFINED, you do not have to define a user ID and
password in CS/NT.
VERIFY_EVEN_IF_NOT_DEFINED
This value causes security parameters present in the ATTACH to be
verified even if the TP is not configured for security. This is the
default.
If you use VERIFY_EVEN_IF_NOT_DEFINED, you must define a user ID and
password in CS/NT.
To define the CS/NT User ID and password, perform the following steps:
1. Start --> Programs --> IBM Communications Server --> SNA Node
Configuration. The Welcome to Communications Server Configuration
window opens.
2. Choose the configuration file you want to modify. Click Next. The
Choose a Configuration Scenario window opens.
3. Highlight CPI-C, APPC or 5250 Emulation. Click Finish. The
Communications Server SNA Node Window opens.
4. Click the [+] beside CPI-C and APPC.
5. Click the [+] beside LU6.2 Security.
6. Right click on User Passwords and select Create. The Define a User ID
Password window opens.
7. Fill in the User ID and password. Click OK. Click Finish to accept the
changes.
------------------------------------------------------------------------
1.35 Federated Systems Restrictions
Following are restrictions that apply to federated systems:
* The Oracle data types NCHAR, NVARCHAR2, BLOB, CLOB, NCLOB, and BFILE
are not supported in queries involving nicknames.
* The Create Server Option, Alter Server Option, and Drop Server Option
commands are not supported from the Control Center. To issue any of
these commands, you must use the command line processor (CLP).
* For queries involving nicknames, DB2 UDB does not always abide by the
DFT_SQLMATHWARN database configuration option. Instead, DB2 UDB
returns the arithmetic errors or warnings directly from the remote
data source regardless of the DFT_SQLMATHWARN setting.
* The CREATE SERVER OPTION statement does not allow the COLSEQ server
option to be set to 'I' for data sources with case-insensitive
collating sequences.
* The ALTER NICKNAME statement returns SQL0901N when an invalid option
is specified.
* For Oracle data sources, numeric data types cannot be mapped to DB2's
BIGINT data type. By default, Oracle's number(p,s) data type, where 10
<= p <= 18, and s = 0, maps to DB2's DECIMAL data type.
------------------------------------------------------------------------
1.36 DataJoiner Restriction
Distributed requests issued within a federated environment are limited to
read-only operations.
------------------------------------------------------------------------
1.37 IPX/SPX Protocol Support on Windows 2000
The published protocol support chart is not completely correct. A Windows
2000 client connected to any OS/2 or UNIX based server using IPX/SPX is not
supported. Also, any OS/2 or UNIX based client connected to a Windows 2000
server using IPX/SPX is not supported.
------------------------------------------------------------------------
1.38 Stopping DB2 Processes Before Upgrading a Previous Version of DB2
If you are upgrading a previous version of DB2 that is running on your
Windows machine, the installation program provides a warning containing a
list of processes that are holding DB2 DLLs in memory. At this point, you
have the option to manually stop the processes that appear in that list, or
you can let the installation program shut down these processes
automatically. It is recommended that you manually stop all DB2 processes
before installing to avoid loss of data. The best way to ensure that DB2
processes are not running is to view your system's processes through the
Windows Services panel. In the Windows Services panel, ensure that there
are no DB2 services, OLAP services, or Data warehouse services running.
Note: You can only have one version of DB2 running on Windows platforms at
any one time. For example, you cannot have DB2 Version 7.1 and DB2
Version 6 running on the same Windows machine. If you install DB2
Version 7.1 on a machine that has DB2 Version 6 installed, the
installation program will delete DB2 Version 6 during the
installation. Refer to the appropriate Quick Beginnings manual for
more information on migrating from previous versions of DB2.
------------------------------------------------------------------------
1.39 Run db2iupdt After Installing DB2 If Another DB2 Product is Already
Installed
When installing DB2 UDB Version 7.1 on UNIX based systems, and a DB2
product is already installed, you will need to run the db2iupdt command to
update those instances with which you intend to use the new features of
this product. Some features will not be available until this command is
run.
------------------------------------------------------------------------
1.40 JDK Level on OS/2
Some messages will not display on OS/2 running versions of JDK 1.1.8
released prior to 09/99. Ensure that you have the latest JDK Version 1.1.8.
------------------------------------------------------------------------
1.41 Setting up the Linux Environment to Run DB2
After leaving the DB2 installer on Linux and returning to the terminal
window, type the following commands to set the correct environment to run
the DB2 Control Center:
su -l <instance name>
export JAVA_HOME=/usr/jdk118
export DISPLAY=<your machine name>:0
Then, open another terminal window and type:
su root
xhost +<your machine name>
Close that terminal window and return to the terminal where you are logged
in as the instance owner ID, and type the command:
db2cc
to start the Control Center.
------------------------------------------------------------------------
1.42 Hebrew Information Catalog Manager for Windows NT
The Information Catalog Manager component is available in Hebrew and is
provided on the DB2 Warehouse Manager for Windows NT CD.
The Hebrew translation is provided in a zip file called IL_ICM.ZIP and is
located in the DB2\IL directory on the DB2 Warehouse Manager for Windows NT
CD.
To install the Hebrew translation of Information Catalog Manager, first
install the English version of DB2 Warehouse Manager for Windows NT and all
prerequisites on a Hebrew Enabled version of Windows NT.
After DB2 Warehouse Manager for Windows NT has been installed, unzip the
IL_ICM.ZIP file from the DB2\IL directory into the same directory where DB2
Warehouse Manager for Windows NT was installed. Ensure that the correct
options are supplied to the unzip program to create the directory structure
in the zip file.
After the file has been unzipped, the global environment variable LC_ALL
must be changed from En_US to Iw_IL. To change the setting:
1. Open the Windows NT "Control Panel" and double click on the "System"
icon.
2. In the "System Properties" window, click on the "Environment" tab,
then locate the variable "LC_ALL" in the "System Variables" section.
3. Click on the variable to display the value in the "Value:" edit box.
Change the value from En_US to Iw_IL.
4. Click on the "Set" button.
5. Close the "System Properties" window and the "Control Panel".
The Hebrew version of Information Catalog Manager should now be installed.
------------------------------------------------------------------------
1.43 Error While Creating an SQL Stored Procedure on the Server
To create SQL stored procedures on the server, the application development
client (as well as a compiler) must be installed on the server. Otherwise,
the create operation will fail with a message indicating that db2udp.dll
cannot be loaded.
------------------------------------------------------------------------
1.44 Microsoft SNA Server and SNA Multisite Update (Two Phase Commit)
Support
Host and AS/400 applications cannot access DB2 UDB servers using SNA two
phase commit when Microsoft SNA Server is the SNA product in use. Any DB2
UDB publications indicating this is supported are incorrect. IBM
Communications Server for Windows NT Version 5.02 or greater is required.
Note: Applications accessing host and AS/400 database servers using DB2
UDB for Windows can use SNA two phase commit using Microsoft SNA
Server Version 4 Service Pack 3 or greater.
------------------------------------------------------------------------
1.45 DB2's SNA SPM Fails to Start After Booting Windows
If you are using Microsoft SNA Server Version 4 SP3 or later, please verify
that DB2's SNA SPM started properly after a reboot. Check the
\sqllib\<instance name>\db2diag.log file for entries that are similar to
the following:
2000-04-20-13.18.19.958000 Instance:DB2 Node:000
PID:291(db2syscs.exe) TID:316 Appid:none
common_communication sqlccspmconnmgr_APPC_init Probe:19
SPM0453C Sync point manager did not start because Microsoft SNA Server has not
been started.
2000-04-20-13.18.23.033000 Instance:DB2 Node:000
PID:291(db2syscs.exe) TID:302 Appid:none
common_communication sqlccsna_start_listen Probe:14
DIA3001E "SNA SPM" protocol support was not successfully started.
2000-04-20-13.18.23.603000 Instance:DB2 Node:000
PID:291(db2syscs.exe) TID:316 Appid:none
common_communication sqlccspmconnmgr_listener Probe:6
DIA3103E Error encountered in APPC protocol support. APPC verb "APPC(DISPLAY 1
BYTE)". Primary rc was "F004". Secondary rc was "00000000".
If such entries exist in your db2diag.log, and the time stamps match your
most recent reboot time, you must:
1. Invoke db2stop.
2. Start the SnaServer service (if not already started).
3. Invoke db2start.
Check the db2diag.log file again to verify that the entries are no longer
appended.
------------------------------------------------------------------------
1.46 Additional Locale Setting for DB2 for Linux in a Japanese and
Simplified Chinese Linux Environment
An additional locale setting is required when you want to use the Java GUI
tools, such as the Control Center, on a Japanese or Simplified Chinese
Linux system. Japanese or Chinese characters cannot be displayed correctly
without this setting. Please include the following setting in your user
profile, or run it from the command line before every invocation of the
Control Center.
For a Japanese system:
export LC_ALL=ja_JP
For a Simplified Chinese system:
export LC_ALL=zh_CN
------------------------------------------------------------------------
1.47 Locale Setting for the DB2 Administration Server
Please ensure that the locale of the DB2 Administration Server instance is
compatible with the locale of the DB2 instance. Otherwise, the DB2 instance
cannot communicate with the DB2 Administration Server.
If the LANG environment variable is not set in the user profile of the DB2
Administration Server, the DB2 Administration Server will be started with
the default system locale. If the default system locale is not defined, the
DB2 Administration Server will be started with code page 819. If the DB2
instance uses one of the DBCS locales, and the DB2 Administration Server is
started with code page 819, the instance will not be able to communicate
with the DB2 Administration Server. The locale of the DB2 Administration
Server and the locale of the DB2 instance must be compatible. For example,
on a Simplified Chinese Linux system, "LANG=zh_CN" should be set in the DB2
Administration Server's user profile.
------------------------------------------------------------------------
1.48 Java Method Signature in PARAMETER STYLE JAVA Procedures and Functions
If specified after the Java method name in the EXTERNAL NAME clause of the
CREATE PROCEDURE or CREATE FUNCTION statement, the Java method signature
must correspond to the default Java type mapping for the signature
specified after the procedure or function name. For example, the default
Java mapping of the SQL type INTEGER is "int", not "java.lang.Integer".
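As an illustrative sketch (the procedure, class, and method names are
hypothetical, and the exact EXTERNAL NAME syntax should be checked against
the SQL Reference), a procedure declared with an INTEGER parameter would
specify the primitive type in the method signature:

```
CREATE PROCEDURE MYPROC (IN val INTEGER)
  LANGUAGE JAVA PARAMETER STYLE JAVA
  EXTERNAL NAME 'MyClass.myMethod(int)'
```

A signature of (java.lang.Integer) here would not match the default
mapping and would fail.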
------------------------------------------------------------------------
1.49 Shortcuts Not Working
In some languages, for the Control Center on UNIX based systems and on
OS/2, some keyboard shortcuts do not work. Please use the mouse to select
options.
------------------------------------------------------------------------
1.50 Service Account Requirements for DB2 on Windows NT and Windows 2000
During the installation of DB2 for Windows NT or Windows 2000, the setup
program creates several Windows services and assigns a service account for
each service. To run DB2 properly, the setup program grants the following
user rights to the service account that is associated with the DB2 service:
* Act as part of the operating system
* Create a token object
* Increase quotas
* Log on as a service
* Replace a process level token.
If you want to use a different service account for the DB2 services, you
must grant these user rights to the service account.
In addition to these user rights, the service account must also have write
access to the directory where the DB2 product is installed.
The service account for the DB2 Administration Server service (DB2DAS00
service) must also have the authority to start and stop other DB2 services
(that is, the service account must belong to the Power Users group) and
have DB2 SYSADM authority against any DB2 instances that it administers.
------------------------------------------------------------------------
Administration Guide: Planning
------------------------------------------------------------------------
2.1 Chapter 14. DB2 and High Availability on Sun Cluster 2.2
DB2 Connect is supported on Sun Cluster 2.2 if:
* The protocol to the host is TCP/IP (not SNA)
* Two-phase commit is not used. This restriction is relaxed if the user
configures the SPM log to be on a shared disk (this can be done
through the spm_log_path database manager configuration parameter),
and the failover machine has an identical TCP/IP configuration (the
same host name, IP address, and so on).
------------------------------------------------------------------------
2.2 Appendix E. National Language Support
The first paragraph in the section entitled "Deriving Code Page Values"
states the following:
The application code page is derived from the active environment
when the database connection is made. If the DB2CODEPAGE
registry variable is set, its value is taken as the application code page.
This is not always true for applications coded to use the CLI interface.
The CLI code layer will use the locale settings in some cases, even if the
user has set the DB2CODEPAGE registry variable.
------------------------------------------------------------------------
Administration Guide: Implementation
------------------------------------------------------------------------
3.1 Chapter 4. Altering a Database
Under the section "Altering a Table Space", the following new sections are
to be added:
3.1.1 Adding a Container to an SMS Table Space on a Partition
You can add a container to an SMS table space on a partition (or node) that
currently has no containers.
The contents of the table space are rebalanced across all containers.
Access to the table space is not restricted during the rebalancing. If you
need to add more than one container, you should add them all at the same
time.
To add a container to an SMS table space using the command line, enter the
following:
ALTER TABLESPACE <name>
ADD ('<path>')
ON NODE (<partition_number>)
The partition specified by number, and every partition (or node) in the
range of partitions, must exist in the nodegroup on which the table space
is defined. A partition_number may only appear explicitly or within a range
in exactly one on-nodes-clause for the statement.
The following example shows how to add a new container to partition number
3 of the nodegroup used by table space "plans" on a UNIX based operating
system:
ALTER TABLESPACE plans
ADD ('/dev/rhdisk0')
ON NODE (3)
3.1.2 Switching the State of a Table Space
The SWITCH ONLINE clause of the ALTER TABLESPACE statement can be used to
move table spaces in an OFFLINE state to an ONLINE state if the containers
associated with that table space have become accessible. The table space is
moved to an ONLINE state while the rest of the database is still up and
being used.
An alternative to the use of this clause is to disconnect all applications
from the database and then to have the applications connect to the database
again. This moves the table space from an OFFLINE state to an ONLINE state.
To switch the table space to an ONLINE state using the command line, enter:
ALTER TABLESPACE <name>
SWITCH ONLINE
------------------------------------------------------------------------
3.2 Chapter 8. Recovering a Database
Under the section "Tivoli Storage Manager", subsection "Managing Backups
and Log Archives on TSM", in the third paragraph just before "Examples of
Using db2adutl:", the last sentence is missing information on the
right-hand side of the page. The missing information is:
You can also qualify the command with OLDER [THAN] <timestamp>
or <n> DAYS. This will delete backups older than the given date (timestamp)
or older than the days specified. You can also select a range of logs to be
listed instead of seeing all of the logs. A specific backup for deletion can
be selected by using the TAKEN AT <timestamp> parameter.
------------------------------------------------------------------------
3.3 Appendix C. User Exit for Database Recovery
Under the section "Archive and Retrieve Considerations", the following
paragraph is no longer true and should be removed from the list:
A user exit may be interrupted if a remote client loses its connection
to the DB2 server. That is, while handling the archiving of logs
through a user exit, one of the other SNA-connected clients dies
or powers off resulting in a signal (SIGUSR1) being sent to the server.
The server passes the signal to the user exit causing an interrupt.
The user exit program can be modified to check for an interrupt
and then continue.
In the Error Handling section, the contents of Note 3 in the Notes list
should be replaced with the following information:
* User exit program requests are suspended for five minutes. During this
time, all requests are ignored including the log file request that
caused the return code.
Following the five minute suspension in processing requests, the next
request is processed. If no error occurs with the processing of this
request, then processing of new user exit program requests continues
and DB2 will reissue the archive request for the log files that either
failed to archive previously, or were suspended. If a return code of
greater than 8 is generated during the retry, requests are suspended
for an additional five minutes. The five minute suspensions continue
until the problem is corrected or the database is stopped and
restarted.
Once all applications disconnect from the database and the database is
reopened, DB2 will issue the archive request for any log file that
might not have been successfully archived in the previous use of the
database.
If the user exit program fails to archive log files, your disk can be
filled with log files and performance may be degraded because of extra
work to format these log files. Once the disk becomes full, the
database manager will not accept further application requests for
database changes.
If the user exit program was called to retrieve log files,
roll-forward recovery is suspended but not stopped unless a stop was
specified in the ROLLFORWARD DATABASE utility. If a stop was not
specified, you can correct the problem and resume recovery.
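The suspension behavior described in the note above can be sketched in code. The Python below is an illustrative model only (not DB2 source): the class, method names, and return-value strings are invented for the example, which shows how requests are ignored during the five-minute suspension window and how the failed log file is retried once the window ends.

```python
import time

SUSPEND_SECONDS = 5 * 60  # five-minute suspension window


class UserExitScheduler:
    """Toy model of how DB2 reacts to user exit failures (return code > 8)."""

    def __init__(self, archive_fn, clock=time.monotonic):
        self.archive_fn = archive_fn  # callable returning a user exit return code
        self.clock = clock
        self.suspended_until = 0.0
        self.pending = []             # log files awaiting archive

    def request_archive(self, log_file):
        now = self.clock()
        self.pending.append(log_file)
        if now < self.suspended_until:
            return "ignored"          # requests during suspension are ignored
        # retry the oldest pending request first (reissue semantics)
        rc = self.archive_fn(self.pending[0])
        if rc > 8:
            # a failing retry starts another five-minute suspension
            self.suspended_until = now + SUSPEND_SECONDS
            return "suspended"
        self.pending.pop(0)
        return "archived"
```

A failing archive leaves the log file in the pending list, so it is reissued on the first request after the suspension ends, mirroring the retry behavior described above.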
------------------------------------------------------------------------
3.4 Appendix I. High Speed Inter-node Communications
The following section has been updated:
3.4.1 Enabling DB2 to Run Using VI
Detailed installation information is found in DB2 Enterprise - Extended
Edition for Windows Quick Beginnings.
After completing the installation of DB2 as documented in DB2 Enterprise -
Extended Edition for Windows Quick Beginnings, set the following DB2
registry variables and carry out the following tasks on each database
partition server in the instance:
* Set DB2_VI_ENABLE=ON
Use the db2set command to modify the value for the registry variable.
Use the db2_all command to run the db2set command on all database
partition servers in the instance. You must be logged on with a user
account that is a member of the Administrators group to run the
db2_all command.
In the following example, the ; character is placed inside the double
quotation marks to allow the request to run concurrently on all the
database partition servers in the instance:
db2_all ";db2set DB2_VI_ENABLE=ON"
For more information about the db2_all command, see "Issuing Commands
to Multiple Database Partition Servers" in the Administration Guide:
Implementation.
* Set DB2_VI_DEVICE=nic0
For example:
db2_all ";db2set DB2_VI_DEVICE=nic0"
Note:With Synfinity Interconnect, this variable should be set to
     DB2_VI_DEVICE=VINIC. The device name (VINIC) must be in upper
     case.
* Set DB2_VI_VIPL=vipl.dll
For example:
db2_all ";db2set DB2_VI_VIPL=vipl.dll"
Note:The value used in the example is the default for the registry
variable. For more information on the registry variables, see
Administration Guide: Performance.
* Enter db2start on the MPP instance.
* Review the db2diag.log file. There should be one message for each
partition stating that "VI is enabled."
* Fast Communications Manager (FCM) configuration parameters may need to
be updated. Should you encounter a problem as a result of resource
constraints involving FCM, you should raise the values of the FCM
configuration parameters. If you are moving from another high speed
interconnect environment where you have increased the values for the
FCM configuration parameters, you may need to lower these values.
Also, on Windows NT, you may be required to set the DB2NTMEMSIZE
registry variable to override the DB2 defaults. Refer to
Administration Guide: Performance for more information on the registry
variables.
------------------------------------------------------------------------
Administration Guide: Performance
------------------------------------------------------------------------
4.1 Chapter 5. System Catalog Statistics
The following section requires a change:
4.1.1 Collecting and Using Distribution Statistics
In the subsection called "Example of Impact on Equality Predicates", there
is a discussion of a predicate C <= 10. The error is stated as being -86%.
This is incorrect. The sentence at the end of the paragraph should read:
Assuming a uniform data distribution and using formula (1), the number of
rows that satisfy the predicate is estimated as 1, an error of -87.5%.
In the subsection called "Example of Impact on Equality Predicates", there
is a discussion of a predicate C > 8.5 AND C <= 10. The estimate of the r_2
value using linear interpolation must be changed to the following:
r_2 ~= (10 - 8.5) / (100 - 8.5) x (number of rows with value > 8.5 and <= 100.0)
r_2 ~= (10 - 8.5) / (100 - 8.5) x (10 - 7)
r_2 ~= (1.5 / 91.5) x 3
r_2 ~= 0
The paragraph following this new example must also be modified to read as
follows:
The final estimate is r_1 + r_2 ~= 7,
and the error is only -12.5%.
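These error figures can be verified with a few lines of arithmetic. The Python sketch below assumes, as the example implies, that 8 rows actually satisfy each predicate; the estimation_error helper is invented for the illustration.

```python
def estimation_error(estimate, actual):
    """Relative error of a cardinality estimate, as a percentage."""
    return (estimate - actual) / actual * 100.0

# Uniform-distribution estimate for C <= 10 (formula (1)): 1 row vs 8 actual rows
print(estimation_error(1, 8))   # -87.5

# Linear interpolation for the range predicate C > 8.5 AND C <= 10
r_2 = (10 - 8.5) / (100 - 8.5) * (10 - 7)  # ~0.049, i.e. roughly 0
r_1 = 7                                     # rows estimated for C <= 8.5
print(estimation_error(r_1 + round(r_2), 8))  # -12.5
```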
------------------------------------------------------------------------
4.2 Chapter 6. Understanding the SQL Compiler
The following sections require changes:
4.2.1 Replicated Summary Tables
The following information will replace or be added to the existing
information already in this section:
Replicated summary tables can be used to assist in the collocation of
joins. For example, if you had a star schema where there is a large fact
table spread across twenty nodes, then the joins between the fact table and
the dimension tables are most efficient if these tables are collocated.
By placing all of the tables in the same nodegroup, at most one dimension
table could be partitioned correctly for a collocated join. None of the
other dimension tables could be used in a collocated join, because their
join columns would not correspond to the fact table's partitioning key.
For example, you could have a table called FACT (C1, C2, C3, ...)
partitioned on C1; and a table called DIM1 (C1, dim1a, dim1b, ...)
partitioned on C1; and a table called DIM2 (C2, dim2a, dim2b, ...)
partitioned on C2; and so on.
From this example, you can see that the join between FACT and DIM1 is
collocated, because the predicate DIM1.C1 = FACT.C1 compares the columns
on which both tables are partitioned.
The join between DIM2 with the predicate WHERE DIM2.C2 = FACT.C2 cannot be
collocated because FACT is partitioned on column C1 and not on column C2.
In this case, it would be good to replicate DIM2 in the fact table's
nodegroup. In this way we can do the join locally on each partition.
Note:The replicated summary tables discussed here involve
     intra-database replication. Inter-database replication involves
     subscriptions, control tables, and data located in different
     databases and on different operating systems. If you are interested
     in inter-database replication, refer to the Replication Guide and
     Reference for more information.
When creating a replicated summary table, the source table could be a
single-node nodegroup table or a multi-node nodegroup table. In most cases,
the table is small and can be placed in a single-node nodegroup. You may
place a limit on the data to be replicated by specifying only a subset of
the columns from the table, or by limiting the number of rows through the
predicates used, or by using both methods when creating the replicated
summary table.
Note:The data capture option is not required for replicated summary
tables to function.
The replicated summary table could also be created in a multi-node
nodegroup. The nodegroup is the same as the nodegroup in which you have
placed your large tables. In this case, copies of the source table are
created on all of the partitions of the nodegroup. Joins between a large
fact table and the dimension tables have a better chance of being done
locally in this environment rather than having to broadcast the source
table to all partitions.
Indexes on replicated tables are not created automatically. You can
create indexes on a replicated table, and they may differ from the
indexes on the source table.
Note:You cannot create unique indexes (or define any constraints) on
     replicated tables. This prevents constraint violations that are
     not present on the source tables. These constraints are disallowed
     even if the same constraint exists on the source table.
After using the REFRESH statement, you should run RUNSTATS on the
replicated table as you would any other table.
The replicated tables can be referenced directly within a query. However,
you cannot use the NODENUMBER() predicate with a replicated table to see
the table data on a particular partition.
To determine whether a replicated summary table was used for a query that
references the source table, you can use the EXPLAIN facility. First,
ensure that the EXPLAIN tables exist. Then, create an explain plan for
the SELECT statement you are interested in. Finally, use the db2exfmt
utility to format the EXPLAIN output.
The access plan chosen by the optimizer may or may not use the replicated
summary table depending on the information that needs to be joined. Not
using the replicated summary table could occur if the optimizer determined
that it would be cheaper to broadcast the original source table to the
other partitions in the nodegroup.
4.2.2 Data Access Concepts and Optimization
The section "Multiple Index Access" under "Index Scan Concepts" has
changed.
Add the following information before the note at the end of the section:
To realize the performance benefits of dynamic bitmaps when scanning
multiple indexes, it may be necessary to change the value of the sort heap
size (sortheap) database configuration parameter, and the sort heap
threshold (sheapthres) database manager configuration parameter.
Additional sort heap space is required when dynamic bitmaps are used in
access plans. When sheapthres is set relatively close to sortheap (that
is, less than two or three times sortheap per concurrent query), dynamic
bitmaps with multiple index access must work with much less memory than
the optimizer anticipated.
The solution is to increase the value of sheapthres relative to sortheap.
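As a rough illustration of this tuning advice, the hypothetical helper below computes how many multiples of sortheap each concurrent query can claim under the instance-wide sheapthres ceiling. The function name, parameter values, and the "two to three times" interpretation are illustrative assumptions drawn from the paragraph above, not a DB2 API.

```python
def bitmap_memory_headroom(sortheap, sheapthres, concurrent_queries):
    """Multiple of sortheap available to each concurrent query under
    the instance-wide sheapthres ceiling. Values below roughly 2-3
    suggest dynamic bitmaps will run with less memory than the
    optimizer anticipated (all sizes in 4 KB pages)."""
    return sheapthres / (sortheap * concurrent_queries)

# sheapthres only 1.5x sortheap per query: too tight for multiple index access
print(bitmap_memory_headroom(sortheap=256, sheapthres=1536, concurrent_queries=4))  # 1.5

# raising sheapthres relative to sortheap restores headroom
print(bitmap_memory_headroom(sortheap=256, sheapthres=4096, concurrent_queries=4))  # 4.0
```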
The section "Search Strategies for Star Join" under "Predicate Terminology"
has changed.
Add the following information at the end of the section:
The dynamic bitmaps created and used as part of the Star Join technique
use sort heap memory. See Chapter 13, "Configuring DB2" in the
Administration Guide: Performance manual for more information on the Sort
Heap Size (sortheap) database configuration parameter.
------------------------------------------------------------------------
4.3 Chapter 13. Configuring DB2
The following parameters require changes:
4.3.1 Sort Heap Size (sortheap)
The "Recommendation" section has changed. The information here should now
read:
When working with the sort heap, you should consider the following:
* Appropriate indexes can minimize the use of the sort heap.
* Hash join buffers and dynamic bitmaps (used for index ANDing and Star
Joins) use sort heap memory. Increase the size of this parameter when
these techniques are used.
* Increase the size of this parameter when frequent large sorts are
required.
* ... (the rest of the items are unchanged)
4.3.2 Sort Heap Threshold (sheapthres)
The second-to-last paragraph in the description of this parameter has changed.
The paragraph should now read:
Examples of those operations that use the sort heap include: sorts, dynamic
bitmaps (used for index ANDing and Star Joins), and operations where the
table is in memory.
The following information is to be added to the description of this
parameter:
There is no reason to increase the value of this parameter when moving from
a single-node to a multi-node environment. Once you have tuned the database
and database manager configuration parameters on a single node (in a DB2
EE) environment, the same values will in most cases work well in a
multi-node (in a DB2 EEE) environment.
The Sort Heap Threshold parameter, as a database manager configuration
parameter, applies across the entire DB2 instance. The only way to set this
parameter to different values on different nodes or partitions, is to
create more than one DB2 instance. This will require managing different DB2
databases over different nodegroups. Such an arrangement defeats the
purpose of many of the advantages of a partitioned database environment.
------------------------------------------------------------------------
4.4 Appendix A. DB2 Registry and Environment Variables
The following registry variables are new or require changes:
4.4.1 Table of New and Changed Registry Variables
Table 1. Registry Variables
Variable Name Operating Values
System
Description
DB2MAXFSCRSEARCH All Default=5
Values: -1, 1 to 33554
Specifies the number of free space control records to search when adding
a record to a table. The default is to search five free space control
records. Modifying this value allows you to balance insert speed with
space reuse. Use large values to optimize for space reuse. Use small
values to optimize for insert speed. Setting the value to -1 forces the
database manager to search all free space control records.
DLFM_TSM_MGMTCLASS AIX, Windows NT Default: the default TSM management class
Values: any valid TSM management class
Specifies which TSM management class to use to archive and retrieve
linked files. If there is no value set for this variable, the default TSM
management class is used.
DB2_CORRELATED_PREDICATES All Default=ON
Values: ON or OFF
The default for this variable is ON. When there are unique indexes on
correlated columns in a join, and this registry variable is ON, the
optimizer attempts to detect and compensate for correlation of join
predicates. When this registry variable is ON, the optimizer uses the
KEYCARD information of unique index statistics to detect cases of
correlation, and dynamically adjusts the combined selectivities of the
correlated predicates, thus obtaining a more accurate estimate of the
join size and cost.
DB2_VI_DEVICE Windows NT Default=null
Values: nic0 or VINIC
Specifies the symbolic name of the device or Virtual Interface Provider
Instance associated with the Network Interface Card (NIC). Independent
hardware vendors (IHVs) each produce their own NIC. Only one NIC is
allowed per Windows NT machine; multiple logical nodes on the same
physical machine share the same NIC. The symbolic device name
"VINIC" must be in upper case and can only be used with Synfinity
Interconnect. All other currently supported implementations use "nic0" as
the symbolic device name.
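The insert-speed versus space-reuse trade-off controlled by DB2MAXFSCRSEARCH can be illustrated with a toy model. The Python sketch below is not DB2's actual free space control record implementation; the function and data layout are invented for the example.

```python
def find_insert_page(fscrs, record_size, max_search):
    """Toy model of DB2MAXFSCRSEARCH: search up to max_search free
    space control records (FSCRs) for a page with room; -1 means
    search them all.

    fscrs: list of free bytes tracked by each FSCR.
    Returns the index of the first FSCR with enough space, or None
    (meaning the record is appended to the end of the table instead).
    """
    limit = len(fscrs) if max_search == -1 else min(max_search, len(fscrs))
    for i in range(limit):
        if fscrs[i] >= record_size:
            return i
    return None

free = [0, 0, 0, 0, 0, 0, 500]
# Default of 5: the only page with space is never examined -> fast append
print(find_insert_page(free, 100, 5))   # None
# -1 searches every FSCR: better space reuse at the cost of insert speed
print(find_insert_page(free, 100, -1))  # 6
```

A small value stops the search early and appends quickly; a large value (or -1) reclaims scattered free space at the cost of a longer search per insert.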
------------------------------------------------------------------------
4.5 Appendix C. SQL Explain Tools
The section titled "Running db2expln and dynexpln" should have the last
paragraph replaced with the following:
To run db2expln, you must have SELECT privilege to the system catalog views
as well as EXECUTE authority for the db2expln package. To run dynexpln, you
must have BINDADD authority for the database, the schema you are using to
connect to the database must exist or you must have the IMPLICIT_SCHEMA
authority for the database, and you must have any privileges needed for the
SQL statements being explained. (Note that if you have SYSADM or DBADM
authority, you will automatically have all these authorization levels.)
------------------------------------------------------------------------
Administrative API Reference
------------------------------------------------------------------------
5.1 db2ConvMonStream
In the Usage Notes, the structure for the snapshot variable datastream type
SQLM_ELM_SUBSECTION should be sqlm_subsection.
------------------------------------------------------------------------
5.2 db2XaGetInfo (new API)
db2XaGetInfo - Get Information for Resource Manager
Extracts information for a particular resource manager once an xa_open call
has been made.
Authorization
None
Required Connection
Database
Version
sqlxa.h
C API Syntax
/* File: sqlxa.h */
/* API: Get Information for Resource Manager */
/* ... */
SQL_API_RC SQL_API_FN
db2XaGetInfo (
db2Uint32 versionNumber,
void * pParmStruct,
struct sqlca * pSqlca);
typedef SQL_STRUCTURE db2XaGetInfoStruct
{
db2int32 iRmid;
struct sqlca oLastSqlca;
} db2XaGetInfoStruct;
API Parameters
versionNumber
Input. Specifies the version and release level of the structure passed
in as the second parameter, pParmStruct.
pParmStruct
Input. A pointer to the db2XaGetInfoStruct structure.
pSqlca
Output. A pointer to the sqlca structure. For more information about
this structure, see the Administrative API Reference.
iRmid
Input. Specifies the resource manager for which information is
required.
oLastSqlca
Output. Contains the sqlca for the last XA API call.
Note:Only the sqlca that resulted from the last failing XA API can
be retrieved.
------------------------------------------------------------------------
5.3 db2XaListIndTrans (new API that supersedes sqlxphqr)
db2XaListIndTrans - List Indoubt Transactions
Provides a list of all indoubt transactions for the currently connected
database.
Scope
This API affects only the node on which it is issued.
Authorization
One of the following:
* sysadm
* dbadm
Required Connection
Database
Version
db2ApiDf.h
C API Syntax
/* File: db2ApiDf.h */
/* API: List Indoubt Transactions */
/* ... */
SQL_API_RC SQL_API_FN
db2XaListIndTrans (
db2Uint32 versionNumber,
void * pParmStruct,
struct sqlca * pSqlca);
typedef SQL_STRUCTURE db2XaListIndTransStruct
{
db2XaRecoverStruct * piIndoubtData;
db2Uint32 iIndoubtDataLen;
db2Uint32 oNumIndoubtsReturned;
db2Uint32 oNumIndoubtsTotal;
db2Uint32 oReqBufferLen;
} db2XaListIndTransStruct;
typedef SQL_STRUCTURE db2XaRecoverStruct
{
sqluint32 timestamp;
SQLXA_XID xid;
char dbalias[SQLXA_DBNAME_SZ];
char applid[SQLXA_APPLID_SZ];
char sequence_no[SQLXA_SEQ_SZ];
char auth_id[SQL_USERID_SZ];
char log_full;
char connected;
char indoubt_status;
char originator;
char reserved[8];
} db2XaRecoverStruct;
API Parameters
versionNumber
Input. Specifies the version and release level of the structure passed
in as the second parameter, pParmStruct.
pParmStruct
Input. A pointer to the db2XaListIndTransStruct structure.
pSqlca
Output. A pointer to the sqlca structure. For more information about
this structure, see the Administrative API Reference.
piIndoubtData
Input. A pointer to the application supplied buffer where indoubt data
will be returned. The indoubt data is in db2XaRecoverStruct format.
The application can traverse the list of indoubt transactions by using
the size of the db2XaRecoverStruct structure, starting at the address
provided by this parameter.
If the value is NULL, DB2 will calculate the size of the buffer
required and return this value in oReqBufferLen. oNumIndoubtsTotal
will contain the total number of indoubt transactions. The application
may allocate the required buffer size and issue the API again.
oNumIndoubtsReturned
Output. The number of indoubt transaction records returned in the
buffer specified by piIndoubtData.
oNumIndoubtsTotal
Output. The total number of indoubt transaction records available at
the time of API invocation. If the piIndoubtData buffer is too small
to contain all the records, oNumIndoubtsTotal will be greater than
oNumIndoubtsReturned. The application may reissue the API in order to
obtain all records.
Note:This number may change between API invocations as a result of
automatic or heuristic indoubt transaction resynchronisation,
or as a result of other transactions entering the indoubt
state.
oReqBufferLen
Output. Required buffer length to hold all indoubt transaction records
at the time of API invocation. The application can determine the
required buffer size by calling the API with piIndoubtData set to
NULL. This value can then be used to allocate the required buffer, and
the API can be issued again with piIndoubtData set to the address of
the allocated buffer.
Note:The required buffer size may change between API invocations as
a result of automatic or heuristic indoubt transaction
resynchronisation, or as a result of other transactions
entering the indoubt state. The application may allocate a
larger buffer to account for this.
timestamp
Output. Specifies the time when the transaction entered the indoubt
state. This is the number of seconds the local time zone is displaced
from Coordinated Universal Time.
xid
Output. Specifies the XA identifier assigned by the transaction
manager to uniquely identify a global transaction.
dbalias
Output. Specifies the alias of the database where the indoubt
transaction is found.
applid
Output. Specifies the application identifier assigned by the database
manager for this transaction.
sequence_no
Output. Specifies the sequence number assigned by the database manager
as an extension to the applid.
auth_id
Output. Specifies the authorization ID of the user who ran the
transaction.
log_full
Output. Indicates whether or not this transaction caused a log full
condition. Valid values are:
SQLXA_TRUE
This indoubt transaction caused a log full condition.
SQLXA_FALSE
This indoubt transaction did not cause a log full condition.
connected
Output. Indicates whether or not the application is connected. Valid
values are:
SQLXA_TRUE
The transaction is undergoing normal syncpoint processing, and is
waiting for the second phase of the two-phase commit.
SQLXA_FALSE
The transaction was left indoubt by an earlier failure, and is
now waiting for resynchronisation from the transaction manager.
indoubt_status
Output. Indicates the status of this indoubt transaction. Valid values
are:
SQLXA_TS_PREP
The transaction is prepared. The connected parameter can be used
to determine whether the transaction is waiting for the second
phase of normal commit processing or whether an error occurred
and resynchronisation with the transaction manager is required.
SQLXA_TS_HCOM
The transaction has been heuristically committed.
SQLXA_TS_HROL
The transaction has been heuristically rolled back.
SQLXA_TS_MACK
The transaction is missing commit acknowledgement from a node in
a partitioned database.
SQLXA_TS_END
The transaction has ended at this database. This transaction may
be re-activated, committed, or rolled back at a later time. It is
also possible that the transaction manager encountered an error
and the transaction will not be completed. If this is the case,
this transaction requires heuristic actions, because it may be
holding locks and preventing other applications from accessing
data.
Usage Notes
A typical application will perform the following steps after setting the
current connection to the database or to the partitioned database
coordinator node:
1. Call db2XaListIndTrans with piIndoubtData set to NULL. This will
return values in oReqBufferLen and oNumIndoubtsTotal.
2. Use the returned value in oReqBufferLen to allocate a buffer. This
buffer may not be large enough if additional transactions have
entered the indoubt state since the initial invocation of this API
that obtained oReqBufferLen. The application may provide a buffer
larger than oReqBufferLen.
3. Determine if all indoubt transaction records have been obtained by
comparing oNumIndoubtsReturned to oNumIndoubtsTotal. If
oNumIndoubtsTotal is greater than oNumIndoubtsReturned, the
application can repeat the above steps.
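The two-call pattern in these steps can be sketched with a stub that mimics the API's buffer semantics. The Python below is illustrative only: list_indoubt_trans and RECORD_SIZE are invented stand-ins for the real C API and sizeof(db2XaRecoverStruct).

```python
RECORD_SIZE = 32  # stand-in for sizeof(db2XaRecoverStruct)

def list_indoubt_trans(records, buffer_len):
    """Stub with db2XaListIndTrans-like semantics (illustrative only).

    With buffer_len == 0 (piIndoubtData NULL), no records are returned,
    but the required buffer length and total record count are reported.
    """
    total = len(records)
    fit = min(total, buffer_len // RECORD_SIZE)
    return {
        "returned": records[:fit],
        "num_returned": fit,
        "num_total": total,
        "req_buffer_len": total * RECORD_SIZE,
    }

indoubts = ["xid-1", "xid-2", "xid-3"]

# Step 1: call with a NULL buffer to learn the required size
probe = list_indoubt_trans(indoubts, 0)
# Step 2: allocate a buffer of oReqBufferLen bytes and call again
result = list_indoubt_trans(indoubts, probe["req_buffer_len"])
# Step 3: verify all records were obtained, else repeat
assert result["num_returned"] == result["num_total"]
```

Allocating slightly more than oReqBufferLen, as the notes suggest, guards against transactions entering the indoubt state between the two calls.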
See Also
"sqlxhfrg - Forget Transaction Status", "sqlxphcm - Commit an Indoubt
Transaction", and "sqlxphrl - Roll Back an Indoubt Transaction" in the
Administrative API Reference.
------------------------------------------------------------------------
5.4 sqlaintp - Get Error Message
The following usage note is to be added to the description of this API:
In a multi-threaded application, sqlaintp must be attached to a valid context;
otherwise, the message text for SQLCODE -1445 cannot be obtained.
------------------------------------------------------------------------
Application Building Guide
------------------------------------------------------------------------
6.1 Chapter 1. Introduction
6.1.1 Supported Software
AIX
DB2 for AIX 7.1 supports stored procedures built with Micro Focus COBOL
compiler 4.1.30 on AIX 4.2.1. For information on DB2 support for Micro
Focus COBOL stored procedures on AIX 4.3, please see the DB2 Application
Development Web page:
http://www.ibm.com/software/data/db2/udb/ad
HP-UX
The listed version for the C++ compiler should be the following:
HP aC++, Version A.03.13
Note:HP does not support binary compatibility among objects compiled with
old and new compilers, so this will force recompiles of any C++
application built to access DB2 on HP-UX. C++ applications must also be
built to handle exceptions with this new compiler.
Following is the URL for the aCC transition guide:
http://www.hp.com/esy/lang/cpp/tguide. The C++ incompatibility portion
is here:
http://www.hp.com/esy/lang/cpp/tguide/transcontent.html#RN.CVT.1.2
http://www.hp.com/esy/lang/cpp/tguide/transcontent.html#RN.CVT.3.3
The C vs C++ portion is here:
http://www.hp.com/esy/lang/cpp/tguide/transcontent.html#RN.CVT.3.3.1
Even though C and aCC are compatible, when using the two different
object types, the object containing "main" must be compiled with aCC,
and the final executable must be linked with aCC.
OS/2
The listed versions for C/C++ compiler should be the following:
IBM VisualAge C++ for OS/2 Version 3.0, Version 3.6.5, and Version 4.0
Windows 32-bit Operating Systems
The listed versions for the IBM VisualAge C++ compiler should be the
following:
IBM VisualAge C++ for Windows Versions 3.6.5 and 4.0
6.1.2 Sample Programs
The following should be added to the "Object Linking and Embedding Samples"
section:
salarycltvc
A Visual C++ DB2 CLI sample that calls the Visual Basic stored
procedure, salarysrv.
SALSVADO
A sample OLE automation stored procedure (SALSVADO) and client
(SALCLADO), implemented in 32-bit Visual Basic and ADO, that
calculates the median salary in table staff2.
The following should be added to the "Log Management User Exit Samples"
section:
Applications on AIX using the ADSM API Client at level 3.1.6 and higher
must be built with the xlc_r or xlC_r compiler invocations, not with xlc or
xlC, even if the applications are single-threaded. This ensures that the
libraries are thread-safe. This applies to the Log Management User Exit
Sample, db2uext2.cadsm.
If you have an application that is compiled with a non thread-safe library,
you can apply fixtest IC21925E or contact your application provider. The
fixtest is available on the index.storsys.ibm.com anonymous ftp server.
This will regress the ADSM API level to 3.1.3.
------------------------------------------------------------------------
6.2 Chapter 3. General Information for Building DB2 Applications
6.2.1 Build Files, Makefiles, and Error-checking Utilities
The entry for bldevm in table 16 should read:
bldevm
The event monitor sample program, evm (only available on AIX, OS/2,
and Windows 32-bit operating systems).
Table 17 should include the entries:
bldmevm
The event monitor sample program, evm, with the Microsoft Visual C++
compiler.
bldvevm
The event monitor sample program, evm, with the VisualAge C++
compiler.
------------------------------------------------------------------------
6.3 Chapter 4. Building Java Applets and Applications
6.3.1 Setting the Environment
If you are using IBM JDK 1.1.8 on supported platforms to build SQLJ
programs, a JDK build date of November 24, 1999 (or later) is required.
Otherwise you may get JNI panic errors during compilation.
If you are using IBM JDK 1.2.2 on supported platforms to build SQLJ
programs, a JDK build date of April 17, 2000 (or later) is required.
Otherwise, you may get Invalid Java type errors during compilation.
For sub-sections AIX and Solaris, replace the information on JDBC 2.0 with
the following:
Using the JDBC 2.0 Driver with Java Applications
The JDBC 1.22 driver is still the default driver on all operating systems.
To take advantage of the new features of JDBC 2.0, you must install JDK 1.2
support. Before executing an application that takes advantage of the new
features of JDBC 2.0, you must set your environment by issuing the usejdbc2
command from the sqllib/java12 directory. If you want your applications to
always use the JDBC 2.0 driver, consider adding the following line to your
login profile, such as .profile, or your shell initialization script, such
as .bashrc, .cshrc, or .kshrc:
. sqllib/java12/usejdbc2
Ensure that this command is placed after the command to run db2profile, as
usejdbc2 should be run after db2profile.
To switch back to the JDBC 1.22 driver, execute the following command from
the sqllib/java12 directory:
. usejdbc1
Using the JDBC 2.0 Driver with Java Stored Procedures and UDFs
To use the JDBC 2.0 driver with Java stored procedures and UDFs, you must
set the environment for the fenced user ID for your instance. The default
fenced user ID is db2fenc1. To set the environment for the fenced user ID,
perform the following steps:
1. Add the following line to the fenced user ID profile, such as
.profile, or the fenced user ID shell initialization script, such as
.bashrc, .cshrc, or .kshrc:
. sqllib/java12/usejdbc2
2. Issue the following command from the CLP:
db2set DB2_USE_JDK12=1
To switch back to the JDBC 1.22 driver support for Java UDFs and stored
procedures, perform the following steps:
1. Remove the following line from the fenced user ID profile, such as
.profile, or the fenced user ID shell initialization script, such as
.bashrc, .cshrc, or .kshrc:
. sqllib/java12/usejdbc2
2. Issue the following command from the CLP:
db2set DB2_USE_JDK12=
If you want your applications to always use the JDBC 2.0 driver, you can
add the following line to your login profile, such as .profile, or your
shell initialization script, such as .bashrc, .cshrc, or .kshrc:
. sqllib/java12/usejdbc2
Ensure that this command is placed after the command to run db2profile, as
usejdbc2 should be run after db2profile.
For sub-section Silicon Graphics IRIX, add the following:
When building SQLJ applications with the -o32 object type, using the Java
JIT compiler with JDK 1.2.2, if the SQLJ translator fails with a
segmentation fault, try turning off the JIT compiler with this command:
export JAVA_COMPILER=NONE
For sub-section Windows 32-bit Operating Systems, add the following:
Using the JDBC 2.0 Driver with Java Stored Procedures and UDFs
To use the JDBC 2.0 driver with Java stored procedures and UDFs, you must
set the environment by performing the following steps:
1. Issue the following command in the sqllib\java12 directory:
usejdbc2
2. Issue the following command from the CLP:
db2set DB2_USE_JDK12=1
To switch back to the JDBC 1.22 driver support for Java UDFs and stored
procedures, perform the following steps:
1. Issue the following command in the sqllib\java12 directory:
usejdbc1
2. Issue the following command from the CLP:
db2set DB2_USE_JDK12=
------------------------------------------------------------------------
6.4 Chapter 5. Building SQL Procedures.
6.4.1 Setting the SQL Procedures Environment
The following replaces the corresponding sections in the book.
6.4.1.1 Configuring the Compiler Environment
To create SQL procedures, configure DB2 to use a supported C or C++
compiler on the server by performing the following steps:
1. Set the DB2_SQLROUTINE_COMPILER_PATH DB2 registry variable to point to
the compiler environment file with the following command:
db2set DB2_SQLROUTINE_COMPILER_PATH=executable_file
where executable_file is the full path name of the C or C++ compiler
environment file. For example,
OS/2
for IBM VisualAge C++ for OS/2 Version 3.6:
db2set DB2_SQLROUTINE_COMPILER_PATH="c:\ibmcxxo\bin\setenv.cmd"
for IBM VisualAge C++ for OS/2 Version 4:
db2set DB2_SQLROUTINE_COMPILER_PATH="c:\ibmcpp40\bin\setenv.cmd"
Windows 32-bit operating systems
for Microsoft Visual C++ Version 5.0:
db2set DB2_SQLROUTINE_COMPILER_PATH="c:\devstudio\vc\bin\vcvars32.bat"
for Microsoft Visual C++ Version 6.0:
db2set DB2_SQLROUTINE_COMPILER_PATH="c:\Micros~1\vc98\bin\vcvars32.bat"
for IBM VisualAge C++ for Windows Version 3.6:
db2set DB2_SQLROUTINE_COMPILER_PATH="c:\ibmcxxw\bin\setenv.cmd"
for IBM VisualAge C++ for Windows Version 4:
db2set DB2_SQLROUTINE_COMPILER_PATH="c:\ibmcppw40\bin\setenv.cmd"
2. Create an executable file that sets the environment for the compiler.
This will be a command file on OS/2, a script file on UNIX, or a batch
file on Windows. The compiler may require path, include, and library
environment variables.
If you do not set the DB2_SQLROUTINE_COMPILER_PATH DB2 registry
variable, DB2 will create a default file at the first attempt to create
any SQL procedure. Depending on your operating system, this file will
have one of the following paths and file names:
OS/2
%DB2PATH%\function\routine\sr_cpath.cmd
UNIX
$HOME/sqllib/function/routine/sr_cpath
Windows
%DB2PATH%\function\routine\sr_cpath.bat
You can use this default file as long as you update it to reflect the
settings required for the server operating system and the C or C++
compiler you are using.
Note: On Windows NT and Windows 2000, you do not have to set the
DB2_SQLROUTINE_COMPILER_PATH DB2 registry variable if you store the
environment variables for your compiler as SYSTEM variables.
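As an illustration, a UNIX sr_cpath script for the AIX xlc compiler might
simply put the compiler on the PATH. The /usr/vac/bin path is an assumed
installation directory, not a value from these notes:

```shell
# Hypothetical sr_cpath contents: make the C compiler visible to DB2.
# /usr/vac/bin is an assumed compiler installation path.
PATH=/usr/vac/bin:$PATH
export PATH
```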
6.4.1.2 Customizing Compiler Options
DB2 provides a default compile command for one of the compilers it
supports on each platform. To use another compiler, or to customize the
C or C++ compiler options for SQL procedures, store the entire compile
command line, including all options, in the
DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable with the following
command:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=compiler_command
where compiler_command is the C or C++ compile command, including the
options and parameters required to create stored procedures.
In the compile command, use the keyword SQLROUTINE_FILENAME as a
placeholder for the file name of the generated SQC, C, PDB, DEF, EXP,
message log, and shared library files. For AIX only, use the keyword
SQLROUTINE_ENTRY as a placeholder for the entry point name.
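The keyword substitution amounts to plain string replacement. The sketch
below illustrates the idea with an invented template, file base name
(P1234567), and entry point (pgsjmp); none of these are values DB2
actually generates:

```shell
# Hypothetical compile command template, as it might be stored in the
# registry; single quotes keep $HOME literal, as db2set would store it.
template='xlc -I$HOME/sqllib/include SQLROUTINE_FILENAME.c -e SQLROUTINE_ENTRY -o SQLROUTINE_FILENAME'

# Invented per-procedure names standing in for what DB2 would generate
base=P1234567
entry=pgsjmp

# Substitute the keywords the way the text describes
cmd=$(printf '%s' "$template" | sed -e "s/SQLROUTINE_FILENAME/$base/g" \
                                    -e "s/SQLROUTINE_ENTRY/$entry/g")
echo "$cmd"
```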
The following are the default values for the DB2_SQLROUTINE_COMPILE_COMMAND
for C or C++ compilers on supported server platforms.
AIX
To use IBM C for AIX Version 3.6.6:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=xlc -H512 -T512 \
-I$HOME/sqllib/include SQLROUTINE_FILENAME.c -bE:SQLROUTINE_FILENAME.exp \
-e SQLROUTINE_ENTRY -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib -lc -ldb2
To use IBM C Set++ for AIX Version 3.6.6:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=xlC -H512 -T512 \
-I$HOME/sqllib/include SQLROUTINE_FILENAME.c -bE:SQLROUTINE_FILENAME.exp \
-e SQLROUTINE_ENTRY -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib -lc -ldb2
This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND
DB2 registry variable is not set.
Note: To compile 64-bit SQL procedures on AIX, add the -q64 option to
the above commands.
To use IBM VisualAge C++ for AIX Version 4:
db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld"
If you do not specify a configuration file after the vacbld command, DB2
will create the following default configuration file at the first
attempt to create any SQL procedure:
$HOME/sqllib/function/routine/sqlproc.icc
If you want to use your own configuration file, you can specify your own
configuration file when setting the DB2 registry value for
DB2_SQLROUTINE_COMPILE_COMMAND:
db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld
%DB2PATH%/function/sqlproc.icc"
HP-UX
To use HP C Compiler Version A.11.00.03:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=cc +DAportable +ul -Aa +z \
-I$HOME/sqllib/include -c SQLROUTINE_FILENAME.c; \
ld -b -o SQLROUTINE_FILENAME SQLROUTINE_FILENAME.o \
-L$HOME/sqllib/lib -ldb2
To use HP-UX C++ Version A.12.00:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=CC +DAportable +a1 +z -ext \
-I$HOME/sqllib/include -c SQLROUTINE_FILENAME.c; \
ld -b -o SQLROUTINE_FILENAME SQLROUTINE_FILENAME.o \
-L$HOME/sqllib/lib -ldb2
This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND
DB2 registry variable is not set.
Linux
To use GNU/Linux gcc Version 2.7.2.3:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=cc \
-I$HOME/sqllib/include SQLROUTINE_FILENAME.c \
-shared -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib -ldb2
To use GNU/Linux g++ Version egcs-2.90.27 980315 (egcs-1.0.2 release):
db2set DB2_SQLROUTINE_COMPILE_COMMAND=g++ \
-I$HOME/sqllib/include SQLROUTINE_FILENAME.c \
-shared -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib -ldb2
This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND
DB2 registry variable is not set.
PTX
To use ptx/C Version 4.5:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=cc -KPIC \
-I$HOME/sqllib/include SQLROUTINE_FILENAME.c \
-G -o SQLROUTINE_FILENAME.so -L$HOME/sqllib/lib -ldb2 ; \
cp SQLROUTINE_FILENAME.so SQLROUTINE_FILENAME
To use ptx/C++ Version 5.2:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=c++ -KPIC \
-D_RWSTD_COMPILE_INSTANTIATE=0 \
-I$HOME/sqllib/include SQLROUTINE_FILENAME.c \
-G -o SQLROUTINE_FILENAME.so -L$HOME/sqllib/lib -ldb2 ; \
cp SQLROUTINE_FILENAME.so SQLROUTINE_FILENAME
This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND
DB2 registry variable is not set.
OS/2
To use IBM VisualAge C++ for OS/2 Version 3:
db2set DB2_SQLROUTINE_COMPILE_COMMAND="icc -Ge- -Gm+ -W2
-I%DB2PATH%\include SQLROUTINE_FILENAME.c
/B\"/NOFREE /NOI /ST:64000\" SQLROUTINE_FILENAME.def
%DB2PATH%\lib\db2api.lib"
This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND
DB2 registry variable is not set.
To use IBM VisualAge C++ for OS/2 Version 4:
db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld"
If you do not specify a configuration file after the vacbld command, DB2
will create the following default configuration file at the first
attempt to create any SQL procedure:
%DB2PATH%\function\routine\sqlproc.icc
If you want to use your own configuration file, you can specify your own
configuration file when setting the DB2 registry value for
DB2_SQLROUTINE_COMPILE_COMMAND:
db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld
%DB2PATH%\function\sqlproc.icc"
Solaris
To use SPARCompiler C Versions 4.2 and 5.0:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=cc -xarch=v8plusa -Kpic \
-I$HOME/sqllib/include SQLROUTINE_FILENAME.c \
-G -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib \
-R$HOME/sqllib/lib -ldb2
To use SPARCompiler C++ Versions 4.2 and 5.0:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=CC -xarch=v8plusa -Kpic \
-I$HOME/sqllib/include SQLROUTINE_FILENAME.c \
-G -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib \
-R$HOME/sqllib/lib -ldb2
This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND
DB2 registry variable is not set.
Notes:
1. The compiler option -xarch=v8plusa has been added to the default
compiler command. For details on why this option has been added, see
6.7, "Chapter 12. Building Solaris Applications".
2. To compile 64-bit SQL procedures on Solaris, take out the
-xarch=v8plusa option and add the -xarch=v9 option to the above
commands.
Windows NT and Windows 2000
Note: SQL procedures are not supported on Windows 98 or Windows 95.
To use Microsoft Visual C++ Versions 5.0 and 6.0:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=cl -Od -W2 /TC -D_X86_=1
-I%DB2PATH%\include SQLROUTINE_FILENAME.c /link -dll
-def:SQLROUTINE_FILENAME.def /out:SQLROUTINE_FILENAME.dll
%DB2PATH%\lib\db2api.lib
This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND
DB2 registry variable is not set.
To use IBM VisualAge C++ for Windows Version 3.6:
db2set DB2_SQLROUTINE_COMPILE_COMMAND="ilib /GI
SQLROUTINE_FILENAME.def & icc -Ti -Ge- -Gm+ -W2
-I%DB2PATH%\include SQLROUTINE_FILENAME.c
/B\"/ST:64000 /PM:VIO /DLL\" SQLROUTINE_FILENAME.exp
%DB2PATH%\lib\db2api.lib"
To use IBM VisualAge C++ for Windows Version 4:
db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld"
If you do not specify a configuration file after the vacbld command, DB2
will create the following default configuration file at the first
attempt to create any SQL procedure:
%DB2PATH%\function\routine\sqlproc.icc
If you want to use your own configuration file, you can specify your own
configuration file when setting the DB2 registry value for
DB2_SQLROUTINE_COMPILE_COMMAND:
db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld
%DB2PATH%\function\sqlproc.icc"
To return to the default compiler options, set the DB2 registry value for
DB2_SQLROUTINE_COMPILE_COMMAND to null with the following command:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=
6.4.1.3 Retaining Intermediate Files
You have to manually delete intermediate files that may be left when an SQL
procedure is not created successfully. These files are in the following
directories:
UNIX
$DB2PATH/function/routine/sqlproc/$DATABASE/$SCHEMA/tmp
where $DB2PATH represents the directory in which the instance was
created, $DATABASE represents the database name, and $SCHEMA
represents the schema name with which the SQL procedures were created.
OS/2 and Windows
%DB2PATH%\function\routine\sqlproc\%DATABASE%\%SCHEMA%\tmp
where %DB2PATH% represents the directory in which the instance was
created, %DATABASE% represents the database name, and %SCHEMA%
represents the schema name with which the SQL procedures were created.
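On UNIX, the manual cleanup can be rehearsed in a scratch directory;
SAMPLE and MYSCHEMA below are invented stand-ins for the database and
schema names:

```shell
# Simulate the intermediate-file tree in a scratch directory, then remove
# the tmp subdirectory as the text describes. SAMPLE and MYSCHEMA are
# invented example names.
root=$(mktemp -d)
tmpdir="$root/function/routine/sqlproc/SAMPLE/MYSCHEMA/tmp"
mkdir -p "$tmpdir"
touch "$tmpdir/P123.c" "$tmpdir/P123.log"

# The manual deletion step
rm -rf "$tmpdir"

if [ -d "$tmpdir" ]; then echo "still there"; else echo "cleaned"; fi
```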
6.4.1.4 Backup and Restore
The executables of SQL procedures on the filesystem should be backed up and
restored at the same time as the database. This must be done manually. The
executables are in the following directory:
UNIX
$DB2PATH/function/routine/sqlproc/$DATABASE
where $DB2PATH represents the directory in which the instance was
created and $DATABASE represents the database name with which the SQL
procedures were created.
OS/2 and Windows
%DB2PATH%\function\routine\sqlproc\%DATABASE%
where %DB2PATH% represents the directory in which the instance was
created and %DATABASE% represents the database name with which the SQL
procedures were created.
6.4.2 Creating SQL Procedures
Set the database manager configuration parameter KEEPDARI to 'NO' when
developing SQL procedures. If an SQL procedure stays loaded after it is
executed, you may have problems dropping and re-creating a stored
procedure with the same name, because the library cannot be refreshed
and the executables cannot be dropped from the file system. You will
also have problems when you try to roll back the changes or drop the
database, because the executables cannot be deleted.
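Assuming a standard CLP session, the setting might be changed with a
sketch like the following (an instance restart makes the change take
effect):

```shell
# Sketch: disable DARI process reuse while developing SQL procedures,
# then restart the instance so the new setting takes effect.
db2 update database manager configuration using KEEPDARI NO
db2stop
db2start
```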
See 'Updating the Database Manager Configuration File' in "Chapter 2.
Setup" of the 'Application Building Guide' for more information on setting
the KEEPDARI parameter.
Note: SQL procedures do not support the following data types for
parameters:
* GRAPHIC
* VARGRAPHIC
* LONG VARGRAPHIC
* Binary Large Object (BLOB)
* Character Large Object (CLOB)
* Double-byte Character Large Object (DBCLOB)
6.4.3 Calling Stored Procedures
The first paragraph in 'Using the CALL Command' should read:
To use the CALL command, you must enter the stored procedure name plus
any IN or INOUT parameters, as well as '?' as a place-holder for each
OUT parameter. For details on the syntax of the CALL command, see 9.8,
"CALL".
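For example, calling a procedure that takes one IN parameter and returns
two OUT parameters might look like this from the CLP. MEDIAN_SALARY and
its signature are invented for illustration:

```shell
# Invented example: MEDIAN_SALARY(IN deptnum INTEGER, OUT median DOUBLE,
# OUT counted INTEGER). Supply the IN value, and '?' for each OUT
# parameter.
db2 "CALL MEDIAN_SALARY (51, ?, ?)"
```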
------------------------------------------------------------------------
6.5 Chapter 7. Building HP-UX Applications.
6.5.1 HP-UX C
In "Multi-threaded Applications", the bldmt script file has been revised
with different compile options. The new version is in the sqllib/samples/c
directory.
6.5.2 HP-UX C++
In the build scripts, the C++ compiler variable CC has been replaced by
aCC, for the HP aC++ compiler. The revised build scripts are in the
sqllib/samples/cpp directory.
In "Multi-threaded Applications", the bldmt script file has been revised
with different compile options. The new version is in the
sqllib/samples/cpp directory.
------------------------------------------------------------------------
6.6 Chapter 10. Building PTX Applications
6.6.1 ptx/C++
Libraries need to be linked with the -shared option to build stored
procedures and user-defined functions. In the sqllib/samples directory,
the makefile and the build scripts bldsrv and bldudf have been updated
to include this option, as in the following link step from bldsrv:
c++ -shared -G -o $1 $1.o -L$DB2PATH/lib -ldb2
------------------------------------------------------------------------
6.7 Chapter 12. Building Solaris Applications
6.7.1 SPARCompiler C++
Problems with executing C/C++ Applications and running SQL Procedures on
Solaris
When using the Sun WorkShop Compiler C/C++, you may receive errors like
the following when running your executable:
1. syntax error at line 1: `(' unexpected
2. ksh: <application name>: cannot execute (where application name is the
name of the compiled executable)
These errors indicate a known problem in which the compiler does not
produce valid executables when linking with libdb2.so. To work around
this, add the following compiler option to your compile and link commands:
-xarch=v8plusa
for example, when compiling the sample application, dynamic.sqc:
embprep dynamic sample
embprep utilemb sample
cc -c utilemb.c -xarch=v8plusa -I/export/home/db2inst1/sqllib/include
cc -o dynamic dynamic.c utilemb.o -xarch=v8plusa -I/export/home/db2inst1/sqllib/include \
-L/export/home/db2inst1/sqllib/lib -R/export/home/db2inst1/sqllib/lib -ldb2
Notes:
1. If you are using SQL Procedures on Solaris, and you are using your own
compile string via the DB2_SQLROUTINE_COMPILE_COMMAND profile
variable, please ensure that you include the compiler option given
above. The default compiler command includes this option:
db2set DB2_SQLROUTINE_COMPILE_COMMAND="cc -# -Kpic -xarch=v8plusa -I$HOME/sqllib/include \
SQLROUTINE_FILENAME.c -G -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib -R$HOME/sqllib/lib -ldb2"
2. To compile 64-bit SQL procedures on Solaris, take out the
-xarch=v8plusa option and add the -xarch=v9 option to the above
commands.
------------------------------------------------------------------------
6.8 VisualAge C++ Version 4.0 on OS/2 and Windows
Note: This updates the section "VisualAge C++ Version 4.0" in "Chapter 6.
Building AIX Applications". That section contains information common
to AIX, OS/2, and Windows 32-bit operating systems.
For OS/2 and Windows, use the set command instead of the export command.
For example, set CLI=tbinfo.
In "DB2 CLI Applications", subsection "Building and Running Embedded SQL
Applications", for OS/2 and Windows, the cliapi.icc file must be used
instead of the cli.icc file, because embedded SQL applications need the
db2api.lib library linked in by cliapi.icc.
------------------------------------------------------------------------
Application Development Guide
------------------------------------------------------------------------
7.1 Writing OLE Automation Stored Procedures
The last sentence in the following paragraph is missing from the second
paragraph under the section "Writing OLE Automation Stored Procedures":
After you code an OLE automation object, you must register the
methods of the object as stored procedures using the CREATE
PROCEDURE statement. To register an OLE automation stored
procedure, issue a CREATE PROCEDURE statement with the LANGUAGE
OLE clause. The external name consists of the OLE progID identifying
the OLE automation object and the method name separated
by ! (exclamation mark). The OLE automation object needs to be
implemented as an in-process server (.DLL).
------------------------------------------------------------------------
7.2 Chapter 7. Stored Procedures
7.2.1 DECIMAL Type Not Supported in Linux Java Routines
The DECIMAL SQL data type is not supported in Java stored procedures,
user-defined functions, or methods on the Linux platform.
------------------------------------------------------------------------
7.3 Chapter 12. Working with Complex Objects: User-Defined Structured Types
7.3.1 Inserting Structured Type Attributes Into Columns
The following rule applies to embedded static SQL statements: To insert an
attribute of a user-defined structured type into a column that is of the
same type as the attribute, enclose the host variable that represents the
instance of the type in parentheses, and append the double-dot operator and
attribute name to the closing parenthesis. For example, consider the
following situation:
- PERSON_T is a structured type that includes the attribute NAME of type VARCHAR(30).
- T1 is a table that includes a column C1 of type VARCHAR(30).
- personhv is the host variable declared for type PERSON_T in the programming language.
The proper syntax for inserting the NAME attribute into column C1 is:
EXEC SQL INSERT INTO T1 (C1) VALUES ((:personhv)..NAME)
------------------------------------------------------------------------
7.4 Chapter 20. Programming in C and C++
The following table supplements the information included in chapter 7,
"Stored Procedures", chapter 15, "Writing User-Defined Functions and
Methods", and chapter 20, "Programming in C and C++". The table lists the
supported mappings between SQL data types and C data types for stored
procedures, UDFs, and methods.
7.4.1 C/C++ Types for Stored Procedures, Functions, and Methods
Table 2. SQL Data Types Mapped to C/C++ Declarations

SQL Column Type          C/C++ Data Type            SQL Column Type Description

SMALLINT                 sqlint16                   16-bit signed integer
(500 or 501)

INTEGER                  sqlint32                   32-bit signed integer
(496 or 497)

BIGINT                   sqlint64                   64-bit signed integer
(492 or 493)

REAL                     float                      Single-precision floating
(480 or 481)                                        point

DOUBLE                   double                     Double-precision floating
(480 or 481)                                        point

DECIMAL(p,s)             Not supported.             To pass a decimal value,
(484 or 485)                                        define the parameter to be
                                                    of a data type castable
                                                    from DECIMAL (for example
                                                    CHAR or DOUBLE) and
                                                    explicitly cast the
                                                    argument to this type.

CHAR(n)                  char[n+1] where n is       Fixed-length,
(452 or 453)             large enough to hold       null-terminated character
                         the data,                  string
                         1<=n<=254

CHAR(n) FOR BIT DATA     char[n+1] where n is       Fixed-length character
(452 or 453)             large enough to hold       string
                         the data,
                         1<=n<=254

VARCHAR(n)               char[n+1] where n is       Null-terminated varying
(448 or 449)             large enough to hold       length string
(460 or 461)             the data,
                         1<=n<=32 672

VARCHAR(n) FOR BIT       struct {                   Not null-terminated
DATA                       sqluint16 length;        varying length character
(448 or 449)               char[n]                  string
                         }
                         1<=n<=32 672

LONG VARCHAR             struct {                   Not null-terminated
(456 or 457)               sqluint16 length;        varying length character
                           char[n]                  string
                         }
                         32 673<=n<=32 700

CLOB(n)                  struct {                   Non null-terminated
(408 or 409)               sqluint32 length;        varying length character
                           char data[n];            string with 4-byte string
                         }                          length indicator
                         1<=n<=2 147 483 647

BLOB(n)                  struct {                   Non null-terminated
(404 or 405)               sqluint32 length;        varying binary string with
                           char data[n];            4-byte string length
                         }                          indicator
                         1<=n<=2 147 483 647

DATE                     char[11]                   Null-terminated character
(384 or 385)                                        form

TIME                     char[9]                    Null-terminated character
(388 or 389)                                        form

TIMESTAMP                char[27]                   Null-terminated character
(392 or 393)                                        form

Note: The following data types are only available in the DBCS or EUC
environment when precompiled with the WCHARTYPE NOCONVERT option.

GRAPHIC(n)               sqldbchar[n+1] where n     Fixed-length,
(468 or 469)             is large enough to         null-terminated
                         hold the data,             double-byte character
                         1<=n<=127                  string

VARGRAPHIC(n)            sqldbchar[n+1] where n     Not null-terminated,
(400 or 401)             is large enough to         variable-length
                         hold the data,             double-byte character
                         1<=n<=16 336               string

LONG VARGRAPHIC          struct {                   Not null-terminated,
(472 or 473)               sqluint16 length;        variable-length
                           sqldbchar[n]             double-byte character
                         }                          string
                         16 337<=n<=16 350

DBCLOB(n)                struct {                   Non null-terminated
(412 or 413)               sqluint32 length;        varying length character
                           sqldbchar data[n];       string with 4-byte string
                         }                          length indicator
                         1<=n<=1 073 741 823
------------------------------------------------------------------------
7.5 Appendix B. Sample Programs
The following should be added to the "Object Linking and Embedding Samples"
section:
salarycltvc A Visual C++ DB2 CLI sample that calls the
Visual Basic stored procedure, salarysrv.
SALSVADO A sample OLE automation stored procedure (SALSVADO) and a
SALCLADO client (SALCLADO), implemented in 32-bit Visual Basic and ADO,
that calculates the median salary in table staff2.
------------------------------------------------------------------------
7.6 Activating the IBM DB2 Universal Database Project and Tool Add-ins for
Microsoft Visual C++
Before running the db2vccmd command (step 1), please ensure that you have
started and stopped Visual C++ at least once with your current login ID.
The first time you run Visual C++, a profile is created for your user ID,
and that is what gets updated by the db2vccmd command. If you have not
started it once, and you try to run db2vccmd, you may see errors like the
following:
"Registering DB2 Project add-in ...Failed! (rc = 2)"
------------------------------------------------------------------------
CLI Guide and Reference
------------------------------------------------------------------------
8.1 CLI Unicode Functions and SQL_C_WCHAR Support on AIX Only
CLI Unicode functions accept pointers to character strings or
SQLPOINTERs as arguments. The argument strings are in UCS-2 format.
These functions are implemented as functions with a W suffix.
In Unicode functions that return or take strings, length arguments are
passed as a count of characters. For functions that return length
information for server data, the display size and precision are
described in number of characters. When a length can refer to either
string or non-string data, the length is described in octet lengths. The
function prototypes for the Unicode functions can be found in sqlcli1.h.
The following is a list of Unicode functions:
SQLColAttributeW
SQLColAttributesW
SQLColumnPrivilegesW
SQLColumnsW
SQLConnectW
SQLDataSourcesW
SQLDescribeColW
SQLDriverConnectW
SQLBrowseConnectW
SQLErrorW
SQLExecDirectW
SQLForeignKeysW
SQLGetCursorNameW
SQLGetInfoW
SQLNativeSqlW
SQLPrepareW
SQLPrimaryKeysW
SQLProcedureColumnsW
SQLProceduresW
SQLSetCursorNameW
SQLSpecialColumnsW
SQLStatisticsW
SQLTablePrivilegesW
SQLTablesW
SQLGetDiagFieldW
SQLGetDiagRecW
SQLSetConnectAttrW
SQLSetStmtAttrW
SQLGetDescFieldW
SQLSetDescFieldW
An application can be written so that it can be compiled as either a
Unicode application or an ANSI application. The application is compiled
as a Unicode application by turning on the UNICODE define. In this case,
character data types can be declared as SQL_C_TCHAR. This macro, found
in sqlcli1.h, expands to SQL_C_WCHAR if the application is compiled as a
Unicode application, or to SQL_C_CHAR if it is compiled as an ANSI
application. Function calls without the W suffix will be mapped to the
corresponding function with the W suffix if the application is compiled
with the UNICODE define turned on.
Unicode and ANSI function calls cannot be mixed.
All SQL data types that can be converted to SQL_C_CHAR can also be
converted to SQL_C_WCHAR; the converse is also true.
The following restrictions apply:
* From an ODBC perspective, CLI is not a UNICODE driver. However,
because SQLConnectW is not exported, it is mapped to SQLConnectWInt in
sqlcli1.h.
* Currently, Unicode functions and SQL_C_WCHAR are only supported on
AIX. To use the CLI Unicode functions and SQL_C_WCHAR in applications
on AIX, use sqlcli1.h and compile with the UNICODE define. For Windows
NT applications requiring Unicode functions and SQL_C_WCHAR, use the
ODBC 3.5 driver. The ODBC 3.5 driver manager will treat the DB2 UDB
CLI driver as an ANSI driver. The ODBC 3.5 driver manager will convert
Unicode function (with the W suffix) to an ANSI function call and pass
it to the CLI driver. The ODBC 3.5 driver manager will also map
SQL_C_WCHAR to SQL_C_CHAR.
* Currently SQL_C_WCHAR support is provided by converting data to and
from UCS2 to an application code page.
* There is no SQL_WCHAR, SQL_WVARCHAR, or SQL_WLONGVARCHAR support.
* WCHARTYPE NOCONVERT is not supported for the Unicode functions or for
SQL_C_CHAR.
------------------------------------------------------------------------
8.2 Binding Database Utilities Using the Run-Time Client
The Run-Time Client cannot be used to bind the database utilities
(import, export, reorg, the command line processor) and DB2 CLI bind
files to a database. You must use the DB2 Administration Client or the
DB2 Application Development Client instead.
You must bind these database utilities and DB2 CLI bind files to each
database before they can be used with that database. In a network
environment, if you are using multiple clients that run on different
operating systems, or are at different versions or service levels of DB2,
you must bind the utilities once for each operating system and DB2-version
combination.
------------------------------------------------------------------------
8.3 Addition to the "Using Compound SQL" Section
The following note is missing from the book:
Any SQL statement that can be prepared dynamically, other than a query,
can be executed as a statement inside a compound statement.
Note: Inside atomic compound SQL, the SAVEPOINT, RELEASE SAVEPOINT, and
ROLLBACK TO SAVEPOINT statements are also disallowed. Conversely, atomic
compound SQL is disallowed inside a savepoint.
------------------------------------------------------------------------
8.4 Writing a Stored Procedure in CLI
Following is an undocumented limitation on CLI stored procedures:
If you are making calls to multiple CLI stored procedures,
the application must close the open cursors from one stored procedure
before calling the next stored procedure. More specifically, the first
set of open cursors must be closed before the next stored procedure
tries to open a cursor.
------------------------------------------------------------------------
8.5 Appendix K. Using the DB2 CLI/ODBC/JDBC Trace Facility
The sections within this appendix have been updated. See the "Traces"
chapter in the Troubleshooting Guide for the most up-to-date information on
this trace facility.
------------------------------------------------------------------------
8.6 Using Static SQL in CLI Applications
For more information on using static SQL in CLI applications, see the Web
page at: http://www.ibm.com/software/data/db2/udb/staticcli/
------------------------------------------------------------------------
8.7 Limitations of JDBC/ODBC/CLI Static Profiling
JDBC/ODBC/CLI static profiling currently targets straightforward
applications. It is not meant for complex applications with many functional
components and complex program logic during execution.
An SQL statement must have successfully executed for it to be captured in a
profiling session. In a statement matching session, unmatched dynamic
statements will continue to execute as dynamic JDBC/ODBC/CLI calls.
An SQL statement must be identical character-by-character to the one
that was captured and bound to be a valid candidate for statement
matching. Spaces are significant: for example, "COL = 1" is considered
different from "COL=1". Use parameter markers in place of literals to
improve match hits.
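The character-by-character rule means that even a whitespace difference
sends a statement down the dynamic path; a trivial sketch of the
comparison (the SQL text is illustrative):

```shell
# Character-exact comparison, as statement matching performs it.
captured='SELECT NAME FROM STAFF WHERE COL = 1'
incoming='SELECT NAME FROM STAFF WHERE COL=1'

if [ "$captured" = "$incoming" ]; then
  result=match
else
  result='no match: executes as dynamic SQL'
fi
echo "$result"
```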
When executing an application with pre-bound static SQL statements, dynamic
registers that control the dynamic statement behavior will have no effect
on the statements that are converted to static.
If an application issues DDL statements for objects that are referenced in
subsequent DML statements, you will find all of these statements in the
capture file. The JDBC/ODBC/CLI Static Profiling Bind Tool will attempt to
bind them. The bind attempt will be successful with DBMSs that support the
VALIDATE(RUN) bind option, but it will fail with ones that do not. In this case,
the application should not use Static Profiling.
The Database Administrator may edit the capture file to add, change, or
remove SQL statements, based on application-specific requirements.
------------------------------------------------------------------------
8.8 Parameter Correction for SQLBindFileToParam() CLI Function
The last parameter - IndicatorValue - in the SQLBindFileToParam() CLI
function is currently documented as "output (deferred)". It should be
"input (deferred)".
------------------------------------------------------------------------
8.9 ADT Transforms
The following supersedes existing information in the book.
* There is a new descriptor type (smallint)
SQL_DESC_USER_DEFINED_TYPE_CODE, with values:
SQL_TYPE_BASE 0 (this is not a USER_DEFINED_TYPE)
SQL_TYPE_DISTINCT 1
SQL_TYPE_STRUCTURED 2
This value can be queried with either SQLColAttribute
or SQLGetDescField (IRD only).
The following attributes are added to obtain the actual type names:
SQL_DESC_REFERENCE_TYPE
SQL_DESC_STRUCTURED_TYPE
SQL_DESC_USER_TYPE
The above values can be queried using SQLColAttribute
or SQLGetDescField (IRD only).
* Add SQL_DESC_BASE_TYPE in case the application needs it. For example,
the application may not recognize the structured type, but intends to
fetch or insert it, and let other code deal with the details.
* Add a new connection attribute called SQL_ATTR_TRANSFORM_GROUP to
allow an application to set the transform group (rather than use the
SQL "SET CURRENT DEFAULT TRANSFORM GROUP" statement).
* Add a new statement/connection attribute called
SQL_ATTR_RETURN_USER_DEFINED_TYPES that can be set or queried using
SQLSetConnectAttr, which causes CLI to return the value
SQL_DESC_USER_DEFINED_TYPE_CODE as a valid SQL Type. This attribute is
required before using any of the transforms.
o By default, the attribute is off, and causes the base type
information to be returned as the SQL type.
o When enabled, SQL_DESC_USER_DEFINED_TYPE_CODE will be returned as
the SQL_TYPE. The application is expected to check for
SQL_DESC_USER_DEFINED_TYPE_CODE, and then to retrieve the
appropriate type name. This will be available to SQLColAttribute,
SQLDescribeCol, and SQLGetDescField.
* SQLBindParameter does not return an error when you bind
SQL_C_DEFAULT, because there is no way for SQLBindParameter to
specify the type SQL_USER_DEFINED_TYPE. The standard default C types
will be used, based on the base SQL type flowed to the server. For
example:
sqlrc = SQLBindParameter (hstmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR, 30,
0, &c2, 30, NULL);
* SQLDescribeParam and SQLGetDescField for parameter markers do not yet
return structured type information. (This support will be added in the
first Version 7.1 FixPak.)
------------------------------------------------------------------------
Command Reference
------------------------------------------------------------------------
9.1 db2batch - Benchmark Tool
The last sentence in the description of the PERF_DETAIL parameter should
read:
A value greater than 1 is only valid on DB2 Version 2 and DB2 UDB servers,
and is not currently supported on host machines.
------------------------------------------------------------------------
9.2 db2cap (new command)
db2cap - CLI/ODBC Static Package Binding Tool
Binds a CLI/ODBC capture file to generate a static package. A capture file
contains the captured SQL statements that are executed by a CLI/ODBC
application. This utility processes the capture file so that it can be used
by the CLI/ODBC driver to run static SQL for the application.
For more information on how to use static SQL in CLI/ODBC applications, see
the Using Static SQL in CLI Web page at
http://www.ibm.com/software/data/db2/udb/staticcli/
Authorization
* Access privileges to any database objects referenced by SQL statements
recorded in the capture file.
* Sufficient authority to set bind options such as OWNER and QUALIFIER
if they are different from the connect ID used to invoke the db2cap
command.
* BINDADD authority if the package is being bound for the first time;
otherwise, BIND authority is required.
Command Syntax
>>-db2cap----+----+--bind--capture-file----d--database_alias---->
+--h-+
'--?-'
>-----+--------------------------------+-----------------------><
'--u--userid--+---------------+--'
'--p--password--'
Command Parameters
-h/-?
Displays help text for the command syntax.
bind capture-file
Binds the statements from the capture file and creates one or more
packages.
-d database_alias
Specifies the database alias for the database that will contain one or
more packages.
-u userid
Specifies the user ID to be used to connect to the data source.
Note: If a user ID is not specified, a trusted authorization ID is
obtained from the system.
-p password
Specifies the password to be used to connect to the data source.
Usage Notes
The command must be entered in lowercase on UNIX platforms, but can be
entered in either lowercase or uppercase on Windows operating systems and
OS/2.
This utility supports a number of user-specified bind options that are
recorded in the capture file. To change these options (for example, for
performance or security reasons), examine and edit the file with a text editor.
The SQLERROR(CONTINUE) and the VALIDATE(RUN) bind options can be used to
create a package.
When using this utility to create a package, static profiling must be
disabled.
------------------------------------------------------------------------
9.3 db2gncol (new command)
db2gncol - Update Generated Column Values
Updates generated columns in tables that are in check pending mode and have
limited log space. This tool is used to prepare for a SET INTEGRITY
statement on a table that has columns which are generated by expressions.
Authorization
One of the following
* sysadm
* dbadm
Command Syntax
>>-db2gncol----d--database----s--schema_name----t--table_name--->
>-----c--commit_count----+---------------------------+---------->
'--u--userid---p--password--'
>-----+-----+--------------------------------------------------><
'--h--'
Command Parameters
-d database
Specifies an alias name for the database in which the table is
located.
-s schema_name
Specifies the schema name for the table. The schema name is case
sensitive.
-t table_name
Specifies the table for which new column values generated by
expressions are to be computed. The table name is case sensitive.
-c commit_count
Specifies the number of rows updated between commits. This parameter
influences the size of the log space required to generate the column
values.
-u userid
Specifies a user ID with system administrator or database
administrator privileges. If this option is omitted, the current user
is assumed.
-p password
Specifies the password for the specified user ID.
-h
Displays help information. When this option is specified, all other
options are ignored, and only the help information is displayed.
Usage Notes
Using this tool instead of the FORCE GENERATED option on the SET INTEGRITY
statement may be necessary if a table is large and one or more of the
following conditions exist:
* All column values must be regenerated after altering the generation
expression of a generated column.
* An external UDF used in a generated column was changed, causing many
column values to change.
* A generated column was added to the table.
* A large load or load append was performed that did not provide values
for the generated columns.
* The log space is too small due to long-running concurrent transactions
or the size of the table.
This tool will regenerate all column values that were created based on
expressions. While the table is being updated, intermittent commits are
performed to avoid using up all of the log space. Once db2gncol has been
run, the table can be taken out of check pending mode using the SET
INTEGRITY statement.
------------------------------------------------------------------------
9.4 db2look - DB2 Statistics Extraction Tool
The syntax diagram should appear as follows:
>>-db2look---d--DBname----+--------------+---+-----+---+-----+-->
'--u--Creator--' '--s--' '--g--'
>-----+-----+---+-----+---+-----+---+-----+---+-----+----------->
'--a--' '--h--' '--r--' '--c--' '--p--'
>-----+------------+---+-------------------+-------------------->
'--o--Fname--' '--e--+----------+--'
'--t Tname-'
>-----+-------------------+---+-----+---+-----+----------------->
'--m--+----------+--' '--l--' '--x--'
'--t Tname-'
>-----+---------------------------+---+-----+------------------><
'--i--userid---w--password--' '--f--'
------------------------------------------------------------------------
9.5 db2updv6 (new command)
Purpose
Initializes a packed descriptor field for user tables.
This utility should be run against any Version 6 database
that is moving to a Version 7.1 instance, but only if that database
has previously migrated from Version 2 to Version 6.
Authorization
sysadm
Command Syntax
db2updv6 -d dbname ----------------->
| |
----- -h ------
where:
-d dbname specifies a database name, and
-h displays usage help.
------------------------------------------------------------------------
9.6 New Command Line Processor Option (-x, Suppress printing of column
headings)
A new option, -x, tells the command line processor to return data without
any headers, including column names. The default setting for this command
option is OFF.
------------------------------------------------------------------------
9.7 True Type Font Requirement for DB2 CLP
To display the national characters for single-byte (SBCS) languages
correctly from the DB2 command line processor (CLP) window, change the font
to a TrueType font.
------------------------------------------------------------------------
9.8 CALL
The syntax for the CALL command should appear as follows:
.-,---------------.
V |
>>-CALL--proc-name---(-----+-----------+--+---)----------------><
'-argument--'
The description of the argument parameter has been changed to:
Specifies one or more arguments for the stored procedure.
All input and output arguments must be specified in the order
defined by the procedure. Output arguments are specified
using the "?" character. For example, a stored procedure foo
with one integer input parameter and one output parameter
would be invoked as "call foo (4, ?)".
Notes:
1. When invoking this utility from an operating system prompt, it may be
necessary to delimit the command as follows:
"call DEPT_MEDIAN (51)"
A single quotation mark (') can also be used.
2. The stored procedure being called must be uniquely named in the
database.
3. The stored procedure must be cataloged. If an uncataloged procedure is
called, a DB21036 error message is returned.
4. A DB21101E message is returned if not enough parameters are specified
on the command line, or the command line parameters are not in the
correct order (input, output), according to the stored procedure
definition.
5. There is a maximum of 1023 characters for a result column.
6. LOBs and binary data (FOR BIT DATA, VARBINARY, LONGVARBINARY, GRAPHIC,
VARGRAPHIC, or LONGVARGRAPHIC) are not supported.
7. CALL supports result sets.
8. If a stored procedure with an output parameter of an unsupported type
is called, the CALL fails, and message DB21036 is returned.
9. The maximum length for an INPUT parameter to CALL is 1024.
------------------------------------------------------------------------
9.9 EXPORT
In the section "DB2 Data Links Manager Considerations", Step 3 of the
procedure to ensure that a consistent copy of the table and the
corresponding files referenced by DATALINK columns are copied for export
should read:
3. Run the dlfm_export utility at each Data Links server. Input to the
dlfm_export utility is the control file name, which is generated by the
export utility. This produces a tar (or equivalent) archive of the files listed
within the control file. For Distributed File Systems (DFS), the dlfm_export
utility will get the DCE network root credentials before archiving the files
listed in the control file. dlfm_export does not capture the ACL information
of the files that are archived.
In the same section, the bullets following "Successful execution of EXPORT
results in the generation of the following files" should be modified as
follows:
The second sentence in the first bullet should read:
A DATALINK column value in this file has the same format
as that used by the import and load utilities.
The first sentence in the second bullet should read:
Control files server_name, which are generated for
each Data Links server. (On the Windows NT operating system,
a single control file, ctrlfile.lst, is used by all Data Links
servers. For DFS, there is one control file for each cell.)
The following sentence should be added to the paragraph before Table 5:
For more information about dlfm_export, refer to the "Data Movement
Utilities Guide and Reference" under "Using Export to move DB2 Data
Links Manager Data".
------------------------------------------------------------------------
9.10 GET DATABASE CONFIGURATION
The description of the DL_TIME_DROP configuration parameter should be
changed to the following:
Applies to DB2 Data Links Manager only. This parameter specifies
the interval of time (in days) files would be retained on an
archive server (such as a TSM server) after a DROP DATABASE command
is issued.
------------------------------------------------------------------------
9.11 IMPORT
In the section "DB2 Data Links Manager Considerations", the following
sentence should be added to Step 3:
For Distributed File Systems (DFS), update the cell name
information in the URLs (of the DATALINK columns) from the
exported data for the SQL table, if required.
The following sentence should be added to Step 4:
For DFS, define the cells at the target configuration
in the DB2 Data Links Manager configuration file.
The paragraph following Step 4 should read:
When the import utility runs against the target database,
files referred to by DATALINK column data are linked on
the appropriate Data Links servers.
------------------------------------------------------------------------
9.12 LOAD
In the section "DB2 Data Links Manager Considerations", add the following
sentence to Step 1 of the procedure that is to be performed before invoking
the load utility, if data is being loaded into a table with a DATALINK
column that is defined with FILE LINK CONTROL:
For Distributed File Systems (DFS), ensure that the DB2 Data Links Managers
within the target cell are registered.
The following sentence should be added to Step 5:
For DFS, register the cells at the target configuration
referred to by DATALINK data (to be loaded) in the DB2
Data Links Manager configuration file.
In the section "Representation of DATALINK Information in an Input File",
the first note following the parameter description for urlname should read:
Currently "http", "file", "unc", and "dfs" are permitted as scheme names.
The first sentence of the second note should read:
The prefix (scheme, host, and port) of the URL
name is optional. For DFS, the prefix refers to the
scheme cellname filespace-junction portion.
In the DATALINK data examples for both the delimited ASCII (DEL) file
format and the non-delimited ASCII (ASC) file format, the third examples
should be removed.
The DATALINK data examples in which the load or import specification for
the column is assumed to be DL_URL_DEFAULT_PREFIX should be removed and
replaced with the following:
Following are DATALINK data examples in which the load or import
specification for the column is assumed to be DL_URL_REPLACE_PREFIX
("http://qso"):
* http://www.almaden.ibm.com/mrep/intro.mpeg
This is stored with the following parts:
o scheme = http
o server = qso
o path = /mrep/intro.mpeg
o comment = NULL string
* /u/me/myfile.ps
This is stored with the following parts:
o scheme = http
o server = qso
o path = /u/me/myfile.ps
o comment = NULL string
------------------------------------------------------------------------
Connectivity Supplement
------------------------------------------------------------------------
10.1 Setting Up the Application Server in a VM Environment
Add the following sentence after the first (and only) sentence in the
section "Provide Network Information", subsection "Defining the Application
Server":
The RDB_NAME is provided on the SQLSTART EXEC as the DBNAME parameter.
------------------------------------------------------------------------
10.2 CLI/ODBC/JDBC Configuration PATCH1 and PATCH2 Settings
The CLI/ODBC/JDBC driver can be configured through the Client Configuration
Assistant or the ODBC Driver Manager (if it is installed on the system), or
by manually editing the db2cli.ini file. For more details, see either the
Installation and Configuration Supplement, or the CLI Guide and Reference.
The DB2 CLI/ODBC driver default behavior can be modified by specifying
values for both the PATCH1 and PATCH2 keywords, either through the db2cli.ini
file or through the SQLDriverConnect() or SQLBrowseConnect() CLI APIs.
The PATCH1 value is specified by adding together the values of all patches
that the user wants to set. For example, if patches 1, 2, and 8 were
specified, then PATCH1 would have a value of 11. Following is a description
of each keyword value and its effect on the driver:
1 - This makes the driver search for "count(exp)" and replace it with
"count(distinct exp)". This is needed because some versions of DB2
support the "count(exp)" syntax, and that syntax is generated by some
ODBC applications. Needed by Microsoft applications when the server
does not support the "count(exp)" syntax.
2 - Some ODBC applications are trapped when SQL_NULL_DATA is returned in
the SQLGetTypeInfo() function for either the LITERAL_PREFIX or LITERAL_SUFFIX
column. This forces the driver to return an empty string instead. Needed
by Impromptu 2.0.
4 - This forces the driver to treat the input time stamp data as date data
if the time and the fraction part of the time stamp are zero. Needed by
Microsoft Access.
8 - This forces the driver to treat the input time stamp data as time data
if the date part of the time stamp is 1899-12-30. Needed by Microsoft Access.
16 - Not used.
32 - This forces the driver to not return information about SQL_LONGVARCHAR,
SQL_LONGVARBINARY, and SQL_LONGVARGRAPHIC columns. To the application
it appears as though long fields are not supported. Needed by Lotus 1-2-3.
64 - This forces the driver to NULL terminate graphic output strings. Needed
by Microsoft Access in a double byte environment.
128 - This forces the driver to let the query "SELECT Config, nValue FROM MSysConf"
go to the server. Currently the driver returns an error with associated
SQLSTATE value of S0002 (table not found). Needed if the user has created
this configuration table in the database and wants the application
to access it.
256 - This forces the driver to return the primary key columns first in
the SQLStatistics() call. Currently, the driver returns the indexes sorted
by index name, which is standard ODBC behavior.
512 - This forces the driver to return FALSE in SQLGetFunctions() for both
SQL_API_SQLTABLEPRIVILEGES and SQL_API_SQLCOLUMNPRIVILEGES.
1024 - This forces the driver to return SQL_SUCCESS instead of SQL_NO_DATA_FOUND
in SQLExecute() or SQLExecDirect() if the executed UPDATE or DELETE
statement affects no rows. Needed by Visual Basic applications.
2048 - Not used.
4096 - This forces the driver to not issue a COMMIT after closing a cursor
when in autocommit mode.
8192 - This forces the driver to return an extra result set after invoking
a stored procedure. This result set is a one row result set consisting
of the output values of the stored procedure. Can be accessed by
PowerBuilder applications.
32768 - This forces the driver to make Microsoft Query applications work
with DB2 MVS synonyms.
65536 - This forces the driver to manually insert a "G" in front of character
literals that are in fact graphic literals. This patch should always
be supplied when working in a double-byte environment.
131072 - This forces the driver to describe a time stamp column as a CHAR(26)
column instead, when it is part of a unique index. Needed by
Microsoft applications.
262144 - This forces the driver to use the pseudo-catalog table
db2cli.procedures instead of the SYSCAT.PROCEDURES and
SYSCAT.PROCPARMS tables.
524288 - This forces the driver to use SYSTEM_TABLE_SCHEMA instead of TABLE_SCHEMA
when doing a system table query to a DB2/400 V3.x system. This
results in better performance.
1048576 - This forces the driver to treat a zero length string through
SQLPutData() as SQL_NULL_DATA.
The PATCH2 keyword differs from the PATCH1 keyword. In this case, multiple
patches are specified using comma separators. For example, if patches 1, 4,
and 5 were specified, then PATCH2 would have a value of "1,4,5". Following
is a description of each keyword value and its effect on the driver:
1 - This forces the driver to convert the name of the stored procedure
in a CALL statement to uppercase.
2 - Not used.
3 - This forces the driver to convert all arguments to schema calls to uppercase.
4 - This forces the driver to return the Version 2.1.2 like result set for
schema calls (that is, SQLColumns(), SQLProcedureColumns(), and so on),
instead of the Version 5 like result set.
5 - This forces the driver to not optimize the processing of input VARCHAR
columns, where the pointer to the data and the pointer to the length
are consecutive in memory.
6 - This forces the driver to return a message that scrollable cursors are not
supported. This is needed by Visual Basic programs if the DB2 client
is Version 5 and the server is DB2 UDB Version 5.
7 - This forces the driver to map all GRAPHIC column data types to the CHAR
column data type. This is needed in a double byte environment.
8 - This forces the driver to ignore catalog search arguments in schema calls.
9 - Do not commit on Early Close of a cursor
10 - Not used
11 - Report that catalog name is supported (VB stored procedures)
12 - Remove double quotes from schema call arguments (Visual InterDev)
13 - Do not append keywords from db2cli.ini to output connection string
14 - Ignore schema name on SQLProcedures() and SQLProcedureColumns()
15 - Always use period for decimal separator in character output
16 - Force return of describe information for each open
17 - Do not return column names on describe
18 - Attempt to replace literals with parameter markers
19 - Currently, DB2 MVS V4.1 does not support the ODBC syntax in which parentheses
are allowed in the ON clause of an outer join clause.
Turning on this PATCH2 causes the IBM DB2 ODBC driver to strip the parentheses
when the outer join clause is in an ODBC escape sequence. This PATCH2
should only be used when going against DB2 MVS V4.1.
20 - Currently, DB2 on MVS does not support BETWEEN predicate with parameter
markers as both operands (expression ? BETWEEN ?). Turning on this patch
will cause the IBM ODBC Driver to rewrite the predicate to
(expression >= ? and expression <= ?).
21 - Set all OUTPUT only parameters for stored procedures to SQL_NULL_DATA
22 - This PATCH2 causes the IBM ODBC driver to report OUTER join as not supported.
This is for applications that generate SELECT DISTINCT col1 or ORDER BY col1
when using an outer join statement where col1 has a length greater than 254
characters, causing DB2 UDB to return an error (since DB2 UDB does
not support columns longer than 254 bytes in this usage).
23 - Do not optimize input for parameters bound with cbColDef=0
24 - Access workaround for mapping Time values as Characters
25 - Access workaround for decimal columns - removes trailing zeros in char representation
26 - Do not return sqlcode 464 to application - indicates result sets are returned
27 - Force SQLTables to use TABLETYPE keyword value, even if the application
specifies a valid value
28 - Describe real columns as double columns
29 - ADO workaround for decimal columns - removes leading zeroes
for values x, where 1 > x > -1 (Only needed for some MDAC versions)
30 - Disable the Stored Procedure caching optimization
31 - Report statistics for aliases on SQLStatistics call
32 - Override the sqlcode -727 reason code 4 processing
33 - Return the ISO version of the time stamp when converted to char
(as opposed to the ODBC version)
34 - Report CHAR FOR BIT DATA columns as CHAR
35 - Report an invalid TABLENAME when SQL_DESC_BASE_TABLE_NAME
is requested - ADO readonly optimization
36 - Reserved
37 - Reserved
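Both keywords are typically set in the db2cli.ini stanza for the affected data source. The fragment below is a sketch only: the data source name SAMPLE is hypothetical, and the values are simply the examples used above; consult the CLI Guide and Reference for the exact syntax at your driver level.

```ini
[SAMPLE]
PATCH1=11
PATCH2="1,4,5"
```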
------------------------------------------------------------------------
Data Links Manager Quick Beginnings
------------------------------------------------------------------------
11.1 Dlfm start Fails with Message: "Error in getting the afsfid for
prefix"
For a Data Links Manager running in the DCE-DFS environment, if dlfm start
fails with the error:
Error in getting the afsfid for prefix
contact IBM Service. The error may occur when a DFS file set that had been
registered to the Data Links Manager using "dlfm add_prefix" was
subsequently deleted.
------------------------------------------------------------------------
11.2 Setting Tivoli Storage Manager Class for Archive Files
To specify which TSM management class to use for the archive files, set the
DLFM_TSM_MGMTCLASS DB2 registry entry to the appropriate management class
name.
------------------------------------------------------------------------
11.3 Disk Space Requirements for DFS Client Enabler
The DFS Client Enabler is an optional component that you can select during
DB2 Universal Database client or server installation. You cannot install a
DFS Client Enabler without installing a DB2 Universal Database client or
server product, even though the DFS Client Enabler runs on its own without
the need for a DB2 UDB client or server. In addition to the 2 MB of disk
space required for the DFS Client Enabler code, you should set aside an
additional 40 MB if you are installing the DFS Client Enabler as part of a
DB2 Run-Time Client installation. You will need more disk space if you
install the DFS Client Enabler as part of a DB2 Administration Client or
DB2 server installation. For more information about disk space requirements
for DB2 Universal Database products, refer to the DB2 for UNIX Quick
Beginnings manual.
------------------------------------------------------------------------
11.4 Monitoring the Data Links File Manager Back-end Processes on AIX
There has been a change to the output of the dlfm see command. When this
command is issued to monitor the Data Links File Manager back-end processes
on AIX, the output that is returned will be similar to the following:
PID PPID PGID RUNAME UNAME ETIME DAEMON NAME
17500 60182 40838 dlfm root 12:18 dlfm_copyd_(dlfm)
41228 60182 40838 dlfm root 12:18 dlfm_chownd_(dlfm)
49006 60182 40838 dlfm root 12:18 dlfm_upcalld_(dlfm)
51972 60182 40838 dlfm root 12:18 dlfm_gcd_(dlfm)
66850 60182 40838 dlfm root 12:18 dlfm_retrieved_(dlfm)
67216 60182 40838 dlfm dlfm 12:18 dlfm_delgrpd_(dlfm)
60182 1 40838 dlfm dlfm 12:18 dlfmd_(dlfm)
DLFM SEE request was successful.
The name that is enclosed within the parentheses is the name of the dlfm
instance, in this case "dlfm".
------------------------------------------------------------------------
11.5 Installing and Configuring DB2 Data Links Manager for AIX: Additional
Installation Considerations in DCE-DFS Environments
In the section called "Installation prerequisites", there is new
information that should be added:
You must also install either an e-fix for DFS 3.1,
or PTF set 1 (when it becomes available). The e-fix is available from:
http://www.transarc.com/Support/dfs/datalinks/efix_dfs31_main_page.html
Also:
The DFS client must be running before you install the Data Links Manager.
Use db2setup or smitty.
In the section called "Keytab file", there is an error that should be
corrected as follows:
The keytab file, which contains the principal and password information,
should be called datalink.ktb and ....
The example below this paragraph uses the correct name: datalink.ktb. The
"Keytab file" section should be moved under "DCE-DFS Post-Installation
Task", because the creation of this file cannot occur until after the
DLMADMIN instance has been created.
In the section called "Data Links File Manager servers and clients", it
should be noted that the Data Links Manager server must be installed before
any of the Data Links Manager clients.
A new section, "Backup directory", should be added:
If the backup method is to a local file system,
this must be a directory in the DFS file system.
Ensure that this DFS file set has been created by a
DFS administrator. This should not be a DMLFS file set.
------------------------------------------------------------------------
11.6 Failed "dlfm add_prefix" Command
For a Data Links Manager running in the DCE/DFS environment, the dlfm
add_prefix command might fail with a return code of -2061 (backup failed).
If this occurs, perform the following steps:
1. Stop the Data Links Manager daemon processes by issuing the dlfm stop
command.
2. Stop the DB2 processes by issuing the dlfm stopdbm command.
3. Get dce root credentials by issuing the dce_login root command.
4. Start the DB2 processes by issuing the dlfm startdbm command.
5. Register the file set with the Data Links Manager by issuing the dlfm
add_prefix command.
6. Start the Data Links Manager daemon processes by issuing the dlfm
start command.
------------------------------------------------------------------------
11.7 Installing and Configuring DB2 Data Links Manager for AIX: Installing
DB2 Data Links Manager on AIX Using the db2setup Utility
In the section "DB2 database DLFM_DB created", note that the DLFM_DB is not
created in the DCE-DFS environment. This must be done as a post-installation step.
In the section "DCE-DFS pre-start registration for DMAPP", Step 2 should be
changed to the following:
2. Commands are added to /opt/dcelocal/tcl/user_cmd.tcl to
ensure that the DMAPP is started when DFS is started.
------------------------------------------------------------------------
11.8 Installing and Configuring DB2 Data Links Manager for AIX: DCE-DFS
Post-Installation Task
The following new section, "Complete the Data Links Manager Install",
should be added:
On the Data Links Manager server, the following steps must be performed to
complete the installation:
1. Create the keytab file as outlined under "Keytab file" in the section
"Additional Installation Considerations in DCE-DFS Environment", in
the chapter "Installing and Configuring DB2 Data Links Manager for
AIX".
2. As root, enter the following commands to start the DMAPP:
stop.dfs all
start.dfs all
3. Run "dlfm setup" using dce root credentials as follows:
a. Log in as the Data Links Manager administrator, DLMADMIN.
b. As root, issue dce_login.
c. Enter the command: dlfm setup.
On the Data Links Manager client, the following steps must be performed to
complete the installation:
1. Create the keytab file as outlined under "Keytab file" in the section
"Additional Installation Considerations in DCE-DFS Environment", in
the chapter "Installing and Configuring DB2 Data Links Manager for
AIX".
2. As root, enter the following commands to start the DMAPP:
stop.dfs all
start.dfs all
------------------------------------------------------------------------
11.9 Installing and Configuring DB2 Data Links Manager for AIX: Manually
Installing DB2 Data Links Manager Using Smit
Under the section, "SMIT Post-installation Tasks", modify step 7 to
indicate that the command "dce_login root" must be issued before "dlfm
setup". Step 11 is not needed. This step is performed automatically when
Step 6 (dlfm server_conf) or Step 8 (dlfm client_conf) is done. Also remove
step 12 (dlfm start). To complete the installation, perform the following
steps:
1. Create the keytab file as outlined under "Keytab file" in the section
"Additional Installation Considerations in DCE-DFS Environment", in
the chapter "Installing and Configuring DB2 Data Links Manager for
AIX".
2. As root, enter the following commands to start the DMAPP:
stop.dfs all
start.dfs all
------------------------------------------------------------------------
11.10 Installing and Configuring DB2 Data Links DFS Client Enabler
In the section "Configuring a DFS Client Enabler", add the following to
Step 2:
Performing the "secval" commands will usually complete the configuration.
It may, however, be necessary to reboot the machine as well.
If problems are encountered in accessing READ PERMISSION DB files,
reboot the machine where the DB2 DFS Client Enabler has just been installed.
------------------------------------------------------------------------
11.11 Choosing a Backup Method for DB2 Data Links Manager on AIX
In addition to Disk Copy and XBSA, you can also use Tivoli Storage Manager
(TSM) for backing up files that reside on a Data Links server.
To use Tivoli Storage Manager as an archive server:
1. Install Tivoli Storage Manager on the Data Links server. For more
information, refer to your Tivoli Storage Manager product
documentation.
2. Register the Data Links server client application with the Tivoli
Storage Manager server. For more information, refer to your Tivoli
Storage Manager product documentation.
3. Add the following environment variables to the Data Links Manager
Administrator's db2profile or db2cshrc script files:
(for Bash, Bourne, or Korn shell)
export DSMI_DIR=/usr/lpp/tsm/bin
export DSMI_CONFIG=$HOME/tsm/dsm.opt
export DSMI_LOG=$HOME/dldump
export PATH=$PATH:/usr/lpp/tsm/bin
(for C shell)
setenv DSMI_DIR /usr/lpp/tsm/bin
setenv DSMI_CONFIG ${HOME}/tsm/dsm.opt
setenv DSMI_LOG ${HOME}/dldump
setenv PATH ${PATH}:/usr/lpp/tsm/bin
4. Ensure that the dsm.sys TSM system options file is located in the
/usr/lpp/tsm/bin directory.
5. Ensure that the dsm.opt TSM user options file is located in the
INSTHOME/tsm directory, where INSTHOME is the home directory of the
Data Links Manager Administrator.
6. Set the PASSWORDACCESS option to generate in the
/usr/lpp/tsm/bin/dsm.sys Tivoli Storage Manager system options file.
7. Register the TSM password with the generate option before starting the
Data Links File Manager for the first time. This way, you will not
need to provide a password when the Data Links File Manager initiates
a connection to the TSM server. For more information, refer to your
TSM product documentation.
8. Set the DLFM_BACKUP_TARGET registry variable to TSM. The value of
DLFM_BACKUP_DIR_NAME registry variable will be ignored in this case.
This will activate the Tivoli Storage Manager backup option.
Notes:
1. If you change the setting of the DLFM_BACKUP_TARGET registry
variable between TSM and disk at run time, you should be aware
that the archived files are not moved to the newly specified
archive location. For example, if you start the Data Links File
Manager with the DLFM_BACKUP_TARGET registry value set to TSM,
and change the registry value to a disk location, all newly
archived files will be stored in the new location on the disk.
The files that were previously archived to TSM will not be moved
to the new disk location.
2. To override the default TSM management class, set the new
registry variable DLFM_TSM_MGMTCLASS. If this registry
variable is left unset, the default TSM management class is
used.
9. Stop the Data Links File Manager by entering the dlfm stop command.
10. Start the Data Links File Manager by entering the dlfm start command.
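Taken together, steps 8 through 10 above amount to the following command sequence, run as the Data Links Manager Administrator (the management-class name used here is a hypothetical example, not taken from this document):

```shell
# Route archive copies to Tivoli Storage Manager; DLFM_BACKUP_DIR_NAME
# is ignored once this is set
db2set -g DLFM_BACKUP_TARGET=TSM
# Optional: override the default TSM management class
# (the class name "dlfmclass" is a made-up example)
db2set -g DLFM_TSM_MGMTCLASS=dlfmclass
# Restart the Data Links File Manager so the new settings take effect
dlfm stop
dlfm start
```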
------------------------------------------------------------------------
11.12 Choosing a Backup Method for DB2 Data Links Manager on Windows NT
Whenever a DATALINK value is inserted into a table with a DATALINK column
that is defined for recovery, the corresponding DATALINK files on the Data
Links server are scheduled to be backed up to an archive server. Currently,
Disk Copy (default method) and Tivoli Storage Manager are the two options
that are supported for file backup to an archive server. Future releases of
DB2 Data Links Manager for Windows NT will support other vendors' backup
media and software.
Disk Copy (default method)
When the backup command is entered on the DB2 server, it ensures that
the linked files in the database are backed up on the Data Links
server to the directory specified by the DLFM_BACKUP_DIR_NAME
environment variable. The default value for this variable is
c:\dlfmbackup, where c:\ represents the Data Links Manager backup
installation drive.
To set this variable to c:\dlfmbackup, enter the following command:
db2set -g DLFM_BACKUP_DIR_NAME=c:\dlfmbackup
The location specified by the DLFM_BACKUP_DIR_NAME environment
variable must not be on a file system that uses a Data Links
Filesystem Filter. Also ensure that the required space is available
in the directory you specified for the backup files.
Also, ensure that the DLFM_BACKUP_TARGET variable is set to LOCAL by
entering the following command:
db2set -g DLFM_BACKUP_TARGET=LOCAL
After setting or changing these variables, stop and restart the Data
Links File Manager using the dlfm stop and dlfm start commands.
Tivoli Storage Manager
To use Tivoli Storage Manager as an archive server:
1. Install Tivoli Storage Manager on the Data Links server. For more
information, refer to your Tivoli Storage Manager product
documentation.
2. Register the Data Links server client application with the Tivoli
Storage Manager server. For more information, refer to your
Tivoli Storage Manager product documentation.
3. Click on Start and select Settings --> Control Panel --> System.
The System Properties window opens. Select the Environment tab
and enter the following environment variables and corresponding
values:
Variable Value
DSMI_DIR c:\tsm\baclient
DSMI_CONFIG c:\tsm\baclient\dsm.opt
DSMI_LOG c:\tsm\dldump
4. Ensure that the dsm.sys TSM system options file is located in the
c:\tsm\baclient directory.
5. Ensure that the dsm.opt TSM user options file is located in the
c:\tsm\baclient directory.
6. Set the PASSWORDACCESS option to generate in the
c:\tsm\baclient\dsm.sys Tivoli Storage Manager system options
file.
7. Register the TSM password with the generate option before starting
the Data Links File Manager for the first time. This way, you
will not need to provide a password when the Data Links File
Manager initiates a connection to the TSM server. For more
information, refer to your TSM product documentation.
8. Set the DLFM_BACKUP_TARGET environment variable to TSM using the
following command:
db2set -g DLFM_BACKUP_TARGET=TSM
The value of the DLFM_BACKUP_DIR_NAME environment variable will
be ignored in this case. This will activate the Tivoli Storage
Manager backup option.
Notes:
1. If you change the setting of the DLFM_BACKUP_TARGET
environment variable between TSM and LOCAL at run time, you
should be aware that the archived files are not moved to the
newly specified archive location. For example, if you start
the Data Links File Manager with the DLFM_BACKUP_TARGET
environment variable set to TSM, and change its value to
LOCAL, all newly archived files will be stored in the new
location on the disk. The files that were previously
archived to TSM will not be moved to the new disk location.
2. To override the default TSM management class, set the new
environment variable DLFM_TSM_MGMTCLASS. If this variable
is left unset, the default TSM management class is used.
9. Stop the Data Links File Manager by entering the dlfm stop
command.
10. Start the Data Links File Manager by entering the dlfm start
command.
------------------------------------------------------------------------
11.13 Backing up a Journalized File System on AIX
The book states that the Data Links Manager must be stopped, and that an
offline backup should be made of the file system. The following approach,
which removes the requirement of stopping the Data Links Manager, is
suggested for users who require higher availability.
1. Extract the attached (see below) CLI source file quiesce.c and the
shell script online.sh.
2. Compile quiesce.c:
xlC -o quiesce -I$HOME/sqllib/include -L$HOME/sqllib/lib -ldb2 quiesce.c
3. Run the script on the node that has the DLFS file system.
The shell script online.sh assumes that you have a catalog entry on the
Data Links Manager node for each database that is registered with the Data
Links Manager. It also assumes that /etc/filesystems has the complete entry
for the DLFS file system. The shell script does the following:
* Quiesces all the tables in databases that are registered with the Data
Links Manager. This will stop any new activity.
* Unmounts and remounts the file system as a read-only file system.
* Performs a file system backup.
* Unmounts and remounts the file system as a read-write file system.
* Resets the DB2 tables; that is, brings them out of the quiesce state.
The script must be modified to suit your environment as follows:
1. Select the backup command you want to use and put it in the
do_backup function of the script.
2. Set the following environment variables within the script:
o DLFM_INST: set this to the DLFM instance name.
o PATH_OF_EXEC: set this to the path where the "quiesce" executable
resides.
Invoke the script as follows:
online.sh <filesystem_name>
------------------------- start of 'online.sh' script ----------------------
#!/bin/ksh
# Sample script for performing a filesystem backup without bringing it
# offline for most of the duration of the backup
# Some sections of the script need to be modified by the users to suit their
# specific needs including replacing some of the parameters with their own.
# Usage: onlineb <filesystem name>
#The dlfs filesystem being backed up would remain accessible in read-only mode
#for most of the time that the filesystem backup is going on.
#For a short while in between it may be necessary to have all users off the
#filesystem. This would be required at two points; the first, when switching
#the filesystem to read-only (an unmount followed by re-mount as read-only)
#and the second when switching it back to read-write (unmount again followed by
#re-mount as read-write)
# Environment dependent variables ...
# To be changed according to needs ...
DLFM_INST=sharada
PATH_OF_EXEC=/home/sharada/amit
# Local environment variables.
EXEC=quiesce
DLFM_DB_NAME=dlfm_db
# Function to check if root
check_id() {
if [ `id -u` -ne 0 ]
then
echo "You need to be root to run this"
exit 1
fi
}
#
# Function to quiesce the tables with Datalinks value in databases registered
# with DLFM_DB
#
quiesce_tables()
{
echo "Starting DB2 ..."
su - $DLFM_INST "-c db2start | tail -n 1" # Print just the last line
su - $DLFM_INST "-c $PATH_OF_EXEC/$EXEC -q $DLFM_DB_NAME"
}
#
# Function to make the dlfs filesystem read-only
#
# [The filesystem should not be in use during this time; no user should even
# have 'cd'-ied into the filesystem]
# - If the filesystem is NFS exported, unexport it
#
unexport_fs() {
if exportfs | grep -w $filesystem_name
then
echo $filesystem_name " is NFS exported"
nfs_export_existed=1
echo "Unexporting " $filesystem_name
exportfs -u $filesystem_name
result=$?
if [ $result -ne 0 ]
then
echo "Failed to unexport " $filesystem_name
reset_tables
exit 1
fi
else
echo $filesystem_name " is not NFS exported"
fi
}
#
# Function to Unmount the filesystem
#
umount_fs() {
echo "Unmounting " $filesystem_name
umount $filesystem_name
result=$?
if [ $result -ne 0 ]
then
echo "Unable to unmount " $filesystem_name
echo "Filesystem " $filesystem_name " may be in use"
echo "Please make sure that no one is using the filesystem"
echo "and then press Enter"
read ans
umount $filesystem_name
result=$?
fi
if [ $result -ne 0 ]
then
echo "Unable to unmount " $filesystem_name
echo "Aborting ..."
echo "Resetting the quiesced tables ..."
reset_tables
exit 1
fi
echo "Successfully unmounted " $filesystem_name
}
#
# Function to remount the same filesystem back as read-only or
# read-write depending on the value of "RO" variable.
#
remount_fs()
{
if [ $RO -eq 1 ]
then
echo "Now re-mounting " $filesystem_name " as read-only"
mount -v dlfs -r $filesystem_name
else
echo "Now re-mounting " $filesystem_name " as read-write"
mount -v dlfs $filesystem_name
fi
result=$?
if [ $result -ne 0 ]
then
echo "Failed to remount " $filesystem_name
echo "Aborting ..."
reset_tables
exit 1
fi
echo "Successfully re-mounted " $filesystem_name
}
#
# Function: If this was NFS exported, then export it as read-only now
#
make_fs_ro() {
if [ $nfs_export_existed ]
then
echo "Re-exporting for NFS as read-only"
chnfsexp -d $filesystem_name -N -t ro
result=$?
if [ $result -ne 0 ]
then
echo "Warning: Unable to NFS export " $filesystem_name
# Not aborting here - continuing with a warning
# at least the filesystem is available locally
## TBD: Or perhaps it would be better to exit
else
echo "Successfully exported " $filesystem_name " as read-only"
fi
fi
}
#
# Function to do the backup.
# Update this function with the backup command that you want to use.
#
do_backup() {
echo "Initiating backup of " $filesystem_name
# [ Add lines here to issue your favourite backup command with the right
# parameters, or uncomment one of the following ]
# To invoke backup via smit, uncomment the following line
# smit fs # Select Backup a Filesystem
# OR
# To issue the backup command directly, uncomment and modify the following
# line with your own options (for example full/incremental) and the
# appropriate parameters (you might want to replace /dev/rmt0 by the name of
# your backup device)
# /usr/sbin/backup -f'/dev/rmt0' -'0' $filesystem_name
# result=$?
# if [ $result -ne 0 ]
# then
# echo "Backup failed"
# # Do we exit here ? Or cleanup ?
# else
# echo "Successful backup"
# fi
# OR
# Put in your own backup script here
#
}
#
# Function to NFS export the filesystem as read-write again, if it
# was NFS exported to start with.
export_fs() {
if [ $nfs_export_existed ]
then
echo "Exporting back for NFS as read-write"
chnfsexp -d $filesystem_name -N -t rw
result=$?
if [ $result -ne 0 ]
then
echo "Warning: Unable to NFS export " $filesystem_name
# Not aborting here - continuing with a warning
# at least the filesystem is available locally
# TBD: Or perhaps it would be better to exit
else
echo "Successfully exported " $filesystem_name " as read-write"
fi
fi
}
# Function to reset Quiesced tables
reset_tables() {
su - $DLFM_INST "-c $PATH_OF_EXEC/$EXEC -r $DLFM_DB_NAME"
}
#***************** MAIN PORTION starts here ...*****************
#Check args
#
if [ $# -lt 1 ]
then
echo "Usage: " $0 " <filesystem_name>"
exit 1
fi
check_id
# Quiesce tables ( after waiting for all transactions to get over ...)
quiesce_tables
# (i) umount and remount the filesystem as read-only
filesystem_name=$1
unexport_fs
umount_fs
RO=1
remount_fs # READ_ONLY
make_fs_ro
# (ii) Start BackUp
do_backup
# (iii) unmount and remount the filesystem as read-write
umount_fs
RO=0
remount_fs # READ_WRITE
export_fs
# Reset all Quiesced tables ...
reset_tables
# Now the filesystem is ready for normal operation of Datalinks
echo "Done"
exit 0
------------------------- end of 'online.sh' script ------------------------
------------------------- start of 'quiesce.c' script ------------------------
/**********************************************************************
*
* OCO SOURCE MATERIALS
*
* COPYRIGHT: P#2 P#1
* (C) COPYRIGHT IBM CORPORATION Y1, Y2
*
* The source code for this program is not published or otherwise divested of
* its trade secrets, irrespective of what has been deposited with the U.S.
* Copyright Office.
*
* Source File Name = quiesce.c (%W%)
*
* Descriptive Name = Quiesce or Reset tables.
*
* Function: It quiesces ( OR resets ) the tables ( with datalinks column ) of
* the databases which are registered with DLFM_DB
*
* This program expects that the databases registered with DLFM_DB are
* cataloged. It also expects that DB2 is started.
*
* Dependencies:
*
* Restrictions:
*
***********************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sqlcli1.h>
#include <sqlutil.h>
#include <sqlca.h>
#define MAX_UID_LENGTH 20
#define MAX_PWD_LENGTH 20
#define MAXCOLS 255
struct sqlca sqlca;
struct SQLB_TBSPQRY_DATA *sqlb;
#ifndef max
#define max(a,b) (a > b ? a : b)
#endif
#define CHECK_HANDLE( htype, hndl, RC ) if ( RC != SQL_SUCCESS ) \
{ check_error( htype, hndl, RC, __LINE__, __FILE__ ) ; }
SQLRETURN check_error( SQLSMALLINT, SQLHANDLE, SQLRETURN, int, char * ) ;
SQLRETURN DBconnect( SQLHANDLE, SQLHANDLE * ) ;
SQLRETURN print_error( SQLSMALLINT, SQLHANDLE, SQLRETURN, int, char * ) ;
SQLRETURN prompted_connect( SQLHANDLE, SQLHANDLE * ) ;
SQLRETURN terminate( SQLHANDLE, SQLRETURN ) ;
SQLCHAR server[SQL_MAX_DSN_LENGTH + 1] ;
SQLCHAR uid[MAX_UID_LENGTH + 1] ;
SQLCHAR pwd[MAX_PWD_LENGTH + 1] ;
/* check_error - calls print_error(), checks severity of return code */
SQLRETURN check_error( SQLSMALLINT htype, /* A handle type identifier */
SQLHANDLE hndl, /* A handle */
SQLRETURN frc, /* Return code to be included with error msg */
int line, /* Used for output message, indicate where */
char * file /* the error was reported from */
) {
print_error( htype, hndl, frc, line, file ) ;
switch ( frc ) {
case SQL_SUCCESS:
break ;
case SQL_INVALID_HANDLE:
printf( "\n>------ ERROR Invalid Handle --------------------------\n");
/* fall through: attempt rollback and terminate */
case SQL_ERROR:
printf( "\n>--- FATAL ERROR, Attempting to rollback transaction --\n");
if ( SQLEndTran( htype, hndl, SQL_ROLLBACK ) != SQL_SUCCESS )
printf( ">Rollback Failed, Exiting application\n" ) ;
else
printf( ">Rollback Successful, Exiting application\n" ) ;
return( terminate( hndl, frc ) ) ;
case SQL_SUCCESS_WITH_INFO:
printf( "\n> ----- Warning Message, application continuing ------- \n");
break ;
case SQL_NO_DATA_FOUND:
printf( "\n> ----- No Data Found, application continuing --------- \n");
break ;
default:
printf( "\n> ----------- Invalid Return Code --------------------- \n");
printf( "> --------- Attempting to rollback transaction ---------- \n");
if ( SQLEndTran( htype, hndl, SQL_ROLLBACK ) != SQL_SUCCESS )
printf( ">Rollback Failed, Exiting application\n" ) ;
else
printf( ">Rollback Successful, Exiting application\n" ) ;
return( terminate( hndl, frc ) ) ;
}
return ( frc ) ;
}
/* connect without prompt */
SQLRETURN DBconnect( SQLHANDLE henv,
SQLHANDLE * hdbc
) {
/* allocate a connection handle */
if ( SQLAllocHandle( SQL_HANDLE_DBC,
henv,
hdbc
) != SQL_SUCCESS ) {
printf( ">---ERROR while allocating a connection handle-----\n" ) ;
return( SQL_ERROR ) ;
}
/* Set AUTOCOMMIT OFF */
if ( SQLSetConnectAttr( * hdbc,
SQL_ATTR_AUTOCOMMIT,
( void * ) SQL_AUTOCOMMIT_OFF, SQL_NTS
) != SQL_SUCCESS ) {
printf( ">---ERROR while setting AUTOCOMMIT OFF ------------\n" ) ;
return( SQL_ERROR ) ;
}
if ( SQLConnect( * hdbc,
server, SQL_NTS,
uid, SQL_NTS,
pwd, SQL_NTS
) != SQL_SUCCESS ) {
printf( ">--- Error while connecting to database: %s -------\n",
server
) ;
SQLDisconnect( * hdbc ) ;
SQLFreeHandle( SQL_HANDLE_DBC, * hdbc ) ;
return( SQL_ERROR ) ;
}
else /* Print Connection Information */
printf( "\nConnected to %s\n", server ) ;
return( SQL_SUCCESS ) ;
}
/*--> SQLL1X32.SCRIPT */
/* print_error - calls SQLGetDiagRec(), displays SQLSTATE and message **
** - called by check_error */
SQLRETURN print_error( SQLSMALLINT htype, /* A handle type identifier */
SQLHANDLE hndl, /* A handle */
SQLRETURN frc, /* Return code to be included with error msg */
int line, /* Used for output message, indicate where */
char * file /* the error was reported from */
) {
SQLCHAR buffer[SQL_MAX_MESSAGE_LENGTH + 1] ;
SQLCHAR sqlstate[SQL_SQLSTATE_SIZE + 1] ;
SQLINTEGER sqlcode ;
SQLSMALLINT length, i ;
printf( ">--- ERROR -- RC = %d Reported from %s, line %d ------------\n",
frc,
file,
line
) ;
i = 1 ;
while ( SQLGetDiagRec( htype,
hndl,
i,
sqlstate,
&sqlcode,
buffer,
SQL_MAX_MESSAGE_LENGTH + 1,
&length
) == SQL_SUCCESS ) {
printf( " SQLSTATE: %s\n", sqlstate ) ;
printf( "Native Error Code: %ld\n", sqlcode ) ;
printf( "%s \n", buffer ) ;
i++ ;
}
printf( ">--------------------------------------------------\n" ) ;
return( SQL_ERROR ) ;
}
/*<-- */
/* prompted_connect - prompt for connect options and connect */
SQLRETURN prompted_connect( SQLHANDLE henv,
SQLHANDLE * hdbc
) {
/* allocate a connection handle */
if ( SQLAllocHandle( SQL_HANDLE_DBC,
henv,
hdbc
) != SQL_SUCCESS ) {
printf( ">---ERROR while allocating a connection handle-----\n" ) ;
return( SQL_ERROR ) ;
}
/* Set AUTOCOMMIT OFF */
if ( SQLSetConnectAttr( * hdbc,
SQL_ATTR_AUTOCOMMIT,
( void * ) SQL_AUTOCOMMIT_OFF, SQL_NTS
) != SQL_SUCCESS ) {
printf( ">---ERROR while setting AUTOCOMMIT OFF ------------\n" ) ;
return( SQL_ERROR ) ;
}
if ( SQLConnect( * hdbc,
server, SQL_NTS,
uid, SQL_NTS,
pwd, SQL_NTS
) != SQL_SUCCESS ) {
printf( ">--- ERROR while connecting to %s -------------\n",
server
) ;
SQLDisconnect( * hdbc ) ;
SQLFreeHandle( SQL_HANDLE_DBC, * hdbc ) ;
return( SQL_ERROR ) ;
}
else /* Print Connection Information */
printf( "\nConnected to %s\n", server ) ;
return( SQL_SUCCESS ) ;
}
/* terminate and free environment handle */
SQLRETURN terminate( SQLHANDLE henv,
SQLRETURN rc
) {
SQLRETURN lrc ;
printf( ">Terminating ....\n" ) ;
print_error( SQL_HANDLE_ENV,
henv,
rc,
__LINE__,
__FILE__
) ;
/* Free environment handle */
if ( ( lrc = SQLFreeHandle( SQL_HANDLE_ENV, henv ) ) != SQL_SUCCESS )
print_error( SQL_HANDLE_ENV,
henv,
lrc,
__LINE__,
__FILE__
) ;
return( rc ) ;
}
void show_progress()
{
int i;
for(i=0;i<3;i++)
{
printf("...");
/* sleep(1);*/
}
printf("... DONE.\n");
}
void wrong_input(char *str)
{
printf("\n\n\t****************************************************************\n");
printf("\t* usage: %s -q <DB-NAME> ( to Quiesce tables ..) *\n",str);
printf("\t* OR *\n");
printf("\t* usage: %s -r <DB-NAME> ( to reset Quiesced tables ..)*\n",str);
printf("\t****************************************************************\n\n\n");
exit(0);
}
extern SQLCHAR server[SQL_MAX_DSN_LENGTH + 1] ;
extern SQLCHAR uid[MAX_UID_LENGTH + 1] ;
extern SQLCHAR pwd[MAX_PWD_LENGTH + 1] ;
#define MAX_STMT_LEN 500
int reset=-1;
/*******************************************************************
** main
*******************************************************************/
int main( int argc, char * argv[] ) {
SQLHANDLE henv,hdbc[3], hstmt,hstmt1,hstmt2 ;
SQLRETURN rc ;
SQLCHAR * sqlstmt = ( SQLCHAR * ) "SELECT dbname,dbinst,password from dfm_dbid" ;/* for the primary db */
SQLCHAR * stmt = ( SQLCHAR * ) "SELECT COLS.TBCREATOR, COLS.TBNAME FROM SYSIBM.SYSCOLUMNS COLS, "
" SYSIBM.SYSCOLPROPERTIES PROPS WHERE COLS.TBCREATOR = PROPS.TABSCHEMA AND "
" COLS.TBNAME = PROPS.TABNAME AND COLS.TYPENAME='DATALINK' AND SUBSTR(PROPS.DL_FEATURES, 2, 1) "
" = 'F' GROUP BY COLS.TBCREATOR, COLS.TBNAME";/*test for the secondary db's*/
SQLCHAR * stmt2 = ( SQLCHAR * ) "SELECT count(*) from dfm_xnstate where xn_state=3" ;/* for the primary db */
SQLCHAR v_dbname[20] ;
SQLINTEGER v_xnstate ;
SQLCHAR v_usernm[20] ;
SQLCHAR v_passwd[20] ;
SQLINTEGER nullind;
SQLVARCHAR v_tbname[128];
SQLCHAR v_tbcreator[20];
SQLINTEGER rowcount;
int i,count;
char state[6],v_tb[100];
int flag=0;
int xxx,tong=0;
if( (argc != 2 && argc!=3) || argv[1][0]!='-' || strlen(argv[1]) !=2) wrong_input(argv[0]);
/*** NOTE : If argc==2 then DB-NAME the program would ask user to enter
DB-Name else it would take the second argument to this program ( argv[2] )
as DB-NAME ***/
if(argv[1][1]=='q' || argv[1][1]=='Q')
{
reset=0;
}
else
{
if(argv[1][1]=='r' || argv[1][1]=='R')
{
reset=1;
}
else
{
wrong_input(argv[0]);
}
if(reset==-1) wrong_input(argv[0]);
}
SQLAllocHandle( SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv ) ;
/*
Before allocating any connection handles, set Environment wide
Connect Options
Set to Connect Type 2, Syncpoint 1
*/
if ( SQLSetEnvAttr( henv,
SQL_CONNECTTYPE,
( SQLPOINTER ) SQL_COORDINATED_TRANS,
0
) != SQL_SUCCESS ) {
printf( ">---ERROR while setting Connect Type 2 -------------\n" ) ;
return( SQL_ERROR ) ;
}
/*<-- */
/*--> */
if ( SQLSetEnvAttr( henv,
SQL_SYNC_POINT,
( SQLPOINTER ) SQL_ONEPHASE,
0
) != SQL_SUCCESS ) {
printf( ">---ERROR while setting Syncpoint One Phase -------------\n" ) ;
return( SQL_ERROR ) ;
}
if(argc==3)
{
strcpy(server,argv[2]);
}
else
{
printf( ">Enter database Name:\n" ) ;
fgets( ( char * ) server, sizeof( server ), stdin ) ;
server[strcspn( ( char * ) server, "\n" )] = '\0' ; /* strip trailing newline */
}
/*prompted_connect(henv,&hdbc[0]);*/
/* The environment handle was already allocated above; do not allocate it again */
/* allocate a connect handle, and connect to the primary database*/
rc = DBconnect( henv, &hdbc[0] ) ;
if ( rc != SQL_SUCCESS ) return( terminate( henv, rc ) ) ;
flag=1;
if(reset!=1)
{
printf("\nWaiting for XNs to get over ...");
while(flag) /* Outer While */
{
rc = SQLAllocHandle( SQL_HANDLE_STMT, hdbc[0], &hstmt2 ) ;
CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[0], rc ) ;
rc = SQLExecDirect( hstmt2, stmt2, SQL_NTS ) ;
CHECK_HANDLE( SQL_HANDLE_STMT, hstmt2, rc ) ;
rc = SQLBindCol( hstmt2, 1, SQL_C_LONG, &v_xnstate, 0, &nullind) ;
CHECK_HANDLE( SQL_HANDLE_STMT, hstmt2, rc ) ;
while ( ( rc = SQLFetch( hstmt2 ) ) == SQL_SUCCESS )
{
/*printf( "\nCount of XNs Pending : %d \n",v_xnstate) ;*/
if (v_xnstate > 0)
{
fflush(stdout);
printf(".");
sleep(1);
break;
}
else flag=0;
} /* Inner While */
/* Deallocation */
rc = SQLFreeHandle( SQL_HANDLE_STMT, hstmt2 ) ;
CHECK_HANDLE( SQL_HANDLE_STMT, hstmt2, rc ) ;
} /* Outer While */
} /* IF */
if(!reset) printf("XNs OVER !!\n");
rc = SQLAllocHandle( SQL_HANDLE_STMT, hdbc[0], &hstmt ) ;
CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[0], rc ) ;
rc = SQLExecDirect( hstmt, sqlstmt, SQL_NTS ) ;
CHECK_HANDLE( SQL_HANDLE_STMT, hstmt, rc ) ;
rc = SQLBindCol( hstmt, 1, SQL_C_CHAR, v_dbname, sizeof(v_dbname), NULL) ;
CHECK_HANDLE( SQL_HANDLE_STMT, hstmt, rc ) ;
rc = SQLBindCol( hstmt, 2, SQL_C_CHAR, v_usernm, sizeof(v_usernm), NULL) ;
CHECK_HANDLE( SQL_HANDLE_STMT, hstmt, rc ) ;
v_passwd[0]='\0';
rc = SQLBindCol( hstmt, 3, SQL_C_CHAR, v_passwd, sizeof(v_passwd), NULL) ;
CHECK_HANDLE( SQL_HANDLE_STMT, hstmt, rc ) ;
/* Counter for number of rows fetched from the primary db*/
count=1;
for (i=1;i<=count;i++) /* For the FOR LOOP */
{
while ( ( rc = SQLFetch( hstmt ) ) == SQL_SUCCESS )
{
printf( "\nDatabase Name : %s \n",v_dbname) ;
count=count+1;
/* Depending on the no. of rows fetched from the primary db connect to the sec db's */
if ( SQLAllocHandle( SQL_HANDLE_DBC,henv,&hdbc[i]) != SQL_SUCCESS )
{
printf(">---ERROR while allocating a connection handle-----\n");
return( SQL_ERROR ) ;
}
/* Set AUTOCOMMIT ON */
if ( SQLSetConnectAttr( hdbc[i],SQL_ATTR_AUTOCOMMIT,( void * ) SQL_AUTOCOMMIT_ON, SQL_NTS) != SQL_SUCCESS )
{
printf(">---ERROR while setting AUTOCOMMIT ON -------------\n");
return( SQL_ERROR ) ;
}
rc = SQLConnect(hdbc[i],v_dbname,SQL_NTS,((v_passwd[0]=='\0') ? NULL : v_usernm),SQL_NTS,v_passwd,SQL_NTS);
if ( rc != SQL_SUCCESS ) return( terminate( henv, rc ) ) ;
/* Try selecting from these databases */
rc = SQLAllocHandle( SQL_HANDLE_STMT, hdbc[i], &hstmt1 ) ;
CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[i], rc ) ;
rc = SQLExecDirect( hstmt1, stmt, SQL_NTS ) ;
CHECK_HANDLE( SQL_HANDLE_STMT, hstmt1, rc ) ;
rc = SQLBindCol( hstmt1, 1, SQL_C_CHAR, v_tbcreator, sizeof(v_tbcreator), NULL) ;
CHECK_HANDLE( SQL_HANDLE_STMT, hstmt1, rc ) ;
rc = SQLBindCol( hstmt1, 2, SQL_C_CHAR, v_tbname, sizeof(v_tbname), NULL) ;
CHECK_HANDLE( SQL_HANDLE_STMT, hstmt1, rc ) ;
while ( ( rc = SQLFetch( hstmt1 ) ) == SQL_SUCCESS )
{
v_tb[0]= '\0';
strcat(v_tb,v_tbcreator);
strcat(v_tb,".");
strcat(v_tb,v_tbname);
printf("\tTABLE : %s ",v_tb);
sqluvqdp (v_tb,(reset==1) ? 9 : 2, NULL, &sqlca);
/** 9 -> to RESET 2 -> to Quiesce ( exclusive) */
if (sqlca.sqlcode==0)
{
if (reset==1)
{
/* printf("The quiesced tablespace successfully reset.\n"); */
show_progress();
}
else
{
/* printf("The tablespace successfully quiesced\n");*/
show_progress();
}
}
else if (sqlca.sqlcode== -3805 ||sqlca.sqlcode==01004)
{
if(reset==1)
{
/* printf("The quiesced tablespace could not be reset.\n");*/
show_progress();
}
else
{
/* printf("The tablespace has already been quiesced\n");*/
show_progress();
}
}
else
{
if(reset==1)
{
printf("The quiesced tablespace could not be reset.\n");
}
else
{
printf("The tablespace could not be quiesced. \n");
}
printf("\t\tSQLCODE = %ld\n", sqlca.sqlcode);
strncpy(state, sqlca.sqlstate, 5);
state[5] = '\0';
printf("\t\tSQLSTATE = %s\n", state);
}
}
rc = SQLFreeHandle( SQL_HANDLE_STMT, hstmt1 ) ;
CHECK_HANDLE( SQL_HANDLE_STMT, hstmt1, rc ) ;
rc = SQLDisconnect( hdbc[i] );
CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[i], rc ) ;
rc = SQLFreeHandle( SQL_HANDLE_DBC, hdbc[i] ) ;
CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[i], rc ) ;
}
}
printf("Number of databases processed: %d \n",count-1);
if ( rc != SQL_NO_DATA_FOUND )
CHECK_HANDLE( SQL_HANDLE_STMT, hstmt, rc ) ;
/* Commit the changes. */
rc = SQLEndTran( SQL_HANDLE_DBC, hdbc[0], SQL_COMMIT ) ;
CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[0], rc ) ;
/* Disconnect and free up CLI resources. */
rc = SQLFreeHandle( SQL_HANDLE_STMT, hstmt ) ;
CHECK_HANDLE( SQL_HANDLE_STMT, hstmt, rc ) ;
/* ******************************************************/
printf( "\n>Disconnecting .....\n" ) ;
rc = SQLDisconnect( hdbc[0] );
CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[0], rc ) ;
rc = SQLFreeHandle( SQL_HANDLE_DBC, hdbc[0] ) ;
CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[0], rc ) ;
/**********************************************************/
rc = SQLFreeHandle( SQL_HANDLE_ENV, henv ) ;
if ( rc != SQL_SUCCESS ) return( terminate( henv, rc ) ) ;
return( SQL_SUCCESS ) ;
} /* end main */
------------------------- end of 'quiesce.c' script ------------------------
------------------------------------------------------------------------
Data Movement Utilities Guide and Reference
------------------------------------------------------------------------
12.1 Pending States After a Load Operation
The first two sentences in the last paragraph in this section have been
changed to the following:
The fourth possible state associated with the load process (check pending state)
pertains to referential and check constraints, DATALINKS constraints,
AST constraints, or generated column constraints. For example, if an existing table
is a parent table containing a primary key referenced by a foreign key
in a dependent table, replacing data in the parent table places both tables (not the
table space) in check pending state.
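As a sketch of how such a state is typically resolved (the table names DEPT and EMP and the input file are hypothetical examples, not taken from this document), the parent table could be reloaded and both tables brought out of check pending state from the command line processor:

```shell
# Replacing data in the parent table DEPT puts both DEPT and its
# dependent table EMP into check pending state
db2 "LOAD FROM dept.del OF DEL REPLACE INTO DEPT"
# Validate constraints and turn off check pending state for both tables
db2 "SET INTEGRITY FOR DEPT, EMP IMMEDIATE CHECKED"
```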
------------------------------------------------------------------------
12.2 Load Restrictions and Limitations
The following restrictions apply to generated columns and the load utility:
* It is not possible to load a table having a generated column in a
unique index unless the generated column is an "include column" of the
index or the generatedoverride file type modifier is used. If this
modifier is used, it is expected that all values for the column will
be supplied in the input data file.
* It is not possible to load a table having a generated column in the
partitioning key unless the generatedoverride file type modifier is
used. If this modifier is used, it is expected that all values for the
column will be supplied in the input data file.
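A hedged illustration of the second case (the table PARTTAB and file parttab.del are made-up names): when the generatedoverride modifier is used, the input file itself must carry a value for the generated column in every row:

```shell
# PARTTAB has a generated column in its partitioning key, so the load
# must use generatedoverride, and parttab.del must supply a value for
# that column in every row
db2 "LOAD FROM parttab.del OF DEL MODIFIED BY GENERATEDOVERRIDE INSERT INTO PARTTAB"
```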
------------------------------------------------------------------------
Installation and Configuration Supplement
------------------------------------------------------------------------
13.1 Binding Database Utilities Using the Run-Time Client
The Run-Time Client cannot be used to bind the database utilities (import,
export, reorg, the command line processor) or the DB2 CLI bind files to a
database. You must use the DB2 Administration Client or the DB2 Application
Development Client instead.
You must bind these database utilities and DB2 CLI bind files to each
database before they can be used with that database. In a network
environment, if you are using multiple clients that run on different
operating systems, or are at different versions or service levels of DB2,
you must bind the utilities once for each operating system and DB2-version
combination.
------------------------------------------------------------------------
13.2 UNIX Client Access to DB2 Using ODBC
Chapter 12 ("Running Your Own Applications") states that you need to update
odbcinst.ini if you install an ODBC Driver Manager with your ODBC client
application or ODBC SDK. This is partially incorrect. You do not need to
update odbcinst.ini if you install a Merant ODBC Driver Manager product.
------------------------------------------------------------------------
13.3 Switching NetQuestion for OS/2 to Use TCP/IP
The instructions for switching NetQuestion to use TCP/IP on OS/2 systems
are incomplete. The location of the *.cfg files mentioned in those
instructions is the data subdirectory of the NetQuestion installation
directory. You can determine the NetQuestion installation directory by
entering one of the following commands:
echo %IMNINSTSRV% //for SBCS installations
echo %IMQINSTSRV% //for DBCS installations
------------------------------------------------------------------------
Message Reference
------------------------------------------------------------------------
14.1 SQL0270N (New Reason Code 40)
The following reason code has been added to message SQL0270N:
Reason code 40
Under "Cause": The function IDENTITY_VAL_LOCAL cannot be used
in a trigger or SQL function.
Under "Action": Remove the invocation of the IDENTITY_VAL_LOCAL function
from the trigger definition or the SQL function definition.
------------------------------------------------------------------------
14.2 SQL0301N (New Explanation Text)
The Explanation section for this message has been extended. It now reads as
follows:
Explanation: A host variable could not be used as specified
in the statement because its data type is incompatible
with the intended use of its value.
This error can occur as a result of specifying an incorrect
host variable or an incorrect SQLTYPE value in a SQLDA
on an EXECUTE or OPEN statement. In the case of a user-defined
structured type, it may be that the associated built-in type
of the host variable or SQLTYPE is not compatible with the parameter
of the TO SQL transform function defined in the transform group
for the statement.
The statement cannot be processed.
------------------------------------------------------------------------
14.3 SQL0303N (New Text)
The Explanation and User Response sections for this message have been
extended. They now read as follows:
Explanation: An embedded SELECT or VALUES statement selects into a host variable,
but the data type of the variable is not compatible with the data type
of the corresponding SELECT-list or VALUES-list element.
Both must be numeric, character, or graphic. For a user-defined data type,
it is possible that the host variable is defined with an associated
built-in data type that is not compatible with the result type
of the FROM SQL transform function defined in the transform group
for the statement. For example, if the data type of the column is date
or time, the data type of the variable must be character
with an appropriate minimum length.
The statement cannot be processed.
User Response: Verify that the table definitions are current and that the host variable
has the correct data type. For a user-defined data type, verify
that the associated built-in type of the host variable is compatible
with the result type of the FROM SQL transform function defined
in the transform group for the statement.
------------------------------------------------------------------------
14.4 SQL0358N (New Reason Code 26)
Reason code 26
Explanation: The file referenced by the DATALINK value cannot be accessed
for linking. It may be a directory, a symbolic link, a file with the
set user ID (SUID) or set group ID (SGID) permission bit on, or a file
owned by user nobody (uid = -2).
User Response: Linking of directories is not allowed. Use the actual file
name, not the symbolic link. If SUID or SGID is on, the file
cannot be linked using a DATALINK type. If the file is owned
by user nobody (uid = -2), the file cannot be linked using a
DATALINK type with the READ PERMISSION DB option.
------------------------------------------------------------------------
14.5 SQL0408N (New Text)
The Explanation and User Response sections for this message have been
extended. They now read as follows:
Explanation: The data type of the value to be assigned to the column, parameter,
SQL variable, or transition variable by the SQL statement
is incompatible with the declared data type of the assignment target.
Both must be:
- Numeric
- Character
- Graphic
- Dates or character
- Times or character
- Timestamps or character
- Datalinks
- The same distinct types
- Reference types, where the target type of the value is a subtype
of the target type of the column.
- The same user-defined structured types. Or, the static type
of the value must be a subtype of the static type (declared type)
of the target. If a host variable is involved, the associated built-in
type of the host variable must be compatible with the parameter
of the TO SQL transform function defined in the transform group
for the statement.
The statement cannot be processed.
User Response: Examine the statement and possibly the target table or view to determine
the target data type. Ensure that the variable, expression, or literal value
assigned has the proper data type for the assignment target.
For a user-defined structured type, also consider the parameter
of the TO SQL transform function defined in the transform group
for the statement as an assignment target.
------------------------------------------------------------------------
14.6 SQL0423N (Revised Text)
Locator variable "<variable-position>" does not currently represent any value.
Explanation: A locator variable is in error. Either it has not
had a LOB value assigned to it, the locator
associated with the variable has been freed,
or the result set cursor has been closed.
If "<variable-position>" is provided, it gives
the ordinal position of the variable in error
in the set of variables specified. Depending on when
the error is detected, the database manager may not
be able to determine "<variable-position>".
Instead of an ordinal position, "<variable-position>"
may have the value "function-name RETURNS", which
means that the locator value returned from the user-defined
function identified by function-name is in error.
User Response: If this was a LOB locator, correct the program
so that the LOB locator variables
used in the SQL statement have valid LOB values before
the statement is executed. A LOB value can be assigned
to a locator variable by means of a SELECT INTO statement,
a VALUES INTO statement, or a FETCH statement.
If this was a cursor declared WITH RETURN, ensure that
the cursor is opened before attempting to allocate it.
sqlcode: -423
sqlstate: 0F001
------------------------------------------------------------------------
14.7 SQL0670N (Revised Text)
Message SQL0670N refers to row length limits for tables defined in a CREATE
TABLE or an ALTER TABLE statement, and to the regular table space in which
these tables are created. However, SQL0670N also applies to the row lengths
of declared temporary tables defined in a DECLARE GLOBAL TEMPORARY TABLE
statement, and to the user temporary table spaces in which these declared
temporary tables are created. If a DECLARE GLOBAL TEMPORARY TABLE statement
fails with SQL0670N, it means that the user temporary table space cannot
accommodate the row length defined in the DECLARE TEMPORARY TABLE
statement.
Following is the revised message text:
The row length of the table exceeded a limit of "<length>" bytes.
(Table space "<tablespace-name>".)
Explanation: The row length of a table in the database manager cannot exceed:
- 4005 bytes in a table space with a 4K page size.
- 8101 bytes in a table space with an 8K page size.
- 16293 bytes in a table space with a 16K page size.
- 32677 bytes in a table space with a 32K page size.
The length is calculated by adding the internal lengths
of the columns. Details of internal column lengths
can be found under CREATE TABLE in the SQL Reference.
One of the following conditions can occur:
- The row length for the table defined in
the CREATE TABLE or ALTER TABLE statement exceeds
the limit for the page size of the table space.
The regular table space name "<tablespace-name>"
identifies the table space from which the page size
was used to determine the limit on the row length.
- The row length for the table defined in the DECLARE
GLOBAL TEMPORARY TABLE statement exceeds the limit
for the page size of the table space. The user temporary
table space name "<tablespace-name>" identifies
the table space from which the page size was used
to determine the limit on the row length.
The statement cannot be processed.
User Response: Depending on the cause, do one of the following:
- In the case of CREATE TABLE, ALTER TABLE,
or DECLARE GLOBAL TEMPORARY TABLE, specify
a table space with a larger page size, if possible.
- Otherwise, reduce the row length by eliminating
one or more columns, or reducing the lengths
of one or more columns.
sqlcode: -670
sqlstate: 54010
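The four limits quoted above each fall exactly 91 bytes short of the raw page size (4096 - 4005 = 8192 - 8101 = 91, and so on). A minimal Python sketch of the check, assuming only the documented values (the 91-byte overhead is inferred from them here, not taken from DB2 internals):

```python
# Row-length limits quoted in SQL0670N, keyed by table space page size.
# Each limit is the page size minus 91 bytes; the 91-byte figure is
# inferred from the four documented values, not from DB2 internals.
ROW_LIMITS = {4096: 4005, 8192: 8101, 16384: 16293, 32768: 32677}

def max_row_length(page_size_bytes):
    """Return the documented row-length limit for a given page size."""
    try:
        return ROW_LIMITS[page_size_bytes]
    except KeyError:
        raise ValueError("unsupported page size: %d" % page_size_bytes)

def fits(row_length, page_size_bytes):
    """True if a computed internal row length fits the table space."""
    return row_length <= max_row_length(page_size_bytes)
```

The internal row length itself is the sum of the internal column lengths described under CREATE TABLE in the SQL Reference.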
------------------------------------------------------------------------
14.8 SQL1704N (New Reason Codes)
Reason code 14
Explanation: Table has an invalid primary key or unique constraint.
User Response: Table has an index that was erroneously used for a primary key
or unique constraint. Drop the primary key or unique constraint that uses the index.
This must be done in the release of the database manager in use prior
to the current release. Resubmit the database migration command
under the current release and then recreate the primary key or unique constraint.
Reason code 15
Explanation: Table does not have a unique index on the REF IS column.
User Response: Create a unique index on the REF IS column of the typed table
using the release of the database manager in use prior to the current release.
Resubmit the database migration command under the current release.
Reason code 16
Explanation: Table is not logged but has a DATALINK column with file link control.
User Response: Drop the table and then create the table without the not logged property.
This must be done in the release of the database manager in use prior
to the current release. Resubmit the database migration command
under the current release.
Reason code 17
Explanation: A new page could not be allocated in the DMS system catalog
table space.
User Response: Restore the database backup on the previous release of the
database manager and add more containers to the table space. It is
recommended that the table space have 70% free space for database
migration. Then return to the current release and migrate the database.
------------------------------------------------------------------------
14.9 SQL4942N (New Text)
The Explanation and User Response sections for this message have been
extended. They now read as follows:
Explanation: An embedded SELECT statement selects into a host variable
"<name>", but the data type of the variable
and the corresponding SELECT list element are not compatible.
Both must be numeric, character, or graphic.
If the data type of the column is date or time, the data type
of the variable must be character with an appropriate minimum length.
For a user-defined data type, it is possible that the host variable
is defined with an associated built-in data type that is not compatible
with the result type of the FROM SQL transform function defined
in the transform group for the statement.
The function cannot be completed.
User Response: Verify that the table definitions are current, and that the host
variable has the proper data type. For a user-defined data type, verify
that the associated built-in data type of the host variable is compatible
with the result type of the FROM SQL transform function defined
in the transform group for the statement.
------------------------------------------------------------------------
14.10 SQL20117N (Changed Reason Code 1)
The following reason code has been changed for message SQL20117N:
Reason code 1
Under "Explanation": RANGE or ROWS is specified without an ORDER BY
in the window specification.
Under "User Response": Add a window ORDER BY clause to each window
specification that specifies RANGE or ROWS.
------------------------------------------------------------------------
Replication Guide and Reference
------------------------------------------------------------------------
15.1 Replication on Windows 2000
DB2 DataPropagator Version 7.1 is compatible with the Windows 2000
operating system.
------------------------------------------------------------------------
15.2 DATALINK Replication
You cannot replicate DATALINK columns between DB2 databases on AS/400 and
DB2 databases on other platforms.
On the AS/400 platform, there is no support for the replication of the
"comment" attribute of DATALINK values.
If you are running AIX 4.2, before you run the default user exit program
(ASNDLCOPY) you must install the PTF for APAR IY03101 (AIX 4210-06
RECOMMENDED MAINTENANCE FOR AIX 4.2.1). This PTF contains a Y2K fix for the
"modtime/MDTM" command in the FTP daemon. To verify the fix, check the last
modification time returned from the "modtime <file>" command, where <file>
is a file that was modified after January 1, 2000.
If the target table is an external CCD table, DB2 DataPropagator calls the
ASNDLCOPY routine to replicate DATALINK files. For the latest information
about how to use the ASNDLCOPY and ASNDLCOPYD programs, see the prologue
section of each program's source code. The following restrictions apply:
* Internal CCD tables can contain DATALINK indicators, but not DATALINK
values.
* Condensed external CCD tables can contain DATALINK values.
* Noncondensed CCD target tables cannot contain any DATALINK columns.
* When the source and target servers are the same, the subscription set
must not contain any members with DATALINK columns.
------------------------------------------------------------------------
15.3 LOB Restrictions
Condensed internal CCD tables cannot contain references to LOB columns or
LOB indicators.
------------------------------------------------------------------------
15.4 Replication and Non-IBM Servers
You must use DataJoiner Version 2 or later to replicate data to or from
non-IBM servers such as Informix, Microsoft SQL Server, Oracle, Sybase, and
Sybase SQL Anywhere. You cannot use the relational connect function for
this type of replication because DB2 Relational Connect Version 7.1 does
not have update capability. Also, you must use DJRA (DataJoiner Replication
Administration) to administer such heterogeneous replication on all
platforms (AS/400, OS/2, OS/390, UNIX, and Windows) for all existing
versions of DB2 and DataJoiner.
------------------------------------------------------------------------
15.5 Update-anywhere Prerequisite
If you want to set up update-anywhere replication with conflict detection
and with more than 150 subscription set members in a subscription set, you
must run the following DDL to create the ASN.IBMSNAP_COMPENSATE table on
the control server:
CREATE TABLE ASN.IBMSNAP_COMPENSATE (
APPLY_QUAL char(18) NOT NULL,
MEMBER SMALLINT,
INTENTSEQ CHAR(10) FOR BIT DATA,
OPERATION CHAR(1));
------------------------------------------------------------------------
15.6 Planning for Replication
On page 65, "Connectivity" should include the following fact:
If the Apply program cannot connect to the control server,
the Apply program terminates.
When using data blocking for AS/400, you must ensure that the total amount
of data to be replicated during the interval does not exceed "4 million
rows", not "4 MB" as stated on page 69 of the book.
------------------------------------------------------------------------
15.7 Setting Up Your Replication Environment
On page 95, "Customizing CD table, index, and tablespace names" says that
the DPREPL.DFT file is in either the \sqllib\bin directory or the
\sqllib\java directory. Actually DPREPL.DFT is in the \sqllib\cc directory.
------------------------------------------------------------------------
15.8 Problem Determination
The Replication Analyzer runs on Windows 32-bit systems and AIX. To run the
Analyzer on AIX, ensure that the sqllib/bin directory appears before
/usr/local/bin in your PATH environment variable to avoid conflicts with
/usr/local/bin/analyze.
The Replication Analyzer has two additional optional keywords: CT and AT.
CT=n
Show only those entries from the Capture trace table that are newer
than n days old. This keyword is optional. If you do not specify this
keyword, the default is 7 days.
AT=n
Show only those entries from the Apply trail table that are newer than
n days old. This keyword is optional. If you do not specify this
keyword, the default is 7 days.
Example:
analyze mydb1 mydb2 f=mydirectory ct=4 at=2 deepcheck q=applyqual1
For the Replication Analyzer, the following keyword information is updated:
deepcheck
Specifies that the Analyzer perform a more complete analysis,
including the following information: CD and UOW table pruning
information, DB2 for OS/390 tablespace-partitioning and compression
detail, analysis of target indexes with respect to subscription keys,
subscription timelines, and subscription-set SQL-statement errors. The
analysis includes all servers. This keyword is optional.
lightcheck
Specifies that the following information be excluded from the report:
all column detail from the ASN.IBMSNAP_SUBS_COLS table, subscription
errors or anomalies or omissions, and incorrect or inefficient
indexes. This reduction in information saves resources and produces a
smaller HTML output file. This keyword is optional and is mutually
exclusive with the deepcheck keyword.
Analyzer tools are available in PTFs for replication on AS/400 platforms.
These tools collect information about your replication environment and
produce an HTML file that can be sent to your IBM Service Representative to
aid in problem determination. To get the AS/400 tools, download the
appropriate PTF (for example, for product 5769DP2, you must download PTF
SF61798 or its latest replacement).
Add the following problem and solution to the "Troubleshooting" section:
Problem: The Apply program loops without replicating changes; the Apply trail
table shows STATUS=2.
The subscription set includes multiple source tables. To improve the handling
of hotspots for one source table in the set, an internal CCD table is defined
for that source table, but in a different subscription set. Updates are made
to the source table but the Apply process that populates the internal CCD table
runs asynchronously (for example, the Apply program might not be started or an
event not triggered, and so on). The Apply program that replicates updates from
the source table to the target table loops because it is waiting for the internal
CCD table to be updated.
To stop the looping, start the Apply program (or trigger the event that causes
replication) for the internal CCD table. The Apply program will populate the
internal CCD table and allow the looping Apply program to process changes from
all source tables.
A similar situation could occur for a subscription set that contains source tables
with internal CCD tables that are populated by multiple Apply programs.
------------------------------------------------------------------------
15.9 Capture and Apply for AS/400
On page 178, "A note on work management" should read as follows:
You can alter the default definitions or provide your own definitions.
If you create your own subsystem description, you must name the
subsystem QZSNDPR and create it in a library other than QDPR.
See "OS/400 Work Management V4R3", SC41-5306 for more information
about changing these definitions.
Add the following to page 178, "Verifying and customizing your installation
of DB2 DataPropagator for AS/400":
If you have problems with lock contention due to a high volume of transactions, you can
increase the default wait timeout value from 30 to 120 seconds. You can change the job
each time the Capture job starts, or you can use the following procedure to change the
default wait timeout value for all jobs running in your subsystem:
1. Issue the following command to create a new class object by duplicating QGPL/QBATCH:
CRTDUPOBJ OBJ(QBATCH) FROMLIB(QGPL) OBJTYPE(*CLS) TOLIB(QDPR) NEWOBJ(QZSNDPR)
2. Change the wait timeout value for the newly created class (for example, to 300 seconds):
CHGCLS CLS(QDPR/QZSNDPR) DFTWAIT(300)
3. Update the routing entry in subsystem description QDPR/QZSNDPR to use the newly
created class:
CHGRTGE SBSD(QDPR/QZSNDPR) SEQNBR(9999) CLS(QDPR/QZSNDPR)
On page 195, the ADDEXITPGM command parameters should read:
ADDEXITPGM EXITPNT(QIBM_QJO_DLT_JRNRCV)
FORMAT(DRCV0100)
PGM(QDPR/QZSNDREP)
PGMNBR(*LOW)
CRTEXITPNT(*NO)
PGMDTA(65535 10 QSYS)
------------------------------------------------------------------------
15.10 Table Structures
On page 339, append the following sentence to the STATUS column description
for the value "2":
If you use internal CCD tables and you repeatedly get a value of "2" in
the status column of the Apply trail table, go to "Chapter 8: Problem Determination"
and refer to "Problem: The Apply program loops without replicating changes,
the Apply trail table shows STATUS=2".
------------------------------------------------------------------------
15.11 Capture and Apply Messages
Message ASN1027S should be added:
ASN1027S
There are too many large object (LOB) columns specified. The error code is
"<error_code>".
Explanation: Too many large object (BLOB, CLOB, or DBCLOB) columns are specified
for a subscription set member. The maximum number of columns allowed is 10.
User response: Remove the excess large object columns from the
subscription set member.
Message ASN1048E should read as follows:
ASN1048E
The execution of an Apply cycle failed. See the Apply trail table
for full details: "<text>"
Explanation: An Apply cycle failed. In the message, "<text>"
identifies the "<target_server>", "<target_owner, target_table,
stmt_number>", and "<cntl_server>".
User response: Check the APPERRM fields in the audit trail table to
determine why the Apply cycle failed.
------------------------------------------------------------------------
15.12 Starting the Capture and Apply Programs from Within an Application
On page 399 of the book, a few errors appear in the comments of the Sample
routine that starts the Capture and Apply programs; however the code in the
sample is correct. The latter part of the sample pertains to the Apply
parameters, despite the fact that the comments indicate that it pertains to
the Capture parameters.
You can get samples of the Apply and Capture API, and their respective
makefiles, in the following directories:
For NT - sqllib\samples\repl
For UNIX - sqllib/samples/repl
------------------------------------------------------------------------
SQL Reference
------------------------------------------------------------------------
16.1 IDENTITY_VAL_LOCAL
>>-IDENTITY_VAL_LOCAL--(--)------------------------------------><
The schema is SYSIBM.
The IDENTITY_VAL_LOCAL function is a non-deterministic function that
returns the most recently assigned value for an identity column, where the
assignment occurred as a result of a single row INSERT statement using a
VALUES clause. The function has no input parameters.
The result is a DECIMAL(31,0), regardless of the actual data type of the
identity column to which the result value corresponds.
The value returned is the value assigned to the identity column of the
table identified in the most recent single row INSERT statement with a
VALUES clause for a table containing an identity column. Note that the
INSERT statement must be issued at the same level (that is, the value is
available locally at the level it was assigned, until it is replaced by the
next assigned value).
The assigned value could be a value supplied by the user (if the identity
column is defined as GENERATED BY DEFAULT), or an identity value generated
by DB2.
The function returns the null value in the following situations:
* when a single row INSERT statement with a VALUES clause has not been
issued for a table containing an identity column at the current
processing level
* when a COMMIT or ROLLBACK of a unit of work has occurred since the
most recent INSERT statement that assigned a value.
The result of the function is not affected by the following statements:
* a single row INSERT statement with a VALUES clause for a table that
does not contain an identity column
* a multiple row INSERT statement with a VALUES clause
* an INSERT statement with a fullselect
* a ROLLBACK TO SAVEPOINT statement.
Notes:
* Expressions in the VALUES clause of an INSERT statement are evaluated
prior to the assignments for the target columns of the INSERT
statement. Thus, an invocation of an IDENTITY_VAL_LOCAL function
invoked in the VALUES clause of an INSERT statement uses the most
recently assigned value for an identity column from a previous INSERT
statement. The function returns the null value if no previous single
row INSERT statement with a VALUES clause for a table containing an
identity column has been executed within the same level as the
IDENTITY_VAL_LOCAL function.
* The IDENTITY_VAL_LOCAL function cannot be used in a trigger or an SQL
function (SQLSTATE 42997).
* The identity column value of the table for which the trigger is
defined can be determined within a trigger, by referencing the trigger
transition variable for the identity column.
* Since the results of the IDENTITY_VAL_LOCAL function are not
deterministic, the result of an invocation of the IDENTITY_VAL_LOCAL
function within the SELECT statement of a cursor can vary for each
FETCH statement.
* The assigned value is the value actually assigned to the identity
column (that is, the value that would be returned on a subsequent
SELECT statement). This value is not necessarily the value provided in
the VALUES clause of the INSERT statement, or a value generated by
DB2. The assigned value could be a value specified in a SET transition
variable statement within the body of a before insert trigger, for a
trigger transition variable associated with the identity column.
* The value returned by the function is unpredictable following a failed
single row INSERT with a VALUES clause into a table with an identity
column. The value may be the value that would have been returned from
the function had it been invoked prior to the failed INSERT, or it may
be the value that would have been assigned had the INSERT succeeded.
The actual value returned depends on the point of failure and is
therefore unpredictable.
Examples:
* Set the variable IVAR to the value assigned to the identity column in
the EMPLOYEE table. If this insert is the first into the EMPLOYEE
table, then IVAR would have a value of 1.
CREATE TABLE EMPLOYEE
(EMPNO INTEGER GENERATED ALWAYS AS IDENTITY,
NAME CHAR(30),
SALARY DECIMAL(5,2),
DEPTNO SMALLINT)
* An IDENTITY_VAL_LOCAL function invoked in an INSERT statement returns
the value associated with the previous single row INSERT statement,
with a VALUES clause for a table with an identity column. Assume for
this example that there are two tables, T1 and T2. Both T1 and T2 have
an identity column named C1. DB2 generates values in sequence starting
with 1 for the C1 column in table T1, and values in sequence starting
with 10 for the C1 column in table T2.
CREATE TABLE T1 (C1 INTEGER GENERATED ALWAYS AS IDENTITY,
C2 INTEGER);
CREATE TABLE T2 (C1 DECIMAL(15,0) GENERATED BY DEFAULT AS IDENTITY
(START WITH 10),
C2 INTEGER);
INSERT INTO T1 (C2) VALUES (5);
INSERT INTO T1 (C2) VALUES (6);
SELECT * FROM T1
C1 C2
----------- ----------
1 5
2 6
VALUES IDENTITY_VAL_LOCAL() INTO :IVAR
At this point, the IDENTITY_VAL_LOCAL function would return a value of
2 in IVAR, because that was the value most recently assigned by DB2.
The following INSERT statement inserts a single row into T2, where
column C2 gets a value of 2 from the IDENTITY_VAL_LOCAL function.
INSERT INTO T2 (C2) VALUES (IDENTITY_VAL_LOCAL());
SELECT * FROM T2
WHERE C1 = DECIMAL(IDENTITY_VAL_LOCAL(),15,0)
C1 C2
----------------- ----------
10. 2
Invoking the IDENTITY_VAL_LOCAL function after this insert results in
a value of 10, which is the value generated by DB2 for column C1 of
T2.
------------------------------------------------------------------------
16.2 OLAP Functions
The following represents a correction to the "OLAP Functions" section under
"Expressions" in Chapter 3.
aggregation-function
|--column-function--OVER---(--+------------------------------+-->
'-| window-partition-clause |--'
>----+--------------------------------------------------------------------+>
'-| window-order-clause |--+--------------------------------------+--'
'-| window-aggregation-group-clause |--'
>---------------------------------------------------------------|
window-order-clause
.-,-------------------------------------------.
V .-| asc option |---. |
|---ORDER BY-----sort-key-expression--+------------------+--+---|
'-| desc option |--'
asc option
.-NULLS LAST--.
|---ASC--+-------------+----------------------------------------|
'-NULLS FIRST-'
desc option
.-NULLS FIRST--.
|---DESC--+--------------+--------------------------------------|
'-NULLS LAST---'
window-aggregation-group-clause
|---+-ROWS--+---+-| group-start |---+---------------------------|
'-RANGE-' +-| group-between |-+
'-| group-end |-----'
group-end
|---+-UNBOUNDED FOLLOWING-----------+---------------------------|
'-unsigned-constant--FOLLOWING--'
In the window-order-clause description:
NULLS FIRST
The window ordering considers null values before all non-null values
in the sort order.
NULLS LAST
The window ordering considers null values after all non-null values in
the sort order.
In the window-aggregation-group-clause description:
window-aggregation-group-clause
The aggregation group of a row R is a set of rows, defined relative to
R in the ordering of the rows of R's partition. This clause specifies
the aggregation group. If this clause is not specified, the default is
the same as RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW,
providing a cumulative aggregation result.
ROWS
Indicates the aggregation group is defined by counting rows.
RANGE
Indicates the aggregation group is defined by an offset from a
sort key.
group-start
Specifies the starting point for the aggregation group. The
aggregation group end is the current row. Specification of the
group-start clause is equivalent to a group-between clause of the
form "BETWEEN group-start AND CURRENT ROW".
group-between
Specifies the aggregation group start and end based on either
ROWS or RANGE.
group-end
Specifies the ending point for the aggregation group. The
aggregation group start is the current row. Specification of the
group-end clause is equivalent to a group-between clause of the
form "BETWEEN CURRENT ROW AND group-end".
UNBOUNDED PRECEDING
Includes the entire partition preceding the current row. This can
be specified with either ROWS or RANGE. Also, this can be
specified with multiple sort-key-expressions in the
window-order-clause.
UNBOUNDED FOLLOWING
Includes the entire partition following the current row. This can
be specified with either ROWS or RANGE. Also, this can be
specified with multiple sort-key-expressions in the
window-order-clause.
CURRENT ROW
Specifies the start or end of the aggregation group based on the
current row. If ROWS is specified, the current row is the
aggregation group boundary. If RANGE is specified, the
aggregation group boundary includes the set of rows with the same
values for the sort-key-expressions as the current row. This
clause cannot be specified in group-bound2 if group-bound1
specifies value FOLLOWING.
value PRECEDING
Specifies either the range or number of rows preceding the
current row. If ROWS is specified, then value is a positive
integer indicating a number of rows. If RANGE is specified, then
the data type of value must be comparable to the type of the
sort-key-expression of the window-order-clause. There can only be
one sort-key-expression, and the data type of the
sort-key-expression must allow subtraction. This clause cannot be
specified in group-bound2 if group-bound1 is CURRENT ROW or value
FOLLOWING.
value FOLLOWING
Specifies either the range or number of rows following the
current row. If ROWS is specified, then value is a positive
integer indicating a number of rows. If RANGE is specified, then
the data type of value must be comparable to the type of the
sort-key-expression of the window-order-clause. There can only be
one sort-key-expression, and the data type of the
sort-key-expression must allow addition.
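The ROWS and RANGE framing described above can be modeled in a few lines of Python. This is a sketch of the semantics only (the function names are invented for the example), not DB2 code; it shows how a ROWS frame counts physical rows while the default RANGE frame treats rows with equal sort keys as one boundary:

```python
def rows_window_sum(values, preceding):
    """Moving SUM(...) OVER (ORDER BY ... ROWS BETWEEN <preceding> PRECEDING
    AND CURRENT ROW): aggregate the group formed by counting rows backward
    from the current row."""
    out = []
    for i in range(len(values)):
        start = max(0, i - preceding)          # group-start: n PRECEDING
        out.append(sum(values[start:i + 1]))   # group-end: CURRENT ROW
    return out

def range_cumulative_sum(keys, values):
    """Default frame, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW:
    rows with equal sort keys (peers) fall inside one group boundary and
    therefore share one cumulative result."""
    out = []
    running = 0
    i = 0
    while i < len(keys):
        j = i
        while j < len(keys) and keys[j] == keys[i]:  # find the peer group
            j += 1
        running += sum(values[i:j])
        out.extend([running] * (j - i))  # every peer gets the same result
        i = j
    return out
```

For example, with sort keys (1, 1, 2, 3) the first two rows are peers, so under RANGE they both receive the sum through the end of their peer group, whereas a ROWS frame would give them different results.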
------------------------------------------------------------------------
16.3 SQL Procedures/Compound Statement
Following is a revised syntax diagram for the Compound Statement:
.-NOT ATOMIC--.
>>-+---------+--BEGIN----+-------------+------------------------>
'-label:--' '-ATOMIC------'
>-----+-----------------------------------------------+--------->
| .-----------------------------------------. |
| V | |
'-----+-| SQL-variable-declaration |-+---;---+--'
+-| condition-declaration |----+
'-| return-codes-declaration |-'
>-----+--------------------------------------+------------------>
| .--------------------------------. |
| V | |
'----| statement-declaration |--;---+--'
>-----+-------------------------------------+------------------->
| .-------------------------------. |
| V | |
'----DECLARE-CURSOR-statement--;---+--'
>-----+------------------------------------+-------------------->
| .------------------------------. |
| V | |
'----| handler-declaration |--;---+--'
.-------------------------------.
V |
>--------SQL-procedure-statement--;---+---END--+--------+------><
'-label--'
SQL-variable-declaration
.-,--------------------.
V |
|---DECLARE-------SQL-variable-name---+------------------------->
.-DEFAULT NULL-------.
>-----+-data-type----+--------------------+-+-------------------|
| '-DEFAULT--constant--' |
'-RESULT_SET_LOCATOR--VARYING---------'
condition-declaration
|---DECLARE--condition-name--CONDITION--FOR--------------------->
.-VALUE-.
.-SQLSTATE--+-------+---.
>----+-----------------------+---string-constant----------------|
statement-declaration
.-,-----------------.
V |
|---DECLARE-----statement-name---+---STATEMENT------------------|
return-codes-declaration
|---DECLARE----+-SQLSTATE--CHAR (5)--+---+--------------------+-|
'-SQLCODE--INTEGER----' '-DEFAULT--constant--'
handler-declaration
|---DECLARE----+-CONTINUE-+---HANDLER--FOR---------------------->
+-EXIT-----+
'-UNDO-----'
.-,-----------------------------------.
V .-VALUE-. |
>---------+-SQLSTATE--+-------+--string--+--+------------------->
+-condition-name---------------+
+-SQLEXCEPTION-----------------+
+-SQLWARNING-------------------+
'-NOT FOUND--------------------'
>----SQL-procedure-statement------------------------------------|
A statement-declaration declares a list of one or more names that are local
to the compound statement. A statement name cannot be the same as another
statement name within the same compound statement.
------------------------------------------------------------------------
16.4 LCASE and UCASE (Unicode)
In a Unicode database, the entire repertoire of Unicode characters is
uppercased (or lowercased) based on the Unicode properties of these
characters. Double-wide versions of ASCII characters, as well as Roman
numerals, now case convert correctly.
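As an illustration of the same Unicode case-mapping properties (this is a Python sketch, not DB2 code; Python's str.lower()/str.upper() follow the same Unicode character properties described above):

```python
# Illustrative only: Python applies the same Unicode case-mapping
# properties that a Unicode database now uses for LCASE and UCASE.
fullwidth_a = "\uFF21"       # FULLWIDTH LATIN CAPITAL LETTER A
roman_twelve = "\u216B"      # ROMAN NUMERAL TWELVE

print(fullwidth_a.lower())   # FULLWIDTH LATIN SMALL LETTER A (U+FF41)
print(roman_twelve.lower())  # SMALL ROMAN NUMERAL TWELVE (U+217B)
```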
------------------------------------------------------------------------
16.5 WEEK_ISO
Change the description of this function to the following:
The schema is SYSFUN.
Returns the week of the year of the argument as an integer value
in range 1-53. The week starts with Monday and always includes 7 days.
Week 1 is the first week of the year to contain a Thursday,
which is equivalent to the first week containing January 4.
It is therefore possible to have up to 3 days at the beginning of a year
appear as the last week of the previous year. Conversely, up to 3 days
at the end of a year may appear as the first week of the next year.
The argument must be a date, timestamp, or a valid character string
representation of a date or timestamp that is neither
a CLOB nor a LONG VARCHAR.
The result of the function is INTEGER. The result can be null;
if the argument is null, the result is the null value.
Example:
The following list shows examples of the result of WEEK_ISO and DAYOFWEEK_ISO.
DATE WEEK_ISO DAYOFWEEK_ISO
---------- ----------- -------------
1997-12-28 52 7
1997-12-31 1 3
1998-01-01 1 4
1999-01-01 53 5
1999-01-04 1 1
1999-12-31 52 5
2000-01-01 52 6
2000-01-03 1 1
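The same ISO 8601 week rule is implemented by Python's datetime module, which can be used to cross-check the table above (an illustrative sketch, not DB2 code):

```python
from datetime import date

# isocalendar() returns (ISO year, ISO week, ISO weekday), which
# corresponds to WEEK_ISO and DAYOFWEEK_ISO for these dates.
for d in (date(1997, 12, 28), date(1999, 1, 1), date(2000, 1, 1)):
    iso_year, iso_week, iso_weekday = d.isocalendar()
    print(d, iso_week, iso_weekday)
```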
------------------------------------------------------------------------
16.6 Naming Conventions and Implicit Object Name Qualifications
Add the following note to this section in Chapter 3:
The following names, when used in the context of SQL Procedures,
are restricted to the characters allowed in an
ordinary identifier, even if the names are delimited:
- condition-name
- label
- parameter-name
- procedure-name
- SQL-variable-name
- statement-name
------------------------------------------------------------------------
16.7 Queries (select-statement/fetch-first-clause)
The last paragraph in the description of the fetch-first-clause:
Specification of the fetch-first-clause in a select-statement
makes the cursor not deletable (read-only). This clause
cannot be specified with the FOR UPDATE clause.
is incorrect and should be removed.
------------------------------------------------------------------------
16.8 Libraries Used by the CREATE WRAPPER Statement on Linux
On Linux, the CREATE WRAPPER statement uses libraries called LIBDRDA.SO
and LIBSQLNET.SO, not LIBDRDA.A and LIBSQLNET.A as may have been
documented previously.
------------------------------------------------------------------------
System Monitor Guide and Reference
------------------------------------------------------------------------
17.1 db2ConvMonStream
In the Usage Notes, the structure for the snapshot variable datastream type
SQLM_ELM_SUBSECTION should be sqlm_subsection.
------------------------------------------------------------------------
Troubleshooting Guide
------------------------------------------------------------------------
18.1 Starting DB2 on Windows 95 and Windows 98 When the User Is Not Logged
On
For a db2start command to be successful in a Windows 95 or a Windows 98
environment, you must either:
* Log on using the Windows logon window or the Microsoft Networking
logon window
* Issue the db2logon command (see note (NOTE1) for information about the
db2logon command).
In addition, the user ID that is specified either during the logon or for
the db2logon command must meet DB2's requirements (see note (NOTE2)).
When the db2start command runs, it first checks to see if a user is
logged on. If a user is logged on, the db2start command uses that user's
ID. If a user is not logged on, the db2start command checks whether a
db2logon command has been run, and, if so, the db2start command uses the
user ID that was specified for the db2logon command. If the db2start
command cannot find a valid user ID, the command terminates.
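The lookup order described above can be sketched as follows (an illustrative model only; the function and parameter names are hypothetical and not part of DB2):

```python
def resolve_db2start_user(logged_on_user=None, db2logon_user=None):
    """Sketch of the order in which db2start picks a user ID."""
    if logged_on_user is not None:    # a user logged on via Windows
        return logged_on_user
    if db2logon_user is not None:     # a db2logon command was issued
        return db2logon_user
    raise RuntimeError("no valid user ID found: db2start terminates")
```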
During the installation of DB2 Universal Database Version 6 on Windows 95
and Windows 98, the installation software, by default, adds a shortcut to
the Startup folder that runs the db2start command when the system is booted
(see note (NOTE1) for more information). If the user of the system has
neither logged on nor issued the db2logon command, the db2start command
will terminate.
If you or your users do not normally log on to Windows or to a network, you
can hide the requirement to issue the db2logon command before a db2start
command by running commands from a batch file as follows:
1. Create a batch file that issues the db2logon command followed by the
db2start.exe command. For example:
@echo off
db2logon db2local /p:password
db2start
cls
exit
2. Name the batch file db2start.bat, and store it in the /bin directory
that is under the drive and path where you installed DB2. You store
the batch file in this location to ensure that the operating system
can find the path to the batch file.
The drive and path where DB2 is installed is stored in the DB2
registry variable DB2PATH. To find the drive and path where you
installed DB2, issue the following command:
db2set -g db2path
Assume that the db2set command returns the value c:\sqllib. In this
situation, you would store the batch file as follows:
c:\sqllib\bin\db2start.bat
3. To start DB2 when the system is booted, you should run the batch file
from a shortcut in the Startup folder. You have two options:
o Modify the shortcut that is created by the DB2 installation
program to run the batch file instead of db2start.exe. In the
preceding example, the shortcut would now run the db2start.bat
batch file. The shortcut that is created by the DB2 installation
program is called DB2 - DB2.lnk, and is located in
c:\WINDOWS\Start Menu\Programs\Startup\DB2 - DB2.lnk on most
systems.
o Add your own shortcut to run the batch file, and delete the
shortcut that is added by the DB2 installation program. Use the
following command to delete the DB2 shortcut:
del "C:\WINDOWS\Start Menu\Programs\Startup\DB2 - DB2.lnk"
If you decide to use your own shortcut, you should set the close
on exit attribute for the shortcut. If you do not set this
attribute, the DOS command prompt is left in the task bar even
after the db2start command has successfully completed. To prevent
the DOS window from being opened during the db2start process, you
can set this shortcut (and the DOS window it runs in) to run
minimized.
Note: As an alternative to starting DB2 during the boot of the
system, DB2 can be started prior to the running of any
application that uses DB2. See note (NOTE5) for details.
If you use a batch file to issue the db2logon command before the db2start
command is run, and your users occasionally log on, the db2start command
will continue to work, the only difference being that DB2 will use the user
ID of the logged on user. See note (NOTE1) for additional details.
Notes:
1. The db2logon command simulates a user logon. The format of the
db2logon command is:
db2logon userid /p:password
The user ID that is specified for the command must meet the DB2 naming
requirements (see note (NOTE2) for more information). If the command
is issued without a user ID and password, a window opens to prompt the
user for the user ID and password. If the only parameter provided is a
user ID, the user is not prompted for a password; under certain
conditions a password is required, as described below.
The user ID and password values that are set by the db2logon command
are only used if the user did not log on using either the Windows
logon window or the Microsoft Networking logon window. If the user has
logged on, and a db2logon command has been issued, the user ID from
the db2logon command is used for all DB2 actions, but the password
specified on the db2logon command is ignored.
When the user has not logged on using the Windows logon window or the
Microsoft Networking logon window, the user ID and password that are
provided through the db2logon command are used as follows:
o The db2start command uses the user ID when it starts, and does
not require a password.
o In the absence of a high-level qualifier for actions like
creating a table, the user ID is used as the high-level
qualifier. For example:
1. If you issue the following: db2logon db2local
2. Then issue the following: create table tab1
The table is created with a high-level qualifier as
db2local.tab1.
You should use a user ID that is equal to the schema name of your
tables and other objects.
o When the system acts as client to a server, and the user issues a
CONNECT statement without a user ID and password (for example,
CONNECT TO TEST) and authentication is set to server, the user ID
and password from the db2logon command are used to validate the
user at the remote server. If the user connects with an explicit
user ID and password (for example, CONNECT TO TEST USER userID
USING password), the values that are specified for the CONNECT
statement are used.
2. In Version 6, the user ID that is either used to log on or specified
for the db2logon command must conform to the following DB2
requirements:
o It can be a maximum of 8 characters (bytes) in length.
o It cannot be any of the following: USERS, ADMINS, GUESTS, PUBLIC,
LOCAL, or any SQL reserved word that is listed in the SQL
Reference.
o It cannot begin with SQL, SYS, or IBM.
o Characters can include:
+ A through Z (Windows 95 and Windows 98 support
case-sensitive user IDs)
+ 0 through 9
+ @, #, or $
3. You can prevent the creation of the db2start shortcut in the Startup
folder during a customized interactive installation, or if you are
performing a response file installation and specify the
DB2.AUTOSTART=NO option. If you use these options, there is no
db2start shortcut in the Startup folder, and you must add your own
shortcut to run the db2start.bat file.
4. On Windows 98, an option is available that you can use to specify a
user ID that is always logged on when Windows 98 is started. In this
situation, the Windows logon window will not appear. If you use this
option, a user is logged on and the db2start command will succeed if
the user ID meets DB2 requirements (see note (NOTE2) for details). If
you do not use this option, the user will always be presented with a
logon window. If the user cancels out of this window without logging
on, the db2start command will fail unless the db2logon command was
previously issued, or invoked from the batch file, as described above.
5. If you do not start DB2 during a system boot, DB2 can be started by an
application. You can run the db2start.bat file as part of the
initialization of applications that use DB2. Using this method, DB2
will only be started when the application that will use it is started.
When the user exits the application, a db2stop command can be issued
to stop DB2. Your business applications can start DB2 in this way, if
DB2 is not started during the system boot.
To use the DB2 Synchronizer application or call the synchronization
APIs from your application, DB2 must be started if the scripts that
are downloaded for execution contain commands that operate either
against a local instance or a local database. These commands can be in
database scripts, instance scripts, or embedded in operating system
(OS) scripts. If an OS script does not contain Command Line Processor
commands or DB2 APIs that use an instance or a database, it can be run
without DB2 being started. Because it may be difficult to tell in
advance what commands will be run from your scripts during the
synchronization process, DB2 should normally be started before
synchronization begins.
If you are calling either the db2sync command or the synchronization
APIs from your application, you would start DB2 during the
initialization of your application. If your users will be using the
DB2 Synchronizer shortcut in the DB2 for Windows folder to start
synchronization, the DB2 Synchronization shortcut must be modified to
run a db2sync.bat file. The batch file should contain the following
commands to ensure that DB2 is running before synchronization begins:
@echo off
db2start.bat
db2sync.exe
db2stop.exe
cls
exit
In this example, it is assumed that the db2start.bat file invokes the
db2logon and db2start commands as described above.
If you decide to start DB2 when the application starts, ensure that
the installation of DB2 does not add a shortcut to the Startup folder
to start DB2. See note (NOTE3) for details.
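The Version 6 user-ID rules in note 2 above can be sketched as a small validation function (a hypothetical helper for illustration only; the SQL reserved-word list is deliberately abbreviated here and must come from the SQL Reference in practice):

```python
# Hypothetical sketch of the user-ID rules in note 2. SQL_RESERVED is
# incomplete; see the SQL Reference for the full reserved-word list.
DISALLOWED = {"USERS", "ADMINS", "GUESTS", "PUBLIC", "LOCAL"}
SQL_RESERVED = {"SELECT", "TABLE", "USER"}   # abbreviated sample
ALLOWED_CHARS = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789@#$")

def is_valid_userid(userid):
    u = userid.upper()                 # checks are case-insensitive here
    if not 1 <= len(u) <= 8:           # maximum 8 characters (bytes)
        return False
    if u in DISALLOWED or u in SQL_RESERVED:
        return False
    if u.startswith(("SQL", "SYS", "IBM")):
        return False
    return all(c in ALLOWED_CHARS for c in u)
```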
------------------------------------------------------------------------
Control Center
------------------------------------------------------------------------
19.1 Ability to Administer DB2 Server for VSE and VM Servers
The DB2 Universal Database Version 7.1 Control Center has enhanced its
support of DB2 Server for VSE and VM databases. All DB2 Server for VSE and
VM database objects can be viewed by the Control Center. There is also
support for the CREATE INDEX, REORGANIZE INDEX, and UPDATE STATISTICS
statements, and for the REBIND command. REORGANIZE INDEX and REBIND require
a stored procedure running on the DB2 Server for VSE and VM hosts. This
stored procedure is supplied by the Control Center for VSE and VM feature
of DB2 Server for VSE and VM.
The fully integrated Control Center allows the user to manage DB2,
regardless of the platform on which the DB2 server runs. DB2 Server for VSE
and VM objects are displayed on the Control Center main window, along with
DB2 Universal Database objects. The corresponding actions and utilities to
manage these objects are invoked by selecting the object. For example, a
user can list the indexes of a particular database, select one of the
indexes, and reorganize it. The user can also list the tables of a database
and run update statistics, or define a table as a replication source.
For information about configuring the Control Center to perform
administration tasks on DB2 Server for VSE and VM objects, refer to the DB2
Connect User's Guide, or the Installation and Configuration Supplement.
------------------------------------------------------------------------
19.2 Java 1.2 Support for the Control Center
The Control Center supports bi-directional languages, such as Arabic and
Hebrew, using bi-di support in Java 1.2. This support is provided for the
Windows NT platform only.
Java 1.2 must be installed for the Control Center to recognize and use it:
1. JDK 1.2.2 is available on the DB2 UDB CD under the DB2\bidi\NT
directory. ibm-inst-n122p-win32-x86.exe is the installer program, and
ibm-jdk-n122p-win32-x86.exe is the JDK distribution. Copy both files
to a temporary directory on your hard drive, then run the installer
program from there.
2. Install it under <DB2PATH>\java\Java12, where <DB2PATH> is the
installation path of DB2.
3. Do not select JDK/JRE as the System VM when prompted by the JDK/JRE
installation.
After Java 1.2 is installed successfully, starting the Control Center in
the normal manner will use Java 1.2.
To stop the use of Java 1.2, you may either uninstall JDK/JRE from
<DB2PATH>\java\Java12, or simply rename the <DB2PATH>\java\Java12
sub-directory to something else.
Note: Do not confuse <DB2PATH>\java\Java12 with <DB2PATH>\Java12.
<DB2PATH>\Java12 is part of the DB2 installation, and includes JDBC
support for Java 1.2.
------------------------------------------------------------------------
19.3 "Invalid shortcut" Error when Using the Online Help on the Windows
Operating System
When using the Control Center online help, you may encounter an error like:
"Invalid shortcut". If you have recently installed a new Web browser or a
new version of a Web browser, ensure that HTML and HTM documents are
associated with the correct browser. See the Windows Help topic "To change
which program starts when you open a file".
------------------------------------------------------------------------
19.4 "File access denied" Error when Attempting to View a Completed Job in
the Journal on the Windows Operating System
On DB2 Universal Database for Windows NT, a "File access denied" error
occurs when attempting to open the Journal to view the details of a job
created in the Script Center. The job status shows complete. This behavior
occurs when a job created in the Script Center contains the START command.
To avoid this behavior, use START/WAIT instead of START in both the batch
file and in the job itself.
------------------------------------------------------------------------
19.5 Multisite Update Test Connect
Multisite Update Test Connect functionality in the Version 7.1 Control
Center is limited by the version of the target instance. The target
instance must be at least Version 7.1 for the "remote" test connect
functionality to run. To run Multisite Update Test Connect functionality in
Version 6, you must bring up the Control Center locally on the target
instance and run it from there.
------------------------------------------------------------------------
19.6 Control Center for DB2 for OS/390
The DB2 UDB Control Center for OS/390 allows you to manage the use of your
licensed IBM DB2 utilities. Utility functions that are elements of
separately orderable features of DB2 UDB for OS/390 must be licensed and
installed in your environment before being managed by the DB2 Control
Center.
The "CC390" database, defined with the Control Center when you configure a
DB2 for OS/390 subsystem, is used for internal support of the Control
Center. Do not modify this database.
Although DB2 for OS/390 Version 7.1 is not mentioned specifically in the
Control Center table of contents, or the Information Center Task
information, the documentation does support the DB2 for OS/390 Version 7.1
functions. Many of the DB2 for OS/390 Version 6-specific functions also
relate to DB2 for OS/390 Version 7.1, and some functions that are DB2 for
OS/390 Version 7.1-specific in the table of contents have no version
designation. If you have configured a DB2 for OS/390 Version 7.1 subsystem
on your Control Center, you have access to all the documentation for that
version.
To access and use the Generate DDL function from the Control Center for DB2
for OS/390, you must have the Generate DDL function installed:
* For Version 5, install DB2Admin 2.0 with DB2 for OS/390 Version 5.
* For Version 6, install the small programming enhancement that will be
available as a PTF for the DB2 Admin feature of DB2 for OS/390 Version
6.
* For Version 7.1, the Generate DDL function is part of the separately
priced DB2 Admin feature of DB2 for OS/390 Version 7.1.
You can access Stored Procedure Builder from the Control Center, but you
must have already installed it by the time you start the DB2 UDB Control
Center. It is part of the DB2 Application Development Client.
To catalog a DB2 for OS/390 subsystem directly on the workstation, use
the Client Configuration Assistant tool:
1. On the Source page, specify the "Manually configure a connection to a
database" radio button.
2. On the Protocol page, complete the appropriate communications
information.
3. On the Database page, specify the subsystem name in the "Database
name" field.
4. On the Node Options page, select the "Configure node options
(Optional)" checkbox.
5. Select "MVS/ESA, OS/390" from the list in the "Operating system"
field.
6. Click the Finish button to complete the configuration.
To catalog a DB2 for OS/390 subsystem via a gateway machine, follow steps
1-6 above on the gateway machine, and then:
1. On the client machine, start the Control Center.
2. Right click on the "Systems" folder and select "Add".
3. In the Add System dialog, type the gateway machine name in the "System
name" field.
4. Type "DB2DAS00" in the "Remote instance" field.
5. For the TCP/IP protocol, in the Protocol parameters, specify the
gateway machine's host name in the "Host name" field.
6. Type "523" in the "Service name" field.
7. Click "OK" to add the system. You should now see the gateway machine
added under the "Systems" folder.
8. Expand the gateway machine name.
9. Right click on the "Instances" folder and select "Add".
10. In the Add Instance dialog, click the "Refresh" push button to list
the instances available on the gateway machine. If the gateway machine
is a Windows NT system, the DB2 for OS/390 subsystem was probably
cataloged under the instance DB2.
11. Select the instance. The protocol parameters are filled in
automatically for this instance.
12. Click "OK" to add the instance.
13. Open the Instances folder to see the instance you just added.
14. Expand the instance.
15. Right click on the "Databases" folder and select "Add".
16. In the Add Database dialog, click the "Refresh" push button to
display the local databases on the gateway machine. If you are
adding a DB2 subsystem, type the subsystem name in the "Database
name" field. Optionally, type a local alias name for the
subsystem (or the database).
17. Click "OK".
You have now successfully added the subsystem in the Control Center. When
you open the database, you should see the DB2 for OS/390 subsystem
displayed.
------------------------------------------------------------------------
19.7 Required Fix for Control Center for OS/390
You must apply APAR PQ36382 to the 390 Enablement feature of DB2 for OS/390
Version 5 and DB2 for OS/390 Version 6 to manage these subsystems using the
DB2 UDB Control Center for Version 7.1. Without this fix, you cannot use
the DB2 UDB Control Center for Version 7.1 to run utilities for those
subsystems.
The APAR should be applied to the following FMIDs:
DB2 for OS/390 Version 5 390 Enablement: FMID JDB551D
DB2 for OS/390 Version 6 390 Enablement: FMID JDB661D
------------------------------------------------------------------------
19.8 Change to the Create Spatial Layer Dialog
The "<<" and ">>" buttons have been removed from the Create Spatial Layer
dialog.
------------------------------------------------------------------------
19.9 Troubleshooting Information for the DB2 Control Center
In the "Control Center Installation and Configuration" chapter in your
Quick Beginnings book, the section titled "Troubleshooting Information"
tells you to unset your client browser's CLASSPATH from a command window if
you are having problems running the Control Center as an applet. This
section also tells you to start your browser from the same command window.
However, the command for starting your browser is not provided. To launch
Internet Explorer, type start iexplore and press Enter. To launch Netscape,
type start netscape and press Enter. These commands assume that your
browser is in your PATH. If it is not, add it to your PATH or switch to
your browser's installation directory and reissue the start command.
------------------------------------------------------------------------
19.10 Control Center Troubleshooting on UNIX Based Systems
If you are unable to start the Control Center on a UNIX based system, set
the JAVA_HOME environment variable to point to your Java distribution:
* For example, if Java is installed under /usr/jdk118, set JAVA_HOME
to /usr/jdk118.
* For the sh, ksh, or bash shell:
export JAVA_HOME=/usr/jdk118
* For the csh or tcsh shell:
setenv JAVA_HOME /usr/jdk118
------------------------------------------------------------------------
19.11 Possible Infopops Problem on OS/2
If you are running the Control Center on OS/2, using screen size 1024x768
with 256 colors, and with Workplace Shell Palette Awareness enabled,
infopops that extend beyond the border of the current window may be
displayed with black text on a black background. To fix this problem,
either change the display setting to more than 256 colors, or disable
Workplace Shell Palette Awareness.
------------------------------------------------------------------------
19.12 Launching More Than One Control Center Applet
You cannot launch more than one Control Center applet simultaneously on the
same machine. This restriction applies to Control Center applets running in
all supported browsers.
------------------------------------------------------------------------
19.13 Help for the jdk11_path Configuration Parameter
In the Control Center help, the description of the Java Development Kit 1.1
Installation Path (jdk11_path) configuration parameter is missing a line
under the sub-heading Applies To. The complete list under Applies To is:
* Database server with local and remote clients
* Client
* Database server with local clients
* Partitioned database server with local and remote clients
* Satellite database server with local clients
------------------------------------------------------------------------
19.14 Solaris System Error (SQL10012N) when Using the Script Center or the
Journal
When selecting a Solaris system from the Script Center or the Journal, the
following error may be encountered:
SQL10012N - An unexpected operating system error was received while
loading the specified library "/udbprod/db2as/sqllib/function/unfenced/
db2scdar!ScheduleInfoOpenScan". SQLSTATE=42724.
This is caused by a bug in the Solaris runtime linker. To correct this
problem, apply the following patches:
103627-06 for Solaris 2.5
105490-06 (107733 makes 105490 obsolete) for Solaris 2.6
------------------------------------------------------------------------
19.15 Help for the DPREPL.DFT File
In the Control Center, in the help for the Replication page of the Tool
Settings notebook, step 5d says:
Save the file into the working directory for the
Control Center (for example, SQLLIB\BIN) so that
the system can use it as the default file.
Step 5d should say:
Save the file into the working directory for the
Control Center (SQLLIB\CC) so that
the system can use it as the default file.
------------------------------------------------------------------------
19.16 Online Help for the Control Center Running as an Applet
When the Control Center is running as an applet, the F1 key only works in
windows and notebooks that have infopops.
You can press the F1 key to bring up infopops in the following components:
* DB2 Universal Database for OS/390
* The wizards
In the rest of the Control Center components, F1 does not bring up any
help. To display help for the other components, please use the Help push
button, or the Help pull-down menu.
------------------------------------------------------------------------
19.17 Running the Control Center in Applet Mode (Windows 95)
An attempt to open the Script Center may fail if an invalid user ID and
password are specified. Ensure that a valid user ID and password are
entered when signing on to the Control Center.
------------------------------------------------------------------------
Data Warehouse Center
* When creating editioned SQL steps, based on usage, you might want to
consider creating a non-unique index on the edition column to speed
the deletion of editions. Consider this for large warehouse
tables only, since the performance of inserts can be affected when
inserting a small number of rows.
* In the Process Model window, if you change a source or target, the
change that you made is automatically saved immediately. If you make
any other change, such as adding a step, you must explicitly save the
change to make the change permanent. To save the change, click Process
--> Save.
* You can specify up to 254 characters in the Description field of
notebooks in the Data Warehouse Center. This maximum replaces the
maximum lengths specified in the online help.
* You cannot successfully run a Sample Contents request that uses the
AS/400 agent on a flat file source. Although you can create a flat
file source and attempt to use an AS/400 agent to issue a
sampleContent request, the request will fail.
* You might receive an error when you run Sample Contents on a warehouse
target in the process modeler. This error is related to the
availability of a common agent site to warehouse source, warehouse
target, and step in a process. The list of available agent sites for a
step is obtained from the intersection of the warehouse source IR
agent sites, the warehouse target IR agent sites, and the agent sites
available for this particular step (The steps are selected in the last
page of the agent sites properties notebook). For example, you want to
view the Sample Contents for a process that runs the FTP Put program
(VWPRCPY). The step used in the process must be selected for the agent
site in the agent site definition. When you run Sample Contents
against the Target file, the first agent site on the selected list is
usually used. However, database maintenance operations might affect
the order of the agent sites listed. Sample Contents will fail if the
agent site selected does not reside on the same system as the source
or target file.
* If you import a tag language file larger than about 500 KB from the
Data Warehouse Center, the import utility (iwh2imp2.exe) might hang.
If you notice that iwh2imp2 is not using much CPU, it has probably
hung; end the iwh2imp2 process and try the import again from the DOS
command line.
* When you try to edit the Create DDL SQL statement for a target table
for a step in development mode, you see the following misleading
message: "Any change to the Create DDL SQL statement will not be
reflected on the table definition or actual physical table. Do you
want to continue?"
The change will be reflected in the actual physical table. Ignore the
message and continue changing the Create DDL statement.
The corrected version of this message for steps in development mode
should read as follows: "Any change to the Create DDL SQL statement
will not be reflected in the table definition. Do you want to
continue?"
For steps in test or production mode, the message is correct. The Data
Warehouse Center will not change the physical target table that was
created when you promoted the step to test mode.
* If you want to migrate Visual Warehouse metadata synchronization
business views to the Data Warehouse Center, promote the business
views to production status before you migrate the warehouse control
database. If the business views are in production status, their
schedules are migrated to the Data Warehouse Center. If the business
views are not in production status, they will be migrated in test
status without their schedules. You cannot promote the migrated steps
to production status. You must create the synchronization steps again
in the Data Warehouse Center and delete the migrated steps.
* When the Data Warehouse Center generates the target table for a step,
it does not generate a primary key for the target table. Some of the
transformers, such as Moving Average, use the generated table as a
source table and also require that the source table have a primary
key. Before you use the generated table with the transformer, define
the primary key for the table by right-clicking the table in the DB2
Control Center and clicking Alter.
* To access Microsoft SQL Server on Windows NT using the Merant ODBC
drivers, verify that the system path contains the sqllib\odbc32
directory.
* When you define a warehouse source or warehouse target for an OS/2
database, type the database name in uppercase letters.
* The DB2 Control Center or the Command Line Processor might indicate
that the warehouse control database is in an inconsistent state. This
state is expected because it indicates that the warehouse server did
not commit its initial startup message to the warehouse logger.
* In the data warehousing sample contained in the TBC_MD database, you
cannot use SQL Assist to change the SQL in the Select Scenario SQL
step, because the SQL was edited after it was generated by SQL Assist.
* To use the FormatDate function, click Build SQL on the SQL Statement
page of the Properties notebook for an SQL step.
The output of the FormatDate function is of data type varchar(255).
You cannot change the data type by selecting Date, Time, or Date/Time
from the Category list on the Function Parameters - FormatDate page.
* On AIX and the Solaris Operating Environment, the installation process
sets the language for import and export to the information catalog,
and export to the OLAP Integration Server. If you want to use these
functions in a language other than the language set during
installation, create the following soft link by entering the following
command on one line:
On AIX:
/usr/bin/ln -sf /usr/lpp/db2_07_01/msg/locale/flgnxolv.str /usr/lpp/db2_07_01/bin/flgnxolv.str
where locale is the locale name of the language in xx_yy format.
On the Solaris Operating Environment:
/usr/bin/ln -sf /opt/IBMdb2/V7.1/msg/locale/flgnxolv.str /opt/IBMdb2/V7.1/bin/flgnxolv.str
where locale is the locale name of the language in xx_yy format.
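For illustration, the AIX form of this setup can be scripted. The sketch below is only a stand-in: it uses a scratch directory in place of /usr/lpp/db2_07_01 and a hypothetical fr_FR locale so that it can be tried without a DB2 installation.

```shell
# Sketch of the locale soft-link setup. DB2DIR stands in for the real
# install path (/usr/lpp/db2_07_01 on AIX); fr_FR is a placeholder locale.
DB2DIR=$(mktemp -d)
mkdir -p "$DB2DIR/msg/fr_FR" "$DB2DIR/bin"
touch "$DB2DIR/msg/fr_FR/flgnxolv.str"   # message file shipped per locale
# Point the bin copy at the chosen locale's message file:
ln -sf "$DB2DIR/msg/fr_FR/flgnxolv.str" "$DB2DIR/bin/flgnxolv.str"
ls -l "$DB2DIR/bin/flgnxolv.str"
```

On a real system, substitute the actual install path and your locale for the placeholders.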
* When you use the Update the value in the key column option of the
Generate Key Table transformer, the transformer updates only those
rows in the table that do not have key values (that is, the key
values are null). If the starting key value comes from a column
other than the designated key column, and you select the maximum
value of that column as the starting value (which is incremented to
create the values in the key column), duplicate keys might be
generated.
When additional rows are inserted into the table, the key values are
null until you run the transformer again. When you run the transformer
again using the update option and the same non-key column that you
specified in the previous run, and if the maximum value of the non-key
column in the newly inserted rows is less than the maximum value used
in the previous run, then duplicate key values will be generated.
To avoid this problem, use one of the following approaches:
o Use the designated key column as the source of the starting key
value when you use the update option. This approach ensures that
the keys will always be in sequence and duplicates will not be
created.
o After the initial run of the transformer, use the Replace all
values option to create the keys for all the rows again.
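The overlap can be seen with toy numbers (all values below are invented): run 1 starts from a non-key maximum of 100 and assigns keys 100 through 102; the newly inserted rows have a lower non-key maximum of 95, so run 2 starts from 95 and climbs back into the range that run 1 already issued.

```shell
# Toy illustration of the duplicate-key risk; all numbers are invented.
start1=100; rows1=3
last1=$((start1 + rows1 - 1))   # run 1 assigns keys 100..102
start2=95;  rows2=10
last2=$((start2 + rows2 - 1))   # run 2 assigns keys 95..104
# The two ranges overlap, so keys 100..102 are issued twice:
if [ "$start2" -le "$last1" ] && [ "$last2" -ge "$start1" ]; then
    echo "duplicate keys: $start1..$last1 issued twice"
fi
```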
* The warehouse server does not maintain connections to local or remote
databases when the DB2 server that manages the databases is stopped
and restarted. If you stop and restart DB2, then stop and restart the
warehouse services as well.
* When you install the DB2 Administration Client and the Data
Warehousing Tools to set up a Data Warehouse Center administrative
client on a different workstation from the one that contains the
warehouse server, you must add the TCP/IP port number at which the
warehouse server workstation is listening to the services file for the
client workstation. Add an entry into the services file as follows:
vwkernel 11000/tcp
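As a sketch, the entry can be appended as follows. This operates on a scratch copy rather than the real services file, whose location varies by platform, so it can be tried safely.

```shell
# Sketch: add the warehouse kernel port to a client services file.
# SERVICES points at a scratch file here; on a real client, edit the
# platform's actual services file instead.
SERVICES=$(mktemp)
grep -q '^vwkernel' "$SERVICES" || printf 'vwkernel 11000/tcp\n' >> "$SERVICES"
grep '^vwkernel' "$SERVICES"
```

The port number 11000 matches the example entry above; use whatever port the warehouse server workstation is actually listening on.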
* When you define a warehouse source for a DB2 for VM database, which is
accessed through a DRDA gateway, there are restrictions on the use of
CLOB and BLOB data types:
o You cannot use the Sample Contents function to view data of CLOB
and BLOB data types.
o You cannot use columns of CLOB and BLOB data types with an SQL
step.
This restriction is a known restriction on the DB2 for VM Version 5.2
server in which LOB objects cannot be transmitted using DRDA to a DB2
Version 7.1 client.
* When you define a DB2 for VM or DB2 for VSE target table in the Data
Warehouse Center, do not select the Grant to public check box. The
GRANT command syntax that the Data Warehouse Center generates is not
supported on DB2 for VM and DB2 for VSE.
* To enable delimited identifier support for Sybase and Microsoft SQL
Server on Windows NT: select the Enable Quoted Identifiers check box
in the Advanced page of the ODBC Driver Setup notebook.
To enable delimited identifier support for Sybase on UNIX, edit the
Sybase data source in the .odbc.ini file to include the connect
attribute EQI=1.
------------------------------------------------------------------------
20.1 Data Warehouse Center Publications
20.1.1 Data Warehouse Center Application Integration Guide
In Chapter 8. Information Catalog Manager object types, the directory where
you can find the .TYP files, which include the tag language for defining an
object type, has been changed to \SQLLIB\DGWIN\TYPES.
20.1.2 Data Warehouse Center Administration Guide
* The Data Warehouse Center troubleshooting information has moved to the
DB2 Troubleshooting Guide.
* In "Chapter 5. Defining and running processes", section "Starting a
step from outside the Data Warehouse Center", it should be noted that
JDK 1.1.8 or later is required on the warehouse server workstation and
the agent site if you start a step that has a double-byte name.
* On page 180, section "Defining values for a Submit OS/390 JCL
jobstream (VWPMVS) program," step 8 states that you must define a
.netrc file in the same directory as the JES file. Instead, the
program creates the .netrc file for you. If the file does not exist,
the program creates it in the home directory. If a .netrc file
already exists in the home directory, the program renames the
existing file and creates a new one. When the program finishes
processing, it deletes the .netrc file that it created and renames
the original file back to .netrc.
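The lifecycle described above can be sketched with a scratch home directory. The backup name .netrc.bak and the file contents are assumptions for illustration; the notes do not say what name the program uses for the renamed file.

```shell
# Sketch of the VWPMVS .netrc handling, in a scratch directory.
# The .netrc.bak name is an assumed placeholder for the rename target.
HOME_DIR=$(mktemp -d)
printf 'original contents\n' > "$HOME_DIR/.netrc"   # user's existing file
mv "$HOME_DIR/.netrc" "$HOME_DIR/.netrc.bak"        # program renames it
printf 'machine host login id password pw\n' > "$HOME_DIR/.netrc"  # program's file
# ... the JCL jobstream would be submitted via FTP here ...
rm "$HOME_DIR/.netrc"                               # program deletes its file
mv "$HOME_DIR/.netrc.bak" "$HOME_DIR/.netrc"        # original restored
cat "$HOME_DIR/.netrc"
```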
* In the Data warehousing sample appendix, section "Viewing and
modifying the sample metadata", the GEOGRAPHIES table should be
included in the list of source tables.
* In the Data warehousing sample appendix, section "Promoting the
steps", in the procedure for promoting steps to production mode, the
following statement is incorrect because the target table was created
when you promoted the step to test mode:
The Data Warehouse Center starts to create the target table,
and displays a progress window.
* On Microsoft Windows NT and Windows 2000, the Data Warehouse Center
logs events to the system event log. The Event ID corresponds to the
Data Warehouse Center message number. For information about the Data
Warehouse Center messages, refer to the Message Reference.
* The example in Figure 20 on page 315 has an error. The following
command is correct:
"C:\IS\bin\olapicmd" < "C:\IS\Batch\my_script.script" >
"C:\IS\Batch\my_script.log"
The double quotation marks around "C:\IS\bin\olapicmd" are necessary
if the name of a directory in the path contains a blank, such as
Program Files.
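The same quoting rule applies in any shell: an unquoted path containing a blank is split into separate words. A small POSIX-shell sketch (all paths invented) shows the quoted form working:

```shell
# Why the quotes matter: the blank in "Program Files" would split an
# unquoted path into two arguments. Paths below are invented.
dir="$(mktemp -d)/Program Files"
mkdir -p "$dir"
printf 'one line\n' > "$dir/my_script.script"
# Quoted, the whole path is one argument and the file is found:
wc -l < "$dir/my_script.script"
```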
* In "Appendix F. Using Classic Connect with the Data Warehouse Center",
the section "Installing the CROSS ACCESS ODBC driver" on page 388 has
been replaced with the following information:
Install the CROSS ACCESS ODBC driver by performing a custom install
of the DB2 Warehouse Manager Version 7, and selecting the Classic
Connect Drivers component. The driver is not installed as part of
a typical installation of the DB2 Warehouse Manager.
The CROSS ACCESS ODBC driver will be installed in the ODBC32 subdirectory
of the SQLLIB directory. After the installation is complete, you must manually
add the path for the driver (for example, C:\Program Files\SQLLIB\ODBC32)
to the PATH system environment variable. If you have another version
of the CROSS ACCESS ODBC driver already installed, place the ...\SQLLIB\ODBC32\
path before the path for the other version. The operating system will use
the first directory in the path that contains the CROSS ACCESS ODBC driver.
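The ordering requirement amounts to putting the new directory first in the search string. A sketch (the existing PATH value is invented; on a real system you edit the PATH system environment variable through the Windows System control panel):

```shell
# Sketch: prepend the new driver directory so it is searched first.
NEW_DIR='C:\Program Files\SQLLIB\ODBC32'
PATH_VAR='C:\OldDriver\ODBC32;C:\WINNT'   # invented existing PATH
PATH_VAR="$NEW_DIR;$PATH_VAR"             # new directory goes first
echo "$PATH_VAR"
```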
* In "Appendix G. Data Warehouse Center environment structure" on page
401, there is an incorrect entry in the table. C:\Program
Files\SQLLIB\ODBC32 is not added to the PATH environment variable. The
only update to the PATH environment variable is C:\Program
Files\SQLLIB\BIN.
20.1.3 Data Warehouse Center Messages
Data Warehouse Center message DWC3778E should read as follows: "Cannot
delete a Data Warehouse Center default Data Warehouse Center Program
Group."
Data Warehouse Center message DWC3806E should read as follows: "Step being
created or updated is not associated with either a source resource or Data
Warehouse Center program for population."
Data Warehouse Center message DWC6119E should read as follows: "The
warehouse client failed to receive a response from the warehouse server."
20.1.4 Data Warehouse Center Online Help
* A table or view must be defined for replication using the DB2 Control
Center before it can be used as a replication source in the Data
Warehouse Center.
* Before running the Essbase VWPs with the AS/400 agent, ARBORLIB and
ARBORPATH need to be set as *sys environment variables. To set these,
the user ID must have *jobctl authority. These environment variables
need to point to the library where Essbase is installed.
* Publish Data Warehouse Center Metadata window and associated
properties window: In step 10 of the task help, an example states
that if you specify a limit value of 1 (Limit the levels of
objects in the tree) and publish a process, only 1 step from that
process is published and displayed. This example is not correct in
all situations. In step 8, in the second bulleted item, the first
statement is incorrect. It should read "Click at the column level
to generate a transformation object between an information catalog
source column and a target column."
* Any references in the online help to "foreign keys" should read
"warehouse foreign keys."
* Any references in the online help to the "Define Replication notebook"
should read "replication step notebook."
* Importing a tag language online help: In the bulleted list showing
common import errors, one item in the list is "Importing a tag
language file that was not exported properly". This item is not
applicable to the list of common input errors.
* In the "Add data" topic of the online help, the links to the "Adding
source tables to a process" and "Adding target tables to a process"
topics are broken. You can find these topics in the help index.
------------------------------------------------------------------------
20.2 Warehouse Control Database
This section covers the following topics related to the management of
warehouse control databases:
* The default warehouse control database
* The Warehouse Control Database Management window
* Changing the active warehouse control database
* Creating and initializing a warehouse control database
* Migrating IBM Visual Warehouse control databases for use with the DB2
Version 7.1 Data Warehouse Center
20.2.1 The default warehouse control database
During a typical DB2 installation on Windows NT or Windows 2000, DB2
creates and initializes a default warehouse control database for the Data
Warehouse Center if there is no active warehouse control database
identified in the Windows NT registry. Initialization is the process in
which the Data Warehouse Center creates the control tables that are
required to store Data Warehouse Center metadata.
The default warehouse control database is named DWCTRLDB. When you log on,
the Data Warehouse Center specifies DWCTRLDB as the warehouse control
database by default. To see the name of the warehouse control database that
will be used, click the Advanced button on the Data Warehouse Center Logon
window.
20.2.2 The Warehouse Control Database Management window
The Warehouse Control Database Management window is installed during a
typical DB2 installation on Windows NT or Windows 2000. You can use this
window to change the active warehouse control database, create and
initialize new warehouse control databases, and migrate warehouse control
databases that have been used with IBM Visual Warehouse. The following
sections discuss each of these activities.
Stop the warehouse server before using the Warehouse Control Database
Management window.
20.2.3 Changing the active warehouse control database
If you want to use a warehouse control database other than the active
warehouse control database, use the Warehouse Control Database Management
window to register the database as the active control database. If you
specify a name other than the active warehouse control database when you
log on to the Data Warehouse Center, you will receive an error that states
that the database that you specified does not match the database specified
by the warehouse server.
To register the database:
1. Click Start --> Programs --> IBM DB2 --> Warehouse Control Database
Management.
2. In the New control database field, type the name of the control
database that you want to use.
3. In the Schema field, type the name of the schema to use for the
database.
4. In the User ID field, type the user ID that is required to
access the database.
5. In the Password field, type the password for the user ID.
6. In the Verify Password field, type the password again.
7. Click OK.
The window remains open. The Messages field displays messages that
indicate the status of the registration process.
8. After the process is complete, close the window.
20.2.4 Creating and initializing a warehouse control database
If you want to create a warehouse control database other than the default,
you can create it during the installation process or after installation by
using the Warehouse Control Database Management window. You can use the
installation process to create a database on the same workstation as the
warehouse server or on a different workstation.
To change the name of the warehouse control database that is created during
installation, you must perform a custom installation and change the name on
the Define a Local Warehouse Control Database window. The installation
process will create the database with the name that you specify, initialize
the database for use with the Data Warehouse Center, and register the
database as the active warehouse control database.
To create a warehouse control database during installation on a workstation
other than where the warehouse server is installed, select Warehouse Local
Control Database during a custom installation. The installation process
will create the database. After installation, you must then use the
Warehouse Control Database Management window on the warehouse server
workstation by following the steps in 20.2.3, Changing the active warehouse
control database. Specify the database name that you specified during
installation. The database will be initialized for use with the Data
Warehouse Center and registered as the active warehouse control database.
To create and initialize a warehouse control database after the
installation process, use the Warehouse Control Database Management window
on the warehouse server workstation. If the new warehouse control database
is not on the warehouse server workstation, you must create the database
first and catalog it on the warehouse server workstation. Then follow
the steps in 20.2.3, Changing the active warehouse control database,
specifying the name of the database that you created.
When you log on to the Data Warehouse Center, click the Advanced button and
type the name of the active warehouse control database.
20.2.5 Migrating IBM Visual Warehouse control databases
DB2 Universal Database Quick Beginnings for Windows provides information
about how the active warehouse control database is migrated during a
typical install of DB2 Universal Database Version 7.1 on Windows NT and
Windows 2000. If you have more than one warehouse control database to be
migrated, you must use the Warehouse Control Database Management window to
migrate the additional databases. Only one warehouse control database can
be active at a time. If the last database that you migrate is not the one
that you intend to use when you next log on to the Data Warehouse Center,
you must use the Warehouse Control Database Management window to register
the database that you intend to use.
------------------------------------------------------------------------
20.3 Setting up and running replication with Data Warehouse Center
1. Setting up and running replication with Data Warehouse Center requires
that the Replication Control tables exist on both the Warehouse
Control database and the Warehouse Target databases.
Replication requires that the Replication Control tables exist on both
the Control and Target databases. The Replication Control tables are
found in the ASN schema and they all start with IBMSNAP. The
Replication Control tables are automatically created for you on a
database when you define a Replication Source via the Control Center,
if the Control tables do not already exist. Note that the Control
tables must also exist on the Target DB. To get a set of Control
tables created on the target DB, you can either create a Replication
Source using the Control Center and then remove the Replication
Source, leaving the Control tables in place, or you can use the DJRA
(DataJoiner Replication Administration) product to define just the
control tables.
2. Installing and Using the DJRA
If you want or need to use the DJRA to define the control tables, you
will need to install it first. The DJRA ships as part of DB2. To
install the DJRA, go to the d:\sqllib\djra directory (where d: is the
drive on which DB2 is installed) and click the djra.exe package. This
installs the DJRA on your system. To access the DJRA after that, on
Windows NT,
from the start menu, click on the DB2 for Windows NT selection, then
select Replication, then select Replication Administration Tools. The
DJRA interface is a bit different from usual NT applications. For each
function that it performs, it creates a set of SQL to be run, but does
not execute it. The user must manually save the generated SQL and then
select the Execute SQL function to run the SQL.
3. Setup to Run Capture and Apply
For the system that you are testing on, see the Replication Guide and
Reference Manual for instructions on configuring your system to run
the Capture and Apply program. You must bind the Capture and Apply
programs on each database where they will be used. Note that you do
NOT need to create a password file. The Data Warehouse Center will
automatically create a password file for the Replication subscription.
4. Define a Replication Source in the Control Center
Use the Control Center to define a Replication Source. The Data
Warehouse Center supports five types of replication: user copy,
point-in-time, base aggregate, change aggregate, and staging tables
(CCD tables). The types of User Copy, Point-in-Time, and Condensed
Staging table require that the replication source table have a primary
key. The other replication types do not. Keep this in mind when
choosing an input table to be defined as a Replication Source. A
Replication Source is actually the definition of the original source
table and a created CD (Change Data) table to hold the data changes
before they are moved to the target table. When you define a
Replication Source in the Control Center, a record is written out to
ASN.IBMSNAP_REGISTER to define the source and its CD table. The CD
table is created at the same time, but initially it contains no data.
When you define a Replication Source you can choose to include only
the after-image columns or both the before and after-image columns.
These choices are made via checkboxes in the Control Center
Replication Source interface. Your selection of before and after image
columns is then translated into columns created in the new CD table.
In the CD table, after-image columns have the same names as their
original source table columns. Before-image columns have an 'X' as
the first character of the column name.
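As a small sketch of this naming convention (the column name SALES is invented; in DB2 replication the before-image prefix defaults to 'X'):

```shell
# Sketch: CD-table column names derived from a source column name.
src_col='SALES'
after_col="$src_col"     # after-image column keeps the source name
before_col="X$src_col"   # before-image column gets the 'X' prefix
echo "$after_col $before_col"
```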
5. Import the Replication Source into the Data Warehouse Center
Once you have created the Replication Source in the Control Center,
you can import it into the Data Warehouse Center. When importing the
source, be sure to click on the checkbox that says "Tables that can be
replicated". This tells the Data Warehouse Center to look at the
records in the ASN.IBMSNAP_REGISTER table to see what tables have been
defined as Replication Sources.
6. Define a Replication Step in the Data Warehouse Center
On the process modeler, select one of the five Replication types: base
aggregate, change aggregate, point-in-time, staging table, or user
copy. If you want to define a base aggregate or change aggregate
replication type, see the section below about How to setup a Base
Aggregate or Change Aggregate replication in the Data Warehouse
Center. Select an appropriate Replication Source for the Replication
type. As mentioned above, the replication types of: user copy,
point-in-time, and condensed staging tables require that the input
source have a primary key. Connect the Replication Source to the
Replication Step. Open the properties on the Replication Step. Go to
the Parameters tab. Select the desired columns. Select the checkbox to
have a target table created. Select a Warehouse target. Go to the
Processing Options and fill in the parameters. Press OK.
7. Start the Capture Program
In a DOS window, enter: ASNCCP source-database COLD PRUNE
The COLD parameter indicates a COLD start and will delete any existing
data in the CD tables. The PRUNE parameter tells the capture program
to maintain the IBMSNAP_PRUNCNTL table. Leave the Capture program
running. When it comes time to quit, you can stop it with a Ctrl-Break
in its DOS window. Be aware that you need to start the Capture program
before you start the Apply program.
8. Replication Step Promote-To-Test
Back in the Data Warehouse Center, for the defined Replication Step,
promote the step to Test mode. This causes the Replication
Subscription information to be written out to the Replication Control
tables. You will see records added to IBMSNAP_SUBS_SET,
IBMSNAP_SUBS_MEMBR, IBMSNAP_SUBS_COLS, and IBMSNAP_SUBS_EVENT to
support the subscription. The target table will also be created in the
target database. If the replication type is user copy, point-in-time,
or condensed staging table, a primary key is required on the target
table. Go to the Control Center to create the Primary Key. Note that
some replication target tables also require unique indexes on various
columns. Code exists in the Data Warehouse Center to create these
unique indexes when the table is created so that you do NOT have to
create these yourself. Note though that if you define a primary key in
the Control Center and a unique index already exists for that column
then you will get a WARNING message when you create the primary key.
Ignore this warning message.
9. Replication Step Promote-To-Production
No replication subscription changes are made during
Promote-to-Production. This is strictly a Data Warehouse Center
operation like any other step.
10. Run a Replication Step
After a Replication Step has been promoted to Test mode, it can be
run. Do an initial run before making any changes to the source table.
Go to the Work-in-Progress (WIP) section and select the Replication
Step. Run it. When the step is run, the event record in the
IBMSNAP_SUBS_EVENT table is updated and the subscription record in
IBMSNAP_SUBS_SET is posted to be active. The subscription should run
immediately. When the subscription runs, the Apply program is called
by the Agent to process the active subscriptions. If you update the
original source table after that point, then the changed data will be
moved into the CD table. If you run the replication step following
that, such that the Apply program runs again, the changed data will be
moved from the CD table to the target table.
11. Replication Step Demote-To-Test
No replication subscription changes are made during Demote-to-Test.
This is strictly a Data Warehouse Center operation like any other
step.
12. Replication Step Demote-to-Development
When you demote a Replication Step to development, the subscription
information is removed from the Replication Control tables. No records
will remain in the Replication Control tables for that particular
subscription after the Demote-to-Development finishes. The target
table will also be dropped at this point. The CD table remains in
place since it belongs to the definition of the Replication Source.
13. How to setup a Base Aggregate or Change Aggregate Replication in the
Data Warehouse Center.
o Input table. Choose an input table that can be used with a GROUP
BY statement. For our example we will use an Input table that has
these columns: SALES, REGION, DISTRICT.
o Replication step. Choose Base or Change Aggregate. Open the Step
properties.
+ When the Apply program runs, it needs to execute a SELECT
statement that looks like this:
SELECT SUM(SALES), REGION, DISTRICT GROUP BY REGION, DISTRICT
Therefore, in the output columns that you select, you need to
choose REGION, DISTRICT, and one calculated column,
SUM(SALES). Use the Add Calculated Column button and, for our
example, enter SUM(SALES) in the Expression field. Save it.
+ Where clause. Replication requires that when you set up a
Replication step that needs only a GROUP BY clause, you must
also provide a dummy WHERE clause, such as 1=1. Do NOT include
the word "WHERE" in the WHERE clause. In the Data Warehouse
Center GUI for Base Aggregate, there is only a WHERE clause
entry field. In this field, for our example, enter:
1=1 GROUP BY REGION, DISTRICT
For Change Aggregate, there are both a WHERE clause and a
GROUP BY entry field. In the WHERE clause field, enter 1=1;
in the GROUP BY field, enter GROUP BY REGION, DISTRICT.
+ Setup the rest of the step properties, as you would do for
any other type of Replication. Press OK to save the step and
create the target table object.
o Open the target table object. You now need to rename the output
column for the calculated column expression to a valid column
name and you need to specify a valid data type for the column.
Save the target table object.
o Run Promote-to-Test on the Replication step. The target table
will be created. It does NOT need a primary key.
o Run the step like any other Replication step.
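The dummy WHERE clause trick in step 13 amounts to string concatenation: Apply appends whatever you typed after the word WHERE, so typing 1=1 followed by the GROUP BY yields a valid statement. A sketch (the table name CD_TABLE is invented):

```shell
# Sketch: how the GUI's WHERE field content lands in the final SQL.
where_field='1=1 GROUP BY REGION, DISTRICT'   # as typed for Base Aggregate
sql="SELECT SUM(SALES), REGION, DISTRICT FROM CD_TABLE WHERE $where_field"
echo "$sql"
```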
------------------------------------------------------------------------
20.4 Troubleshooting tips
* To turn on tracing for the Apply Program, set the Agent Trace value =
4 in the Warehouse Properties panel. The Agent turns on full tracing
for Apply when Agent Trace = 4.
* If you don't see any data in the CD table, then most likely either
the Capture program has not been started or you have not updated the
original source table to create some changed data.
* The mail server field of the Notification page of the Schedule
notebook is missing from the online help.
* The mail server needs to support ESMTP for the Data Warehouse Center
notification to work.
* In the Open the Work in Progress window help, click Warehouse -->
Work in Progress rather than Warehouse Center --> Work in Progress.
------------------------------------------------------------------------
20.5 Correction to RUNSTATS and REORGANIZE TABLE Online Help
The online help for these utilities states that the table that you want to
run statistics on, or that is to be reorganized, must be linked as both the
source and the target. However, because the step writes to the source, you
only need to link from the source to the step.
------------------------------------------------------------------------
20.6 Notification Page (Warehouse Properties Notebook and Schedule
Notebook)
On the Notification page of the Warehouse Properties notebook, the
statement:
The Sender entry field is initialized with the string <current user's logon ID>.
should be changed to:
The Sender entry field is initialized with the string <current logon user email address>.
On the Notification page of the Schedule notebook, the sender will be
initialized to what is set in the Warehouse Properties notebook. If nothing
is set, it is initialized to the current logon user e-mail address. If
there is no e-mail address associated with the logon user, the sender is
set to the logon user ID.
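The fallback order described above (the Warehouse Properties value, then the logon user's e-mail address, then the logon user ID) can be sketched as follows; the sample values are invented:

```shell
# Sketch of the sender fallback chain; sample values are invented.
properties_sender=''   # nothing set in the Warehouse Properties notebook
user_email=''          # logon user has no e-mail address
logon_id='db2admin'
sender="${properties_sender:-${user_email:-$logon_id}}"
echo "$sender"
```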
------------------------------------------------------------------------
20.7 Agent Module Field in the Agent Sites Notebook
The Agent Module field in the Agent Sites notebook provides the name of
the program that is run when the warehouse agent daemon spawns the
warehouse agent. Do not change the value in this field unless IBM
directs you to do so.
------------------------------------------------------------------------
20.8 Accessing DB2 Version 5 data with the DB2 Version 7.1 warehouse agent
DB2 Version 7.1 warehouse agents, as configured by the DB2 Version 7.1
install process, will support access to DB2 Version 6 and DB2 Version 7.1
data. If you need to access DB2 Version 5 data, you must take one of the
following two approaches:
* Migrate DB2 Version 5 servers to DB2 Version 6 or DB2 Version 7.1.
* Modify the agent configuration, on the appropriate operating system,
to allow access to DB2 Version 5 data.
DB2 Version 7.1 warehouse agents do not support access to data from DB2
Version 2 or any other previous versions.
20.8.1 Migrating DB2 Version 5 servers
For information about migrating DB2 Version 5 servers, see DB2 Universal
Database Quick Beginnings for your operating system.
20.8.2 Changing the agent configuration
The following information describes how to change the agent configuration
on each operating system. When you migrate the DB2 servers to DB2 Version 6
or later, remove the changes that you made to the configuration.
20.8.2.1 UNIX warehouse agents
To set up a UNIX warehouse agent to access data from DB2 Version 5 with
either CLI or ODBC access:
1. Install the DB2 Version 6 run-time client. You can obtain the run-time
client by selecting the client download from the following URL:
http://www.ibm.com/software/data/db2/udb/support
2. Update the warehouse agent configuration file so that the DB2INSTANCE
environment variable points to a DB2 Version 6 instance.
3. Catalog all databases in this DB2 Version 6 instance that the
warehouse agent is to access.
4. Stop the agent daemon process by issuing the kill command with the
agent daemon process ID. The agent daemon will then restart
automatically. You need root authority to kill the process.
20.8.2.2 Microsoft Windows NT, Windows 2000, and OS/2 warehouse agents
To set up a Microsoft Windows NT, Windows 2000, or OS/2 warehouse agent
to access data from DB2 Version 5:
1. Install DB2 Connect Enterprise Edition Version 6 on a workstation
other than where the DB2 Version 7.1 warehouse agent is installed.
DB2 Connect Enterprise Edition is included as part of DB2 Universal
Database Enterprise Edition and DB2 Universal Database Enterprise
Extended Edition. If Version 6 of either of these DB2 products is
installed, you do not need to install DB2 Connect separately.
Restriction:You cannot install multiple versions of DB2 on the same
Windows NT or OS/2 workstation. You can install DB2
Connect on another Windows NT workstation or on an OS/2
or UNIX workstation.
2. Configure the warehouse agent and DB2 Connect Version 6 for access to
the DB2 Version 5 data. For more information, see the DB2 Connect
User's Guide. The following steps are an overview of the steps that
are required:
a. On the DB2 Version 5 system, use the DB2 Command Line Processor
to catalog the Version 5 database that the warehouse agent is to
access.
b. On the DB2 Connect system, use the DB2 Command Line Processor to
catalog:
+ The TCP/IP node for the DB2 Version 5 system
+ The database for the DB2 Version 5 system
+ The DCS entry for the DB2 Version 5 system
c. On the warehouse agent workstation, use the DB2 Command Line
Processor to catalog:
+ The TCP/IP node for the DB2 Connect system
+ The database for the DB2 Connect system
For information about cataloging databases, see the DB2 Universal
Database Installation and Configuration Supplement.
3. At the warehouse agent workstation, bind the DB2 CLI package to each
database that is to be accessed through DB2 Connect.
The following DB2 commands give an example of binding to v5database, a
hypothetical DB2 version 5 database. Use the DB2 Command Line
Processor to issue the following commands. db2cli.lst and db2ajgrt are
located in the \sqllib\bnd directory.
db2 connect to v5database user userid using password
db2 bind db2ajgrt.bnd
db2 bind @db2cli.lst blocking all grant public
where userid is the user ID for v5database and password is the
password for the user ID.
An error occurs when db2cli.lst is bound to the DB2 Version 5
database. This error occurs because large objects (LOBs) are not
supported in this configuration. It does not affect the warehouse
agent's access to the DB2 Version 5 database.
Fixpak 14 for DB2 Universal Database Version 5, which will be available
in June 2000, is required for accessing DB2 Version 5 data through DB2
Connect. Refer to APAR number JR14507 in that fixpak.
------------------------------------------------------------------------
20.9 Accessing warehouse control databases
In a typical installation of DB2 Version 7.1 on Windows NT, a DB2 Version 7
warehouse control database is created along with the warehouse server. If
you have a Visual Warehouse warehouse control database, you must upgrade
the DB2 server containing the warehouse control database to DB2 Version 7.1
before the metadata in the warehouse control database can be migrated for
use by the DB2 Version 7.1 Data Warehouse Center. You must migrate to
Version 7.1 any warehouse control databases that you want to continue to
use. The metadata in your active warehouse control database is migrated to
Version 7.1 during the DB2 Version 7.1 install process. To migrate the
metadata in any additional warehouse control databases, use the Warehouse
Control Database Migration utility, which you start by selecting Start -->
Programs --> IBM DB2 --> Warehouse Control Database Management on Windows
NT. For information about migrating your warehouse control databases, see
DB2 Universal Database for Windows Quick Beginnings.
------------------------------------------------------------------------
20.10 Accessing sources and targets
The following tables list the version and release levels of the sources and
targets that the Data Warehouse Center supports.
Table 3. Version and release levels of supported IBM warehouse sources
   Source                                                     Version/Release
   IMS                                                        5.1
   DB2 Universal Database for Windows NT                      5.2 - 7.1
   DB2 Universal Database Enterprise-Extended Edition         5.2 - 7.1
   DB2 Universal Database for OS/2                            5.2 - 7.1
   DB2 Universal Database for AS/400                          3.7 - 4.5
   DB2 Universal Database for AIX                             5.2 - 7.1
   DB2 Universal Database for Solaris Operating Environment   5.2 - 7.1
   DB2 Universal Database for OS/390                          4.1 - 5.1.6
   DB2 DataJoiner                                             2.1.2
   DB2 for VM                                                 3.4 - 5.3.4
   DB2 for VSE                                                7.1
The following non-IBM warehouse sources are also supported:
   Source                 Windows NT       AIX
   Informix               7.2.2 - 8.2.1    7.2.4 - 9.2.0
   Oracle                 7.3.2 - 8.1.5    8.1.5
   Microsoft SQL Server   7.0
   Microsoft Excel        97
   Microsoft Access       97
   Sybase                 11.5             11.9.2
Table 4. Version and release levels of supported IBM warehouse targets
   Target                                                     Version/Release
   DB2 Universal Database for Windows NT                      6 - 7
   DB2 Universal Database Enterprise-Extended Edition         6 - 7
   DB2 Universal Database for OS/2                            6 - 7
   DB2 Universal Database for AS/400                          3.1 - 4.5
   DB2 Universal Database for AIX                             6 - 7
   DB2 Universal Database for Solaris Operating Environment   6 - 7
   DB2 Universal Database for OS/390                          4.1 - 6
   DB2 DataJoiner                                             2.1.2
   DB2 DataJoiner/Oracle                                      8
   DB2 for VM                                                 3.4 - 5.3.4
   DB2 for VSE                                                3.2, 7.1
   CA/400                                                     3.1.2
------------------------------------------------------------------------
20.11 Accessing DB2 Version 5 information catalogs with the DB2 Version 7.1
Information Catalog Manager
The DB2 Version 7.1 Information Catalog Manager subcomponents, as
configured by the DB2 Version 7.1 install process, support access to
information catalogs stored in DB2 Version 6 and DB2 Version 7.1 databases.
You can modify the configuration of the subcomponents to access information
catalogs that are stored in DB2 Version 5 databases. The DB2 Version 7.1
Information Catalog Manager subcomponents do not support access to data
from DB2 Version 2 or any other previous versions.
To set up the Information Catalog Administrator, the Information Catalog
User, and the Information Catalog Initialization Utility to access
information catalogs that are stored in DB2 Version 5 databases:
1. Install DB2 Connect Enterprise Edition Version 6 on a workstation
other than where the DB2 Version 7.1 Information Catalog Manager is
installed.
DB2 Connect Enterprise Edition is included as part of DB2 Universal
Database Enterprise Edition and DB2 Universal Database Enterprise
Extended Edition. If Version 6 of either of these DB2 products is
installed, you do not need to install DB2 Connect separately.
Restriction: You cannot install multiple versions of DB2 on the same
             Windows NT or OS/2 workstation. You can install DB2
             Connect on another Windows NT workstation, or on an OS/2
             or UNIX workstation.
2. Configure the Information Catalog Manager and DB2 Connect Version 6
for access to the DB2 Version 5 data. For more information, see the
DB2 Connect User's Guide. The following steps are an overview of the
steps that are required:
a. On the DB2 Version 5 system, use the DB2 Command Line Processor
to catalog the Version 5 database that the Information Catalog
Manager is to access.
b. On the DB2 Connect system, use the DB2 Command Line Processor to
catalog:
+ The TCP/IP node for the DB2 Version 5 system
+ The database for the DB2 Version 5 system
+ The DCS entry for the DB2 Version 5 system
c. On the workstation with the Information Catalog Manager, use the
DB2 Command Line Processor to catalog:
+ The TCP/IP node for the DB2 Connect system
+ The database for the DB2 Connect system
For information about cataloging databases, see the DB2 Universal
Database Installation and Configuration Supplement.
3. At the workstation with the Information Catalog Manager, bind the DB2
CLI package to each database that is to be accessed through DB2
Connect.
The following DB2 commands give an example of binding to v5database, a
hypothetical DB2 version 5 database. Use the DB2 Command Line
Processor to issue the following commands. db2cli.lst and db2ajgrt are
located in the \sqllib\bnd directory.
db2 connect to v5database user userid using password
db2 bind db2ajgrt.bnd
db2 bind @db2cli.lst blocking all grant public
where userid is the user ID for v5database and password is the
password for the user ID.
An error occurs when db2cli.lst is bound to the DB2 Version 5
database. This error occurs because large objects (LOBs) are not
supported in this configuration. It does not affect the Information
Catalog Manager's access to the DB2 Version 5 database.
Fixpak 14 for DB2 Universal Database Version 5, which will be available
in June 2000, is required for accessing DB2 Version 5 data through DB2
Connect. Refer to APAR number JR14507 in that fixpak.
------------------------------------------------------------------------
20.12 Additions to supported non-IBM database sources
The following table contains additions to the supported non-IBM database
sources:
   Database   Operating system       Database client requirements
   Informix   AIX                    Informix-Connect and ESQL/C version
                                     9.1.4 or later
   Informix   Solaris Operating      Informix-Connect and ESQL/C version
              Environment            9.1.3 or later
   Informix   Windows NT             Informix-Connect for Windows Platforms
                                     2.x or Informix-Client Software
                                     Developer's Kit for Windows Platforms
                                     2.x
   Oracle 7   AIX                    Oracle7 SQL*Net and the Oracle7 SQL*Net
                                     shared library (built by the genclntsh
                                     script)
   Oracle 7   Solaris Operating      Oracle7 SQL*Net and the Oracle7 SQL*Net
              Environment            shared library (built by the genclntsh
                                     script)
   Oracle 7   Windows NT             The appropriate DLLs for the current
                                     version of SQL*Net, plus OCIW32.DLL.
                                     For example, SQL*Net 2.3 requires
                                     ORA73.DLL, CORE35.DLL, NLSRTL32.DLL,
                                     CORE350.DLL, and OCIW32.DLL.
   Oracle 8   AIX                    Oracle8 Net8 and the Oracle8 SQL*Net
                                     shared library (built by the genclntsh8
                                     script)
   Oracle 8   Solaris Operating      Oracle8 Net8 and the Oracle8 SQL*Net
              Environment            shared library (built by the genclntsh8
                                     script)
   Oracle 8   Windows NT             To access remote Oracle8 database
                                     servers at a level of version 8.0.3 or
                                     later, install Oracle Net8 Client
                                     version 7.3.4.x, 8.0.4, or later. On
                                     Intel systems, install the appropriate
                                     DLLs for the Oracle Net8 Client (such
                                     as Ora804.DLL, PLS804.DLL, and OCI.DLL)
                                     on your path.
   Sybase     AIX                    In a non-DCE environment (ibsyb15 ODBC
                                     driver): the libct library. In a DCE
                                     environment (ibsyb1115 ODBC driver):
                                     the Sybase 11.1 client library libct_r.
   Sybase     Solaris Operating      In a non-DCE environment (ibsyb15 ODBC
              Environment            driver): the libct library. In a DCE
                                     environment (ibsyb1115 ODBC driver):
                                     the Sybase 11.1 client library libct_r.
   Sybase     Windows NT             Sybase Open Client-Library 10.0.4 or
                                     later and the appropriate Sybase
                                     Net-Library.
------------------------------------------------------------------------
20.13 Information Catalog Manager Initialization Utility
If you get the following message:
FLG0083E: You do not have a valid license for the IBM Information Catalog Manager
Initialization utility. Please contact your local software reseller or IBM
marketing representative.
You must purchase the DB2 Warehouse Manager or the IBM DB2 OLAP Server and
install the Information Catalog Manager component, which includes the
Information Catalog Initialization utility.
If you installed the DB2 Warehouse Manager or IBM DB2 OLAP Server and then
installed another Information Catalog Manager Administrator component
(using the DB2 Universal Database CD-ROM) on the same workstation, you
might have overwritten the Information Catalog Initialization utility. In
that case, from the \sqllib\bin directory, find the files createic.bak and
flgnmwcr.bak and rename them to createic.exe and flgnmwcr.exe respectively.
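For example, from a command prompt the renames could be done as follows
(this sketch assumes DB2 is installed under C:\sqllib; adjust the drive
and path to match your installation):

```shell
cd /d C:\sqllib\bin
ren createic.bak createic.exe
ren flgnmwcr.bak flgnmwcr.exe
```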
If you install additional Information Catalog Manager components from DB2
Universal Database, the components must be on a separate workstation from
where you installed the Data Warehouse Manager. For more information, see
Chapter 3, Installing Information Catalog Manager components, in the DB2
Warehouse Manager Installation Guide.
------------------------------------------------------------------------
20.14 Information Catalog Manager Administration Guide
Chapter 6. Exchanging metadata with other products: "Converting
MDIS-conforming metadata into a tag language file", page 97
You cannot issue the MDISDGC command from the MS-DOS command prompt. You
must issue the MDISDGC command from a DB2 command window. The first
sentence of the section, "Converting a tag language file into
MDIS-conforming metadata," also says you must issue the DGMDISC command
from the MS-DOS command prompt. You must issue the DGMDISC command from a
DB2 command window.
------------------------------------------------------------------------
DB2 Stored Procedure Builder
------------------------------------------------------------------------
21.1 Java 1.2 Support for the DB2 Stored Procedure Builder
The DB2 Stored Procedure Builder supports building Java stored procedures
using Java 1.2 functionality. In addition, the Stored Procedure Builder
supports bi-directional languages, such as Arabic and Hebrew, using the
bi-di support in Java 1.2.
This support is provided for Windows NT platforms only.
In order for the Stored Procedure Builder to recognize and use Java 1.2
functionality, Java 1.2 must be installed.
To install Java 1.2:
1. JDK 1.2.2 is available on the DB2 UDB CD under the DB2\bidi\NT
directory. ibm-inst-n122p-win32-x86.exe is the installer program, and
ibm-jdk-n122p-win32-x86.exe is the JDK distribution. Copy both files
to a temporary directory on your hard drive, then run the installer
program from there.
2. Install it under <DB2PATH>\java\Java12, where <DB2PATH> is the
installation path of DB2.
3. Do not select JDK/JRE as the System VM when prompted by the JDK/JRE
installation.
After Java 1.2 is installed successfully, start the Stored Procedure
Builder in the normal manner.
To execute Java stored procedures using JDK 1.2 support, set the database
server environment variable DB2_USE_JDK12 to TRUE using the following
command:
DB2SET DB2_USE_JDK12=TRUE
Also, set your JDK11_PATH configuration parameter to point to the directory
where your Java 1.2 support is installed. You set this path by using the
following command:
DB2 UPDATE DBM CFG USING JDK11_PATH path
where path is the directory in which Java 1.2 is installed.
To stop the use of Java 1.2, you can either uninstall the JDK/JRE from
<DB2PATH>\java\Java12, or simply rename the <DB2PATH>\java\Java12
subdirectory.
Important: Do not confuse <DB2PATH>\java\Java12 with <DB2PATH>\Java12.
<DB2PATH>\Java12 is part of the DB2 installation and includes JDBC support
for Java 1.2.
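Taken together, the server-side settings might be applied as follows. The
path C:\SQLLIB is only an example of <DB2PATH>; substitute your actual
installation path. The db2stop/db2start pair is included on the assumption
that an instance restart is needed for the configuration change to take
effect.

```shell
db2set DB2_USE_JDK12=TRUE
db2 update dbm cfg using JDK11_PATH C:\SQLLIB\java\Java12
db2stop
db2start
```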
------------------------------------------------------------------------
21.2 Remote Debugging of DB2 Stored Procedures
To use the remote debugging capability for stored procedures on the Intel
and UNIX platforms, you need to install the IBM Distributed Debugger. The
IBM Distributed Debugger is included on the Visual Age for Java
Professional Edition CD. The debugger client runs only on the Windows
platform. Supported server platforms include: Windows, AIX and Solaris. At
this time, only Java and C stored procedures can be debugged remotely.
Support for SQL procedures will be available at a later date.
To debug SQL procedures on the OS/390 platform, you must also have the IBM
C/C++ Productivity Tools for OS/390 R1 product. For more information on the
IBM C/C++ Productivity Tools for OS/390 R1, go to the following Web site:
http://www.ibm.com/software/ad/c390/pt/
------------------------------------------------------------------------
21.3 Building SQL Procedures on the Intel and UNIX Platforms
Before you can use the Stored Procedure Builder to build SQL Procedures on
an Intel or UNIX database server, you must configure your server for SQL
Procedures. For information on how to configure your server for SQL
Procedures, see the IBM DB2 Universal Database Application Building Guide.
The database manager configuration parameter KEEPDARI must be set to NO.
This can be done using the command db2 update dbm cfg using KEEPDARI NO, or
using the Control Center. If KEEPDARI is set to YES, you may get message
SQL0454N when attempting to build an SQL stored procedure that was
previously built and run.
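For example, the parameter can be changed from the command line as follows;
the db2stop/db2start pair is included on the assumption that the instance
should pick up the change immediately:

```shell
db2 update dbm cfg using KEEPDARI NO
db2stop
db2start
```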
------------------------------------------------------------------------
21.4 Using the DB2 Stored Procedure Builder on the Solaris Platform
To use the Stored Procedure Builder on the Solaris platform:
1. Download and install JDK 1.1.8. You can download JDK 1.1.8 from the
JavaSoft web site.
2. Set the environment variable JAVA_HOME to the location where you
installed the JDK.
3. Set your DB2 JDK11_PATH configuration parameter to the directory where
   you installed the JDK. To set JDK11_PATH, use the command:
   DB2 UPDATE DBM CFG USING JDK11_PATH path
   where path is the directory in which you installed the JDK.
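Assuming, for illustration, that the JDK was installed in /usr/jdk1.1.8 (a
hypothetical location, not one given in this document), the two settings
might look like this in a Korn or Bourne shell:

```shell
export JAVA_HOME=/usr/jdk1.1.8
db2 update dbm cfg using JDK11_PATH /usr/jdk1.1.8
```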
------------------------------------------------------------------------
21.5 Known Problems and Limitations
* SQL Procedures are not currently supported on Windows 98.
* For Java stored procedures, the JAR ID, class names, and method names
cannot contain non-ASCII characters.
* On AS/400 the following V4R4 PTFs must be applied to OS/400 V4R4:
- SF59674
- SF59878
* Stored procedure parameters with a character subtype of FOR MIXED DATA
or FOR SBCS DATA are not shown in the source code in the editor pane
when the stored procedure is restored from the database.
* Currently, there is a problem when Java source code is retrieved from
a database. At retrieval time, the comments in the code come out
collapsed. This will affect users of the DB2 Stored Procedure Builder
who are working in non-ASCII code pages, and whose clients and servers
are on different code pages.
------------------------------------------------------------------------
21.6 Using DB2 Stored Procedure Builder with Traditional Chinese Locale
There is a problem when using Java Development Kit or Java Runtime 1.1.8
with the Traditional Chinese locale. Graphical aspects of the Stored
Procedure Builder program (including menus, editor text, messages, and so
on) will not display properly. The solution is to make a change to the file
font.properties.zh_TW, which appears in one or both of the following
directories:
sqllib/java/jdk/lib
sqllib/java/jre/lib
Change:
monospaced.0=\u7d30\u660e\u9ad4,CHINESEBIG5_CHARSET,NEED_CONVERTED
to:
monospaced.0=Courier New,ANSI_CHARSET
------------------------------------------------------------------------
21.7 UNIX (AIX, Sun Solaris, Linux) Installations and the Stored Procedure
Builder
For Sun Solaris installations, or for AIX installations that use a Java
Development Kit or Runtime other than the one installed with UDB, you must
set the environment variable JAVA_HOME to the path where Java is installed
(that is, to the directory containing the /bin and /lib directories). The
Stored Procedure Builder is not supported on Linux, but it can be used on
supported platforms to build and run stored procedures on DB2 UDB for
Linux systems.
------------------------------------------------------------------------
DB2 Warehouse Manager
------------------------------------------------------------------------
22.1 "Warehouse Manager" Should Be "DB2 Warehouse Manager"
All occurrences of the phrase "Warehouse Manager" in product screens and in
product documentation should read "DB2 Warehouse Manager".
------------------------------------------------------------------------
22.2 DB2 Warehouse Manager Publications
22.2.1 Information Catalog Manager Administration Guide
* On page 1, Step 2 says:
When you install either the DB2 Warehouse Manager
or the DB2 OLAP Server, a default information catalog
is created on DB2 Universal Database for Windows NT.
The statement is incorrect. You must define a new information catalog.
See the "Creating the Information Catalog" section for more
information.
* Information Catalog for the Web. When using an information catalog
that is located on a DB2 UDB for OS/390 system, case insensitive
search is not available. This is true for both a simple search and an
advanced search. The online help does not explain that all searches on
a DB2 UDB for OS/390 information catalog are case sensitive for a
simple search. Moreover, all grouping category objects are expandable,
even when there are no underlying objects.
* On page 87, in the fourth paragraph, there is a statement that says:
When you publish DB2 OLAP Integration Server metadata, a linked relationship
is created between an information catalog "dimensions within a
multi-dimensional database" object type and a table object
in the OLAP Integration Server.
The statement should say:
When you publish DB2 OLAP Integration Server metadata, a linked relationship
is created between an information catalog "dimensions within a
multi-dimensional database object and a table object".
This statement also appears in the second paragraph on page 143
("Metadata mappings between the Information Catalog Manager and OLAP
Server" section).
* Page 90 shows an example of using the flgnxoln command to publish OLAP
server metadata to an information catalog. The example incorrectly
shows the directory for the db2olap.ctl and db2olap.ff files as
x:\Program Files\sqllib\logging. The directory name should be
x:\Program Files\sqllib\exchange as described on page 87.
22.2.2 Information Catalog Manager Programming Guide and Reference
22.2.2.1 Information Catalog Manager Reason Codes
In Appendix D: Information Catalog Manager reason codes, some text might be
truncated at the far right column for the following reason codes: 31014,
32727, 32728, 32729, 32730, 32735, 32736, 32737, 33000, 37507, 37511, and
39206. If the text is truncated, please see the PDF version of the book to
view the complete column.
22.2.3 Information Catalog Manager: Online Messages
* Message FLG0260E. The second sentence of the message explanation
should say:
The error caused a rollback of the information catalog, which failed.
The information catalog is not in stable condition, but no changes were made.
* Message FLG0051E. The second bullet in the message explanation should
say:
The information catalog contains too many objects or object types.
The administrator response should say:
Delete some objects or object types from the current
information catalog using the import function.
* Message FLG0003E. The message explanation should say:
The information catalog must be registered before you can use it.
The information catalog might not have been registered correctly.
* Message FLG0372E. The first sentence of the message explanation should
say:
The ATTACHMENT-IND value was ignored for an object
because that object is an Attachment object.
22.2.4 Information Catalog Manager: Online Help
Information Catalog window: The online help for the Selected menu Open item
incorrectly says "Opens the selected object". It should say "Opens the
Define Search window".
22.2.5 Query Patroller Administration Guide
22.2.5.1 DB2 Query Patroller Client is a Separate Component
The DB2 Query Patroller client is a separate component that is not part of
the DB2 Administration client. This means that it is not installed during
the installation of the DB2 Administration Client, as indicated in the
Query Patroller Installation Guide. Instead, the Query Patroller client
must be installed separately.
22.2.5.2 Manual Installation of Query Patroller on AIX and Solaris
To install DB2 Query Patroller using installp or smit, perform the steps
listed below. Refer to Manual Installation Commands for detailed syntax and
parameter information.
1. Set up or create a DB2 UDB EEE or EE instance to use with DB2 Query
Patroller.
2. Add an entry in the /etc/services file to be used with the DB2 Query
   Patroller server. For example: dqp1 55000/tcp
3. Create a user named iwm if one does not exist already.
4. Mount the CD-ROM.
5. Go to the /cdrom/db2 directory.
6.
o Agent
a. If you are installing a Query Patroller Agent on AIX, use
smit to install the following filesets:
i. db2_07_01.dqp.cln
ii. db2_07_01.mlic
iii. db2_07_01.dqp.agt
Note: These filesets must be installed in the above order.
If you are not using smit to install the filesets,
ensure that this order is respected. Furthermore, if
the filesets db2_07_01.cj and db2_07_01.jdbc were not
installed when you set up your DB2 EE or EEE
instance, you need to install these prior to starting
the installation of the Query Patroller Agent.
b. If you are installing on Solaris, use pkgadd to install the
following packages for the DB2 Query Patroller server:
i. db2qpc71
ii. db2mlic71
iii. db2dqpa71
Note: These packages must be installed in the order given.
      Furthermore, if the packages db2cj71 and db2jdbc71
      were not installed when you set up your DB2 UDB EE or
      EEE instance, you need to install them prior to
      starting the installation of the Query Patroller
      Agent.
o Server
a. To install a DB2 Query Patroller Server on AIX install the
fileset db2_07_01.dqp.agt and its prerequisite filesets
described above. Then, install the db2_07_01.dqp.srv
fileset.
b. To install a DB2 Query Patroller Server on Solaris install
db2dqpa71 and its prerequisite packages described above.
Then, install the db2dqps71 package.
If you are performing a migration from Version 6 to Version 7.1, refer to
the DB2 Query Patroller Installation Guide.
If you have installed the server, set up the license as follows:
1. Add the user iwm to the primary group for the DB2 UDB EE or EEE
instance owner. This will give the iwm user SYSADM authority over the
instance.
2. Add the following lines to the .profile file of the iwm user. The
INSTHOME variable is the home directory of the DB2 Query Patroller
server instance.
. INSTHOME/sqllib/db2profile
Note: If a C shell is being used, add source INSTHOME/sqllib/db2cshrc
      to the .login file.
3. Log on as root. The Query Patroller Server must be set up on either the
   DB2 UDB EE or the DB2 UDB EEE main node where the instance was created.
   For Query Patroller Server installation:
a. Enter the following command:
dqpcrt -s -p port_name instance_name
The port_name variable is the port name you used in Step 2.
instance_name is the name of the DB2 UDB EE or EEE instance.
Refer to Manual Installation Commands for detailed syntax and
parameter information.
Note: To remove a dqp instance, you can run the dqpdrop
      instance_name command. You can only run this command on
      the node where the server is set up.
b. Log on as the instance name. Run the following command:
dqpsetup -D database_name -g nodegroup_name -n node_number -t
tablespace_name -r result_tablespace_name -l tablespace_path
Refer to Manual Installation Commands for detailed syntax and
parameter information.
For Query Patroller Agent installation, enter:
dqpcrt -a -p port_name instance_name
The port_name variable is the port name you used in Step 2.
instance_name is the name of the DB2 UDB EE or EEE instance. Refer to
Manual Installation Commands for detailed syntax and parameter
information.
4. Use the db2licm command to register DB2 Query Patroller. See the DB2
Command Reference for further information.
Creating the Query Patroller Schema and Binding the Application Bind Files
To manually create the DB2 Query Patroller schema and bind all the
application bind files perform the following steps:
1. Create the DB2 table space that will be used for the DB2 Query
Patroller schema. This table space must be created on one nodegroup.
2. Use the program db2_qp_schema in the DB2 bin directory to create the
schema. This program will use the script file iwm_schema.sql as input.
db2_qp_schema supports either syntax:
db2_qp_schema <schema_input_filename> <database_alias> <user>
<password> <querypatroller_tablespace>
db2_qp_schema <schema_input_filename> <database_alias>
<querypatroller_tablespace>
3. Bind the DB2 Query Patroller server bind files using the bind file
list file db2qp.lst in the DB2 bnd directory. After connecting to the
database, issue the DB2 CLP command:
db2 bind @db2qp.lst blocking all grant public
4. Run the following command:
db2 bind iwmsx001.bnd isolation ur blocking all grant public insert
buf datetime iso
5. Bind the DB2 Query Patroller stored procedure bind files using the
bind file list file db2qp_sp.lst in the DB2 bnd directory. After
connecting to the database, issue the DB2 CLP command:
db2 bind @db2qp_sp.lst blocking all
6. Create a table space for the DB2 Query Patroller result tables.
Manual Installation Commands
dqpcrt
This command is used to allocate a node on the DB2 UDB EE or DB2 UDB EEE
system as a DB2 Query Patroller server. The port name to be used with the
DB2 Query Patroller instance, and the name of the DB2 UDB EE or EEE
instance designated as the DB2 Query Patroller server, are required
parameters. Syntax:
>>-dqpcrt----+--s---p--port_name instance_name--+--------------><
+--a---p--port_name instance_name--+
'--h-------------------------------'
Table 5. dqpcrt Command Parameters
Parameter Description
-s Used to create a DB2 Query Patroller
server on the named DB2 UDB EE or
EEE instance.
-a Used to create a DB2 Query Patroller
agent on the named DB2 UDB EE or EEE
instance.
port_name Identifies the port name to be used
with the DB2 Query Patroller server
or agent.
instance_name Identifies the name of the DB2 UDB
EE or EEE instance that is to be
designated as a DB2 Query Patroller
server instance.
-h Displays command usage information.
dqpsetup
This command is used to set the parameters for the DB2 Query Patroller
server. The size_DMS parameter and the -o flag are optional. The -o flag
can be used to remove schema objects from a previously installed version of
this product. Syntax:
>>-dqpsetup----+-| setup parameters |--+-----------------------><
'--h--------------------'
setup parameters
|----d--database_name-----g--nodegroup_name--------------------->
>------n--node_number-----t--tablespace_name-------------------->
>------r--result_tablespace_name-----l--tablespace_path--------->
>-----+---------------+---+-----+---instance_name---------------|
'--s--size_DMS--' '--o--'
Table 6. dqpsetup Command Parameters
Parameter Description
-d database_name Name of the database to be used with
the DB2 Query Patroller server.
-g nodegroup_name Name of the nodegroup that contains
the table space for the DB2 Query
Patroller server.
-n node_number Node number of a single node where
the nodegroup is defined.
-t tablespace_name Name of the DB2 Query Patroller
table space. The default type is an
SMS table space.
-r result_tablespace_name Name of the result table space to be
used.
-l tablespace_path Full path name of the table space.
-s size_DMS Size of the DMS table space. Use the
-s flag to specify the size for the
DMS table space. This parameter is
optional and only specified if a DMS
table space is to be used. The
default is an SMS table space.
-o Overwrites any existing DB2 Query
Patroller schema objects. This
parameter is optional.
instance_name Name of the DB2 UDB EE or EEE
instance that is to be designated as
a DB2 Query Patroller server.
-h Displays command usage information.
dqplist
This command is used to find the name of the DB2 UDB EE or the DB2 UDB EEE
instance being used as the DB2 Query Patroller server. It can only be run
from the node where the DB2 Query Patroller server was created. Syntax:
>>-dqplist----+-----+------------------------------------------><
'--h--'
The -h flag displays command usage information.
dqpdrop
This command is used to drop an existing DB2 Query Patroller server
instance. This command can only be run from the node where the DB2 Query
Patroller server was created. Syntax:
>>-dqpdrop----+-instance_name--+-------------------------------><
'--h-------------'
The -h flag provides usage information. The instance_name parameter is the
name of the DB2 Query Patroller instance that you want to drop.
22.2.5.3 Enabling Query Management
In the "Getting Started" chapter under "Enabling Query Management", the
text should read:
You must be the owner of the database, or you must have SYSADM,
SYSCTRL, or SYSMAINT authority to set database configuration parameters.
22.2.5.4 Starting Query Administrator
In the "Using QueryAdministrator to Administer DB2 Query Patroller"
chapter, instructions are provided for starting QueryAdministrator from the
Start menu on Windows. The first step provides the following text:
If you are using Windows, you can select DB2
Query Patroller --> QueryAdministrator
from the IBM DB2 program group.
The text should read:
DB2 Query Patroller --> QueryAdmin.
22.2.5.5 User Administration
In the "User Administration" section of the "Using QueryAdministrator to
Administer DB2 Query Patroller" chapter, the definition for the Maximum
Elapsed Time parameter indicates that if the value is set to 0 or -1, the
query will always run to completion. This parameter cannot be set to a
negative value. The text should indicate that if the value is set to 0, the
query will always run to completion.
The Max Queries parameter specifies the maximum number of jobs that the DB2
Query Patroller will run simultaneously. Max Queries must be an integer
within the range of 0 to 32767.
22.2.5.6 Creating a Job Queue
In the "Job Queue Administration" section of the "Using QueryAdministrator
to Administer DB2 Query Patroller" chapter, the screen capture in the steps
for "Creating a Job Queue" should be displayed after the second step. The
Information about new Job Queue window opens once you click New on the Job
Queue Administration page of the QueryAdministrator tool.
References to the Job Queues page or the Job Queues tab should read Job
Queue Administration page and Job Queue Administration tab, respectively.
22.2.5.7 Using the Command Line Interface
For a user with User authority on the DB2 Query Patroller system to submit
a query and have a result table created, the user may require CREATETAB
authority on the database. The user does not require CREATETAB authority on
the database if the DQP_RES_TBLSPC profile variable is left unset, or if
the DQP_RES_TBLSPC profile variable is set to the name of the default table
space. The creation of the result tables will succeed in this case because
users have the authority to create tables in the default table space.
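For illustration, the profile variable can be set with the db2set command;
RESULTTS is a hypothetical table space name, not one defined in this
document:

```shell
# Direct result tables to a specific table space:
db2set DQP_RES_TBLSPC=RESULTTS
# Setting the variable to an empty value unsets it, so result tables
# are created in the default table space again:
db2set DQP_RES_TBLSPC=
```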
22.2.5.8 Query Enabler Notes
* When using third-party query tools that use a keyset cursor, queries
will not be intercepted. In order for Query Enabler to intercept these
queries, you must modify the db2cli.ini file to include:
[common]
DisableKeySetCursor=1
* For AIX clients, please ensure that the environment variable LIBPATH
is not set. Library libXext.a, shipped with the JDK, is not compatible
with the library in the /usr/lib/X11 subdirectory. This will cause
problems with the Query Enabler GUI.
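The [common] keyword above can also be written without hand-editing
db2cli.ini, using the CLP's UPDATE CLI CFG command. A sketch for
illustration only (requires a DB2 client environment; verify the command
against your release):

```shell
db2 UPDATE CLI CFG FOR SECTION common USING DisableKeySetCursor 1
db2 GET CLI CFG FOR SECTION common    # verify the keyword was written
```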
------------------------------------------------------------------------
Information Center
------------------------------------------------------------------------
23.1 "Invalid shortcut" Error on the Windows Operating System
When using the Information Center, you may encounter an error like:
"Invalid shortcut". If you have recently installed a new Web browser or a
new version of a Web browser, ensure that HTML and HTM documents are
associated with the correct browser. See the Windows Help topic "To change
which program starts when you open a file".
------------------------------------------------------------------------
OLAP Starter Kit
------------------------------------------------------------------------
24.1 Completing the DB2 OLAP Starter Kit Setup on AIX and Solaris
The DB2 OLAP Starter Kit install follows the basic procedures of the DB2
UDB install for UNIX. The installer places the product files in a system
directory owned by the root user (for AIX: /usr/lpp/db2_07_01; for
Solaris: /opt/IBMdb2/V7.1).
During the instance creation phase, two DB2 OLAP directories (essbase and
is) are created under sqllib in the instance user's home directory. Only
one instance of OLAP server can run on a machine at a time. To complete
the setup, the user must manually replace the is/bin directory so that it
is no longer a link to the system's is/bin directory, but is instead a
writable directory within the instance's home directory that contains
links to the individual files.
To complete the setup for AIX, logon using the instance ID, change to the
sqllib/is directory, then enter the following:
rm bin
mkdir bin
cd bin
ln -s /usr/lpp/db2_07_01/is/bin/ismesg.mdb ismesg.mdb
ln -s /usr/lpp/db2_07_01/is/bin/olapicmd olapicmd
ln -s /usr/lpp/db2_07_01/is/bin/olapisvr olapisvr
ln -s /usr/lpp/db2_07_01/is/bin/essbase.mdb essbase.mdb
ln -s /usr/lpp/db2_07_01/is/bin/libolapams.a libolapams.a
To complete the setup for Solaris, logon using the instance ID, change to
the sqllib/is directory, then enter the following:
rm bin
mkdir bin
cd bin
ln -s /opt/IBMdb2/V7.1/is/bin/ismesg.mdb ismesg.mdb
ln -s /opt/IBMdb2/V7.1/is/bin/olapicmd olapicmd
ln -s /opt/IBMdb2/V7.1/is/bin/olapisvr olapisvr
ln -s /opt/IBMdb2/V7.1/is/bin/essbase.mdb essbase.mdb
ln -s /opt/IBMdb2/V7.1/is/bin/libolapams.so libolapams.so
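The AIX and Solaris sequences above differ only in the product base
directory and the libolapams suffix, so they can be collapsed into one
small function. A minimal sketch, assuming the Solaris file list (swap
libolapams.so for libolapams.a on AIX); the function name is ours, not
part of the product:

```shell
#!/bin/sh
# Recreate $2 (the instance's sqllib/is directory) so that bin/ is a
# real, writable directory of symlinks into $1 (the product's system
# is/bin directory), per the instructions above.
relink_is_bin() {
    sysbin=$1 isdir=$2
    cd "$isdir" || return 1
    rm -f bin                     # drop the link to the system bin directory
    mkdir bin && cd bin || return 1
    for f in ismesg.mdb olapicmd olapisvr essbase.mdb libolapams.so; do
        ln -s "$sysbin/$f" "$f"
    done
}
# Example: relink_is_bin /opt/IBMdb2/V7.1/is/bin "$HOME/sqllib/is"
```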
------------------------------------------------------------------------
24.2 Logging in from OLAP Integration Server Desktop
To use DB2 OLAP Integration Server Desktop to create OLAP models and
metaoutlines, you must connect the client software to two servers: DB2 OLAP
Integration Server and DB2 OLAP Server. The login dialog prompts you for
the necessary information for the Desktop to connect to these two servers.
On the left side of the dialog, enter information about DB2 OLAP
Integration Server. On the right side, enter information about DB2 OLAP
Server.
To connect to DB2 OLAP Integration Server:
* Server: Enter the host name or IP address of your Integration Server.
If you have installed the Integration Server on the same workstation
as your desktop, then typical values are "localhost" or "127.0.0.1".
* OLAP Metadata Catalog: When you connect to OLAP Integration Server you
must also specify a Metadata Catalog. OLAP Integration Server stores
information about the OLAP models and metaoutlines you create in a
relational database known as the Metadata Catalog. This relational
database must be registered for ODBC. The catalog database contains a
special set of relational tables that OLAP Integration Server
recognizes. On the login dialog, you can specify an Integration Server
and then expand the pull-down menu for the OLAP Metadata Catalog field
to see a list of the ODBC data source names known to the OLAP
Integration Server. Choose an ODBC database that contains the metadata
catalog tables.
* User Name and Password: OLAP Integration Server will connect to the
Metadata Catalog using the User name and password that you specify on
this panel. This is a login account that exists on the server (not the
client, unless the server and client are running on the same machine).
The user name must be the user who created the OLAP Metadata Catalog.
Otherwise, OLAP Integration Server will not find the relational tables
in the catalog database because the table schema names are different.
The DB2 OLAP Server information is optional, so the input fields on the
right side of the Login dialog may be left blank. However, some operations
in the Desktop and the Administration Manager require that you connect to a
DB2 OLAP Server. If you leave these fields blank, then the Desktop will
display the Login dialog again if the Integration Server needs to connect
to DB2 OLAP Server in order to complete an operation that you requested. It
is recommended that you always fill in the DB2 OLAP Server fields on the
Login dialog.
To connect to DB2 OLAP Server:
* Server: Enter the host name or IP address of your DB2 OLAP Server. If
you are running the OLAP Starter Kit, then your OLAP Server and
Integration Server are the same. If the Integration Server and OLAP
Server are installed on different hosts, then enter the host name or
an IP address that is defined on OLAP Integration Server.
* User Name and Password: OLAP Integration Server will connect to DB2
OLAP Server using the user name and password that you specify on this
panel. This user name and password must already be defined to the DB2
OLAP Server. OLAP Server manages its own user names and passwords
separately from the host operating system.
24.2.1 Starter Kit Login Example
The following example assumes that you created the OLAP Sample, and you
selected db2admin as your administrator user ID, and password as your
administrator password during DB2 UDB 7.1 installation.
* For OLAP Integration Server: Server is localhost, OLAP Metadata
Catalog is TBC_MD, User Name is db2admin, Password is password
* For DB2 OLAP Server: Server is localhost, User Name is db2admin
------------------------------------------------------------------------
24.3 Manually creating and configuring the sample databases for OLAP
Integration Server
The sample databases are created automatically when you install OLAP
Integration Server. The following instructions explain how to set up the
Catalog and Sample databases manually, if necessary.
1. In Windows, open the Command Window by clicking Start
-->Programs-->DB2 for Windows NT--> Command Window.
2. Create the production catalog database:
a. Type db2 create db OLAP_CAT
b. Type db2 connect to OLAP_CAT
3. Create tables in the database:
a. Navigate to \SQLLIB\IS\ocscript\ocdb2.sql
b. Type db2 -tf ocdb2.sql
4. Create the sample source database:
a. Type db2 connect reset
b. Type db2 create db TBC
c. Type db2 connect to TBC
5. Create tables in the database:
a. Navigate to \SQLLIB\IS\samples\
b. Copy tbcdb2.sql to \SQLLIB\samples\db2sampl\tbc
c. Copy lddb2.sql to \SQLLIB\samples\db2sampl\tbc
d. Navigate to \SQLLIB\samples\db2sampl\tbc
e. Type db2 -tf tbcdb2.sql
f. Type db2 -vf lddb2.sql to load sample source data into the
tables.
6. Create the sample catalog database:
a. Type db2 connect reset
b. Type db2 create db TBC_MD
c. Type db2 connect to TBC_MD
7. Create tables in the database:
a. Navigate to \SQLLIB\IS\samples\tbc_md
b. Copy ocdb2.sql to \SQLLIB\samples\db2sampl\tbcmd
c. Copy lcdb2.sql to \SQLLIB\samples\db2sampl\tbcmd
d. Navigate to \SQLLIB\samples\db2sampl\tbcmd
e. Type db2 -tf ocdb2.sql
f. Type db2 -vf lcdb2.sql to load sample metadata into the tables.
8. Configure ODBC for TBC_MD, TBC, and OLAP_CAT:
a. Open the NT control panel by clicking Start-->Settings-->Control
Panel
b. Select ODBC (or ODBC data sources) from the list.
c. Select the System DSN tab.
d. Click Add. The Create New Data Source window opens.
e. Select IBM DB2 ODBC DRIVER from the list.
f. Click Finish. The ODBC IBM DB2 Driver - Add window opens.
g. Type the name of the data source (OLAP_CAT) in the Data source
name field.
h. Type the alias name in the Database alias field, or click the
down arrow and select OLAP_CAT from the list.
i. Click OK.
j. Repeat these steps for the TBC_MD and the TBC databases.
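The db2 commands in steps 2 through 7 can be collected into a single
script. The sketch below only prints the commands for review rather than
executing them, because each -tf/-vf command must be run from the
directory named in the corresponding "Navigate to" step:

```shell
#!/bin/sh
# The db2 commands from steps 2 through 7 above, in order.  Printed for
# review; run each -tf/-vf command from the directory given in the
# matching "Navigate to" instruction.
CMDS='db2 create db OLAP_CAT
db2 connect to OLAP_CAT
db2 -tf ocdb2.sql
db2 connect reset
db2 create db TBC
db2 connect to TBC
db2 -tf tbcdb2.sql
db2 -vf lddb2.sql
db2 connect reset
db2 create db TBC_MD
db2 connect to TBC_MD
db2 -tf ocdb2.sql
db2 -vf lcdb2.sql'
printf '%s\n' "$CMDS"
```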
------------------------------------------------------------------------
24.4 Known problems and limitations
This section lists known limitations for the DB2 OLAP Starter Kit, DB2
OLAP Desktop client, Spreadsheet clients, and DB2 OLAP Integration
Server.
24.4.1 DB2 OLAP Starter Kit
* The currently supported platforms for the DB2 OLAP Starter Kit are
Windows, AIX, and Sun Solaris Operating Environment.
* The OLAP Starter Kit includes four sample DB2 OLAP Server applications
named Demo, Sampeast, Sample, and Samppart. Each application includes
one or more databases. No data is loaded into any of these databases.
You must upgrade to the DB2 OLAP Server full product to be able to
load data into these databases.
* The incorrect help panels are displayed for:
o The Query Designer Row-->Measures-->Profit window.
o The Cascade Options window.
o The Subset Dialog window.
In addition, pressing F1 does not bring up the correct help screen.
* Some words or characters are not translated or are translated
incorrectly for the following windows:
o The Output Options window uses the English words "Default" and
"Long Names".
o Some Application Manager windows and error messages are displayed
in English instead of being localized.
o In combined.mtx messages, the variable %s is resolved in English.
o In the Query Designer, the NLV translated words for Ascending and
Descending do not appear when you expand Data Sorting in the left
pane.
o In Database Object Names, some NLV characters are incorrect.
o The log files contain some incorrect NLV characters.
o In Application Manager Outline, Japanese characters are
incorrect.
* The Application Manager closes when you select
Database-->Setting-->Storage-->OK, and you receive the following error
message: Essadmin.exe Application Error: the instruction at "0x004fffce"
referenced memory at "0x4ca7b075"; the memory could not be "read". Press
OK to terminate the application, or press Cancel to debug it.
* You must install Adobe Acrobat in order to use the tutorial and online
help.
* There is no migration path from DB2 OLAP server to OLAP Starter Kit.
OLAP Starter Kit is targeted for first time OLAP users.
24.4.2 DB2 OLAP Desktop Client
* OLAP Integration Server requires that an accounts dimension with at
least one measure be defined in the OLAP Model. If the OLAP model that
you use to create a metaoutline does not contain an accounts dimension
with one or more measures, and you use the Database Measures tab to
create a single measure in the OLAP metaoutline, you can save the
metaoutline without receiving an error, but a subsequent member load
will fail.
* DB2 OLAP Integration Server occasionally displays Essbase error
numbers without the corresponding error message. WORKAROUND: View the
message.txt file located in the ISHOME/esslib directory or check the
OLAP Integration Server log file.
* DB2 OLAP Integration Server Desktop does not support large fonts. If
client-user computer resolution is set to 1024 x 768 pixels or less
with large fonts only, buttons in the OLAP Integration Server Desktop
Login and Welcome windows are truncated, and usage is restricted.
WORKAROUND: On the Windows desktop, select Start--> Settings-->
Control Panel -->Display--> Settings. Select Small Fonts from the Font
Size drop-down list.
* Process buttons in the OLAP Integration Server Desktop OLAP Model
Assistant and OLAP Metaoutline Assistant tabs do not display properly
without a color palette of 65536 colors. WORKAROUND: To set the
optimum color palette, on the Windows Desktop, select Start-->
Settings--> Control Panel -->Display--> Settings. Select 65536 from
the Color Palette drop-down list.
* Windows. If there is a date/time transformation and a filter on the
same member, the DB2 server crashes on Windows NT. This problem does
not occur on DB2 servers running on UNIX.
* OLAP Integration Server Desktop only supports table names from 1 to 30
characters in length.
* The Dimension renaming function reacts differently depending on the
procedure used. You can use names that contain spaces in the Rename
window on a dimension table. You cannot use names that contain spaces
in the Rename window of Dimension Properties. WORKAROUND: Do not imbed
spaces in dimension names.
* In some environments, the Metaoutline Properties window does not open
unless you save the metaoutline before trying to open the window.
* If your operating system file system does not allow names that are
longer than eight characters, the member load and view sample
functions could fail with error number 2001007.
The workaround is to start DB2 OLAP Integration Server in a directory
that is in a file system that supports longer names.
* An accounts dimension should not be deleted from a metaoutline after
the metaoutline has been created. If you need to change the measures
in an accounts dimension, delete all existing measures in the
metaoutline and then create new measures.
* The View Table Data option in the OLAP Model standard user interface,
which enables viewing of relational source table data in the left
frame of the OLAP Model main window, has a limit of 1000 rows. Data
source rows are displayed in 100-row increments per window by clicking
the Next button. The total number of rows retrieved for display cannot
exceed the first 1000 rows.
24.4.3 Spreadsheet Clients
* If you use NLV characters in a Lotus 123 spreadsheet name, or use NLV
characters in the spreadsheet, you will be unable to connect, retrieve
data, or execute any other function from the spreadsheet.
* Rapid double clicking in a cell in Lotus 123 spreadsheet add-in can
cause the following error message: Microsoft Visual C++ Runtime error
Program c:\lotus\123\123w.exe abnormal program termination.
WORKAROUND: Click slowly to expand the cells one at a time.
* In Lotus 123, selecting two rows at the same time and then selecting
Essbase - Keep only or Remove only causes only one column to be kept
or removed.
* In Lotus 123, an error occurs in the Essbase calculation when several
rows are selected.
* AIX. The sample applications do not contain data. Although you can
use spreadsheet programs to retrieve the database information, no
values are displayed other than the dimension and member names; some
values are indicated as missing.
* If a down arrow is pressed in the Linked Objects browser,
spreadsheets crash with a Dr. Watson trap error. The correct error
message cannot be displayed.
* In Lotus 123, if no workbook is open, an error message should be
displayed when you click an OLAP Server menu item. Instead, no
message is displayed and you hear two warning beeps.
* In Excel, double clicking formulas in the spreadsheet causes formulas
to be deleted from the spreadsheet. Specifying Retain on Zooms does
not prevent this error.
24.4.4 DB2 OLAP Integration Server
* Currently, only English is supported for scheduling functions. In some
non-English environments, using the Tool-->Scheduler function in the
OLAP Metaoutline standard user interface on NT client computers causes
the OLAP Integration Server to crash. The crash occurs because the NT
Scheduler stores scheduling information in a language-specific format,
and the OLAP Integration Server is unable to parse the information.
* DB2 OLAP Integration Server does not support the GRAPHIC, VARGRAPHIC,
and LONG VARGRAPHIC data types during View Table operations in the
OLAP Model standard user interface. Relational tables containing data
with the GRAPHIC, VARGRAPHIC or LONG VARGRAPHIC data type appear blank
when View Table is selected in the OLAP model. WORKAROUND: Make the
following addition to the DB2CLI.INI file:
[SAMPLE]
PATCH1=65536
PATCH2=7
DBALIAS=SAMPLE
* If OLAP Integration Server encounters a NULL value during a data load,
it automatically loads the data into the parent member of the NULL.
However, if the NULL is at Generation 2, OLAP Integration Server
cannot load the data to the parent member because the parent member is
the dimension level member. In this case, OLAP Integration Server
records an error in the log file. WORKAROUND: Do not include NULLs at
Generation 2.
* OLAP Integration Server does not support Relational Database
Management System (RDBMS) column names that have imbedded blanks. If
OLAP Integration Server encounters blanks, it generates invalid SQL
statements.
* OLAP Integration Server does not read some double-byte character set
(DBCS) characters, such as minus (-) signs, when retrieving relational
data source values during a preview operation. If OLAP Integration
Server encounters such characters during a preview operation, an
"Unexpected Error at Condition" error message is displayed.
* Using the DBCS minus sign (-) character in a column concatenation in
an OLAP model generates a syntax error during member loads.
WORKAROUND: When performing transformations on columns in an OLAP
model, do not use a minus sign, a hyphen, or a dash (-) character in
the column name. Do not use relational tables or relational table
columns that include a minus sign, a hyphen or a dash (-) character.
* AIX. When you start the sample application, you will see a message
indicating that the server does not have the currency conversion
option.
* Windows: Two sample catalog databases are created during installation
for use with the DB2 OLAP Integration Server. However, if you try to
log in to one of these databases from the DB2 OLAP Integration Server
Desktop, you receive the CLI error message 'Invalid connection string
attribute'. WORKAROUND: Update the db2cli.ini file located in the
sqllib directory, following these steps:
1. Make a backup copy of db2cli.ini.
2. Delete 'DATABASE=OLAPCATP' and 'DATABASE=OLAPCATD' from each
stanza.
[OLAPCATP]
DATABASE=OLAPCATP <----------------------------------(delete)
DESCRIPTION=OLAPCATP
DBALIAS=OLAPCATP
[OLAPCATD]
DATABASE=OLAPCATD <---------------------------------(delete)
DESCRIPTION=OLAPCATD
DBALIAS=OLAPCATD
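Step 2 above can be scripted. A sketch (written in POSIX shell for
brevity, although the file lives on Windows in this scenario), assuming
the stanzas look exactly as shown; the function name is ours:

```shell
#!/bin/sh
# Back up a db2cli.ini and delete the DATABASE=OLAPCATP and
# DATABASE=OLAPCATD lines from it, per the workaround above.
strip_olapcat_database() {
    ini=$1
    cp "$ini" "$ini.bak"      # step 1: keep a backup copy
    sed -e '/^DATABASE=OLAPCATP$/d' -e '/^DATABASE=OLAPCATD$/d' \
        "$ini.bak" > "$ini"
}
# Example: strip_olapcat_database /home/db2inst1/sqllib/db2cli.ini
```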
* Hyperion Integration Server does not add a metaoutline filter
description in the OLAP Metaoutline Assistant.
* Use the following workaround to avoid a scheduling problem in the AIX
English version environment.
For all new AIX servers, add <username> to the /var/adm/cron/cron.allow
file to allow scheduling for the specified user. Then create an empty
file named <username> for that user in the /var/spool/cron/crontabs
directory and set its permissions to 555. A similar setup is required
for other UNIX environments; for details, see the man page for crontab.
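The crontab workaround above can be sketched as a small function. The
CRON_ALLOW and CRONTABS variables are ours, made overridable only so the
sketch can be exercised outside a real AIX system; the defaults are the
AIX paths, and other UNIX variants differ (see the crontab man page):

```shell
#!/bin/sh
# Per the AIX workaround above: add a user to cron.allow and create an
# empty crontab file for that user with permissions 555.
allow_cron_user() {
    user=$1
    allow=${CRON_ALLOW:-/var/adm/cron/cron.allow}
    spool=${CRONTABS:-/var/spool/cron/crontabs}
    grep -qx "$user" "$allow" 2>/dev/null || echo "$user" >> "$allow"
    : > "$spool/$user"        # empty crontab file for the user
    chmod 555 "$spool/$user"
}
# Example: allow_cron_user db2inst1
```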
* A DBCS space is not recognized as a DBCS space by OLAP Integration
Server. The following transformation space settings do not work for
source data columns that contain a DBCS space:
o Dropping leading/trailing spaces
o Converting spaces to underscores
o Concatenating
* OLAP Integration Server cannot save a dimension description in the
OLAP Model Assistant if the description contains DBCS characters.
* OLAP Integration Server does not support pass-through (database
specific) transformations using SQL functions. Specifying a built-in
RDBMS function, such as Substring, or Left, causes OLAP Integration
Server to generate invalid SQL.
* Creating hierarchies in OLAP Model Assistant causes OLAP Integration
Server to create an empty folder in the Hyperion\IS\Loadinfo folder on
the client. The empty folder contains an empty .txt file. Empty
folders and files are also created when you access either a View
Sample from the Edit Hierarchy dialog box, or the Preview Results
dialog box from the Edit OLAP Model dialog box in the standard user
interface.
To prevent buildup of empty folders and files, you can delete them
from the Loadinfo folder at any time.
* OLAP Integration Server does not display a message to confirm that a
dimension table has been deleted from an OLAP model.
Following is a suggested workaround: If you did not save the OLAP
model after you added the dimension table that you want to delete,
click Close to close the OLAP model without saving any changes, then
revert back to the previous version of the model. Any other changes
that you made during the current session will also be lost.
* OLAP Integration Server does not support Essbase ESSCMD scripts. The
IS\esscript directory has not been deleted from the OLAP Integration
Server directory structure that is created during the installation
process. This directory is an empty directory that is not used.
------------------------------------------------------------------------
24.5 OLAP Starter Kit Spreadsheet Needs Current Windows svc.pack
Before installing DB2 OLAP Server on Windows NT, you must apply Microsoft
Windows NT 4.0 Service Pack 5.
If problems occur while installing the OLAP Starter Kit spreadsheet
add-in on Windows 95 or Windows 98, the cause may be down-level Microsoft
system files. Obtain the following files from Microsoft via a Windows
95/98 service pack, or unzip %arborpath%\bin\olapewd.zip and copy the
files into the Windows system directory. Make sure you do not replace any
files already on your system that have a newer release date. The Windows
9x system files and their required levels are:
* ASYCFILT.DLL 2.20.4118.1
* COMCAT.DLL 4.71.1441.1
* COMPOBJ.DLL 2.10.35.35
* DCOMCNFG.EXE 4.0.1381.4
* DLLHOST.EXE 4.0.1381.4
* IPROP.DLL 4.0.1381.4
* OLE2.DLL 2.10.35.35
* OLEAUT32.DLL 2.20.4118.1
* OLECNV32.DLL 4.0.1381.4
* OLEDLG.DLL 4.0.1381.4
* OLEPRO32.DLL 5.0.4118.1
* OLETHK32.DLL 4.0.1371.1
* RPCLTC1.DLL 4.0.1381.4
* RPCLTCCM.DLL 4.0.1381.4
* RPCLTSCM.DLL 4.0.1381.4
* RPCMQCL.DLL 4.0.1381.4
* RPCMQSVR.DLL 4.0.1381.4
* RPCNS4.DLL 4.0.1371.1
* RPCSS.EXE 4.0.1381.4
* STDOLE2.TLB 2.20.4122.1
* STDOLE32.TLB 2.10.3027.1
* STORAGE.DLL 2.10.35.35
------------------------------------------------------------------------
24.6 OLAP Spreadsheet Add-in EQD Files Missing
In the DB2 OLAP Starter Kit, the Spreadsheet add-in has a component called
the Query Designer (EQD). The online help menu for EQD includes a button
called Tutorial that does not display anything. The material that should be
displayed in the EQD tutorials is a subset of chapter two of the OLAP
Spreadsheet Add-in User's Guide for Excel, and the OLAP Spreadsheet Add-in
User's Guide for 1-2-3. All the information in the EQD tutorial is
available in the HTML versions of these books in the Information Center,
and in the PDF versions.
------------------------------------------------------------------------
Wizards
------------------------------------------------------------------------
25.1 Setting Extent Size in the Create Database Wizard
Using the Create Database Wizard, you can set the Extent Size and
Prefetch Size parameters for the User Table Space (but not those for the
Catalog or Temporary table spaces) of the new database. This feature is
enabled only if at least one container is specified for the User Table
Space on the "User Tables" page of the Wizard.
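For comparison, the same two parameters can also be set from the command
line when the database is created. A hedged sketch (the database name,
container path, and sizes are illustrative; verify the CREATE DATABASE
syntax against the SQL Reference for your release):

```shell
db2 "CREATE DATABASE MYDB
     USER TABLESPACE MANAGED BY DATABASE
     USING (FILE '/db2data/mydb_usr' 5120)
     EXTENTSIZE 64 PREFETCHSIZE 128"
```

As in the wizard, the extent and prefetch sizes here apply to the user
table space definition only.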
------------------------------------------------------------------------
Additional Information
------------------------------------------------------------------------
26.1 DB2 Universal Database and DB2 Connect Online Support
For a complete and up-to-date source of DB2 information, including
information about issues discovered after this document was published, use
the DB2 Universal Database & DB2 Connect Online Support Web site, located
at http://www.ibm.com/software/data/db2/udb/winos2unix/support.
------------------------------------------------------------------------
26.2 DB2 magazine
For the latest information about the DB2 family of products, obtain a free
subscription to "DB2 magazine". The online edition of the magazine is
available at http://www.db2mag.com; instructions for requesting a
subscription are also posted on this site.
------------------------------------------------------------------------
Appendix A. Notices
IBM may not offer the products, services, or features discussed in this
document in all countries. Consult your local IBM representative for
information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to
state or imply that only that IBM product, program, or service may be used.
Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However,
it is the user's responsibility to evaluate and verify the operation of any
non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give
you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the
IBM Intellectual Property Department in your country or send inquiries, in
writing, to:
IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106, Japan
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS
IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT
NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY
OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this
statement may not apply to you.
This information could include technical inaccuracies or typographical
errors. Changes are periodically made to the information herein; these
changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s)
described in this publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those
Web sites. The materials at those Web sites are not part of the materials
for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the
purpose of enabling: (i) the exchange of information between independently
created programs and other programs (including this one) and (ii) the
mutual use of the information which has been exchanged, should contact:
IBM Canada Limited
Office of the Lab Director
1150 Eglinton Ave. East
North York, Ontario
M3C 1H7
CANADA
Such information may be available, subject to appropriate terms and
conditions, including in some cases, payment of a fee.
The licensed program described in this information and all licensed
material available for it are provided by IBM under terms of the IBM
Customer Agreement, IBM International Program License Agreement, or any
equivalent agreement between us.
Any performance data contained herein was determined in a controlled
environment. Therefore, the results obtained in other operating
environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these
measurements will be the same on generally available systems. Furthermore,
some measurements may have been estimated through extrapolation. Actual
results may vary. Users of this document should verify the applicable data
for their specific environment.
Information concerning non-IBM products was obtained from the suppliers of
those products, their published announcements or other publicly available
sources. IBM has not tested those products and cannot confirm the accuracy
of performance, compatibility or any other claims related to non-IBM
products. Questions on the capabilities of non-IBM products should be
addressed to the suppliers of those products.
All statements regarding IBM's future direction or intent are subject to
change or withdrawal without notice, and represent goals and objectives
only.
This information may contain examples of data and reports used in daily
business operations. To illustrate them as completely as possible, the
examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and
addresses used by an actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information may contain sample application programs in source
language, which illustrates programming techniques on various operating
platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using,
marketing or distributing application programs conforming to the
application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested
under all conditions. IBM, therefore, cannot guarantee or imply
reliability, serviceability, or function of these programs.
Each copy or any portion of these sample programs or any derivative work
must include a copyright notice as follows:
(C) (your company name) (year). Portions of this code are derived from IBM
Corp. Sample Programs. (C) Copyright IBM Corp. _enter the year or years_.
All rights reserved.
------------------------------------------------------------------------
A.1 Trademarks
The following terms, which may be denoted by an asterisk(*), are trademarks
of International Business Machines Corporation in the United States, other
countries, or both.
ACF/VTAM, AISPO, AIX, AIX/6000, AIXwindows, AnyNet, APPN, AS/400,
BookManager, CICS, C Set++, C/370, DATABASE 2, DataHub, DataJoiner,
DataPropagator, DataRefresher, DB2, DB2 Connect, DB2 Extenders, DB2 OLAP
Server, DB2 Universal Database, Distributed Relational Database
Architecture, DRDA, eNetwork, Extended Services, FFST, First Failure
Support Technology, IBM, IMS, IMS/ESA, LAN Distance, MVS, MVS/ESA,
MVS/XA, Net.Data, OS/2, OS/390, OS/400, PowerPC, QBIC, QMF, RACF, RISC
System/6000, RS/6000, S/370, SP, SQL/DS, SQL/400, System/370, System/390,
SystemView, VisualAge, VM/ESA, VSE/ESA, VTAM, WebExplorer, WIN-OS/2
The following terms are trademarks or registered trademarks of other
companies:
Microsoft, Windows, and Windows NT are trademarks or registered trademarks
of Microsoft Corporation.
Java or all Java-based trademarks and logos, and Solaris are trademarks of
Sun Microsystems, Inc. in the United States, other countries, or both.
Tivoli and NetView are trademarks of Tivoli Systems Inc. in the United
States, other countries, or both.
UNIX is a registered trademark in the United States, other countries or
both and is licensed exclusively through X/Open Company Limited.
Other company, product, or service names, which may be denoted by a double
asterisk(**) may be trademarks or service marks of others.
------------------------------------------------------------------------
Footnotes:
1 A new level is initiated when a trigger, function, or stored procedure
is invoked.
2 Interfaces that automatically commit after each statement will return
a null value when the function is invoked in separate statements,
unless the automatic commit is turned off.