═══ 1. Version Notice ═══
First Edition (March 1991)
The following paragraph does not apply to the United Kingdom or any country
where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS
MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states
do not allow disclaimer of express or implied warranties in certain
transactions; therefore, this statement may not apply to you.
This publication could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make improvements
and/or changes in the product(s) and/or the program(s) described in this
publication at any time.
It is possible that this publication may contain reference to, or information
about, IBM products (machines and programs), programming, or services that are
not announced in your country. Such references or information must not be
construed to mean that IBM intends to announce such IBM products, programming,
or services in your country.
Requests for copies of this publication and for technical information about IBM
products should be made to your IBM Authorized Dealer or your IBM Marketing
Representative.
═══ 2. Special Notices ═══
References in this publication to IBM products, programs, or services do not
imply that IBM intends to make these available in all countries in which IBM
operates. Any reference to an IBM product, program, or service is not intended
to state or imply that only IBM's product, program, or service may be used. Any
functionally equivalent product, program, or service that does not infringe any
of IBM's intellectual property rights or other legally protectable rights may
be used instead of the IBM product, program, or service. Evaluation and
verification of operation in conjunction with other products, programs, or
services, except those expressly designated by IBM, are the user's
responsibility.
IBM may have patents or pending patent applications covering subject matter in
this document. The furnishing of this document does not imply giving license to
these patents. Written license inquiries may be sent to the IBM Director of
Commercial Relations, IBM Corporation, Purchase, NY 10577.
The following terms, denoted by an asterisk (*) in this publication, are
trademarks of the IBM Corporation in the United States and/or other countries:
┌──────────────────────┬──────────────────────┐
│ C/2                  │ ETHERAND             │
├──────────────────────┼──────────────────────┤
│ IBM                  │ Micro Channel        │
├──────────────────────┼──────────────────────┤
│ Operating System/2   │ OS/2                 │
├──────────────────────┼──────────────────────┤
│ PC Network           │ Personal Computer AT │
├──────────────────────┼──────────────────────┤
│ Personal System/2    │ PS/2                 │
├──────────────────────┼──────────────────────┤
│ Presentation Manager │ VTAM                 │
└──────────────────────┴──────────────────────┘
The following terms, denoted by a double asterisk (**) in this publication, are
trademarks of other companies as follows:
┌──────────────────┬──────────────────────┐
│ 3Com             │ 3Com Corporation     │
├──────────────────┼──────────────────────┤
│ EtherLink        │ 3Com Corporation     │
├──────────────────┼──────────────────────┤
│ Ungermann-Bass   │ Ungermann-Bass, Inc. │
└──────────────────┴──────────────────────┘
═══ 3. About This Book ═══
The purpose of this book is to provide performance tuning guidance and
information about IBM* Operating System/2* Extended Edition (EE) Versions 1.2
and 1.3, and IBM OS/2* Local Area Network (LAN) Server Program Versions 1.2 and
1.3. Unless noted, comments apply to both versions of these programs.
The goal is to enable users of the OS/2 system to tune their systems and
applications for better performance. This is accomplished by providing:
o A methodology on how to approach performance tuning
o Background information on OS/2 system structure and performance related
parameters
o Tuning tips and guidelines.
Much of the information in this book is taken from existing IBM publications.
The purpose of repeating it here is to put the information in one place.
Additionally, this book expands on the information found in other sources, and
where different, supersedes those other sources.
Who Should Read This Book
How This Book Is Organized
Prerequisite Publications
Related Publications
═══ 3.1. Who Should Read This Book ═══
This document is intended for anyone who needs to tune the OS/2 system for
performance. It is assumed that this person has extensive knowledge of
computers, is familiar with the terminology relating to hardware and software
systems, and has a conceptual understanding of program internals.
It is also assumed that the reader is familiar with the OS/2 system and knows
how to initially install and configure the OS/2 system. See Prerequisite
Publications for a list of publications related to installation and
configuration.
═══ 3.2. How This Book Is Organized ═══
This book contains the following chapters:
Chapter 1 Roadmap to This Document, indicates where to go in this document for
various types of information.
Chapter 2 How to Approach Performance Tuning, covers performance concepts and
recommended approaches for performance tuning.
Chapter 3 Base System Considerations, describes performance enhancements made
to OS/2 Standard Edition 1.3, as well as base operating system functions that
directly affect OS/2 Extended Edition and LAN Server (such as RAM usage and
file system characteristics).
Chapter 4 Communications Manager, covers tuning concepts and guidelines for
Communications Manager.
Chapter 5 LAN Server and Requester, covers tuning concepts and guidelines for
LAN Server and Requester.
Chapter 6 Database Manager, covers tuning concepts and guidelines for Database
Manager.
Chapter 7 Application Design and Development Guidelines, describes OS/2
concepts and guidelines for tuning applications using OS/2 Standard Edition
functions and features.
Chapter 8 DOS Compatibility Mode Hints, describes how to increase the memory
available to DOS Compatibility Mode under OS/2.
═══ 3.3. Prerequisite Publications ═══
The following IBM manuals are needed for installation and configuration of the
OS/2 system. Although the parameter and tuning information provided in this
tuning guide repeats and often supersedes that found in the following manuals,
these manuals do have additional information not repeated in this tuning guide.
Hence the following manuals are prerequisites to this tuning guide.
o IBM OS/2 EE Systems Administrator's Guide for Communications, 01F0261 (1.2)
or 01F0302 (1.3)
o IBM OS/2 LAN Server Network Administrator's Guide 01F0286 (1.2) or 33F9428
(1.3)
o IBM OS/2 Extended Edition Database Manager Administrator's Guide, 01F0267
(1.2) or 01F0291 (1.3)
o IBM OS/2 Extended Edition: Integrating OS/2 Workstations and LANs into
Enterprise Networks, GG22-9490 (1.2) or 33F9434 (1.3).
═══ 3.4. Related Publications ═══
Following are publications and Field Television Network (FTN) video tapes that
provide additional information of interest. Order numbers are included.
Among the publications listed are "cookbooks" from the OS/2 International
Technical Support Centers (ITSCs). To order the entire group of cookbooks, the
following order numbers may be used:
o For all OS/2 Extended Edition Cookbooks: GBOF-2195
o For all Personal Systems Cookbooks: GBOF-0426
o OS/2 Standard Edition
- IBM Operating System/2 Programming Tools and Information, Version 1.2,
64F0273
o IBM OS/2 Communications Manager
- IBM OS/2 Extended Edition Version 1.3 System Administrator's Guide for
Communications, 01F0261 (1.2) or 01F0302 (1.3), (includes list of host
related and many other publications)
- IBM OS/2 Extended Edition Version 1.2 Cookbook: Communications Manager
Design and Implementation, GG24-3552
- IBM OS/2 Extended Edition Version 1.2 Cookbook: Communications Manager
SNA Environment, GG24-3553, (includes list of host related publications
with short description)
- IBM OS/2 Extended Edition Version 1.2 Cookbook: Communications Manager
SDLC, X.25, and LAN Environments, GG24-3554
- IBM OS/2 Extended Edition Version 1.2 Cookbook: Communications Manager
3270 and 5250 Emulator Services and APIs, GG24-3555
- IBM OS/2 Extended Edition Version 1.3 APPC Programming Reference, 01F0295
o SNA publications
- SNA Concepts and Products, GC30-3072
- SNA Technical Overview, GC30-3073
- SNA Reference Summary, GA27-3136
- SNA Format and Protocol Reference Manual: Architectural Logic, SC30-3112
- SNA Format and Protocol Reference Manual: Architecture Logic for Type
2.1 Nodes, SC30-3422
- SNA: Sessions between Logical Units, GC20-1868
- SNA Format and Protocol Reference Manual: Architecture Logic for LU Type
6.2, SC30-3422
- IBM Synchronous Data Link Control Concepts, GA27-3093
o IBM OS/2 LAN Server
- IBM OS/2 LAN Server Version 1.3 Network Administrator's Guide, 01F0286
(1.2) or 33F9428 (1.3)
- IBM OS/2 LAN Server Version 1.3 Applications Programmer's Reference,
33F9432
- IBM OS/2 Server Installation and Configuration Guidelines, 01F0288 (1.2)
and 33F9433 (1.3)
- IBM OS/2 LAN Server 1.2 Planning, Installation and Customization (ITSC
Cookbook), GG24-3506
- IBM Local Area Network Technical Reference, SC30-3383
- IBM Token Ring Network Architecture Reference, SC30-3374
o IBM OS/2 Database Manager
- IBM OS/2 Extended Edition Database Manager Administrator's Guide, 01F0267
(1.2) or 01F0291 (1.3)
- IBM OS/2 Extended Edition Version 1.3 Database Manager Programming Guide
and Reference, 01F0292
o Other IBM OS/2 program information sources
- IBM Operating System/2 Version 1.3 Information and Planning Guide,
G360-2650
- IBM Operating System/2 Extended Edition Version 1.3 Commands Reference,
01F0290
- Catalog of Configuration and Performance Related Documentation for OS/2
EE and LAN Server, GG22-9486, (includes documentation through Version 1.2
of the OS/2 system)
- Analysis of Some Queuing Models in Real-Time Systems, GF20-0007
o Field Television Network (FTN) video tapes:
At the date of this publication, tapes can be ordered from the following
source:
VCA Teletronics
(214) 869-3544
To check for the most current list of FTN tapes available, contact IBM at
the following address:
Technical Coordinator Video Club
IBM 40-A2-04
1 East Kirkwood Blvd.
Roanoke, TX 76299-0015
IBMers may contact In-Territory Education in Atlanta for current tape
information.
- "Configuring the OS/2 Extended Edition Database Manager," May 11, 1989,
Technical Coordinator Satellite Education Exchange (TCSEE) Module 89G1,
(see also: FTN from November 29, 1990)
- "LAN Performance and Tuning," May 18, 1989, TCSEE Module 89H1
- "OS/2 LAN Server Administration," May 26, 1989, TCSEE Module 89J1
- "Selecting Programming Tools for the IBM OS/2 Presentation Manager,"
January 25, 1990, TCSEE Module 90B1
- "Developing Local Area Network Applications," March 7, 1990, TCSEE Module
90E1
- "OS/2 Extended Edition APPC Concepts - Configuration," April 12, 1990,
TCSEE Module 90H1
- "OS/2 LAN Server and DOS Requester," April 26, 1990, TCSEE Module 90I1
- "EASEL OS/2 EE Cooperative Processing," May 11, 1990, TCSEE Module 90K1
- "3174 Peer Communication," May 24, 1990, TCSEE Module 90L1
- "OS/2 Communications Manager: Installation and Configuration," June 14,
1990, TCSEE Module 90M1
- "IBM Micro Channel Architecture Update - Part 1," June 28, 1990, TCSEE
Module 90N1
- "IBM Micro Channel Architecture Update - Part 2," June 29, 1990, TCSEE
Module 90N2
- "LAN Connectivity Solutions," July 26, 1990, TCSEE 90P1
- "OS/2 Extended Edition 1.2 5250 Workstation Feature," August 9, 1990,
TCSEE Module 90Q1
- "OS/2 LAN Server and DOS LAN Requester Tips and Techniques," August 23,
1990, TCSEE Module 90R1
- "OS/2 Extended Edition SNA Gateway," October 11, 1990, TCSEE Module 90S1
- "LAN Management," October 25, 1990, TCSEE Module 90T1
- "OS/2 Extended Edition Database Manager: Performance Considerations,"
November 29, 1990, TCSEE Module 90W1, (see also: FTN from May 11, 1989)
- "OS/2 Extended Edition and DOS Remote Data Services and LAN Requesters,"
December 13, 1990, TCSEE Module 90Y1
- "OS/2 Print Spooler: Technical Update," December 14, 1990, TCSEE Module
90Z1
- "LAN Technical Update: Bridges," January 24, 1991, TCSEE Module 91A1
- "Distributed Console Facility," January 31, 1991, TCSEE Module 91G1
- "DOS Memory Management," March 1, 1991, TCSEE Module 91D1
- "OS/2 System Tuning and Application Development Tips," March 28, 1991,
TCSEE Module 91F1
o Non-IBM information sources
- OS/2 Notebook: The Best of the IBM Personal Systems Developer, Microsoft
Press, November 1990.
═══ 4. Roadmap to This Document ═══
Although it is assumed that the reader of this guide is quite knowledgeable in
computers and the OS/2 system, an attempt is made to provide as much background
information as possible without duplicating existing OS/2 Administrator's
Guides. Headings were chosen so that concept and background sections stand
apart from the actual parameter description and tuning guidelines sections.
Make use of the Table of Contents to observe the organization of each chapter
and to determine which sections you need to reference.
In general, chapters are organized with concepts presented up front, followed
by specific parameter and tuning suggestions. The following comments cover
each of the chapters, in the order they are found in this guide.
o Since not all readers have performance tuning experience, the document
begins with a chapter that includes concepts on how to approach tuning,
benchmarking suggestions, and general performance concepts and terms. (See
Chapter 2, How to Approach Performance Tuning.)
o The primary focus of this guide is IBM OS/2 Extended Edition and IBM OS/2
LAN Server and Requester, Versions 1.2 and 1.3. However, since these
components are very much affected by the base operating system, some OS/2
base operating system information and guidelines are provided in Chapter 3,
Base System Considerations. Included are:
- Summary of performance enhancements in the OS/2 1.3 base operating system
- Memory usage considerations
- HPFS and FAT file system considerations
- DASD considerations.
o The Extended Edition and LAN Server chapters are arranged starting with the
lowest level component, hence the ordering:
- Communications Manager, see Chapter 4, Communications Manager
- LAN Server, see Chapter 5, LAN Server and Requester
- Database Manager, see Chapter 6, Database Manager. As much as possible,
the most important performance suggestions are listed first for each of
the components.
The Database Manager chapter is a bit different from the others in that
more than parameters are discussed relative to tuning. Since so much of
the performance of Database Manager is dependent on database and
application design, these topics are also included.
o Some base operating system application design considerations are included in
Chapter 7, Application Design and Development Guidelines. Again, this
chapter is arranged with concepts first, followed by guidelines.
o Finally, tips on how to provide more memory for the DOS Compatibility Mode
are provided in Chapter 8, DOS Compatibility Mode Hints.
═══ 5. Chapter 2 How to Approach Performance Tuning ═══
2.1 Introduction
2.2 Methodical Approach
2.3 Benchmarking
2.4 Collecting Performance Data
2.5 Performance Concepts
═══ 5.1. 2.1 Introduction ═══
On any computer system, there are limited resources that need to be shared by
all the functions executing on that machine. Examples of such resources are
memory (RAM), DASD, CPU, and the communications lines.
The art of performance tuning involves maximizing throughput and achieving
acceptable response times within the constraints of the available resources.
This means figuring out how to modify parameters, applications and workloads
such that the limited resources are used most effectively. There will always
be tradeoffs to consider, such as:
o Cost versus Performance
There may be situations where the only way to improve performance is to
purchase more/faster hardware.
o Conflicting Performance Requirements
When more than one component of OS/2 is used simultaneously, you may find
that they have conflicting performance requirements. In such cases, you will
need to iterate toward a compromise among the competing requirements until an
acceptable level of performance is achieved for all affected functions.
Similarly there may be a tradeoff between speed and functionality. For
example, in a LAN Requester, increasing the number of transmit and receive
buffers may help response time, but may take RAM away from the DOS Box such
that certain DOS applications no longer run.
The following sections describe a methodical approach to the task of
performance tuning (2.2 Methodical Approach) as well as some background
performance concepts (2.5 Performance Concepts).
═══ 5.2. 2.2 Methodical Approach ═══
There are many variables to be considered in determining how to improve the
response time and throughput of a system. Because of this, performance tuning
can be a slow and tedious task, and the answer to many performance-related
questions is often "it depends."
Thus, to effectively tune a system, a methodical approach is needed. When using
unorganized methods, changes to the system may be made with no particular rhyme
or reason. In this case, the cross-effects of multiple, simultaneous changes
may make it difficult to determine the actual effect each change has on the
system.
Even though tuning can be a slow process, much of it is common sense, involving
the major steps illustrated in Figure 2-1. "Performance Tuning Process". At any
point in these steps, iteration within a step or back to a previous step may be
needed.
The remainder of this section expands on these steps (starting at 2.2.1 Define
the Problem). However, even before going through these steps, you may want to
read through the following checklist, since some performance problems may be
nothing more than the overhead of trace statements, loose hardware, etc.
"Before You Begin" Checklist
o Cable and card connections
Such things as noisy connections (SDLC, Async, X.25), wires that are too
close, or loose cables can cause re-transmissions, which will slow
performance.
o OS/2 System Trace
If OS/2 system trace is turned on (via the TRACE ON statement), the impact
on performance is significant. If there is no need for it to be on, check
that there are no TRACE ON statements in CONFIG.SYS or STARTUP.CMD.
Additionally, if the trace facility is not needed at all, it is recommended
that the following trace statements be removed from CONFIG.SYS, since they
cause the following amounts of RAM to be allocated:
- TRACE ON/OFF - causes 4K to be allocated.
- TRACEBUF x - causes a buffer of the specified size (x) to be
automatically allocated at IPL.
o Communications Manager Trace
Communications Manager provides a tracing facility that can be auto-enabled.
Be sure these traces are off unless you explicitly want them. (To access
them, choose Advanced from the Communications Manager action bar, select
Problem Determination Aids, then select Trace Services.)
o Pointing to the correct .CFG file
Ensure that the "correct" CONFIG.SYS is pointing to the desired
Communications Manager .CFG (configuration) file. If you happen to be
sharing a test machine with others, there may be several versions of
CONFIG.SYS on the system. Additionally, anytime a DEVICE statement for the
link is changed or updated, it is easy to forget to change the CFG= portion
of the DEVICE statement. Device drivers this applies to are:
- ETHERDD.SYS
- PCNETDD.SYS
- NETBDD.SYS
- TRNETDD.SYS
- T1P1xxDD.SYS
o Enhanced 386 Memory Adapter
If you are using this adapter, make sure the system is using the memory on
it. To do this, follow the installation instructions very carefully. When
IPLing your system, watch the RAM count to ensure that the total memory you
expect is being counted. If not, repeat the installation process. The
performance impact, if this memory isn't being used, will be the same as
that of a RAM-constrained environment.
o Code Changes
There are times when seemingly "innocent" changes have an adverse effect on
performance. If you upgrade your system to a new level of the OS/2 program,
you must remember to make all the changes to CONFIG.SYS that you had on your
previous level. Minor parameter or CONFIG.SYS changes that may seem
unrelated to the testing at hand may have side-effects that impact
performance. Even minor application changes (especially if more than one
person is providing updates to your application) may have unexpected
results.
2.2.1 Define the Problem
2.2.2 Hypothesize Solutions
2.2.3 Design Tests
2.2.4 Set Values and Run Tests
2.2.5 Analyze the Results
═══ 5.2.1. 2.2.1 Define the Problem ═══
1. Put Initial Words to the Problem
Sit down and verbalize what you currently perceive to be your problem.
Rather than simply saying "the system is slow," try to quantify how slow.
For example, is it 10% slower than you desire, or 10 times slower? Unless
you clearly state your problem, you'll have no idea what direction to take
in improving your performance.
Determine what data you have (as well as its reliability) to indicate
there is a performance problem. As you obtain additional information,
your problem statement will probably change or become more focused.
Examples of problems could be:
o When the user hotkeys from their spreadsheet program, it takes 5 seconds
to bring up the OS/2 Task List.
o When 10 more users are added to the LAN, and all users on the LAN are
running the "end of day report," 5 or more systems cannot complete the
report due to timeouts at the server.
o Database report XYZ takes 15 minutes to run and the desired time is 3
minutes.
2. Know What Affects Performance
Find out all you can about what affects performance in your environment
and what doesn't. For example, for a Gateway connected to a Host, the
type of communication link (e.g. SDLC versus Token Ring) affects
performance at the Gateway whereas the speed of your disk or type of
display will not.
If you have staff with experience and knowledge in the area of
multitasking operating systems (such as an MVS expert), make use of their
experience! When isolating OS/2 performance bottlenecks or making OS/2
application and system design decisions, this person may be more
appropriate to call in than a DOS expert, since they should have
conceptual knowledge relating to the multitasking environment of the OS/2
system (such as job or process priorities).
3. Define a goal ... be realistic
Think through what improvement you'd like to see and how you'll know when
you see it. This will probably be an iterative process. Setting your
desired goal may be very easy. Whether or not this goal is realistic will
become apparent over time, as you learn more about how your system and
application work and where the bottlenecks are.
═══ 5.2.2. 2.2.2 Hypothesize Solutions ═══
Narrowing in on the cause of a performance problem is definitely an iterative
process, involving identification of possible bottlenecks and verification of
whether they are indeed bottlenecks. It is important to make sure the correct
bottleneck has been identified before pouring in resources to alleviate it. If
not, unnecessary expense may be incurred by adding resources that do nothing to
improve performance. For example:
o RAM could be added to a database server in order to increase the size of the
buffer pool, only to find out that the problem is inappropriate indices, not
a small buffer pool.
o A faster CPU could be purchased, hoping to improve server throughput.
Although throughput may indeed improve slightly with a faster CPU, it might
later be discovered that the real bottleneck is insufficient buffer space on
the adapter card or a slow fixed disk (in the case of an I/O intensive
application).
Hence, it is important to think through several possibilities and test them
out. Once a bottleneck is identified, and resources have been applied to
relieve it, another bottleneck may suddenly surface. This may be because
several things in your system are running near capacity. Note however that
there will always be something in the system that is the gating performance
factor. Depending on the workload, this factor may or may not be an actual
bottleneck.
1. Identify Possible Bottlenecks
Bottlenecks occur at points in the system where requests are arriving
faster than they can be handled (as with I/O operations), or where
resources are insufficient to hold efficient amounts of data (such as
buffers). For example, if Token Ring requests through a Gateway machine
outpace the ability of the Gateway to send the requests on to the host, a
bottleneck will occur at the Gateway machine.
To start narrowing in on a bottleneck, first think through all the things
the poorly performing function does (i.e. what's happening behind the
scenes?). For example, suppose the screen updates too slowly when some
function in an application is selected. Under the covers, this function
may do the following things:
o Component "a" of the application is being executed and it does "b" and
"c".
o During that time, there is a lot of fixed disk activity.
o The Database Manager is also being invoked to retrieve a specific amount
of data.
Additionally, think of other functions that do approximately the same
thing as the slow function. When this alternate function is tried,
observe whether the same slow response results. If not, determine the
difference between it and the application. For example:
o An application performs a directory listing and it is slow. Is the
problem the application, the PM overhead, or the disk speed? To narrow
things down, try executing the DIR command from the OS /2 command line,
and then do a directory listing from the File Manager. See where the
differences lie.
o Perhaps an application is accessing remote data and the response is too
slow. Try the same function at the server workstation where the data
resides and observe if the performance has improved. This will indicate
whether the problem is primarily communications, or the way in which data
is being accessed at the remote location.
For a LAN application, DOS Copy some data across the LAN and compare its
performance to that of the application. DOS Copy uses large record
transfer, which is more efficient than several small record transfers.
If the application was written for local access (small records), it may
be very inefficient on the LAN.
2. Brainstorm Possible Solutions
To increase the capacity at a bottleneck, think of ways to offload the
work arriving there or improve its capacity (through hardware or
software). From there, come up with hypotheses of things that may relieve
bottlenecks and improve performance. Possible examples follow (though in
no particular order). Note that changing the values of system parameters
may only be one avenue to pursue.
o Application redesign (see Chapter 7, Application Design and Development
Guidelines)
o Faster hardware - e.g. CPU, DASD, communications medium (i.e. adapter or
connection)
o Parameter tuning
o Additional RAM or DASD
o Changing the topology - i.e. how many workstations are going through (or
to) what servers, controllers, etc.
o Using or increasing a memory cache
o In a swapping system, perhaps reducing the size of buffers or numbers of
active applications.
═══ 5.2.3. 2.2.3 Design Tests ═══
Given the possible solutions, define tests to try them out. Make sure that
these tests are repeatable. Approach the testing effort as a "hypothesis <->
proof" cycle.
o Prioritize your ideas from "most likely to have an impact with minimal
effort" to "least likely to have an impact with lots of effort".
o Start with a single hypothesis and decide what you are going to do to prove
whether it is true or false. For example, if your hypothesis is to "add
more buffer space," then only change the parameter needed to accomplish
this.
o If the hypothesis is proven false, move to the next hypothesis and start the
cycle again.
See 2.3 Benchmarking for more specifics.
═══ 5.2.4. 2.2.4 Set Values and Run Tests ═══
Setting new variable values and running tests is probably the most iterative
part of the overall process. In this stage, response time should be measured
for specific, repeatable events, both before and after changing a variable
value.
1. Change only ONE thing at a time
Changing more than one variable will cloud results, since it will be
difficult to determine which variable has had what effect on system
performance. The general rule may perhaps be better stated as "change the
minimum number of related things." In some situations, changing "one
thing at a time" may mean changing multiple parameters, since changes to
the parameter of interest may require changes to related parameters. (For
example, in Database Manager, changing the BUFFPAGE parameter may require
an increase to the SQLENSEG parameter.)
2. Start in the Same State
Start each iteration of your test with your system in the same state. For
example, if you are doing database benchmarking, make sure that the values
in the database are reset to the same values each time the test is run.
3. Check for Control
You can never control your testing too much. This mainly involves
carefully documenting each step in your tests - even to the point of
being "scientific." This will ensure that you're not doing things
differently from run to run, and that you're not introducing new variables
that could cloud the results. Writing things down is also essential for
going back and recalling exactly what you have and have not tested since
it is easy to forget!
4. Work on ONE problem at a time
It is easy to lose control and focus when working with a large number of
parameters and components. Additionally, it is likely that anomalies or
problems will come up during testing, such as functional problems. While
it will be necessary to resolve problems that obstruct your goal of
identifying bottlenecks and improving performance, problems that are not
directly related to this may need to be put aside for a while, or given to
another group to work on in parallel. The point here is to maintain your
focus.
═══ 5.2.5. 2.2.5 Analyze the Results ═══
Analyzing the results of your tests involves being able to interpret and
explain the data you obtain in order to understand what's going on in the
system. As was mentioned in section 2.2.2 Hypothesize Solutions, it is
important to question and think through test results without jumping to
conclusions. For example, suppose tuning has resulted in a decrease in the
response time of a LAN Server. However, the apparent response time at the
requester seems unchanged. This may indicate the presence of another problem,
perhaps somewhere between the LAN Server and the requester.
As you benchmark your system, look for response time and throughput patterns.
If your application has definable 'actions and responses' that can be measured
(meaning that it is a closed system, such as with an interactive application),
the following rule must be true. (See Figure 2-2. "Illustration of throughput
in a closed system." in the following text for the theory behind this rule.)
If this rule does not hold true, the system is unstable due to work or data
being created or destroyed and not being accounted for. (See 2.5 Performance
Concepts for definitions of throughput and response time.)
(throughput) * (avg. response time)
----------------------------------- <= 1
(number of users)
Graphing throughput and response times also provides helpful insight. Collect
throughput and response times as some "variable" in the system is changed (such
as number of users, buffer size, amount of RAM, etc.) Use this data to plot
the following graphs:
o Throughput versus Variable
o Response Time versus Variable
o Throughput versus Response Time
If the variable values have been tested over a wide enough range, there will
be a point in these curves where there will be a distinct break or "knee."
This will be the point where adding more resources (or tasks) will cost more
than it's worth. See sections 2.3 Benchmarking and 2.5 Performance Concepts
for examples of such graphs.
Theory Behind the Closed-System Rule
Following is a discussion of the background to the closed-system rule. Skip
this section if you are not interested in this level of detail. Figure 2-2.
"Illustration of throughput in a closed system." illustrates a closed loop
system (also referred to as an interactive system). In it, there are 'n'
workstations sending requests to a server.
Suppose someone is standing at the location in the figure marked by '**',
counting the number of times requests (or transactions) from a single
workstation pass by. This number could be expressed in the following way:
Number of transactions for 1 WS = T / (Tt + Ts)
where:
Total time for experiment = T
Think time at the workstation = Tt
Response time for one transaction = Ts
Time for 1 transaction for 1 WS = Tt + Ts
For 'n' workstations, the number of transactions in time T would be:
n * (T / (Tt+Ts)) -or- n*T / (Tt+Ts)
Note that this is the total number of transactions completed, which can also
be written as L*T, where L is the throughput rate. Hence:
L*T = n*T / (Tt+Ts)
L = n / (Tt+Ts)
(L*Tt + L*Ts) / n = 1
L*Ts / n = 1 - (L*Tt / n)
By substituting words for variables, the equation can be rewritten as:
(Throughput * Response time) / Number of workstations = 1 - (a positive number)
- or -
(Throughput * Response time) / Number of workstations <= 1
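This rule can also be checked mechanically against measured values. The
following is a minimal C sketch of such a check; the numeric values are
illustrative placeholders only, not measurements from any real system.

#include <stdio.h>

/* Check the closed-system rule:
   (throughput * average response time) / number of users  <=  1
   The values below are illustrative only.                          */
int main(void)
{
    double throughput = 8.0;   /* transactions per second (measured)     */
    double resp_time  = 2.2;   /* average response time (Ts), in seconds */
    int    users      = 20;    /* number of active workstations (n)      */

    double ratio = (throughput * resp_time) / users;

    printf("Rule ratio = %.3f\n", ratio);
    if (ratio > 1.0)
        printf("Ratio exceeds 1: work or data is not being accounted for;"
               " recheck the measurements.\n");
    else
        printf("Measurements are consistent with a closed system.\n");
    return 0;
}

If the computed ratio is greater than 1, the measurements themselves are
suspect, since a closed (interactive) system cannot sustain that combination
of throughput, response time, and user count.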
═══ 5.3. 2.3 Benchmarking ═══
Benchmarking involves running tests that use a representative set of programs
and data designed to evaluate the performance of computer hardware and software
in a given configuration. When trying to identify bottlenecks in a system,
benchmarking is essential for measuring progress.
Benchmarking can begin by running in a normal environment. As a performance
problem is narrowed down, specialized test cases can be developed to limit the
scope of what's being tested and observed. Benchmarking does not have to be
complex. That is, the specialized test cases need not emulate an entire
application in order to obtain valuable information. Think in terms of simple
measurements and increase the complexity only when warranted.
Characteristics of good benchmarks (or measurement) include:
o Each test is repeatable.
o Each iteration of a test is started in the same system state.
o There are no functions or applications active in the system outside the ones
being measured (unless the scenario includes some amount of other activity
going on in the system). Do not even have them started and sitting idle,
since this will use up RAM resources and increase the likelihood of
swapping.
Following are suggestions for doing simple benchmarking:
o Select individual functions out of your application that are indicative of
the problem being addressed. Generate a test case that emulates that
function as simply as possible. You do not necessarily need to run the
whole application.
o Use synthesized data. In many cases meaningful values aren't necessary, so
one need not invest the time in creating them. Some examples might be:
- In determining how fast data can be retrieved across a communications
line, files or records containing miscellaneous characters will probably
be sufficient.
- In testing screen response to a user-initiated request, the data
presented on the screen may not have to be meaningful or original.
- In a database application, where you're searching for records based on
matching values, the values may not need to be "real." It may be
sufficient to write a program to generate useful patterns of data rather
than keying in an entire database, especially if a large database is
needed. As you learn more about the operational characteristics and
performance of your system, you may need to narrow in on a few complex
scenarios where real data is necessary.
o When measuring distributed system performance, first measure a one requester
run so that you have a solid base from which to start. This will teach you
what behavior to expect from your test or benchmark. It also provides a
good check for your multiple requester runs - if multiple requester
performance doesn't follow similar behavior, something else may be wrong.
o If you are testing in a distributed environment, it may be possible to
simulate many users from a single requester by starting the same program in
multiple processes or sessions on that one requester. Note however that
this will not always work. For example, if the multiple processes cause the
requester to become RAM constrained or CPU bound, or introduces a bottleneck
at the adapter, the arrival rate of requests to the server will not
accurately model the "real world," and the test results will be unreliable.
Hence, this technique must first be verified. Two ways of verifying the
technique are:
- Compare the sensitivity curves for the multiprocess run against the
single process/workstation run. If the characteristics of the curves are
totally different, then this method is not working. (See the paragraphs
below for a definition of sensitivity curve.)
- Run with one process per workstation for 'n' workstations. Follow this
test with a single workstation test on which 'n' processes are run. If
the results are different, this method will not work.
o Start with much more RAM than you need. To find the point where performance
degrades due to swapping, "squeeze" the available RAM down by defining a RAM
disk (VDISK) and gradually increasing its size. (Since VDISK is
non-swappable, the effective available RAM will be the same as physically
removing RAM from the system.)
Sensitivity studies also provide useful information. In a sensitivity study,
the value of a particular parameter or element in the system is changed in
small increments, and the tests are rerun for each new incremental value.
Throughput and response times from each test are then plotted, making it
relatively easy to see where the break in performance is.
Figure 2-3. "Example of a sensitivity study" shows an example of throughput
results for a sensitivity study on the number of OS/2 Database requesters that
might be supported by an OS/2 Database server under a specific workload. (Note
that these specific results cannot be generalized to apply to all remote
database scenarios.) In this figure, each line in the graph represents the
effect of increasing the number of concurrent users of a remote database. The
difference between each of the three lines is the amount of think time between
the database requests sent by each requester. (Think time is the amount of
time between each request. It determines the arrival rate of events at the
servicing machine or application.) The results of this particular test show
that throughput generally increases as the number of users increases, until
swapping occurs, and that no more than 30 requesters can be supported with the
given system configuration (e.g. RAM, processor speed, etc.). To test
whether changing the hardware would make a difference, memory could be added
and the same set of tests rerun. This would show whether or not additional
RAM would support more requesters.
Following are some benchmarking tips for the LAN Server and Communications
environments:
o LAN
Some applications used on a LAN were originally written for local access,
and hence use a lot of small record transfers instead of planned large
record transfers. The following simple test will determine if this is true
for a particular application. Retrieve some data with the application and
then DOS Copy that same data across the LAN. DOS Copy uses large record
transfer. If the application runs noticeably slower than DOS Copy, the
application is probably transferring small records.
Some scenarios involve going across a LAN and then up to a host. In such
situations it may be desirable to understand how much time is spent locally
on the LAN versus the entire "round trip" time. To do this, measure the
entire time from the local workstation's perspective. At the same time on
the host, measure the time spent there. Subtract out the host time to
obtain the time on the local LAN.
o Communications
When evaluating performance in a host-connect environment, the following
aspects of the environment should be understood:
- A dedicated host (one where no application other than the one under test is
running) is best. If not dedicated, try to ensure that the environment
is repeatable. If other applications are vying for the same host
resources, they must somehow be factored into the test results. However,
this is difficult since the particular mix and resource usage of other
applications may be beyond your control.
- Is the link to the host dedicated?
- Don't test over busy bridges.
- Understand the controller being used to reach the host. What is the
maximum RU size supported by the controller? How are the workstations
connected to the controller, and the controller to the host? What is the
software running at the host?
- How is the data being sent? For example, if data from the host is being
buffered, where is the buffering occurring - at the controller or at the
host? Although this buffering may be more CPU efficient at the
controller or host, it may degrade performance at the receiving PC due to
the lag time between sending blocks of data.
o Database Manager
Following are a couple of suggestions for benchmarking database
applications:
- Test on a full-sized database
- To time an SQL statement, insert timestamp code before and after the
statement, as sketched following this list.
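A minimal C sketch of this timestamping technique follows. The SQL statement
itself is represented only by a placeholder comment; in an Extended Edition
program it would be an embedded SQL statement handled by the Database Manager
precompiler. The standard time() function used here has one-second
resolution, which is adequate only for statements that run for several
seconds; finer timing requires a finer-grained clock.

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t start, stop;

    time(&start);                  /* timestamp before the statement */

    /* EXEC SQL ... the statement being measured goes here ... */

    time(&stop);                   /* timestamp after the statement  */

    printf("Elapsed time: %.0f second(s)\n", difftime(stop, start));
    return 0;
}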
═══ 5.4. 2.4 Collecting Performance Data ═══
Collecting data that will help you analyze your performance may be as easy as
clocking screen updates with a stopwatch, or as complex as using sophisticated
line trace tools. Below are some thoughts and suggestions about collecting
performance information.
o If there are certain sections of your application that you wish to time,
source language statements can be embedded within your program to write out
the current time before and after the desired sections. For example, this
method could be used to time individual Database SQL statements. The
shortcoming of this method is that it requires modification and
recompilation of your application.
o Timings can also be collected using the OS/2 TIME command and a few .CMD
files. Although these timings may not be as detailed and accurate as those
produced by trace facilities, they can still provide useful information.
- Timing foreground functions
This method uses the OS/2 prompt to display start and stop times for a
foreground function:
o Change the OS/2 prompt to include the time:
prompt $t $p$g
o Create a .CMD containing the OS/2 function (or functions) to be timed.
Place a dummy command at the end of the file, such as REM or CD. This
will cause the prompt to display after the desired function is
completed.
Following is the output of using this method to time the CHKDSK
function. The .CMD file is called test.cmd.
8:29.70 C:
8:30.06 C:
The type of file system for the disk is FAT.
The volume label is M123123FAT.
The Volume Serial Number is A527-0814
60663808 bytes total disk space.
186368 bytes in 78 directories.
55271424 bytes in 2058 user files.
4096 bytes in bad sectors.
10240 bytes in extended attributes.
5191680 bytes available on disk.
2048 bytes in each allocation unit.
29621 total allocation units.
2535 available allocation units on disk.
8:37.78 C:
When the CHKDSK command is executed, it causes the prompt to be
displayed. Likewise, when the REM at the end of test.cmd is
encountered, it again causes the prompt to be displayed. Subtracting
the start time (chkdsk prompt) from the stop time (rem prompt) gives an
elapsed time of 7.72 seconds.
- Timing background functions
Take for example a program called 'bench.exe' that you would like to
time. The time to run this program could be determined using the
following sequence of .CMD files. At the OS/2 command line, kick off:
detach_b.cmd, which contains:  detach time_b
Note that the reason for detaching 'time_b' is that the OS/2 time
function has a prompt for the "new time." When run in the background,
this prompt is ignored.
The command file 'time_b.cmd' contains:
time >b_start.dat
bench
time >b_stop.dat
When bench.exe completes, the start and stop times will be recorded in
the b_start.dat and b_stop.dat files.
To run multiple sessions of bench.exe simultaneously, multiple .cmd
files could be detached, each similar to time_b.cmd. Each 'time_bx' file
would specify uniquely named output files for the start and stop times.
In this case, detach_b.cmd would contain:
detach time_b1
detach time_b2
detach time_b3
detach time_b4
.
.
.
o CPU utilization can be determined by writing an application that runs at
idle-time priority and records how often it gets to run. Whenever this
application is NOT running, another application is using the CPU. (A minimal
sketch of this technique appears after this list.)
o In order to determine where time is being spent in a distributed
application, several events that are meaningful to the application
may need to be logged. Collection of performance data should not be in the
critical performance path, but rather at a workstation where the total
system throughput can be measured. For example, if you are measuring
requester response and throughput of requests being made to a server, don't
put your "logging code" on the server since you would be stealing CPU and
I/O cycles away from the real work you're trying to measure. In addition
you would only be getting part of the picture, since your timings would not
include the entire round trip from requester to server and back.
o Line traces provide valuable information for understanding the performance
of communications and LAN applications. Such traces are generally done
through the use of separate hardware and software that are specifically
designed to display information flowing across a communications line, or to
analyze particular communications protocols. They can show you what is
flowing across the line, such as:
- size and number of packets flowing
- errors
- retransmissions.
o The OS/2 system supports some trace facilities. Some of these are usable by
an end user, while others are not. Note that the OS/2 clock has a 32
millisecond resolution, so some events may appear to occur at the same time
when in reality they are spread over a 32 millisecond period. Also note
that the overhead for most trace facilities is high.
- Communications Manager has an extensive trace facility. To access,
select "Advanced" from the action bar, followed by "Problem Determination
Aids" and "Trace Services." The events from this trace are timestamped,
again, with a 32 millisecond clock.
- There is a NetBIOS Trace called NCB.TRACE which can be started via the
OS/2 Trace Facility. (On DOS systems this trace can be started by a
program that issues an NCB.TRACE command.) This trace is documented in
the IBM LAN Technical Reference, SC30-3383-03.
- The OS/2 System Trace Facility is also used to initiate traces that
produce defect support information. In the event of a potential defect,
IBM Support will describe how to initiate these traces.
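Following is a minimal C sketch of the idle-priority CPU monitor mentioned
above. It assumes the process has already been placed in OS/2's idle-time
priority class (that call is omitted here), so the loop runs only when no
other work wants the processor. Comparing the iteration count per interval
with the count obtained on an otherwise idle system gives a rough estimate of
CPU utilization; the ten-second interval is an arbitrary choice.

#include <stdio.h>
#include <time.h>

int main(void)
{
    const long interval = 10;      /* report every 10 seconds (arbitrary) */
    unsigned long count = 0;       /* iterations completed this interval  */
    time_t start = time(NULL);

    for (;;) {                     /* runs until the session is ended     */
        count++;
        if (time(NULL) - start >= interval) {
            printf("Idle-loop iterations in %ld seconds: %lu\n",
                   interval, count);
            count = 0;
            start = time(NULL);
        }
    }
}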
═══ 5.5. 2.5 Performance Concepts ═══
System performance can be described by throughput and response times. The goal
is to maximize throughput while still achieving acceptable response times.
o Throughput
The ability of a system to complete some number of end user or application
requests (transactions) in a unit of time. Throughput should be measured and
evaluated from the perspective of the originator of the request (i.e. at the
workstation where the end user or application is located). Throughput is
generally expressed in terms of number of KBytes per second (KB/sec) or
transactions per second.
o Response time
The time it takes to complete a single request (from the end user's
perspective). Examples of a response time measurement would be number of
microseconds to complete a record transfer or a screen update.
The following formula is used in computing system throughput and is generally
expressed in terms of number of transactions per second.
throughput_rate = total_transactions_during_run / (end_time - begin_time)
This measure is useful in evaluating the workload capacity of a system.
Plotting throughput and response time for different numbers of active end
users (e.g. 20, 40, 60) and at different inter-arrival rates (think time
between requests) will demonstrate the system's throughput versus response
time. This is useful in choosing system hardware, planning future growth and
making application design choices.
A plot of throughput and response time over a wide range of workloads (or
number of users) will typically have three phases, as illustrated in Figure
2-4. "Typical throughput curve" and Figure 2-5. "Typical response time curve".
o Under-loading
The system capacity exceeds the amount of work. As work is added (more
workstations or transaction requests) the throughput rate will continue to
climb. In this phase, system response time tends to remain flat.
o Optimum-loading
The throughput rate will remain nominally flat as more work is added. The
optimum loading portion of the throughput curve is typically very wide. In
this phase, system response time tends to increase linearly. You will want
to fall within this optimum-loading range, before response time degrades.
o Over-loading
In this phase the system has more work to do than can be handled by
available resources. Typically one resource is fully consumed first and
limits or gates system throughput even though performance at other points in
the system is just fine. (This gating resource is called a bottleneck.) In
this phase, system response time increases exponentially. Throughput may
stay flat or may fall sharply. An example where throughput falls would be
if too many applications are running simultaneously and the sum of their
working sets exceeds the available RAM. (See 3.2.2 Memory Usage Concepts
and Terminology for a definition of working set.) At this point the system
will thrash - which means things will continually be swapped in and out of
memory.
There are cases when the response time of an over-loaded system may
exhibit behavior other than increasing exponentially. For example, in a
communications environment, frame rejects or queue overruns may be the only
symptom. Instead of an exponential increase in response time, you may see
the throughput rate rise and fall like a wave, or at some point under severe
conditions, the system may abruptly shut down.
═══ 6. Chapter 3 Base System Considerations ═══
3.1 Standard Edition Version 1.3 - Performance Enhancements
3.2 Memory Usage and Tuning
3.3 File Systems, Caches, and Buffers
3.4 DASD Considerations
3.5 RAM Disks (VDISK)
3.6 Memory Performance
═══ 6.1. 3.1 Standard Edition Version 1.3 - Performance Enhancements ═══
3.1.1 Introduction
3.1.2 Memory Management and Swapping
3.1.3 File Manager
3.1.4 File System
3.1.5 Enhanced Loader
3.1.6 System Installation
3.1.7 Presentation Manager
3.1.8 Printer Support and Print Spooling
═══ 6.1.1. 3.1.1 Introduction ═══
This section explains how the gains in performance in IBM OS/2 Standard Edition
Version 1.3 were achieved. Some of the changes that were made will be most
noticeable only when the system is running in a memory-constrained environment.
However, improvements in the File Manager and program loading should be seen
regardless of the amount of memory in the system. Also, a design goal of the
OS/2 SE 1.3 program was to ensure that enhancements added to improve memory
swapping would not degrade performance when no swapping of segments was
required.
═══ 6.1.2. 3.1.2 Memory Management and Swapping ═══
IBM improved the algorithm that determines which segments get swapped out of
the system in a memory-constrained environment, and when and how they are
swapped. The major changes that were made are:
o New Swap Algorithm
The swap algorithm was changed to eliminate discardable segments from the
calculation to determine how much space was required in the Swap File on
disk. It was also updated to better account for the free and movable areas
in memory. This change should translate into a Swap File that is between 40
and 60 percent smaller than that required for an OS/2 SE 1.2 system.
o Memory Compaction
In OS/2 SE 1.2, memory compaction was always performed from the top of
memory down to the fixed portion of memory. In OS/2 SE 1.3, a pointer is
kept to where memory compaction was last performed and the free block that
was generated. The next time memory has to be compacted, it is done from
this pointer instead of the top of memory. This compacts the area of memory
more likely to contain fragmentation.
o Segment Swapping
An analysis of OS/2 systems showed that most segments are between 64 bytes
and 2KB in size. This can cause excessive disk I/O to be performed when a
segment needs to be allocated. Also, for those segments that are not a
multiple of 512 bytes, two I/Os are required for each segment.
In OS/2 SE 1.3, a 14KB Swap Buffer was created to overcome this phenomenon.
A list of the 40 least recently used segments is maintained. Segments
contained in this list are moved into the Swap Buffer until it is full, and
then the Swap Buffer is written to disk. This drastically reduces the number
of disk I/Os that are being performed and provides a kind of pre-swap
activity since 14KB worth of segments will always be written. If segments
were being swapped to satisfy a request for 2KB, there would be 12KB free
space in the Swap Buffer available for the next segment request. When a
segment is larger than 14KB, it is written directly to the disk and does not
go through the Swap Buffer.
Also, the size of the largest free segment that is available is now saved to
determine whether swapping has to be performed or not. This eliminates
searching through memory and performing compaction in order to determine
this.
o Saving Logical Video Buffer (LVB) When Entering DOS Compatibility Box
Whenever the user enters the DOS box, the LVB is saved. When a return is
then made back to protect mode, the screen is repainted using this saved
buffer. Only the process that has foreground focus, i.e., the process that
is running in the active window, is then given a repaint message for its
window. Under OS/2 SE 1.2, every process was given a repaint message for
its window. This caused all processes to have their working sets brought
into memory and, in a constrained environment, caused much swap activity.
o Least Recently Used (LRU) API
In OS/2 SE 1.3, the DosUnlockSet API has been changed to allow the system or
an application to mark a swappable segment as least recently used.
Doing this causes that segment to be the next one to be swapped out of
memory. This API is used when returning from the DOS box to cause the
swappable portion of the DOS box and the video save buffers to be swapped
out of memory before anything else. Applications can use this API for
segments which will not be needed for a long period of time. Instead of
waiting for the normal system aging process, the application can cause such a
segment to be swapped out first when memory is required.
o LRU Sweep
In OS/2 SE 1.2, one fourth of the segments in the current process' Local
Descriptor Table (or LDT, which is an internal table that maps virtual
memory to real memory) were examined every 32 milliseconds to determine if
any segments had been accessed. If they had, then their timestamp would be
updated. This timestamp was used to determine which segments could be
swapped out of memory. The oldest timestamped segments were the first to be
swapped out of memory.
In OS/2 SE 1.3, the sweep of the process' LDT is performed only when a task
switch takes place. Also, if there is a high rate of swap activity, an
indicator is set so that more than one fourth of the process' LDT entries
will be updated. This causes more of the segments that the application is
using to remain in memory, and therefore may contribute to improved
application performance in a memory-constrained environment.
o Swap File Allocation
OS/2 SE 1.3 changes the way the Swap File is allocated. The first change
was to allocate the size of the Swap File based on the amount of physical
memory that is in the machine. A 2MB Swap File is created for a 2MB
machine, 1MB file for a 3MB machine, and 512KB for a 4MB or greater machine.
This cuts down on the number of times the Swap File must grow and also
reduces file fragmentation.
When the system is IPLed under OS/2 SE 1.2, the Swap File is deleted and
then reallocated. OS/2 SE 1.3 will reallocate the Swap File in place, to
reduce Swap File fragmentation.
3.1.3 File Manager
ΓòÉΓòÉΓòÉ 6.1.3. 3.1.3 File Manager ΓòÉΓòÉΓòÉ
The File Manager was reworked to take advantage of some inherent advantages of
the OS/2 system. The startup procedures, the copy and delete functions, and
the resource handling functions were all modified to be more aware of the
system environment and the user's activities. The changes are as follows:
o Application Startup
The enhanced File Manager now processes only those resources that are
required at application load time. There is no reason to load in all the
resources at load time when it is not known which will be required by the
user. One example of this is opening the Help files. The Help file
information is not needed until the user actually requests help.
o Fast Directory List Display
OS/2 1.3 File Manager processes directory lists much faster by requesting
only those files that have the directory attribute set. The OS/2 1.2 File Manager
received all files and then eliminated non-directory files before displaying
the files. Changes were made to the File System and File Manager in OS/2 SE
1.3 to accomplish this.
o Display High Level Directory Tree
In OS/2 SE 1.2, the File Manager always displayed the complete directory
tree for any drive. This could be a time consuming procedure when LAN
drives were being accessed. OS/2 SE 1.3 will display the complete directory
tree of only the IPLed system drive. For all other drives, it will display
only the high level directory tree. This reduces the number of entries that
must be searched and therefore improves performance. In addition, OS/2 SE
1.3 has new icons to allow the user to determine the type of drive. There
are separate icons for diskettes, hard files, and LAN drives.
o Copy, Delete and Move Dynamic Link Library (DLL)
The code to perform copies, deletes, and moves was enhanced and placed in a
DLL. This code was optimized to reduce some checking and functions that
were being performed in OS/2 SE 1.2. This can result in a performance
improvement of up to a factor of two in some cases. Also, the names of files
are now displayed as they are deleted, so the user can see that the deletion
is in progress.
3.1.4 File System
ΓòÉΓòÉΓòÉ 6.1.4. 3.1.4 File System ΓòÉΓòÉΓòÉ
OS/2 SE 1.3 provides a level of prioritization to an application's requests for
file reads and writes. Those applications which have a higher priority will
have the file I/O requests serviced first. This means that an application
running within the active window (i.e. in the foreground) will receive
priority for requests to the file system over those applications that are
running in the background, even though they were started with the same
application priority. This provides for improved performance of the
application which has the active window, but can decrease the throughput times
of concurrently running applications.
3.1.5 Enhanced Loader
ΓòÉΓòÉΓòÉ 6.1.5. 3.1.5 Enhanced Loader ΓòÉΓòÉΓòÉ
The OS/2 loader has been improved to take advantage of some features that are
offered by the linker and to process the executable (EXE) and DLL files in a
more logical and optimized manner. The specifics are:
o Improved LAN Application Loading
Applications that were loaded from an OS/2 LAN Server were not loaded
optimally in OS/2 EE 1.2, which treated the server disk drive as removable
media (i.e., as a diskette which could be removed from the system). Because
of this, when an application was loaded, all segments containing executable
code to support this application were read across the LAN. If there was not
enough workstation memory, some of the code segments would be written to the
Swap File on the workstation disk.
OS/2 EE 1.3 will treat the server disk as if applications were being loaded
from a local hard disk, loading only those segments which are currently
needed by the application. This can give a performance boost in application
load times by a factor of two to three for most OS/2 applications loaded
across a LAN.
o Enable Packed Executable Files
The linker offers an EXEPACK option that will remove repeated sequences of
bytes, such as nulls, from the executable file. This reduces the size of
the executable file and will also reduce the number of fixups (code patches
produced in the final stages of the link process) that are defined. All
OS/2 1.3 executable and DLL files are now linked with the EXEPACK option. The
amount of disk space being used by equivalent files was reduced by nearly
800KB as compared to the OS/2 1.2 result.
o Buffered Loads
The OS/2 SE 1.2 loader used very small buffers to contain fixup and resource
information. It would process eight of these at a time. The OS/2 SE 1.3
loader was changed to read a complete segment's worth of information at once
and to process that information. This reduces the number of I/Os that could
be potentially performed and reduces processing overhead. This can improve
program load time in the standalone environment.
3.1.6 System Installation
ΓòÉΓòÉΓòÉ 6.1.6. 3.1.6 System Installation ΓòÉΓòÉΓòÉ
A number of changes have been made to the system installation procedure that
will help the system's overall performance.
o Intelligent Advanced BIOS (ABIOS) Installation
The OS/2 system is shipped with ABIOS patches for every potential machine on
which it can be installed. With OS/2 1.2, all of these files were copied to
the fixed disk and had to be processed during system IPL. In OS/2 1.3, a
change has been made to determine on which machine the system is being
installed, and then only the patch files that are required by that system
are copied. This reduces the time required to IPL, as well as reducing the
amount of disk space being used.
o Selective Installation and Reinstallation
OS/2 1.3 provides the capability to selectively install OS/2 functions and
to tailor the OS/2 system to the specific environment required. If at a
later date an uninstalled function is needed, that function can be installed
without reinstalling the whole system. This may result in improved
performance by reducing disk access time and by eliminating some system
initialization processing. This can also substantially reduce the amount of
disk space used by the system.
3.1.7 Presentation Manager
ΓòÉΓòÉΓòÉ 6.1.7. 3.1.7 Presentation Manager ΓòÉΓòÉΓòÉ
Improvements have been made to the Presentation Manager code. Memory
requirements have been reduced, and many code paths have been shortened.
Changes have also been incorporated to reduce the amount of internal PM
initialization processing that occurs. Therefore, even though many functional
enhancements were added, such as support for ATM fonts and for more than 1200
windows, there will be no noticeable changes in performance when running in
unconstrained memory environments. (In this context, "window" refers to all
window objects including every pushbutton, scroll bar, etc.).
A change has been made that delays the loading of the Help facility code until
a request is actually made for the Help facility.
Also, there have been changes made to the processing of system DLLs to load
them only one time or to load them when needed. This reduces the amount of
memory being used and improves the load time for PM applications.
3.1.8 Printer Support and Print Spooling
ΓòÉΓòÉΓòÉ 6.1.8. 3.1.8 Printer Support and Print Spooling ΓòÉΓòÉΓòÉ
A new level of printer drivers is being provided with OS/2 SE 1.3. In some
cases, code path lengths have been reduced making these printer drivers more
efficient. Also, with the new Microsoft LAN Manager spooler now being shipped
with the system, printing procedures are better tuned to LAN operations.
3.2 Memory Usage and Tuning
ΓòÉΓòÉΓòÉ 6.2. 3.2 Memory Usage and Tuning ΓòÉΓòÉΓòÉ
3.2.1 Purpose
3.2.2 Memory Usage Concepts and Terminology
3.2.3 Memory Balance
3.2.4 Tuning Memory Usage
3.2.1 Purpose
ΓòÉΓòÉΓòÉ 6.2.1. 3.2.1 Purpose ΓòÉΓòÉΓòÉ
The purpose of this chapter is to discuss OS/2 memory usage and ways to tune
OS/2 memory usage. Whereas other parts of this document cover methods to
maximize performance, this chapter will concentrate on methods to minimize the
amount of memory used by the system.
For information on how much memory is required by the working sets of various
OS/2 components, refer to the IBM OS/2 Information and Planning Guide for
Versions 1.2 or 1.3, as appropriate. (Working set is defined in the following text.)
3.2.2 Memory Usage Concepts and Terminology
ΓòÉΓòÉΓòÉ 6.2.2. 3.2.2 Memory Usage Concepts and Terminology ΓòÉΓòÉΓòÉ
The OS/2 system uses a technique called virtual memory to allow more program
code and data to be concurrently active than the available physical memory can
contain. When the combination of applications that are in use require more
memory than the amount physically installed, a situation called memory
overcommitment occurs. When this happens, the least-recently-used portions
of memory are either discarded or removed. Unmodified portions of memory,
containing information such as program code or read-only data, are discarded
since they can be reloaded from the original disk file if needed. Modifiable
(read/write) segments are removed from memory and placed on the fixed disk in a
swap file where they remain until they are again needed in real memory. This
virtual memory feature is called segment swapping.
The set of code and data segments actively performing work in the system is
called the working set. Portions of a program's code (for example,
initialization and error handling code) generally tend to be less active. When
memory overcommitment occurs, these inactive portions are designated
least-recently-used and are moved to the swap data set or discarded. A
significant portion of the program's code and data tends to become rarely used.
This causes a program's working set to become appreciably smaller than the
total size of the program.
To improve system performance, it is desirable to minimize the amount of
swapping that occurs. This can be done by providing as much memory as will be
used on a regular and consistent basis. That is, there must be enough memory
to consistently contain the working set. Any memory overcommitment estimations
should be fine-tuned according to the actual performance of the system during
operation.
3.2.3 Memory Balance
ΓòÉΓòÉΓòÉ 6.2.3. 3.2.3 Memory Balance ΓòÉΓòÉΓòÉ
Many of the parameters that are discussed elsewhere in this document have an
effect on the amount of memory that is needed by the system. In many cases,
increasing the amount of memory used by buffers, etc., will increase the
performance of the system, assuming the system has an unlimited amount of
memory. In real systems, which have a finite amount of memory, there is a
point at which any further increase in the amount of memory given to buffers
will severely overcommit memory and cause performance degradation.
Further, rather than maximize performance, an objective of an OS/2 installation
planner or system administrator may be to minimize the amount of hardware
expense required to run the OS/2 system.
Wise use of memory can increase an OS/2 system's performance. However, if
memory is too overcommitted, performance will degrade. Further, installation of
too much memory is a wasteful expense. Finding the point of balance between
wise memory usage and overall system performance is not an easy task. There is
no set of magic numbers that provide this optimal balance. In many cases, it
may be necessary to perform experiments and user studies in order to do this
task well.
3.2.3.1 Relationship of User Performance Expectations to Memory
The OS/2 user, in many ways, controls the amount of memory that is used by the
system. Two important aspects of the user's part in memory usage are the user's
expectations about the performance of the system and the manner in which the
user interacts with the system.
A user's expectations about performance can have a big influence on how much
memory is required for a system. People who have extremely high performance
expectations may not want any swapping activity. In order to satisfy these
people, more than the usual amount of memory may be required. Other users,
whose experience and expectations relate more to slow remote host access, may
have low performance expectations and may be satisfied with less than average
performance.
Also, the user's interaction patterns with the system are important. Some
environments are rather fixed in function and expect the user to perform rather
well-defined interactions with the system. For example, a user may spend a
long time in just one or two applications. Other environments include usage of
any number of applications. In this case, the user may continually jump around
the desktop from one application to another. The first user, having spent a
long time in an application, may not object if it takes a few seconds to switch
applications. However, the second user, who does frequent application
switches, may object if each of these switches takes several seconds.
Some environments (or individuals) may have high performance expectations and
requirements, as well as require the concurrent usage of many programs. Given
an identical system configuration with identical available applications, such
'power user' environments may require more memory than an 'average user'
environment.
In any environment, it is necessary to construct a user scenario that defines
the typical usage of the system. Proper selection and definition of the user
scenario is an important part of performance and memory analysis. For example,
the scenario should not focus on program loading if, in fact, the typical user
loads all his programs only once per day. Instead, the scenario should be a
characterization of the user's most commonly performed tasks.
In summary, OS/2 system and memory performance is not simply a matter of
throughput, capacity, response time, and parameter tuning. Individual users,
their expectations, their work habits, and their workload mix, are all
important factors in this equation.
3.2.4 Tuning Memory Usage
ΓòÉΓòÉΓòÉ 6.2.4. 3.2.4 Tuning Memory Usage ΓòÉΓòÉΓòÉ
The following sections cover memory tuning suggestions for various aspects of
the OS/2 system.
3.2.4.1 CONFIG.SYS Parameters
Several of the parameters specified in the OS/2 CONFIG.SYS file affect the
amount of memory used by the system. The major ones are:
o DISKCACHE
o BUFFERS
o DEVICE
o DEVICE=VDISK
o IFS=HPFS
o THREADS
o PROTECTONLY
o SWAPDOS
DISKCACHE
DISKCACHE is the statement that sets the amount of memory that is used as a
disk cache for a FAT file system. If the system uses only HPFS-formatted
disks, then do not specify a DISKCACHE (see IFS=HPFS). The default value for
DISKCACHE is 64KB. This default value is good for a system with a small
amount of memory. If a system has sufficient free memory, DISKCACHE should
probably be raised. In no case should DISKCACHE be reduced below 64KB. If,
in a very memory constrained environment, it is necessary to reduce cache
memory usage, then remove the DISKCACHE statement entirely, although this may
have a negative impact on performance. (See 5.6.2 Adjusting Performance
Parameters for LAN Server tuning recommendations.)
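For example, on a FAT system with some free memory, a CONFIG.SYS entry such as
the following might be used (the 256KB size is only an illustration; the second
value is the default threshold of 7 sectors, described in 3.3.3.1 FAT File
System Caching):
DISKCACHE=256,7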
BUFFERS
The BUFFERS parameter allocates a number of disk buffers which are used to
hold partially read disk sectors. Each buffer is 512 bytes. The default value
is BUFFERS = 60. This number should be sufficient for most environments. To
reduce memory usage, reduce the number of BUFFERS to 30.
(See section 3.3.1 Disk Buffers (BUFFERS=) for more information on disk
buffers.)
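For example, to use the reduced value suggested above, the CONFIG.SYS statement
would read:
BUFFERS=30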
DEVICE
In the OS/2 system, most hardware devices are supported by a DEVICE= statement
in the CONFIG.SYS file. Each device driver specified by DEVICE= consumes some
amount of memory. Do not configure device support that is not being used.
That is, remove any superfluous DEVICE= statements.
DEVICE=VDISK
The OS/2 system allows the creation of a 'virtual disk' (VDISK) in system
memory. This feature is valuable under DOS, since the VDISK can be created from
extended memory which is otherwise unusable in a DOS system. In the OS/2
system, there is usually little need to create a VDISK. Rather, that memory
may be better used for disk cache. (See 3.5 RAM Disks (VDISK) for situations
where VDISK may be helpful.)
IFS=HPFS
The High-Performance File System (HPFS) is an optional alternative to the FAT
File System. HPFS is especially useful for large systems which have
partitions (about 60MB or more) containing large files, a large number of
files, and/or a large number of subdirectories. HPFS requires more memory
than does FAT, and therefore is not the first choice for minimal memory usage.
Also the performance of HPFS is outstanding only when it has sufficient cache.
For HPFS, the cache is specified by the /C: parameter of the IFS=HPFS
statement. If the /C parameter is not specified, then the default HPFS cache
size is allocated, which is computed as 20% of the physical memory on that
system. The minimum useful cache is /C:64 (specified in Kbytes). A 256KB
HPFS cache should be used where there is sufficient memory. The maximum cache
size is 2MB. (See section 3.3.4 File System Performance Considerations for
more information on HPFS benefits and some HPFS cache tuning recommendations.)
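As an illustration, assuming HPFS is installed in the usual C:\OS2 directory, a
CONFIG.SYS statement requesting the suggested 256KB cache would look similar to
the following (verify the path and any other parameters already present on your
IFS statement):
IFS=C:\OS2\HPFS.IFS /C:256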
THREADS
THREADS specifies the maximum number of concurrent execution threads. Each
THREAD will have certain small control blocks created by the system, therefore
the higher the THREADS value, the more memory that is used by these control
blocks. While these control blocks are not very large, as a whole they do
consume some memory. The default is THREADS=255. On many systems, THREADS=128
will be sufficient.
PROTECTONLY
The PROTECTONLY parameter specifies whether the OS/2 system will support DOS
Compatibility Mode (also referred to as the 'DOS Box'). If PROTECTONLY=NO is
specified, then DOS Compatibility Mode support is included. In general,
unless there is a requirement for it, DOS Compatibility Mode should not be
configured since it uses about .5 MByte of memory while it is active.
SWAPDOS
SWAPDOS is specified as the third parameter of the MEMMAN= statement. If DOS
is configured, then SWAPDOS should, in almost all cases, be specified. This
will reduce the cost of DOS Compatibility to a minimum by making some portions
of DOS swappable. However, the area in memory used by DOS is 'special' and
cannot be used in the same manner as other system memory. Thus SWAPDOS does
not entirely eliminate the memory cost of using DOS. If minimal memory usage
is a goal and there is not a 'hard requirement' for DOS, then consider
removing DOS from the OS/2 configuration. Note that the SWAPDOS default
changed from OS/2 1.2 to 1.3. In 1.2 the default is 'NOSWAPDOS,' whereas in
1.3 it is 'SWAPDOS.'
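Pulling these suggestions together, a memory-conscious CONFIG.SYS might contain
statements such as the following (SWAP and MOVE are shown only as the typical
first two MEMMAN values; keep whatever values are already in your CONFIG.SYS
and verify only the third):
THREADS=128
PROTECTONLY=YES
If DOS Compatibility Mode is required, specify PROTECTONLY=NO together with
MEMMAN=SWAP,MOVE,SWAPDOS instead.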
3.2.4.2 Base System Considerations
General Considerations
The number of programs active in the OS/2 system is a large factor in how much
memory is used by the system. Therefore, the programs which are started in
STARTUP.CMD should only be those that are used on a regular and consistent
basis. Although a large portion of unused programs will be swapped out, some
portion of every program is non-swappable and will use memory for as long as
the program is started. Users of systems with limited memory should carefully
consider which programs they choose to start in STARTUP.CMD.
There are some OS/2 programs which, as a class, are more 'decorative' than
useful. That is, the amount of function that they provide is low compared to
the memory that they use. These programs, (for example 'screen colorizers' or
'clocks' or 'cute icons') should not be used on systems with a limited amount
of memory. Where it is decided that these programs are necessary, the user
should plan for the additional memory that these programs will require.
START
The START command is used to start another program while the issuing session
continues to execute. Usually, START will invoke a secondary command processor
(CMD.EXE) in order to start the specified program. However, with the exception of .CMD
file processing, this secondary command processor is not necessary. To save
the memory used by the secondary command processor, specify the /N parameter
on the START statement.
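For example, to start a hypothetical program named MYAPP.EXE without a
secondary command processor:
START /N MYAPP.EXE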
Print Spooling
Print Spooling is a beneficial feature of the OS/2 operating system. However,
it does have a memory cost. In some cases it may be feasible to operate the
system without print spooling enabled and thus save the memory that would be
used by the spooler. An enhancement was made to the OS/2 Operating System
Version 1.3 that better facilitates system operation without spooling. If
there is no spooler and an attempt is made to concurrently print two
documents, the second print program will be suspended until the first program
stops using the printer.
Ending Programs
In general, when a user has finished using a program, the program should be
ended (unless it is anticipated that it will be used again soon). As stated
above, each program will continue to occupy some memory even if it is not
actively used.
Minimizing Programs
Some programs recognize when they are in a Minimized state and do a more
limited amount of processing when they are minimized. The OS/2 Print Manager
is an example of an application that does this - it does not update its print
queue status display when it is minimized. This is beneficial in that the
Print Manager then maintains a minimum amount of code in memory when it is
minimized. Other programs may also use this approach and some benefit may be
gained by keeping these programs in a minimized state when they are not being
used.
3.2.4.3 LAN Requester Considerations
General LAN Requester Considerations
The LAN Requester consists of two parts. One part is the actual device
redirection code, the other part is the user interface programs (NET.EXE,
etc). There are memory considerations for each of these components.
LAN User Interface
There are two methods by which a user may interact with the LAN Requester. One is
via a full screen interface, and the other is via a command line interface. If
saving memory is a priority, the full screen interface should generally not be
used. Rather, the equivalent LAN Requester (NET) command line commands should
be used, probably from a .CMD file. If the LAN Requester full screen panel is
used, the program should be ended (Exited) when the user has completed his
work with it.
IBMLAN.INI
The LAN device redirection portion of the LAN requester has many parameters
which are specified in the \IBMLAN\IBMLAN.INI file. Some of these parameters
specify which LAN requester services are being used and other parameters
specify the size and amount of buffers that are used by LAN Requester.
Here again, the first strategy to reduce memory is to not start services (for
example, Messaging and NetPopup) unless these services are needed.
The CHARCOUNT parameter relates to redirected serial device support. Many LAN
users do not use redirected serial devices. In this case, to save memory, set
CHARCOUNT=0.
The NUMALERTS parameter has a default value of 12. To reduce memory
requirements, reduce this to 5.
The NUMSERVICE parameter has a default value of 8. To reduce memory
requirements, reduce this to 4.
The NUMCHARBUF and SIZCHARBUF parameters specify the number and size of
buffers used for 'pipes' and character devices. The default is 10 buffers of
512 bytes each. To reduce memory requirements, reduce these parameters. Three
128-byte buffers is a suggested size.
NUMDGRAMBUF specifies the number of buffers for NetBIOS datagrams. Among other
things, datagrams are used by servers to 'announce themselves' to a network.
In many cases, NUMDGRAMBUF can be reduced to 8.
The NUMWORKBUF and SIZWORKBUF parameters specify the number and size of
buffers used for temporary working storage. The default is 15 buffers of 4096
bytes each. To reduce memory requirements, reduce these parameters. The
SIZWORKBUF should match the server's value, and all requesters should have the same
SIZWORKBUF. To reduce memory, set NUMWORKBUF to 5. If further reductions are
needed, reduce SIZWORKBUF to 2048.
SIZERROR is the maximum number of bytes an error message can contain. The
default is 1024. To save memory, reduce this to 256.
The MAXCMDS parameter specifies the maximum number of NetBIOS commands that
can be outstanding at any one time. The default is 16. To reduce memory
requirements, reduce this parameter to 10.
MAXWRKCACHE specifies the size of the buffer used for large server/requester
data transfers. The default is 64KB. Normally, one would not wish to reduce
this size. However, if reducing memory is critical or if the LAN is seldom
used and large data transfer performance is not critical, then reduce this
parameter to zero (0).
The SIZMESSBUF parameter is used for the Messenger service and specifies the
size of the buffer used to temporarily buffer messages until they are written
to the message log file. If the Messenger service is used, to reduce memory
reduce this parameter from its default value of 4096 to 512.
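Taken together, a reduced-memory IBMLAN.INI might contain entries such as the
following. The values simply mirror the suggestions above; before changing
anything, verify which section of the existing IBMLAN.INI (for example, the
requester or messenger section) each parameter belongs in.
CHARCOUNT = 0
NUMALERTS = 5
NUMSERVICE = 4
NUMCHARBUF = 3
SIZCHARBUF = 128
NUMDGRAMBUF = 8
NUMWORKBUF = 5
SIZERROR = 256
MAXCMDS = 10
SIZMESSBUF = 512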
3.2.4.4 NetBIOS Considerations
NetBIOS has several parameters which affect the amount of memory that is used
by the NetBIOS component. The base NetBIOS parameters are changed in
Communications Manager Configuration, LAN Feature Profiles panel. In
addition, the NetBIOS resources that are used by the LAN requester are
specified as parameters in the IBMLAN.INI file. Each of these parameters are
discussed below.
For reference to the discussion below, the format of the NetBIOS definition in
the IBMLAN.INI file is:
NET1 = NETBIOS$,0,NB30,X1,X2,X3
The X1 parameter of the NET1 statement specifies the number of NetBIOS
sessions that will be used by LAN Requester. This pool of sessions is taken
from the Maximum Sessions which is specified in the Communications Manager
configuration. Therefore, the Communications Manager Maximum Sessions must
equal or exceed the value specified by X1. In general, a simple LAN requester
needs one session plus one session per server that is used by the requester.
Each session reserves 52 bytes. The default Maximum Sessions is 41 and the
default X1 is 32. To save memory in a small LAN requester, one might reduce
Maximum Sessions to 10 and X1 to 8.
The X2 parameter of the NET1 statement specifies the maximum number of
simultaneously uncompleted NetBIOS commands that can be used by LAN requester
and this parameter is related to the Communications Manager Maximum Commands
field in the same way as explained above. One maximum command per session
should be sufficient. Each Maximum Command uses 202 bytes. The default X2 is
32 and the default Communications Manager Maximum Command is 41. Again, a
Maximum Command of 10 and X2 of 8 will save memory in a small LAN requester
configuration.
The LAN Requester X3 parameter corresponds to the Communications Manager
Maximum Names parameter (and again, X3 must be smaller than Maximum Names).
Maximum Names is the number of NetBIOS names that can be supported. A
NetBIOS session is a connection between two NetBIOS names. Each name
requires 22 bytes. The default Maximum Names value is 20, and the default X3
value is 16. The minimum X3 value is 5, and an X3 value of 10 should be
sufficient for many environments.
The Communications Manager Maximum Link Stations parameter defines the
maximum number of other systems the system has session(s) with. The default
number is 8. Each link station reserves 50 bytes. To reduce memory, a value
of 4 may be acceptable in simple LAN Requester environments.
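Using the reduced values suggested above (eight sessions, eight commands, and
ten names), the NetBIOS statement in IBMLAN.INI would become the following. The
NETBIOS$ and NB30 names are taken from the format shown earlier and must match
the existing Communications Manager configuration.
NET1 = NETBIOS$,0,NB30,8,8,10
Remember that the corresponding Communications Manager values (Maximum
Sessions, Maximum Commands, and Maximum Names) must equal or exceed these
numbers.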
3.2.4.5 Communications Manager Considerations
General Communications Manager Considerations
The general rule applies for Communications Manager as well - do not
configure or start Communications Manager services unless they are regularly
used. For example, some people will configure their systems to automatically
start, say, five 3270 emulation sessions, even though they regularly use only
one session and even though it only takes a few seconds to manually start a
3270 session. Such a practice is wasteful of memory.
3270 Terminal Emulation Considerations
In addition to the above, a few other general considerations apply to 3270
emulation. First, the size of the display screen, that is, the number of rows
and columns, affects the amount of memory required for emulation. If a Model
2 (80 by 24) screen is adequate, then do not arbitrarily define a larger
screen. Next, the 'Presentation Space Print' function, if defined, has some
amount of memory cost associated with it. If this feature is not used, do not
define the session to support the feature.
APPC Considerations
APPC is used for many things - as an inter-program communications method, but
also to support 5250 and 3270 (non-DFT) terminal emulation. A major portion
of the APPC memory requirement is APPC data buffers. The amount of memory
needed for APPC buffers is not explicitly configured in Communications
Manager. Rather, APPC allocates buffers as needed based on several factors.
These factors include:
o Number of active Logical Units (LUs)
o Size of Request Unit (RU)
o Pacing values
o APPC workload
In order to reduce the amount of memory used for APPC buffers, it is necessary
to reduce one of these factors.
3.3 File Systems, Caches, and Buffers
ΓòÉΓòÉΓòÉ 6.3. 3.3 File Systems, Caches, and Buffers ΓòÉΓòÉΓòÉ
This section covers some characteristics and differences between HPFS and FAT
file systems, as well as some performance implications. Also discussed are disk
buffers ('BUFFERS=' in CONFIG.SYS) and how they differ from the HPFS and FAT
file system caches.
Both HPFS and FAT file systems have a cache that is used to improve performance
by keeping frequently used data in RAM. Additionally in the system, there are
disk buffers, which are often confused with the cache areas. Refer to Figure
3-1. "Relationship Between Cache and Buffers for a READ Operation" for an
illustration of how disk buffers and cache relate to each other. When data is
read from disk it may or may not be cached. From there, some of the data may
or may not go through disk buffers. When data is written to disk, the opposite
flow takes place. (The exact circumstances under which data are cached or
buffered are described in the following sections.)
Note that cache is a function of the file system (FAT or HPFS) and buffers are
a function of OS/2 disk I/O operations, and these areas are only used when
reading or writing to fixed disk. Other components and applications that run
on the OS/2 system also have buffers that are independent of the file system
cache and disk buffers.
o Database buffer areas are described in 6.4.2 Buffer Pool Size (BUFFPAGE),
Figure 6-5. "RDS Parameters at a Server Workstation", and Figure 6-6. "RDS
Parameters at a Requester Workstation."
o LAN buffer areas are addressed in 5.2.1 OS/2 LAN Server and Requester
Buffers and Figure 5-1. "Transfer from a Server to a Requester", Figure 5-2.
"File I/O related buffers for an OS/2 LAN Requester to OS/2 LAN Server
transfer", Figure 5-3. "File I/O related buffers for an OS/2 LAN Server to
OS/2 LAN Requester transfer"
o Communications buffers are covered throughout the Communications Manager
parameter descriptions. Of particular interest are:
- Transmit Buffer Size in 4.4.2 LAN Feature Profiles
- Receive Buffer Size in 4.4.2 LAN Feature Profiles
- Maximum RU Size (4.4.3.2) in 4.4.3 SNA Feature Profiles
- Maximum RU Size (4.4.3.3) in 4.4.3 SNA Feature Profiles
- Minimum RU Size (4.4.3.6) in 4.4.3 SNA Feature Profiles
- Maximum RU Size (4.4.3.6) in 4.4.3 SNA Feature Profiles
3.3.1 Disk Buffers (BUFFERS=)
3.3.2 File System Characteristics
3.3.3 File System Caching
3.3.4 File System Performance Considerations
3.3.1 Disk Buffers (BUFFERS=)
ΓòÉΓòÉΓòÉ 6.3.1. 3.3.1 Disk Buffers (BUFFERS=) ΓòÉΓòÉΓòÉ
Disk buffers are used to handle partial sectors of information during the OS/2
fixed disk read and write operations, since I/O to the physical disk is done
only in full sectors. The number of disk buffers allocated in a system is set
by the 'BUFFERS=' command in CONFIG.SYS.
o For a read, the sector containing the requested data is read into a disk
buffer (either from cache or straight from the disk). The partial sector is
then transferred to the requesting application.
o For a write or update, the target sector is read into the disk buffer
(either from cache or the disk). The write or update is then performed
against the requested portion of the sector, and then the entire sector is
written out (either to cache or disk).
Each buffer is 512 bytes in size (the size of a sector). Possible values for
'BUFFERS=' are from 1 to 100 with 60 being the default.
3.3.2 File System Characteristics
ΓòÉΓòÉΓòÉ 6.3.2. 3.3.2 File System Characteristics ΓòÉΓòÉΓòÉ
3.3.2.1 FAT File System
The File Allocation Table (FAT) file system is used in all versions of the DOS
and OS/2 operating systems. Its origin goes back to the CP/M operating system
and Microsoft Disk Basic which were written for the 8080 and Z80 based
microcomputers. It was an excellent solution to disk management where the
diskettes used were rarely larger than 1MB. On such diskettes, the file
allocation table was small enough to be held in memory, allowing fast random
access to any part of any file.
When applied to fixed disks however, the FAT file system began to exhibit the
following operational problems:
o The FAT became too large to be held entirely resident and had to be read
into memory in pieces. This resulted in many extra disk head movements as a
program was reading through a file, degrading system throughput.
o Because the information about free disk space was dispersed across many
sectors of FAT, the time to create a file or enlarge a file grew. File
space was allocated piecemeal and file fragmentation became another obstacle
to good system performance.
o Another disadvantage was the use of relatively large allocation units
(clusters of size 2KB) which resulted in a lot of dead space when used on
fixed disks - an average of one-half cluster (1KB) per file.
o The file name restriction of the 8.3 format was also considered to be
limiting by many users.
Attempts have been made by Microsoft and IBM to prolong FAT's useful life by
lifting restrictions on volume sizes, improving allocation strategies, caching
pathnames and moving tables and buffers into expanded memory. These measures
are only temporary solutions since the fundamental data structures used by FAT
are simply not well suited to large random access devices.
The FAT caching function is described in 3.3.3.1 FAT File System Caching.
3.3.2.2 High-Performance File System (HPFS)
HPFS is supported in OS/2 Standard Edition, Extended Edition, and LAN Server
starting in version 1.2. It solves many problems, including the FAT file
system problems mentioned above. HPFS is not a derivative of FAT - rather it
was designed from scratch to take full advantage of a multitasking
environment. Some of the key functional improvements over the FAT file system
are listed in Table 3-1. "Comparison between the FAT File System and the
HPFS". Many of these improvements have performance implications, described in
the section 3.3.4 File System Performance Considerations.
When allocating file space, HPFS assigns consecutive sectors to files whenever
possible.
o At file creation time, HPFS takes the specified file size and tries to find
a contiguous run of sectors in which to create the file.
o Each time a file is extended, an additional 4KB is added to the requested
extension size, in hopes of reducing the number of times the file needs to
be extended. Again, HPFS attempts to extend the file in a contiguous run of
sectors.
o If two files are created simultaneously, HPFS attempts to create them in two
separate bands on the disk, reducing the possibility of one file fragmenting
the other.
3.3.3 File System Caching
ΓòÉΓòÉΓòÉ 6.3.3. 3.3.3 File System Caching ΓòÉΓòÉΓòÉ
A cache is designed to allow many users to share frequently accessed data,
especially on a LAN Server machine. (This implies that applications are
locking specific byte ranges for use within a file instead of opening a file
for exclusive use.) On a system other than a LAN Server, the cache is useful
for keeping frequently accessed data in RAM rather than having to repeatedly
read it from disk.
Since caches are used for frequently accessed data, OS/2 caches are designed to
not cache sequential file accesses. If sequential file I/O were cached, the
cache would soon fill with useless data, such as print files going to or from
spool queues, or programs that have been downloaded from a server. However,
there are times when it is beneficial for sequential data to reside in memory,
such as for program loads of frequently accessed applications or LAN Remote IPL
(RIPL) images. To accomplish this, a VDISK should be used (see RAM Disks
(VDISK)).
3.3.3.1 FAT File System Caching
The cache used by FAT is called DISKCACHE. The size of this cache is specified
by the DISKCACHE statement in CONFIG.SYS (see syntax below). A read request
will first scan the DISKCACHE memory (using bitmaps) for the logical sectors
corresponding with the file handle and offset specified in the read request.
If the data is in the DISKCACHE, it is retrieved and passed to the application
from cache memory avoiding a time consuming read from the fixed disk.
Whether or not data is put into the DISKCACHE depends on the size of the read
request and the cache threshold value. A threshold is a value defining the
maximum size read request that will be placed into the cache and is used to
prevent the DISKCACHE from being flushed (old data being replaced with newer
data) by large requests for data (such as a program load or file transfer in
which the amount of data is often 64KB). The default threshold size is 7
sectors (3.5KB). This default can be changed if desired (see the DISKCACHE
syntax below).
The way the threshold works is that as long as the data requested is less than
or equal to the threshold, the request will be rounded up to the next 4KB
boundary, and that amount will be read in. (Cache data is handled in blocks of
4KB.) For example, using the default threshold of 7 sectors (3.5KB), if the
request is less than or equal to 7 sectors, 8 sectors (4KB) will be read into
the cache. Or, if the threshold is 16KB and 13KB is requested, 16KB will be
placed into the cache. In both examples, additional data is read into the
cache, potentially saving the time of multiple future disk accesses.
When more room is needed in the DISKCACHE, 4KB of data are discarded using a
least-recently-used (LRU) algorithm.
When data is written to disk under FAT, it is not cached. However the contents
of the cache are kept in sync with whatever is on disk. That is, if the data
being updated is already in the cache, after writing the change to disk, the
cache will be refreshed to match the disk.
The format of the DISKCACHE statement is:
DISKCACHE = xx,yy
where xx is the cache size in Kbytes and yy is the number of sectors for the
threshold. The default values are 64 and 7 respectively. The maximum cache
size for 1.2 is 7.2MB. In 1.3, this has been increased to 14MB. The threshold
can be increased up to 32 sectors (16KB).
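For example, an OS/2 1.3 system with ample memory whose applications read data
in blocks of up to 8KB might use the following statement (both values are
illustrations only):
DISKCACHE=512,16
This allocates a 512KB cache and places into the cache any read request of 16
sectors (8KB) or less.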
Bigger is not always better, unless you are sure the caching algorithm being
used by DISKCACHE is putting data into cache that will subsequently be used.
To determine whether your data is being cached, you need to know something
about your application in terms of the size of data blocks being requested and
whether that size matches the threshold size. You also need to know whether
your application really needs to be caching data. If the data being read is
rarely reused, you may not need a cache at all. The memory could be more
effectively used elsewhere.
3.3.3.2 HPFS Caching
The cache used by the HPFS is called CACHE.EXE. Its characteristics differ
from DISKCACHE mainly in terms of cache size, threshold size, and its
lazy-write capability. The size of the HPFS cache is specified on the 'IFS='
command in CONFIG.SYS. This command will automatically ensure that the
CACHE.EXE program is started, using default parameter values. To change the
default CACHE.EXE parameter values (described in the following text), the
'RUN=' command must also be included in CONFIG.SYS.
The maximum size HPFS cache is 2MB, and cache data is handled within it in 2KB
pages. The caching algorithm uses a threshold value of 2KB which is not
changeable. (That is, for any requests less than or equal to 2KB, 2KB will be
cached.) In addition to the threshold value, factors such as the number of
"dirty" cache pages influence whether or not reads from disk are cached.
The capability of deferred (lazy) writes gives the HPFS a significant
performance improvement over the FAT file system. This option is specified on
the CACHE.EXE command (see syntax below).
o When lazy-write is enabled, and write-through has not been specified for a
file, all writes of that file to the HPFS go first into the cache memory.
The data will later be written to the fixed disk during fixed disk idle periods
(governed by the CACHE.EXE parameters described below). Lazy-write allows
incoming read requests to be processed ahead of write requests, providing
faster response time for the user application. Data to be lazy-written is
partitioned into 2KB cache pages and placed into the cache memory. The
pages are then marked dirty, indicating that they contain modified data that
must be written to disk. This allows an application to continue, as if the
data were actually written to the fixed disk. However, the dirty pages of
data remain in cache.
o When lazy-write is off, or the write-through bit is set to '1' on the file
open (DosOpen), writes still go through the cache (to keep the cache in sync
with the disk), but will also be written directly to disk.
Parameters that define the lazy-write state and how long dirty pages can
remain in the cache are specified on the CACHE.EXE command. This command is
specified as part of the RUN statement in CONFIG.SYS. For example, to turn
lazy-write off, the following line would appear in CONFIG.SYS.
RUN=C:\OS2\CACHE.EXE /LAZY:OFF
Below are descriptions of the CACHE.EXE parameters. A parameter value is
separated from its parameter by a ':'.
/LAZY Specifies whether disk writes to the HPFS file system
cause the contents of the HPFS cache to be immediately
written to disk, or only during disk idle time. Possible
values are:
o OFF: Specifies immediate writing to disk.
o ON: Enables writing of cache memory during disk idle times.
If turned on, lazy-write can be turned off on an individual file basis
by opening the file in "write-through" mode (via the write-through bit of
the DosOpen function).
/DISKIDLE Specifies the amount of time (in milliseconds) that a disk
must be idle before an unreferenced, dirty cache page is
written to disk. When this event occurs, all dirty cache
pages that have not been referenced in 'BUFFERIDLE' time
(or greater) will be written to disk in oldest-first
order. This writing will continue until all eligible
unreferenced pages have been written, or until the disk is
no longer idle.
The default value is 1000 milliseconds. The minimum value
must be greater than the value specified for BUFFERIDLE.
/BUFFERIDLE Specifies the minimum amount of time (in milliseconds)
that a dirty cache page must be unreferenced before the
'DISKIDLE' event will cause it to be written to disk.
Possible values are 0 to 500,000 milliseconds, with a
default of 500.
/MAXAGE Specifies the maximum amount of time (in milliseconds)
that a dirty cache page can remain unreferenced before it
must be flushed to disk. That is, when a dirty cache page
has remained unreferenced for longer than MAXAGE, it will be
written out immediately, whether or not the disk is
idle. This parameter overrides DISKIDLE. The default
value is 5000 milliseconds.
In addition to these parameters, when the number of dirty pages in the cache
exceeds a certain percentage (set by the system), the dirty pages are written
to disk in oldest-first order. This situation will override the MAXAGE
condition.
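As an illustration, a RUN statement that leaves lazy-write enabled and simply
makes the default timing values explicit would be (the C:\OS2 path assumes a
standard installation):
RUN=C:\OS2\CACHE.EXE /LAZY:ON /DISKIDLE:1000 /BUFFERIDLE:500 /MAXAGE:5000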
3.3.4 File System Performance Considerations
ΓòÉΓòÉΓòÉ 6.3.4. 3.3.4 File System Performance Considerations ΓòÉΓòÉΓòÉ
3.3.4.1 HPFS Performance Feature Summary
Following is a summary of the HPFS features designed to address shortcomings of
the FAT file system and, as a result, give HPFS better performance.
o HPFS uses more efficient data structures and algorithms:
- Directories are accessible via a sorted B+Tree search rather than the
unsorted linear search required in the FAT system.
- Once in a directory, searches for a file are more efficient than the FAT
unsorted linear search.
- Compact bitmap data structures are used for locating chunks of free
space.
o Disk management information is physically closer on disk to where it's
needed, thus reducing disk arm movement.
- The volume directory is in the middle of the drive.
- File-related information is close to the file it describes.
o HPFS assigns consecutive sectors to files whenever possible. (See 3.3.2.2
High-Performance File System (HPFS) for a description of file allocation.)
This results in less disk head movement. It also enables the file system to
request more sectors at a time, thus resulting in fewer overall requests.
This design also reduces the tendency toward disk fragmentation that is
characteristic of the FAT system.
o HPFS relies on the following two kinds of caching to minimize the number of
physical disk transfers (both read and write transfers).
- Up to 2KB read-ahead (i.e. data is read in 2KB blocks)
- Lazy-write
HPFS Performance Benefits
Given the above mentioned characteristics, HPFS provides a performance benefit
in the following situations:
o Numbers and sizes of files.
In general, HPFS performs better than FAT for large partitions (greater than
or equal to 60MB) with:
- Large files
- Large number of subdirectories
- Large number of files in a subdirectory.
o Lazy-Write
Lazy-write should be active. If lazy-write is not desired for certain
situations, it can be disabled on a single file basis by setting the
write-through option for that file. (See '/LAZY' parameter in 3.3.3.2 HPFS
Caching for a description of both methods.)
Database Manager is an example of an application that overrides lazy-write
for specific files, as described in Database Manager, LAN Server and HPFS
Cache Size below.
HPFS Tradeoffs
o The HPFS system takes about 300KB more non-swappable RAM than the FAT
system. If you're extremely RAM constrained you may want to use FAT.
o When lazy-write is on, there will generally be some data in the cache that
has not been written to disk, even though an application may consider it to
be on disk. Before powering off or re-IPLing the machine (with
Ctrl+Alt+Del), go through the Desktop Manager Shutdown procedure. (Note
that Ctrl+Alt+Del will cause the HPFS cache to be flushed.) Also, in the
event of a power outage, data in the cache would be lost.
Database Manager, LAN Server and HPFS Cache Size
Following are HPFS cache size recommendations for various OS/2 configurations:
o Database Server (and Standalone Database systems)
Database Manager has a buffer pool of its own for caching database data.
When information in the buffer pool is written to disk, Database Manager
must be sure that it is physically on disk. Hence, Database Manager
overrides the lazy-write feature of HPFS by using the "write through" option
on the files it writes to. Pages of the Database log file are also written
straight to disk. Because of this, the HPFS cache does not provide any
performance benefit for Database Manager write operations. In fact, if the
HPFS cache is too big, HPFS will spend a lot of time looking for pages in the
cache when Database Manager knows they won't be there (i.e., a lot of database
data will be double-buffered). Hence, it is recommended that the HPFS cache
size be set to the minimum for a Database Server or standalone system.
Any performance gains for Database Manager from HPFS will be from the
improved large file access time. Hence the recommended file system for
Database Manager is still HPFS.
o LAN Server (only)
Set the cache to the maximum size of 2MB.
o LAN and Database Server
If a system is both a Database Server and a LAN Server, it is recommended to
tune the cache in favor of Database Manager. Since all data goes through
the cache, database data may often be replacing LAN Server data in the
cache, thus increasing the likelihood of cache misses for LAN Server data. In general
however, it is not recommended to have one system be both a Database Server
and a LAN Server, since both applications are I/O intensive.
o Standalone Workstation Environments
No real performance gains have been seen with caches larger than 1MB.
3.4 DASD Considerations
ΓòÉΓòÉΓòÉ 6.4. 3.4 DASD Considerations ΓòÉΓòÉΓòÉ
3.4.1 Hardfile Partitioning Suggestions
3.4.2 Amounts of DASD Used
3.4.1 Hardfile Partitioning Suggestions
ΓòÉΓòÉΓòÉ 6.4.1. 3.4.1 Hardfile Partitioning Suggestions ΓòÉΓòÉΓòÉ
Physical fragmentation of files on disk can, and will, grow over time. Some
practical partitioning can help contain and manage it.
o Files that are subject to reinitialization and growth (such as the swapper
file and database log file) should be isolated from OS/2 code. Such files
would optimally be placed in small partitions of their own (that is,
partitions that would be large enough to manage these files and no others).
Since these files change so much (they grow, are deleted, are recreated,
grow ... ) they can use up the disk in a manner that causes other files to
become more fragmented.
o For the same reasons described above, if your application creates "runtime
logs" that accumulate over time, they should be isolated from your
application code.
o If you have data that is frequently updated and modified (such as a
database), you may want to put this data in a separate partition.
Periodically this data can be copied (or backed up) to temporary space in
another partition, the original partition can be reformatted, and the data
copied (or restored) back, thus eliminating disk fragmentation.
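Regarding the first suggestion above, the location of the swapper file is
controlled by the SWAPPATH statement in CONFIG.SYS. For example, assuming a
small dedicated partition D: has been created for the swapper file:
SWAPPATH=D:\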
3.4.2 Amounts of DASD Used
ΓòÉΓòÉΓòÉ 6.4.2. 3.4.2 Amounts of DASD Used ΓòÉΓòÉΓòÉ
Following are different things that affect the amount of DASD required for a
function:
o Using the default linking options, each segment of a program consumes an
integral number of (512 byte) sectors, regardless of the actual size of the
segment. This option can be changed with the /Alignment or the /Packcode
option.
o Each segment which is swapped out consumes an integral number of sectors,
regardless of the actual size of the segment.
o Each FAT file consumes an integral number of clusters (one cluster = four
sectors = (usually) 2KB), regardless of the actual size of the file.
o Each HPFS file consumes an integral number of sectors plus 1 (for the file
header).
o Disk space is allocated out of the Swap File when a DosAllocHuge is done.
(RAM is not allocated until the segment is first referenced.) Hence, don't
over allocate memory using this method.
o Uninitialized data segments or data segments that contain all zeros, do not
occupy any space within the executable file (EXE or DLL).
3.5 RAM Disks (VDISK)
ΓòÉΓòÉΓòÉ 6.5. 3.5 RAM Disks (VDISK) ΓòÉΓòÉΓòÉ
A RAM disk can provide a performance improvement for I/O intensive operations.
Instead of reading or writing to disk, data can be copied to RAM and accessed
from there. Obviously, a major difference between RAM and disk is that RAM is
volatile. Hence, a RAM disk should not be used to store changed, permanent
data.
A RAM disk is especially useful for program loads. In a LAN Server
environment, placing frequently accessed programs (or a Remote IPL image) on a
server RAM disk can improve requester access time. To do this (a sketch of
these steps follows the list):
o Create a RAM disk (VDISK) on the LAN Server at system startup.
o Place commands in STARTUP.CMD to copy the code to be downloaded to the RAM
disk.
o Share the RAM disk when the LAN Server is started.
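The following is a sketch of these steps. The VDISK.SYS parameters (size in
KB, sector size, and directory entries), the E: drive letter, the C:\SHARED
directory, and the RAMAPPS share name are all illustrative and must be adapted
to the actual configuration.
In CONFIG.SYS:
DEVICE=C:\OS2\VDISK.SYS 1024,512,64
In STARTUP.CMD:
COPY C:\SHARED\*.* E:\
NET SHARE RAMAPPS=E:\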
RAM disk is also useful for storage of volatile temporary data written during
program execution. For example, the C/2* compiler uses temporary space, as
specified by the 'TMP=' environment variable. (It defaults to the TMP
subdirectory off the compiler's main directory). You could change this to a
RAM disk.
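For example, if the RAM disk was assigned drive E:, the variable could be set
with:
SET TMP=E:\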
Other general notes about RAM disk:
o OS/2 components do not use VDISK.
o Although some applications can benefit by using a RAM disk, it should always
be a conscious decision.
o Use RAM disk for large requests (such as program loads) and cache for small
requests (such as records within a data file).
o RAM disk memory is non-swappable, so it causes the available virtual RAM to
be reduced.
o If your system has insufficient memory for all system-configured programs,
caches, and buffers, the size of the installed VDISK may end up being
smaller than was specified.
3.6 Memory Performance
ΓòÉΓòÉΓòÉ 6.6. 3.6 Memory Performance ΓòÉΓòÉΓòÉ
The performance of an application running under the OS/2 system on a PS/2* can
be affected by its placement within the machine's memory. PS/2 memory is
divided into two types:
o Faster memory, added to the PS/2 planar board main bus (also known as "low
memory"). The amount of memory on a PS/2 planar board is model dependent.
o Slower memory, added with cards plugged into the Micro Channel bus expansion
slots. This memory is also referred to as "high memory."
OS/2 system code, device drivers, and the DOS box are all placed into lower,
faster memory. OS/2 applications are loaded above these things, as well as
into the high end of memory.
Although there is no control over where the OS/2 system places program code,
you can ensure that there is as much planar memory as possible (i.e. fill up
the planar board before moving to expansion slots). Program code placed in
memory on the planar board will tend to run much faster than the same program
code placed in high memory. Thus application performance may vary from run to
run depending on the order in which applications are started, and may even
vary within a single run if OS/2 compaction and swapping operations occur.
Chapter 4 Communications Manager
ΓòÉΓòÉΓòÉ 7. Chapter 4 Communications Manager ΓòÉΓòÉΓòÉ
4.1 Introduction
4.2 Communications Concepts
4.3 Communications Performance Considerations
4.4 Communications Manager Parameters and Tuning Specifics
ΓòÉΓòÉΓòÉ 7.1. 4.1 Introduction ΓòÉΓòÉΓòÉ
Communications Manager offers several protocols for communicating between
systems. Some of these protocols provide configurable parameters to determine
how the specific protocol layers behave. The goal is for these parameters to
optimize performance within the environment where the communication is going to
occur. The environment includes all physical and logical components through
which data flows.
This section will cover the Communications Manager parameters that can affect
performance for the following major communications functions:
o 3270 Emulation, File Transfer and Gateway
o SNA (APPC) Communications over LAN and SDLC
o NetBIOS Communications
The information covered by this section is grouped as follows:
o Section 4.2 Communications Concepts
Understanding these concepts (which apply to any type of communications --
not just OS/2) is necessary in order to tune Communications Manager
effectively. If you already understand these concepts, you may skip to
section 4.3 Communications Performance Considerations.
o Section 4.3 Communications Performance Considerations
There are some general principles that apply to Communications Manager
tuning. These principles should be covered before moving on to the specific
Communications Manager tuning parameters.
o Section 4.4 Communications Manager Parameters and Tuning Specifics
It is here that specific parameters affecting performance are described and
recommendations given. To use this section effectively, read Section
4.4.1 Communications Manager Structure and Tuning Flow.
For a discussion of RAM considerations concerning Communications Manager, see
3.2 Memory Usage and Tuning and 3.2.4.5 Communications Manager Considerations.
This section should be supplemented by the following sources of information:
o OS/2 EE Communications Manager, Systems Administrator's Guide.
o Communication documentation for the partner system(s), if other than OS/2
(see -- "Related Publications" -- for a list of documentation).
o Communications application documentation (e.g. LAN Requester/Server)
o The local IS department (i.e. those with skills to help you configure your
system)
o Carrier information (for physical link characteristics), etc.
4.2 Communications Concepts
ΓòÉΓòÉΓòÉ 7.2. 4.2 Communications Concepts ΓòÉΓòÉΓòÉ
The concepts described in this section are not Communications Manager specific,
but are concepts common to any type of communications. Although some of the
terminology found here is not the exact terminology used in Communications
Manager, the concepts are part of Communications Manager design and may
manifest themselves in various parameters. Where possible, the correlation to
Communications Manager terms has been made without going into deep
Communications Manager design detail.
If you are familiar with communications concepts, skip to the following section
on 4.3 Communications Performance Considerations.
4.2.1 Protocols
ΓòÉΓòÉΓòÉ 7.2.1. 4.2.1 Protocols ΓòÉΓòÉΓòÉ
To "communicate" in a data processing environment means to exchange messages
between two systems culminating at a mutually agreed upon result. Thus, in
order for successful communication to occur, there needs to be agreement
between the two systems. This agreement is called a protocol.
"What is communicated, how it is communicated, and when it is communicated
must conform to some mutually acceptable set of conventions between the
systems involved. The set of conventions is referred to as a protocol,
which may be defined as a set of rules governing the exchange of data
between two entities."
4.2.2 Layering
ΓòÉΓòÉΓòÉ 7.2.2. 4.2.2 Layering ΓòÉΓòÉΓòÉ
In order to keep the total communications task manageable, it is divided into a
number of subtasks called "layers". Layering provides a very specific
structure to the way systems communicate. The specific number of layers
involved is protocol dependent. As an example, Figure 4-1. "SNA Layer
Structure" shows the layers used in SNA communications. Each layer
communicates with its corresponding layer by implementing the protocol for that
layer. For example, in SNA, the Data Flow Control layer on one node
communicates with the Data Flow Control layer at the partner node. Part of a
protocol definition addresses the differences in configured resources (such as
pacing and block size values) between communicating systems.
Thus, to communicate efficiently and effectively, each layer should be "in
tune" with its corresponding layer on the partner system. That is, values
should match or at least work with each other.
Additionally, each layer relies on the services of the next lower layer to
achieve this logical exchange, which is done through a mutually agreed-upon
interface. The lowest layer (the Physical Control layer in SNA) implements a
physical
protocol to move messages across a communications medium between communicating
systems. Part of the interface definition addresses the restrictions in the
type of services that a lower layer can provide to an upper layer.
4.2.3 Protocol Functions
ΓòÉΓòÉΓòÉ 7.2.3. 4.2.3 Protocol Functions ΓòÉΓòÉΓòÉ
Functions affecting performance that are provided by the different layers of
Communications Manager are:
o Disassembly/Assembly
o Connection Establishment
o Flow Control
o Error Handling
A brief discussion of each of these areas will put tuning Communications
Manager in the proper perspective.
4.2.3.1 Disassembly/Assembly
Each layer of a communications protocol can potentially perform some sort of
Disassembly/Assembly function. Disassembly is the process by which a lower
layer breaks a message from an upper layer into blocks of a smaller size.
Assembly is the counterpart of Disassembly. There are a number of
motivations for Disassembly/Assembly, depending on the context:
o A communications network may only accept data blocks up to a certain size,
hence requiring that larger blocks be broken down.
o Error control may be more efficient for smaller blocks.
o More equitable access, with shorter delay, may be provided to shared
transmission facilities. For example, if the line is slow, allowing too
large a block could monopolize the line.
o Disassembling blocks allows the receiver to allocate smaller receive
buffers.
Some disadvantages of Disassembly/Assembly are:
o Each transmitted unit of data requires some fixed amount of overhead.
Hence, the smaller the block, the larger the percentage of overhead.
o More blocks have to be processed for both sending and receiving sides in
order to transmit equal amounts of user data, which can take more time.
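As an illustration of the overhead point above -- the 40-byte per-block
header is an assumed figure chosen only for the arithmetic, not a
Communications Manager value:
   16,000 bytes sent in 2,000-byte blocks:  8 blocks *  40 bytes =   320 bytes
      of overhead (2% of the user data)
   16,000 bytes sent in   500-byte blocks: 32 blocks *  40 bytes = 1,280 bytes
      of overhead (8% of the user data)
In addition, four times as many blocks must be processed at each end in the
second case.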
Examples of Disassembly/Assembly functions in Communications Manager are:
o APPC, 3270, SQLLOO
- Chaining: User Buffers are disassembled into and assembled from session
level RU's.
o 3270
- Segmentation: RU's are disassembled into and assembled from path
control level PIU's.
o NetBIOS
- User Messages are disassembled into and reassembled from 802.2 level
frames. (There is no special name for this.)
4.2.3.2 Connection Control
In order to set up for data transfers, most protocols will establish a logical
association or connection between communicating systems. A negotiation about
the capabilities of each system may also take place (in SNA, this is a
"negotiable bind"). Like the Disassembly/Assembly function, Connection Control
may be implemented in several communications layers.
Establishing connections requires communication flows and system resources.
The benefit of Connection Control is that it establishes an environment where
data transfer can occur efficiently.
Examples of Connection Control at various Communications Manager layers are:
o APPC, 3270, SNA GATEWAY
- Setting up a SNA session (Bind processing)
o NetBIOS
- Setting up a NetBIOS session (e.g. Call, Listen)
o 802.2 Applications (LAN DLCs, NetBIOS, SQLLOO, or an application written by
an end-user):
- Setting up a link station (e.g. Open.Station, Connect.Station)
4.2.3.3 Flow Control
Flow Control is a function performed by a receiving system which limits the
amount or rate of data that is sent by a transmitting system. Its purpose is
to regulate traffic so as not to exceed the receiving system's resources.
Flow Control may be implemented in several layers since each layer may have a
different amount of resources and capabilities.
The SNA term for Flow Control is "pacing." OS/2 Communications Manager only
supports fixed pacing. Although OS/2 supports PU2.1 nodes, adaptive pacing is
not supported. See APPC Session Establishment and APPC Memory Usage below for
more discussion on pacing.
Examples of Communications Manager functions that perform Flow Control are:
o APPC, 3270 Emulation, SQLLOO
- Session Level Pacing: Pacing performed by 3270 Emulation, the APPC
'Receive Pacing Limit' parameter
- Link Level Pacing: DLC 'Send Window Count' and 'Receive Window Count'
parameters
o NetBIOS
- 'Maximum Transmits Outstanding' and 'Maximum Receives Outstanding'
parameters
o 802.2 Application
- Maxout, Maxin values (passed to 802.2 via the 'Open.Sap', 'Open.Station',
'Modify' fields of the API control blocks)
4.2.3.4 Error Control
Error control is needed to guard against loss or damage of data and control
information. Most techniques involve error detection and retransmission.
Retransmission may involve timers. For example, if a frame that is sent on a
physical link gets corrupted en route, the receiver cannot process the
information and properly acknowledge receipt of that frame. The only way to
recover is for the sender to keep a timer. When the timer expires, the sender
assumes that any unacknowledged frames have not reached the destination and
takes the appropriate actions dictated by the protocol error control
definitions. As with flow control, error control must be implemented by
several layers since each layer may need to guarantee that data has been
successfully exchanged.
4.2.3.5 Sliding Window Protocol
The need to use communication channels efficiently, both in terms of Flow
Control and Error Control, gave rise to the concept of the "Sliding Window
Protocol," (also referred to as "Dynamic Window Protocol"). In this protocol,
there is the concept of a "window" which is defined as the maximum number of
outstanding, unacknowledged blocks (or frames) between two systems. Under
this protocol, Flow Control is achieved by limiting the maximum number of
outstanding (unacknowledged) frames a sender can have. Error Control is
achieved by requiring that a receiver acknowledge the frames that have been
successfully received.
The sliding window protocol provides Flow Control efficiency by:
1. Allowing the receiver to leave individual frame acknowledgements
outstanding by waiting until a "window's worth" of frames have been
received before sending back an acknowledgement.
Likewise, efficiency is provided by allowing the sender to have multiple
outstanding frames (i.e. sending multiple frames before stopping to wait
for acknowledgement from the receiver). This results in better
utilization of the physical link since having to wait for an
acknowledgement at a window boundary slows down the sender.
Note that the number of frames allowed to be outstanding does not have to be
equal between the sender and receiver.
2. Allowing acknowledgements to flow with reverse traffic data, known as
piggybacking. This reverse traffic may or may not have anything to do
with the acknowledgement that is "riding" on it.
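An illustrative exchange (the frame numbers and the window of 3 are
arbitrary, not Communications Manager settings):
   Sender (send window = 3):      sends frames 1, 2, 3, then must wait
   Receiver (receive window = 3): defers its acknowledgement until frame 3
                                  arrives, then returns a single
                                  acknowledgement covering frames 1-3, or
                                  piggybacks it on a data frame of its own
   Sender:                        window reopens; frames 4, 5, 6 are sent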
Communications Manager parameters that are used for the Flow Control part of
Sliding Window Protocol are:
o APPC, 3270 Emulation, SNA GATEWAY, SQLLOO
- The DLC 'Send Window Count' and 'Receive Window Count' parameters
o NetBIOS
- 'Maximum Transmits Outstanding' and 'Maximum Receives Outstanding'
parameters
For Error Control and Recovery, the Sliding Window Protocol provides the
following functions:
1. Response Timer
This defines the maximum time to wait for acknowledgements before going
into error recovery mode. Communications Manager parameters used here
are:
o NetBIOS: Response Timer Interval
o 802.2: Response Timer (T1)
2. Retransmission Count
The number of times to attempt the checkpointing procedure; it prevents
continual retransmission of the same frame. A Communications Manager
parameter used for this is the NetBIOS 'Retry Count' parameter.
3. Receiver Acknowledgement Timer
The maximum time to wait for the occurrence of reverse traffic on which to
"piggyback" an acknowledgement. Since the sender has to save up to a full
window of sent frames until they are acknowledged (in case they need to be
resent), the protocol offers a bounded amount of time to get an
acknowledgement even if there is no reverse traffic. Communications
Manager parameters used for this function are:
o NetBIOS: Acknowledgement Timer Interval
o 802.2: Acknowledgement Timer (T2)
4.3 Communications Performance Considerations
ΓòÉΓòÉΓòÉ 7.3. 4.3 Communications Performance Considerations ΓòÉΓòÉΓòÉ
Tuning a system allows you to achieve the maximum utilization of resources for
a particular environment. This naturally translates into levels of performance
that satisfy the user requirements.
The most important factor affecting PC communication performance is the
communication link - its data transfer rate and response time.
o For teleprocessing links, communications is limited by the maximum bit rate
of the link. OS/2 Communications Manager supports Asynchronous
communications up to 19200 bps when transmitting, and 1200 to 19200 bps when
receiving. SDLC is supported up to 19200 bps.
o For high speed links such as 3270 DFT and Local Area Networks (LANs),
communications is typically limited by the speed of the PC processor and the
pathlength of the communications software.
For the purposes of this chapter, it is assumed that the choice of
communication link has already been made.
Following are tuning concepts and tips for optimizing Communications Manager
performance. Tuning specifics for each parameter are found in Section 4.4
Communications Manager Parameters and Tuning Specifics.
4.3.1 General Tuning Guidelines
ΓòÉΓòÉΓòÉ 7.3.1. 4.3.1 General Tuning Guidelines ΓòÉΓòÉΓòÉ
Although there are many Communications Manager parameters that can potentially
be modified, the most significant performance improvements will be found by
modifying parameters that affect the following key areas. The area that can
provide the most performance benefit is the size of data transfer blocks.
1. Size of data transfer blocks
2. Pacing and windowing values
3. Keeping connections available
The general direction to head in each of these three areas is to "increase"
values. The benefits of increasing values in each area are:
1. Increased data transfer block size (buffer, message, block, RU, frame,
etc.)
o The larger the data blocks, the less overhead is transmitted over a
period of time.
Examples of related Communications Manager parameters are:
o DLC profiles: 'Maximum RU Size'
o APPC Transmission Service Mode: 'Minimum RU Size' and 'Maximum RU Size'
2. Increased values for pacing, windowing
o Improved throughput
o Better line utilization and overlap between communicating partners
Examples of related Communications Manager parameters are:
o NetBIOS: 'Maximum Transmits Outstanding' and 'Maximum Receives
Outstanding'
o LAN DLCs: 'Send Window Count' and 'Receive Window Count'
o APPC Transmission Service Mode: 'Receive Pacing Limit'
3. Keeping connections available (that is, active, up, and running)
o The connection is already in data transfer mode, thus eliminating the
time to go through connection setup flows and processing.
Examples of related Communications Manager parameters are:
o DLC profiles: 'Free Unused Link'
o LAN DLCs: 'Congestion Tolerance'
o APPC Transmission Service Mode: 'Session Limit'
o APPC Initial Session Limit: 'Number of automatically activated sessions'
Of course, there are other factors that may limit the degree to which these
values can be increased. For example, increasing data block size may have an
adverse effect on errors (see 4.3.2.1 Communications Medium Errors). Increased
values may also consume limited system resources such as RAM, CPU, and the
adapter. A point may be reached where resources are over-committed, resulting
in marginal or negative performance improvements. (One way that resource
over-commitment may manifest itself is in dropped frames.) When looking at
overall resource usage, you may also have to consider other non-communications
applications running on the same machine, as well as the traffic generated by
other systems using the same line. Refer to Section 4.4 Communications
Manager Parameters
and Tuning Specifics for help on where tradeoffs between performance and
resource utilization occur.
Other miscellaneous tuning guidelines are:
o Don't configure or start services unless they are regularly used, since
unused services waste RAM.
o Move frequently accessed data closer to the end user. For example, reduce
the number of bridges that are crossed.
o Reduce the number of trips across communications lines and LANs.
4.3.2 System Considerations
ΓòÉΓòÉΓòÉ 7.3.2. 4.3.2 System Considerations ΓòÉΓòÉΓòÉ
Communications medium errors and system resource constraints are major factors
that introduce performance impacts for a given configuration. Following is a
discussion of these factors and the negative impact they have on performance.
4.3.2.1 Communications Medium Errors
Communications medium errors occur when data in a transferred block is
corrupted en route. It is the responsibility of the protocol(s) to recover
from this loss in an orderly fashion. This discussion covers error recovery in
a terrestrial communications environment only. There are many "extension"
products available today that allow OS/2 communications in a non-terrestrial
environment -- that is, for OS/2-supported protocols to be carried across
non-supported bridges and satellite connections. However, the success of this
will depend on how well the extension product abides by the design assumptions
of the incoming LAN protocol. If the extension product does not keep these
design assumptions, problems that are beyond the scope of OS/2 tuning may
occur.
Assuming a terrestrial communications environment then, there are two basic
ways to recover from errors that occur when data in a transferred block is
corrupted en route:
o If there is no more incoming traffic, then a timer will be used either at
the Sender or Receiver to trigger either a retransmission or a request for
retransmission respectively.
o If there is more incoming traffic, the Receiver will notice that a block has
been lost. Again, either the Sender will retransmit based on a timer or the
Receiver will issue a request for retransmission at the point where the
discrepancy was detected.
Error rates are determined by several factors (none of which are affected by
tuning parameters):
o Type of medium
o Type of connection
o Network topology
o Amount of network traffic
However, error rates can be affected by data frame sizes. The larger the
frame, the greater the chance that it will contain an error, and the longer
it takes to retransmit. The larger the window, the more frames there are
that may have to be retransmitted.
As a user, you need to determine the error rate that will be tolerated.
Hence, tuning with respect to communications medium errors involves finding
appropriate frame and window sizes where the benefits outweigh the cost of
the resulting error retransmissions.
4.3.2.2 Constrained System Resources
The tasks of keeping communications connections available, keeping user data
until it is acknowledged (on the transmit side) or until the application
requests it (on the receive side), and managing the communications layers all
consume system resources -- memory, CPU, communications adapters, and links --
and are subject to the limitations of the various communications layer
implementations. Data sizes and flow control (pacing, windows) are the two
main factors that cause resource overcommitment, which results in system
performance degradation.
Below is a general description of what happens in a resource constrained
environment. Discussion of the specific Communications Manager parameters
that allow the user to configure the communications environment is deferred to
section 4.4 Communications Manager Parameters and Tuning Specifics.
o Swapping
As more and more memory is used for buffers to hold the data being
exchanged, memory can become overcommitted. This will cause OS/2 to use its
virtual memory management algorithms which result in swapping or discarding
code and data. When swapping to disk occurs, system performance degrades
since disk access is much slower than memory access. Most of Communications
Manager's use of memory is short-term. By using flow control and
constraining the frame size, better overall system performance can be
achieved. In a very overcommitted system, even connections that are active
but unused can cause swapping and thus degrade system performance.
o Discarding
At the adapter level, both the memory on the adapter and the physical memory
in the system unit provided by the device driver have to be available at the
time that a data block arrives. If this memory is not available, the data
block has to be discarded. This may take place between the link and the
adapter or between the adapter and the system unit. This will cause
retransmissions, reducing both throughput and link utilization. The faster
the link, the more noticeable this discarding will become. This is because
as the actual link transmission time becomes smaller, the overhead involved
in retransmitting discarded blocks makes up a larger share of the overall
communications handling time.
4.3.3 Hardware Considerations
ΓòÉΓòÉΓòÉ 7.3.3. 4.3.3 Hardware Considerations ΓòÉΓòÉΓòÉ
4.3.3.1 Multiple Network Cards
This suggestion only applies to Token Ring adapter cards.
As described in 2.2 Methodical Approach, it is important to identify the
correct bottleneck before applying time and resources to solve it. If it has
been determined that the communications adapter card is the bottleneck (and
neither the Token Ring itself nor the CPU are overloaded), adding an additional
Token Ring card may be desirable. Possible situations where this could be true
are:
o If reliability of the card is a concern, having 2 cards would enable
communications to go through either adapter dynamically.
o If there is too much activity through the one adapter card, it may be
possible to off-load some of the work to another card. Note that if the
adapter card really is the bottleneck, adding another card may result in the
CPU showing up as the next bottleneck.
4.3.3.2 Communications Adapter Configuration
Communications Manager assumes that the communications adapter on the system
is the same as that described in the active Communications .CFG file. If the
contents of the .CFG file do not match the hardware, Communications Manager
may either end up under-utilizing the adapter, or trying to send too much data
to it.
Additionally, be sure that your adapter supports the parameters specified in
the configuration file (for example, half-duplex for SDLC).
4.3.4 Programming Considerations
ΓòÉΓòÉΓòÉ 7.3.4. 4.3.4 Programming Considerations ΓòÉΓòÉΓòÉ
See the discussion in 7.2.2.1 Threads and Priorities as it relates to
communications applications.
4.4 Communications Manager Parameters and Tuning Specifics
ΓòÉΓòÉΓòÉ 7.4. 4.4 Communications Manager Parameters and Tuning Specifics ΓòÉΓòÉΓòÉ
Before delving into specific Communications Manager parameters, this
introduction will provide guidelines on which parameters to address for a
particular communications feature, and in what order they should be considered.
4.4.1 Communications Manager Structure and Tuning Flow
ΓòÉΓòÉΓòÉ 7.4.1. 4.4.1 Communications Manager Structure and Tuning Flow ΓòÉΓòÉΓòÉ
Parameter values are modified via the Communications Manager configuration
menus. The main menu (see Figure 4-2. "Communications Manager Main
Configuration Menu") is reached by selecting 'Advanced' from the
Communications Manager action bar and then selecting 'Configuration'. Menus
for each of the communications features are found from sub-menus. These menus
are illustrated in the appropriate parameter description sections.
The parameter descriptions in this section are arranged starting with those
from the lower Communications Manager layers. To use this section, refer to
Figure 4-3. "Communications Manager Structure for Tuning" and find the top
layer feature that you are using. Trace your way to the bottom layer that
describes your configuration. Note that the boxes for functional layers that
have performance parameters are outlined in a solid line. Following the figure
is Table 4-1. "Location of Parameter Descriptions" containing the section and
page reference to the appropriate parameter descriptions. To cover the
performance parameters pertinent to your configuration, start with the bottom
configuration file and work your way back up to the top.
Some parameter settings are ultimately set by a negotiation phase between
communicating partners. Where appropriate, this negotiation will be discussed
in the parameter description.
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-1. Location of Parameter Descriptions Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéCOMMUNICATIONS Γöé SECTION Γöé
ΓöéMANAGER FUNCTION Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé802.2 Profiles Γöé 4.4.2.1 IEEE 802.2 Configuration Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéSDLC DLC Γöé 4.4.3.2 SDLC DLC Adapter Profile Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéLAN DLCs Γöé 4.4.3.3 LAN DLC Profiles for Token Γöé
Γöé Γöé Ring, PC Network and ETHERAND AdaptersΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéNetBIOS Γöé 4.4.2.2 NetBIOS Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéAPPC and SNA Γöé 4.4.3 SNA Feature Profiles Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé3270 Gateway Γöé 4.4.3.8 SNA Gateway Profiles -- Host Γöé
Γöé Γöé Connection Profile Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé3270 Emulation and Γöé 4.4.4 3270 Feature Profiles Γöé
ΓöéFile Transfer Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDFT Γöé 4.4.5 DFT Considerations Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
4.4.2 LAN Feature Profiles
ΓòÉΓòÉΓòÉ 7.4.2. 4.4.2 LAN Feature Profiles ΓòÉΓòÉΓòÉ
Parameters described in this section are accessed from the LAN Feature Profile
menu illustrated in the following figure.
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
Γöé Γöé
Γöé Γöé
Γöé LAN Profile Configuration Γöé
Γöé Γöé
Γöé Γöé
Γöé Use the spacebar to select. Γöé
Γöé Γöé
Γöé Adapter number..................>0 Γöé
Γöé 1 Γöé
Γöé Γöé
Γöé Interface.......................>IEEE 802.2 Γöé
Γöé NETBIOS Γöé
Γöé Γöé
Γöé Operation...................... >Display Γöé
Γöé Change Γöé
Γöé Γöé
Γöé Γöé
Γöé Γöé
Γöé Γöé
Γöé Γöé
Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Γöé
Γöé Enter Esc=Cancel F1=Help F3=Exit Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Figure 4-4. Communications Manager LAN Profile Configuration
4.4.2.1 IEEE 802.2 Configuration
The 802.2 profiles are adapter specific. That is, before creating an 802.2
profile, the user is prompted for an adapter type. The parameter descriptions
that follow apply to all adapters unless otherwise noted.
The main guidelines for tuning 802.2 are:
1. Choose the optimum frame size on the LAN
2. Ensure an adequate number of transmit buffers
3. Ensure adequate receive buffer space
4. Ensure an adequate number of receive buffers
5. Tune timers for a remote-bridge environment
Descriptions of and recommendations for performance-related 802.2 parameters
follow.
Adapter "Shared RAM" or "Work Area"
The IEEE 802.2 subsystem has a memory constraint that is based on adapter
"Shared RAM". This "Shared RAM" contains overhead for link stations and
control blocks, as well as transmit buffers and receive buffers, as
illustrated in Figure 4-5. "Shared RAM on Token Ring Adapters".
o Shared RAM - Token Ring
On Token Ring adapters, "Shared RAM" physically resides on the adapter. The
amount of RAM available depends on the particular adapter type, as shown in
the following table.
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
Γöé Table 4-2. Token Ring Adapters and their respective Γöé
Γöé Shared RAM Size. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé ADAPTER ΓöéSHARED RAMΓöé
Γöé ΓöéSIZE Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIBM Token Ring Network Adapter Γöé8KB Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIBM Token Ring Network Adapter II - Full LengthΓöé8KB Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIBM Token Ring Network Adapter II - Half LengthΓöé16KB Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIBM Token Ring Network Adapter/A Γöé16KB Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIBM Token Ring Network 16/4 Adapter/A Γöéup to 64KBΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIBM Token Ring Network 16/4 Adapter Γöéup to 64KBΓöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Note: For IBM Token Ring 16/4 adapters, make sure that the adapter can use
the full 64KB of memory. This is done by setting the adapter page size to
16KB (paging is enabled) or 64KB. (For the IBM Token Ring 16/4 Adapter/A,
this is done with the Reference diskette. For an IBM Token Ring 16/4
Adapter, this is done through the appropriate DIP switches.) The default
adapter page size is 16KB. Setting it to 64KB does not provide much
benefit, and it also uses up a larger portion of the total 256KB of address
space available for shared RAM.
o Adapter Work Area - PC Network* and ETHERAND*
On these adapters, "Shared RAM" resides in system memory rather than on the
adapter card, and is referred to as the "Adapter Work Area." Therefore, the
size becomes a configurable parameter regardless of the hardware.
Unlike the Token Ring adapters, this work area contains the overhead for
link stations and control blocks as well as the receive buffers. However,
transmit buffers reside on the adapter card itself and their number is not
configurable.
Settings & Recommendation
Transmit Buffer Size
The maximum frame size that the adapter will transmit. It must be
divisible by 8.
Settings & Recommendation
Number of Transmit Buffers - Token Ring
The number of buffers available for transmit on the adapter. This parameter
regulates how many frames can be in progress between the device driver and
the adapter.
Settings & Recommendation
Minimum Receive Buffers
The amount of Shared RAM available for Receive Buffers will be whatever is
left:
- In adapter shared RAM (on the Token Ring adapter) after configuring
control blocks and Transmit Buffers, or
- In the adapter work area (in system RAM for PC Network and ETHERAND
adapters) after configuring control blocks.
Receive Buffers are a very precious resource since multiple Receive Buffers
are linked to contain an incoming frame. If the adapter runs out of Receive
Buffers, incoming frames will be discarded and the adapter code will go into
busy state. Retransmissions and extra flows are required to get out of busy
state. Care should be taken to configure in such a way as to avoid getting
into this condition. Fortunately, Receive Buffers are freed as fast as the
data they contain can be copied into system memory.
Settings & Recommendation
Receive Buffer Size
This parameter specifies the size of each Receive Buffer. As described
above, the amount of Shared RAM available for Receive Buffers will be
whatever is left:
- In adapter shared RAM (on the Token Ring adapter) after configuring
control blocks and Transmit Buffers, or
- In the adapter work area (in system memory for PC Network and ETHERAND
adapters) after configuring control blocks.
In determining the optimum Number of Receive Buffers and Receive Buffer
Size, the following two factors should be balanced:
- When large numbers of small buffers are linked to contain a large frame,
there is overhead cost both in processing the frame, and in the buffer
space required to contain the "link address" to the next buffer.
- On the other hand, configuring large buffers wastes space in handling
small frames. Also, since fewer buffers can be configured, a buffer
shortage may result. This buffer shortage will manifest itself by
incoming frames being discarded (see Minimum Receive Buffers above).
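An illustration of this balance using assumed figures (neither the 6KB of
remaining shared RAM nor the buffer sizes below are recommendations):
   6,144 bytes of shared RAM left for Receive Buffers:
      256-byte buffers:   6,144 / 256   = 24 buffers; a 2,000-byte frame needs
                          8 or more linked buffers, so linking overhead is high
      1,024-byte buffers: 6,144 / 1,024 =  6 buffers; large frames need fewer
                          links, but a burst of small frames exhausts the six
                          buffers quickly and wastes most of each buffer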
Settings & Recommendation
Override token release default - Token Ring ( 16/4 only)
16/4 Token Ring adapters allow the option of when to release the token.
When configured for 4 Mbps, the default is normal token release. When
configured for 16 Mbps, the default is early token release.
Settings & Recommendation
Timers
The most important functions of timers are:
- To start error recovery (e.g. sync-pointing),
- To provide a limit for the amount of time a frame will remain
unacknowledged,
- To check whether a link with no traffic is still active.
Usually, when a timer expires, it will cause one or more frames to flow that
carry no user data. Setting timers to a low value can cause overhead on the
network and decrease system utilization. Setting them to a high value can
delay error recovery and decrease throughput when errors occur.
For most scenarios, the defaults provided by OS/2 are optimum. Modifying
timers becomes a consideration when too many frames are being lost (and
hence need to be retransmitted). In order to effectively modify timer
values, a good understanding of error rates and communications traces is
required. For those who have this knowledge, the following sections
address timer tuning considerations. However, the following description of
how OS/2 timer values are calculated must be understood before going into
specific timer detail.
Timers involve values at two levels: Timer parameters and Timer Interval
parameters.
- Timer parameters are specified at the 802.2 level and are:
o Group 1 and Group 2 Response Timers (T1) (see below Group 1 Response
Timer (T1))
o Group 1 and Group 2 Acknowledgement Timers (T2) (see below Group 1
Acknowledgement Timer (T2))
o Group 1 and Group 2 Inactivity Timers (Ti) (see below Group 1
Inactivity Timer (Ti))
These values act as a "multiplier" to the timer interval values (described
below). Valid values specified in the Communications Manager 802.2 menu
are integers ranging from 0 - 255. However, the effective value in seconds
for these parameters is the specified parameter value * 40 milliseconds. In
the following timer parameter descriptions, both values set through the
configuration menus, and the 'effective value in seconds' are given.
Note however, that this is not yet the timer value used in communications.
To arrive at the final timer value, the effective 802.2 timer value (in
seconds) is multiplied by a timer interval value as described below.
Having two groups of 802.2 timer parameters enables 2 different sets of
timer values to be active at the same time in a system. Which group is
selected depends on the timer interval value (see below). Only the Group 1
Timers are covered here. Group 2 Timers have the same meaning.
- Timer interval values are passed to the 802.2 interface.
o For NetBIOS, this is specified via the Timer Interval parameters (see
section 4.4.2.2 NetBIOS).
o For LAN DLC's and the SQLLOO protocol, the default values of the
NetBIOS Timer Interval parameters are used by the system.
o For user-written 802.2 applications, the interval value is passed to
802.2 in a control block.
Valid Timer Interval values range from 1 to 10.
o If the value is in the range of 1 - 5, it is multiplied by the
'effective' Group 1 timer values (in seconds).
o If the value is in the range of 6 - 10, 5 is subtracted from it and the
result is multiplied by the 'effective' Group 2 timer values (in seconds).
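A worked example of this calculation (the values are arbitrary, chosen only
to show the arithmetic):
   802.2 Group 1 Response Timer (T1) parameter = 25
      Effective 802.2 value = 25 * 40 milliseconds = 1 second
   Timer Interval value passed to 802.2 = 2 (in the range 1 - 5, so Group 1
      is used)
      Final response timer value = 1 second * 2 = 2 seconds
Had the interval value been 7 (in the range 6 - 10), 5 would have been
subtracted and the remaining 2 would have been multiplied by the effective
Group 2 Response Timer value instead.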
Group 1 Response Timer (T1)
The Group 1 Response Timer is a multiplier that determines how long to wait
for a response to transmitted commands. The number provided will be
multiplied by 40 milliseconds to yield the effective Group 1 timer value.
Settings & Recommendation
Group 1 Acknowledgement Timer (T2)
The Group 1 Acknowledgement Timer is a multiplier that determines the
maximum delay to acknowledge a frame. The number provided will be
multiplied by 40 milliseconds to yield the effective Group 1 timer value.
Settings & Recommendation
Group 1 Inactivity Timer (Ti)
The Group 1 Inactivity Timer is a multiplier that determines the amount of
time that is considered an inactive period. The number provided will be
multiplied by 40 milliseconds to yield the effective Group 1 timer value.
This timer results in notification to the user; it has no other
implications for 802.2.
Settings & Recommendation
Number of queue elements
This item specifies the amount of queue space to reserve in memory. Queue
elements are used internally by 802.2 to keep track of certain events or
actions. When the condition that a queue element is tracking completes, the
queue element will be made available for reuse. When 802.2 runs out of
queue elements, it can no longer handle many events (e.g. new user control
block requests). Thus, the system should be configured with enough queue
elements to handle the traffic through the 802.2. Each queue element takes
22 bytes.
Settings & Recommendation
4.4.2 LAN Feature Profiles (continued)
ΓòÉΓòÉΓòÉ 7.4.3. 4.4.2 LAN Feature Profiles (continued) ΓòÉΓòÉΓòÉ
4.4.2.2 NetBIOS
NetBIOS offers two kinds of services:
o datagram (connection-less), and
o message on a session (connection-oriented).
NetBIOS is a user of 802.2. The maximum size frame that NetBIOS will use is
limited by what is configured for the Transmit Buffer Size parameter of the
IEEE 802.2 Profile. For NetBIOS sessions (the result of a Call or Listen),
the frame size is further restricted by a negotiation that takes into account
the maximum frame size the receiver can accept and the smallest of the
maximum frame sizes allowed by all bridges along the designated route
(source routing).
Full Buffer Datagram
This item selects whether the user of NetBIOS will require the full transmit
buffer (as specified for the Transmit Buffer Size in the IEEE 802.2 profile),
for NetBIOS datagrams. If no, the NetBIOS datagram buffer size will be
limited to 512 bytes.
Settings & Recommendation
Number of remote names
This item allows the user to select the maximum number of remote names for
which NetBIOS will remember each name's adapter address. This is called the
NetBIOS remote directory. The benefit of this is that for names contained in
the directory, broadcast frames used to obtain destination addresses are not
sent. In an environment with LAN bridges this can be beneficial for overall
network throughput and availability. The only consideration is that each name
in the remote name directory takes 70 bytes of memory from the NetBIOS Device
Driver Work Area, which is limited to 64KB per adapter.
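For example (the count of 100 names is arbitrary): configuring 100 remote
names reserves 100 * 70 = 7,000 bytes, roughly 7KB of the 64KB NetBIOS Device
Driver Work Area for that adapter.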
Settings & Recommendation
Datagrams use remote directory
This item allows the user to configure NetBIOS to use the remote directory for
datagrams sent to specific names. By doing this, broadcast traffic on the LAN
is decreased.
Settings & Recommendation
Maximum transmits outstanding
This item allows the user to select the window that NetBIOS will use on
transmitted frames.
Settings & Recommendation
Maximum receives outstanding
This item allows the user to select the window that NetBIOS will use on
received frames. The acknowledgements will be deferred until T2 expires or
this number of frames is received.
Settings & Recommendation
Retry count (all stations)
This item allows the user to decide the maximum number of times to
retransmit an unacknowledged data frame.
Settings & Recommendation
Response Timer Interval (T1)
This item specifies a number that is multiplied by the Group 1 or Group 2
Response Timer value in the IEEE 802.2 profile. For a specific instance of a
link station connection between two adapters, NetBIOS will increase this
interval value by 1 for each bridge encountered along the route between the
two adapters. Refer to the discussion in the section Timers for information
on how the resulting Response Timer value is calculated.
Settings & Recommendation
Acknowledgement Timer Interval (T2)
This item specifies a number that is multiplied by the Group 1 or Group 2
Acknowledgement Timer value in the IEEE 802.2 profile. For a specific
instance of a link station connection between two adapters, NetBIOS will
increase this interval value by 1 for each bridge encountered along the route
between the two adapters. Refer to the discussion in the section Timers for
information on how the resulting Acknowledgement Timer value is calculated.
Settings & Recommendation
Inactivity Timer Interval (Ti)
This item specifies a number that is multiplied by the Group 1 or Group 2
Inactivity Timer value in the IEEE 802.2 profile. For a specific instance of
a link station connection between two adapters, NetBIOS will increase this
interval value by 1 for each bridge encountered along the route between the
two adapters. Refer to the discussion in the section Timers for information
on how the resulting Inactivity Timer value is calculated. When Ti expires,
NetBIOS issues a session-alive frame to the other adapter to check for link
failure.
Settings & Recommendation
4.4.3 SNA Feature Profiles
ΓòÉΓòÉΓòÉ 7.4.4. 4.4.3 SNA Feature Profiles ΓòÉΓòÉΓòÉ
This section covers the following profiles, all of which are found on the SNA
Feature Profiles menu (see Figure 4-6. "Communications Manager SNA Feature
Configuration Menu").
o SNA Base Profile
o Data Link Control (DLC) Profiles
1. SDLC DLC
2. Token Ring DLC
3. IBM PC Network DLC
4. ETHERAND DLC
o APPC profiles
1. APPC Logical Unit (LU) Profile
2. Partner LU Profile
3. Transmission Service Mode Profile
4. Initial Session Limit Profile
o SNA Gateway Profiles
4.4.3.1 SNA Base Profile
Auto-activate the attach manager
This item specifies whether the APPC attach manager is activated when the APPC
service is started. The attach manager must be active if any APPC Remote
Transaction Programs (TPs) will be running on this particular system.
Settings & Recommendation
4.4.3.2 SDLC DLC Adapter Profile
Load DLC
This item indicates whether this DLC support should be loaded during
initialization.
Settings & Recommendation
Free unused link
This item indicates whether the link is to be taken down when not being used.
Settings & Recommendation
Maximum RU Size
This item specifies the maximum SNA unit of transfer on the link. This value
must be greater than or equal to the RU Sizes in all the APPC configurations.
Communications Manager adds 9 bytes (for the Transmission Header, TH, and
Response Header, RH) to the actual RU size to accommodate SNA headers. Note
that the total of 'TH+RH+RU' must equal the value of the MAXDATA parameter on
the Host.
Settings & Recommendation
Send Window Count
This item indicates the maximum number of data packets or frames that can be
sent before an acknowledgement is received. For SDLC, this is a Flow Control,
an Error Control, and a buffering statement. Therefore, it must exactly match
the partner(s) Receive window count.
Settings & Recommendation
Receive Window Count
This item indicates the maximum number of data packets or frames that can be
received before an acknowledgement is sent. For SDLC, this is a Flow Control,
an Error Control, and a buffering statement. Therefore, it must exactly match
the partner(s) Send window count.
Settings & Recommendation
LAN DLC Profiles for Token Ring, PC Network, and ETHERAND Adapters
SQLLOO uses LAN DLC configured parameters for its operation. Where
appropriate, they will be discussed.
Load DLC
This item indicates whether this DLC support should be loaded during
initialization.
Settings & Recommendation
Free unused link
This item indicates whether the link is to be taken down by SNA congestion
control when it is not being used.
Settings & Recommendation
Percent of incoming calls
This item specifies a percentage of the number of links configured to be
reserved for incoming calls.
Settings & Recommendation
Congestion Tolerance
This item specifies a threshold ratio of links in use to links configured
that, when exceeded, represents a congestion condition. When congestion has
been reached,
APPC begins deactivating unused sessions. When there are no sessions on a
link, APPC takes down the link.
Settings & Recommendation
Maximum RU Size
This item specifies the maximum SNA unit of transfer on the link. It must be
greater than or equal to the RU Sizes in all the APPC configurations. It
should also be 24 bytes less than the IEEE 802.2 transmit buffer size.
Note that Communications Manager adds 9 bytes (for the Transmission Header,
TH, and Response Header, RH) to the actual RU size to accommodate SNA headers.
The total of 'TH+RH+RU' must equal the value of the MAXDATA parameter on the
Host.
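A worked example of these relationships, using an assumed IEEE 802.2
Transmit Buffer Size of 2,040 bytes (for illustration only):
   IEEE 802.2 Transmit Buffer Size                 = 2040 bytes
   Maximum RU Size         = 2040 - 24             = 2016 bytes
   Host MAXDATA            = RU + TH + RH = 2016+9 = 2025 bytes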
Settings & Recommendation
Send Window Count
This item indicates the maximum number of data packets or frames that can be
sent before an acknowledgement is received. Therefore, it must exactly match
the partner(s) Receive window count.
Settings & Recommendation
Receive Window Count
This item indicates the maximum number of data packets or frames that can be
received before an acknowledgement is sent.
Settings & Recommendation
Timers
Settings & Recommendation
4.4.3.4 APPC Logical Unit (LU) Profile
LU Session limit
This is the maximum number of sessions to be allowed between the local LU and
all the partner LUs.
Settings & Recommendation
Maximum number of transaction programs
This is the maximum number of concurrently active transaction programs that
APPC should allow to run using this local LU.
Settings & Recommendation
4.4.3.5 APPC Partner Logical Unit Profile
Partner LU session limit
This item specifies the maximum number of sessions allowed between the local
LU and this partner LU.
Settings & Recommendation
4.4.3.6 APPC Transmission Service Mode Profile
Before describing specific parameters in this profile, the following 2
sections provide background on APPC session establishment and memory usage.
APPC Session Establishment
Communicating TP's (Transaction Programs) use a conversation to exchange
data. In order to establish a conversation, a session has to be established.
In order for a session to be established, the link has to be active.
When a link-level connection is established, the DLC and the partner will
optionally go through XID negotiation. The benefit of XID negotiation is that
it will make sure that the maximum size block flowing on the link is
acceptable to both DLC's and that the receive window on each DLC works with
the partner's send window. (Setting send and receive windows that work with
each other defines the link-level pacing.) Link-level connections are
established when a session needs to be established over that particular link.
Once the link is up, a session can be established. Sessions are started
either because a conversation is starting that requires a session or because
the user has configured Communications Manager to automatically start a
certain number of sessions at startup. (If a session exists and is not in
use, a conversation can be allocated on that session.) Independent APPC
sessions go through a negotiable bind. One of the bind functions is to ensure
that the maximum size block being sent for a particular transmission service
mode is acceptable to the receiver. The bind also informs the partner what its
receive pacing window is going to be on that session. (These two things
define the session-level pacing.) The benefit is that a system, on a session
by session basis, can control the maximum size and number of data blocks that
a partner can send to it.
APPC Memory Usage
APPC allocates and deallocates memory as needed to accommodate RU sizes and
pacing limits. Therefore, setting these parameters high does not tie up
excess memory. However, setting the values too high does make overcommitment
of memory a possibility. When receiving, the DLC allocates memory as
specified by the Maximum RU Size in the appropriate DLC profile, plus SNA
headers (9 bytes for RH+TH). When sending, APPC allocates memory as specified
by the maximum RU size negotiated on the bind for that particular session.
Approximate memory usage limits are:
o For systems that are receiving data most of the time, the maximum memory
used per session is approximately (2 * Pacing * DLC Maximum RU size).
o For systems that are sending data most of the time, the maximum memory used
per session is approximately (2 * Pacing * Session Negotiated RU Size).
These calculations are summed up over all the active sessions. As pacing, RU
size, and number of sessions increase, the likelihood of Communications
Manager using large quantities of memory increases. The transmission mode
profile allows the user to properly tune the system.
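For example (the pacing window, RU size, and session count are arbitrary):
with a Receive Pacing Limit of 4 and a DLC Maximum RU Size of 2048 bytes, a
session that mostly receives can use up to about:
   2 * 4 * 2048 = 16,384 bytes (16KB) per session
   10 such active sessions = roughly 160KB at peak
Since this memory is allocated and freed as needed, the figures represent
peak short-term usage rather than a permanent reservation.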
Minimum RU Size
This item specifies the smallest RU size allowed for this mode.
Settings & Recommendation
Maximum RU Size
This item specifies the largest RU size allowed for this mode.
Settings & Recommendation
Receive pacing limit
Pacing provides per session flow control between sender and receiver. Pacing
prevents the sender from flooding the receiver with data. The value set in
this parameter specifies the size of the pacing window, the number of RU's
that the sending LU can send on a session before requiring permission for
more.
Settings & Recommendation
Session Limit
This item sets the maximum number of sessions permitted for this mode.
Settings & Recommendation
4.4.3.7 APPC Initial Session Limit Profile
Minimum number of contention winners source
This item specifies the number of sessions for which the local LU will be the
contention winner. A conversation can be allocated on a contention winner
session without requiring permission from the partner.
Settings & Recommendation
Minimum number of contention winners target
This item specifies the number of sessions for which the partner LU will be
the contention winner. A conversation can be allocated on a contention winner
session without requiring any permission from the partner.
Settings & Recommendation
Number of automatically activated sessions
This item specifies the number of automatically activated sessions by the LU
when it is brought up (rather than bringing it up on an allocate request from
the TP).
Settings & Recommendation
4.4.3.8 SNA Gateway Profiles -- Host Connection Profile
General Gateway Information
SNA Gateway provides connectivity to a host from multiple workstations without
requiring a separate connection to the host in each workstation.
The gateway does not perform segmenting for traffic through it. For
non-negotiable binds (3270) coming from the host, if the host RU size is
greater than that supported by the DLC(s) involved, it will be rejected. For
negotiable binds (APPC) coming from the host, if needed, the RU size will be
reduced on the bind to the maximum supported by the DLC(s) involved.
Otherwise, the gateway just passes the bind on to the appropriate workstation.
Being an SNA pass-through gateway, the gateway has no means of flow control,
so it relies on the link-level flow control of the DLC (Send and Receive
Window Count) and the communicating partner's session level pacing.
Therefore, for gateways with lots of workstations and/or slow links to the
host, host and workstation parameter tuning should be exercised appropriately
to avoid overcommitting resources on the gateway machine. Pacing is the
appropriate mechanism to do this. Remember that, for each active session,
(2 * Pacing * RU Size) is also the maximum memory that could be used at the
gateway.
For best performance, the Maximum RU Size for the gateway should be set to the
Host MAXDATA value. The Maximum RU Size for all workstations going through
the gateway (that is, the Maximum RU parameter for each of the DLCs) should be
equal to that of the gateway.
SNA Gateway RAM
The following information on additional memory required per LU is based on a
test involving 7 PS/2 Mod 60, 70 and 80 workstations, Token Ring connected to
the gateway. Each workstation used 4 LUs (more than 4 over-utilized the CPU).
The gateway machine was a PS/2 Mod 80. An EHLLAPI application emulating MFI
was used for the measurement.
o There is negligible incremental memory used per defined LU.
o There is negligible incremental memory used per logged-on, but idle LU.
o Incremental memory usage depends on how heavy the aggregate workload is --
that is, on how heavy the traffic is and how much data is being transmitted.
o 3MB of RAM (the minimum required for the Extended Edition version) will be
sufficient for most environments with a typical workload.
o If SWAPPER.DAT becomes large, additional RAM should be added.
o Host, control unit, VTAM definition, line speed, and media type are the
important factors affecting gateway performance.
Host Connection Profile
Permanent Connection: This item specifies whether the link to the host will
be permanent.
Settings & Recommendation
Auto-logoff Timeout: This item specifies the number of minutes a workstation
is inactive before it is automatically logged off. This in only applicable to
LU's that have set auto-logoff to yes.
Settings & Recommendation
4.4.3.9 LUA Considerations: Maximum Number of LUs
Following are factors that affect the maximum number of LUs that can be
supported by an LUA application:
o Adapter used
o Link speed
o Amount and type of memory
o Performance.
Of these, performance is probably the most important factor in determining the
number of LUs that can be configured.
Based on the SNA architectural limit, 255 LU profiles can be defined in OS/2
Communications Manager. However, based on the performance criteria of the
user, the recommendation is to define no more LUs than will be used. The
reason for this is twofold:
o Each LU defined takes approximately 3KB memory.
For example, if there are 10 LU profiles defined, but only 4 activated, 6 *
3KB or 18KB of overhead is incurred for those LUs not used, as well as the 4
* 3KB plus additional memory to activate those being used.
o Link activation time is affected directly by the number of LUs defined
and brought active.
The more LUs configured, the longer it will take to activate the link
connection. Therefore, link speed capability and inherent delays within the
link connection will also impact performance.
Typically, on a model 80 PS/2, approximately 30-40 LUs can be defined
comfortably, although this will again depend on performance requirements.
From that point, the performance level of the system must be gauged to
determine whether the number of LUs should be reduced or whether they can be increased.
4.4.4 3270 Feature Profiles
ΓòÉΓòÉΓòÉ 7.4.5. 4.4.4 3270 Feature Profiles ΓòÉΓòÉΓòÉ
For 3270 connections to the host over SDLC, LAN, and DFT-SNA, the RU Size and
pacing are determined by the host and specified in the SNA bind processing.
This bind is non-negotiable; that is, the values for RU Size and Pacing cannot
be changed. Because the 3270 subsystem supports segmentation of RUs into
PIUs, the effective transfer size on the network is the size specified by the
DLC and its partner. The RU size is limited only by the value configured on
the host for this session. For 3270, then, the key components for data transfer
size are the appropriate DLC, the maximum link speed, and the maximum RU size.
Parameters affecting 3270 are found from the 3270 Feature Profile menu, shown
in the following figure.
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
Γöé Γöé
Γöé 3270 Feature Configuration Γöé
Γöé Γöé
Γöé DFT terminal/printer emulation Γöé
Γöé 1. DFT 3270 profile... Γöé
Γöé Γöé
Γöé Γöé
Γöé Non-DFT terminal/printer emulation (only one may be configured).Γöé
Γöé Γöé
Γöé 2. SDLC 3270 profile... Γöé
Γöé Γöé
Γöé 3. IBM Token Ring Network 3270 profile... Γöé
Γöé Γöé
Γöé 4. X.25 3270 profile... Γöé
Γöé Γöé
Γöé 5. IBM PC Network via Gateway 3270 profile... Γöé
Γöé Γöé
Γöé 6. ETHERAND Network via Gateway 3270 profile... Γöé
Γöé Γöé
Γöé DFT and non-DFT options Γöé
Γöé Γöé
Γöé 7. 3270 color and alarm... Γöé
Γöé Γöé
Γöé 8. 3270 file transfer... Γöé
Γöé Γöé
Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Γöé
Γöé Esc=Cancel F1=Help F3=Exit Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Figure 4-7. Communications Manager 3270 Feature Configuration Menu
4.4.4.1 Logical Terminal Session Profile
Data transfer buffer size override (KB)
This item specifies the buffer size in kilobytes to use when transferring data
to and from the host, in File Transfer and SRPI.
Settings & Recommendation
4.4.4.2 Logical Printer Session Profile
Printer buffer size
This item specifies the buffer size for printer data.
Settings & Recommendation
4.4.5 DFT Considerations
ΓòÉΓòÉΓòÉ 7.4.6. 4.4.5 DFT Considerations ΓòÉΓòÉΓòÉ
The DFT adapter located on the workstation is viewed by Communications Manager
as being owned by the communications controller that the adapter is connected
to (e.g. a 3x74 controller). Hence, values that would otherwise be set on the
PC are determined by host supplied parameters. Specifically, the DFT frame size
is set by the host VTAM* RUSIZES parameter.
Since DFT does not support a sliding window, and there is only one logical
link, adapter resources do not need to be configured.
Note that in OS/2 communications, 'DFT' means 'coax-attached'.
4.4.6 Host-Directed Print
ΓòÉΓòÉΓòÉ 7.4.7. 4.4.6 Host-Directed Print ΓòÉΓòÉΓòÉ
4.4.6.1 Resource Limitations
For any graphics mode printer that is supported by an OS/2 PM printer device
driver, host-directed print uses the Graphics Programming Interface (GPI) API.
This causes the entire print job to be held in memory by the printer driver
until everything from the host has been received and submitted to the spooler
as a job (which causes it to be transferred from memory to disk). During host
to PC transmission, the result is the consumption of a great deal of memory as
well as disk space (due to swapping), which can lead to resource availability
problems. Once the job is on the PC, disk space is consumed with the queued
job.
Note: This is dependent on the printer driver, since the driver controls how
memory is allocated.
These resource availability problems manifest themselves via System Errors
(logged in the OS/2 System Log and accessed via the SYSLOG command). These
errors originate from the Presentation Manager (e.g. being unable to allocate
memory or disk space) and are returned to Host Print through the GPI API call
being used. The return codes are byte reversed and are located at the start of
the data section of the log. Some of the error return codes that may be logged
are:
o x'2006': PMERR_BASE_ERROR
o x'203D': PMERR_INSUFFICIENT_DISK_SPACE
o x'203E': PMERR_INSUFFICIENT_MEMORY
Recommended solutions to this resource problem are:
o Add more memory
o Move SWAPPER.DAT to a larger partition or a new hard disk.
o Ensure that print jobs are no larger than can be handled by available swap
space. This may require large host files to be broken into smaller files
that are printed separately.
4.4.6.2 Host Print Performance
Host-Directed Print involves the movement of data files across multiple
components: host application, communications link, 3270 Host Print control,
OS/2 Print Manager, OS/2 Print Spooler, and the OS/2 printer device driver.
As a result, additional overhead in terms of time may be incurred between
print invocation and actual device printing. Specific factors affecting
performance are:
o Link speed
o Buffering of data
Since job queuing is a two-step process (first downloading the data from the
host to memory, which may cause swapping, and then queuing it to the printer),
the data may be moved several times.
o Device fonts used
Unsupported device fonts may be sent from the Host to OS/2, which can result
in a slower font (outline font) being selected at OS/2 print time. Best
print performance is achieved by using device fonts (as described in the
Presentation Manager Programming Reference Volume 1, 64F0276, or the IBM
OS/2 Programming Tools and Information, Version 1.2, 64F0273). When device
fonts are used, the printer driver doesn't have to emulate the font in
graphics (APA bitmap) mode.
o Printer device driver
The specific implementation of the printer driver has a definite effect on
performance. See the ITSC Cookbook: OS/2 Print Subsystem (GG22-3631) for a
discussion of printer drivers and the OS/2 print subsystem.
Note: Although OS/2 Host-Directed Printing provides 3287 emulation support
for LU1 and LU3 as well as non-SNA printing, it is spooled printing, not batch
printing. The spooling and buffering required may add elapsed time beyond that
of predecessor 3270 and 3270 emulator products.
Chapter 5 LAN Server and Requester
ΓòÉΓòÉΓòÉ 8. Chapter 5 LAN Server and Requester ΓòÉΓòÉΓòÉ
5.1 Introduction
5.2 OS/2 LAN Server Design and Performance Concepts
5.3 IBMLAN.INI Parameter Descriptions
5.4 DOSLAN.INI Parameter Descriptions
5.5 PROTOCOL.INI Parameter Descriptions
5.6 Performance Tuning Specifics
ΓòÉΓòÉΓòÉ 8.1. 5.1 Introduction ΓòÉΓòÉΓòÉ
This portion of the document covers performance tuning for the OS/2
LAN Server product. This includes OS/2 servers, OS/2 Requesters, and DOS LAN
Requesters (DLR). This portion is divided into three separate areas:
o Overall LAN Server performance concepts
o Parameter descriptions from the various setup files
o Specific tuning information.
All three areas are geared toward performance tuning.
Understanding the first two areas, concepts and parameter descriptions, will
make it possible to determine a performance tuning strategy for a specific
workload. The third section, specific tuning information, actually contains
parameter values for some general LAN environments. Some information from
other sections is repeated in the parameter descriptions; this is done to
allow these descriptions to be used as a reference without having to reread
the whole document.
5.2 OS/2 LAN Server Design and Performance Concepts
ΓòÉΓòÉΓòÉ 8.2. 5.2 OS/2 LAN Server Design and Performance Concepts ΓòÉΓòÉΓòÉ
This section attempts to explain the buffer usage and data flow of the OS/2 LAN
Server. This information, combined with the parameter information, should
give a better understanding of how the OS/2 LAN Server works and
allow for good tuning decisions.
Understanding the design concepts behind OS/2 LAN Server is necessary in order
to do any type of specific tuning. Each LAN workload has its own
characteristics, and no chart or table can give the optimum
parameter values for every workload. Because of this, it is
necessary to understand how OS/2 LAN Server is organized and functions. With
this understanding and knowledge of what type of requests the workload
applications are making, intelligent decisions can be made about which
parameters might need to be changed in order to improve performance. Without
this understanding, tuning can become very haphazard and good results are
seldom attained.
Figure 5-1. "Transfers from a Server to a Requester" shows the major components
that may influence the performance of servers and requesters. The diagram
describes the buffers on the server side and the corresponding buffers on the
requester side. The direction of the arrows indicates data moving from the
server to the requester. The bottom of the diagram includes the hardware
adapter buffers involved in the data transfer.
5.2.1 OS/2 LAN Server and Requester Buffers
5.2.2 Data Flow Control
5.2.3 DOS LAN Requester Buffers
5.2.1 OS/2 LAN Server and Requester Buffers
ΓòÉΓòÉΓòÉ 8.2.1. 5.2.1 OS/2 LAN Server and Requester Buffers ΓòÉΓòÉΓòÉ
This section describes the buffers used to move data between the OS/2 Server
and the OS/2 Requester. These buffers are pictured in Figure 5-1. "Transfers
from a Server to a Requester". The OS/2 Server buffers described are also used
with the DOS LAN Requester (DLR). The DLR buffers are described in another
section.
In addition to the buffers owned and managed by LAN Server, the OS/2 system has
buffers and cache of its own. See 3.3, File Systems, Caches, and Buffers and
Database Manager, LAN Server and HPFS Cache Size in Section 3.3.4 File System
Performance Considerations for more information.
5.2.1.1 OS/2 LAN Server
The server uses two types of buffers to handle read and write requests from
requesters on the LAN: request buffers and big buffers.
o Request Buffers
The most used buffers in OS/2 LAN Server are the request buffers. The
numreqbuf and sizreqbuf parameters in the server IBMLAN.INI file establish
the number and size of the request buffers. The default configuration
provides thirty-six 4KB buffers.
The server uses the request buffers to receive Server Message Blocks (SMBs)
from requesters and to hold data being transferred to or from the network
adapter card buffers. All read or write requests to the server that are 4KB
or smaller use the request buffers.
o Big Buffers
The other server buffers are big buffers. The numbigbuf parameter in the
server IBMLAN.INI file establishes the number of big buffers. They are 64KB
and not changeable. Read or write requests to the server that are greater
than the size of a request buffer use big buffers.
Examples of large read and write requests include program loads and the DOS
Copy command, which requests 64KB to be read and written. Applications can
issue read and write requests up to 64KB (the DOS and OS/2 limit).
Server throughput can be increased and requester application response time
decreased by transferring more data with each SMB sent across the LAN. Big
buffers are used in association with the SMB Raw protocol to further enhance
throughput and response time for large sequential file transfers.
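For reference, the server buffer parameters described above live in the Server
section of the IBMLAN.INI file. The following excerpt is an illustrative
sketch only, showing the default values discussed in this chapter (your
IBMLAN.INI will contain many other entries):
[server]
; request buffers: thirty-six 4KB buffers by default
numreqbuf = 36
sizreqbuf = 4096
; big buffers are fixed at 64KB each; only the count is configurable
numbigbuf = 5
See 5.3, IBMLAN.INI Parameter Descriptions, for the tuning guidance on each of
these parameters.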
5.2.1.2 OS/2 LAN Requester
OS/2 LAN Requester uses two types of memory spaces to provide data buffering
for the requester: work buffers and work cache.
o Work Buffers
Work buffers correspond to the request buffers on the server. The numworkbuf
and sizworkbuf parameters in the requester IBMLAN.INI file establish the
number and size of the work buffers. The default configuration provides
fifteen 4KB buffers.
The requester uses the work buffers to construct the SMBs that are sent to
the server. The work buffers also provide the data buffer between the
application running on the requester and the network adapter card.
The work buffers on a requester work in conjunction with the request buffers
on the server. For the best performance, both sets of buffers should be the
same size. The 4KB default size is best for the majority of LAN
environments, but can be adjusted. Memory on the Server or Requester can be
wasted if these buffers are not matched. The actual buffer value used by the
LAN will be negotiated to the lower of the two (sizreqbuf, sizworkbuf), thus
wasting the unused buffer space allocated.
It is necessary to increase numworkbufs only if multiple OS/2 applications
are using LAN Server concurrently.
o Work Cache
The work cache memory space corresponds to the big buffers on the server.
The maxwrkcache parameter in the requester IBMLAN.INI file establishes the
size of the work cache. The default configuration provides a 64KB work
cache.
Data transfers to and from the server that are greater than the size of a
work buffer use the work cache. Performance can be improved when multiple
OS/2 applications are using the LAN Server concurrently by enlarging the
work cache. When enlarging the work cache, use 64KB increments to correspond
to the big buffers on the server.
Note: The local file system cache in the requester does not cache any data
requests to the redirected drive.
5.2.1.3 OS/2 Buffer Matching
Corresponding buffers in the server and requester should be assigned
equivalent sizes. When buffers are not matched, the request-processing
overhead increases and can degrade performance. For instance, if the
maxwrkcache parameter in the IBMLAN.INI file on the requester is assigned a
value that is not a multiple of 64KB (hence, unmatched with the big buffers
size on the server), either the excess storage is not used or the server must
send multiple blocks of data to the requester to satisfy a request.
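As a hedged illustration of the cost of a mismatch: if the server is left at
the default sizreqbuf of 4096 but a requester sets sizworkbuf to 2048, the
buffer size actually used is negotiated down to 2048, so roughly 2KB of every
4KB request buffer is never used; with the default 36 request buffers, about
72KB of server memory is allocated but wasted.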
Figure 5-2. "I/O buffers for a Requester to Server transfer" and Figure 5-3.
"I/O buffers for a Server to Requester transfer" show how two different
buffering schemes handle a request from a workstation.
Figure 5-8. "I/O buffers for a DLR to Server transfer" and Figure 5-9. "I/O
buffers for a Server to DLR transfer" show the buffer relationship when using
DOS LAN Requester. The buffer structure used by a DOS LAN Requester is
similar to the structure used by an OS/2 LAN Requester.
5.2.2 Data Flow Control
ΓòÉΓòÉΓòÉ 8.2.2. 5.2.2 Data Flow Control ΓòÉΓòÉΓòÉ
OS/2 LAN Server is designed to optimize the movement of file I/O data from the
server to the requester. This is not a straightforward task because of the
many ways a user application can access data from a file system. It is helpful
to consider the following:
1. A read request for a small amount of data (for example, 512 bytes)
residing anywhere in the file being accessed.
2. A series of read requests, each for 512 bytes (one sector), located one
after the other on the fixed disk.
3. A large read request, larger than the request buffer size. An example of
this would be a large file transfer.
Case 1 includes random file access, case 2 includes sequential file
access, and case 3 includes large file transfer:
1. Random File Access
This type of file access is characterized by a request for a small amount
of data that may reside anywhere in the file. Often, this requires a disk
seek. Seeks require much more time than reading the same data from
memory.
Caching for small-record random I/O operations reduces the number of disk
seeks performed. If the file being accessed is small compared to the cache
size and is accessed frequently, much of the file is placed into the cache
to provide optimum response time for the random accesses to that
particular file.
As the size of the file becomes approximately equal to the cache size, the
likelihood of finding the desired data in the cache diminishes rapidly.
This is because the entire file may not be in the cache at any one time.
Unused cache pages are removed from the cache by the LRU algorithm as a
result of file I/O requests for other files at the server.
In the extreme case, the file is much larger than the cache, and the data
being searched for in the cache is rarely found. Performance may be
improved by increasing the cache.
2. Sequential File Access
This type of file access is characterized by successive calls to the file
system, requesting data that is physically contiguous on the fixed disk.
No physical movement of the read/write head assembly is required. Of
course, if the file being read or written is larger than the amount of
data contained on a disk cylinder, the read/write head assembly would have
to move to adjacent cylinders. This action takes less time than
multi-cylinder moves common to random file accesses. For sequential file
access, the throughput, or number of bytes per second transferred from the
fixed disk, can be much larger than for random file access since the time
required to move the read/write head assembly to the data location is
minimized.
3. Large File Transfer
This type of file access is characterized as a data request greater than
the size of the requester's work buffer. If a file is larger than 64KB, this
request is broken into separate requests less than or equal to 64KB. The
actual size of these requests is determined by the application requesting
the data. The amount of movement of the read/write head assembly depends
on how much of the file is contained in adjacent cylinders on the disk.
Depending upon the size of the file and amount of free space on the disk,
the file system sometimes splits up the file and writes it to different
areas of the disk.
Having considered three types of file access, consider how OS/2 LAN Server
handles each type. The following examples assume the Server's request buffers
are set to 4K bytes (sizreqbuf) and the Requester's work buffers are also set
to 4K bytes (sizworkbuf).
5.2.2.1 Random File Requests
An application running on the requester issues a DOS Read request for 512
bytes of data from the server. The request is passed to the Redirector, which
constructs a Server Message Block (SMB) for DOS Read and calls NetBIOS to
transmit the SMB to the server.
NetBIOS transports the SMB over Token-Ring or Ethernet** to the server, where
it is processed. The SMB is interpreted as a DOS Read of 512 bytes, and a call
to the Server's file system is issued. Since the amount of data requested is
less than 2KB, the file system reads 2KB from the fixed disk, writes it to
cache (assuming an HPFS drive and cache threshold), and returns to the DOS
Read call with the 512 bytes of data requested.
The data goes back to the requester in an SMB that correlates with the
original DOS Read SMB. The Redirector strips the data out and returns it to
the application's DOS Read call. This sequence of operations occurs with each
random file read.
See Figure 5-4. "SMB Protocol: Random Read of 512 Byte Record"
5.2.2.2 Sequential File Requests
Sequential file accesses can be handled in a more efficient manner than random
accesses since the location of the next data to be read is known.
An application at the requester issues a read request for 512 bytes of data.
At this point, the request is similar to the random file request and processed
the same. After that read is complete, the application issues a second read
for the 512 bytes immediately adjacent to the first block. OS/2 LAN Requester
interprets this adjacent data request as a signal that the file is being
accessed sequentially, and initiates a special mode of operation called
buffered read-ahead.
An OS/2 LAN Requester has, by default, 60KB of memory set aside for buffering
of SMB requests. This memory is divided into fifteen 4KB work buffers (whose
number and size can be changed in the IBMLAN.INI file: numworkbuf,
sizworkbuf).
The server has a corresponding set of 4KB (sizreqbuf) buffers called request
buffers. Once the requester is aware that the application is reading data
sequentially, it changes the DOS Read request coming from the application to
4KB from 512 bytes, since both the requester and server have enough buffer
space to handle 4KB as easily as 512 bytes.
When the application issues the next 512-byte read, the read occurs from the
requester's work buffer and the SMB is not issued. The server software also
detects the sequential access and goes into a complementary read-ahead. At
this point, the server brings 4KB from the fixed disk into another requester
buffer in anticipation of another 4KB request from the application. There
should be at least two requester buffers in the server for each active
workstation using that server.
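As a rough, hedged sizing example based on this guideline: a server with 15
concurrently active workstations would want at least 30 request buffers (the
numreqbuf parameter described in 5.3), which at the default sizreqbuf of 4096
bytes amounts to about 120KB of server memory dedicated to request buffers.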
Sending more data with each SMB increases server throughput and decreases
application response time. The decreased application response time is due to
the time saved by not constructing additional SMBs, sending them to the
server, processing them, and receiving the response. Even if an application is
not going to read more than two 512-byte records, the worst that can happen is
that some data sent to the requester will not be used. The requester-derived
signal that begins a buffered read-ahead operation is canceled when two
successive read requests are not contiguous.
See Figure 5-5. "Sequential Read of Multiple 512 Byte Records"
5.2.2.3 Large File Transfers
When this type of access is initiated, two types of protocols can be used: SMB
Raw Protocol, and Read Block Multiplex. Which of these two protocols is used
depends on the availability of big buffers. The SMB Raw protocol is used when
there are big buffers available and the Read Block Multiplex protocol is used
when these buffers are not available. The availability of big buffers is
determined by the first SMB request.
o SMB Raw Protocol
SMB Raw protocol enables fast data transfer across the network. The term raw
indicates that after the first SMB is sent, the remaining transmission is
all data (that is, no SMB headers). The SMB Raw protocol is initiated by the
following sequence:
1. An operator at a workstation copies a file from the server to the local
fixed disk. The DOS Copy command issues a 64KB read request to the
redirected drive, causing the Redirector to construct an SMB.
2. Detecting a large data request, the Redirector issues a special SMB
that requests 4KB (sizworkbuf) of data and polls the server for the
availability of a big buffer. The server sends the 4KB of data
requested and confirms the availability of a big buffer.
3. Now, the Redirector issues a Read Block Raw SMB for 60KB to the server.
The server fills one of the big buffers with 60KB of data and sends the
data. As previously described in Sequential File Requests above, the
server detects sequential accessing of the file being copied and starts
its own read-ahead of the next 64KB of the file into another big
buffer, if available. The SMB Raw protocol provides extremely fast
data transfer across the network.
See Figure 5-6. "SMB Raw Protocol: Large File Read"
o Read Block Multiplex
A variation to the SMB Raw protocol involves Read Block Multiplex. If the
server or requester in the previous sequence does not have any big buffers
available, Read Block Multiplex is used in place of Read Block Raw SMB. The
data is sent in 4KB (sizreqbuf) request buffers as quickly as they can be
set up and transferred to the work buffers on the requester.
Since SMBs are sent with each 4KB of data and 4KB data messages are sent
instead of 64KB messages, Read Block Multiplex is not as fast as Read Block
Raw SMB. The SMB Raw protocol can be initiated by any read or write request
greater than the work buffer size (sizworkbuf).
See Figure 5-7. "SMB Multi. Protocol: Large File Read"
5.2.3 DOS LAN Requester Buffers
ΓòÉΓòÉΓòÉ 8.2.3. 5.2.3 DOS LAN Requester Buffers ΓòÉΓòÉΓòÉ
This section describes the various buffers and memory areas used by the DOS LAN
Requester to move data to and from the OS/2 Server. Understanding how and when
these buffers are used makes tuning for a specific workload more of an intuitive
process.
5.2.3.1 DOS LAN Requester
The DOS LAN Requester uses two types of buffers: network buffers and big
buffers.
When a DOS LAN Requester makes a data read larger than the big buffer size,
another type of protocol is used: User Memory Transfer.
o Network Buffers
Network buffers correspond to the request buffers on the server. The /NBC
and /NBS parameters in the DOSLAN.INI file establish the number and size of
the network buffers. The default configuration provides four 1KB buffers.
The DLR uses the network buffers to construct the SMBs that are sent to the
server. The network buffers also provide the data buffer between the
application running on the DLR and the network adapter card.
Since the DLR is limited to the 640KB memory space, these buffers are kept
at small default values to conserve memory. If application memory
requirements permit, the /NBS parameter value can be increased to 2KB (or
4KB) to improve sequential-file-access performance for data requests smaller
than the big buffer size (/BBS). The /NBC parameter should not be reduced
from its default value.
o Big Buffers
Big buffers on the requester are used in conjunction with big buffers on the
server when SMB Raw protocol is initiated. The /BBC and /BBS parameters in
the DOSLAN.INI file establish the number and size of the big buffers. The
default configuration provides one 4KB big buffer.
Data transfers to and from the server that are greater than the size of a
network buffer use big buffers. As with network buffers, the amount of
memory allocated by default is limited to the 640KB DOS memory constraint.
The big buffers in the DLR are not as important as the work cache in the
OS/2 requester due to the DLR User Memory Transfer feature described in the
large file transfer section.
The big buffer is important in applications that require sequential writes
to the server in data buffer sizes greater than /NBS, including DOS Copy
operations directed to the server. The writes go across the network in
message sizes determined by the size of the big buffer (/BBS).
/BBS must be larger than /NBS, or a Net Start error message is issued.
o User Memory Transfer
When the read data request size is greater than the big buffer size (/BBS),
the User Memory Transfer function moves the data from the network adapter
card buffer directly to the user memory space allocated for the data. User
Memory Transfer moves the data for both program load and application read
requests, but does not move the data through big buffers before sending it
to the application. The big buffers are there to provide flexibility to a
user who wants to keep /NBS small but still have at least one large buffer
which can handle an application's common I/O record size. Use Table 5-1.
"Data Message Sizes for Sequential File Transfers" to help decide the best
memory/performance trade-off for specific applications.
Table 5-1. "Data Message Sizes for Sequential File Transfers" shows examples
of actual data message sizes sent across the LAN, depending upon the request
size and the various buffer sizes. All these values refer to sequential
file transfer with various record size requests.
The data in Table 5-1. "Data Message Sizes for Sequential File Transfers" can
be described by the equation:
If (RecordSize <= NBS) then MessageSize = NBS
Else If (RecordSize <= BBS) then MessageSize = BBS
Else MessageSize = RecordSize. (User Memory Transfer)
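To illustrate the equation with the default DOS LAN Requester buffer sizes
(1KB network buffers, one 4KB big buffer); these are example record sizes only:
o A 512-byte record request travels in a 1KB message (MessageSize = NBS).
o A 2KB record request travels in a 4KB message (MessageSize = BBS).
o A 16KB record request travels as a single 16KB message via User Memory
Transfer (MessageSize = RecordSize).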
5.2.3.2 DOS LAN Requester Buffer Matching
Because of the limited memory available in the DOS environment, there are
special considerations when matching DLR to OS/2 Server buffers. The default
values for DLR (1K) and OS/2 LAN Server (4K) are not matched. Matching these
buffers is desirable for the same reasons as in OS/2 LAN Requester. If the
value of the server's buffers (sizreqbuf) is lowered, special care should be
taken to match the whole network to that level.
See Figure 5-8. "I/O buffers for a DLR to an OS/2 Server transfer"
See Figure 5-9. "I/O Buffers for a Server To DLR Transfer"
5.3 IBMLAN.INI Parameter Descriptions
ΓòÉΓòÉΓòÉ 8.3. 5.3 IBMLAN.INI Parameter Descriptions ΓòÉΓòÉΓòÉ
When OS/2 LAN Server first starts, or when it needs more information, OS/2 LAN
Server checks the IBMLAN.INI file. This file contains all the default
information for both the Server and the OS/2 Requester. The DOS LAN Requester
has a corresponding file (DOSLAN.INI) which is covered in another section.
You can adapt OS/2 LAN Server to meet the needs of individual users on the
network. For example, a network with few users may require a network
definition different from that of a network with many users. You can alter the
Server and Requester definitions by changing parameter options in the
IBMLAN.INI file. This file is located in the IBMLAN subdirectory of every OS/2
Server or OS/2 Requester.
The following sections list those parameters that are most likely to be changed
while performance tuning the OS/2 LAN Server/Requester environment. The
descriptions are designed to explain the function of the parameter, when it
might be advantageous to change its value from the default, and how changing
the parameter value may affect performance. Parameters not listed in this
document are generally not thought of as performance related parameters, though
many of them are important to the functionality of OS/2 LAN Server. These
values should rarely be changed, and even then not for performance tuning
purposes.
5.3.1 Changing IBMLAN.INI
ΓòÉΓòÉΓòÉ 8.3.1. 5.3.1 Changing IBMLAN.INI ΓòÉΓòÉΓòÉ
Before changing any parameters, try running your LAN environment using the
default values in the IBMLAN.INI file. If the default values do not meet your
network needs, then you will want to go through the steps to adjust the
parameter values.
Many entries in the IBMLAN.INI file can be overridden for a logon session using
either the full-screen interface or the NET START or NET CONFIG commands. Other
parameters cannot be overridden. To change parameters that cannot be
overridden, or to make permanent changes to any of the parameters, you must
edit the IBMLAN.INI file.
The sections of the IBMLAN.INI files are:
o Networks
o Requester
o Messenger
o Server (for servers only)
o Alerter (for servers only)
o Netrun (for servers only)
o Replicator
o Netlogon (for servers only)
o Services.
Most changes in the IBMLAN.INI file take effect when the service corresponding
to the changed section is restarted. For example, changes to the computername
parameter in the Requester section take effect the next time the requester is
started.
When you edit the IBMLAN.INI file, keep in mind the following:
o Do not edit any default values in the Services section.
o While you can change values in sections other than Services, do not delete
any parameters or attempt to change parameter names. Parameter entries have
the form parameter=value. Change only the value portion.
o You can add comments to the IBMLAN.INI file. Comments must begin with a
semicolon (;) in column 1.
o Entry values in lists are separated by commas, except in the Replicator
section, where values are separated by semicolons.
The following three IBMLAN.INI file parameters are used when the computer
starts. If you change the values of any of these parameters, you must restart
your computer for the change to take effect.
o net1 parameter in the Networks section
o maxcmds parameter in the Requester section
o maxthreads parameter in the Requester section.
The following sections reflect the organization of the IBMLAN.INI file.
Parameters are listed with minimum, maximum, and default values.
The maximum values given for each parameter represent levels above which the
service will not start successfully. Actual resource constraints may prevent
these maximum values from being attained.
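As a small, hedged example of the parameter=value form and the comment
convention described above (the parameter and value shown are illustrative,
not a recommendation):
[server]
; allow four simultaneous heavy file-copy users (two big buffers each)
numbigbuf = 8
Because numbigbuf is in the Server section, the change takes effect the next
time the Server service is restarted; it is not one of the three parameters
that require the computer to be restarted.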
5.3.2 Performance Parameters
ΓòÉΓòÉΓòÉ 8.3.2. 5.3.2 Performance Parameters ΓòÉΓòÉΓòÉ
5.3.2.1 OS/2 Server
This section contains those parameters of the IBMLAN.INI file that affect Server
performance; they appear in the Server section of the IBMLAN.INI file. The
parameters are listed here in order of performance effectiveness. The first
parameters will have, in most cases, the greatest effect on performance. The
default values for these parameters are set up for an average LAN with 12 to 18
concurrent requesters. It is a good idea to run with these values first since
most environments will run well. If a performance problem occurs, try to
determine the characteristics of the specific workload, then proceed to change
one parameter at a time in order to see the effects of each change.
o numbigbuf
This parameter specifies the number of 64KB buffers the server uses for
moving large files or amounts of data. Normally, these buffers are used
while loading programs and copying files from the server to the requester.
Increasing the value of this parameter may improve performance for file copy
and program load.
Generally, you should have two big buffers for each simultaneous heavy-load
user. For example, if you have ten users but only three of them are copying
large files at the same time, you should have six big buffers. Generally,
this is considered a major tuning parameter, meaning that this should be the
first parameter changed if performance is poor for program load and file
transfers. Since each big buffer takes up 64KB of memory, do not allocate
more big buffers than the server can support.
numbigbuf is also directly related to srvheuristics 17 and 18. The Server is
initialized with 3 bigbufs; additional bigbufs are dynamically allocated as
needed up to the value of numbigbuf. Srvheuristic 17 sets how long
dynamically allocated buffers (beyond numbigbuf) stay resident in memory.
Srvheuristic 18 determines how long the Server waits after failing to
allocate additional big buffers before trying again. Dynamic allocation of
additional big buffers (greater than numbigbuf) will occur only for
write requests to the server, not read requests. (See the srvheuristics
section for more information.)
Default value: 5
Minimum value: 0
Maximum value: 128
o numreqbuf
This parameter specifies the number of buffers the server uses to take
requests from requesters. This should be one of the first parameters looked
at if there is a performance problem with the LAN Server environment. It is
important to determine the number of concurrent requesters the server is
dealing with and set this value accordingly. For best performance, the
server should have enough request buffers available to handle a peak request
workload. You should allocate two or three request buffers (numreqbuf) for
each requester actively sending requests to the server. Any more than 3
buffers for each concurrent requester probably will not help performance and
will use up memory. The request buffers are pooled and shared among all
requests coming to the server. Therefore, the recommended number of
allocated buffers can be less than 3 per requester as the number of active
requesters increases. It is important to realize that these buffers are
normally used for data requests up to sizreqbuf, therefore changes to
numreqbuf may affect performance in an environment with frequent requests of
this size. For environments where most of the data requests are larger than
sizworkbuf, refer to numbigbufs. The default value here is set for 12 to 18
concurrent users.
Default value: 36
Minimum value: 5
Maximum value: 408
The maximum depends upon the value of sizreqbuf, as defined by the
following equation:
max numreqbuf = (65536 ÷ (sizreqbuf + 260)) x 8
where 65536 is the size of one segment, 260 is framing overhead, and 8 is
the maximum number of segments allowed to be used for reqbufs.
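Working the equation through as a hedged check: at the default sizreqbuf of
4096, 65536 ÷ (4096 + 260) gives about 15 whole buffers per segment, so the
maximum numreqbuf is roughly 15 x 8 = 120. At the minimum sizreqbuf of 1024,
65536 ÷ 1284 gives 51 buffers per segment, or 51 x 8 = 408, which matches the
maximum value listed above.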
o numfiletasks
This parameter specifies the number of concurrent processes that handle file
and print requests from requesters. The processes (file tasks) are
multi-threaded, which allows multiple read/write requests to be processed
concurrently. There is a maximum of 48 threads per Filetask. Under normal
conditions this should be sufficient, but in an extreme environment where
more than 45 or so Requesters are all trying to access the same range of a
file at once, additional Filetasks could be necessary, especially if there
are other Requesters on the net trying to do other things.
Keep in mind that when the server exhausts its threads for a process, it
logs a message in the error log. Numfiletasks should normally be set to 1; if
you see such an error in your error log, you can increase this value.
Default value: 1
Minimum value: 1
Maximum value: 8
o sizreqbuf
This parameter sets the size, in bytes, of the buffers the server uses to
take requests from requesters. The value set for this entry should be the
same for every server on the network. The value of sizworkbuf on each
requester should also be equal to the value of sizreqbuf. There is a limit
to the buffer space available; therefore the value of sizreqbuf directly
affects the maximum value for numreqbuf. The maximum number that can be
configured can be calculated as follows:
max numreqbuf = (65536 ÷ (sizreqbuf + 260)) x 8
where 65536 is the size of one segment, 260 is a framing overhead, and 8 is
the maximum number of segments allowed to be used for reqbufs.
Default value: 4096
Minimum value: 1024
Maximum value: 32768
The size of these buffers affects the size that should be specified for the
transmit buffer size in the Communication Manager configuration for optimum
performance:
Transmit Buffer Size = Sizreqbuf + 128
The 128 assumes an SMB less than or equal to 78 bytes. Since the SMB size
is variable, the performance could be impacted if the SMB is consistently
above 78 bytes. The above equation is a rule of thumb for performance. The
network will still function if the value is smaller than this. The amount
of memory available will vary with different adapter types, and may make it
impractical to use the above equation. See the Communications Manager
portion of this document for more information.
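Applying this rule of thumb to the default sizreqbuf of 4096 (an illustrative
figure only):
Transmit Buffer Size = 4096 + 128 = 4224
so a Communications Manager transmit buffer size of 4224 bytes would be the
target, memory and adapter type permitting.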
5.3.2.2 OS/2 Requester
This section contains those parameters of the IBMLAN.INI file that affect Requester
performance; they appear in the Requester section of the IBMLAN.INI file. The
parameters are listed here in order of performance effectiveness. The first
parameters will have, in most cases, the greatest effect on performance.
Generally it is not necessary to change any of these parameters from their
default values, except in the case where the requester has multiple
applications concurrently accessing LAN Server resources.
o maxwrkcache
This parameter sets the size limit, in kilobytes, of the requester's
large-transfer buffers. Increase the maxwrkcache parameter value if your
requester has multiple applications running that are file-intensive, such as
copying large files to and from the server, and performance is poor. This
parameter must be a multiple of 64 in order to match the size of the
corresponding Server bigbufs. The Server bigbufs have a set size of 64KB.
The number of Server bigbufs is set by the numbigbuf parameter.
Since increasing this parameter may only be effective when a requester is
running multiple applications against the server and each increase uses an
additional 64KB of the requester's memory, it is generally not suggested
that this value be changed.
Default value: 64
Minimum value: 0
Maximum value: 640
o sizworkbuf
This parameter sets the size of requester buffers, in bytes. This value
should be a multiple of 512. It should be the same for every requester on
the network and equal to the sizreqbuf value used by servers. Since this
value should match all corresponding buffers on the network, it is not
recommended that it be changed. See the OS/2 buffer matching section for
more information.
Default value: 4096
Minimum value: 1024
Maximum value: 16384
The size of these buffers affects the size that should be specified for the
transmit buffer size in the Communication Manager configuration for optimum
performance:
Transmit Buffer Size = Sizworkbuf + 128
The 128 assumes an SMB less than or equal to 78 bytes. Since the SMB size
is variable, the performance could be impacted if the SMB is consistently
above 78 bytes. The above equation is a rule of thumb for performance. The
network will still function if the value is smaller than this. The amount
of memory available will vary with different adapter types, and may make it
impractical to use the above equation. See the Communications Manager
portion of this document for more information.
o numworkbuf
This parameter sets the number of buffers the requester can use to store
data for transmission to and from the server. These buffers are used in
constructing the SMBs sent to the server and also provide data buffering
between the application running in the requester and the network adapter
card.
This parameter will probably not impact performance a great deal and hence
should rarely be changed. However it may help requester performance to
increase this value if there are multiple applications on the requester
accessing server resources. If changing this value, be aware that each
additional buffer takes up memory and thus wastes memory if not used.
Default value: 15
Minimum value: 3
Maximum value: 50
o maxcmds
This parameter sets the maximum number of NetBIOS commands a requester can
send to the computer's network adapters simultaneously. Increase the value
of this parameter if you have multiple applications at the requester using
LAN Server simultaneously. Since command processing takes up memory, do not
specify a number higher than you need. The recommended value is 1.6 times
the value specified for the maxthreads parameter.
If you change this value, you must restart your computer to make the change
effective.
Default value: 16
Minimum value: 5
Maximum value: 255
o maxthreads
This parameter sets the maximum number of threads within a requester
available to handle simultaneous network requests. Increase the value of
this parameter if the requester has multiple applications using LAN Server
simultaneously. Since each thread takes up memory, don't change from the
default unless you suspect there are additional threads required.
If you change this value, you must restart your computer to make the change
effective.
Default value: 10
Minimum value: 10
Maximum value: 254
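For reference, the requester parameters described above appear in the
Requester section of IBMLAN.INI. The excerpt below is an illustrative sketch
showing the default values listed in this section (your file will contain
additional entries):
[requester]
sizworkbuf = 4096
numworkbuf = 15
maxwrkcache = 64
maxcmds = 16
maxthreads = 10
Note that the default maxcmds of 16 is 1.6 times the default maxthreads of 10,
the relationship recommended above; if maxthreads is raised, maxcmds should be
raised with it, and the computer restarted for both changes to take effect.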
5.3.3 Heuristics
ΓòÉΓòÉΓòÉ 8.3.3. 5.3.3 Heuristics ΓòÉΓòÉΓòÉ
The following sections contain descriptions for both the Server heuristics
(srvheuristics) and the Requester heuristics (wrkheuristics) contained in the
IBMLAN.INI file. Many of these heuristics depend upon each other's values.
These relationships should be understood before changes are made in order to
make certain that the changes are effective. Again, the default values are
considered "tuned" for most environments. They should not be changed unless
performance problems are understood and there seems to be possible benefit in
changing one of these low-level functions.
5.3.3.1 Srvheuristics
This section lists the srvheuristics used for fine tuning the Server's
performance and communication with the Requesters.
The srvheuristics parameter is set up as one long 19-character variable in the Server
section of the IBMLAN.INI file where each digit has an independent meaning.
Except where noted, each is a binary digit where 0 means off or inactive, and
1 means on or active. Other values are explained in the following descriptions
of each digit; an example of changing a single digit follows the digit descriptions.
1
0123456789012345678 (Position)
The default value is: 1111014111131110133
Digit Meaning
0 Use opportunistic locking when opening files. This lets the Server
assume that the first requester of the file is the only active process
using that file. The Server allows buffering of the reads and writes of
the file while preventing a second requester from accessing the file
until the buffered data is flushed. This buffering can take place even
though the user opens the file in deny-none sharing mode.
Default value: 1
For opportunistic locking to occur, both this heuristic and wrkheuristic
0 in the requester must be active. See wrkheuristic 0 and srvheuristic
15.
1 Use read-ahead (i.e. read additional data assuming this may be the data
the requester wants) when the requester is doing sequential access, as
follows:
Value Meaning
0 Do not use read-ahead
1 Use single-thread read-ahead
2 Use asynchronous read-ahead thread.
Default value: 1
This heuristic pertains to reading ahead into the server's buffers
(big buffers and requester buffers) from the file system and cache.
See Figure 5-10. "Heuristic Relationships: Large Record Buffering"
and Figure 5-11. "Heuristic Relationships: Small Record Buffering".
2 Use write-behind (i.e. tell the requester a write is completed before
actually doing the write). If the write generates an error, the error
appears on a subsequent write. Files opened with the write-through bit
set do not use write-behind.
Default value: 1
This heuristic pertains to writing behind from the server's buffers (big
buffers and requester buffers) to the file system and cache. See Figure
5-10. "Heuristic Relationships: Large Record Buffering" and Figure 5-11.
"Heuristic Relationships: Small Record Buffering".
3 Use chain sends.
Default value: 1
For the chain send NetBIOS command to work, both this heuristic and
wrkheuristic 8 in the requester must be active (set to 2, default). See
wrkheuristic 8 for a description.
4 Check all incoming SMBs for correct format. This is useful when using
mixed versions and brands of network software on the LAN.
Default value: 0
To prevent wasted CPU cycles in an OS/2 LAN Server environment, leave
this heuristic at the default.
5 Support File Control Block (FCB) opens (i.e. collapse all FCB opens for
a file to a single open). This is only useful for DOS applications on
the network.
Default value: 1
6 Set the priority for the server. The table referenced below lists the
possible priority values (0 is the highest priority and 9 is the
lowest). Refer to the OS/2 Technical Reference for a description of the
DosSetPriority command. Table 5-2. "Server Priority to OS/2 Dispatching
Priority Comparison" shows possible values:
Default value: 4
Server priority may be set to allow other applications to have more CPU
access, if required. For example, changing the priority from 4 to 5
causes applications on the server machine to respond more quickly, but
slows response to requests from the network.
7 Automatically allocate more memory for directory searches as needed, up
to maxsearches. If DOS requesters are on the network, set this to 1.
Default value: 1
This heuristic pertains to directory searches (DosFindFirst). Memory is
allocated dynamically instead of being locked up when it may not be
needed. See maxsearches on page 5.3.4 Capacity Related Parameters for a
description.
8 Write records to the audit trail only when the scavenger wakes (at
interval set by srvheuristic 10). The scavenger is a high-priority
server thread that monitors the network for errors, writes to the error
log and audit trail, and sends alerts (see srvheuristic 10).
When this is set to 0, anything that requires a write to the audit
trail wakes the scavenger.
Default value: 1
9 Do full buffering (as controlled by srvheuristics 1 and 2) when a file
is opened with deny-write sharing mode. When this is set to 0,
deny-write access has no buffering for any requester using this server.
See also wrkheuristic 23 .
Default value: 1
If an application breaks using buffering of deny-write opened files, use
this heuristic to disable buffering for all requesters. See Figure 5-11.
"Heuristic Relationships: Small Record Buffering".
10 Set the interval for the scavenger to wake up. The scavenger is a
thread of the server process that does the following tasks:
o Automatically disconnects sessions
o Sends administrative alerts
o Writes to the audit trail file (see srvheuristic 8).
Set this entry as follows:
Value Meaning
0 5 seconds
1 10 seconds
2 15 seconds
3 20 seconds
4 25 seconds
5 30 seconds
6 35 seconds
7 40 seconds
8 45 seconds
9 50 seconds.
Default value: 1
srvheuristic 8 can cause the scavenger to wake up at other times.
11 Allow "compatibility-mode opens" of certain types of files by
translating them to sharing mode opens with deny-none. This is useful
for sharing executable and other types of files.
This heuristic controls how strictly the server enforces compatibility
opens for read only. In the strictest sense of compatibility opening, if
any of the opens against a file is in sharing mode, or if another
session has the file open in compatibility mode, a compatibility-mode
open of that file fails.
The settings of this heuristic allow you to relax the strictness of
compatibility opens. The first level allows different DOS LAN Requester
workstations to execute the same programs. The second level extends
this to batch files. The highest level translates compatibility-mode
opens into deny-none sharing mode while maintaining access authority
(read-only, write-only or read-write). Not all applications support this
mode of operation.
Values for srvheuristic 11 include:
Value Meaning
0 Always use compatibility-mode opens.
1 Use deny-none sharing mode if read-only access to .EXE or .COM
files is requested on a compatibility-mode open. Use
compatibility-mode for a .BAT file or for .EXE and .COM files
with write access requested.
2 Use deny-none sharing mode if read-only access to .EXE or .COM
files is requested. Use deny-write sharing mode if read-only
access to .BAT files is requested. Use compatibility-mode if
write access to .EXE, .COM, or .BAT files is requested.
3 Use deny-none sharing mode for any compatibility-mode open
request.
Default value: 3
12 Allow DOS LAN Requester workstations to use a second NetBIOS session
when sending printer requests. If this is not set, a second NetBIOS
session ends any previous sessions set up for that DOS LAN Requester.
If these sessions are used, make sure that there are enough NetBIOS
sessions available at the server. Setting this value to 1 only allows
the usage of additional sessions, it does not invoke the use of those
sessions.
Default value: 1
13 Set the number of 64KB buffers (big buffers) used for read-ahead.
Possible values are 0 to 9, where 0 means read-ahead is disabled. If
this is set to a value larger than numbigbuf, then it is reset to the
value of numbigbuf-1.
Each 64KB big buffer is divided into sixteen 4KB read-ahead buffers. You
might want more than one big buffer allocated here if you are processing
many files simultaneously with small reads. Setting this value higher
will require additional NetBIOS commands. Generally, depending upon the
amount of NetBIOS work area available, this value shouldn't be set above
4 or 5.
Default value: 1
Using 64KB (big buffers) for read-ahead involves a trade-off between
large file transfers and small-record read and write operations.
Provided there are two 64KB buffers remaining in the server for each
requester doing concurrent large file transfers, you can use the
remaining 64KB buffers for read-ahead without a penalty.
14 Convert incoming path specifications into the most basic format that
OS/2 LAN Server understands. This conversion includes changing
lowercase characters to uppercase, and slashes (used in path names) to
backslashes (/ to \).
Default value: 1
15 Set the time the server waits before putting out an error message
indicating access denied due to a previous opportunistic lock (see
srvheuristic 0 ). You may want to set a longer time when the network is
subject to long delays. Table 5-3. "Opportunistic Lock Timeout" shows
possible values:
Default value: 0
If a second requester requests opening of a locked file, the server
notifies the first requester to flush buffers and prepare for unlocking.
This heuristic defines how long the server waits for the first requester
to close the file before sending the "Access denied" message to the
second requester.
The server can lock a file opened in deny-none sharing mode (as long as
there are no other requests to access the file) so that buffering can be
used to enhance performance. The server provides exclusive use of the
file to the first requester, preventing the second requester from
accessing the file until buffer data is flushed (written to disk).
16 Validate the input/output controls (IOCTLs) across the network. When
this is set to 1 (on), the server accepts only generic device IOCTLs
(categories 01H, 05H, and 0BH). Refer to IBM OS/2 Programming Tools and
Information for more information.
With this heuristic set to 0 (off), the server could receive invalid
IOCTL pointers because of differences in device drivers between vendors.
This can shut down the server. You may need to set this heuristic to 0
to use certain device drivers, such as custom-built drivers.
Default value: 1
17 Determines how long the server maintains unused dynamic big (64KB)
buffers before freeing the memory. This digit can range from 0 through
9, with the following meanings:
Digit Timeout
0 0 seconds (immediately after use)
1 1 second
2 10 seconds
3 1 minute
4 5 minutes
5 10 minutes
6 20 minutes
7 40 minutes
8 1 hour
9 Maintain big buffers indefinitely
Default value: 3 (1 minute)
18 Determines how long the server waits after failing to allocate a big
(64KB) buffer before trying again. This digit can be from 0 to 5, with
the following meanings:
Digit Timeout
0 0 seconds (immediately)
1 1 second
2 10 seconds
3 1 minute
4 5 minutes
5 10 minutes
Default value: 3 (1 minute)
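As a hedged example of how an individual digit is changed (illustrative only,
not a recommended setting): to allocate two 64KB buffers for read-ahead, set
digit 13 to 2 and leave every other digit at its default, so the Server
section entry becomes:
srvheuristics = 1111014111131210133
Because the digits are positional, always copy the current string and change
only the digit in question; never shorten or reorder the string.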
5.3.3.2 Wrkheuristics
This section lists the wrkheuristics used for fine tuning the requester's
performance and communication with the server.
The wrkheuristics parameter is set up as one long 33-character variable in the Requester
section of the IBMLAN.INI file where each digit has an independent meaning.
Except where noted, each is a binary digit where 0 means off or inactive,
while 1 means on or active. Other values are explained in the following
descriptions of each digit.
1 2 3
012345678901234567890123456789012 (Position)
The default value is: 111111111131111111000101112011122
Digit Meaning
0 Request opportunistic locking of files.
Default value: 1
When this heuristic is active, it allows a file opened in deny-none
sharing mode to be locked by the server (provided there are no other
access requests). This enables buffering functions which can enhance
performance. The Server service assumes that the first requester is the
only active process using that file and will prevent a second requester
from accessing the file until buffer data is flushed (written to disk).
See also srvheuristics 0 and 15.
1 Do performance optimization for batch (.CMD) files. Heuristic 0
(opportunistic locking) must be set to 1.
Default value: 1
When this heuristic is active, a batch file from the server executing on
the requester is kept in the requester's buffer to prevent a request
across the LAN for each line of the batch file. When this heuristic is
inactive the batch file is opened and closed with each line executed.
The close causes buffer data to be flushed (written to disk).
This may be set to 0 if two or more requesters want to run this batch
file at the same time, but file opens and closes across the LAN can be
costly in terms of performance.
2 Do 'asynchronous unlock' and 'write unlock' as follows:
Value Meaning
0 Never
1 Always
2 Only on an OS/2 LAN Server NetBIOS session.
Default value: 1
With this heuristic, files in the requester buffer are unlocked in
the buffer, and processing continues without waiting for
confirmation from the server. Any errors occurring at the server
are reported later. Generally, the only errors that might occur are
hard media errors, such as disk full or a loss of power to the
server.
Asynchronous unlock and write unlock is not used if the file was
opened with the write-through flag set.
The write-through flag, when active, means that any writes to the
files must be written through the cache to the fixed disk before
returning control to the calling program.
If data integrity is a primary concern and the write through flag is
not being used, this may be set to 0.
Values 1 and 2 are functionally equivalent in an IBM OS/2 LAN Server
environment.
3 Do 'asynchronous close' and 'write close' as follows:
Value Meaning
0 Never
1 Always
2 Only on an OS/2 LAN Server NetBIOS session.
Default value: 1
This heuristic has a function similar to wrkheuristic 2. The server
gives a completion response to the requester for close or
write/close requests before writing buffered file data to the disk.
A subsequent message warns users if all data is not written to the
disk. Asynchronous close and write/close is not used if the file was
opened with the write-through flag set.
If data integrity is a primary concern and the write through flag is
not being used, this may be set to 0.
Values 1 and 2 are functionally equivalent in an IBM OS/2 LAN Server
environment.
4 Buffer named pipes and serial devices.
Default value: 1
This heuristic directs named pipes and communication devices through the
requester's buffers.
5 Do combined 'lock read' and 'write unlock' as follows:
Value Meaning
0 Never
1 Always
2 Only on an OS/2 LAN Server NetBIOS session.
Default value: 1
When this heuristic is active, the lock and read requests are joined
and sent as one command. The write and unlock requests are joined
in the same way.
Values 1 and 2 are functionally equivalent in an IBM OS/2 LAN Server
environment.
6 Use open and read.
Default value: 1
When this heuristic is active, a request to open a file also does a read
of size sizworkbuf from the beginning of the file into the requester's
work buffer. This action anticipates that the data at the beginning of
the file will be subsequently read, saving an additional request across
the LAN.
This may be set to 0 if the requester never uses the data that is read
from the beginning of the file when this heuristic is active.
7 Reserved. This must be set to 1.
8 Use the chain send NetBIOS NCB (Network Control Block) as follows:
Value Meaning
0 Never
1 Do if server's buffer is larger than the workstation's buffer
2 Always (to avoid copy).
Default value: 1
A chained send enables NetBIOS to copy large data blocks directly
from the application's buffer to the network adapter. This bypasses
an intermediate copy to workbufs, from which NetBIOS normally copies
the data to the network adapter.
9 Buffer small read and write requests (i.e. read and write a full buffer)
as follows:
Value Meaning
0 Never
1 Always
2 Only on an OS/2 LAN Server NetBIOS session.
Default value: 1
When this heuristic is active and file access mode allows, requests
to read or write data smaller than sizworkbuf are done locally, in
the requester's buffer, avoiding additional trips across the LAN.
The buffer is flushed when the file is closed or when the buffer is
needed to satisfy other requests.
This heuristic may enhance performance for applications that read,
modify, and write back small records.
Values 1 and 2 are functionally equivalent in an IBM OS/2 LAN Server
environment.
This may be set to 0 if data integrity is a primary concern. See
Figure 5-11. "Heuristic Relationships: Small Record Buffering".
10 Use buffer mode (assuming shared access is granted) as follows:
Value Meaning
0 Use full buffer size if reading sequentially.
1 Use full buffer if file is opened in read/write access mode.
2 Use full buffer if reading and writing sequentially.
3 Buffer all requests smaller than the buffer size (if hits
occur).
Default value: 3
These options allow selective tuning of the buffer mode if an
application handles data in a manner that conflicts with buffering
(for example, specialized database management and access structures).
See Figure 5-11. "Heuristic Relationships: Small Record Buffering".
11 Use raw read and write Server Message Block (SMB) protocols.
Default value: 1
Raw read and write SMB protocols transfer data across the LAN without
SMB headers. These protocols are used to transfer large files directly
between a big buffer in the server and a work cache in the requester.
Polling makes sure that big buffers are available before the transfer
begins.
This heuristic may significantly improve performance of large file
transfers across the LAN.
Wrkheuristic 11 must be active for wrkheuristics 12 and 13 to be
functional.
This may be set to 0 if the record sizes being used are just larger than
sizworkbuf and much smaller than 64k bytes. This would save on the use
of bigbufs and reduce the overhead required to set up the raw protocol.
See Figure 5-10. "Heuristic Relationships: Large Record Buffering".
12 Use large raw read-ahead buffer.
Default value: 1
This heuristic provides independent control over using the raw SMB
protocol for read-ahead. It is active with the default value, but may be
turned off to better suit a particular environment.
Wrkheuristic 11 must be active for this to be effective.
This may be set to 0 if the record sizes being used are just larger than
sizworkbuf and much smaller than 64k bytes. This would save on the use
of bigbufs and reduce the overhead required to set up the raw protocol.
See Figure 5-10. "Heuristic Relationships: Large Record Buffering".
13 Use large raw write-behind buffer.
Default value: 1
This heuristic provides independent control over using the raw SMB
protocol for write-behind. It is active with the default value, but may
be turned off to better suit a particular environment.
Wrkheuristic 11 must be active for this to be effective.
This may be set to 0 if the record sizes being used are just larger than
sizworkbuf and much smaller than 64k bytes. See Figure 5-10. "Heuristic
Relationships: Large Record Buffering".
14 Use read multiplexing SMB protocols.
Default value: 1
This SMB protocol is used for large read requests if big buffers
described in heuristic 11 are unavailable, or raw SMB protocol is
inactive. This protocol breaks transfers into buffer-size chunks
(sizworkbuf) and chains them together to satisfy the request.
Srvheuristic 1 must also be active for read multiplexing to be used. See
Figure 5-10. "Heuristic Relationships: Large Record Buffering".
15 Use write multiplexing SMB protocols.
Default value: 1
This SMB protocol is used for large write requests if big buffers
described in heuristic 11 are unavailable, or raw SMB protocol is
inactive. This protocol divides transfers into buffer-size chunks
(sizworkbuf) and chains them together to satisfy the request.
Srvheuristic 2 must also be active for write multiplexing to be used.
See Figure 5-10. "Heuristic Relationships: Large Record Buffering".
16 Use big buffer for large core (non-raw) reads.
Default value: 1
17 Use same size small read-ahead or to sector boundary.
Default value: 1
When this heuristic is active, requests to read small data records
sequentially cause read-ahead in multiples of the data record size, so a
full buffer is read and sent to the requester. Because multiple records
may not fit evenly in the buffer, the last record in the buffer may be
incomplete. However, no data is lost.
For example, if the user is reading 50-byte records sequentially from a
4096-byte buffer, LAN Server will read ahead to fill the buffer to 4050
bytes (81 complete records).
When this heuristic is set to 0, data is read up to the next sector
boundary, usually 512 bytes.
This heuristic is significant only if wrkheuristic 9 is inactive. The
requester will detect small data records of the same size being read
sequentially and will establish the read-ahead operation. See Figure
5-11. "Heuristic Relationships: Small Record Buffering".
18 Use same size small write-behind or to sector boundary.
Default value: 0
When this heuristic is active, requests to write small data records
cause write-behind in multiples of the data record size, so a full
buffer is written to the server. Because multiple records may not fit
evenly in the buffer, the last record written may be incomplete.
However, no data is lost.
When this heuristic is set to 0, data is written up to the next sector
boundary, usually 512 bytes.
This heuristic is significant only if wrkheuristic 9 is inactive. The
server will detect small data records of the same size being written
sequentially and will establish the write-behind operation. See Figure
5-11. "Heuristic Relationships: Small Record Buffering"
19 Reserved (0).
20 Flush files, pipes and devices on DosBufReset or DosClose as follows:
Value Meaning
0 Flush only files and devices opened by the caller.
. Spin until flushed (wait for confirmation
. before proceeding with other tasks).
1 Flush only files and devices opened by the caller.
. Flush only once (don't wait for confirmation).
2 Flush all files and all input and output of short-term pipes
. and devices. Spin until flushed.
3 Flush all files and all input and output of short-term
. pipes and devices. Flush only once.
4 Flush all files and all input and output of pipes and
. devices. Spin until flushed.
5 Flush all files and all input and output of pipes and
. devices. Flush only once.
Default value: 0
This heuristic gives the requester application more flexibility as
to which files, pipes, or devices are flushed (written to disk) when
DosBufReset or DosClose is done.
21 Used to support OS/2 LAN Server encryption.
Default value: 1
22 This heuristic controls log entries for multiple occurrences of an
error. A recurring error can fill up the error log; use this heuristic
to keep down the number of log entries. If the value is other than 0,
the first, fourth, eighth, 16th, and 32nd occurrences of an error are
logged. After that, every 32nd further occurrence is logged.
If the value is other than 0, it also defines the size of the error
table. The table is a record of which errors have occurred. If a new
error does not match an existing entry in the table and the table is
full, the new error replaces the entry with the lowest number of
occurrences. The table size is the number of different errors tracked at
one time.
Set the value as follows:
Value Meaning
0 Log all occurrences
1 Use error table, size 1
2 Use error table, size 2
3 Use error table, size 3
4 Use error table, size 4
5 Use error table, size 5
6 Use error table, size 6
7 Use error table, size 7
8 Use error table, size 8
9 Use error table, size 9.
Default value: 0
23 Buffer all files opened with deny-write sharing mode.
Default value: 1
When this heuristic is active, the server buffers all files opened with
deny-write sharing mode, regardless of the access mode the requester
used to open the file. It's important to realize that the sharing mode
and access mode are two different parameters of a file open command.
Set this heuristic to 0 to deactivate this buffering on the requester if
an application does not work correctly with it. See Figure 5-11.
"Heuristic Relationships: Small Record Buffering".
24 Buffer all files opened with read only (R) attribute set on.
Default value: 1
When this heuristic is active, the server buffers all files with the
read-only attribute set on. Only read access mode will successfully open
a read-only file. Keep in mind that the sharing mode and access mode are
two different parameters of a file open command.
Set this heuristic to 0 to deactivate this buffering on the requester if
an application does not work correctly with it. See Figure 5-11.
"Heuristic Relationships: Small Record Buffering".
25 Read ahead when opening for execution. Reading an executable file
sequentially is usually, but not always, faster.
Default value: 1
This heuristic value should be 1 for many executable files loaded across
the LAN. For example, the load times of some applications decrease by
more than 50 percent. Experiment with your particular program to
determine which option is better.
26 Handle Control-C (Ctrl+C) as follows:
Value Meaning
0 No interrupts allowed
1 Only allow interrupts on long-term operations
2 Always allow interrupts.
Default value: 2
27 Force correct open mode when creating files on a core server. (A core
server is a DOS-based LAN server, such as PC LAN Program 1.3.) DOS-based
servers open a new file in compatibility mode. This heuristic forces
this workstation to close the file and re-open it in the proper mode.
This may be important if you are using PCLP Servers as external
resources.
Default value: 0
28 Use the NetBIOS NoAck mode (transferring data without waiting for an
acknowledgement) as follows:
Value Meaning
0 NoAck is never used (disable NoAck)
1 NoAck on send only.
Default value: 1
29 Send data along with the SMB write block raw requests.
Default value: 1
When this heuristic is active, the requester sends a requester buffer of
data (sizworkbuf) to the server along with its request for big buffers
to use for large file transfers. This action may save time if the
server has a limited number of big buffers (numbigbuf) compared to the
number of requesters trying to send large files.
30 Send a popup message to the screen when the requester logs an error, as
follows:
Value Meaning
0 Never
1 On write fault errors only (no timeout)
2 On write fault and internal errors only (no timeout)
3 On all errors (no timeout)
4 Reserved
5 On write fault errors only (timeout)
6 On write fault and internal errors only (timeout)
7 On all errors (timeout).
Default value: 1
Values other than 1 are normally used for debug purposes only.
31 Controls the timeout for closing the print file when printing from a DOS
session on an OS/2 workstation. The value can range from 0 through 9.
Timeout is set as follows:
Value Meaning
0 through 8 Timeout is (this value + 1) * 8 seconds.
9 Timeout is as specified in printbuftime.
Default value: 2
32 Controls DosBufReset behavior for files only (not devices or pipes). When
the call to the API returns, one of the following actions has been
taken:
Value Meaning
0 Any changed data in the buffers was sent from the requester to
. the server.
. The server has written this data to the disk.
1 Any changed data in the buffers was sent from the requester to
. the server.
. The server has not yet written this data to the disk.
2 DosBufReset was ignored for files.
Default value: 2
5.3.3.3 Controlling Buffering with Heuristics
The following diagrams identify the relationship between some of the Server
and Requester heuristics in the IBMLAN.INI file. Except where noted, all the
possible combinations of these heuristic values are shown, along with the path
they create. These graphics should make it easier to tell which buffering
scheme is active.
The diagrams show the effect of each heuristic by having the possible values
come out of the bottom of each box. The best way to view them may be from the
bottom up: look at the buffering modes at the bottom and, by following the
diagram up, see which heuristic values are needed to set up that protocol. If
you have heuristic values and you want to see what buffering protocol is
active, start from the top and follow the values downward.
o Large Record Buffering
Figure 5-10. "Heuristic Relationships: Large Record Buffering" describes the
relationship of the srvheuristics and the wrkheuristics and how they affect
buffering in the case of large record reads and writes. Large records are
defined as those records greater than the value of sizworkbuf .
o Small Record Buffering
Figure 5-11. "Heuristic Relationships: Small Record Buffering" describes the
relationship of the srvheuristics and the wrkheuristics and how they affect
buffering in the case of small record reads and writes. Small records are
defined as those records smaller than the value of sizworkbuf .
5.3.4 Capacity Related Parameters
ΓòÉΓòÉΓòÉ 8.3.4. 5.3.4 Capacity Related Parameters ΓòÉΓòÉΓòÉ
This section lists parameters that are associated with the functional capacity
of the LAN Server environment. Some of these parameters are related to adding
more users to the network. Others, though not directly related to performance,
do use additional memory depending on their value. In order to free up some of
that memory for other uses, these parameter values should be reviewed while
tuning and installing the OS/2 LAN Server. For more information on memory
tuning for LAN parameters, see 3.2.4.3 LAN Requester Considerations and
3.2.4.4 NetBIOS Considerations. These parameters appear in the Network and
Server sections of the IBMLAN.INI file.
Some of the parameter descriptions contain equations for setting the values.
These equations only give suggested values that have come from LAN Server 1.2
experiences in various installations. Specific environments may require that
these levels be adjusted.
o net1=NetBIOS$, a, NB30, x1, x2, x3
The following list contains the parameters in the net1 statement.
- net1
This parameter indicates the name of the network device driver and the
number of the network adapter that the requester uses when running on the
network. The additional values indicate the NetBIOS resources needed to
start the requester. The number of resources indicated in the net1
statement must be available in the Communications Manager configuration
file.
Default value for server: net1=NetBIOS$, 0, NB30, 32, 32, 16
Default value for requester: net1=NetBIOS$, 0, NB30, 16, 16, 16
- NetBIOS$
This parameter indicates the network device driver. The value of this
parameter must be NetBIOS$.
- a
This parameter indicates the number of the network adapter. This should
always be set to 0 for the primary network adapter.
- NB30
This parameter indicates the driver type. This value should always be
NB30.
- x1
This parameter indicates the number of NetBIOS sessions the requester
allocates from the Communications Manager configuration file. Changing
this value increases or decreases the number of connections that can be
made to servers (connections include logons and NET USE commands). This
pool of sessions is taken from the "Maximum sessions" which is specified
in the Communication Manager (CM) configuration. Therefore, the CM
Maximum sessions must equal or exceed the value of x1. Following is a
suggested formula for setting this value:
x1 = (# of Requesters) + (# of additional Servers)
     + (2 + # of additional Servers used as Requesters)
     + 3 (if DLRINST) + 10 (if RIPL used)
     + 1 (for messenger service on Server)
Default values: 32 for server; 16 for requester
Minimum value: 0
Maximum value: 254
- x2
This parameter indicates the number of simultaneous NetBIOS commands
(Network Control Blocks) a requester or server can post. For a server,
changing this value can increase or decrease the number of requester
requests it can process at once. This pool of commands is taken from the
"Maximum commands" value specified in the Communication Manager (CM)
configuration. Therefore, the CM Maximum commands must equal or exceed
the value of x2. Following is a suggested formula for setting this value:
x2 = numbigbuf + numreqbuf + srvpipes (default 3) + 3 (if DLRINST)
     + maxchdevjob (default 8) + 1 (for messaging)
Default values: 32 for server; 16 for requester
Minimum value: 0
Maximum value: 255
- x3
This parameter indicates the number of NetBIOS names the requester
allocates from the Communications Manager configuration file. The
requester uses NetBIOS names for the computer name, messaging names, and
forwarded messaging names. To share fewer resources and add fewer
messaging names, reduce the value of this parameter. To share more
resources and add more messaging names, increase the value of this
parameter. This parameter corresponds to the CM "Maximum names"
parameter. Therefore, x3 must be equal to or smaller than CM's "maximum
names". Following is a suggested formula for setting this value:
x3 = 16 (default for x3) + 2 (if DLRINST)
Default value: 16
Minimum value: 0
Maximum value: 254
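As an illustration only, consider a server in a domain with 20 requesters and
2 additional servers, none of which are used as requesters, with no DLRINST or
RIPL and with the Messenger service running on the server. The suggested
formulas give:
  x1 = 20 + 2 + (2 + 0) + 1 = 25
  x3 = 16
Assuming, for the sake of the arithmetic, numbigbuf=12, numreqbuf=48,
srvpipes=3, and maxchdevjob=8:
  x2 = 12 + 48 + 3 + 8 + 1 = 72
The server's net1 statement might then read:
  net1 = NetBIOS$, 0, NB30, 25, 72, 16
The Communications Manager configuration must supply at least these NetBIOS
resources.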
o maxchdevjob
This parameter specifies the maximum number of requests the server can
accept for all shared serial device queues combined. Increase the value of
this parameter if your shared serial devices are heavily used.
Default value: 6
Minimum value: 0
Maximum value: 65535
o maxconnections
This parameter specifies the maximum number of connections that requesters
can have to the server. This is the number of NET USE commands the server
can handle. For example, a user issuing five NET USE commands needs five
connections. Likewise, five connections are needed for five users who each
issue one NET USE command. Increase the value of this parameter if many
users will use the server. The maxconnections parameter value must be at
least as big as the maxusers parameter value. Following is a suggested
formula for setting this value:
maxconnections = (maxusers * 6) + (3 * (# of DLRs with Windows))
                 + (4 * (# of OS/2 requesters))
Don't use this equation to set the value below the default (128).
Default value: 128
Minimum value: 1
Maximum value: 1024
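For example (illustrative numbers only), a server with maxusers=45, 10 DLR
workstations running Windows, and 20 OS/2 requesters would give:
  maxconnections = (45 * 6) + (3 * 10) + (4 * 20) = 270 + 30 + 80 = 380
Because 380 is above the default of 128, the computed value would be used.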
o maxlocks
This parameter specifies the maximum number of file locks the server may
have; more precisely, the maximum number of records (byte ranges) that may
be locked by users of the server. Increase the value of this parameter if
there is a large number of heavily used files. This entry applies only to
networks that have DOS requesters. Following is a suggested formula for
setting this value:
maxlocks = Integer of (maxopens * 0.1)
Don't use this equation to set the value below the default (64).
Default value: 64
Minimum value: 0
Maximum value: 8000
o maxopens
This parameter specifies the maximum number of files, pipes, and devices the
server can have open at one time. For example, the value of this parameter
must be at least 5 for a user opening five files. The value of this
parameter must also be at least 5 for five users opening the same file.
Increase the value of this parameter if many users access the server
simultaneously. Following is a suggested formula for setting this value:
maxopens = (# of OS/2 Requesters * 55) + (# of DLRs * 10)
           + (# of DLRs with Windows * 45)
Don't use this equation to set the value below the default (64).
Default value: 64
Minimum value: 40
Maximum value: 8000
o maxsearches
This parameter specifies the maximum number of directory searches the server
can do simultaneously. These searches are executed when a user does a
wildcard search of a directory; for example, DIR Y:\TEXTFILE.* . Increase
the value of this parameter if the server's files are heavily used. See the
srvheuristic 7 parameter in 5.3.3 Heuristics for more information about
searches. Following is a suggested formula for setting this value:
maxsearches = Integer of (maxusers * 0.3)
Don't use this equation to set the value below the default (50).
Default value: 50
Minimum value: 1
Maximum value: 1927
o maxsessopens
This parameter specifies the maximum number of files, pipes, and devices one
requester can have open at the server. Increase the value of this parameter
if many of the server's resources are used simultaneously.
Default value: 50
Minimum value: 1
Maximum value: 8000
o maxsessreqs
This parameter specifies the maximum number of resource requests one
requester can have pending at the server. Increase the value of this
parameter if users need to perform multiple tasks simultaneously at the
server.
Default value: 50
Minimum value: 0
Maximum value: 65535
o maxshares
This parameter specifies the maximum number of resources the server can
share with the network. For example, if one user is using five resources on
the server, the value of this parameter must be at least 5; but if five
users are using the same server resource, the value of this parameter only
needs to be 1. Increase the value of this parameter if the server must share
many resources. Following is a suggested equation for setting this value:
maxshares = (# of Home Directories Assigned) + (# of Applications * 3)
            + (# of file and printer aliases)
Don't use this equation to set the value below the default (16).
Default value: 16
Minimum value: 2
Maximum value: 500
o maxusers
This parameter sets the maximum number of users who can use the server
simultaneously. This equals the number of users who might issue a NET USE
command to the server. A user who issues five NET USE commands counts as
one user, but five users, each issuing a NET USE command to the same
resource, count as five users. This value can also be understood as the
number of NetBIOS sessions at the server.
The maxusers parameter value cannot exceed the maxconnections parameter
value. The following is a suggested equation for setting this value:
maxusers = (# of Requesters) + (# of additional Servers)
Default value: 45
Minimum value: 1
Maximum value: 254
o srvpipes
This parameter sets the maximum number of pipes that the server uses.
Increase this value if many users log on simultaneously; otherwise, leave
this at the default value. The following is a suggested equation for setting
this value:
srvpipes = maxusers / 40 (up to max)
Default value: 3
Minimum value: 1
Maximum value: 20
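Putting several of these formulas together for a hypothetical domain of 50
requesters (30 OS/2 requesters and 20 DLR workstations, 10 of them running
Windows), one additional server, 50 assigned home directories, 10
applications, and 10 file and printer aliases, the Server section of
IBMLAN.INI might contain values such as the following (illustrative only;
recompute from the formulas above for your own environment):
  maxusers = 51
  maxconnections = 456
  maxopens = 2300
  maxlocks = 230
  maxshares = 90
The computed values for maxsearches (15) and srvpipes (1) fall below their
defaults, so those two parameters are left at their default settings of 50
and 3.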
5.4 DOSLAN.INI Parameter Descriptions
ΓòÉΓòÉΓòÉ 8.4. 5.4 DOSLAN.INI Parameter Descriptions ΓòÉΓòÉΓòÉ
The DOSLAN.INI file controls the characteristics of the DOS LAN Requester
(DLR). It performs the same function as the IBMLAN.INI file, only for the DOS
LAN Requester.
The DOSLAN.INI file must be in the same directory as the DOS LAN Requester
program files. The DOSLAN.INI file contains the values with which DOS LAN
Requester initially starts. Default values are assumed for all parameters not
specified in the DOSLAN.INI file or with the NET START command.
The following sections list those parameters that are most likely to be changed
while performance tuning the DOS LAN Requester (DLR) environment. The
descriptions are designed to explain what the function of the parameter is,
when it might be advantageous to change from the default, and how changing the
parameter value may affect performance. Those parameters not listed in this
document are generally not thought of as performance related parameters, though
many of them are important to the functionality of DLR. These values should
rarely be changed and even then not for performance tuning purposes.
If the network is not started, type NET START without parameters to start DOS
LAN Requester using DOSLAN.INI as the parameter default file. However, if the
network has been started, type NET START to display the current parameter
settings. Figure 5-12. "Example DOSLAN.INI File" is an example of a DOSLAN.INI
file.
5.4.1 DOS LAN Requester Memory Usage
ΓòÉΓòÉΓòÉ 8.4.1. 5.4.1 DOS LAN Requester Memory Usage ΓòÉΓòÉΓòÉ
You must select the configuration that you want to use when you enter the NET
START command. The configuration requires memory in your workstation. You
should make sure your workstation has enough memory for the configuration you
select and the applications or programs you run on your workstation.
The combination of parameters you specify in DOSLAN.INI and in the NET START
command cannot reserve more than 64KB of memory. Use the following formula to
determine the amount of memory reserved:
(/SRV*28) + (/ASG*75) + (/NBC*(/NBS + 90)) +
(/BBC*(/BBS + 90)) + (2*/PBC*(/PBS + 90)) +
(/NMS*130) + (LASTDRIVE*80) + ((FILES + FCBS) *30) + 10K <=64K
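As a rough illustration, using the DOSLAN.INI defaults (/SRV:8, /ASG:29,
/NBC:4, /NBS:1K, /BBC:1, /BBS:4K, /PBS:128, no /NMS), taking /PBC as 4
buffers, and assuming CONFIG.SYS values of LASTDRIVE=Z (26 drives), FILES=20,
and FCBS=4, the formula works out approximately as follows:
  (8*28) + (29*75) + (4*(1024+90)) + (1*(4096+90)) + (2*4*(128+90)) +
  (0*130) + (26*80) + ((20+4)*30) + 10240
  = 224 + 2175 + 4456 + 4186 + 1744 + 0 + 2080 + 720 + 10240
  = 25825 bytes (about 25KB), well within the 64KB limit.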
The following diagram shows memory maps for the DOS LAN requester. Figure
5-13. "DOS LAN Requester Memory Requirements with/without HIMEM.SYS" shows the
memory map when all of the DLR code is loaded into lower memory (0-640k) and
the memory map of the DLR code when HIMEM.SYS is installed and the /HIM option
of the DOSLAN.INI file is active. HIMEM.SYS is a driver for using extended
memory above 1M in the DOS environment and must be made active in the
CONFIG.SYS for this option to be available.
5.4.2 DOS LAN Requester
ΓòÉΓòÉΓòÉ 8.4.2. 5.4.2 DOS LAN Requester ΓòÉΓòÉΓòÉ
5.4.2.1 Performance Parameters
This section lists the performance and memory related parameters in the
DOSLAN.INI file. They are listed in order of impact upon performance. The
first parameters will have a greater effect on performance when changed. The
parameter defaults have been chosen to enhance performance for most users.
Only change these values if there is a performance problem. Some of the
parameters listed don't directly affect performance but do affect the amount of
memory available.
o /NBC:n (Network Buffer Count)
This parameter defines the maximum number of network buffers used. The DLR
uses the network buffers to construct the SMBs that are sent to the server.
The network buffers also provide the data buffer between the application
running on the DLR and the network adapter card. This parameter corresponds
to the numwrkbuf in an OS/2 requester. The /NBC parameter should not be
reduced from its default value.
Default value: 4
Minimum value: 3
Maximum value: 64
o /NBS:n[K] (Network Buffer Size)
This parameter defines the size of the network buffers used. The DLR uses
the network buffers to construct the SMBs that are sent to the server. The
network buffers also provide the data buffer between the application running
on the DLR and the network adapter card. This parameter corresponds to the
sizewrkbuf parameter in an OS/2 Requester.
Since the DLR is limited to the 640KB memory space, these buffers are kept
at small default values to conserve memory. If application memory
requirements permit, the /NBS parameter value can be increased to 2KB (or
4KB) to improve sequential-file-access performance for data requests smaller
than big buffer size (/BBS). /NBS times (/NBC plus 90) bytes of memory are
required. If /API is specified, the minimum is 1K. /API makes most of the
LAN Server API's available to the DLR.
Default value: 1K
Minimum value: 128
Maximum value: 16K
o /BBC:n (Big Buffer Count)
This parameter defines the number of large buffers used for file caching and
printing. The DLR's big buffers correspond to the work cache on OS/2
requesters. Big buffers on the requester are used in conjunction with big
buffers on the server when SMB Raw protocol is initiated. Because DLR
handles data read/write requests larger than the big buffer (/BBS) without
the use of the big buffers (User Memory Transfer), this value should only be
increased when the size of many data read/write requests fall between the
network buffer (/NBS) and the big buffer (/BBS) sizes. Setting /BBC to 0
sets /BBS implicitly to 0.
Default value: 1
Minimum value: 0
Maximum value: 64
o /BBS:n[K] (Big Buffer Size)
This parameter defines the size of the large buffers used for file caching
and printing. The DLR's big buffers correspond to the work cache on OS/2
requesters. Big buffers on the requester are used in conjunction with big
buffers on the server when SMB Raw protocol is initiated. The big buffer is
important in applications that require sequential reads and writes to the
server in data buffer sizes greater than /NBS, including DOS Copy operations
directed to the server. The writes go across the network in message sizes
determined by the size of the big buffer (/BBS). As with network buffers,
the amount of memory allocated by default is limited to the 640KB DOS memory
constraint. The big buffers in the DLR are not as important as the work
cache in the OS/2 requester due to the DLR User Memory Transfer feature
described as follows:
When the read data request size is greater than the big buffer size (/BBS),
the User Memory Transfer function moves the data from the network adapter
card buffer directly to the user memory space allocated for the data. User
Memory Transfer moves the data for both program load and application read
requests.
The big buffers are there to provide flexibility to a user who wants to keep
/NBS small but still have at least one large buffer which can handle an
application's common I/O record size. /BBS must be larger than /NBS. If /BBC
is 0, /BBS is implicitly set to 0. /BBC times /BBS bytes of memory are used.
Default value: 4K
Minimum value: 1K
Maximum value: 32K
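For example, on a DLR workstation with memory to spare that does mostly
sequential file access to the server, the buffer parameters might be raised
along these lines (illustrative values only, placed in DOSLAN.INI or given on
the NET START command):
  /NBS:2K
  /BBC:2
  /BBS:8K
Each such change should be checked against the 64KB memory formula shown in
5.4.1.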
o /PBC:n (Pipe Buffer Count)
This parameter defines the maximum number of buffers used for remote pipe
caching. A value of 0 indicates no pipe buffering.
Default value: 4
Minimum value: 0
Maximum value: 16K
o /PBS:n[K] (Pipe Buffer Size)
This parameter defines the size of the buffers used for remote pipe
read-ahead. If /PBC is 0, /PBS is implicitly set to 0. /PBS times /PBC bytes
of memory are required.
Default value: 128
Minimum value: 128
Maximum value: 16K
o /EMS (Expanded Memory Support)
This parameter indicates that the redirector component is loaded into high
memory (above 640KB) using the LIM 4.0 specification. This parameter is
valid only for 386-based machines with LIM 4.0 compatible expanded memory
device drivers or machines with an XMA card installed. /EMS is only valid
with DOS Version 4.0 or higher. The default action is to load the redirector
into low memory (below 640KB). The CONFIG.SYS for DLR must activate this
memory for this option to be available.
Note: The driver XMA2EMS.SYS must be configured with contiguous page blocks.
If the XMA2EMS.SYS driver was configured with non-contiguous pages, and the
/EMS parameter is used, DOS LAN Requester stops the NET START and issues an
error message. For 80386 workstations, use the EMM386.SYS device driver.
This parameter and /HIM are mutually exclusive.
o /HIM (High Memory 'Extended Memory Support')
This parameter indicates that the code segment of the redirector component
is loaded into the first segment above the 1M address. This parameter is
valid only for 286- and 386-based machines with extended memory available.
The default action is to load the redirector into low memory (below 640KB).
Note: To take advantage of high memory, the driver HIMEM.SYS must be added
to the CONFIG.SYS file of the DOS LAN Requester.
This parameter and /EMS are mutually exclusive.
o /SRV:n (Servers)
This parameter defines the maximum number of unique servers from which
resources can be accessed at one time. Each /SRV requires approximately 28
bytes of memory.
Default value: 8
Minimum value: 1
Maximum value: 251
o /ASG:n (Resources Assigned to User)
This parameter defines the maximum number of network resources that you can
access at one time. Each /ASG requires approximately 75 bytes of memory.
Default value: 29
Minimum value: 1
Maximum value: 254
o /PFS:n[K] (Pipe File Size)
This parameter defines the maximum size (in kilobytes) of a pipe write that can
be buffered. A single write of more than n characters is written directly
to the server. If n is greater than the /PBS value, /PFS is set to the /PBS
value. If /PFS is 0, no pipe writes are buffered. If /PBC is 0, /PFS is
implicitly set to 0.
Default value: 32
Minimum value: 0
Maximum value: 32
o /NMS:n (Number of Mail Slots)
This parameter indicates mailslot API support and defines the number of
mailslots that can be created. If this parameter is not present, there is
no mailslot API support. No mailslot API support is the default. If this
parameter is present, the value of n must be from 1 through 64. Receiver
configuration requires a minimum value of 2 to send and receive domain
broadcast messages. A value of 1 is automatically adjusted to 2 for the
Receiver service. If /NVS is present, the minimum number of n for the
configuration is implied.
Default value: Not Present
Minimum value: 1
Maximum value: 64
o /NVS:n
This parameter indicates NetServerEnum API support and defines the number
of server entries supported. If this parameter is not present, there is no
NetServerEnum support; this is the default. NetServerEnum
requires 70 bytes of memory for each specified entry. NetServerEnum requires
mailslot (/NMS) and resident API (/API) support.
Default value: Not Present
Minimum value: 1
Maximum value: 255
o /API
This parameter indicates support for general network application programming
interfaces (APIs), if present. Non-presence is the default. /API support
requires approximately 9.5KB of memory and /NBC, /NBS, /NVS, and /NMS
support as prerequisites.
Default value: Not Present
o /WRK:abcdefghij (DLR Wrkheuristics)
This parameter defines the behavior of a set of internal working heuristics.
/WRK provides the ability to specify how to use a set of redirector
component functions. No validation of this parameter occurs. Settings that
cannot be supported are ignored.
The default value is 1111211012.
Note: There is no guarantee that the redirector will behave as specified.
For example, if c=2, asynchronous functions are not available. If
the target server does not support the extended SMB protocol,
functions such as a=1 or b=1 are not allowed.
The heuristics represented in the string are as follows:
a Write-through and Write-behind behavior
Specifies server lazy-write operations for network requests. Valid
values are:
0 Sets write-through bit on Open&X and Write Block Raw protocols on
server. This disables any server disk cache or write-behind operations
on all files.
1 Clears write-through bit on Open&X and Write Block Raw protocols on
server (default). This allows the server to write-behind data written
to the file and removes one SMB exchange on raw I/O.
b Write&X behind behavior
Specifies whether the redirector will wait for Write&X behind responses
or return immediately. Valid values are:
0 Disallows all Write&X behind behavior.
1 Issues Write&Close, Write&Unlock, and Close SMBs asynchronously,
without waiting for SMB response (default).
c NetBIOS type
Specifies NetBIOS behavior. Valid values are:
0 NetBIOS cannot accept network control blocks at interrupt.
1 NetBIOS can accept network control blocks at interrupt (default).
d Core read protocol behavior
Specifies the method by which data is transferred into the
application program data area. Valid values are:
0 Limits transfers using core protocols to the minimum negotiated buffer
size.
1 Uses message incomplete by posting a receive for the SMB header, and
then posting a receive for the data once the SMB is received
(default).
2 Breaks reads larger than the buffer size into two reads: the first for
the SMB header; the second directly into the reader's address space.
e Buffer read-ahead behavior
Specifies the read-ahead buffering method. Valid values are:
0 Always reads a full buffer (/NBS) when the read is less than the
buffer size.
1 Always reads only the user request size.
2 Reads /NBS bytes if sequential (default).
f Hard error behavior
Specifies whether to use pop-up error messages. Valid values are:
0 Returns errors to application; does not use pop-up messages.
1 Uses pop-up error messages (default).
g Big buffer read-ahead behavior
Specifies big buffer read-ahead behavior. Valid values are:
0 For reads less than /BBS and greater than /NBS, reads only requested
amount.
1 For reads less than /BBS and greater than /NBS, reads buffer size
(/BBS) amount of data (default).
h Process exit behavior
Specifies handling of process exit SMBs. Valid values are:
0 Never sends process exit SMBs (default).
1 Always sends process exit SMBs.
2 Sends process exit SMBs based on RPDB structure.
i Opportunistic locking (Oplock) behavior
Specifies whether to use Oplock. Valid values are:
0 Does not use Oplock.
1 Uses Oplock (default).
j Open&Read-Ahead behavior
Specifies Open&Read-Ahead usage. Valid values are:
0 Never uses Open&Read-Ahead.
1 Uses Open&Read-Ahead on read or execution of files.
2 Uses Open&Read-Ahead on read, write, or execution of files (default).
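For example, to return hard errors to the application instead of displaying
pop-up messages (heuristic f), only the sixth digit of the default string
changes, giving an illustrative setting of:
  /WRK:1111201012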
5.5 PROTOCOL.INI Parameter Descriptions
ΓòÉΓòÉΓòÉ 8.5. 5.5 PROTOCOL.INI Parameter Descriptions ΓòÉΓòÉΓòÉ
Unlike the other parameter sections, this section does not list parameters in
order of performance impact. Because of the wide range of configurations
affected by these parameters, no attempt was made to separate performance
parameters. Though some of the parameters deal with performance, this section
was included primarily as a general reference for those using the ETHERAND
protocol.
The PROTOCOL.INI file stores configuration and binding information for all the
protocols and MAC drivers in the system. The file uses the same general format
as the IBMLAN.INI file. The PROTOCOL.INI file consists of a series of named
sections, where the section name is the module name from a module
characteristic table. Figure 5-14. "Contents of the PROTOCOL.INI File" shows
an example PROTOCOL.INI file.
5.5.1 Introduction for PROTOCOL.INI Configuration
ΓòÉΓòÉΓòÉ 8.5.1. 5.5.1 Introduction for PROTOCOL.INI Configuration ΓòÉΓòÉΓòÉ
All of the parameters for the PROTOCOL.INI file are assigned preset values when
OS/2 LAN Server is shipped. When new protocols or MAC drivers are added to the
system, new modules are added to the PROTOCOL.INI file.
If you have problems bringing up the system when using the ETHERAND Network
protocol, the following parameter settings should be reviewed and changed, if
necessary.
Parameter Function              Related Parameters
Interrupt levels                Interrupt, IRQ, IRQ_Level
Memory and I/O addresses        IOAddress, IOBase, IO_Port, RamAddress,
                                MemoryWindow
DMA channel number              DMAChannel
Errors are most often caused by parameters related to interrupt levels, memory
and I/O addresses, and DMA channel numbers. All PROTOCOL.INI parameters must
be correctly specified, as well.
5.5.2 Configuration Procedure
ΓòÉΓòÉΓòÉ 8.5.2. 5.5.2 Configuration Procedure ΓòÉΓòÉΓòÉ
Use the following procedure to configure the PROTOCOL.INI file to support your
LAN requirements:
1. Install OS/2 LAN Server
2. Install additional LAN software, as desired.
3. Configure the Network Driver Interface Specification (NDIS) subsystem with
the user interface. Configuring the NDIS subsystem adds the required
entries to the PROTOCOL.INI file in the CMLIB subdirectory. You can also
use any ASCII text editor to modify the PROTOCOL.INI File.
The PROTOCOL.INI file contains information required for the proper
configuration of NDIS, which is used by OS/2 LAN Server. The PROTOCOL.INI
file is read by the system when it is reset at power-on or after you press
the Reset (Ctrl+Alt+Del) key. Therefore, reset the system after you
change the file.
This section describes the parameters for each supported medium access
control (MAC) adapter device driver supplied with OS/2 LAN Server.
You must define the PROTOCOL.INI file because of the many possible
configurations of MAC device drivers. For example, the interrupt levels
for an ETHERAND Network adapter depend on the interrupt levels of other
adapters in that machine. However, there are default parameters that can
be commonly used across various configurations. Recommended values for
these parameters are presented in this section.
The PROTOCOL.INI file can be edited for the following purposes:
o To remove definitions for adapters that are not used in the target system
o To change preset parameters to different values.
5.5.2.1 Reference Information
When you configure Communication Manager for the ETHERAND feature, OS/2
Extended Services installs on the OS/2 workstation a file named PROTOCOL.INI.
This file contains adapter-specific information used during the configuration
and initialization of the ETHERAND Network. You must modify this file for
each OS/2 workstation that has an ETHERAND Network adapter. When planning
for, installing, and configuring ETHERAND adapters, the following
considerations apply:
o The PROTOCOL.INI file must contain a separate adapter definition for each
ETHERAND Network adapter in a workstation.
o When you install an adapter on a PS/2 workstation with the IBM Micro Channel
architecture, use the appropriate PS/2 reference diskette to configure the
system.
o Values for the ETHERAND adapters must be consistent among:
- The Communication Manager configuration file that you create
- The PROTOCOL.INI file
- Adapter switches and jumpers, if any.
Use the following instructions and guidelines to establish appropriate
configuration parameters and their values for ETHERAND Network adapters. Also,
see the documentation provided by your adapter manufacturer for information
about the adapter's capabilities and requirements.
5.5.2.2 Module Descriptions
The PROTOCOL.INI file contains module names and the configuration values for
those modules. The following system-level module names are always present in
the PROTOCOL.INI file regardless of the supported adapters being used:
o PROTOCOL MANAGER
o ETHERAND.
The following listing identifies the module name for each supported ETHERAND
Network adapter type:
Module Name Adapter Type
TCMAC 3Com adapter (workstations with standard PC bus).
TCMAC2 3Com adapter (Personal System/2* workstations with Micro
Channel architecture).
WDMAC Western Digital adapters (this name is used for both the
Western Digital adapter for workstations with the standard
workstation bus (IBM Personal Computer AT workstation), and
the adapter for Personal System/2 workstations with Micro
Channel architecture).
UBMAC Ungermann-Bass adapters (this name is used for both the
Ungermann-Bass adapter for workstations with the standard
workstation bus (IBM Personal Computer AT workstation), and
the adapter for Personal System/2 workstations with Micro
Channel Architecture).
5.5.2.3 Module Name Specification
Table 5-4. "PROTOCOL.INI Module Specifications" identifies all of the
parameters, along with example values, for each PROTOCOL.INI module.
5.5.2.4 PROTOCOL.INI Module Name Parameter Descriptions
Table 5-5. "PROTOCOL.INI Module Parameter Descriptions" contains descriptions
and usage information about the parameters used with the PROTOCOL.INI modules.
5.6 Performance Tuning Specifics
ΓòÉΓòÉΓòÉ 8.6. 5.6 Performance Tuning Specifics ΓòÉΓòÉΓòÉ
This section includes information on tuning OS/2 LAN Server performance. It
should be used with the descriptions of the IBMLAN.INI file in 'IBMLAN.INI
Parameter Descriptions'.
Note: Before changing parameter values for your network, try using the default
values in the IBMLAN.INI file. For an average network (12 to 18
concurrent users) with various data traffic situations and applications,
the default values should result in adequate performance.
OS/2 LAN Server performance depends on the following:
o CPU Speed
o Hardware architecture
o LAN speed and architecture
o DASD access time and data transfer rate
o File server operating system and file system
o File server/requester software design
o Memory available for file system caching and network buffers
o File server workload, including number of concurrent users
o Other applications running on the server
o Requester applications
Since CPU speed, hardware and LAN architectures, and DASD access rates are
fixed with respect to OS/2 LAN Server, this section describes only those
elements that can be altered by the network administrator. These adjustable
elements are interrelated. By isolating particular elements in the network
and adjusting them as needed, the network administrator can optimize network
performance.
5.6.1 Tuning Steps for Server Workload
ΓòÉΓòÉΓòÉ 8.6.1. 5.6.1 Tuning Steps for Server Workload ΓòÉΓòÉΓòÉ
Tuning a LAN server requires some investigation up front. Some of the things
that should be understood are the number of concurrent users, the type of
users they are (for example, file transfer or random record access), and, if
possible, the size of the records being sent across the LAN. In general, the
more information that can be gathered about the way the LAN Server is being
used, the better. With this information in mind, apply the guidance in this
section to tune the LAN Server for the specific environment.
5.6.1.1 Number of Active Requesters
For best performance, the server should have enough request buffers available
to handle a peak request workload. You should allocate two or three request
buffers (numreqbuf) for each requester actively sending requests to the server.
The request buffers are pooled and shared among all requests coming to the
server. Therefore, the recommended number of allocated buffers can be reduced
as the number of active requesters increases. The default values (numreqbuf=36;
sizreqbuf=4 KB) are adequate for up to 18 typical requesters on a server.
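For example, a server that regularly services about 30 actively requesting
workstations might use a Server section setting along the lines of the
following (illustrative only):
  numreqbuf = 60
At the default sizreqbuf of 4KB, this consumes about 240KB of server memory
for request buffers.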
5.6.1.2 Number of Concurrent Big Buffer Users
For best results, the server should have enough big buffers available to handle
a peak request workload for large sequential-file-access operations. You should
allocate two big buffers (numbigbuf) for each requester that might be sending
this type of request concurrently to the server. Since each big buffer is 64KB
in size, the minimum number required to handle the typical environment should
be specified.
Digit positions 17 and 18 of the srvheuristics parameter enable the network
administrator to modify the dynamic usage of big buffers. These digit
positions allow the server to request additional memory from the operating
system when all the big buffers are in use and more are needed; they do so by
setting the time that the server keeps the additional big buffers and the
frequency at which the server reissues the request if memory is not
available. Allocation of additional big buffers (beyond numbigbuf) occurs
only for writes to the server, not for reads from the server. If the server
has sufficient memory, set the value of the numbigbuf parameter as previously
described to ensure optimum performance.
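For example, if up to six requesters may be transferring large sequential
files at the same time, allocating two big buffers for each suggests a
setting such as (illustrative only):
  numbigbuf = 12
which reserves 12 * 64KB = 768KB of server memory for big buffers.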
5.6.2 Adjusting Performance Parameters
ΓòÉΓòÉΓòÉ 8.6.2. 5.6.2 Adjusting Performance Parameters ΓòÉΓòÉΓòÉ
The performance improvements described in the remainder of this section involve
the adjustment of parameters in the IBMLAN.INI file. Default values for
performance parameters are selected to meet the needs of users operating in a
network characterized as follows:
12 to 18 concurrent users doing a mix of small record random reads and large
file transfers.
Since the needs of a particular workstation can change in several ways, the
resources required to support various application programs may vary. The
following situations may require adjustment of the performance-related
parameters:
o Heavy random I/O environments need a large disk cache in the server.
o Heavy file transfer environments need large numbers of 64KB buffers
(numbigbuf parameter) in the server.
o In an environment with many concurrent users, the value of the numreqbuf
parameter on the server should be increased.
o For heavily loaded server stations, consider the following parameter value
changes:
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
Γöé Suggested Parameter Changes for Heavily Loaded Servers Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé PARAMETER Γöé DEFAULT Γöé TUNED 1 Γöé TUNED 2 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé CACHE or DISKCACHE Γöé 64KB Γöé 1560KB Γöé 2MB Γöé
Γöé (CONFIG.SYS) Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé NUMBIGBUF Γöé 5 Γöé 12 Γöé 30 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé NUMREQBUF Γöé 36 Γöé 48 Γöé 60 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Tuned 1 settings will provide good performance in an environment consisting
of 15 to 20 workstations doing a mix of file I/O and print jobs, while
allowing optimum performance for 5 or 6 workstations doing large file
transfers concurrently.
Tuned 2 settings would be appropriate for an environment consisting of 30 or
more workstations and 10 to 12 workstations doing large file transfers
concurrently. Note that a large amount of memory is needed for buffers.
See Database Manager, LAN Server and HPFS Cache Size for other HPFS cache
information.
o The srvheuristics and wrkheuristics parameters can be changed to improve
OS/2 LAN Server performance for your particular configuration. These
parameters are described in the IBMLAN.INI section.
To improve processing of OS/2 .CMD and DOS .BAT files, the server opens the
files with opportunistic locks and does not close them immediately.
However, this causes the files to remain open longer than expected, possibly
resulting in file I/O errors. In addition, when files are copied from the
DOS command prompt, the times and dates may change. To turn off
opportunistic locking and batch-file performance optimization, set both
digit position 0 of the srvheuristics parameter and digit position 1 of the
wrkheuristics parameter to 0.
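As an illustration, with the default requester string shown in 5.3.3.2,
setting wrkheuristics digit position 1 to 0 gives:
  wrkheuristics = 101111111131111111000101112011122
Digit position 0 of the srvheuristics string in the server's section of
IBMLAN.INI is changed from 1 to 0 in the same way.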
o For configurations with limited memory, monitor the effect of the following
parameters on performance. If performance is degraded using default
values, adjust the parameter values in the order listed:
1. numbigbuf
2. maxwrkcache
3. numreqbuf
4. numworkbuf
5. sizreqbuf
6. sizworkbuf
7. numcharbuf
8. sizcharbuf
o On a server with several sessions running simultaneously, you may need to
use values other than the defaults for the following parameters:
- maxconnections
- maxlocks
- maxopens
- maxsearches
- maxsessopens
- maxsessvcs
- maxshares
- maxusers
To reduce memory usage in an OS/2 requester, the following additional
suggestions may be applied, depending upon the environment; a sample
Requester section fragment reflecting them follows the list.
o Maintain the total size of the work buffers in the requester (sizworkbuf *
numworkbuf) at less than 20KB.
o The charcount parameter relates to redirected serial device support. Many
LAN users do not use redirected serial devices. In this case, to save
memory, set charcount = 0.
o The numalerts parameter has a default value of 12. To reduce memory
requirements, reduce this to 5.
o The numservices parameter has a default value of 8. To reduce memory
requirements, reduce this to 4.
o The numcharbuf and sizcharbuf parameters specify the number and size of
buffers used for 'pipes' and character devices. The default is 10 buffers
of 512 bytes each. To reduce storage requirements, reduce these parameters.
Three 128 bytes buffers is a suggested size.
o numdgrambuf specifies the number of buffers for NetBIOS datagrams. Among
other things, datagrams are used by servers to 'announce themselves' to a
network. In many cases, numdgrambuf can be reduced to 8.
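The following fragment shows what these memory-saving settings might look
like in the Requester section of IBMLAN.INI; the values simply restate the
suggestions above and should be adjusted to the environment:
  charcount = 0
  numalerts = 5
  numservices = 4
  numcharbuf = 3
  sizcharbuf = 128
  numdgrambuf = 8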
The parameters in both lists are interdependent. When one parameter is
changed, related parameters should also be adjusted. See the individual
parameter descriptions in the IBMLAN.INI section of this document for the
related parameters.
5.6.3 Tuning for Applications
ΓòÉΓòÉΓòÉ 8.6.3. 5.6.3 Tuning for Applications ΓòÉΓòÉΓòÉ
This section contains information about applications running on the server or
requester and how they may affect performance. This goes somewhat beyond
tuning the LAN Server program itself, but these application-level ideas can be
used in some environments to improve performance.
5.6.3.1 Applications Running on the Server
Any application running on the server requires CPU and I/O resources and
therefore degrades file server performance to some degree. For example, a
common application to find on the server is the print spooler, which services
network print requests.
As determined by standard file server tests (where 64KB print jobs are sent at
5-second intervals), a print spooler can degrade a server's performance by 15%
for random-file I/O and 25% for sequential-file I/O. Ultimately, the effect on
a server's performance depends on the size and frequency of the print jobs.
Client/server applications are also commonly used in networks. These
applications usually entail a high-performance workstation (the server)
providing processing services to less powerful workstations (clients). The
workstations communicate with each other over a network by using a protocol
such as IBM's Application Program to Program Communication (APPC) or NetBIOS.
For example, Remote Data Services (part of Data Base Manager) is a
client/server application.
SQL queries are formulated at a database requester workstation and sent to the
database server for processing. The results of the query are sent back to the
database requester.
When the APPC protocol is used, OS/2 LAN Server is optional. However, any
client/server application running on the LAN Server computer can degrade
performance.
By default, the server operates at a higher priority than any other
application. To boost the relative priority of an application running on the
LAN Server computer, digit position 6 of the srvheuristics parameter in the
IBMLAN.INI file can be changed from 4 to 5 or higher. This degrades the
performance of the server while the client applications are being executed.
5.6.3.2 Applications Running on Requesters
There are several types of applications on requester workstations. Performance
concerns center around executables, buffering, and non-LAN aware applications.
Executables
For OS/2 LAN Server, neither DOS nor OS/2 programs are cached in the server.
If program load time is important in your environment (for example a
classroom), copy the .EXE file to a VDISK on the server and Net Share the VDISK
to the requesters.
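For example (the share name, drive letters, and server name here are
illustrative), after copying the executables to a VDISK such as E: on the
server, the VDISK could be shared at the server and used at a requester as
follows:
  NET SHARE PROGS=E:\
  NET USE P: \\SERVER1\PROGS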
Cache Versus Self-Buffering
Usually, multiple requesters running the same application are connected to a
server workstation. For example, requester workstations use a database
application with the database files located on the server.
If the database files are small compared to the cache size, they are placed
into cache to provide improved response time to the requesters. Conversely, if
the database files are large compared to the cache size, placing the files into
cache is probably non-productive and the memory could be used elsewhere. In
this case, reduce the cache size to 64KB and evaluate the results.
Some database applications maintain their own cache of commonly used data,
referred to as a buffer pool. If the database program is running on the LAN
Server (client/server mode), performance may be improved by increasing the size
of the buffer pool at the expense of cache memory. The database program knows
more about the data usage than does the file system cache.
Non-LAN Aware Applications
Many applications in use today were written for standalone operation and are
not designed for performing file I/O across a network. DOS Database
applications, in particular, often perform a large number of very small reads
and writes. These reads and writes place a heavy demand on the server to
perform file I/O and process SMB's for data requests and transmissions.
To reduce the heavy demand on the server, put the database files into cache and
structure the database queries to minimize the amount of data returned to the
requester. Also, consider moving to current database technologies such as
client/server.
5.6.4 OS/2 LAN Server 1.3 Enhancements
ΓòÉΓòÉΓòÉ 8.6.4. 5.6.4 OS/2 LAN Server 1.3 Enhancements ΓòÉΓòÉΓòÉ
5.6.4.1 Record Locking
LAN Server 1.3 includes performance improvements when requesters are locking
byte ranges of files. These types of locks are used by many applications.
(For example, databases lock records during update operations.)
The improvements include an algorithm to queue lock requests at the server when
the byte range requested is already locked. This reduces the number of lock
retries, which reduces the number of messages on the network and the number of
requests seen by the server. The result is increased network and server
capacity when there is lock activity.
Second, the 1.3 DLR now gives an immediate response back to the application if
a lock request fails. (Previously, DLR retried the lock 3 times before
notifying the application. OS/2 requesters always notified the application
immediately.) This change speeds up applications that rely on lock failures as
part of a search technique. (Some applications use this to control software
license usage.)
5.6.4.2 Program Load
LAN Requester 1.3 has improved program load performance. No specific numbers
are given here, but tests have shown that program load performance is better
than LAN Requester 1.2. This is due to changes made to the OS/2 Loader and how
it functions in the LAN environment. Unlike 1.2, 1.3 does not load all DLL's
at program startup over the LAN. (See 3.1.5 Enhanced Loader for more
information.)
5.6.4.3 Bridge Timeout
Bridge timeout refers to the problem in LAN Server 1.2 where large (64K)
messages are sent across a slow bridge (either bridges on slow
telecommunications links or local bridges which are suffering congestion) and
NetBIOS at the server times out and disconnects the remote user. A SYS0240
error message is the symptom users see when this occurs.
With LAN Server 1.3, the server will drop the session when this occurs, but
will re-establish it and set NetBIOS so the new session will use a smaller
message (request buffers instead of big buffers).
For LAN Server 1.2, the problem still exists, but the NetBIOS timer can be
changed using srvheuristic 15 (oplock timeout); see Table 5-7, "Conversion for
Srvheuristic." Because the oplock timeout is stored as a 16-bit number and the
NetBIOS timeout is stored as an 8-bit number, a messy conversion must be made
to the time values listed for srvheuristic 15. Table 5-4, "PROTOCOL.INI Module
Specifications" identifies all of the parameters, along with example values,
for each PROTOCOL.INI module.
Chapter 6 Database Manager
ΓòÉΓòÉΓòÉ 9. Chapter 6 Database Manager ΓòÉΓòÉΓòÉ
6.1 Roadmaps for using this section
6.2 Database Manager Structure
6.3 Design Issues
6.4 Database Manager and Database-specific Parameters
6.5 Operational Considerations
6.6 Improving Performance of Query Manager Reports
ΓòÉΓòÉΓòÉ 9.1. 6.1 Roadmaps for using this section ΓòÉΓòÉΓòÉ
To make this chapter as useful as possible, the next three sections provide
several possible paths through the information:
o 6.1.1 Information Ordered by Types of Questions You May Have
o 6.1.2 Information Ordered by Stages of Your Project
o 6.1.3 Database Performance Guidelines, From Most to Least Important.
The information in this chapter is presented in sections that follow a logical
progression, but there is little need to read the entire chapter if your needs
fit one of the categories listed above.
ΓòÉΓòÉΓòÉ 9.1.1. 6.1.1 Information Ordered by Types of Questions You May Have ΓòÉΓòÉΓòÉ
Following are references to information in this chapter, as well as to other
sections of the book. These references have been grouped according to certain
questions or areas of interest you may have.
o Learning about general database performance issues:
1. Read Chapter 2 How to Approach Performance Tuning
2. Read 6.1.3 Database Performance Guidelines, From Most to Least
Important
3. Read 6.2 Database Manager Structure
4. Browse 6.3 Design Issues
5. Browse 6.4 Database Manager and Database-specific Parameters
6. Refer to Related Publications
o Starting from scratch:
1. Read Chapter 2 How to Approach Performance Tuning
2. Read 2.5 Performance Concepts
3. Read 6.2 Database Manager Structure
4. Read 6.1.2 Information Ordered by Stages of Your Project
5. Read 6.3 Design Issues
6. Browse 6.4 Database Manager and Database-specific Parameters.
o Designing a new database:
1. Read the first section of 6.3 Design Issues
2. Read 6.3.3 Database Design Issues
3. Read 6.4 Database Manager and Database-specific Parameters.
o Designing an application to use an existing database:
1. Read the first section of 6.3 Design Issues
2. Read 6.3.2 Database Application Design
3. Read 6.4 Database Manager and Database-specific Parameters.
o If your databases and applications are already in production:
1. Read Chapter 2 How to Approach Performance Tuning
2. Read 6.1.2 Information Ordered by Stages of Your Project
3. Read 6.3.2.7 Remote Data Services and SQLLOO or APPC
4. Read 6.5.3 Reorganization (REORG) and 6.5.4 Run Statistics Utility
(RUNSTATS)
5. Read sections on indices in 6.3.3 Database Design Issues.
o Benchmarking your database performance:
1. Read Chapter 2 How to Approach Performance Tuning
2. Read 2.3 Benchmarking
3. Read the sections in the Communications Manager chapter that are
relevant to the communications methods you are using. (See 4.4
Communications Manager Parameters and Tuning Specifics)
6.1.2 Information Ordered by Stages of Your Project
ΓòÉΓòÉΓòÉ 9.1.2. 6.1.2 Information Ordered by Stages of Your Project ΓòÉΓòÉΓòÉ
These are lists of things that can be done to improve the performance of a
database application. If the application has been obtained from an outside
source, you may not be able to do everything on this list. If you are
developing an application, there are more things that can be done, depending on
how far along you are in its implementation. The following recommendations are
divided into the different phases of a project.
1. Any time
o Add memory to your system. (However, there are always tradeoffs in
adding memory; see 3.2 Memory Usage and Tuning.)
o Eliminate or turn off TRACE in CONFIG.SYS (see OS/2 System Trace in 2.2
Methodical Approach)
o Put STARTDBM (which initially allocates Database Manager resources) in
STARTUP.CMD on the machine where the database is located. This will
cause initialization to occur at system startup rather than during your
application.
o Implement Record Blocking - RDS only (see 6.3.2.9 Record Blocking)
o Restructure indices (see 6.3.3.2 Indices)
o Put database and logs on separate drives (see Splitting Log Files from
the Database)
o Tune performance-related parameters
- Change Database Manager and Database-related parameters (see 6.4
Database Manager and Database-specific Parameters)
- Increase size of Query Manager Row Pool; this may improve response for
displaying reports (see 6.6 Improving Performance of Query Manager
Reports)
- Change size of Communications Manager Transmit Buffer (see Transmit
Buffer Size and Receive Buffer Size in 4.4.2 Lan Feature Profiles)
o Use APPC/SQLLOO (installed by Basic Configuration Services) (see 6.3.2.7
Remote Data Services and SQLLOO or APPC)
o Take advantage of HPFS versus FAT fixed disk partitions (see Database
Manager, LAN Server and HPFS Cache Size in 3.3.4 File System Performance
Considerations)
o Do performance benchmarking (see 2.3 Benchmarking)
2. Setting up the system
o Decide on initial RAM requirements (see Chapter 7, "Memory and Fixed Disk
Requirements," in the OS/2 1.3 Information and Planning Guide and allow
additional RAM for parameters that have been increased from default
values, especially BUFFPAGE. Also see 3.2 Memory Usage and Tuning.)
o Determine hardware requirements
- Eventual database size (see 6.5.2 Estimating Database Size)
- Justification of second fixed disk to split logs (see Splitting Log
Files from the Database)
- Decide on number of network cards (see Multiple Network Cards in 4.3.3
Hardware Considerations)
- Decide on use of HPFS versus FAT (see 3.3 File Systems, Caches, and
Buffers)
o If using Remote Data Services, install using Basic Configuration Services
(this will ensure that you get SQLLOO (see 6.3.2.7 Remote Data Services
and SQLLOO or APPC))
3. Early Development
o Restructure (Normalize) Database Tables (see 6.3.3.1 Table design)
o Determine initial requirements for indices (see 6.3.3.2 Indices)
o Use Database Application Remote Interface if appropriate - a Remote Data
Services consideration only (see 6.3.2.8 Database Application Remote
Interface (ARI))
o Take advantage of Locking and Data Isolation Levels (see 6.3.2.6 Locking
and 6.3.2.5 Data Isolation Levels)
o Take advantage of commit points (see 6.3.2.4 Commits)
o Use static SQL where appropriate (see 6.3.2.3 Static versus Dynamic SQL)
o Make initial pass at setting configuration parameter values (see 6.4
Database Manager and Database-specific Parameters)
4. Late Development
o Use appropriate Data Isolation Levels (see 6.3.2.5 Data Isolation Levels)
o Make final determination of what indices are required (see 6.3.3.2
Indices)
o Test on a full-sized database
o Determine frequency of administration utilities: Reorganization, Rebind,
and Run Statistics procedures (see 6.5 Operational Considerations)
o Determine final tuning parameters (see 6.4 Database Manager and
Database-specific Parameters).
5. In Production
o Determine an appropriate backup procedure.
o Run Reorganization (REORG) (see 6.5.3 Reorganization (REORG))
o Run statistics (RUNSTATS) (see 6.5.4 Run Statistics Utility (RUNSTATS))
o Rerun the Bind procedure
o Adjust tuning parameters (see 6.4 Database Manager and Database-specific
Parameters).
6.1.3 Database Performance Guidelines, From Most to Least Important
ΓòÉΓòÉΓòÉ 9.1.3. 6.1.3 Database Performance Guidelines, From Most to Least Important ΓòÉΓòÉΓòÉ
Many factors can affect the performance of Database Manager. Some of these
factors are system related (memory, processor speed, network), some are
Communications related (e.g., parameters specific to Communications Manager),
and others are specific to Database Manager. Of the Database Manager factors,
some may be applied only during application development, while others may be
used only after the database and applications are up and running.
Following is a list of the database performance factors, listed in order of
performance impact. Performance is a consideration from design and
implementation to production; some of the more important performance decisions
come in the database and application design phases. The following list starts
with "Design" and progresses from there.
1. 6.3 Design Issues (see 6.3 Design Issues)
o Database application design issues
- Database Services considerations
These apply to any workstation on which a database resides, and are
related to both RDS and standalone systems.
o SQL Query Design, including Static versus Dynamic SQL decisions
(see 6.3.2.2 Query Design and Access Plan Concepts and 6.3.2.3
Static versus Dynamic SQL). This item is an iterative process with
"Index Design," listed under "Database Design" in the following
text.
o Commit strategy (see 6.3.2.4 Commits)
o Data isolation levels and Locking (see 6.3.2.5 Data Isolation
Levels and 6.3.2.6 Locking).
- Remote Data Services (RDS) considerations
o Database Application Remote Interface (see 6.3.2.8 Database
Application Remote Interface (ARI))
o Record blocking (see 6.3.2.9 Record Blocking)
o APPC/SQLLOO decision (see Remote Data Services and SQLLOO or APPC)
o Remote Data Services alternatives (see Alternatives)
o Database Design (see 6.3.3 Database Design Issues)
- Logical Design of a Database
o Table design (number of tables, degree of normalization) (see
6.3.3.1 Table design)
o Index design, which will be iterative with query design (see
6.3.3.2 Indices)
o Referential Integrity (RI) definitions (RI overhead will have some
performance impact; see 6.3.3.3 Referential Integrity)
- Physical Design of a Database
o Decide on which physical drive to place the database (see Physical
Considerations)
o Decide where to place the database log files (see 6.4.3.6 Tuning
Log-related Parameters for Performance).
2. Database Parameter Configuration
For RAM-related parameters, RAM used by database-specific parameters will
need to be balanced with other system parameters (see 3.2 Memory Usage and
Tuning).
o Database parameters applicable in any database environment
- Buffer Pool Size (see 6.4.2 Buffer Pool Size (BUFFPAGE))
- Log-Related Parameters (see 6.4.3 Log-related Parameters)
- Sort Heap, if sorting is needed (see Table 6-10. "Summary of
information for Sort Heap size")
- Locklist Size, in heavy concurrency situations (see 6.4.5 Lock List
Parameters (LOCKLIST and MAXLOCKS))
o RDS-related parameters
- APPC versus SQLLOO decision, which is actually a Communications
parameter specifically for Database Manager (see Remote Data Services
and SQLLOO or APPC)
- Record Block Size (see 6.4.6 Requester and Server I/O Block Sizes
(RQRIOBLK and SVRIOBLK))
3. Hardware Requirements
o Model of PS/2
- Server: This machine needs CPU speed and DASD space and speed. PS/2
Models 80 and 95 are recommended for production use, although other
models may be sufficient for testing or for use in an environment
where the database server is not heavily used.
- Requester or Standalone: Most PS/2 systems will suffice for these
workstations. (A standalone will require more power than a mere
requester .)
o Amount of RAM - see Chapter 7, "Memory and Fixed Disk Requirements," in
the OS/2 1.3 Information and Planning Guide. Factor in additional RAM
for parameters that have been increased from default values.
o DASD considerations
- Having two physical drives is important, to split the log from the
database (see 6.4.3.6 Tuning Log-related Parameters for Performance)
- If your system supports them, consider SCSI drives for speed.
o Communications Manager connectivity (for RDS)
6.2 Database Manager Structure
ΓòÉΓòÉΓòÉ 9.2. 6.2 Database Manager Structure ΓòÉΓòÉΓòÉ
This section contains a brief overview of the different OS/2 Database Manager
components and how they fit together. Figure 6-1. "Structure of OS/2 Database
Manager" shows the general organization of and relationship between the
components of Database Manager.
Database Manager consists of the following three components:
Component Description
Database Services This is the relational engine of the Database Manager.
It provides SQL support, and manages data integrity and
the storage of data on disk. Database Services is a
prerequisite to using the other two components of
Database Manager.
Query Manager This is an IBM-supplied database application. It
includes an interface for data entry and editing,
prompted generation of queries, a command line
interface, an interface to build customized applications
(including procedures, panels, and menus), the ability
to edit and create forms, and the ability to change
Database Manager parameters and parameters associated
with a particular database. Note: Do not do
benchmarking with Query Manager unless you are using
Query Manager as your database interface.
Remote Data Services As an option at installation, the user can decide
whether their machine will be set up to use databases
located on other workstations, create and use local
databases, and/or allow locally stored databases to be
used by others. If the user asks to use remote
databases or to share local databases with others, the
system is set up for Remote Data Services (RDS). Once
RDS is installed, the user must then configure the
appropriate Communications Manager parameters. See
Remote Data Services and SQLLOO or APPC for more
information on the communications aspects of RDS.
6.3 Design Issues
ΓòÉΓòÉΓòÉ 9.3. 6.3 Design Issues ΓòÉΓòÉΓòÉ
6.3.1 Before you begin
6.3.2 Database Application Design
6.3.3 Database Design Issues
6.3.1 Before you begin
ΓòÉΓòÉΓòÉ 9.3.1. 6.3.1 Before you begin ΓòÉΓòÉΓòÉ
A basic question that may not have been asked is: "Do I need a relational
database?" If your data and application are very simple, flat files may be all
that are necessary to do the job. To help make a decision between a relational
database and a flat file implementation, consider the following:
o With a relational database, changes can be made to the structure of the data
without necessarily affecting the applications accessing that data. With a
flat file, any time columns of data are added, deleted, or changed in
length, applications referencing that data may have to change. With a
relational database, all queries are in SQL so the database system handles
changes in the physical location of the data.
o In a relational database, common functions performed against data are
handled by the database system rather than coded into the application. For
example, the Join operation is a common database function. In a flat file
system, join data can be handled in one of two ways:
- Tables can be denormalized (see 6.3.3.1 Table design for a description of
normalization). However, this results in redundant data as well as
additional work and application complexity in keeping track of all
occurrences of repeated data.
- The join operation can be coded directly into the application, including
merging the data and performing comparison loops through the data. Note
that this code could possibly be unique for each set of files being
joined. In a relational system, data need not be repeated to avoid joins.
A join is simply expressed in SQL and the database system handles all the
underlying operations.
o A flat file system would be a consideration if the use of data is a single
table operation, and it is relatively certain that the structure of the data
will never change. Examples of this might be a phone list or a recipe list.
However, if there is ever a need to cross reference these two lists (or
tables), such as determining who likes which recipes, the application then
moves into the realm of needing a relational database.
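As a brief illustration (the table and column names are hypothetical), the
cross-reference just described is a single SQL statement in a relational
system:
SELECT p.NAME, r.RECIPE
FROM PHONE_LIST p, RECIPE_LIST r
WHERE p.NAME = r.NAME
In a flat file implementation, the equivalent cross-reference would have to be
coded into the application itself.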
The remainder of this chapter assumes that you do need a relational database.
Once this is decided, the next question to be asked is "Do I need the
capabilities of Remote Data Services, or would a local solution be adequate?"
Remote Data Services adds much function and flexibility to your options, yet
it also adds complexity and can impact performance if not properly tuned.
Again, the sections in this chapter related to Remote Data Services (see
6.3.2.7 Remote Data Services, 6.3.2.8 Database Application Remote Interface
(ARI), and 6.3.2.9 Record Blocking) have been written with the assumption that
Remote Data Services is in fact a feature you have decided to use.
6.3.2 Database Application Design
ΓòÉΓòÉΓòÉ 9.3.2. 6.3.2 Database Application Design ΓòÉΓòÉΓòÉ
6.3.2.1 General OS/2 Application Issues
Some concepts of application design are common to all OS/2 applications. These
issues are covered in Chapter 7 Application Design and Development Guidelines
6.3.2.2 Query Design and Access Plan Concepts
Requests to Database Manager are expressed as SQL statements. However, Database
Services (the runtime engine of Database Manager) requires that SQL statements
be translated into what can be thought of as "SQL object code". This
translation process is performed by Database Manager's optimizing precompiler.
The result of this translation process is called an access plan which defines a
path to the data represented by an SQL statement. The act of creating an
access plan is called binding an application to a database. (Note that an
access plan can be created by running the precompiler, SQLPREP, or by executing
the bind command, SQLBIND, against the .BND file generated as optional output
from the precompiler.)
Database Manager's precompiler is an optimizing compiler, meaning that it
determines the most efficient path (access plan) to the data. For example, the
precompiler determines whether it is more efficient to use an index or not (and
if so, which indices), whether to sort values in memory or build a sorted
temporary table, etc. The precompiler makes its decisions based on statistics
that are stored in the database. (See 6.5.4 Run Statistics Utility (RUNSTATS)
for more information on these statistics.)
The purpose of this section is to provide a better understanding of how the
Optimizer works so that you will understand how to formulate queries and know
what indices to create to improve the performance of a database application.
Relation Scan versus Index Scan
There are two ways of accessing data in a table - by directly reading the table
itself (relation scan), or by first accessing an index on that table (index
scan).
A relation scan is when Database Manager sequentially accesses every row of a
table. This type of scan is used when no index exists that can be used for the
particular query, or if it is determined that an index scan would be more
costly than a relation scan (because of small table size, poor index clustering
(see Index Clustering in this section for more information), etc.).
An index scan is when Database Manager accesses an index to do any or all of
the following:
o Narrow down the set of qualifying rows (by scanning the rows in a certain
range of the index) before accessing the base table.
o Order the output.
o Fully retrieve the requested data.
The index scan range (i.e. start/stop points) is determined by the values in
the query against which index columns are being compared. If all of the
requested data is in the index, the base table will not be accessed. If
additional data is needed that is not in the index, this data is obtained by
reading the base table using a record pointer obtained from the index key.
Index Scan Concepts
When an Index can be used
In determining whether an index can even be used for a particular query,
Database Manager first looks at all the EQUAL predicates in the statement's
WHERE clause. (Note: A predicate is an element of a search condition in a
WHERE clause that expresses or implies a comparison operation.) Predicates
that can be used to delimit the range of an index scan are those involving an
index column in which one of the following is true:
o the index column is being tested for equality against a constant, a host
variable, an expression that evaluates to a constant, or a keyword.
o the test is against the index column
o the test is for equality against a basic subquery (i.e. one that doesn't
contain ANY, ALL or SOME) and the subquery does not have a correlated column
reference.
An inequality predicate can also be used in an index scan but it must involve
the index key column immediately after the last usable "=" predicate index
column. So, in the following example:
WHERE NAME = :hv1
AND DEPT = :hv2
AND MGR > :hv3
all three predicates can be used to delimit the index scan: the first two
satisfying an equals condition and the last predicate being an inequality.
There can be no more than one inequality predicate used in the index scan,
unless the multiple inequality predicates are all for the same column. The
valid inequality operators are:
o >, <, >=, <=
o BETWEEN
o LIKE with a constant starting with other than '_' or '%' (can be used in an
index scan for dynamic SQL only).
o A LIKE clause can be changed into a BETWEEN so that it can be used in an
index scan for static SQL (an example follows this list).
o USER keyword
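As an illustration of the LIKE-to-BETWEEN rewrite (the column name and range
values are hypothetical, and the rewrite assumes NAME contains only uppercase
alphabetic characters), the predicate
WHERE NAME LIKE 'JO%'
could be coded for static SQL as
WHERE NAME BETWEEN 'JO' AND 'JOZZZZZZZZ'
so that it can still be used to delimit an index scan.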
For example, given an index with the following definition:
INDEX IX1: NAME ASC,
DEPT ASC,
MGR DESC,
SALARY DESC,
YEARS ASC
The following predicates could be used in delimiting the range of the scan of
index IX1:
WHERE NAME = :hv1
AND DEPT = :hv2
- or -
WHERE MGR = :hv1
AND NAME = :hv2
AND DEPT = :hv3
Notice that in the second example the WHERE predicates do not have to be
specified in the same order as the key columns appear in the index. Database
Manager will sort this out.
In the following WHERE clause, only the EQUAL predicates for NAME and DEPT
would be used in delimiting the range of the index scan, but not the
predicates for SALARY or YEARS.
WHERE NAME = :hv1
AND DEPT = :hv2
AND SALARY = :hv4
AND YEARS = :hv5
This is because a key column (MGR) separates these columns from the first two
index key columns, so a contiguous key range cannot be formed. However, once the range
is determined by the 'NAME = :hv1' and 'DEPT = :hv2' predicates, the remaining
predicates can be evaluated against the remaining index key columns.
If the query involves ordering, an index can be used to order the data if the
ordering column(s) appear consecutively in the index, starting from the first
index key column. An exception to this is when the first 'n' index key columns
are compared for equality against constant values (i.e. any expression that
evaluates to a constant). Here the ordering column can be other than the first
index key column(s). For example, in the query:
WHERE NAME = 'JONES' AND DEPT = 'D93'
ORDER BY MGR
the index could be used to order the rows since NAME and DEPT will always be
the same values and will thus be ordered. Another way of saying this is that
the above WHERE clause is equivalent to:
WHERE NAME = 'JONES' AND DEPT = 'D93'
ORDER BY NAME, DEPT, MGR
There may be other predicates that involve columns that are in the index, but
cannot be used to bracket the index search. Database Manager will use the
index data in evaluating these predicates rather than reading the base table.
Beyond that, there may be predicates whose columns do not appear in the index.
To evaluate these predicates, rows of the base table must be read and
comparisons made to determine which rows satisfy the remaining predicates.
Index versus Relation Scan Costs
In determining whether to access rows through an index or via a relation scan,
the most obvious criteria will be whether sorting is required (assuming the
appropriate index exists). Aside from that, the actual cost (in estimated
performance) to access the data is also considered. There may be times when
the optimizer determines that it is quicker to perform a relation scan
followed by a sort, than to go through an index to access a base table. For
example, this might happen when:
o The base table is so small that it can be totally manipulated in RAM. There
would be less I/O involved than to read both the index and data pages into
memory.
o The physical order of the rows in the table may not be anywhere near the
same order as the most important index (that is, the index cluster ratio,
found in the CLUSTERRATIO column of the SYSIBM.SYSINDEXES system table, is
low). Here, it may be faster to sequentially access the base table than to
access index pages and then very randomly read data pages in and out of the
buffer pool.
Index Clustering
The term "index clustering" has a slightly different meaning in OS/2 Database
Manager than in DB2. In DB2, having a clustering index means that rows will be
physically inserted into the base table as close to the clustering index order
as possible.
In OS/2 Database Manager, index clustering is just a statistic (generated by
the RUNSTATS utility) that tells how close the physical table order is to a
specific index order at the time that the RUNSTATS utility is run. Insertion
of rows into the table is always done in a first fit fashion. The statistics
used by the OS/2 Database Manager optimizer in making access plan decisions
are described in the section 6.5.4 Run Statistics Utility (RUNSTATS).
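As a hedged illustration, the cluster ratio gathered by RUNSTATS can be
examined with a catalog query such as the following (the table name EMPLOYEE
is hypothetical, and the NAME and TBNAME catalog columns are assumed here):
SELECT NAME, CLUSTERRATIO
FROM SYSIBM.SYSINDEXES
WHERE TBNAME = 'EMPLOYEE'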
Database Manager Index Structure
OS/2 Database Manager uses a B+ tree structure for storing its indices. A B+
tree has one or more levels, as pictured below (where 'rid' means 'record
id'):
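(The following is a minimal sketch of a three-level B+ tree, reconstructed to
match the lookup example described below; the key values shown are
illustrative only.)
ROOT NODE:          [ F | N | Z ]
                   /      |      \
INTERMEDIATE:  [...]  [ H | L | N ]  [...]
                          |
LEAF NODES:    ...  [ (I,rid) (J,rid) (L,rid) ]  ...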
The top level is called the root node. The bottom level consists of leaf
nodes, where the actual index key values are stored. Any levels between the
root and the leaf nodes are called intermediate nodes.
In looking for a particular index key value, Index Manager searches the index
tree starting at the root node. The root contains one key for each node at the
next level. The value of each of these keys is the largest possible key value
for the corresponding node at the next level. For example, suppose an index
has 3 levels as shown above. To find an index key value, Index Manager would
search the root node for the first key value greater than or equal to the key
being looked for. This root node key would point to a specific intermediate
node. The same procedure would be followed with that intermediate node to
determine which leaf node to go to. The final index key would be found in the
leaf node. Using the diagram above, suppose the key being looked for is 'I'.
The first key in the root node greater than or equal to 'I' is 'N'. This
points us to the middle node at the next level. The first key in that
intermediate node that is greater than or equal to 'I' is 'L'. This points us
to a specific leaf node where we find the index key for 'I' along with its
"rid" (the record id of the corresponding row in the base table).
Predicate Terminology
Let's introduce some new terminology for the ideas presented in the section
Index Scan Concepts above. Internal to Database Manager, there are several
components sharing the work of retrieving rows to satisfy a query. These
components and their relationship to one another can be pictured as follows:
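(A minimal sketch of the relationship, reconstructed from the description that
follows:)
User Application
      |
Relational Data Services  <-->  Sort Manager
      |
Data Management Services (DMS)
      |
Index Manager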
A User Application requests a set of rows from Database Manager with an SQL
statement, qualifying the specific rows desired with predicates. When the
optimizer decides how to evaluate an SQL statement, each predicate falls into
1 of 4 categories, determined by how and when that predicate is used in the
evaluation process. These categories are listed below, ordered by performance
from best to worst:
o Range Delimiting Predicates and Index Sargable Predicates
o Sargable Predicates
o Residual Predicates
Range Delimiting and Index Sargable Predicates
Range Delimiting Predicates are those used to bracket an index scan (i.e. they
provide start and stop key values for the index search). Index Sargable
Predicates are not used to bracket a search, but can be evaluated from the
index because the columns involved in the predicate are part of the index key.
For example, assume we have the following WHERE clause and the index, 'IX1',
defined in the section Index Scan Concepts above:
WHERE NAME = :hv1
AND DEPT = :hv2
AND YEARS > :hv5
The first two predicates (NAME = :hv1, DEPT = :hv2) would be Range Delimiting
Predicates, while 'YEARS > :hv5' would be an Index Sargable Predicate.
Sargable Predicates
Predicates that cannot be satisfied through an index and require the access of
individual rows from a base table are called Sargable Predicates. Data
Management Services (DMS) handles these predicates, retrieving the columns
needed to evaluate the predicate. Also, DMS will retrieve any other required
columns (i.e. to satisfy the columns in the SELECT list) that could not be
obtained from the index.
Residual Predicates
Residual Predicates are those that require I/O beyond the simple accessing of
a base table. Examples of Residual Predicates are ones that involve handling
of correlated subqueries, quantified subqueries (subqueries with ANY, ALL,
SOME or IN), or the reading of LONG VARCHAR data (which is stored in a
separate file from the table itself). These predicates are evaluated by
Relational Data Services.
Order of handling the predicates
Rows that satisfy a query are passed back to a user application one row at a
time in response to the SQL FETCH statement. Depending on the access plan,
these rows may be:
o directly retrieved one at a time from the actual data files and indices, and
then passed back to the application before retrieving another row,
o or, the entire answer set may be materialized temporarily under the covers
and then each row from this temporary set is passed back to the application
one at a time in response to the FETCH statement.
Relational Data Services is the component that receives the application's SQL
requests and returns the rows. Relational Data Services determines from a
statement's access plan which predicates are to be evaluated by the DMS and
Index Manager components and passes them on to DMS. If DMS detects that some
of the predicates are to be evaluated using an index (from the predicates and
access plan information it received), it calls Index Manager, passing on only
the index-related predicates.
Index Manager then finds a row satisfying its predicates (the Range Delimiting
and Index Sargable predicates) and returns to DMS the appropriate column
values and/or a pointer to the base table. If there are any remaining
predicates to be evaluated and/or column values to be obtained that could not
be satisfied from the index, DMS will do these things by accessing base
tables.
All the results for a row that are collected by Index Manager and DMS are
passed back to Relational Data Services. If the data must be sorted,
Relational Data Services passes the rows (that it receives from DMS) to the
Sort Manager, until no more rows are found which qualify as input to the sort.
(See Piped versus Non-Piped Sorts for more information.) After this,
Relational Data Services accesses the sorted rows as needed - either for
further operations (such as a merge join) or to pass directly back to the user
application.
Residual predicates are also handled by Relational Data Services, after
receiving rows from DMS, or even after sorting or joining rows.
Join Concepts
A join is where rows from one table are concatenated to rows of one or more
other tables based on predicate comparisons between the rows of the tables.
For example, given the following two tables:
Table1 Table2
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
Γöé proj Γöé proj_id Γöé Γöé proj_id Γöé name Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé A Γöé 1 Γöé Γöé 1 Γöé Sam Γöé
Γöé B Γöé 2 Γöé Γöé 3 Γöé Joe Γöé
Γöé C Γöé 3 Γöé Γöé 4 Γöé Mary Γöé
Γöé D Γöé 4 Γöé Γöé 1 Γöé Sue Γöé
Γöé Γöé Γöé Γöé 2 Γöé Mike Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Joining Table1 and Table2 where the ID columns are equal would be represented
by the following SQL statement:
SELECT proj, x.proj_id, name
FROM Table1 x, Table2 y
WHERE x.proj_id = y.proj_id
and would yield the following set of result rows:
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
Γöé proj Γöé proj_id Γöé name Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé A Γöé 1 Γöé Sam Γöé
Γöé A Γöé 1 Γöé Sue Γöé
Γöé B Γöé 2 Γöé Mike Γöé
Γöé C Γöé 3 Γöé Joe Γöé
Γöé D Γöé 4 Γöé Mary Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
When joining 2 tables, one table is the "outer table" and the other is the
"inner table." The outer table is accessed first and is only scanned once.
Whether the inner table is scanned multiple times will depend on the type of
join (see below) and what indices are present.
Database Manager supports two types of join, Nested Loop Join and Merge Join,
described below. Which join is chosen will depend on the existence of a join predicate
(defined in the section "Merge Join"), and various costs involved as
determined by table and index statistics.
Nested Loop Join
If there is no predicate of the form table1.column = table2.column, Database
Manager must do a nested loop join. A nested loop join is performed by
scanning through the entire inner table for each accessed row of the outer
table. If the inner table has an index that is usable for the specified
predicates and there is a predicate of the following form ( where relop is =,
>, >=, <, or <= ):
outer_table.column relop inner_table.column
Database Manager could do an index lookup on the inner table for each row of
the outer table that is accessed. This could be a way to significantly reduce
the number of rows accessed in the inner table for each access of the outer
table, (though it will depend on the selectivity of the join predicate, etc.).
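A minimal sketch of such a predicate follows (the table, column, and index
names are hypothetical; assume an ascending index exists on
PRICE_HISTORY.EFFECTIVE_DATE):
SELECT o.ORDER_NO, p.PRICE
FROM ORDERS o, PRICE_HISTORY p
WHERE o.ORDER_DATE >= p.EFFECTIVE_DATE
Because there is no equality join predicate, a merge join is not possible; for
each ORDERS row, the index on EFFECTIVE_DATE can be used to limit the rows
read from the inner table.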
Merge Join
Merge join requires a predicate of the form table1.column = table2.column.
This is called a join predicate. Merge Join also requires sorted input on the
joining columns, either through index access or a call to the Sort Manager.
There is also the implied limitation that each join column be
sortable/indexable (i.e. size <= 255 bytes). The joined tables are scanned in
parallel. The outer table of the Merge Join is scanned just once. The inner
table is, in most cases, also scanned only once. If there are repetitive values in
the outer table, Database Manager may have to re-scan a group of records in
the inner table. For example, suppose the 'A' columns in tables T1 and T2 have
the following values:
Outer Table Inner Table
T1: column A T2: column A
ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇ ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇ
2 1
3 2
3 2
3
3
The steps for doing the Merge Join are:
o Read the first row from T1. The value for A is '2'.
o Scan T2 until a match is found, then join the two rows.
o Keep scanning T2 while the columns match, joining rows.
o When the '3' in T2 is read, go back to T1 and read next row.
o The next value in T1 is '3', which matches T2, so join the rows.
o Keep scanning T2 while the columns match, joining rows.
o The end of T2 is reached.
o Go back to T1 to get the next row - Database Manager sees that the next
value in T1 is the same as the previous value from T1, so it re-scans T2
starting where the '3' first appeared in T2 (Database Manager remembers this
position).
Outer versus Inner Join Determination
How does Database Manager determine which table will be the inner and outer
table of the join? The following comments are not a complete indication of
the outer versus inner decision in the Optimizer, but rather are to be taken
as general guidelines.
For a Merge Join, determination of outer versus inner is based on the
following:
" For each instance of a join value in the outer table, there can
potentially be a group of values in the inner table. If the outer table
has multiple instances of a single value, the group of identical values in
the inner table has to be scanned multiple times (once for each instance
of the value in the outer table). Database Manager attempts to minimize
the number of repetitive scans of groups in the inner table. For each row
accessed in the inner table, any additional predicates involving the two
tables must also be evaluated. If the predicates include I/O, attempts
are made to minimize that too. So, suppose Table1 has repetitive join
values, but Table2 doesn't. (This can be determined by looking at the
number of rows in the table and the column cardinality - that is, number
of values in the column.) Table2 sounds potentially better as the choice
for the outer table as group scans on Table1 can be avoided. "
There are two main considerations in selecting the inner and outer tables for
a nested loop join: Indices and Buffering.
o Indices: It may be possible to do an index lookup on one of the tables,
making it a possible inner table candidate. It could then be accessed with
an index key lookup using the outer table's join key predicate as one of the
key parts. If the other table doesn't have an index, it probably would not
be a good candidate for the inner table since in that case the entire inner
table would have to be scanned per row of the outer table.
o Buffering: Buffering can have quite an effect on the selection of the inner
table. If, for instance, the entire inner table must be scanned for each
row of the outer table (i.e. can't do index lookup on the inner), Database
Manager will potentially choose the smaller of the two as the inner table to
take advantage of buffering via the Buffer Pool. This will be influenced by
table size and Buffer Pool size, and what may be required after a join (such
as ordering). (Note that since join decisions are influenced by Buffer Pool
size, you may need to re-bind your application to the database after
changing the Buffer Pool size.)
Following are overall considerations regardless of the type of join selected:
If the ordering of the outer table would also support the ordering
requirements after the join is performed, there is extra incentive to choose
that table as the outer table to avoid sorting the output of the join. An
example of joining on a column on which you are also ordering is:
SELECT * FROM t1, t2
WHERE t1.a = t2.b
ORDER BY t1.a
If there is an ascending index on t1.a, accessing t1 through the index could
be a good choice.
All these decisions also depend on the selectivity of the predicates applied
against the outer table. If they are very selective, accesses against the
inner table are minimized. If they are not selective, the inner table will
have to be accessed more times for each row of the outer table.
Piped versus Non-Piped Sorts
When fetched rows need to be sorted (because no index exists that satisfies
the requested ordering, or because sorting would be less expensive than an
index scan), Relational Data Services calls Sort Manager to sort the rows.
Once sorted, Sort Manager may have to insert the sorted row set into a
temporary table (which requires calls to DMS). After this, the rows are
returned to Relational Data Services one at a time as Relational Data Services
requests them. If Sort Manager has to fetch the rows from the temporary
table, the sort is "Not Piped." If no temporary table is created and Sort
Manager is returning the rows directly from its sort buffers, the sort is
"Piped."
Note that independent of whether a sort is piped, the time it takes will
depend on the number of rows to be sorted. If the number of rows to be sorted
is larger than the size of the sort heap, several sort passes are performed,
where each pass sorts a subset of the entire set of rows. Each sort pass is
temporarily stored to disk. Once all the sort passes are complete, these
sorted subsets must be merged into a single sorted set of rows. If the sort is
Piped, as the rows are merged they are handed directly to Relational Data
Services without building a temporary table. If the sort is Not Piped, the
sort passes are merged into a temporary table, and the rows are returned by
accessing them from the temporary table in a Relation Scan.
6.3.2.3 Static versus Dynamic SQL
There are two types of SQL; these types are distinguished by when they are
bound to the database (see the first two paragraphs of 6.3.2.2 Query Design
and Access Plan Concepts for a definition of binding):
o Static SQL:
For static SQL, the bind process occurs before the application which
contains SQL is executed. Hence, the content of the SQL statement must be
known before the application runs (and is hardcoded into the application).
The access plan is stored in the database it was precompiled against and is
later invoked when the application runs.
o Dynamic SQL:
Essentially, dynamic SQL is interpreted, meaning that it is bound during
program execution (and hence runs slower than the same statement in static
SQL). An access plan is created for each individual dynamic SQL statement
(via the SQL PREPARE statement). This access plan is not stored in the
database. Rather, it lasts only for the duration of the module containing
the statement. This type of SQL should be used only for SQL statements that
must be built "on the fly" based on input from a user or other programs.
Query Manager is an example of an application that uses dynamic SQL
extensively, since the statements entered by an end user are not known at
the time the Query Manager executables are built. This means that any
application you create in Query Manager will consist of all dynamic SQL,
and will thus run slower than if the application were coded in a programming
language with static SQL. Other application development utilities written
to OS/2 Database Manager (such as Easel) also use dynamic SQL
exclusively. Better performance may be obtained by calling external,
statically-bound programs for all SQL in the application.
Doing precompilation and binding at the time the application is developed or
installed provides better performance than when it is done at runtime.
Therefore, when SQL statements are known in advance, use static SQL.
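A minimal sketch of the difference follows (the table, column, and host
variable names are hypothetical):
Static SQL (the statement text is known when the program is precompiled):
SELECT SALARY INTO :salary
FROM EMPLOYEE
WHERE EMPNO = :empno
Dynamic SQL (the statement text in :stmt, for example an UPDATE built from
user input, is not known until the program runs):
PREPARE S1 FROM :stmt
EXECUTE S1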
Guidelines
Use STATIC SQL when SQL statements are known in advance.
6.3.2.4 Commits
The COMMIT function ensures that all changes made by a unit of work are
permanently reflected in the database. In OS/2 Database Manager, COMMIT ensures that all
log records for database changes are on disk as well as releases all locks
held by the committing process. (See 6.4.2 Buffer Pool Size (BUFFPAGE) and
6.5.5 Logging and Recovery Concepts for more information on COMMIT and
writing database changes to disk.)
The guideline is to COMMIT as frequently as makes sense for your application.
That is, do not COMMIT halfway through a logical unit of work. (In this
discussion, the term "unit of work" means the same thing as "transaction"
which is a series of operations that must be completed or rolled back as a
unit.) Benefits of frequent COMMITs are:
o The smaller the amount of uncommitted work in your application, the less
work there is to be redone in the event of a system failure.
o Releasing locks improves concurrency by reducing the number of locking
conflicts.
Of course there is always balance - too many COMMITs may degrade performance
from the overhead of executing the COMMIT function. For example, if a batch
job performs an extremely large number of short-duration operations (as in
building a table with single inserts), committing every hundred or so
operations instead of after every one will result in better performance.
1. Reduce the number of database operations between commits (that is, commit
more frequently) to increase concurrency.
2. Never commit in the middle of a logical unit of work.
3. Reduce the number of commits when a high number of short duration
operations will be run in a batch mode, to reduce the overhead of commits.
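As a minimal sketch of guideline 3 (the table name and values are
hypothetical, and only three inserts are shown; in practice a group might be a
hundred or so):
INSERT INTO PARTS VALUES (101, 'BOLT')
INSERT INTO PARTS VALUES (102, 'NUT')
INSERT INTO PARTS VALUES (103, 'WASHER')
COMMIT
Issuing one COMMIT per group of inserts, rather than one per insert, reduces
the commit overhead during the batch load.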
6.3.2 Database Application Design (continued)
ΓòÉΓòÉΓòÉ 9.3.3. 6.3.2 Database Application Design (continued) ΓòÉΓòÉΓòÉ
6.3.2.5 Data Isolation Levels
Data Isolation Levels apply to all systems, both Remote Data Services and
Standalone, that have more than one application simultaneously interacting with
the same data.
Introduction
Data Isolation Levels determine how long data will be locked for use by an
individual application. If locks are being held for a long time, the
performance of other applications or users trying to access the same data can
be affected because of reduced concurrency.
There are three different levels of isolation, the use of which will depend on
the needs of the applications and the characteristics of the data being
accessed.
o Repeatable Read
o Cursor Stability (default for 1.2 and beyond)
o Uncommitted Read
How it works
Data Isolation Levels are enforced with locks on data. What primarily
differentiates each level is how long the locks are held. Regardless of the
data isolation level, data that has been changed but not yet committed by one
transaction can never be changed by a different transaction - allowing this
would result in the first transaction's update being lost. Stated another
way, only committed data in the database can ever be changed by a transaction.
The following isolation level descriptions explain when locks are used and how
long they are held.
An isolation level pertains to an SQL application module and is specified at
bind time with the "/I" option. The valid values are:
o /I=RR - Repeatable Read
o /I=CS - Cursor Stability
o /I=UR - Uncommitted Read
You can determine the isolation level of an access plan (a bound module, see
6.3.2.2 Query Design and Access Plan Concepts) by executing the following
query:
SELECT ISOLATION FROM SYSIBM.SYSPLAN WHERE NAME = 'XXXXXXXX'
where 'XXXXXXXX' is the name of the access plan, in all capitals.
More in-depth information on data locking can be found in the following
section.
6.3.2.6 Locking
When an application accesses data, Database Manager locks it to ensure data
integrity across the multiple processes that could potentially be accessing
the data. Locking will be at the row level, table level, or "none." The
optimizer chooses the type of locking based on:
o Isolation Level (see 6.3.2.5 Data Isolation Levels)
o Access Path (see 6.3.2.2 Query Design and Access Plan Concepts)
o Predicates in an SQL statement (see 6.3.2.2 Query Design and Access Plan
Concepts)
Row Locking
If row locking is done, rows can be locked 'S' (share) for reading or 'X'
(exclusive) for insert, update or delete. Row level locking is done under the
following conditions:
o Isolation level is RR and an index scan is being performed.
o Isolation level is CS and any scan is being performed, assuming lock
escalation has not occurred. (See 6.4.5 Lock List Parameters (LOCKLIST and
MAXLOCKS) for a description of lock escalation.)
Table Locking
Before the individual table rows are accessed at the row level, Database
Manager will obtain an "intent" lock on the entire table. Following are the
intent table lock types, a description of when each is used by Database
Services, and what types of row locking each allows:
o Lock Intent Share (IS) - Rows are only going to be read and will be locked
'S'.
o Lock Intent Exclusive (IX) - Rows are going to be updated. When the row is
read, an 'S' lock is obtained; when updated, an 'X' lock is obtained. If an
index lookup is being performed and Database Services knows that every row
in a range of rows is going to be updated, then immediate 'X' locking is
performed. That is, the rows are updated without first being fetched and
read, hence the 'S' lock is not needed.
o Lock Share Intent Exclusive (SIX) - The table is locked 'S' so rows that are
read do not need to be locked. Rows that are updated are locked 'X'.
o Lock Intent None (IN) - No row locks are obtained. This is for the
Uncommitted Read isolation level.
If strict table locking is being done (i.e. there is no row locking), the lock
will be one of three types:
o Share (S) for reading. Multiple readers can access the table
simultaneously.
o Exclusive (X) for inserting, updating, deleting. Other applications cannot
access the table unless they are bound with the Uncommitted Read isolation
level.
o Super Exclusive (Z) for Drop/Alter of a table, or Create/Drop of an index.
Not even an Uncommitted Read application can access a table locked 'Z'.
The 'S', 'X', and 'Z' table locks are used only in the following instances:
o Under RR with a relation scan (see Relation Scan versus Index Scan for more
information)
o After lock escalation (see 6.4.5 Lock List Parameters (LOCKLIST and
MAXLOCKS) for more information)
o If a user or application issues the SQL LOCK TABLE statement (an example
follows this list)
o If a table or index is being created, dropped or altered.
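For example, an application can request a table lock explicitly with the SQL
LOCK TABLE statement (the table name is hypothetical):
LOCK TABLE EMPLOYEE IN SHARE MODE
LOCK TABLE EMPLOYEE IN EXCLUSIVE MODE
The first form obtains an 'S' lock, so other applications can still read the
table; the second obtains an 'X' lock.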
The last table lock situation to consider is the Uncommitted Read (UR)
isolation level which obtains no row locks of its own and allows an
application to read uncommitted changes of another transaction. Even though
UR does not obtain locks, it does obtain a table intent lock (Lock Intent
None), as described above.
Compatibility between all table lock types is described in Figure 6-2,
"Compatibility between table lock types." Note that once an intent lock is
obtained on a table, the corresponding row locking that goes with that table
lock is automatically allowed. For example, a table with an intent lock of
'IS' or 'S' will allow row locks of 'S', but not 'X'.
6.3.2.7 Remote Data Services
Introduction
Remote Data Services (RDS) enables a workstation to access databases on other
OS/2 EE workstations. The requesting workstation must have the appropriate
communications profiles defined (an initial set of profiles is created on
request during OS/2 EE installation through Basic Configuration Services), and
the location of the remote database must be catalogued in the requester's
database directory.
When a requester connects to a remote database (via the "Start Using Database"
API or the Query Manager Open Database function), a communications session is
established and an agent process is started at the server to run on behalf of
the remote requester. Note that this means that if a single requester has two
database connections active simultaneously (whether to the same database or
two different databases), there will be two communications sessions active and
two separate agent processes started.
Remote Data Services and SQLLOO or APPC
The communications protocols supported by Remote Data Services are either
APPC, the SQL LAN-Only Option (SQLLOO), or NetBIOS (for DOS Database
Requesters):
o APPC supports remote database access over LAN (via Token Ring, ETHERAND, or
PC Network), or via SDLC or X.25 protocols. Choosing and tuning the
appropriate network for speed of communications is purely a
communications-related question (see 4.3 Communications Performance
Considerations and 4.4 Communications Manager Parameters and Tuning
Specifics).
o The performance of Remote Data Services communications can be enhanced by
using the SQL LAN-Only Option (SQLLOO), which only supports the LAN DLCs
(Token Ring, PC Network, and ETHERAND). SQLLOO is a proprietary protocol
written for exclusive use by RDS (that is, it cannot be used by other
applications). It consists of the minimum subset of APPC needed to run RDS,
hence it is faster than straight APPC. The performance improvements of
using SQLLOO will be most notable for applications that spend less time
doing database work and more of their time in communications.
SQLLOO is automatically installed and configured in Communications Manager
when Remote Data Services is selected during OS/2 Extended Edition
installation. The only drawback to SQLLOO is that the Communications Manager
Subsystem Management functions (to view, activate, or delete sessions from
within Communications Manager) are not available for SQLLOO sessions.
o For DOS Database Requesters, the only supported protocol is NetBIOS. See
the chapter entitled "Using DOS Database Requester" in the IBM OS/2 EE
Database Manager Administrator's Guide for installation and configuration
information of this requester.
In a single remote database request (that is, anything requiring a roundtrip
to/from the server), unless the time spent in the database server is roughly
equal to the time spent in communications, tuning the Communications Manager
for RDS may not be the most efficient use of time and effort. Lab experiments
have shown that the time required for the communications round trip (via Token
Ring) is in the 30-60 millisecond range. If the database processing time is
much greater than this (as happens using Application Remote Interface where
multiple database actions are batched together, see 6.3.2.8 Database
Application Remote Interface (ARI)), greater performance gains can be achieved
by focusing on database tuning issues (e.g. buffer pool size, indices, query
design, etc.). Under SDLC or X.25, tuning may be a consideration since
communications is considerably slower and may take up more of the overall
processing time. (Note that information on these protocols is currently
outside the scope of this Guide.)
If you do want to focus on tuning communications, see the following information:
o For information on the LAN DLC parameters that affect SQLLOO:
- Maximum RU Size in 4.4.3.3 4.4.3 SNA Feature Profiles
- Send Window Count in 4.4.3.3 4.4.3 SNA Feature Profiles
- Receive Window Count in 4.4.3.3 4.4.3 SNA Feature Profiles
o For information on the APPC Transmission Service Mode Profiles and other
parameters affecting SQLLOO:
- Maximum RU Size in 4.4.3.6 (SNA Feature Profiles)
- APPC Memory Usage in 4.4.3.6 (SNA Feature Profiles)
Maximum Number of Requesters on a Server
There are many different factors at the Database Server that will determine
the number of Database Requesters that can access that Server. Among these
are:
o The amount of RAM, for holding control blocks and work areas on behalf of
each requester
o The CPU capacity, which will determine whether the CPU can keep up with the
number of requests being sent to it
o The amount of DASD, for holding the database and handling the OS/2 swap file
o The speed of the DASD, which like the CPU, will determine how fast the
server can handle the number of requests being received
o The speed of the communications connectivity.
Only the RAM restriction on the number of requesters is being covered here.
This RAM restriction defines the maximum number of requesters that could
possibly be supported. To estimate the number of requesters that can be
physically supported by a Database Server:
o Take the amount of memory installed on the server, and subtract the amount
of memory being used by the Base Operating System, Database Manager,
Communications Manager, LAN Server and/or Requester (see the IBM OS/2 1.2 or
1.3 Information and Planning Guide for this information).
o Next subtract the memory used by the working set of any other applications
running on the server workstation. (See 3.2.2 Memory Usage Concepts and
Terminology for a definition of working set.)
o Take the remaining amount of memory, and divide it by 150KB (the working set
RAM required for each agent process on the server). The nearest smaller
whole number value will provide an estimate of the number of requesters that
can be supported.
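As an illustration only (the component memory figures used here are
hypothetical; obtain the actual figures from the IBM OS/2 Information and
Planning Guide), the calculation for a 16MB server might look like this:
   Installed RAM on the server                          16.0MB
   Base OS/2, Database Manager, Communications
     Manager, and LAN Server (illustrative figure)      -6.5MB
   Working set of other server applications
     (illustrative figure)                              -1.0MB
                                                        ------
   Remaining for agent processes                         8.5MB
   8.5MB / 150KB per agent process = approximately 58 requesters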
Lab testing has shown that the practical limit on a 16MB Database Server is
between 50 and 70 requester workstations in a heavily concurrent environment.
The absolute limit is 128 requester workstations. However, such a scenario can
result in a very large swapper file.
Alternatives
There are times when the number of remote connections that RDS is physically
able to support is insufficient for the requirements of an installation (or
the resulting performance from the attempted number of requesters is
unacceptable). If this is the case, consider the following alternatives.
o Consider a localized solution.
This may seem obvious, but in some situations, a local solution will
suffice.
o Develop a scheduler on the database server. This solution will require OS/2
application programming skill, since functions such as processes, messages
and queues may be involved.
With this solution, rather than communicating to Database Services on the
server machine via RDS, tokens representing some action against the database
are sent to the server using a method such as named pipes, mailslots, or a
communications client/server application over a suitable protocol (e.g.
NetBIOS, APPC, etc.). These tokens may be no more than a number indicating
which subroutine, from a set of SQL subroutines, to execute against the
database. Note that no Database Manager code is required on the requester.
This solution involves writing the following special code:
- A process at the requester to receive tokens from a local application
(indicating an action to be done against the database), and to send these
tokens to the server.
- A scheduler process at the server to receive the tokens and ensure that
they are routed to one of several instances of a database "service
program" (see the following bullet), and to send the results back to the
requester. The scheduler may have to maintain a queue of incoming
requests if all the service processes are busy.
- A service program that can be run in a predetermined number of processes,
each of which is connected to the database and waits for the scheduler to
give it database work. This service program would understand the
incoming database tokens, see that the correct SQL statement or
subroutine was executed, gather the results and return them to the
scheduler.
Again, this solution only requires Database Services to be running at the
server machine. It allows many requesters to be handled through a
limited number of connections to the database, hence reducing RAM
requirements at the server. Further, each database process at the server
stays busier, making better use of other limited server resources. This
solution has been used quite effectively by several customers.
Guidelines:
1. Use Remote Data Services when the application and configuration allow it.
2. Consider the above alternatives when they do not.
6.3.2.8 Database Application Remote Interface (ARI)
Introduction
Database Application Remote Interface is available only on systems that use
Remote Data Services.
The Application Remote Interface (ARI) is one of the more important of all RDS
performance techniques. It is used to execute a block of code (a remote
procedure) stored on the database server. Examples of when this is useful
are:
o Applications that process large amounts of data but return only a subset of
the data to the user. Here, if ARI is not used, an application of this type
running remotely could require many records to be sent across the network,
when perhaps only a few of the records are actually needed by the user.
Using ARI, intermediate processing could take place at the Database Server
with only those records that are required being sent to the requester.
o When multiple SQL statements can be performed in batch without requiring
intermediate input from the requester workstation. For example, in a single
Automatic Teller bank transaction, several tables may be updated without
needing to return to the requester.
In both examples, reduced communications and better performance result:
o First, communication costs are reduced. SQLCAs and SQLDAs are moved to and
from the server only once.
o Second, a faster microprocessor on the server may be used. Instructions that
would normally be executed on the requester are moved to the server, which
is generally a faster machine.
o Finally, the intermediate control blocks required by Remote Data Services
for each SQL call do not need to be packed for every call. This leads to
further savings of requester CPU, which usually offsets the additional work
placed on the server.
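As a sketch of the second example, the body of a remote procedure for a
single teller transaction might execute a batch of statements such as the
following, with only one round trip between requester and server. (The
table, column, and value names are illustrative only; in a C remote
procedure these would appear as embedded SQL statements.)
   UPDATE ACCOUNTS
      SET BALANCE = BALANCE - 100
      WHERE ACCT_NO = '0012345'

   INSERT INTO ACCT_HISTORY (ACCT_NO, TRAN_TYPE, AMOUNT)
      VALUES ('0012345', 'WDL', 100)

   UPDATE ATM_CASH
      SET CASH_ON_HAND = CASH_ON_HAND - 100
      WHERE ATM_ID = 'ATM07'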
To use the ARI, precompile, compile, and link the remote procedure into a DLL,
or create a REXX command file (.CMD). (The remote procedure can be written in
C, COBOL or REXX.) Place the remote procedure on the server, and use the
Database Manager API to call it from the requester program. Any data passed
to and from the remote procedure is done via SQLDAs. Note that it is not
necessary to use these SQLDAs in the exact manner Database Manager does. They
can simply be viewed as an array of pointers to the parameters to be passed.
Also, there is no requirement that a Database Remote procedure include any SQL
statements or other Database API calls.
6.3.2.9 Record Blocking
Introduction
Record Blocking applies only to those systems which use Remote Data Services
and is a feature that provides remote transmission of data in blocks, rather
than one row at a time, from a server workstation to a requester workstation.
If an application does not use record blocking, a network transaction occurs
every time a row is FETCHed from the requester workstation. Record blocking
usually enhances performance by reducing the number of requests crossing the
network.
Record blocking is transparent to the end user and application, except that
performance may be better using this type of design. Gains will be most
pronounced when many database rows are expected to be returned.
How it works
When an application issues an SQL OPEN CURSOR request that can be blocked (as
defined by a bind time parameter, see How to use it below in the following
text), a record block is allocated on the application's workstation (and at
the server workstation), and enough rows to fill the record block are
pre-FETCHed at the server and transmitted to the local record block. Note
that at the server, the FETCHed rows no longer have a cursor pointing to them,
so they cannot be updated from the requester. Record blocking is only allowed
for read-only cursors.
When the requester application issues a FETCH, rows are transferred one at a
time from the local record block to the application. When the requester's
record block is depleted, the next FETCH from the application will cause RDS
to gather another block of rows at the server and transmit them to the
requester.
The size of the record block is defined by:
o The RQRIOBLK parameter for OS/2 Database Requesters (see 6.4.6 Requester and
Server I/O Block Sizes (RQRIOBLK and SVRIOBLK)), and
o The SQLSIZE parameter for DOS Database Requesters (see Table 6-15. "Summary
of information for DOS Communication Block Size").
Record blocks on OS/2 workstations are allocated out of an area called the
communications heap. When a cursor is OPENed, if there is no more room in the
communications heap for another record block, blocking will be turned off for
that instance of the cursor. Note that later, the same program could be run
and if there is room in the communications heap, blocking will occur (see
6.4.6 Requester and Server I/O Block Sizes (RQRIOBLK and SVRIOBLK) for more
information). Record blocking will be turned off for the entire time a
process is connected to a remote database if there are no communications heaps
available (see Table 6-16. "Summary of information for Communications Heap
Size" for more information).
How to use it
Record blocking is a bind time parameter, which means that it can only be
specified when building an application. It is specified on a per module
basis. As mentioned above, record blocking applies only to fetch-only
(read-only) cursors. However, some cursors are "ambiguous" - that is,
Database Manager doesn't know whether they will be used for update at bind
time. Refer to the DECLARE CURSOR statement in the IBM OS/2 EE Database
Manager SQL Reference for a description of which cursors are fetch-only and
which are ambiguous. Following are the blocking options, which allow the user
to specify what to do with ambiguous cursors:
Blocking Option   Description
/K=NO             No blocking is done on any of the cursors in the module.
/K=UNAMBIG        Blocking will be done for fetch-only cursors.  Ambiguous
                  cursors will not be blocked.  This is the default.
/K=ALL            Blocking will be done for fetch-only cursors.  Ambiguous
                  cursors will be treated as fetch-only, and will therefore
                  be blocked.
To ensure that record blocking is used only in fetch-only situations, specify
the "/K=UNAMBIG" option when binding. If you are using dynamic SQL, the
"/K=UNAMBIG" option should be used.
6.3.3 Database Design Issues
6.3.3.1 Table design
Logical Design
Table design must be considered at database design time. Note however, that
application design also has an impact on how tables in the database are
designed. The objective in defining the number and structure of tables in a
database is to strike a balance between reducing redundant data
(normalization), and limiting the number of joins that are performed.
Normalization is the process of separating data into multiple tables in order
to eliminate redundancy. For example, if sales for a store were being stored
in the database, all the information could be kept in one table, including each
customer's name and address, and information about the sales person. However,
it would be simpler and less redundant to split this data into several tables.
One table could describe each customer, another could describe each sales
person, and a third table could record each individual sale and reference the
customer and salesperson by some ID number.
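A minimal sketch of such a split (the table and column definitions are
illustrative only) might be:
   CREATE TABLE CUSTOMER
      (CUST_ID   INTEGER      NOT NULL,
       NAME      VARCHAR(30),
       ADDRESS   VARCHAR(60))

   CREATE TABLE SALESPERSON
      (EMP_ID    INTEGER      NOT NULL,
       NAME      VARCHAR(30))

   CREATE TABLE SALES
      (SALE_ID   INTEGER      NOT NULL,
       CUST_ID   INTEGER      NOT NULL,
       EMP_ID    INTEGER      NOT NULL,
       AMOUNT    DECIMAL(9,2))
Recording a sale then stores only the two ID numbers rather than repeating
the customer and salesperson details; a join reassembles the full picture
when it is needed.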
The balancing factor in normalization is to determine how many joins will be
needed and how expensive the joins will be. One consideration in join expense
is the number of tables that are involved. More information on join
considerations is found in the sections Join Concepts and Outer versus Inner
Join Determination.
A good description of normalization can be found in An Introduction to Database
Systems (3d edition) by C.J. Date, published by Addison-Wesley, ISBN:
0-201-14471-9.
Physical Considerations
Table access is improved slightly if the database is stored on an HPFS file
system. A reason for this is that access time is improved when data is stored
in contiguous sectors on disk, and HPFS attempts to keep files as contiguous as
possible. This and other reasons are described in 3.3.4 File System
Performance Considerations.
Reorganizing tables is also important since it eliminates fragmentation within
each 4KB table page, which may reduce the total number of pages in the file and
hence result in fewer pages to access. Reorganization can also be used to
reorder rows in a table according to a frequently used index, thus improving
table access efficiency. (See 6.5.3 Reorganization (REORG) for more
information.)
See 6.5.1 Physical Table Structure for more information on the physical
structure of Database Manager tables.
Guidelines:
1. Normalize tables to reduce redundancy.
2. Merge tables that will be frequently joined.
3. Place the database on HPFS when possible.
6.3.3.2 Indices
The impact of indices cannot be overemphasized. In read-only situations, it
is usually worthwhile to create indices on fields which will be used for joins
or will otherwise appear in 'WHERE' or 'ORDER BY' clauses of an SQL query.
Indices should always exist on primary keys of larger tables. Creating
appropriate indices can reduce response times considerably in some
situations. For an explanation of when Database Services can and cannot use
indices in performing an SQL statement, see 6.3.2.2 Query Design and Access
Plan Concepts.
As with most things, the number of indices defined must be balanced against
the overhead they incur. First, indices use additional DASD. Secondly,
updating, deleting from, and adding to a table with indices takes longer than
modifying a table without indices, since each index must be kept in synch with
its corresponding table. However, since an index may be used to help locate a
row to be updated or deleted, the overhead of updating the index may not be a
concern. If INSERT is the primary operation against a table (that is, the
table is rarely read from or updated), an index may indeed degrade the overall
performance of accessing that table.
For information on the physical structure of an index file, see Database
Manager Index Structure.
When importing a table, it is much faster to import the table and then create
its index(es), than to create the index(es) and then import the table. This
is because the import operation is simply a series of inserts, one per row.
If the index is present, the index must be updated simultaneously with each
table insert. Additionally, each index insert is also logged. Hence, it is
more efficient to first import the table, and then create the index against
the table. However, if the data being imported is appended to an existing
table that already has indices, be aware that dropping those indices before
the import and re-creating them afterwards will cause any access plans that
use the index(es) to be invalidated; they are re-bound automatically the
first time they are subsequently executed.
(See "Extensive Imports and Updates" for more details on the IMPORT function.)
The previous import discussion also applies to massive inserts against a
table. Additionally, during a massive series of inserts, frequent commits
should be performed to free locks and log space.
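For example (illustrative table and index names), the index is created in a
single operation after the bulk of the data has been loaded, and any
remaining massive inserts are broken up with frequent COMMITs:
   CREATE INDEX IXSALES1 ON SALES (CUST_ID)

   INSERT INTO SALES (SALE_ID, CUST_ID, EMP_ID, AMOUNT)
      VALUES (100001, 42, 7, 150.00)
   ...
   COMMIT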
Definition, creation and modification of indices can be done during many
different application development phases. Refer to Information Ordered by
Stages of Your Project for these phases.
Guidelines:
1. Create appropriate indices on columns referenced in 'WHERE' and 'ORDER BY'
clauses to speed up reads.
2. Remove seldom-used indices to improve updates, deletes, and inserts.
3. Import tables before creating indices.
6.3.3.3 Referential Integrity
Referential Integrity is the ability to define relationship constraints
between and within tables and have Database Manager maintain those
relationships. For example, suppose there is a table called 'DEPARTMENTS'
that lists all valid department numbers and names. One constraint might be
that if any other table uses a department number, the number must already
exist in the DEPARTMENTS table.
Referential integrity and referential constraints are included in the
definition of a table. Once defined, Database Manager automatically checks
that the referential constraints are not violated any time an SQL statement
affecting them is executed. The user should be aware that checking these
constraints incurs some overhead. That is, Database Manager
will be performing the equivalent of one or more SQL statements under the
covers to ensure the constraints are not violated before actually performing
the user-specified SQL statement. The degree of overhead will depend on how
complex the constraints are: how many tables are involved, how large the
tables are, etc.
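A minimal sketch of such a definition (illustrative names and column types)
is shown below; the WORKDEPT value of every EMPLOYEE row must match an
existing DEPTNO in DEPARTMENTS:
   CREATE TABLE DEPARTMENTS
      (DEPTNO    CHAR(3)      NOT NULL,
       DEPTNAME  VARCHAR(30),
       PRIMARY KEY (DEPTNO))

   CREATE TABLE EMPLOYEE
      (EMPNO     CHAR(6)      NOT NULL,
       NAME      VARCHAR(30),
       WORKDEPT  CHAR(3),
       PRIMARY KEY (EMPNO),
       FOREIGN KEY (WORKDEPT) REFERENCES DEPARTMENTS)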
When a primary key is defined, Database Manager automatically creates a unique
index on the primary key, if one has not already been defined by the user.
Although this index is usually the most logical index to have defined on a
table, the user should be aware that it is there, along with the overhead of
updating it.
6.4 Database Manager and Database-specific Parameters
6.4.1 Introduction
6.4.2 Buffer Pool Size (BUFFPAGE)
6.4.3 Log-related Parameters
6.4.4 Sort List Heap (SORTHEAP)
6.4.5 Lock List Parameters (LOCKLIST and MAXLOCKS)
6.4.6 Requester and Server I/O Block Sizes (RQRIOBLK and SVRIOBLK)
6.4.7 DOS Database Requester Communication Block Size (SQLSIZE=bs/ws)
6.4.8 Communication Heap Size (COMHEAPSZ)
6.4.9 Number of Remote Connections (NUMRC)
6.4.10 Maximum Number of Shared Segments (SQLENSEG)
6.4.11 Maximum Number of Concurrently Active Databases (NUMDB)
6.4.12 Maximum Number of Active Applications (MAXAPPLS)
6.4.13 Maximum Database Files Open per Application (MAXFILOP)
6.4.14 Maximum Total Files Open per Application (MAXTOTFILOP)
6.4.15 Database Heap (DBHEAP)
6.4.16 Application Heap Size (APPLHEAPSZ)
6.4.17 Statement Heap (STMTHEAP)
6.4.18 Time Interval for Checking Deadlock (DLCHKTIME)
6.4.1 Introduction
6.4.1.1 Configuration Files
The Database parameters fall into the following categories. The parameter
descriptions in this section are listed in order of potential performance
impact, with the more important ones listed first.
o Database Manager parameters
These parameters are stored in the Database Manager configuration file,
which is created when Database Services is installed. In general, they
affect the amount of system resources that will be allocated to a single
Database Manager installation. That is, these parameters generally have
global applicability, independent of any one database stored on that system.
They can be viewed or changed in two ways:
- Using the Query Manager "Reconfigure Database Manager" option. This is
found by selecting "System" from the action bar in the main database
menu.
- Via the APIs to Reset, Update and Get a Copy of the Database Manager
Configuration File. After changing these parameters, Database Manager
must be stopped (STOPDBM) and then restarted (STARTDBM).
o Database Parameters
These parameters are stored in a Database Configuration file, which is
created when a database is created. (There is one configuration file per
database.) In general, these parameters specify the amount of resources to
be allocated to that database. They can be viewed and changed in two ways:
- Using the Query Manager "Reconfigure Local Database" option. This is
reached by placing the cursor on the desired database name and selecting
"Tools" from the action bar in the main database menu.
- Via the APIs to Reset, Update and Return a Copy of the Database
Configuration File. After changing these parameters, all applications
connected to the database must disconnect from (Stop Using) the database.
The new values take effect at the following Start Using.
o DOS Database Requester Parameters
There are only three DOS Database Requester parameters: SQLLIB, SQLNAME, and
SQLSIZE. These parameters are stored in a file called DBDRQLIB.CFG in a
subdirectory called \DBDRQLIB on the DOS requester. The only parameter
having any relationship to performance is SQLSIZE.
6.4.1.2 How Database Manager Uses RAM
Many of the parameters discussed in this chapter use memory on a system. Some
may use memory on the server, some on the requester, some on both.
Furthermore, memory is allocated and deallocated at different times and from
different areas of the system. This information is included with the
description of the individual parameters. For information on the amount of
RAM used by the Database Manager code, see the IBM OS/2 Information and
Planning Guide. For insight on balancing RAM usage on the overall system, see
3.2 Memory Usage and Tuning.
The following figures show how the non-RDS parameters use memory. See Figure
6-5. "RDS Parameters at a Server Workstation" and Figure 6-6. "RDS Parameters
at a Requester Workstation" for the RDS-related parameters.
Figure 6-3. "How memory is used by DB Services and Processes that use a
database" shows that Database Manager uses both Shared and Private RAM. This
RAM is allocated at the following times:
o When Database Services itself is started (STARTDBM), the area marked "Global
Control Blocks" is allocated and contains information that is needed by
Database Services to manage activity across all database connections. This
area remains allocated until STOPDBM is issued.
o When a process connects to a database (a process can connect to no more than
one database at a time), and it is the first user of this database, both
Shared and Private RAM areas are allocated. The Shared RAM area contains
things that are used across all processes that might connect to the
database, such as the Buffer Pool, Lock List and Database Heap. A Private
RAM area is also allocated for the process, containing things that will be
used only by this specific process, such as the Sort Heap and Application
Heap.
o Once a database is already in use by one process, any subsequent processes
that connect will have only Private RAM allocated on their behalf.
Figure 6-4. "Memory usage by non-RDS database parameters" is similar to Figure
6-3. "How memory is used by DB Services and processes that use a database."
except that the processes are left out and the parameters are shown in the RAM
areas that they occupy. This figure also shows the relationship between some
of the parameters. For example:
o As NUMDB (maximum number of concurrent active databases) is increased, the
amount of Shared RAM that can potentially be allocated grows.
o Since SQLENSEG is a parameter that shows the maximum amount of Shared RAM
that can be allocated, it must be large enough to cover everything held in
Shared RAM.
o MAXAPPLS defines the maximum number of processes that can simultaneously
connect to a single database, so it will affect the amount of Shared RAM
that can potentially be allocated.
It will be helpful to refer to this figure to see how each parameter fits in
the overall scheme of database resources.
6.4.1.3 Parameter Summary
The following pages contain three tables that summarize information on the
parameters covered in this section. Page references to the actual parameter
descriptions are also listed.
o Table 6-1. "Parameters that most affect performance" lists parameters that
will have the greatest impact on database performance. Among these, the
Buffer Pool Size is by far the most important.
o Table 6-2. "Parameters that may affect RDS performance" lists parameters
that should be considered in an RDS environment.
o Table 6-3. "Remaining Parameters" lists all other parameters covered in
this section. Though they really have no performance impact, some may
indirectly affect performance while others may have names that sound like
performance parameters.
For some of the parameters, there is a "recommended beginning value" that
differs from the "default" value shown. The default values were chosen to
minimize RAM usage. The recommended beginning values provide a starting point
for determining the most performance efficient values for your application.
Further iteration may be necessary to arrive at values tuned to your
particular environment. This may entail benchmarking of some sort. However,
the recommended beginning values are often sufficient to provide acceptable
performance.
6.4.2 Buffer Pool Size (BUFFPAGE)
Table 6-4. Summary of information for Buffer Pool Size
  CONFIGURATION FILE             Database
  DEFAULT                        16
  RECOMMENDED BEGINNING VALUE    250
  RANGE                          (2*MAXAPPLS) to 1500 Shared 4KB RAM Pages
  WHEN ALLOCATED                 First 'Start Using'
  WHEN FREED                     When last application disconnects
  APPLIES TO                     RDS Servers, Standalone
Description
This configuration parameter is the most important parameter affecting database
performance. There is one buffer pool per database and it resides in Shared
RAM on the workstation where the database is located. It is here that all data
manipulation for all processes connected to the database takes place. If it is
large enough to keep the required data in memory, less disk activity will
occur. There is a tradeoff between the buffer pool and other caches in the
system. See Database Manager, LAN Server and HPFS Cache Size in 3.3.4 File
System Performance Considerations for the recommended HPFS cache size when
running Database Manager.
The entire buffer pool is allocated at the first 'Start Using Database'
command. As an application requests data out of the database, 4KB pages
containing that data are transferred to the buffer pool from disk. (Note that
database data is stored in 4KB pages in the table files on disk.) Pages are
not written back to disk until one of the following occurs:
o All processes disconnect from the database
o The database quiesces (i.e. all connected processes have committed)
o Room is needed for another incoming 4KB page.
The size of the buffer pool is used by the Optimizer in determining access
plans. Therefore, database applications should be rebound after changing the
value of this parameter.
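Each buffer pool page is 4KB, so the Shared RAM cost of this parameter is
easy to estimate:
   BUFFPAGE = 16 (default)               16 x 4KB =   64KB of Shared RAM
   BUFFPAGE = 250 (recommended start)   250 x 4KB = 1000KB (about 1MB)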
The effect of changing this parameter is described in an article entitled
"Performance of OS/2 EE 1.1 Database Services" in the Summer 1989 issue of IBM
Personal Systems Developer, publication G362-0001-02. This article has been
reprinted in the Microsoft Press book, OS/2 Notebook: The Best of the IBM
Personal Systems Developer, edited by Dick Conklin, ISBN: 1-55615-316-3.
6.4.3 Log-related Parameters
The second most important set of parameters to tune are those related to
logging. This section presents a description of each log-related parameter,
followed by a section on log tuning guidelines (see 6.4.3.6 Tuning Log-related
Parameters for Performance below). This section assumes an understanding of
the concepts behind Database Manager's logging. For a description of these
concepts, see 6.5.5 Logging and Recovery Concepts.
Database Manager's log is a circular log and there are two types of files used
in support of this: primary log files and secondary log files. The number and
size of these files are determined by the following configuration parameters.
6.4.3.1 Size of Log Files (LOGFILSZ)
Table 6-5. Summary of information for Size of Log Files
  CONFIGURATION FILE             Database
  DEFAULT                        50
  RECOMMENDED BEGINNING VALUE    1000
  RANGE                          15 to 4095 4KB Pages
  APPLIES TO                     RDS Servers, Standalone
Description
This parameter defines the size of each primary and secondary log file.
6.4.3.2 Number of Primary Log Files (LOGPRIMARY)
Table 6-6. Summary of information for # of Primary Log Files
  CONFIGURATION FILE             Database
  DEFAULT                        2
  RECOMMENDED BEGINNING VALUE    2
  WHEN ALLOCATED                 Initially, at database creation.  If the
                                 parameter is increased, new log files are
                                 allocated at the first 'Start Using' after
                                 the value change.
  WHEN FREED                     Not freed unless this parameter decreases.
                                 If decreased, unneeded log files are
                                 deleted at the first 'Start Using' after
                                 the value change.
  RANGE                          0 to 63 Log Files
  APPLIES TO                     RDS Servers, Standalone
Description
These are allocated to their full length (LOGFILSZ) when the database is
created. The number of these files is determined by the LOGPRIMARY parameter.
Database Manager will do circular logging within these files until the "tail
runs into the head", and then secondary log files will be allocated.
The LOGPRIMARY parameter may be set to '0', meaning no primary log files are
created when the database is created. This would only be useful for a database
that is infrequently accessed, in order to keep from wasting disk space when the
database is not being used. See 6.4.3.6 Tuning Log-related Parameters for
Performance below for guidance on setting LOGPRIMARY.
6.4.3.3 Number of Secondary Log Files (LOGSECOND)
Table 6-7. Summary of information for # of Secondary Log Files
  CONFIGURATION FILE             Database
  DEFAULT                        3
  RECOMMENDED BEGINNING VALUE    2
  WHEN ALLOCATED                 As needed during program execution
  WHEN FREED                     When all log records are committed
  RANGE                          0 to 63 Log Files
  APPLIES TO                     RDS Servers, Standalone
Description
These files are initially created with a length of 0. The number of these files
is determined by the LOGSECOND parameter. They are not expanded until they are
actually needed, at which time they will be set to LOGFILSZ. These secondary
files are truncated to 0 as soon as they are no longer needed. Since the total
log file size fluctuates during an application (that is, COMMITs often cause
the log to be truncated), it is possible for secondary files to be initialized
and collapsed many times during a database connection. Thus, they should be
used only as a safety net.
6.4.3.4 Log Records to Write Before Soft Checkpoint (SOFTMAX)
Table 6-8. Summary of information for Soft Checkpoint Reset
  CONFIGURATION FILE             Database
  DEFAULT                        100
  RECOMMENDED BEGINNING VALUE    100
  RANGE                          0 to 65535 Log Records
  APPLIES TO                     RDS Servers, Standalone
Description
The SOFTMAX parameter determines how often Database Manager will take a soft
checkpoint (i.e., reset the Redo/Restart point), by defining the number of log
records to be written before recalculating the location of the soft checkpoint.
6.4.3.5 Location of Log File / New Location of Log File
Table 6-9. Summary of information for Log File Location
  CONFIGURATION FILE             Database
  DEFAULT                        OS/2 directory containing the database
  RECOMMENDATION                 A physical drive other than the database
                                 drive
  RANGE                          Any valid OS/2 drive and directory that
                                 does not contain database logs for another
                                 database
  APPLIES TO                     RDS Servers, Standalone
Description
LOGPATH: This specifies the current location of the log file. It cannot be
changed by a user, but rather is set by Database Manager after a user changes
the NEWLOGPATH parameter. However, the value of this parameter can be viewed.
NEWLOGPATH: This is used to specify a new location for the log file. Database
Manager will move the log to this location the next time the database is
started up from the 'quiesced' state.
6.4.3.6 Tuning Log-related Parameters for Performance
The suggestions in this section are based on internal IBM lab experience.
Number and Size of Log Files
The default log parameter values were set with the goal of conserving DASD at
installation. Some performance improvement can be gained by changing the
values of these parameters. Recall that the default values are:
o LOGFILSZ - 50 4KB pages
o LOGPRIMARY - 2 files
o LOGSECOND - 3 files
Using these default values, a database will be created with 0 bytes of log
file on the disk. However, when the first process connects to the database
the first action will be to create a secondary log file of size LOGFILSZ.
This is a disk-intensive operation (i.e. the disk light will be ON!) and any
processes accessing that database (as well as users doing work through these
processes) will wait until the new file is created.
Given the default values, experience has shown that some number of the
secondary logs are initialized during a "real" run. This initialization will
tend to occur at undesirable times. The following values are recommended and
may reduce runtime overhead:
1. LOGFILSZ=1000 pages, 4KB each. This will result in fewer initializations.
2. LOGPRIMARY=2. Primary logs are created when a database is created, or at
the first START USING DATABASE after changing the parameter value, rather
than during the run. These sizes may be a bit larger than needed, but this
lessens the chance of needing secondary logs during the run.
3. LOGSECOND=2 (or more). These log files should be used as a backup should
the primary logs overflow. Setting the value high will have no DASD
impact until they are used.
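With the recommended values, the disk cost of the log is easy to estimate:
   LOGFILSZ   = 1000 pages x 4KB      = 4MB per log file
   LOGPRIMARY = 2 primary log files   = 8MB allocated when the database is
                                        created (or at the first 'Start
                                        Using' after the value change)
   LOGSECOND  = 2 secondary log files = 0 bytes until actually needed
                                        (up to 8MB more if both are used)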
The goal is to keep the day-to-day logging in the primary log files. To
determine the number and size of log files needed, monitor the date and
timestamps of the OS/2 files themselves. This monitoring can be done as early
as the test phase of your database (after development, with a full-sized
database), enabling you to set the initial log sizes. Further adjustments to
these sizes can be made by monitoring the logs during production. Following
are the steps to monitor the log file sizes:
o After all processes have stopped using the database, change to the directory
containing the log for this database. To determine the name of this
directory from Query Manager, look at the "Current Location of Log" under
"Change Database Log Configuration" of Database Configuration file.
o Issue the following command from this subdirectory:
dir *.log
o The first .log file listed is the log control file and will always be
present. Files with non-zero sizes are primary log files, and 0-length
files are secondary log files. Looking at the timestamps on the files will
tell you how many of the files have been used.
If the database had 2 primary and 3 secondary logs defined, the logfile date
and timestamps might look something like this:
SQL00000 LOG 12288 1-28-91 5:26p
SQL00001 LOG 204800 1-28-91 5:09p
SQL00002 LOG 204800 5-17-90 2:48p
SQL00003 LOG 0 5-17-90 2:48p
SQL00004 LOG 0 5-17-90 2:48p
SQL00005 LOG 0 5-17-90 2:48p
Here, only the first primary file has been used since the database was
created on 5-17-90. This would suggest that the number of primaries should
be reduced, and possibly be made smaller. If the dates on several of the
secondary files show they are always being used, it may be wise to increase
the number or size of the primary log files.
Varying Log File Settings
Once a database is in production, the type of activity against that database
may vary, depending on business and/or maintenance cycles. Since these varying
activities may have different logging requirements, it may be useful to adjust
the log parameters accordingly. For example, normal daily activity against a
database may have one set of log requirements, while periodic database reloads
(through extensive imports and/or updates) may have another. (Note that
Database Backup does not back up the log files.) Suggestions for these
situations would be:
o Daily Use
If there is only one type of "daily use" load against the database, the log
parameters would normally be set for this use. By examining log file sizes
over time, efficient values could be determined.
o Extensive Imports and Updates
Major portions of a database may need periodic reloading from some other
more central source. Since the IMPORT operation is simply a series of
INSERTs (each of which are logged), and all updates are logged, larger log
files may be needed at these times and the log parameters should be
adjusted. (For IMPORT Create and IMPORT Replace, even though INSERTs are
logged, the IMPORT function does a COMMIT under the covers to free up the
log space when the log becomes full. For IMPORT Insert, where rows are being
appended to an existing table, COMMIT is not performed until the IMPORT is
complete.)
Soft Checkpoint Recommendation
Taking a soft checkpoint may shorten the time to restore a database, but it
may also increase normal execution time by a small amount. However, initial
testing has shown that this parameter does not greatly affect performance:
neither taking a soft checkpoint nor doing a restart takes an inordinate
amount of time. Therefore, we suggest that this parameter be left
at the default value.
Splitting the Log Files from the Database
Better performance can be achieved by placing the database and the database
logs on different physical hard disks. This is true because some degree of
overlap can be realized by accessing both drives at the same time. SCSI
drives, being faster devices, may also make these improvements even more
pronounced.
Since the log files are subject to re-initialization and growth, they should
be isolated from OS/2 code and should optimally be placed in a small partition
of their own. Since these files change so much (they grow, are deleted, are
recreated, grow ... ) they can use up the disk in a manner that would cause
other files to become more fragmented.
The best performance is achieved by placing the log in a small FAT partition
that contains no other files. In any other case, HPFS provides the best log
performance. For other disk partitioning suggestions, see Hardfile
Partitioning Suggestions.
Guideline:
Place the database and the database log on different physical hard disk drives
to maximize overlapped I/O.
6.4.4 Sort List Heap (SORTHEAP)
Table 6-10. Summary of information for Sort Heap size
  CONFIGURATION FILE             Database
  DEFAULT                        2
  RECOMMENDED BEGINNING VALUE    2
  RANGE                          1 to 20 64KB Private Segments
  WHEN ALLOCATED                 As needed to perform sorts
  WHEN FREED                     When sorting is complete
  APPLIES TO                     RDS Servers, Standalone
Description
This parameter defines the number of private RAM segments to be used for the
sort heap which is an area where data is sorted before returning it to the
application or user. There is a separate sort heap for each process that
connects to a database and it is allocated as needed to perform the sorts.
If sorting is not needed, this parameter is not a concern. If sorting is
required, the use of the sort heap can be minimized through the existence of
appropriate indices (i.e. indices that are ordered in the same manner requested
by a SELECT). However, in the absence of helpful indices, the sort heap must be
used. Note that sorting may be required in less than obvious situations. Some
simple examples of when sorting might be required are when the ORDER BY or
DISTINCT clauses are specified on a SELECT statement. Other operations
requiring sorting are described in the sections Index versus Relation Scan
Costs, Merge Join, and Piped versus Non-Piped Sorts.
Again, refer to the article "Performance of OS/2 EE 1.1 Database Services" in
the Summer 1989 issue of IBM Personal Systems Developer, publication
G362-0001-02, where it is shown how both the SORTHEAP size and the existence
of relevant indices help performance.
6.4.5 Lock List Parameters (LOCKLIST and MAXLOCKS)
6.4.5.1 Maximum Storage for Lock Lists (LOCKLIST)
Table 6-11. Summary of information for LOCKLIST
  CONFIGURATION FILE             Database
  DEFAULT                        8
  RECOMMENDED BEGINNING VALUE    25
  RANGE                          4 to 250 4KB RAM pages
  WHEN ALLOCATED                 At the first 'Start Using Database'
  WHEN FREED                     When last application stops using the
                                 database
  APPLIES TO                     RDS Servers, Standalone
Description
There is one lock list per database and it contains the locks held by all
processes concurrently connected to the database. Locking is the mechanism
Database Manager uses to control concurrent access to data in the database by
multiple processes. Both rows and tables can be locked. See Figure 6-2.
"Compatibility between table lock types" for a complete description of lock
types and when they are used.
Each lock requires about 25 bytes. When the lock list reaches the percentage
set by the MAXLOCKS parameter, Database Manager will perform lock escalation
(described below). Although the escalation process itself does not take much
time, locking entire tables (versus individual rows) decreases concurrency, and
overall database performance may decrease for subsequent accesses against the
affected tables. (See MAXLOCKS below for an idea of how many tables are
affected.) Suggestions for controlling the size of the lock list are:
o Perform frequent COMMITs to release locks.
o When performing many updates, lock the entire table before updating (via the
SQL LOCK TABLE statement). This will use only one lock and keep other
updaters from interfering.
o Use the Cursor Stability isolation level (see 6.3.2.5 Data Isolation Levels)
when possible to decrease the number of share locks obtained.
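For example (illustrative table name), a batch update program could hold a
single table lock for the duration of its changes and then release it with a
COMMIT:
   LOCK TABLE SALES IN EXCLUSIVE MODE

   UPDATE SALES
      SET AMOUNT = AMOUNT * 1.05
      WHERE EMP_ID = 7

   COMMIT
The single table lock takes the place of what might otherwise be thousands
of row locks in the lock list.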
You will know that this parameter needs to be increased when an application
receives the message SQL0912N: "The maximum number of lock requests has been
reached for the database." Once the locklist is full, performance can degrade
since lock escalation will generate more table locks, thus reducing
concurrency. Additionally there may be more deadlocks between applications
(since they are all waiting on a limited number of table locks), which will
result in transactions being rolled back.
6.4.5.2 Maximum Percent of Lock List Before Escalation (MAXLOCKS)
Table 6-12. Summary of information for Max. Percentage of Locks
  CONFIGURATION FILE             Database
  DEFAULT                        200/(MAXAPPLS+1)
  RECOMMENDED BEGINNING VALUE    200/(MAXAPPLS+1)
  RANGE                          1 to 100% of the Lock List
  APPLIES TO                     RDS Servers, Standalone
Description
Lock escalation is the process of replacing row locks with table locks,
reducing the number of locks in the list. This parameter defines a percentage
of the lock list that must be filled before Database Manager performs lock
escalation. When the number of locks held by any one application
reaches this percentage of the total lock list size (or a certain minimum
amount of space is left in the lock list) then lock escalation will occur.
Database Services decides which locks to escalate by looking through the lock
list and finding the table with the most row locks. If, after replacing these
with a single table lock, the size of the lock list is back in range, lock
escalation will stop. If not, it will continue until the lock list is
sufficiently reduced in size.
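The escalation procedure just described amounts to a simple loop: find the
table with the most row locks, replace them with a single table lock, and stop
as soon as the list is back in range. The toy C fragment below only
illustrates that loop; the lock counts and the in-range target are made-up
values, and the code is not a description of Database Services internals.
   #include <stdio.h>

   int main(void)
   {
      /* hypothetical row-lock counts held by one application, per table */
      long rowlocks[] = { 900L, 450L, 120L, 30L };
      int  ntables = 4, i, worst;
      long total = 0L, target = 500L;     /* assumed "back in range" level */

      for (i = 0; i < ntables; i++)
         total += rowlocks[i];

      while (total > target) {
         worst = 0;                             /* table with most row locks */
         for (i = 1; i < ntables; i++)
            if (rowlocks[i] > rowlocks[worst])
               worst = i;
         if (rowlocks[worst] == 0L)
            break;                              /* nothing left to escalate  */
         total -= rowlocks[worst] - 1L;         /* n row locks -> 1 table lock */
         rowlocks[worst] = 0L;
         printf("Escalated table %d; lock count is now %ld\n", worst, total);
      }
      return 0;
   }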
ΓòÉΓòÉΓòÉ 9.4.6. 6.4.6 Requester and Server I/O Block Sizes (RQRIOBLK and SVRIOBLK) ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-13. Summary of information for Requester I/O Block SizeΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéCONFIGURATION FILE ΓöéDatabase Manager Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDEFAULT - STANDALONE Γöé0 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDEFAULT - SERVER Γöé4KB Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDEFAULT - REQUESTER Γöé4KB Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéRECOMMENDED BEGINNINGΓöé4KB Γöé
ΓöéVALUE Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéRANGE Γöé4KB - 64KB Bytes (Record blocks can be a Γöé
Γöé Γöémaximum of 32KB) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéWHEN ALLOCATED ΓöéFor normal buffering between DB Services Γöé
Γöé Γöéand Communications Manager: Γöé
Γöé ΓöéAt 'Start Using Database' Γöé
Γöé ΓöéFor record blocking: Γöé
Γöé ΓöéWhen a blocking cursor is opened Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéWHEN FREED ΓöéCommunications buffer: Γöé
Γöé ΓöéAt 'Stop Using Database' Γöé
Γöé ΓöéRecord blocks: Γöé
Γöé ΓöéWhen blocking cursor is closed Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéAPPLIES TO ΓöéRDS Requesters (which may also be a Γöé
Γöé                     ΓöéServer)                                   Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-14. Summary of information for Server I/O Block Size Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé CONFIGURATION FILE Γöé Database Manager Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT - STANDALONE Γöé 0 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT - SERVER Γöé 4KB Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT - REQUESTER Γöé 0 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RECOMMENDED BEGINNING Γöé 4KB Γöé
Γöé VALUE Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RANGE Γöé 4KB to 64KB Bytes Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé WHEN ALLOCATED Γöé When a Requester issues a 'Start UsingΓöé
Γöé                       Γöé Database' to this Server              Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé WHEN FREED Γöé When remote application disconnects Γöé
Γöé Γöé from the database Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé APPLIES TO Γöé RDS Servers Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Description
The RQRIOBLK and SVRIOBLK parameters are used as follows (see Figure 6-5. "RDS
Parameters at a Server Workstation" and Figure 6-6. "RDS Parameters at a
Requester Workstation" for an illustration of how the RDS parameters relate):
o When a database requester starts using a remote database, an area is
allocated in Shared RAM on both Requester and Server workstations to act as
a buffer between Database Manager and Communications Manager. (Note that
this area does not come out of the Communications Heap; see Table 6-16.
"Summary of information for Communications Heap Size".) All SQL and
Database Services requests that do not involve record blocking pass through
this buffer on both the server and requester.
The value of RQRIOBLK determines the size of this area on the requester,
and similarly SVRIOBLK on the server. This area is deallocated when the
requester stops using the database. For non-blocking requests, a reason for
increasing the size of either parameter would be if the data to be
transmitted by a single SQL statement is so large that the default value is
insufficient.
o When record blocking is used, the value of RQRIOBLK on the requester
determines the size of the record blocks on both the server and requester.
One record block is needed for each open blocking cursor: a block is
allocated when the cursor is opened, and freed when it is closed. (See
6.3.2.9 Record Blocking for information on blocking cursors.) These blocks
are allocated out of the Communications Heap (see Table 6-16. "Summary
of information for Communications Heap Size"), which actually grows and
shrinks as the record blocks are allocated and released.
Record blocks cannot exceed 32KB in size. That is, if the size of RQRIOBLK
is greater than 32KB, only 32KB will be allocated. When a blocking cursor is
opened and there is no more room in the Communications Heap, no message will
be given. Instead, blocking will be turned off for that particular
execution instance of the cursor.
Increasing the size of the record block may yield better performance if the
number or size of records being transferred is large (for example, if the
amount of data is greater than 4KB). However, there is a tradeoff in that
larger record blocks increase the size of the working set RAM for each
connection. This could in fact hinder performance by limiting the number of
connections possible.
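To make the working-set tradeoff concrete, the C fragment below compares the
Shared RAM used by record blocks for a default-sized and a maximum-sized
RQRIOBLK. The number of open blocking cursors per connection and the number of
concurrent connections are illustrative assumptions, not recommendations.
   #include <stdio.h>

   int main(void)
   {
      long small_blk = 4L * 1024L;    /* default RQRIOBLK                  */
      long large_blk = 32L * 1024L;   /* record blocks are capped at 32KB  */
      long cursors = 4L;              /* assumed open blocking cursors     */
      long conns   = 6L;              /* assumed concurrent connections    */

      printf("4KB blocks : %ld bytes per connection, %ld bytes in all\n",
             small_blk * cursors, small_blk * cursors * conns);
      printf("32KB blocks: %ld bytes per connection, %ld bytes in all\n",
             large_blk * cursors, large_blk * cursors * conns);
      return 0;
   }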
ΓòÉΓòÉΓòÉ 9.4.7. 6.4.7 DOS Database Requester Communication Block Size (SQLSIZE=bs/ws) ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-15. Summary of information for DOS Comm. Block Size Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé CONFIGURATION FILE Γöé DOS Database Requester Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT Γöé 1KB Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RECOMMENDED BEGINNING Γöé 4KB Γöé
Γöé VALUE Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RANGE Γöé 512 - 64KB Bytes Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé WHEN ALLOCATED Γöé When the DOS application connects to aΓöé
Γöé Γöé remote database Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé WHEN FREED Γöé When the DOS application disconnects Γöé
Γöé Γöé from a remote database Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé APPLIES TO Γöé DOS Database Requesters Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
The SQLSIZE parameter in the DOS Database Requester configuration file contains
two parts: the communication block size (bs) and the work area size (ws). Of
interest here is the communication block size (bs). This parameter defines the
maximum number of bytes that are transferred on a single network send or
receive request. If record blocking is being done, this same area is used for
record blocking and thus this parameter determines the size of the record
block. As is true for an OS/2 requester, the value of the communication block
size parameter on the DOS workstation determines the record block size on both
the server and requester for a particular database connection. Although the
default is 1KB, 4KB is recommended.
ΓòÉΓòÉΓòÉ 9.4.8. 6.4.8 Communications Heap Size (COMHEAPSZ) ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-16. Summary of information for Communications Heap SizeΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéCONFIGURATION FILE ΓöéDatabase Manager Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDEFAULT - STANDALONE Γöé0 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDEFAULT - SERVER Γöé2 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDEFAULT - REQUESTER Γöé2 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéRECOMMENDED BEGINNINGΓöé2 Γöé
ΓöéVALUE Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéRANGE Γöé1 to 255 64KB Shared Segments Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéWHEN ALLOCATED ΓöéReserved at 'Start Using Database', RAM Γöé
Γöé                     Γöéallocated when a blocking cursor is OPENedΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéWHEN FREED ΓöéWhen a blocking cursor is CLOSEd Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéAPPLIES TO ΓöéRDS Servers, Requesters Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Description
One communications heap is allocated on both server and requester machines for
each requester process that connects to a server database. This heap is used
only for record blocking (see 6.4.6 Requester and Server I/O Block Sizes
(RQRIOBLK and SVRIOBLK)); its size should be set according to the number of
blocking cursors that will be simultaneously open for a remote database
connection. More specifically, the value of COMHEAPSZ should be calculated
with the following formula, where 'm' is the number of concurrently open
cursors used by an application:
COMHEAPSZ >= (RQRIOBLK * m + 65535) / 64KB
When a requester connects to a remote database, if this connection causes the
number of reserved Communications Heaps to be exceeded, record blocking will be
turned off for that entire connection. The number of Communications Heaps
available on the server and requester is determined by the NUMRC parameter (see
Table 6-17. "Summary of information for Number of Remote Connections").
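As a worked example of the COMHEAPSZ formula above, the C fragment below
computes a candidate COMHEAPSZ for assumed values of RQRIOBLK and 'm'; the
'+ 65535' term is read here as rounding the result up to a whole 64KB shared
segment.
   #include <stdio.h>

   int main(void)
   {
      long rqrioblk = 32L * 1024L;   /* assumed requester I/O block size  */
      long m        = 3L;            /* assumed concurrently open cursors */
      long comheapsz;

      /* COMHEAPSZ >= (RQRIOBLK * m + 65535) / 64KB                       */
      comheapsz = (rqrioblk * m + 65535L) / 65536L;

      printf("RQRIOBLK=%ld and m=%ld call for a COMHEAPSZ of at least %ld\n",
             rqrioblk, m, comheapsz);
      return 0;
   }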
ΓòÉΓòÉΓòÉ 9.4.9. 6.4.9 Number of Remote Connections (NUMRC) ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-17. Summary of information for # of Remote ConnectionsΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéCONFIGURATION FILE ΓöéDatabase Manager Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDEFAULT - STANDALONE Γöé0 (not changeable) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDEFAULT - SERVER Γöé10 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDEFAULT - REQUESTER Γöé3 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéRECOMMENDED BEGINNINGΓöéNumber of remote database connections + 2Γöé
ΓöéVALUE Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéRANGE Γöé1 to 255 connections Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéAPPLIES TO ΓöéRDS Servers, RDS Requesters, DOS Γöé
Γöé ΓöéRequesters Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Description
This parameter is used when Database Manager is initially started (STARTDBM) to
reserve 'numrc' Communications Heaps on both RDS servers and requesters (see
Table 6-16. "Summary of information for Communications Heap Size").
Communications Heaps are assigned at both the server and requester when a
requester issues a remote 'Start Using Database'. When the 'numrc+1'
connection is made (on either side of the connection), the connection is still
allowed, but record blocking will be inactive during that time. If the same
process later connects to the same remote database there may be enough
Communications Heaps at that time to enable record blocking.
For DOS requesters, usage of this parameter differs from 1.2 to 1.3. In EE 1.2,
NUMRC is used to define a hard limit on the number of DOS requesters that can
connect to an RDS Server. That is, when the 'numrc+1' DOS requester tries to
connect to a server, it is not allowed to connect. (Note that this check is
independent of the number of OS/2 requesters that are connected to that
server.) In EE 1.3, this parameter works the same in DOS as in the OS/2
system, in that it will only cause record blocking to be turned off for that
connection. In this situation, OS/2 and DOS requesters are all considered
together as one group.
ΓòÉΓòÉΓòÉ 9.4.10. 6.4.10 Maximum Number of Shared Segments (SQLENSEG) ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-18. Summary of information for # of Shared Segments Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé CONFIGURATION FILE Γöé Database Manager Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT - STANDALONE Γöé 20 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT - SERVER Γöé 50 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT - REQUESTER Γöé 1 (not changeable) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RECOMMENDED BEGINNING Γöé 802 Γöé
Γöé VALUE - STANDALONE Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RECOMMENDED BEGINNING Γöé 802 Γöé
Γöé VALUE - SERVER Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RANGE Γöé 7 to 802 64KB Shared Segments Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé WHEN ALLOCATED Γöé As needed Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé WHEN FREED Γöé When no longer needed Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé APPLIES TO Γöé RDS Servers, Standalone, Requester Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Description
This parameter is not really a tuning parameter, but is rather a summary of
values that are set for any parameters that use shared RAM. It defines the
maximum number of shared segments available to a Database Manager installation.
(See 6.4.1.2 How Database Manager Uses RAM for more information.)
Since all the RAM that falls under the umbrella of this parameter is allocated
only as needed for each database, there is really no problem setting this value
high. The recommended value is the maximum value of 802, which will keep you
from having configuration errors.
If for some reason you want to prevent any particular combination of active
databases from using up more than a specified number of shared RAM segments,
then you may wish to set this value lower. If so, the value of this parameter
must satisfy the following formulas:
SQLENSEG >= 1 + (NUMDB*6)
- and -
SQLENSEG >= 1 + (numsegs for DBa) + (numsegs for DBb) + ...
where: (numsegs for DBx) = 3 + DBHEAP + (LOCKLIST+15)/16
+ (BUFFPAGE+15)/16
- and - All DBx's are concurrently active databases.
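The C fragment below evaluates (numsegs for DBx) for two hypothetical
databases and then applies the second formula; the DBHEAP, LOCKLIST, and
BUFFPAGE values are assumptions used only to show the arithmetic.
   #include <stdio.h>

   /* (numsegs for DBx) = 3 + DBHEAP + (LOCKLIST+15)/16 + (BUFFPAGE+15)/16 */
   long numsegs(long dbheap, long locklist, long buffpage)
   {
      return 3L + dbheap + (locklist + 15L) / 16L + (buffpage + 15L) / 16L;
   }

   int main(void)
   {
      long dba = numsegs(3L, 4L, 250L);   /* assumed values for database A */
      long dbb = numsegs(5L, 8L, 500L);   /* assumed values for database B */

      printf("numsegs for DBa = %ld, for DBb = %ld\n", dba, dbb);
      printf("SQLENSEG must be at least %ld for both to be active\n",
             1L + dba + dbb);
      return 0;
   }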
ΓòÉΓòÉΓòÉ 9.4.11. 6.4.11 Maximum Number of Concurrently Active Databases (NUMDB) ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-19. Summary of information for Number of Databases Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé CONFIGURATION FILE Γöé Database Manager Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT - STANDALONE Γöé 3 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT - SERVER Γöé 8 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT - REQUESTER Γöé 0 (not changeable) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RECOMMENDED BEGINNING Γöé Number of databases you plan to have Γöé
Γöé VALUE - STANDALONE Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RECOMMENDED BEGINNING Γöé 8 Γöé
Γöé VALUE - SERVER Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RANGE Γöé 1 to 8 Databases Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé APPLIES TO Γöé RDS Servers, Standalone Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Description
This Database Manager configuration parameter specifies the number of local
databases that can be simultaneously active (i.e., have applications connected
to them) on a single standalone or server/requester workstation. Setting this
value high does not take more system memory. It is simply used as a check when
applications issue a 'Start Using Database.'
Since each database takes up DASD and an active database uses shared RAM, you
can reduce system resource usage by limiting the number of separate databases
on your machine. However, arbitrarily reducing the number of databases is not
the answer. That is, putting all data, no matter how unrelated, in one
database will reduce the disk space used, but may neither make sense nor be prudent. On
the other hand, since an application process can perform SQL against only one
database at a time, functionally related information should be kept in a single
database.
Note that an empty database uses approximately 360KB of DASD for its system
catalog tables. Any database created with Query Manager will be approximately
650KB because of the Query Manager owned table QRWSYS.QRWSYS_OBJECT, which
contains all Query Manager object definitions. If a database was not created
via Query Manager, but is later opened by Query Manager, this Query Manager
table will be created.
In a Remote Data Services environment, this parameter along with the MAXAPPLS
parameter (see below) can be used to limit the number of database application
processes that can connect to a server.
ΓòÉΓòÉΓòÉ 9.4.12. 6.4.12 Maximum Number of Active Applications (MAXAPPLS) ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-20. Summary of information for Max. # of Applications Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéCONFIGURATION FILE ΓöéDatabase Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDEFAULT Γöé8 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéRECOMMENDED BEGINNINGΓöé(Number of processes projected to be Γöé
ΓöéVALUE Γöésimultaneously connected to the database)Γöé
Γöé Γöé + 2 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéRANGE Γöé1 to 117 Processes Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéAPPLIES TO ΓöéRDS Servers, Standalone Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Description
This parameter specifies the maximum number of concurrent processes that can be
connected (both local and remote) to a database. Since each process that
attaches to a database causes some private RAM to be allocated, allowing a
larger number of concurrent processes will potentially use more RAM. You may
not want to set this value arbitrarily high.
When an application attempts to connect to a database but MAXAPPLS has already
been reached, the following error message will be received:
SQL1040N: "The maximum number of applications are already connected to the
database."
For the following Database Manager utilities, MAXAPPLS must be at least '2':
o Import / Export
o Backup / Restore
ΓòÉΓòÉΓòÉ 9.4.13. 6.4.13 Maximum Database Files Open per Application (MAXFILOP) ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-21. Summary of information for Max. # of DB Files OpenΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé CONFIGURATION FILE Γöé Database Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT Γöé 20 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RECOMMENDED BEGINNING Γöé 20 Γöé
Γöé VALUE Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RANGE Γöé 2 to 235 File Handles Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé APPLIES TO Γöé RDS Servers, Standalone Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Description
This parameter limits the number of database files allowed open at one time by
a specific process connected to a database. If opening a file causes this
value to be exceeded, some database files in use by this process are closed.
The value of this parameter must satisfy the following formula:
(MAXFILOP * MAXAPPLS) + 20 <= 32700
Decreasing the value of this parameter may cause Database Manager to open and
close its files more frequently. However, this should not be a performance
concern since the OS/2 Open and Close operations are simply file directory
operations. This directory is loaded into a RAM area defined by the CONFIG.SYS
'BUFFERS=' parameter. If this area is large enough, the directory should be in
RAM.
Increasing the value of this parameter may leave more database files open for a
particular process, but it will hold file handles that another process might
need. However, this should be a concern only on early EE 1.1 systems where the
file handle limit was 255. (The current file handle limit for 1.2 and beyond is
32K.)
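The (MAXFILOP * MAXAPPLS) + 20 <= 32700 constraint is easy to check; the C
fragment below does so for assumed MAXFILOP and MAXAPPLS values and also shows
the largest MAXFILOP the formula itself would permit for that MAXAPPLS (the
2 to 235 range in the table above still applies).
   #include <stdio.h>

   int main(void)
   {
      long maxfilop = 20L;   /* assumed MAXFILOP */
      long maxappls = 8L;    /* assumed MAXAPPLS */
      long used = maxfilop * maxappls + 20L;

      printf("(MAXFILOP * MAXAPPLS) + 20 = %ld (limit is 32700)\n", used);
      printf("Formula allows MAXFILOP up to %ld for MAXAPPLS = %ld\n",
             (32700L - 20L) / maxappls, maxappls);
      return 0;
   }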
ΓòÉΓòÉΓòÉ 9.4.14. 6.4.14 Maximum Total Files Open per Application (MAXTOTFILOP) ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-22. Summary of information for Max. # of Total Files OpenΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé CONFIGURATION FILE Γöé Database Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT Γöé 255 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RECOMMENDED BEGINNING Γöé 255 Γöé
Γöé VALUE Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RANGE Γöé 25 to 32700 File Handles Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé APPLIES TO Γöé RDS Servers, Standalone Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Description
This parameter defines the total database and application file handles that may
be used by a specific process connected to a database. This value is passed to
the OS/2 system when the process connects to a database, and the OS/2 system
ensures that the process does not exceed this limit. The value of this
parameter must satisfy the following formula:
MAXTOTFILOP >= MAXFILOP + 20
ΓòÉΓòÉΓòÉ 9.4.15. 6.4.15 Database Heap (DBHEAP) ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-23. Summary of information for Database Heap Size Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé CONFIGURATION FILE Γöé Database Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT Γöé 1 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RECOMMENDED BEGINNING Γöé 3 - 5 Γöé
Γöé VALUE Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RANGE Γöé 1 to 45 64KB Shared Segments Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé WHEN ALLOCATED Γöé First 'Start Using' against a databaseΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé WHEN FREED Γöé When last application stops using Γöé
Γöé Γöé database Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé APPLIES TO Γöé RDS Servers, Standalone Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Description
There is one Database Heap per database, and Database Services uses it on
behalf of all processes connected to the database. This parameter is relevant
only on Database servers. It primarily contains cursor control block
information, so its size will be dependent on the number of cursors that are
open simultaneously for that database. The minimum amount of this data area
that Database Manager needs to get started is allocated at the first 'Start
Using Database', and it is expanded as needed up to the maximum specified by
DBHEAP.
This parameter is not related to performance. A value of '3' should be
sufficient for most cases. You will know to increase this value when your
application receives the message SQL0956C: "Not enough storage available in the
database heap to process the statement."
ΓòÉΓòÉΓòÉ 9.4.16. 6.4.16 Application Heap Size (APPLHEAPSZ) ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-24. Summary of information for Application Heap Size Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé CONFIGURATION FILE Γöé Database Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT Γöé 3 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RECOMMENDED BEGINNING Γöé 3 Γöé
Γöé VALUE Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RANGE Γöé 2 to 20 64KB Private Segments Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé WHEN ALLOCATED Γöé Start Using Database Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé WHEN FREED Γöé when application stops using database Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé APPLIES TO Γöé RDS Servers, Requesters, Standalone Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Description
This parameter defines the number of private RAM segments available to be used
by Database Services on behalf of a specific application process. The
parameter is used on both Servers and Requesters, though the application heap
on a requester is generally much smaller than that on a server. The heap is
allocated when a process issues a 'Start Using Database'. The amount allocated
will be the minimum amount needed to connect to the database. As the
application requires more heap space, Database Services will allocate RAM as
needed up to the maximum specified by this parameter. Like the DBHEAP
parameter, APPLHEAPSZ is not related to performance. You should use the
default value until you receive message SQL0954C: "Not enough storage
available in the application heap to process the statement."
ΓòÉΓòÉΓòÉ 9.4.17. 6.4.17 Statement Heap (STMTHEAP) ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-25. Summary of information for Statement Heap Size Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéCONFIGURATION FILE ΓöéDatabase Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDEFAULT Γöé64 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéRECOMMENDED BEGINNINGΓöé64 Γöé
ΓöéVALUE Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéRANGE Γöé8 to 255 64KB Private Segments Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéWHEN ALLOCATED ΓöéFor each statement during precompilation Γöé
Γöé Γöéor binding Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéWHEN FREED ΓöéWhen precompilation or binding of each Γöé
Γöé Γöéstatement is complete Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéAPPLIES TO ΓöéRDS Servers, Standalone Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Description
The statement heap is used as a work space for the SQL compiler during
compilation of an SQL statement. This parameter specifies the size of this work
space. This area does not stay permanently allocated, but is allocated and
released for every SQL statement handled. Note that for dynamic SQL statements,
this work area will be used during execution of your program.
If your application uses a large amount of dynamic SQL, you may need to set
this value higher.
ΓòÉΓòÉΓòÉ 9.4.18. 6.4.18 Time Interval for Checking Deadlock (DLCHKTIME) ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-26. Summary of information for Deadlock Interval Ck. TimeΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé CONFIGURATION FILE Γöé Database Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT Γöé 10,000 milliseconds (10 seconds) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RECOMMENDED BEGINNING Γöé 10,000 milliseconds Γöé
Γöé VALUE Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé RANGE Γöé 1000 to 600,000 milliseconds Γöé
Γöé Γöé (1 second to 10 minutes) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé APPLIES TO Γöé RDS Servers, Standalone Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Description
A deadlock occurs when two or more processes connected to the same database
wait indefinitely for a resource. The waiting is never resolved because each
process is holding a resource that the other needs to continue. The deadlock
check interval defines the frequency at which Database Services checks for
deadlocks among all the processes connected to a database. The overhead of
checking for deadlocks has not been found to affect performance, thus the
default value should be sufficient. If you find a need to detect deadlocks
faster, you may want to decrease this value.
ΓòÉΓòÉΓòÉ 9.5. 6.5 Operational Considerations ΓòÉΓòÉΓòÉ
6.5.1 Physical Table Structure
6.5.2 Estimating Database Size
6.5.3 Reorganization (REORG)
6.5.4 Run Statistics Utility (RUNSTATS)
6.5.5 Logging and Recovery Concepts
ΓòÉΓòÉΓòÉ 9.5.1. 6.5.1 Physical Table Structure ΓòÉΓòÉΓòÉ
Each Database Manager table is stored in a single OS/2 file. All indices for
that table are stored in a separate OS/2 file. If the table contains LONG
VARCHAR data, this data is stored in yet a third file. All three files have
the same name:
SQLxxxxx
where 'xxxxx' is an integer. To determine the value of this integer for a
particular table, issue the following SQL statement:
SELECT FID FROM SYSIBM.SYSTABLES WHERE NAME = 'TABLENAME'
These three files are differentiated by their file extensions. All possible
extensions for database files are listed in the following table:
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-27. Types of database files                            Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéEXTENSIONΓöé TYPE OF FILE Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé .DAT ΓöéMain table file Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé .INX ΓöéIndex file for a table Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé .LF ΓöéFile containing LONG VARCHAR data (or "long field Γöé
Γöé         Γöédata"), if such columns are present in the table.    Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé .LOG ΓöéDatabase log files Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé  .SEM   ΓöéA semaphore file that prevents more than one         Γöé
Γöé Γöéinstance of Database Manager from accessing a data- Γöé
Γöé Γöébase simultaneously. For example, it is possible forΓöé
Γöé Γöéa local instance of Database Manager to connect to Γöé
Γöé Γöéthe database, and then have another instance of Γöé
Γöé ΓöéDatabase Manager attempt to access the database via aΓöé
Γöé         Γöéredirected LAN Server/Requester drive (that is, not  Γöé
Γöé         Γöéusing RDS). Allowing such a thing could destroy the  Γöé
Γöé Γöédatabase, since neither instance of Database Manager Γöé
Γöé Γöéwould know about the other. Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
The main table (.DAT) file consists of 4KB pages. Each page contains 76 bytes
of Database Services overhead. This leaves 4020 bytes to hold user data (or
rows), although no row can exceed 4005 bytes in length. See 6.5.2 Estimating
Database Size for information on calculating table row size. Rows are
inserted into the table in a first-fit order. That is, the file is searched
(via a free space map) for the first available space that is large enough to
hold the new row. When a row is updated, it is updated in place unless there
is insufficient room left on the 4KB page to contain it. If this is the case,
a "tombstone record" is created in the original row location which points to
the new location in the table file of the updated row. Rows do not span 4KB
pages.
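Given the 4020 bytes of user space per 4KB page, a quick C calculation such as
the one below gives a feel for how many rows land on each page and roughly how
large the .DAT file becomes. The average row size and row count are
assumptions; the 8 bytes of per-row overhead is the figure used in the sizing
rules of 6.5.2.
   #include <stdio.h>

   int main(void)
   {
      long avg_row  = 100L;     /* assumed average user row size            */
      long overhead = 8L;       /* per-row overhead (see 6.5.2)             */
      long usable   = 4020L;    /* user data bytes available per 4KB page   */
      long rows     = 100000L;  /* assumed number of rows in the table      */
      long per_page = usable / (avg_row + overhead);
      long pages    = rows / per_page + 1L;

      printf("About %ld rows fit on each 4KB page\n", per_page);
      printf("A %ld row table needs roughly %ld pages (%.1f MB)\n",
             rows, pages, (double)pages * 4096.0 / 1048576.0);
      return 0;
   }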
See Database Manager Index Structure for information on index file structure.
The LONG VARCHAR file is structured differently than the main data file. Data
is stored in 32KB areas that are broken up into segments whose sizes are
"powers of two" times 512 bytes. (Hence these segments can be 512 bytes, 1024
bytes, 2048 bytes, and so on, up to 32KB.) They are stored in a fashion that enables
free space to be reclaimed easily. Additionally, file allocation and free
space information is stored in 4KB allocation pages, which appear infrequently
throughout the file. The amount of wasted space in the file depends on the
size of the LONG VARCHAR data and whether this size is relatively constant
across all occurrences of the data. Overhead per data entry could range from
0% to as much as 50%. If character data is less than 4KB in length, the CHAR
datatype should be used instead of LONG VARCHAR.
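Because each value goes into a segment whose size is a power of two times 512
bytes, the unused space per entry depends on how far the data length falls
below the next such boundary. The C fragment below rounds an assumed data
length up to its segment size to show the effect.
   #include <stdio.h>

   int main(void)
   {
      long len = 3000L;    /* assumed length of one LONG VARCHAR value     */
      long seg = 512L;     /* smallest segment size                        */

      while (seg < len && seg < 32768L)
         seg *= 2L;        /* next power of two times 512, up to 32KB      */

      printf("%ld bytes of data occupy a %ld byte segment (%.0f%% unused)\n",
             len, seg, 100.0 * (double)(seg - len) / (double)seg);
      return 0;
   }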
ΓòÉΓòÉΓòÉ 9.5.2. 6.5.2 Estimating Database Size ΓòÉΓòÉΓòÉ
The following information provides a rule of thumb for estimating the size of
a database. Information such as row size and structure is factual. However,
the multiplication factors for estimating file overhead due to fragmentation
and free space are educated estimates and not based on testing. Even with
testing, no single factor could be determined, since there is such a wide range
of possibilities for the structure and size of data in a database. After
initially estimating your database size, create some test databases. Through
this you may find multipliers that better fit your environment.
Following are things to consider when estimating the size of a database (a
worked sizing sketch follows the list):
1. System Catalog Tables
When a database is initially created about 360KB of system tables are
created. These system tables will grow as user tables and indices,
Grant/Revoke authorizations, and access plans are added to the database.
Descriptions of the catalog tables can be found in the appendix to the IBM
OS/2 Database Manager Administrator's Guide.
After Query Manager has been used to open a database, the size of an
"empty" database will increase to about 650KB due to the creation of a
Query Manager table, and the insertion of Query Manager's access plans
into the database.
2. User Data Tables
For each user table in the database, the space needed is:
(average row size + 8) * number of rows * 1.5
The average row size is the sum of the average column sizes, where each
column size is described in the following table. For every column that
allows nulls, add one extra byte for the null indicator. The factor of
'1.5' is for overhead such as page overhead and free space. (See 6.5.1
Physical Table Structure for information on page overhead size.)
3. LONG VARCHAR Data File
If a table has LONG VARCHAR data, in addition to the byte count of 24 for
the LONG VARCHAR descriptor (in the table row), the data itself must be
accounted for. LONG VARCHAR data is stored in a separate file (as
described in 6.5.1 Physical Table Structure). To compute the size of this
file, compute the actual data length of all the LONG VARCHAR data and
multiply it by '1.3' for overhead. Note that as you experiment with your
LONG VARCHAR data, you may find a factor that better suits your
application.
4. Index Space
For each index, the space needed can be estimated as:
(average index key size + 8) * number of rows * 2
where the 'average index key size' is the byte count of each column in the
index key (using Table 6-28. "Column overhead for data types"). The
factor of '2' is for overhead, such as non-leaf pages and free space.
5. Log File Space
The amount of space required for log files is minimally:
(# primary log files * log file size) + 12288
where '12288' is the number of bytes for the log control file. If
secondary logs are used during interaction with the database, space must be
added for these during runtime:
(max. # secondary log files used simultaneously * log file size)
6. DASD Work Space
Some SQL statements require temporary tables for processing (such as for
sorts or joins that cannot fit in RAM). These require disk space for
storage during the time they are used, and the amount required will be
totally dependent on the nature of the queries.
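Pulling the items above together, the C fragment below estimates the DASD for
one hypothetical table with a single index, some LONG VARCHAR data, and the
primary log files. Every input value is an assumption made for the example;
the 1.5, 1.3 and 2 multipliers and the 360KB and 12288-byte constants are the
rule-of-thumb figures from the list.
   #include <stdio.h>

   int main(void)
   {
      long   rows       = 100000L;        /* assumed number of rows          */
      long   avg_row    = 100L;           /* assumed average row size        */
      long   avg_key    = 20L;            /* assumed average index key size  */
      long   lv_bytes   = 5000000L;       /* assumed total LONG VARCHAR data */
      long   logprimary = 3L;             /* assumed # of primary log files  */
      long   logfilsiz  = 200L * 1024L;   /* assumed log file size, in bytes */
      double table, index, longvc, logs, total;

      table  = (double)(avg_row + 8L) * (double)rows * 1.5;
      index  = (double)(avg_key + 8L) * (double)rows * 2.0;
      longvc = (double)lv_bytes * 1.3;
      logs   = (double)(logprimary * logfilsiz) + 12288.0;
      total  = 360.0 * 1024.0 + table + index + longvc + logs;

      printf("Table        : %10.0f bytes\n", table);
      printf("Index        : %10.0f bytes\n", index);
      printf("LONG VARCHAR : %10.0f bytes\n", longvc);
      printf("Log files    : %10.0f bytes\n", logs);
      printf("Estimated total: %.1f MB plus DASD work space\n",
             total / 1048576.0);
      return 0;
   }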
ΓòÉΓòÉΓòÉ 9.5.3. 6.5.3 Reorganization (REORG) ΓòÉΓòÉΓòÉ
The REORG utility removes unused space from a table. This includes fragmented
space throughout the file, and unused space at the end of the file (files are
not shrunk when rows are deleted). Since Database Manager does not insert rows
into a table in any particular order (a first fit algorithm is used), REORG
also allows you to specify an index on which to physically order the rows of
the table. Indices on tables can also be reorganized. REORG may be used
against any table or index in a database, including Database Manager's system
catalog tables (SYSIBM.SYSTABLES, SYSIBM.SYSCOLUMNS, etc.).
The REORG utility works in the following way.
o Using the table (and optionally index) name specified by the user of REORG,
REORG builds a SELECT statement of the following form:
SELECT * FROM tablename ORDER BY col1, col2, ...
Where the ORDER BY contains all the columns of the specified index. (If the
table is not being reorganized according to an index, there will not be an
ORDER BY clause.)
o This SELECT is passed to Database Services.
o Database Services performs the SELECT, retrieving the rows into a temporary
table.
o The original table file is deleted and replaced with the temporary table
file.
o All indices defined for the table are dropped (since the index key record
pointers are no longer valid) and then recreated.
Only two log records should be written during REORG: one to indicate that the
temporary table is created and another to indicate that the original table is
replaced. In OS/2 Database Manager Version 1.2, a bug caused all inserts into
the temporary table to be logged. This has been corrected in Database Manager
Version 1.3.
In addition to log space, enough DASD should be available to create a copy of
the table file being reorganized.
Tables tend to need reorganization after there have been massive updates,
inserts and deletes (either all at once or over a long time). However, since
REORG is not a split-second process, you will want to be judicious in choosing
which tables and/or indices to reorganize. The following formulas are rules of
thumb (based on table and index statistics) that may be helpful in determining
when to run REORG. Note that this implies that the statistics on the tables
and/or indices being considered must be up to date in order for the formulas
to work. If any one of the formulas is not true, the table or index is a
REORG candidate. If you do end up reorganizing one or more tables or indices,
statistics will need to be re-run after the REORG.
Some of the variables below represent columns found in system catalog tables.
These variables are shown in bold and are described in 6.5.4 Run Statistics
Utility (RUNSTATS).
The following formulas apply to tables:
o OVERFLOW/CARD < 0.05
The total number of overflow rows in the table should be less than 5% of the
total number of rows.
o TABLESIZE / ((FPAGES-1) * 4020) > 0.7
Where:
- TABLESIZE is the number of bytes of user data in the table, including 8
bytes of row overhead. This is computed using the following formula for
each column in the table, and then summing the results for all columns in
the table:
(AVGCOLLEN + 8) * CARD
The table size in bytes should be more than 70% of the total space
allocated for the table.
o NPAGES/FPAGES > 0.8
The number of pages that contain no rows should be less than 20% of the
total number of pages.
The following formulas apply to indices.
o CLUSTERRATIO > 0.8
The cluster ratio of the most important index should be greater than 80%.
Note that when multiple indices are defined on one table, some of the
indices will probably have a low cluster ratio since each index is
satisfying a different ordering. If an index contains many duplicate keys
and the number of entries in the index is large, the cluster ratio will
usually not be optimal.
o BYTES / (NLEAF * 4096) > 0.5
Where:
- BYTES =
FULLKEYCARD*(IXSZ+10) + (CARD- FULLKEYCARD)*4
- IXSZ = the sum of AVGCOLLEN for all columns participating in the index
key (i.e. the average index key size)
Less than 50% of the space reserved for index entries should be empty.
o 0.9 * (IXP ** (NLEVELS-2)) / (BYTES / 3076) < 1
Where:
- IXP =
4000 / (IXSZ + 10) (maximum number of index entries per 4KB block)
(Note that IXSZ is defined in the previous formula.)
- BYTES: defined in the previous formula.
The actual number of index entries should be more than 90% of the number
that an index with NLEVELS-1 levels can handle.
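Because these rules of thumb are simple arithmetic on catalog statistics, they
are easy to automate. The following C sketch evaluates the three table
formulas above; the statistic values shown are hypothetical placeholders, and
in practice they would be retrieved from SYSIBM.SYSTABLES and
SYSIBM.SYSCOLUMNS.

   #include <stdio.h>

   int main(void)
   {
       /* Hypothetical statistics for one table */
       double card      = 50000.0;     /* CARD: rows in the table          */
       double overflow  = 3100.0;      /* OVERFLOW: overflow records       */
       double fpages    = 900.0;       /* FPAGES: total 4KB pages          */
       double npages    = 860.0;       /* NPAGES: pages containing rows    */
       double tablesize = 2300000.0;   /* sum of (AVGCOLLEN + 8) * CARD    */

       int candidate = 0;

       if (!(overflow / card < 0.05))                       candidate = 1;
       if (!(tablesize / ((fpages - 1.0) * 4020.0) > 0.7))  candidate = 1;
       if (!(npages / fpages > 0.8))                        candidate = 1;

       printf(candidate ? "Table is a REORG candidate\n"
                        : "Table does not appear to need REORG\n");
       return 0;
   }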
Once a table or index is reorganized, RUNSTATS must be run to update
statistics on the table or index. The steps for updating statistics and having
these new statistics applied to your SQL statements are:
1. Run REORG
2. Do RUNSTATS
3. REBIND your application(s) - not necessary for dynamic SQL applications
6.5.4 Run Statistics Utility (RUNSTATS)
ΓòÉΓòÉΓòÉ 9.5.4. 6.5.4 Run Statistics Utility (RUNSTATS) ΓòÉΓòÉΓòÉ
The Run Statistics utility (or RUNSTATS) gathers statistics on user tables and
indices. (Statistics can not be gathered for the system catalog tables or
indices. Database Services has its own predetermined statistics for these.)
These statistics are used by the Database Optimizer during precompile/bind to
determine the best access path for each SQL statement in a program. (See
6.3.2.2 Query Design and Access Plan Concepts for a description of access path
or plan, and the things the Optimizer considers in determining an access plan.)
In order for Database Manager to make a wise choice in accessing data, it is
important to keep these statistics up to date. Anytime the RUNSTATS utility is
run, application(s) must be rebound to the database so that new access plans
can be created that are based on these new statistics. There is no automatic
REBIND after performing RUNSTATS. Auto REBIND occurs only if plans are
invalidated through the dropping of an object such as an index, table, or view on
which the plan depends, or if Referential Integrity constraints affecting one
or more SQL statements in a plan are changed. Note that the auto REBIND will
fail if a required table or view is not found.
These statistics are stored in the following system catalog tables and consist
of the following information:
SYSIBM.SYSTABLES: Table Statistics
Column Name Description
CARD Table cardinality (# rows in a table)
FPAGES Total number of 4KB pages in the table file
NPAGES Number of 4KB pages in the table file on which rows appear
OVERFLOW Number of overflow records in the table. An overflow
record is created any time a row is updated and because of
an increase in data size, the row no longer fits in the
current page of the table file. The location in the table
of the original row will now contain a pointer to another
page where the updated row is stored.
SYSIBM.SYSCOLUMNS: Column Statistics
Column Name Description
COLCARD Column cardinality (# distinct values in a column)
AVGCOLLEN Average column length
HIGH2KEY Second highest value in column
LOW2KEY Second lowest value in column
SYSIBM.SYSINDEXES: Index Statistics
Column Name Description
NLEAF Number of leaf pages in index file
NLEVELS Number of B+ tree levels in index file
FIRSTKEYCARD Number of distinct first key values (i.e. the first value
in the key)
FULLKEYCARD Number of distinct full key values
CLUSTERRATIO Cluster ratio (i.e. how close the table's physical order
is to the index order)
Determining how often to use RUNSTATS
Following are some suggestions on when to perform RUNSTATS. This utility may
take a long time to run, depending on the size of your table(s). (RUNSTATS
reads every row in a table and generates statistics on every column of every
row.) If your table is large, perform RUNSTATS only when necessary.
o If you find your system performance slowing down, you may want to rerun
statistics. The decrease in performance may suggest that the statistics
are out of date and that Database Manager is creating access plans that do
not have anything to do with reality. You can run statistics against tables
only, indices only, or both tables and indices.
- Table Statistics: If column values of a table have changed drastically,
this could affect such things as the selectivity of predicates,
calculated table cardinalities, sort costs, and join ordering.
- Index Statistics: If the values of columns that appear in indices have
changed, this could affect such things as the clustering of the index and
index key cardinalities. Here the Optimizer might choose the wrong index
or choose an index scan when it should really be sorting.
o If simply running RUNSTATS doesn't help, it may be time to REORG the
table(s) being accessed (see 6.5.3 Reorganization (REORG)).
o If your application is running against a large database you will probably
want to perform RUNSTATS after the values in the database have stabilized
and the database has reached its normal operating size. If the values and
size don't change significantly after that, you may still want to reorganize
tables, but RUNSTATS may not be necessary (i.e. if you like your statistics
- keep them!)
o If you have added indices, RUNSTATS and REBIND will be necessary for the
indices to be used, but no REORG is necessary.
6.5.5 Logging and Recovery Concepts
ΓòÉΓòÉΓòÉ 9.5.5. 6.5.5 Logging and Recovery Concepts ΓòÉΓòÉΓòÉ
This section describes the concepts and characteristics of Database Manager's
logging facility for EE 1.2 and 1.3. Descriptions of the parameters affecting
logging are found in 6.4.3 Log-related Parameters.
6.5.5.1 How Logging Works
There is one recovery log per database. When data is modified in the buffer
pool, a log record of the change is written to the recovery log. (For INSERTs
and DELETEs, the log record consists of the newly inserted or the deleted row,
respectively. For UPDATEs, the log record consists of both the before and
after image of the row.) This log is an in-flight transaction log, meaning that
it reflects the database changes of all processes connected to the database
since the last time the database was quiesced. To be quiesced means that all
processes have disconnected from the database (which implies a COMMIT by those
processes), or that all the processes have committed without making any
subsequent changes.
When a process issues a COMMIT, only the log pages are forced to disk - the
actual data pages changed by this process may not yet be written to disk.
Until a database quiesces, data pages are swapped from the buffer pool only
when room is needed to read in another data page. When a database quiesces,
all data pages are written to disk and the log file is "thrown away" since the
database itself reflects the actual state of the data. Note also that until a
database quiesces, log records for uncommitted transactions will be interleaved
in the log file with records for committed transactions.
6.5.5.2 Use of Log Files for Recovery
When the system goes down, data pages containing committed data may not yet be
written to disk. However, the description of the change will have been
written to disk as a log record. So, after a database abnormally terminates
(for example, if the system loses power), the next time an application connects
to the database, a Database Restart must be performed. Restart restores the
database to a consistent state by ensuring that all committed changes are
written to disk and all uncommitted changes are NOT written to disk. This is
accomplished in three steps:
1. Forward Roll: Starting from the beginning of the log, every log record
found in the log file is redone, both committed and uncommitted changes.
2. Backward Roll: Starting from the end of the log, every uncommitted change
is undone.
3. All changed pages are forced to disk.
6.5.5.3 Soft checkpoints
It is recommended that the soft checkpoint parameter (SOFTMAX) be left at the
default value. However, the following conceptual information is being
included for completeness.
Over time, data pages associated with old log records (i.e. those at the
beginning of the log file) will be swapped to disk (refer to Figure 6-7.
"Database Manager Transaction Log"). When a soft checkpoint is taken, a place
is marked in the log where all previous log records are both:
o From completed transactions (i.e. these transactions have either committed
or been rolled back).
o Associated with data pages that have already been written to disk.
This point in the log is called the Redo/Restart point, and the Forward Roll
part of Restart starts at this point. The Soft Checkpoint parameter defines
how often to reset this Redo/Restart point in the log. Note that a soft
checkpoint does not cause any data to be written to disk.
6.5.5.4 Circular Log Characteristics
Database Manager 1.2 and 1.3 use a circular recovery log. (Previous versions
of Database Manager use a linear log.) This means that the space at the
beginning of the log (occupied by log records that are no longer needed) can
be reclaimed up to a certain point and used for new log records. In effect,
the "log head chases its tail." Referring to Figure 6-8. "Database Manager
Circular Log", the log is a single logical entity (although the physical log
itself may be stored in a series of separate OS/2 files). The portion of the
log file that can be "circularly reused" is the space up to the redo/restart
point, minus any reserved space Database Manager must set aside for uncommitted
transactions that must be rolled back (e.g., the Rollback process causes log
records to be written).
6.6 Improving Performance of Query Manager Reports
ΓòÉΓòÉΓòÉ 9.6. 6.6 Improving Performance of Query Manager Reports ΓòÉΓòÉΓòÉ
Table 6-29. Summary of Information for Query Manager Row Pool

  CONFIGURATION FILE           Query Manager
  DEFAULT                      16
  RECOMMENDED BEGINNING VALUE  Increase to improve scrolling performance
  RANGE                        1 to 999KB
  WHEN ALLOCATED               As needed for report from Private RAM,
                               1 64KB block at a time
  WHEN FREED                   Upon exit of Query Manager
  APPLIES TO                   The system on which Query Manager is
                               running (it is a local application)
Query Manager scrolling performance can be improved by increasing the size of
the Query Manager Row Pool. Beginning with release 1.2, Query Manager began
formatting an entire row of data instead of only a screen of data. Although
this improved the performance for horizontal scrolling, it did require a larger
row pool. So, the default row pool size may no longer be adequate for some
applications. Note that this will improve the performance of Query Manager only
for scrolling.
The size of the Row Pool affects how long it will take to run the initial query
and how long it will take to refill the row pool after it has been emptied
(i.e. after scrolling past the end of the last row in the row pool).
You can experiment with the size of the Query Manager Row Pool. If you
set the value too large, you will notice slower performance when the pool has
to be refilled.
Chapter 7 Application Design and Development Guidelines
ΓòÉΓòÉΓòÉ 10. Chapter 7 Application Design and Development Guidelines ΓòÉΓòÉΓòÉ
7.1 Introduction
7.2 OS/2 Multitasking
7.3 OS/2 Memory Management
7.4 OS/2 Application Packaging
7.5 Ported DOS Applications
7.6 Presentation Manager Application Guidelines
7.7 Reentrancy
ΓòÉΓòÉΓòÉ 10.1. 7.1 Introduction ΓòÉΓòÉΓòÉ
For more information on the subjects covered in this chapter, refer to the
manual IBM OS/2 Programming Tools and Information, Version 1.2
7.2 OS/2 Multitasking
ΓòÉΓòÉΓòÉ 10.2. 7.2 OS/2 Multitasking ΓòÉΓòÉΓòÉ
7.2.1 Multitasking Concepts
7.2.2 Multitasking Guidelines
ΓòÉΓòÉΓòÉ 10.2.1. 7.2.1 Multitasking Concepts ΓòÉΓòÉΓòÉ
The OS/2 program is a multitasking operating system. This means that it
manages multiple programs running at the same time by distributing processor
time and system resources (such as files, memory and devices) among them. The
following entities are part of this multitasking system and need to be
understood when writing programs for the OS/2 system:
Thread A dispatchable unit of work (a program runs in a thread).
Process A collection of one or more threads. A process owns system
resources.
Priorities Ratings assigned to each thread that determine which thread
will run before other threads.
Time Slice A fixed period of execution time assigned to a thread.
Serialization A method allowing multiple threads to access the same
resources that prevents more than one of them from making
changes at the same time.
7.2.1.1 Threads
The thread is the basic unit of execution in the OS/2 system. As a CPU passes
through a set of program instructions, it creates a thread of execution with
an environment that consists of its register values, stack and CPU mode. This
execution environment is called its context. The OS/2 system automatically
maintains a thread's context as it gains and gives up its access to the CPU.
Every OS/2 process consists of at least one thread. If a process consists of
multiple threads, all the threads share the resources of the process.
The maximum number of threads that can be active at one time is defined with
the THREADS command in CONFIG.SYS. The syntax is:
THREADS=x
where 'x' can range from 6 to 512. The default value is 64 active threads.
7.2.1.2 Processes
A process is a collection of system resources allocated to a particular
program. When a program is started, the OS/2 system creates a process to own
the resources of the process (such as memory, open files, devices, connections
to .DLLs, etc.). The program code is then loaded, and a thread is started for
the program to run in. Each process in the system has its own unique set of
resources. When the OS/2 dispatcher switches the CPU context between threads,
it ensures that the thread being dispatched is associated with the resources
of the correct process.
A comparison of processes and threads is listed below:
Table 7-1. Processes versus Threads comparison

  PROCESSES                           THREADS
  Program-sized                       Function-sized
  More overhead to create             Fast creation
  Data Sharing is slightly slower     Data sharing and using RAM
  since the operating system is       semaphores is a little faster
  controlling resources
  Owns resources, files and memory    Owns stack space and registers.
                                      Developers must synchronize
                                      resource access.
7.2.1.3 Priorities
Although all threads in the system compete for CPU time, there are some
threads that should take precedence over others in gaining access to the CPU.
For example, a communications thread should have priority over a printer
thread. To manage this, all threads in the system have an associated
priority. A priority consists of two parts: a class and a level within the
class. Following are the priority classes and levels supported:
Class Levels
Time Critical 0 - 31
Fixed High 0 - 31
Regular 0 - 31
Idle 0 - 31
A higher priority thread that is ready to run will always get the CPU before a
lower priority thread. Within the regular class, the levels of priority are
dynamically modified by the system, depending on how often the thread performs
I/O and how much CPU time the thread consumes. Finally, threads of equal
priority are allocated CPU time in a round robin fashion.
When a process is started, the default priority is "Regular 0." Threads
started within the same process inherit the priority of the parent process.
Priorities of processes and threads can be modified via the OS/2 DosSetPrty
API function.
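Most applications should leave their priority at the default, but for the rare
case that must change it, the following fragment is a minimal sketch of calling
DosSetPrty from C. It assumes the 16-bit DosSetPrty prototype and the
PRTYS_*/PRTYC_* constants from the OS/2 1.x toolkit header files; verify the
exact values and the meaning of the level argument against your toolkit
documentation.

   #define INCL_DOSPROCESS
   #include <os2.h>

   /* Move the current thread into the Time Critical class (sketch only). */
   VOID MakeThreadTimeCritical(VOID)
   {
       USHORT rc;

       rc = DosSetPrty(PRTYS_THREAD,        /* scope: this thread only        */
                       PRTYC_TIMECRITICAL,  /* class: Time Critical           */
                       15,                  /* level/delta within the class   */
                       0);                  /* 0 = the current thread         */
       if (rc != 0)
       {
           /* handle the error; rc is the OS/2 return code */
       }
   }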
7.2.1.4 Time Slicing
All threads in the OS/2 system get a fixed period of time on the CPU. This
period of time is called a time slice. Threads are preempted when their time
slice is finished. Thus, the OS/2 dispatcher is called a preemptive time
slicing dispatcher. Note that time slicing comes into play only among threads
of exactly the same priority.
The length of a time slice is set with the TIMESLICE command in CONFIG.SYS.
The syntax of this command is:
TIMESLICE = xx,yy
where:
o xx defines the length of the minimum (short) timeslice, in milliseconds.
The minimum time slice is used for temporary priority boosts, described in
"Regular Class." This value must be an integer greater than or equal to 32.
o yy defines the length of a normal time slice, in milliseconds. This is the
time used for all circumstances other than the temporary boosts. This value
must be an integer greater than or equal to 'xx', but less than 65536.
The default values are: TIMESLICE=32,248.
The MAXWAIT command (specified in CONFIG.SYS) defines the amount of time a
regular class process can wait before the system assigns it a higher priority
(also called a starvation boost). The syntax of this command is:
MAXWAIT=x
where 'x' is an integer number of seconds. Valid values range from 1 to 255,
with a default of 3.
7.2.1.5 Serialization
When multiple applications are running simultaneously, it is possible for one
application to be accessing, or even be modifying, a resource (such as a file)
and have its time slice run out before it is finished with that resource.
Another application that wants to access or change the same resource could
then begin running. It is evident that allowing this next application to
access this same resource would be undesirable. To prevent this, the OS/2
system provides a facility called semaphores so that applications can
serialize their use of shared resources. Semaphores are also used to
serialize events.
A semaphore is a data structure that the OS/2 system gives to only one thread
at a time. Hence, if one thread obtains a semaphore for accessing a
particular object, another thread will not be able to obtain it until the
first is finished. There are three types of semaphores:
o System Semaphores
This type of semaphore serializes events and resources between processes.
The data structure is stored in a global data area owned by the OS/2
kernel. If a process abnormally terminates, the OS/2 kernel knows about it
and can release any semaphores owned by the process, thus freeing other
processes that may be waiting on those semaphores.
o RAM Semaphores
This type of semaphore is functionally equivalent to a system semaphore,
except that it is stored in memory provided by the process that creates the
semaphore. The system routines to manage the semaphore run in the
application's context instead of in the OS/2 kernel, resulting in better
performance. However, the OS/2 system doesn't provide any control over
these semaphores, so a RAM semaphore should only be used for coordinating
events and resources between threads of a single process. If used across
processes, one process could abnormally terminate before releasing the
semaphore. Other processes waiting on the semaphore would then hang.
o FSRAM Semaphores
FSRAM semaphores combine the performance characteristics of a RAM semaphore
and the protection characteristics of a System semaphore. The performance is
achieved through the semaphore being stored in memory provided by the
creating process or thread. However, the OS/2 system keeps better track of
this type of semaphore, and should one process terminate abnormally, the
OS/2 system can clean up in the same way it does for System semaphores.
Hence, these semaphores can be used to communicate across processes.
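To make the serialization idea concrete, the following fragment is a minimal
sketch of a RAM semaphore protecting a shared counter among threads of one
process. It assumes the 16-bit DosSemRequest and DosSemClear calls, with -1L
used as an indefinite timeout; error handling is omitted.

   #define INCL_DOSSEMAPHORES
   #include <os2.h>

   static ULONG ulRamSem     = 0;   /* RAM semaphore: a doubleword owned by  */
                                    /* this process, initially clear         */
   static LONG  lSharedCount = 0;   /* resource shared by several threads    */

   VOID BumpSharedCount(VOID)
   {
       /* Block until no other thread of this process owns the semaphore. */
       DosSemRequest((HSEM)&ulRamSem, -1L);   /* -1L = wait indefinitely */

       lSharedCount++;                /* only one thread executes this at once */

       DosSemClear((HSEM)&ulRamSem);  /* release the semaphore */
   }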
7.2.2 Multitasking Guidelines
ΓòÉΓòÉΓòÉ 10.2.2. 7.2.2 Multitasking Guidelines ΓòÉΓòÉΓòÉ
7.2.2.1 Process and Thread Guidelines
Following are suggestions concerning processes and threads.
o The OS/2 system is especially useful for spinning off units of work which
ordinarily would tie up the screen and prevent the user from doing other
functions of the application. A general rule of thumb is "if it takes
longer than a 1/2 second to respond to the user, the unit of work should be
done in a background thread or process". Note that an event appears
instantaneous to a user if it is less than a 1/2 second (i.e. 500
milliseconds), and appears to pause if it is more than a 1/2 second.
o Threads versus Processes
If an application needs to exchange data quickly, and does not need to be
concerned with the management of resources (such as protecting against
abnormal termination), then starting an additional thread may be
appropriate. If the application is concerned about resource management, a
process would be a better alternative. For example, if there is a concern
this new thread could abnormally end (thereby taking down the entire
process), starting an additional process would be a better choice.
In terms of resources, a process takes about 3.3KB of system memory (this
includes the first thread), and each additional thread takes about 2KB. In
terms of time, dispatching between threads of the same process is faster
than between threads of separate processes, but only by 100 microseconds or
so. Separate processes would require you to use system semaphores where you
might otherwise use RAM semaphores, but this difference is only 80
microseconds at most. (Note that the times quoted here are for a 6 MHz
Personal Computer AT*. PS/2 models should be faster.)
From a programmer's point of view, communication between processes is more
difficult than communication between threads, since the two processes do not
share the same address space.
In general, take advantage of threads, but don't overdo it. Too many
threads will cost you in:
- Design complexity
- Performance overhead for dispatching and code to coordinate among them.
- Memory
- System resources - There is a hard limit of 512 threads in the system,
about 31 of which are used by the OS/2 system. There is also a limit of
52 threads per process.
o TIMESLICE and MAXWAIT values
In general, the default values of 32 and 248 are sufficient.
o Create threads in groups
Creating and destroying threads on demand may degrade performance. Instead,
threads should be created in groups. Once created, a thread can be placed
in the suspended state until needed (using DosSuspendThread and
DosResumeThread). This will also minimize the use of system resources,
since each thread requires a minimum stack of 4KB. (Note: Stack sizes
should be specified in 4KB multiples for 32-bit portability.)
Threshold management should be used in destroying threads. That is, wait to
drop threads until the number of required threads drops below a certain
threshold value that makes sense for your application.
o Presentation Manager* message threads
Don't tie up a PM message thread more than 500 milliseconds. (Tying up a PM
message thread means it is doing too much work and not servicing the message
queue. This causes other processes/threads to hang until their message gets
serviced.) If you're going to do something longer than 500 milliseconds,
spawn another thread. A PM message thread should be for receiving messages,
not doing extensive work in servicing a message.
To determine if a PM message thread is taking too long, try moving the mouse
and clicking a button. If there's a pause before receiving a response,
there is a PM message thread taking up too much time.
o Threads and priorities relative to Communications applications
Communications Manager thread priorities are carefully selected to provide good performance
for applications using the default system priorities. Communications Manager
accomplishes this by adjusting the priorities of the user threads it runs
on. Hence, it is recommended that all users of the Communications Manager
use the default priorities for their threads.
It is important to keep the number of threads across all processes of a
Communications application to an absolute minimum. The opportunity for
overlapped processing is only there to the extent that there are different
physical devices. Even with multiple devices, it is still not necessary to
have a separate thread for each device. One thread with a MUXWAIT can
service many devices. If pathlengths are kept down, one thread may be all
that is needed for the total communications application, with a queue for
each type of incoming request (from the API and from adapters).
As an example of a Communications application's use of both threads and
priorities, consider a foreground application that does a call to send a
buffer of data, followed by some lengthy processing. It should expect to
overlap this lengthy processing time with the time it takes to send the data
across the communications line and have it processed by the partner on the
other end. If the application's priority is left at the default, then
Communications Manager will send the data buffer at a higher priority (since
many of the Communications Manager critical functions run at Fixed High
Class), thus giving the desired result.
The pathlengths of the main sending and receiving paths of an application
should also be as short as possible, both to minimize response time and to
maximize capacity for concurrent activity. Short pathlengths provide
another reason to minimize the number of threads in an application:
- The time to do thread switching, and the code to coordinate among
threads, will become more expensive than the other functions being
performed.
- While one might think separate threads are needed in order to be
responsive to incoming data, reduced pathlengths will improve the time it
takes to do other things and the code will be more available to respond
to the incoming data.
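Referring back to the guideline above on creating threads in groups, the
following fragment is a minimal sketch of a small pool of worker threads that
are created once and then parked with DosSuspendThread. It assumes the
three-argument 16-bit DosCreateThread form (function address, TID pointer,
top-of-stack address) and a large-model compile; error handling is omitted.

   #define INCL_DOSPROCESS
   #include <os2.h>

   #define POOL_SIZE  3
   #define STACK_SIZE 4096            /* allocate stacks in 4KB multiples */

   static BYTE abStack[POOL_SIZE][STACK_SIZE];
   static TID  atid[POOL_SIZE];

   VOID FAR WorkerThread(VOID)        /* 16-bit thread entry: no arguments */
   {
       for (;;)
       {
           /* ... wait for work and process it ... */
       }
   }

   /* Create a small pool of worker threads up front and park them. */
   VOID CreateWorkerPool(VOID)
   {
       USHORT i;

       for (i = 0; i < POOL_SIZE; i++)
       {
           DosCreateThread(WorkerThread,
                           &atid[i],
                           abStack[i] + STACK_SIZE);  /* top of the stack */
           DosSuspendThread(atid[i]);  /* parked until work arrives */
       }
   }

   /* Later, when work arrives for worker n:  DosResumeThread(atid[n]); */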
7.2.2.2 Priority Guidelines
OS/2 programs are initiated with a specific priority (either the system
default of 'Regular 0' or some user-assigned priority). Regular 0 priority is
best for most applications. Additionally, when PRIORITY=DYNAMIC is specified
in CONFIG.SYS, the system will vary the priority of Regular class programs
under certain circumstances. See "Regular Class" for more information on how
the OS/2 system varies (boosts) priorities.
Applications that need to take exception to using the default priority are
those with time critical requirements such as communications with external
devices or the need to interrupt a foreground task. Following are guidelines
on when to use the various priority classes and levels. OS/2 Extended Edition
follows these guidelines.
o Time Critical Class
If an application has time critical requirements, the work should be divided
among threads so that the minimum time is spent processing at the time
critical level. (A rule of thumb for the size of any of these threads is a
path length of less than 20,000 instructions.) This is important to
maintain good interactive responsiveness to the user and enable
communications and other time critical applications to run concurrently.
The priority level within the time critical class is important to any
application that needs to share the system with other applications. The
guidelines for level within the time critical class are set to maximize the
number of different applications that can successfully multitask in an OS/2
system. These priority level guidelines are:
  1.  Robotics/Real time process control      20-31
  2.  Communications                          10-19
  3.  Other                                    0-09
The philosophy behind these levels is:
1. OS/2 systems may be used on manufacturing lines to control equipment,
or in other real time process control applications where response time
is crucial to safe and proper operations. Therefore, the highest
priority levels are reserved for these applications.
2. In Communications, inability to get the processor may cause the loss of
data or communications sessions. Therefore this class of applications
is next highest.
3. Other threads may need to preempt the foreground task for the user, for
example Ctrl+Break. These should be set below the previous two types
of time critical events.
o Fixed High Class
The Fixed High class is equivalent to Regular class with a foreground boost.
However, within the Fixed High class there are no priority boosts. Levels 0
- 31 are supported by this class. This class is good for a Communications
service or some other service that runs on behalf of a foreground
application. Communications Manager makes use of this class, for example in
the File Transfer function.
o Regular Class
As previously stated, most applications do not need to take any explicit
action regarding priority. The default priority for an application is
Regular 0, and a characteristic of the Regular Class is that when
PRIORITY=DYNAMIC is specified in CONFIG.SYS, the OS/2 system will vary the
priority of programs (that is, give them a temporary priority boost) under
the following circumstances. The length of these boosts is defined by the
Minimum Timeslice parameter on the TIMESLICE command (see 7.2.1.4 Time
Slicing).
- I/O Boost
When a thread completes I/O, it is given a temporary boost in priority.
This gives an I/O intensive thread the chance of initiating another I/O
operation, thus ensuring that I/O overlaps with CPU activity as much as
possible. If the thread doesn't initiate any more I/O, it drops back down
to its original priority.
- Starvation Boost
If a certain amount of time expires (defined by the MAXWAIT command)
during which a Ready thread (as opposed to a Blocked thread) has not
received any CPU time (that is, it hasn't been dispatched), it is given a
starvation boost in priority.
- Yield Boost
When the OS/2 kernel code has to yield CPU time to a time critical
thread, the current thread is set to the yield state. When the time
critical thread completes, the thread that yielded will resume where it
left off.
Boosts are also given for the following situations, though these differ
from the previous in that a permanent priority change is made for as long
as the characterizing circumstance remains true. Once boosted, these
threads run for the length of a normal timeslice (see TIMESLICE command
in 7.2.1.4 Time Slicing).
- PM Message Boost
Threads with a PM message queue are given a priority boost in response to
an action in the user focus (foreground) task that relates to the PM
message queue.
- Keyboard Boost
Threads using the keyboard get a priority boost. This is generally
initiated by a keyboard action in the user focus task.
- Foreground Boost
When a program moves to the foreground (i.e. receives the user focus), it
is given a priority boost. This gives the application good interactive
responsiveness and makes it nondisruptive when it is in the background.
In cases where a user wants to explicitly assign priority levels within
the Regular class, the following guidelines apply:
  1.  Communications applications running independent      26-31
      of interactive user (i.e. foreground) tasks
  2.  Other                                                  0-25
The philosophy behind these levels is:
1. Communications should take priority over other background processing
to increase overlap with transmission and processing on the partner PC
or host. This gives the best system performance.
2. If an application has multiple threads it may be necessary to set them
to several different levels to optimize the order in which they run. A
range of priority levels is provided to allow this.
Priorities should be set to dynamic (with the PRIORITY=DYNAMIC command
in CONFIG.SYS). Following are the reasons:
- The system dynamically modifies the priorities of threads for the various
reasons described above. An OS/2 performance design point assumes that
these priority boosts are occurring.
- Communications Manager itself takes advantage of dynamic priorities in
order to improve system throughput.
o Idle Class
Any application running at the Idle priority class runs only when nothing
else in the system is using the CPU. Levels 0-31 are supported in this
class. This class never gets a priority boost.
7.2.2.3 Serialization Guidelines
o Although System semaphores are safer than RAM semaphores, they are not
optimal if used within a process due to the overhead of managing them.
o RAM and FSRAM semaphores are about the same speed. However, FSRAM
semaphores give the added protection against abends leaving things hanging.
o If a semaphore usually has no one waiting on it, then RAM semaphores are
much faster than System semaphores. However, when someone is waiting on a
semaphore, System semaphores are faster than the RAM semaphores.
o If events are being serialized across multiple processes (versus threads),
System semaphores are easier to manage than FSRAM semaphores.
o Semaphores intended for use solely among threads of the same process should
be private (i.e. opened for Exclusive use).
o Using DosSuspendThread and DosResumeThread to serialize threads within a
process is more efficient than using semaphores. These
APIs run faster and have less system overhead. However, they are a bit
more complex to use than semaphores.
7.2.2.4 Interprocess Communications Guidelines
The three major ways that processes can communicate are through shared memory,
pipes, and queues. These are not usually used for communications between
threads since global data within a process is normally a sufficient and fast
means of interthread communications.
o Shared Memory
Shared memory is the simplest and fastest way to transfer data. Shared
memory can be accomplished by a process allocating a shared segment.
Another process can then access the segment and exchange information.
Shared memory has no built-in synchronization, therefore activities must be
coordinated to preserve the data integrity (often through the use of
semaphores).
o Pipes
A common use of pipes is for redirecting data from standard input and output
devices. Pipes permit a process rather than the programmer to redirect data
going to another process. Pipes are slower than shared memory since the
information being passed must be copied into the pipe.
o Queues
Queues are the most versatile means of interprocess communication. Data can
be read in as FIFO, LIFO, or according to priority of the message. Queues
are normally used when one process reads data from several other processes.
Data is shared by passing an address rather than actual data, thus making
the queue method very fast. A queue is the method of choice when large
amounts of data must be transferred quickly between processes.
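As a concrete illustration of shared memory between two processes, the
following sketch assumes the 16-bit DosAllocShrSeg and DosGetShrSeg calls and a
hypothetical segment name; as noted above, real code would also serialize
access with a semaphore.

   #define INCL_DOSMEMMGR
   #include <os2.h>

   /* Producer process: create a named shared segment and store a value. */
   VOID Producer(VOID)
   {
       SEL     sel;
       PUSHORT pusCount;

       DosAllocShrSeg(1024,                      /* segment size in bytes  */
                      "\\SHAREMEM\\PARMDATA",    /* hypothetical name      */
                      &sel);
       pusCount  = (PUSHORT)MAKEP(sel, 0);       /* selector:offset -> ptr */
       *pusCount = 42;                           /* visible to other processes */
   }

   /* Consumer process: gain access to the same segment by name. */
   VOID Consumer(VOID)
   {
       SEL     sel;
       PUSHORT pusCount;

       DosGetShrSeg("\\SHAREMEM\\PARMDATA", &sel);
       pusCount = (PUSHORT)MAKEP(sel, 0);
       /* ... read *pusCount, serialized with a semaphore in real code ... */
   }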
7.3 OS/2 Memory Management
ΓòÉΓòÉΓòÉ 10.3. 7.3 OS/2 Memory Management ΓòÉΓòÉΓòÉ
7.3.1 Memory Management Concepts
7.3.2 Memory Management Guidelines
ΓòÉΓòÉΓòÉ 10.3.1. 7.3.1 Memory Management Concepts ΓòÉΓòÉΓòÉ
The standard unit of memory allocated by all OS/2 1.x versions is called a
segment (described below). Memory management is the OS/2 function that
allocates and releases segments as required by concurrent applications
contending for physical and virtual memory.
7.3.1.1 Introduction to Segments
A segment can be up to 64KB in size, although 60KB is the recommended maximum
size (see 7.3.2 Memory Management Guidelines). Segment descriptors contain the
attributes and location of the physical memory of each segment, and are stored
in descriptor tables maintained by the OS/2 system. Selectors are used as
indices (or handles) into these tables. Each OS/2 process has its own Local
Descriptor Table (LDT) containing references to all segments associated with a
process. A Global Descriptor Table (GDT) contains references to segments which
are available to all processes in the system.
At program load time, the OS/2 loader creates an LDT for the program and makes
an entry into the table for each segment defined in the program. If a segment
entered into the LDT is defined as Preload, that segment is loaded into memory
at program load time; if a segment is defined as Load On Call, the descriptor
for that segment is marked 'not present,' and the segment is not loaded. When,
or if, the segment is eventually referenced, the 'not present' indication
causes the OS/2 system to load the absent segment from disk.
Whenever a segment is entered into an LDT, the OS/2 system must reserve memory
for the segment. If a situation arises where there is insufficient physical
memory to satisfy a segment load request, the OS/2 system first attempts to
move segments to combine fragmented free memory areas. If this compaction
fails to recover enough memory to load the new segment, the OS/2 system then
employs a least recently used algorithm to determine which currently loaded
segments must be swapped to disk. It examines an internal time stamp value for
each segment in the system to determine which has been referenced the earliest,
and these segments are then moved to the system swap file. The descriptors
associated with the swapped segments are marked 'not present,' and another
attempt is made to load the new segment. If swapping fails to recover enough
free memory for the new segment to be loaded, compaction is again tried.
7.3.1.2 Creating Segments: Compiling and Linking
When a C source file is compiled into a .OBJ file (or module) the compiler, by
default, places the program instructions for all C functions into one or more
(depending on the memory model) text segments. (This is the terminology used
in the compiler reference manual and refers only to a block of code, not an
OS/2 segment.) Then the OS/2 linker, by default, tries to pack all text
segments from all .OBJ files into a single OS/2 segment. The /PACKCODE option
can be used to force the linker to pack text segments only up to a specified
number of bytes before using another OS/2 segment, thus allowing the programmer
to set an upper bound on the size of segments created.
In order to follow the segment grouping guidelines in section 7.3.2.1 Use of
Segments, the programmer must force the OS/2 linker to group specific compiled
text segments together and to place each group within its own OS/2 segment.
This can be done in several ways; two are shown below:
1. Use the /NT compiler option to explicitly name all text segments and give
identical names to the text segments which are to be grouped into the same
segments. Then, link using either the /NOPACKCODE linker option or the
SEGMENTS parameter in the linker DEF file; the linker will group all text
segments of the same name together and put each text segment into its own
OS/2 segment. This is the recommended method.
2. Examine the sizes of all text segments, carefully order their input to the
linker, and use the /PACKCODE option to try to force adjacent functions
into common OS/2 segments. This is an iterative process that will
probably be unsuitable for large applications and which may require
creation of dummy functions to force the segment breaks. Not recommended
for large applications.
Figure 7-1. "Segment packing example MAKE file" shows a MAKE file for program
'NPP', where the goal is to place functions in the files NPPINPUT.C and
NPPDIAL.C into a common OS/2 segment. To do this, both files are compiled
with the /NT option and identical text segment names (INPUT_TXT) are assigned
to the compiled code for these files.
The compiled code is then linked with the /NOPACKCODE (abbreviated /NOP)
switch as shown in Figure 7-2. "Segment packing example Link file".
The resulting MAP file in Figure 7-3. "Segment packing example MAP file
(partial)" shows that the code for NPP (text segment MAIN_TEXT) has indeed
been placed in a single OS/2 segment 0001, the code for NPPCOMM (text segment
COMM_TEXT) has been placed in segment 0002, the code for both NPPINPUT and
NPPDIAL (text segment INPUT_TEXT) has been placed in a common segment 0003,
and the code for RPTERR (text segment _TEXT) has been placed in segment 0004.
(For help in interpreting linker MAP file output, refer to the manual C/2
Compile, Link and Run, Version 1.1.)
7.3.2 Memory Management Guidelines
ΓòÉΓòÉΓòÉ 10.3.2. 7.3.2 Memory Management Guidelines ΓòÉΓòÉΓòÉ
When writing 16-bit OS/2 applications, the 32-bit environment should be kept in mind. The
primary guideline is:
o Allocate memory in multiples of 4KB.
This is because the unit of allocation in the 32-bit OS/2 system is a 4KB
page.
Another general guideline that applies to both 16 and 32-bit systems is to
limit your segment size to 60KB. The reason for this is twofold:
o The maximum sized I/O unit in the OS/2 system is 127 sectors (or 63.5KB).
This means that a 64KB segment requires two reads from or writes to disk.
o Rather than running a segment up to 63.5KB in size, 60KB is recommended to
keep the segment size a multiple of 4KB.
Following is a summary of the main guidelines covered in this section on 7.3.2
Memory Management Guidelines.
o Segment Management Considerations:
There is no hard figure on the number of segments an application should
have. The general guideline is to have the fewest number of segments
possible, yet keep the working set code separate from non-working set code.
More specifically:
- It is better to have a few large segments rather than a large number of
small segments, since the performance cost for compaction and swapping of
small segments is about the same as for large ones.
- Group mainline code into one segment, or as few segments as possible,
including functions frequently called by the main program.
- Group infrequently used code (such as error processing, initialization,
etc.) together in a different segment from the mainline code segment.
This gives a program a smaller working set.
- Place functions that call each other frequently in the same segment (Near
Calls). Intersegment (Far) calls can be expensive.
- Segments should be Load On Call rather than Preload.
- Place data and code that are used together in time, together in memory.
o Memory Allocation and Deallocation Considerations:
- Suballocate memory within a segment to reduce the number of segments.
- Use discardable segments, which get written over rather than saved to
disk when no longer needed.
- Free allocated memory when no longer needed. This is quicker than
automatic swapping and compaction.
- Free thread stack space no longer needed. It is not automatically freed
upon thread termination.
- Each program referencing a .DLL module should free it when it is no
longer needed, otherwise it may occupy memory unnecessarily.
- Don't allocate more memory than you need.
- Program your application to fail gracefully if the memory it's requesting
is not available.
7.3.2.1 Use of Segments
Grouping of Functions
Performance of an application can be greatly affected by the organization
chosen for code and data within program segments. Careful selection and
segregation of code and data segments can, in some cases, result in an order
of magnitude improvement in performance over more laissez-faire organization
strategies. Although there are no rigid guidelines for placement of code and
data within program segments, there are some basic guidelines which can be
generally agreed upon.
In order to follow these segment grouping guidelines, the programmer must
force the OS/2 linker to group specific compiled text segments together and to
place each group within its own OS/2 segment, as described in section 7.3.1.2
Creating Segments: Compiling and Linking.
o All mainline program code should be grouped into one segment. This includes
the program main procedure and any functions frequently referenced by the
main procedure. This segment can be large, approaching the recommended 60KB
limit, as it is expected to be used frequently and will rarely, if ever, be
swapped out.
o Error processing code should be placed in a separate segment, as should
application initialization code. If these pieces of code are small, they
might be combined into a single segment. Other infrequently used code
should also be combined into a single segment. Placing rarely used code
into separate segments provides the OS/2 system some obvious candidates for
swapping should it become necessary.
o Hardware dependent code should be put into individual segments. Only the
code for one particular hardware device should be contained in any one
segment unless it is known that a particular group of devices is commonly
used together. The idea is to avoid having the OS/2 system load unnecessary
device support code within a segment when loading or swapping.
o Functions which call each other frequently should be placed within the same
code segment in order to minimize intersegment calls. These can be
expensive in CPU time, since if both the segments are not in memory at the
same time, swapping is required.
o The total number of code and data segments should not be large. It is
generally preferable to have a few large segments as opposed to many small
segments. Use of small segments can cause a lot of work for the OS/2 memory
compaction and swapping functions, as swapping small segments out of memory
requires almost as much resources as swapping large segments, but releases
less memory.
o Place code used together in time, together in memory. Some of this will
involve understanding the working set (or sets) of your application. For
example, if there is just one 60KB working set, one 60KB segment should be
created. On the other hand, if there are three working sets of 20KB each,
create three 20KB segments.
o Some suggested compiler options are:
- /Alfu: for Long Calls, Far Data Pointers, Stack and Data Segments
Separate (required for PM calls)
- /Oalt: for Relax Alias Checking, Perform Loop Optimization, Optimize for
speed
o The more segments that are in a particular process, the longer it takes to
allocate each additional segment. It takes approximately 100 times longer
to allocate the 2000th segment as it does the first.
Segment Loading and Linking
As a general rule, all segments, whether data or code segments, should be
specified as Load On Call and not Preload. (Note that device drivers are
always preloaded, so should be specified as such.) Segments should be loaded
by the system only when actually referenced and not at program load time.
By default, the linker makes all code and data segments Load On Call, though
this setting may be overridden by use of the CODE, DATA, and/or SEGMENTS
statements within the linker module definition file (.DEF file).
If a particular DLL is used in most runs of the program, Load Time linking is
acceptable, even though it increases process creation time. If a DLL is used
only occasionally, demand loading (Load On Call and Run Time linking) should be
used.
Note that Run Time linking increases execution time at the DosLoadModule
request. Whether you use Load Time linking or demand loading will affect the
size of your 'working set.' Judicious use of Run Time linking can dramatically
reduce the working set.
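The following fragment sketches demand loading (Run Time linking) of a rarely
used DLL, assuming the 16-bit DosLoadModule, DosGetProcAddr and DosFreeModule
calls; the module name RAREFUNC and entry point RareEntry are hypothetical.

   #define INCL_DOSMODULEMGR
   #include <os2.h>

   VOID CallRarelyUsedFunction(VOID)
   {
       UCHAR   szFail[128];          /* receives a failing module name     */
       HMODULE hmod;
       PFN     pfn;

       /* Load the DLL only when this code path is actually taken. */
       if (DosLoadModule(szFail, sizeof(szFail), "RAREFUNC", &hmod) == 0)
       {
           if (DosGetProcAddr(hmod, "RareEntry", &pfn) == 0)
           {
               (*pfn)();             /* invoke the entry point             */
           }
           DosFreeModule(hmod);      /* free the module as soon as done    */
       }
   }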
7.3.2.2 Memory Allocation
o Use of Suballocation
Frequent allocating and freeing of segments and objects may result in poor
system performance. Allocation of many small segments (4KB or less) via
DosAllocSeg should especially be avoided, as it can cause extra work for the
OS/2 memory compaction and segment swapping functions. Therefore, when
possible, larger segments should be allocated (DosAllocSeg), followed by
suballocation of smaller blocks within that segment via the OS/2 DosSubSet
and DosSubAlloc functions.
When a segment is allocated, its full size is committed immediately rather
than being grown toward a maximum size as needed. Hence, a segment should not be
allocated any larger than is needed. If necessary, a segment can later be
re-sized via DosReallocSeg. Threshold management should be used in
re-sizing however - that is, only grow or shrink a segment when the memory
requirements reach certain threshold points. Segment allocation should be in
4KB multiples, for 32-bit portability.
Another benefit of suballocation out of a single segment is that it allows a
program to access all blocks without requiring a reload of the processor
segment register for each (which would be a time consuming operation).
Although suballocation has its benefits, one tradeoff is the lack of memory
protection between suballocated blocks.
o DosMemAvail Function
It is not recommended that this function be used in determining whether or
not there is enough memory available to perform your task. DosMemAvail
returns the size of the single largest free space in memory. However, since
the OS/2 program is a virtual memory system, this information is not very
useful. There may be plenty of memory available, though fragmented, that
just needs to be compacted. If there isn't enough memory available, the
OS/2 system can swap segments to disk. In this case, the gating factor is
the space available in the swapper file.
o Use of Discardable Segments
Segments allocated with DosAllocSeg or DosAllocHuge can be made discardable
by setting the appropriate flag parameter. Discardable segments can be
written over by the OS/2 system if the memory space occupied is needed, thus
enhancing performance. This is appropriate where the application only needs
to store data briefly, or when the data can be quickly regenerated.
There will be periods during use of a discardable segment when it is not
desirable for it to be overwritten. During these periods, the segment can
be locked (made swappable instead of discardable) by using the DosLockSeg
function. DosUnLockSeg will return the segment to being discardable. A
segment can be locked multiple times with the number of times maintained by
a lock counter. Hence, for a segment to return to discardable, it must be
unlocked the number of times it was locked. Note that DosAllocSeg and
DosReallocSeg increment this lock counter.
In Operating System/2 Version 1.3, a new capability was added to
DosUnLockSeg. If used on any type of segment that is currently unlocked,
the timestamp on the segment will be set to '0', making it the most likely
LRU candidate for reuse.
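To illustrate the suballocation guideline, the following is a minimal sketch
assuming the 16-bit DosAllocSeg, DosSubSet, DosSubAlloc and DosSubFree calls;
the segment size and the DosSubSet 'initialize' flag value are assumptions that
should be checked against your toolkit headers.

   #define INCL_DOSMEMMGR
   #include <os2.h>

   VOID SuballocExample(VOID)
   {
       SEL    sel;
       USHORT usOffset;

       /* One 16KB segment, suballocated instead of many tiny segments. */
       DosAllocSeg(16384, &sel, 0);          /* fsAlloc = 0: non-shared   */

       /* Initialize the segment for suballocation (flag value assumed). */
       DosSubSet(sel, 0x0001, 16384);

       /* Carve a 512-byte block out of the segment. */
       if (DosSubAlloc(sel, &usOffset, 512) == 0)
       {
           PVOID pBlock = MAKEP(sel, usOffset);
           /* ... use pBlock ... */
           DosSubFree(sel, usOffset, 512);
       }

       DosFreeSeg(sel);
   }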
7.3.2.3 Memory Deallocation (Clean up)
Memory allocated by an application should be freed as soon as the application
no longer has need for it. Freeing up unused memory reduces the need for the
OS/2 system to swap segments in a loaded system, and thus provides more useful
CPU time which can be allocated to running applications.
Allocated Segments and Blocks
Memory allocated via the DosAllocSeg, DosAllocHuge, and DosAllocShrSeg should
be freed when no longer needed by execution of DosFreeSeg. Memory blocks
suballocated within a segment via DosSubSet and DosSubAlloc need to be freed
with DosSubFree.
Shared Memory
The OS/2 system maintains a usage count for shared segments. Each time a
process accesses a segment via the DosGetShrSeg call or gives another process
access to a segment it owns via the DosGiveSeg call, the usage count for the
segment is incremented; each use of DosFreeSeg against the segment decrements
the count. A segment will actually be freed only when DosFreeSeg decrements
the segment usage count to zero.
Thread Stack Space
Memory allocated to serve as stack space for a created thread is not
automatically freed when the thread terminates. An application which
repeatedly allocates OS/2 segments as stack space for new threads but which
never frees those segments may eventually tie up a large number of selectors
and associated memory.
Care must be used when freeing stack space used by threads. A thread that is
terminating must inform some other thread, via a semaphore, that it is exiting
and will no longer need the allocated stack space, but the terminating thread
must also execute DosExit before the stack space is freed by the other thread.
If the clean-up thread frees the terminating thread's stack before DosExit is
executed, the terminating thread will still be using that stack and the
application will suffer a memory protection violation error.
The following procedure is one method of preventing a thread's stack memory
being freed before it can terminate:
1. A creator thread executes a DosAllocSeg to acquire stack memory for a new
thread and then creates the thread via DosCreateThread.
2. Some clean-up thread executes DosSemSetWait and waits for the semaphore to
be cleared by the new thread.
3. The newly created thread does whatever work it is supposed to do, and then
it terminates in the following manner:
a. Execute DosEnterCritSec to freeze the clean-up thread;
b. Execute DosSemClear to indicate to the clean-up thread that stack space
can be freed;
c. Execute DosExit, which also performs an implicit DosExitCritSec.
4. The now unblocked clean-up thread frees the terminated thread's stack
memory via DosFreeSeg.
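The four steps above might look like the following sketch. It assumes the
16-bit OS/2 1.x declarations from OS2.H and uses a RAM semaphore; the stack
size, the helper names, and the MAKEP usage for the top-of-stack pointer are
illustrative. The semaphore is set before the thread is created so that the
clear cannot be missed.

    #define INCL_DOSPROCESS
    #define INCL_DOSMEMMGR
    #define INCL_DOSSEMAPHORES
    #include <os2.h>

    #define STACK_SIZE 4096

    static ULONG ulSemStack = 0;        /* RAM semaphore guarding the stack */
    static SEL   selStack;              /* stack segment of the new thread  */

    VOID FAR WorkerThread(VOID)
    {
        /* ... do whatever work this thread is supposed to do ...           */
        DosEnterCritSec();              /* 3a: freeze the clean-up thread   */
        DosSemClear((HSEM)&ulSemStack); /* 3b: stack space may be freed     */
        DosExit(EXIT_THREAD, 0);        /* 3c: implicit DosExitCritSec      */
    }

    VOID CreateAndCleanUp(VOID)
    {
        TID    tid;
        USHORT rc;

        DosSemSet((HSEM)&ulSemStack);   /* so the clear cannot be missed    */
        rc = DosAllocSeg(STACK_SIZE, &selStack, SEG_NONSHARED); /* step 1   */
        rc = DosCreateThread(WorkerThread, &tid,
                             (PBYTE)MAKEP(selStack, STACK_SIZE));
        DosSemWait((HSEM)&ulSemStack, -1L);  /* step 2: wait for the clear  */
        rc = DosFreeSeg(selStack);           /* step 4: free the stack      */
    }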
Loaded DLLs
A program's links to a DLL module are discarded when the program terminates.
As a rule, however, DosFreeModule should also be coded into a program to free a
DLL as soon as the program is finished with it. Otherwise the module will
remain known to the system (subject to normal segment swapping) until all
programs accessing it either terminate or execute DosFreeModule. In other
words, if the OS/2 system doesn't know the module is no longer needed, it will
continue to swap it as usual, thus wasting system resources. It should be
noted that DosFreeModule will fail if the DLL being freed has an active exit
list.
An exception to always coding a DosFreeModule might be for modules that are
repeatedly referenced and released (as in a large application that has common
functions placed into separate modules). In this case, the program
referencing the module(s) could be changed to delay the freeing of modules
until they are no longer referenced. One method of doing this would be to
implement a timeout after which an unreferenced module could be freed.
Resource Segments
Resources allocated via the DosGetResource call must be freed via the
DosFreeSeg call. DosFreeSeg must be executed against a resource segment just
as many times as DosGetResource was executed within the process. If the
resource segment was defined as Load On Call, the segment will then be freed.
If the resource segment was defined as Preload, one must invoke DosFreeSeg N+1
times, where N is the number of DosGetResource calls that have been issued.
The additional call is required because Preload carries an implicit
DosGetResource, executed at application load time.
7.4 OS/2 Application Packaging
ΓòÉΓòÉΓòÉ 10.4. 7.4 OS/2 Application Packaging ΓòÉΓòÉΓòÉ
7.4.1 Packaging Concepts
7.4.2 Packaging Guidelines
ΓòÉΓòÉΓòÉ 10.4.1. 7.4.1 Packaging Concepts ΓòÉΓòÉΓòÉ
Packaging refers to the assembly of source code into an optimal arrangement, in
this case how the various individual elements are compiled and linked into the
final executable application (EXE) and its associated Dynalink libraries
(DLLs).
As an aid to the following discussions, Figure 7-4. "Overall Compile and
Link process" summarizes the overall flow of the application generation,
compile, and link process.
The rationale for packaging involves more than just obtaining optimal
performance from the software application. There may also be marketing,
maintenance, and support considerations. In the
marketing area, it may be necessary to separate certain functions because these
capabilities are being sold separately, even though from a performance
viewpoint the code may belong together. In the maintenance area, it may be
necessary to separate printer and plotter device drivers to minimize the number
of upgrade diskettes that have to be distributed.
7.4.1.1 Application Linking: Introduction
Before getting into the details of how to package DLLs for optimal performance
(how many functions, how many segments, etc.), it is beneficial to review the
basics of linking, and to distinguish between the Static and Dynamic options.
Methods for obtaining optimum organization of functions and segments within
DLLs then become more apparent. There are two basic methods of linking:
static and dynamic.
o With static linking, all of an application's functions are resolved during
the compile/link process. The final executable module is a complete,
independent, standalone unit. A main disadvantage of this technique is that
common functions are not shared during execution; instead, each application
has its own copy in memory.
o The concept of Dynamic Linking overcomes this problem by allowing common
functions to be shared by different programs. In Dynamic Linking, routines
accessed by an application are resolved either when the application is
loaded into memory (load time linking) or when the application is ready to
access the function during execution (run time linking). Do not confuse
these terms with Load On Call or Preload. The latter are types of loading,
which although related to linking, are not types of linking. LINKING refers
to the process of resolving references to a function; LOADING refers to
actual physical placement of the function into computer memory.
7.4.1.2 Static Linking (.EXEs)
An example of when to use Static Linking is when a program uses a function
frequently and the function is relatively small (so it has little RAM impact),
or when response time is critical. The drawback is that every other program
using the same function will also have its own copy of the object code in its
.EXE file, thus causing an overall increase in memory requirements.
Object files to be linked can be combined into a single library by use of the
C/2 Compiler LIB command. This will create a library (.LIB) file containing
the object modules, known as an object library. This is used to aid the
developer doing a static link. The developer only needs to specify the .LIB
object library as opposed to each .OBJ module used during the link step.
The above example, Figure 7-5. "Static linking with Object modules (.OBJ
files)", links sales.obj, inventory.obj, revenue.obj into one executable
module.
The next example, Figure 7-6. "Static linking with Object libraries to create
.LIB files", demonstrates how to combine these files into a library named
ACNT.LIB, using the LIB command.
Object libraries are used in static linking and are often confused with
the import libraries used in dynamic linking. Both types of libraries end with
the .LIB file extension. However, import libraries are created with the
IMPLIB command for dynamic link modules (.DLLs). They contain the information
needed to reference the DLL's entry points, but not the code itself. Both object
libraries and import libraries are specified as parameters of the LINK command
in exactly the same format. In fact, a combination of object libraries and
import libraries can be included in the LINK command.
7.4.1.3 Dynamic Linking (.DLLs)
With Dynamic Linking, called functions do NOT have to be included as part of
the executable file (EXE), but can instead be included in Dynamic Link
Libraries (DLLs). These contain code, data, and resource segments just like
executable files. Functions are loaded in memory once, and then can be shared
by many applications. DLLs and EXEs can be changed independently, thus
reducing maintenance costs. As mentioned previously, there are two types of
Dynamic Linking, load time and run time. With load time linking, calls to
functions contained in DLLs are resolved when the application is loaded into
memory. With run time linking, the actual link occurs during the
application's execution.
o Load time Linking
As the name implies, external links to functions are resolved when the
invoking application is loaded into memory (the application .EXE file
contains a table of all referenced DLLs). The question which often arises
is whether the entire DLL (code, data, and resource segments) is loaded into
memory when accessed or just the functions to be used. The answer depends on
how the DLL was linked. Header information of a referenced DLL is loaded
and contains a list of which segments inside the DLL are to be loaded with
the application. How was this list created? If the segment in which a
function resides (inside the DLL) was created with the Preload linker
option, then the routine is loaded into memory at the same time the invoking
application is loaded. If all the segments in the DLL are created with the
Preload option, then the entire DLL is loaded (a negative memory impact).
If the segment was created using the Load On Call linker option, then only
the module header information is loaded along with the application. From a
performance standpoint, developers should use the Load On Call linker option
in most situations. This loading method of DLLs also applies to run time
linking.
With load time linking, a user links a function contained in a DLL by
specifying the function(s) in the IMPORTS section of the .DEF file, or by
using import libraries. From the previous static link example, Figure 7-6.
"Static linking with Object libraries to create .LIB files", if the
functions SALES, INVENTORY, and REVENUE were stored in the Dynamic link
library ACNT.DLL, the linker (.DEF) file would contain the code fragment
shown in Figure 7-7. "Dynamic linking at load time", which is a simple
example of load time dynamic linking of functions.
o Run time Linking
With run time dynamic linking, the application (using the DosLoadModule
function) links to the DLL when the application is ready to use the
function. A big advantage of run time linking is that the application can
continue to execute even if the link to the routine cannot be established.
In load time linking, if a module is not found, the application is
terminated by the operating system. Figure 7-8. "Dynamic linking at Run
time" demonstrates an example of run time linking; a minimal code sketch
also follows this list.
Another advantage of the run time linking technique is that it allows an
application to handle National Language Support. It can also be used to
handle hardware differences. For example, code for various languages or
hardware could be placed in separate DLLs. The application could then
choose which DLL to load based on the language or hardware currently in use.
o Ordinals
Applications can be linked either with an import library that matches the
libraries that are going to be used at run time, or the module and entry
point names for the routines can be specified with IMPORTS statements in the
module DEF file. In either case the Linker builds the names of the
libraries and routines into tables in the EXE file header, along with
pointers to each reference to the libraries in the executable code.
Imported library routines can also be specified in a .DEF file by an Ordinal
instead of a name. The Ordinal is merely the index to the position of the
routine in the library's entry point table. Using this technique can speed
up dynamic linking, and reduce the size of the executable file.
A major disadvantage to the use of Ordinals is the maintenance problem,
particularly when dealing with commercial applications scattered across various
customer locations. Ordinals cannot be changed from one version of the
library to the next, that is, from one release of the application to the next.
If the application changes so that it no longer requires a particular
ordinal, 'stub' code must be written to artificially maintain the existence
of that ordinal definition.
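As a minimal run time linking sketch (using the ACNT module and SALES entry
point from the earlier example; the 16-bit OS2.H declarations and the assumed
entry point signature are illustrative):

    #define INCL_DOSMODULEMGR
    #include <os2.h>

    VOID CallSalesAtRunTime(VOID)
    {
        CHAR    szFail[128];                  /* name of a missing module   */
        HMODULE hmod;
        VOID    (FAR PASCAL *pfnSales)(VOID); /* assumed SALES signature    */

        if (DosLoadModule(szFail, sizeof(szFail), "ACNT", &hmod) != 0)
            return;                           /* link failed; keep running  */

        if (DosGetProcAddr(hmod, "SALES", (PPFN)&pfnSales) == 0)
            (*pfnSales)();                    /* call the linked routine    */

        DosFreeModule(hmod);                  /* free the DLL when finished */
    }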
7.4.1.4 Linking Summary
Following are some useful definitions:
o Linking is the process of resolving references to a function. This may
happen when executing the linker (static linking, resulting in a .EXE), or
at load or run time (dynamic linking, done with .DLLs).
o Loading involves actual placement of a function into physical memory.
The steps involved in Load and Run Time linking follow:
o If the application links .DLL routines when the .EXE is loaded (load time
linking), then:
- If the requested segment in the .DLL is Preload, the segment is loaded
and linked to the application
- If all the segments in the .DLL are Preload, then the entire .DLL is
loaded and linked.
- If the requested segment is Load On Call, then only the .DLL module
header is loaded and links are made using the header information.
o If the application waits to link .DLL routines until they're invoked (run
time linking), then:
- If the segment is Preload, the segment is loaded with the application,
but the link is not done until the function is invoked.
- If the segment is Load On Call, then the segment is both loaded and
linked when the function in the segment is invoked.
The following table compares features among static linking, dynamic load time
linking, and dynamic run time linking.
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 7-2. Feature comparison of linking methods Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéCAPABILITY ΓöéSTATIC ΓöéDYNAMICΓöéDYNAMIC Γöé
Γöé ΓöéLINKINGΓöéLOAD ΓöéRUN TIMEΓöé
Γöé Γöé ΓöéTIME ΓöéLINKING Γöé
Γöé Γöé ΓöéLINKINGΓöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéUses Object files (compiled source) Γöé X Γöé X Γöé X Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéRoutines (contained in object files) Γöé X Γöé Γöé Γöé
Γöéare linked prior to load and execution Γöé Γöé Γöé Γöé
Γöétime, and are included as part of the Γöé Γöé Γöé Γöé
Γöéapplication executable file (.EXE) Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéEach program has its own copy of Γöé X Γöé Γöé Γöé
Γöéfunctions linked resident in memory at Γöé Γöé Γöé Γöé
Γöéall times the program is resident Γöé Γöé Γöé Γöé
Γöé(a potential memory impact). Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéLinking is performed at execution/load Γöé Γöé X Γöé Γöé
Γöétime for inclusion of DLLs. Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéLinking is performed during run time Γöé Γöé Γöé X Γöé
Γöéfor inclusion of DLLs as needed by the Γöé Γöé Γöé Γöé
Γöéprogram. Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéWhether external DLL routines accessed Γöé Γöé X Γöé X Γöé
Γöéby an application are loaded at appli- Γöé Γöé Γöé Γöé
Γöécation invocation is dependent upon Γöé Γöé Γöé Γöé
Γöétheir DLL segment being marked Load Γöé Γöé Γöé Γöé
ΓöéOn Call or Preload (a linker option). Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéOnly one copy of a DLL is resident in Γöé Γöé X Γöé X Γöé
Γöémemory for use by multiple programs Γöé Γöé Γöé Γöé
Γöé(solves potential memory problem). Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéLinking is a manual step required priorΓöé X Γöé Γöé Γöé
Γöéto execution each time a function or Γöé Γöé Γöé Γöé
Γöélibrary used by the program changes. Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéThe program does not have to be Γöé Γöé X Γöé X Γöé
Γöére-linked prior to execution if DLLs Γöé Γöé Γöé
Γöéare changed. Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéThe source program must be able to Γöé Γöé X Γöé Γöé
Γöéresolve all references at load time Γöé Γöé Γöé Γöé
Γöé(must know name of DLLs being used, Γöé Γöé Γöé Γöé
Γöéversus being passed a DLL name during Γöé Γöé Γöé Γöé
Γöérun time and linking during run time). Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéThe source program does not resolve Γöé Γöé Γöé X Γöé
Γöéreferences to DLLs until DLL is needed Γöé Γöé Γöé Γöé
Γöéduring run time. DLL names can be Γöé Γöé Γöé Γöé
Γöépassed by other programs during run Γöé Γöé Γöé Γöé
Γöétime and run time linking is performed.Γöé Γöé Γöé Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
7.4.2 Packaging Guidelines
ΓòÉΓòÉΓòÉ 10.4.2. 7.4.2 Packaging Guidelines ΓòÉΓòÉΓòÉ
Following are suggestions on how to package functions into DLLs. After dividing
the application into a group of DLLs, the next step is to optimize the
placement of functions/routines into segments within the DLL (to reduce
intersegment memory calls). For the process of optimal grouping of functions
into segments, refer to Grouping of Functions.
o It is more desirable to have fewer DLLs and larger segments than many DLLs
and smaller segments. The general rule of thumb is "the fewer DLLs and
larger memory segments the better." But as with any rule, balance is
needed. The extremes would be at one end, to have one DLL for every API in
the application, and on the other end the whole application in a single 8MB
DLL! An application slowing down, or its RAM usage growing because of the
packaging, may be a sign that this balance has been lost. If a DLL gets too
big and there are many segments, there is also the problem of the overhead
created at load time - a 100-byte header is created for each segment even if
the segment isn't loaded.
- It is a good idea to place frequently executed code in as few segments as
possible and group within one DLL.
- Inter-DLL function calls have a high overhead on their initial invocation
(e.g. table look-up, disk read), unless both function segments
already reside in memory. After the first call, their cost to the
system is equivalent to an intersegment call.
o Group external APIs into their own DLL.
o Hardware-dependent and device-dependent code should also be combined into its
own DLL.
o Rarely executed code segments (e.g. error code) may be grouped into their
own DLL.
o Separate .DLL and .EXE code allows updating and recompilation of either
separately.
o Use a common .DLL for shared functions. Only one copy of the .DLL code is
loaded into memory and referenced by all programs using it.
o Use run time, rather than load time, linking of a .DLL.
o Avoid use of Ordinals - they can introduce maintenance difficulties.
o Grouping DLLs by application function can be a good idea from a marketing
and maintenance point of view, but pay close attention to the amount of
intersegment calls being done. From a performance point of view, it is best
to limit intersegment calls as much as possible.
o Suggested link options are:
- /ALIGN:16
- /PACKDATA
- /EXEPACK
- /FARCALLTRANSLATION (/F when linking). Near calls should be used to access
private subroutines, and Far to Near Call Optimization should be used for
externally available entry points.
o LIBPATH Considerations
- Put the path to the most frequently used .DLLs at the beginning of the
LIBPATH statement, thereby making the directory search more efficient. (Note
that this means that the OS/2 DLL subdirectory should be placed at the
end, since these .DLLs are only loaded at IPL.)
It may be helpful to put "." as the very first directory in the LIBPATH
statement, indicating that the current directory be searched first. If
programs are run from their own subdirectory, these directories need not
be listed in LIBPATH.
- Include subdirectories in the LIBPATH statement only if they contain
.DLLs or Device Drivers.
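As an illustration of these LIBPATH points (the application directory name is
hypothetical; C:\OS2\DLL is assumed to be the system DLL subdirectory on this
drive), a CONFIG.SYS statement might read:

    LIBPATH=.;C:\ACNT\DLL;C:\OS2\DLL

Here the current directory is searched first, the most frequently used
application DLLs next, and the OS/2 DLL subdirectory last, since those .DLLs
are only loaded at IPL.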
7.5 Ported DOS Applications
ΓòÉΓòÉΓòÉ 10.5. 7.5 Ported DOS Applications ΓòÉΓòÉΓòÉ
o DOS Single-threading versus OS/2 Multitasking
DOS does not provide multitasking functions. For this reason, most DOS
applications are single-threaded. DOS applications can multitask, but this
capability must be implemented by the application itself. To do this, the
application takes over the system timer and shares time slices among its own
program functions.
The OS/2 system, on the other hand, provides multitasking functions for you.
By moving the CPU dispatching functions into the OS/2 kernel, other resource
management functions can also be provided by the system. When a task switch
is made, the OS/2 system can associate the correct resources with the new
task and prevent processes from using resources owned by other processes.
If your DOS application emulates some of the multitasking functions provided
by the OS/2 system, the application can be simplified through the use of the
OS/2 multitasking functions. For example, suppose "pseudo-threads" have been
implemented in your DOS application. If this application is moved to the
OS/2 system with the pseudo-threads in a single OS/2 thread, the OS/2 system
won't know about your threads, and won't be able to overlap I/O with CPU
activity. Hence if one of the pseudo-threads performs I/O, all the other
pseudo-threads will be blocked. If this design is converted to run in OS/2
threads, when one thread does I/O, control will return to the OS/2
dispatcher and another thread can be dispatched.
o DOS Polling versus OS/2 Semaphores
In the DOS environment, an application that is waiting for another event to
occur before it proceeds typically uses a polling loop. This is simply a
loop that checks a data area, at given time intervals, for a flag to be set.
This method of determining when an event occurs puts the responsibility on
the application and also uses quite a bit of CPU.
Polling loops can be very expensive since they are so CPU intensive.
Therefore in the OS/2 system, semaphores should be used to serialize events
rather than polling. Semaphores allow a thread or process to sit idle until
the requested event occurs, at which time it is notified of the event
by the system. See Section 7.2.1.5 Serialization, and Section 7.2.2.3
Serialization Guidelines for more information on semaphores.
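A minimal sketch of this idea, assuming the 16-bit OS/2 1.x declarations from
OS2.H and using a RAM semaphore (the thread names and the data being produced
are hypothetical):

    #define INCL_DOSSEMAPHORES
    #include <os2.h>

    static ULONG ulSemDataReady = 0;        /* RAM semaphore                */

    VOID FAR ConsumerThread(VOID)
    {
        DosSemSet((HSEM)&ulSemDataReady);       /* mark the event pending   */
        /* ... start or signal the producer ...                             */
        DosSemWait((HSEM)&ulSemDataReady, -1L); /* sit idle, no polling     */
        /* ... the event has occurred; process the data ...                 */
    }

    VOID FAR ProducerThread(VOID)
    {
        /* ... produce the data the consumer is waiting for ...             */
        DosSemClear((HSEM)&ulSemDataReady);     /* system wakes the waiter  */
    }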
7.6 Presentation Manager Application Guidelines
ΓòÉΓòÉΓòÉ 10.6. 7.6 Presentation Manager Application Guidelines ΓòÉΓòÉΓòÉ
Following are some Presentation Manager suggestions:
o Build multi-segmented Presentation Manager applications. (This does not
apply to 32-bit systems.)
o Build multi-threaded Presentation Manager applications.
- Use one thread to process messages only
- Use other threads for application processing and GPI calls
- Never use a message processing thread for disk I/O
o Use one segment to process messages only. Use other segments for
application processing.
o Use the specific include files required by the application instead of the
general files that contain every definition.
o Use WinInvalidateRect to repaint a client.
o Do not use WinInvalidateRect in WM_PAINT processing.
o Use WinGetErrorInfo and WinFreeErrorInfo to retrieve error string text
created by the system.
o When creating private PM objects, store the address pointer in the object.
7.7 Reentrancy
ΓòÉΓòÉΓòÉ 10.7. 7.7 Reentrancy ΓòÉΓòÉΓòÉ
Reentrancy allows more than one instance of a program to run concurrently in
separate threads without having multiple copies of the code in the system.
This obviously provides a RAM benefit and should be used whenever possible.
For a program to be reentrant it must obey the following rules:
o The program must not modify itself.
o Any changeable data that is unique to a single execution instance of the
program must be stored in an area that is dynamically allocated by the
program and is not shared across threads.
o If writable data is to be shared by all instances of the program, the
program must serialize access to it through the use of OS/2 synchronization
facilities (such as semaphores).
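A minimal sketch of a routine that obeys these rules, assuming the 16-bit
OS/2 1.x declarations from OS2.H (the segment size, the shared counter, and
the helper name are illustrative):

    #define INCL_DOSMEMMGR
    #define INCL_DOSSEMAPHORES
    #include <os2.h>

    static ULONG ulSemShared   = 0;  /* RAM semaphore protecting shared data */
    static ULONG ulSharedCount = 0;  /* writable data shared by all instances */

    VOID FAR ReentrantRoutine(VOID)
    {
        SEL        selPrivate;
        USHORT FAR *pusWork;

        /* Changeable data unique to this execution instance is allocated   */
        /* dynamically, never kept in the code or in shared static storage. */
        DosAllocSeg(1024, &selPrivate, SEG_NONSHARED);
        pusWork = (USHORT FAR *)MAKEP(selPrivate, 0);
        pusWork[0] = 0;              /* ... instance-private work area ...   */

        /* Access to writable data shared by all instances is serialized.   */
        DosSemRequest((HSEM)&ulSemShared, -1L);
        ulSharedCount++;
        DosSemClear((HSEM)&ulSemShared);

        DosFreeSeg(selPrivate);      /* clean up the instance's own data     */
    }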
Chapter 8 DOS Compatibility Mode Hints
ΓòÉΓòÉΓòÉ 11. Chapter 8 DOS Compatibility Mode Hints ΓòÉΓòÉΓòÉ
DOS Compatibility Mode (also referred to as the "DOS box"), resides in low
memory - that is, memory below the 1MB line. The amount of memory available to
the DOS box is what is left over after other components that must be
below 1MB (such as device drivers and portions of the OS/2 kernel) are loaded.
Following is a list of suggestions that may give memory back to DOS
compatibility mode. However, before considering this, be sure that this
feature is really needed in your system. Though DOS compatibility memory is
swappable, it can only be replaced by other segments that are swappable (i.e.
no fixed, non-swappable segments can be loaded in this space). The reason for
this is that DOS must be reloaded into the same location when needed.
The following suggestions were tried on a PS/2 Mod 70-E61 running OS/2 EE 1.2
and LAN Requester 1.2 over Token Ring and 3270 coax to a host. (This means
that there were DEVICE statements in CONFIG.SYS for the 3270 connection and the
Token Ring adapter.) Given this configuration, CHKDSK reported 513K (525792
bytes) for the DOS box. Note that your "total DOS box size" may vary. PS/2
machines have ABIOS patches that are loaded into low memory, decreasing the
amount of memory available to the DOS box. The size of these ABIOS patches
differs from model to model. PC-AT machines do not have ABIOS patches.
1. Don't load device drivers that you are not using. For example, COM02.SYS
is often loaded even though it may never be used. See Table 8-1. "DOS
Compatibility Memory Worksheet" for the RAM requirements of various device
drivers.
2. To gain an extra 4K of DOS box memory, remove or REMark out the TRACE=OFF
statement in your CONFIG.SYS. Any TRACE statement will automatically
allocate a 4K RAS Trace buffer. TRACE was the default in OS/2 1.1. In
OS/2 1.2 and beyond, this line was removed from the default CONFIG.SYS.
If you migrated your system from OS/2 1.1, you may still have this line in
your CONFIG.SYS.
3. Part of the standard AUTOEXEC.BAT is the APPEND terminate-stay-resident
program. Most people don't need APPEND, although it is indirectly used by
a few DOS commands. (For example, the MORE command uses the APPEND
function.) If you don't need APPEND, REMark out or remove this line from
your AUTOEXEC.BAT and save 5K. If you later receive a SYS0318 message
("Message file xxxx cannot be found."), you should add APPEND back in.
4. Experimenting with different ordering of device drivers can sometimes gain
or lose a couple K of memory. Beware however that the ordering of some
interdependent device drivers cannot be altered. Specifically, the
following device driver ordering must not be changed:
o The POINTDD statement must appear before the MOUSE statement. In
addition to the MOUSE statement, there is also a unique device driver for
each type of mouse. This device dependent statement must be specified
before the MOUSE statement.
o The LANDD statement must appear before the device statement for the LAN
hardware being used (TRNETDD, PCNETDD, or ETHERDD). The NETBDD statement
must follow both of these two statements.
5. EGA.SYS provides a "DosBox EGA Register Interface", meaning that this
statement may provide a greater level of compatibility for EGA programs
which run in the DOS box. To see whether you need this statement, REMark
it out of CONFIG.SYS, bring up your application in the DOS box, hotkey out
of the DOS box and then re-enter it. If upon re-entry the information on
the screen is not correct, you will need to keep the EGA.SYS. If
everything still works correctly, you can REMark the statement out and add
2K to your DOS box.
6. For systems with a mouse, this tip may save 1K. Add the parameter QSIZE=3
to your mouse driver (MOUSE0X.SYS for OS/2 1.1, or MOUSE.SYS for OS/2 1.2
and 1.3). QSIZE defaults to 10 if it is not specified. Operationally,
there appears to be no difference between a QSIZE of 10 versus 3.
7. The CONFIG.SYS THREADS= statement controls how many execution threads an
OS/2 system can concurrently run. The OS/2 Standard Edition default is
64. The OS/2 Extended Edition default is 255. Every 12 or 13 threads
costs 1K of DOS box. It is recommended that you not reduce threads below
64 for OS/2 SE or below 128 for OS/2 EE. THREADS=128 is adequate for many
simpler OS/2 EE environments. Changing to THREADS=128 (from 255) will add
about 10K to your OS/2 EE DOS box size. However, when you run out of
threads, the message you receive will not be intuitive - it will be SYS0008,
"Not enough storage is available to process this command."
8. OS/2 EE (versions 1.2 and beyond) has a device driver called R0CSDD.SYS
(the 'OS/2 EE Common Services' device driver). In OS/2 EE 1.2 and 1.3,
this device driver is only required with the SDLC and Multi-protocol
communications adapters. REMarking out R0CSDD.SYS will save 0.5K.
9. OS/2 SE (versions 1.2 and beyond) has a device driver called DOS.SYS. This
driver allows Dual Boot and also allows DOS Programs to be started from
OS/2 menus. If you use neither of these features then remove DOS.SYS and
add a little more to the DOS box.
10. For LAN users, OS/2 EE 1.2 CSD 4064 or later provides a version of
NETBDD.SYS that will add 16K to the memory available to you in the DOS
box.
To estimate the amount of available DOS compatibility memory, add the size of
the required and optional supports. Subtract this amount from 640KB or the
RMSIZE parameter, whichever is less.
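For example, with no RMSIZE statement (so the 640KB figure applies) and
required plus optional supports totaling roughly 127KB, the estimate would be
640KB - 127KB, or about 513KB - consistent with the 513K CHKDSK figure
reported for the configuration described above. (The 127KB total is only an
illustration; use the worksheet values for your own configuration.)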
8.1 Device Drivers Used in an OS/2 Environment
ΓòÉΓòÉΓòÉ 11.1. 8.1 Device Drivers Used in an OS/2 Environment. ΓòÉΓòÉΓòÉ
Following are the names of all the device drivers used in an OS/2 environment.
Note that the storage allocated for device drivers is not swappable. For the
DOS box to have access to a device driver it must be stored below the 1MB line.
o The following are system device drivers for an AT:
BASEDD01 Handles the system clock, the keyboard, output to the
printer, and screen output.
COM01 Handles "base services" ASYNC.
DISK01 Handles both internal diskettes and hardfiles.
o The following are system device drivers for a PS/2:
BASEDD02 Handles the system clock, the keyboard, output to the
printer, and screen output.
COM02 Handles "base services" ASYNC.
DISK02 Handles both internal diskettes and hardfiles.
ABIOS Provides ABIOS patches.
o The following are system device drivers which are not machine sensitive:
ANSI Provides "extended screen output facilities" for the
DOS box.
COUNTRY Specified on the CONFIG.SYS 'COUNTRY' command, and
contains country-dependent information used by the OS/2
system.
DOS Provides dual boot support and capability to start DOS
programs from OS/2 menus.
EGA Provides "DosBox EGA Register Interface"
EXTDSKDD Driver for External 3.5" disk drive
MOUSE Provides device-independent mouse support. Note that
in addition to this device driver, there is also a
device-dependent mouse driver that is specified, the
specific name of which depends on the physical mouse
installed on the system.
POINTDD Pointer driver for PM
PMDD Required PM device driver
VDISK RAM DISK driver
o The User Device Drivers for OS/2 EE and LS are:
ASYNCDDA Handles "Extended Services" ASYNC for an AT.
ASYNCDDB Handles "Extended Services" ASYNC for a PS/2.
DFTDD Provides the 3270 coaxial communications support.
ETHERDD Provides ETHERAND support.
ICARICIO Provides X.25 support.
LANDD Provides base LAN support.
NETBDD Provides NetBIOS support.
NETWKSTA Provides support for the LAN Server and Requester
redirector.
RDRHELP Provides support for the LAN Server and Requester
redirector.
PCNETDD Provides PC Network support.
R0CSDD The 'OS/2 EE Common Services' device driver, providing
support for SDLC and Multi-protocol communications
adapters.
SDLCDD Provides SDLC support.
T1P1NDD Provides 5250 Twinaxial support.
TRNETDD Provides Token Ring Network support.
ΓòÉΓòÉΓòÉ 12. GLOSSARY ΓòÉΓòÉΓòÉ
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
ΓòÉΓòÉΓòÉ 12.1. A ΓòÉΓòÉΓòÉ
access control
access control profile
access mode
access path
access plan
access priority
access procedure
ACDI port
active monitor
adapter address
adapter number
Adapter Support Program
additional server
address
Adjacent Link Station (ALS)
administrative authority
administrator
Advanced Program-to-Program Communications (APPC)
alert
alert focal point
alias
allocate
application program
application program interface (API) trace
application program interface (API)
application-embedded SQL
Asynchronous Communications Device Interface (ACDI)
attach
attach manager
attach queue
auto-answer
auto-start
autocall
autodial
automatic bind
ΓòÉΓòÉΓòÉ 12.2. B ΓòÉΓòÉΓòÉ
base band local area network (LAN)
base operating system
Basic Input/Output System
basic transmission unit (BTU)
baud
beacon
beaconing
binary synchronous communication (BSC)
bind
bind file
binding
BIOS
block
bridge
bridge number
broadband local area network (LAN)
broadcast
broadcast topology
ΓòÉΓòÉΓòÉ 12.3. C ΓòÉΓòÉΓòÉ
C & SM
call accepted packet
call connected packet
call request packet
callable programming interface (CPI)
called address
calling address
candidate key
carrier
cascading
CCITT (Comite Consultatif International Telegraphique et Telephonique)
character string delimiter
child process
circuit switching
class of service
clause
clear to send (CTS)
clipboard
code page
column data type
column delimiter
column function
COM
commit
Common Services API
Common user access (CUA)
Communication and System Management (C & SM)
Communications Manager
concurrency control
concurrent sort
CONFIG.SYS
configuration file
configure
contention winner
contention-loser polarity
contention-winner polarity
continuous carrier
control privilege
conversation
conversation security
conversation security profile
conversation state
conversation type
conversation verb
Corrective Service Diskette (CSD)
correlated reference
correlated sub-query
CRC error detection
cursor stability
custom build
custom build diskette
custom install diskette
Custom Install mode
Customer Information Control System (CICS)
ΓòÉΓòÉΓòÉ 12.4. D ΓòÉΓòÉΓòÉ
data carrier detect (DCD)
data circuit-terminating equipment (DCE)
Data Definition Language (DDL)
data interchange format (DIF)
data link control (DLC)
data link layer
data set ready (DSR)
data terminal equipment (DTE)
data terminal ready (DTR)
database
database administrator (DBADM)
database directory
database management system (DBMS)
Database Manager
datagram
Datastream Compatibility (DSC)
deallocate
dedicated server
dependent logical unit (LU)
destination address field (DAF)
device driver
diagnostic tool
direct privilege
disk operating system (DOS)
Distributed Function Terminal (DFT)
domain
domain control database
domain controller
domain definition
DOS
DOS mode
double-byte character set (DBCS)
drive
dump services
duplex
dynamic link library (DLL)
dynamic link routine (DLR)
dynamic linking
dynamic priority
dynamic SQL language (DSL)
ΓòÉΓòÉΓòÉ 12.5. E ΓòÉΓòÉΓòÉ
embedded SQL
emulation
Emulator High-Level Language Application Programming Interface (EHLLAPI)
end user
Enhanced Connectivity Facility (ECF)
error log
exchange station ID (XID)
export
extended binary-coded decimal interchange code (EBCDIC)
external resource
external server
ΓòÉΓòÉΓòÉ 12.6. F ΓòÉΓòÉΓòÉ
flow control
foreign key
form
format identification (FID)
frame
full-select
ΓòÉΓòÉΓòÉ 12.7. G ΓòÉΓòÉΓòÉ
gateway
graphics
group access list
group SAP
ΓòÉΓòÉΓòÉ 12.8. H ΓòÉΓòÉΓòÉ
half-duplex
half-session
heap
help
high-level language application programming interface (HLLAPI)
home fileset
host computer
ΓòÉΓòÉΓòÉ 12.9. I ΓòÉΓòÉΓòÉ
IBM AS/400 PC Support
IBM Operating System/2 Extended Edition
IBM Operating System/2 LAN Server
IBM PC Network
IBM Token-Ring Network
IEEE 802.2 interface
image
image definition
import
IND$FILE
independent LU
indirect directory
indirect privilege
initial program load (IPL)
input-output privilege level (IOPL)
interactive processing
intermediate node
International Organization for Standardization (ISO)
International Telegraph and Telephone Consultative Committee (CCITT)
interprocess communication
interrupt
ΓòÉΓòÉΓòÉ 12.10. J ΓòÉΓòÉΓòÉ
join
join condition
ΓòÉΓòÉΓòÉ 12.11. K ΓòÉΓòÉΓòÉ
kernel
keyboard mapping
keyboard remapping
kilobyte (KB)
ΓòÉΓòÉΓòÉ 12.12. L ΓòÉΓòÉΓòÉ
LAN adapter
LAN Requester
LAN Server
leased line
link
link level
link protocol
Link Service Access Point (LSAP)
link station
local area network (LAN)
local database
local initiation
local logical unit
local logical unit profile
local session identification
local station address
local transaction program
local workstation
lock
lock escalation
locking
log
log record
logical connector
logical device
logical link control (LLC)
logical unit (LU)
logical unit profile
logical unit 6.2 (LU 6.2)
lookup table
LU 0
LU 1
LU 2
LU 3
LU 6.2
LU-LU
ΓòÉΓòÉΓòÉ 12.13. M ΓòÉΓòÉΓòÉ
MAC service data unit (MSDU)
machine ID
Main Frame Interactive (MFI) presentation space
mapped conversation
mapped conversation verb
master key
Media Access Control Service Access Point (MSAP)
medialess requester
medium access control (MAC)
medium access control (MAC) frame
medium access control (MAC) protocol
medium access port
message log
message queue
messaging name
Micro Channel
migration
mode name
model profile
modem
module
module definition file
mouse
MSAP
multipoint line
multistation access unit (MAU)
multitasking
ΓòÉΓòÉΓòÉ 12.14. N ΓòÉΓòÉΓòÉ
NAU
NCP
negotiable BIND
NetBIOS
netname
network
network address
network addressable unit (NAU)
network administrator
Network Control Program (NCP)
network management
network management vector transport (NMVT)
network name
network printer
Network Problem Determination Application (NPDA)
network user address (NUA)
nickname
node
node address
node directory
node type
non-delimited ASCII format
non-switched line
NPDA
NUA
null character
numbered frames
ΓòÉΓòÉΓòÉ 12.15. O ΓòÉΓòÉΓòÉ
OAF
object
object code
object file
object module
object name
object names menu
octet
odd parity
OIA
one-byte checksum error detection
Operating System/2 Extended Edition (OS/2)
Operating System/2 LAN Server
operational status
operator information area (OIA)
operator information line
optimization
origin address field (OAF)
originator
OS/2 Extended Edition
OS/2 LAN Server
OS/2 screen group
output rows
ΓòÉΓòÉΓòÉ 12.16. P ΓòÉΓòÉΓòÉ
pacing
pacing character
pacing group
pacing interval
pacing response
pacing window
packet
packet level
packet switching
packet switching data network (PSDN)
packet window
packet-level data circuit-terminating equipment
page
parallel session
parent process
parent row
parent table
parity
parity bit
partner logical unit (LU)
partner transaction program
path
path control layer
path control network
path information unit (PIU)
PC Network
PC/IXF
PCLP
peer-to-peer
peripheral node
permanent virtual circuit (PVC)
personal computer local area network program (PCLP)
personal computer/integrated exchange format (PC/IXF)
physical level
physical unit (PU)
physical unit type
PIU
plan
plan name
point of consistency
point-to-point
point-to-point line
poll
polling
Post Telephone and Telegraph Administration (PTT)
precision attribute
pre-compilation
pre-compiler
predicate
Presentation Interface
Presentation Manager
presentation space
presentation space ID (PSID)
primary half-session
primary index
primary key
primary link station
primary logical unit (PLU)
print nickname
privilege
privilege level
procedure (PROC)
procedure language statements
Procedures Language 2/REXX
professional office system (PROFS)
profile
PROFS
prompted interface
prompted query
Prompted View
protocol converter
protocol handler
PSDN
PSID
PTT
PU
PUBLIC
PVC
ΓòÉΓòÉΓòÉ 12.17. Q ΓòÉΓòÉΓòÉ
Q-bit
QBE
QLLC
QMF
qualified logical link control (QLLC)
qualifier
qualifier-bit (Q-bit)
query
Query Management Facility (QMF)
Query Manager
Query Manager Callable Interface
query-by-example (QBE)
queued attach
ΓòÉΓòÉΓòÉ 12.18. R ΓòÉΓòÉΓòÉ
RDS
rebind
receive
receive pacing
Recommendation X.25
record
record format
recovery
recovery log
redirection
referential constraint
referential integrity
relational database
relative path name
Reliability, availability, and serviceability (RAS) programs
remote
remote controller
Remote Data Services (RDS)
remote database
remote device
remote equipment
remote initial program load (RIPL)
remote initiation
remote IPL
remote IPL requester
remote IPL server
remote transaction program
remote workstation
repeatable read
report
request
request header (RH)
request to send (RTS)
request unit (RU)
request/response header (RH)
requester
reset packet
response header (RH)
response unit (RU)
restart procedure
RESTRICT
results table
reverse charging
reverse charging acceptance
revoke
rollback
root table
rounding rule
routing
routing table
row
row lock
row pool
RTS
RU
RU chain
ΓòÉΓòÉΓòÉ 12.19. S ΓòÉΓòÉΓòÉ
SAA
SAP
scalar function
scale
scale attribute
screen group
SCS
SDLC
search
search condition
secondary half-session
secondary link station
secondary logical unit (SLU)
self-referencing constraint
self-referencing row
self-referencing table
send
send pacing
sequence
serial device
server
server alias
Server-Requester Programming Interface (SRPI)
service access point (SAP)
service transaction program (service TP)
session
session activation
session deactivation
session limit
session partner
session security
session-initiation request
session-termination request
SET NULL
shared resource
short name
short-session ID
shutdown
single session
SLU
SNA
SNA character string (SCS)
SNA controller
SNA gateway
SNA LU session type 6.2 protocol
SNA network
SNBU
soft checkpoint
source address
SQL
SQL communication area (SQLCA)
SQL descriptor area (SQLDA)
SQL escape character
SQL query
SQL statement
SQL string delimiter
SQLCA
SQLDA
SRPI
SRPI router
SSCP
SSCP-LU
SSCP-PU
standalone
static SQL
statistics
stop bits
Structured Query Language (SQL)
subarea
sub-query
sub-select
sub-table
sub-vector
summary function
SVC
switched line
switched network backup
switched virtual circuit (SVC)
sync point
synchronization level
synchronous
Synchronous Data Link Control (SDLC)
synchronous data transfer
synchronous transmission
SYSADM (system administrator)
system
system administrator
system database directory
system name
system services control point (SSCP)
system trace
Systems Application Architecture (SAA)
Systems Network Architecture (SNA)
ΓòÉΓòÉΓòÉ 12.20. T ΓòÉΓòÉΓòÉ
table
table fields
Task Manager
telecommunication facility
telecommunication line
terminal
TH
throughput class negotiation
token
token monitor
token-ring
Token-Ring Network
topology
TP
trace
trace buffer
trace services
transaction
transaction program (TP)
transaction service mode
transfer file
transfer request
transmission control layer
transmission frame
transmission group
transmission header (TH)
transmission service mode
transmission service mode profile
transmit
twinaxial data link control (TDLC)
twinaxial feature
ΓòÉΓòÉΓòÉ 12.21. U ΓòÉΓòÉΓòÉ
Unbind Session (UNBIND)
uncommitted read
unformatted system services (USS)
unique index
universal access authority
universal access control (UAC)
universal asynchronous receiver/transmitter (UART)
universal naming convention (UNC)
user ID
user profile
User Profile Management
user types
USS
ΓòÉΓòÉΓòÉ 12.22. V ΓòÉΓòÉΓòÉ
V.25
V.35
variable length string
variable pool
view
virtual call facility
virtual circuit
virtual machine (VM)
Virtual Machine/Conversation Monitoring System (VM/CMS)
Virtual Machine/System Product (VM/SP)
Virtual Telecommunications Access Method (VTAM)
VM
VM/CMS
VM/SP
VTAM
VT100
ΓòÉΓòÉΓòÉ 12.23. W ΓòÉΓòÉΓòÉ
WAN
wide area network (WAN)
window procedure
worksheet formats (WSF)
workstation
workstation address
workstation controller (WSC)
WSC
WSF
ΓòÉΓòÉΓòÉ 12.24. X ΓòÉΓòÉΓòÉ
X.21
X.21 bis
X.21 feature
X.25
X.25 feature
X.25 network
X.25 verb
X.25 verb request control block (XVRB)
X.32
XID
xmodem
XOFF
XON
XVRB
3101
3270 terminal emulation
5250 Work Station Feature
5250 WSF
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
(The original panel contains a block diagram. Reading from top to bottom, it
shows the layers involved in a READ: the Disk Buffers, the HPFS or FAT cache,
a SCSI cache - present for SCSI drives with cache on the drive adapter - and
the fixed disk.)
Figure 3-1. Relationship Between Cache and Buffers for a READ
Operation
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 3-1. Comparison between the FAT File System and the HPFSΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé CHARACTERISTIC Γöé FAT FILE SYSTEM Γöé HPFS Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéMaximum filename Γöé11 (8.3 format) charac- Γöé255 characters Γöé
Γöélength Γöéters Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéFile attributes ΓöéBit flags plus up to ΓöéBit flags plus up Γöé
Γöé Γöé64KB text or binary Γöéto 64KB text or Γöé
Γöé Γöé Γöébinary Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéMaximum path Γöé64 characters Γöé260 characters Γöé
Γöélength Γöé64 characters Γöé260 characters Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéAverage wasted Γöé1/2 cluster (1KB) Γöé1/2 sector Γöé
Γöéspace per file Γöé Γöé(256 bytes) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéAllocation infor-ΓöéCentralized in FAT file ΓöéLocated nearby eachΓöé
Γöémation for files Γöésystem on home track Γöéfile in its FNODE Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéFree disk space ΓöéCentralized in FAT file ΓöéLocated near free Γöé
Γöéinformation Γöésystem on home track Γöéspace in bit maps Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDirectory ΓöéUnsorted linear list, ΓöéSorted B-Tree Γöé
Γöéstructure Γöémust be searched exhaus-Γöé Γöé
Γöé Γöétively Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDirectory ΓöéRoot directory on home ΓöéLocated near seek Γöé
Γöélocation Γöétrack, others scattered Γöécenter of volume Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéRead-ahead ΓöéNone prior to DOS 4.0, ΓöéCaches reads in 2KBΓöé
Γöé Γöéprimitive read-ahead Γöéblocks Γöé
Γöé Γöéoptional on DOS 4.0 Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéCache replacementΓöéSimple LRU ΓöéModified LRU Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéWrite-behind ΓöéOne sector (CONFIG.SYS ΓöéOptional, can also Γöé
Γöé(lazy write) Γöé'BUFFER=' function) Γöébe turned off on a Γöé
Γöé Γöé Γöéper-file-handle Γöé
Γöé Γöé Γöébasis Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéCaching program ΓöéDISKCACHE (in ΓöéCACHE.EXE (with Γöé
Γöé ΓöéCONFIG.SYS) Γöéparameters speci- Γöé
Γöé Γöé Γöéfied on the 'IFS=' Γöé
Γöé Γöé Γöéand 'RUN=' commandsΓöé
Γöé Γöé Γöéin CONFIG.SYS) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéMaximum cache Γöé1.2: 7.2MB Γöé 2MB Γöé
Γöésize Γöé1.3: 14MB Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéCache threshold ΓöéVariable, up to 16KB ΓöéFixed at 2KB Γöé
Γöé Γöé(3.5KB default) Γöé Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Data and Computer Communications by William Stallings, 1985., page 372.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Data and Computer Communications by William Stallings, 1985., Page 376-377.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Data and Computer Communications by William Stallings, 1985., Page 377-378.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Data and Computer Communications by William Stallings, 1985., Page 378-379.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Data and Computer Communications by William Stallings, 1985., Page 379.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Computer Networks by Andrew S. Tanenbaum, 1981., Page 148-164.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
Γöé Verify Exit F1=Help Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Γöé
Γöé Communication Configuration Menu Γöé
Γöé Γöé
Γöé Γöé
Γöé Configuration file name ...............: TESTCFG Γöé
Γöé Configuration file status .............: Γöé
Γöé Verified Γöé
Γöé Γöé
Γöé Press F10 to go to the action bar or Γöé
Γöé select the type of profile you want to configure. Γöé
Γöé Γöé
Γöé 1. Workstation profile (and auto-start options)...Γöé
Γöé Γöé
Γöé 2. Asynchronous feature profiles Γöé
Γöé 3. 3270 feature profiles Γöé
Γöé 4. SNA feature profiles Γöé
Γöé 5. Server-Requester Programming Γöé
Γöé Interface (SRPI) profiles Γöé
Γöé 6. LAN feature profiles Γöé
Γöé 7. 5250 Work Station Feature profiles Γöé
Γöé 8. X.25 feature profiles Γöé
Γöé Γöé
Γöé 9. Configuration file utilities Γöé
Γöé Γöé
Γöé Γöé
Γöé Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Figure 4-2. Communications Manager Main Configuration Menu
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-3. PC Network/ETHERAND Adapter Γöé
Γöé Work Area Size in K Bytes. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 8 Γöé 16 Γöé 64 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Increase it up to 64KB until all the other parameters that affect total "Shared
RAM" are tuned. (These parameters are found in the formulas expressed in the
description of Receive Buffer Size.) Once the adapter work area reaches 64KB,
follow the same logic as for Token Ring, where "Shared RAM" is fixed by the
physical adapter.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-4. Transmit Buffer Sizes per Adapter in Bytes. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéADAPTER Γöé MIN Γöé DEF Γöé MAX Γöé
Γöé Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIBM Token Ring Network Adapter Γöé 96 Γöé 1048 Γöé 2048 Γöé
Γöé Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIBM Token Ring Network Adapter II - Γöé 96 Γöé 1048 Γöé 2048 Γöé
ΓöéFull Length Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIBM Token Ring Network Adapter II - Γöé 96 Γöé 1048 Γöé 4464 Γöé
ΓöéHalf Length Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIBM Token Ring Network Adapter/A Γöé 96 Γöé 1048 Γöé 2048 Γöé
Γöé Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIBM Token Ring Network 16/4 Adapter/A Γöé 96 Γöé 1048 Γöé 2048 Γöé
Γöé Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIBM Token Ring Network 16/4 Adapter @ Γöé 96 Γöé 1048 Γöé 4464 Γöé
Γöé4 Mbps Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIBM Token Ring Network 16/4 Adapter @ Γöé 96 Γöé 1048 Γöé 17960Γöé
Γöé16 Mbps Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéPC Network Γöé 96 Γöé 1048 Γöé 2048 Γöé
Γöé Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéETHERAND Γöé 96 Γöé 1496 Γöé 1496 Γöé
Γöé Γöé Γöé Γöé Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
For most adapters, set it to the maximum physically allowed value, or the
maximum available given the number of link stations defined. Exceptions exist
for the following adapters:
o For Token Ring Network Adapter, set it to 1048, the default.
o For Token Ring Network Adapter II - Half-Length, set it to 2176.
o For general purpose Token Ring 16/4, set it to 4224.
o For Token Ring 16/4 Adapters @ 16 Mbps that do large data transfers over
  only a few links concurrently, and where the target systems can handle this
  large frame size, set it to 8320.
Even these recommended values may be overridden by the results of the
calculations given in the description of Receive Buffer Size.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-5. Number of Transmit Buffers - Γöé
Γöé Token Ring Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 1 Γöé 2 Γöé 10 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
The value should be set to two, and never higher. Two transmit buffers are
enough to keep the adapter busy when handling multiple transmit requests
concurrently. Only consider going to 1 if there is not enough Shared RAM to
satisfy the Receive Buffer considerations that follow.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings - Token Ring & PC Network
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-6. Minimum Receive Buffers - Γöé
Γöé Token Ring and PC Network Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 2 Γöé 10 Γöé 151 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Settings - ETHERAND
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-7. Minimum Receive Buffers - Γöé
Γöé ETHERAND Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 8 Γöé 25 Γöé 151 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
The recommendation is deferred to the following section on Receive Buffer Size
since these values must be considered together. It may also involve resetting
the Transmit Buffer Size and Number of Transmit Buffers.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings - Token Ring & PC Network
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
Γöé Γöé
ΓöéTable 4-8. Receive Buffer Size - TokenΓöé
Γöé Ring and PC Network Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 96 Γöé 280 Γöé 2048 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Settings - ETHERAND
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-9. Receive Buffer Size - Γöé
Γöé ETHERAND Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 96 Γöé 256 Γöé 1696 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Following is an iterative procedure for reaching an effective configuration of
receive buffer size and minimum number of receive buffers for any adapter.
Note: For IBM Token Ring 16/4 adapters configured with 16KB pages, additional
overhead may need to be included to account for wasted Receive Buffer space at
page boundaries. (See the '**' in Figure 4-5. "Shared RAM on Token Ring
Adapters".)
If there are any problems with the values resulting from these calculations,
the Communications Manager configuration will not 'Verify.'
1. Set Receive Buffer to 512
o (RECVBUFF) = 512
2. Get "Shared RAM" for Adapter
o (TOTRAM)
3. Calculate Used RAM without receives
o Token Ring, (USEDRAM) = 1652
o PC Network and ETHERAND, (USEDRAM) = 744
4. Account for Memory used by SAPS
o (USEDRAM) = (USEDRAM)+(64 * # of SAPS)
5. Account for Memory used by Link Stations
o (USEDRAM) = (USEDRAM)+(144 * Link Stations)
6. Account for Memory used by Group SAPS
o (USEDRAM) = (USEDRAM) + ((14 + (2 * # Members of Group SAPS)) * Group
SAPS)
7. Account for Transmit Buffers
o Token Ring, (USEDRAM) = (USEDRAM) + (Transmit Buffer Size * # of
Transmit Buffers)
o PC Network and ETHERAND, (USEDRAM) = (USEDRAM)
8. Calculate Receive RAM available
o (RECVRAM) = (TOTRAM) - (USEDRAM)
9. Calculate number of Receive Buffers
o (RECVNUM) = INT( (RECVRAM)/((RECVBUFF)+8) )
10. Calculate effective Receive RAM
o (ERECVRAM) = (RECVNUM) * (RECVBUFF)
11. Calculate reasonable number of concurrent transmits (4)
In this step it is assumed that the system has enough receive space to
hold 4 full frames. If your number is other than 4, plug it into the
following formula -- it will still work.
o (TRANS) = (4 * Transmit buffer size)
12. Determine if there is enough Receive RAM
o If (ERECVRAM) >= (TRANS) Continue at step 14
13. Try to increase effective Receive RAM (ERECVRAM)
    o For Token Ring
      - If (Transmit Buffer Num) = 1, reduce Transmit Buffer Size, Continue
        at step 7
      - If (Transmit Buffer Num) > 1, decrement Transmit Buffer Num, Continue
        at step 7
    o For PC Network and ETHERAND
      - If (Adapter Work Area) = 64KB, reduce Transmit Buffer Size, Continue
        at step 7
      - If (Adapter Work Area) < 64KB, increase Shared RAM, Continue at step 2
14. Determine if there are enough Receive Buffers
    o If few Link Stations configured ( <= 10 )
      - If (RECVNUM) >= 20, Continue at step 15
      - If (RECVNUM) < 20, reduce Receive Buffer Size, Continue at step 9
    o If moderate number of Link Stations configured ( > 10 )
      - If (RECVNUM) >= 50, Continue at step 15
      - If (RECVNUM) >= (2 * Link Stations), Continue at step 15
      - Otherwise, reduce Receive Buffer Size, Continue at step 9
15. Done
    o Minimum receive buffers = (RECVNUM)
    o Receive buffer size = (RECVBUFF)+8
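As a worked illustration, the arithmetic of steps 1 through 12 can be
expressed as a short program. The following is a minimal sketch in Python
(this document itself supplies no code); the function name, parameter names,
and the sample configuration at the bottom are illustrative assumptions, and
the sizing loop of steps 13 through 15 is left as the manual iteration
described above.

  # Sketch of the Shared RAM sizing arithmetic from steps 1-12 above.
  # Constants and formulas are taken from the procedure; names are illustrative.
  def receive_buffer_estimate(adapter, totram, saps, link_stations,
                              group_saps=0, group_sap_members=0,
                              xmit_buf_size=1048, xmit_buf_num=2,
                              recvbuff=512, concurrent_transmits=4):
      """Return (recvnum, erecvram, enough_ram) for the given configuration."""
      # Step 3: base Used RAM without receive buffers
      usedram = 1652 if adapter == "token-ring" else 744   # PC Network/ETHERAND
      # Steps 4-6: SAPs, link stations, group SAPs
      usedram += 64 * saps
      usedram += 144 * link_stations
      usedram += (14 + 2 * group_sap_members) * group_saps
      # Step 7: transmit buffers come out of Shared RAM only on Token Ring
      if adapter == "token-ring":
          usedram += xmit_buf_size * xmit_buf_num
      # Steps 8-10: receive RAM, buffer count, effective receive RAM
      recvram = totram - usedram
      recvnum = recvram // (recvbuff + 8)
      erecvram = recvnum * recvbuff
      # Steps 11-12: is there room for the assumed number of in-flight frames?
      trans = concurrent_transmits * xmit_buf_size
      return recvnum, erecvram, erecvram >= trans

  # Example: a 16KB Shared RAM Token Ring adapter, 2 SAPs, 6 link stations
  print(receive_buffer_estimate("token-ring", 16 * 1024, saps=2, link_stations=6))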
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-10. Override Token Γöé
Γöé Release default Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé OPTION Γöé DEFAULT Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Yes Γöé No Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Use the default of 'No'. At 4 Mbps, the adapter uses normal token release, so
the early token release setting has negligible effect. At 16 Mbps, the adapter
uses early token release, and this improves network efficiency.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
Γöé Table 4-11. Group 1 Response Timer Γöé
Γöé Multiplier Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 (5) Γöé 15 Γöé 255 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
If 0 is specified, the value is set to 5. The effective value that the Group 1
Timer will have then is:
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-12. Group 1 Response Timer in Γöé
Γöé seconds (i.e. Response Γöé
Γöé Timer * 40 ms.) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0.040 Γöé 0.6 Γöé 10.2 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Using the default Response Timer value (0.6 seconds) and default Response Timer
Interval value (5, see Response Timer Interval (T1) below), the resulting timer
would be: 0.6 * 5 = 3 seconds.
Recommendation
Following is a simple rule of thumb that may help in determining T1 for
transmitting over bridges. Table 4-13. "T1 Calculation for Different Token
Ring Line Speeds and Frame Sizes" provides an example of using these rules for
Token Ring frame sizes and link speeds recommended in the documentation for the
IBM Token Ring Network Bridge Program (16F0493).
1. Find the speed of the slowest link that a Data Frame will cross.
2. Compute Transfer Time of the largest frame that can be sent. Maximum frame
sizes are those recommended for particular line speeds by the network
bridge program being used.
3. Multiply the value by 5 for multiple frames en-route and add 0.1 seconds
for intermediate bridge system processing.
Note that the default T1 value provided by OS/2 is greater than the calculated
T1 values shown in the following table. Since the object is to set the timer
large enough to avoid unnecessary timeouts, the default value is sufficient.
(See Table 4-13, "T1 Calculation for Different Token Ring Line Speeds and
Frame Sizes".)
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-13. T1 Calculation for Different Token Ring Line Γöé
Γöé Speeds and Frame Sizes Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéFRAMEΓöéLINK ΓöéMAX FRAME ΓöéTIME TO TRANSFERΓöéT1 VALUE: Γöé
│SIZE │SPEED │TRANSFER │5 FRAMES (1) │ │
Γöé Γöé Γöé Γöé(TRANSFER TIME Γöé((TRANSFER TIME) * 5)Γöé
ΓöéBYTESΓöé ΓöéTIME (SEC)Γöé * 5) Γöé+ 0.1 Γöé
Γöé Γöé Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé516 Γöé9600 Γöé0.430 Γöé2.150 Γöé2.250 Γöé
Γöé Γöébps Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé516 Γöé19200 Γöé0.215 Γöé1.075 Γöé1.175 Γöé
Γöé Γöébps Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé516 Γöé38400 Γöé0.108 Γöé0.540 Γöé0.640 Γöé
Γöé Γöébps Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé1500 Γöé56000 Γöé0.214 Γöé1.070 Γöé1.170 Γöé
Γöé Γöébps Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé1500 Γöé64000 Γöé0.188 Γöé0.940 Γöé1.040 Γöé
Γöé Γöébps Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé2052 Γöé1.344 Γöé0.017 Γöé0.085 Γöé0.185 Γöé
Γöé ΓöéMbps Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé2052 Γöé 4 Γöé0.004 Γöé0.020 Γöé0.120 Γöé
│ │Mbps (2) │ │ │ │
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé4472 Γöé 4 Γöé0.009 Γöé0.045 Γöé0.145 Γöé
Γöé ΓöéMbps Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé4472 Γöé 16 Γöé0.002 Γöé0.010 Γöé0.110 Γöé
│ │Mbps (2) │ │ │ │
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé8144 Γöé 16 Γöé0.004 Γöé0.020 Γöé0.120 Γöé
Γöé ΓöéMbps Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéNOTE: Γöé
Γöé Γöé
Γöé1. The number '5' is just an assumed value, and represents theΓöé
Γöé number of frames that one might expect to be transmitted Γöé
Γöé before there should be any check for a timeout. Other Γöé
Γöé numbers may be used and the final T1 recalculated. Γöé
Γöé2. Two values are given for 16 Mbps and 4 Mbps, depending on Γöé
Γöé configuration. Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
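As a check on the first row of Table 4-13, the three-step rule of thumb can be
reproduced directly. The short Python sketch below is illustrative only; the
helper name and its defaults are assumptions, not part of any IBM utility.

  # Rule-of-thumb T1: transfer time of the largest frame, times 5 frames
  # en route, plus 0.1 seconds of intermediate bridge processing.
  def t1_estimate(frame_bytes, link_bps, frames_en_route=5, bridge_overhead=0.1):
      transfer_time = (frame_bytes * 8) / link_bps   # seconds per frame
      return transfer_time * frames_en_route + bridge_overhead

  # First row of Table 4-13: 516-byte frames over a 9600 bps link
  print(round(t1_estimate(516, 9600), 3))            # 2.25, matching the table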
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-14. Group 1 Acknowledgement Γöé
Γöé Timer Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 (1) Γöé 3 Γöé 255 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
If 0 is specified, the value is set to 1. The effective value that the Group 1
Timer will have then is:
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-15. Group 1 Acknowledgement Γöé
Γöé Timer (in seconds) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0.040 Γöé 0.120 Γöé 10.2 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Using the default Acknowledgement Timer value (0.120 seconds) and default
Acknowledgement Timer Interval value (2, see above), the resulting timer would
be: 0.120 * 2 = 0.240 seconds.
Recommendation
The resulting T2 timer should always be less than the resulting T1 timer so
that frames can be acknowledged before the sender tries to retransmit them.
(That is, if T1 < T2, frames that were not lost will be retransmitted anyway.)
The default is certainly a safe value. In order to cut down on the
acknowledgements on slow links (see the rows for 64000 bps or less in the last
column of Table 4-13, "T1 Calculation for Different Token Ring Line Speeds and
Frame Sizes"), the effective T2 could be increased up to half of T1.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-16. Group 1 Inactivity Timer Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 (25) Γöé 255 Γöé 255 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
If 0 is specified, the value is set to 25. The effective value that the Group 1
Timer will have then is:
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-17. Group 1 Inactivity Timer Γöé
Γöé (in seconds) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0.040 Γöé 10.2 Γöé 10.2 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Using the default Inactivity Timer value (10.2 seconds) and default Inactivity
Timer Interval value (3, see Inactivity Timer Interval (Ti) below), the
resulting timer would be: 10.2 * 3 = 30.6 seconds.
Recommendation
Use the default.
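All three Group 1 timers above (response, acknowledgement, inactivity) follow
the same arithmetic: the configured value (with 0 mapped to its documented
substitute) times the 40 ms tick, times the corresponding timer interval. A
minimal Python restatement of that arithmetic, using the defaults quoted above
(the function name is an illustrative assumption):

  def effective_timer(value, zero_substitute, interval):
      # a configured value of 0 is replaced by the documented substitute
      value = zero_substitute if value == 0 else value
      return round(value * 0.040 * interval, 3)      # 40 ms tick

  print(effective_timer(15, 5, 5))    # response timer T1:  0.6 s  * 5 = 3.0 s
  print(effective_timer(3, 1, 2))     # acknowledgement T2: 0.12 s * 2 = 0.24 s
  print(effective_timer(255, 25, 3))  # inactivity Ti:      10.2 s * 3 = 30.6 s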
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-18. Number of queue elements Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 200 Γöé 600 Γöé 1400 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
For most systems, use the default, which is (# of users * 200). If messages are
logged that reflect a condition where queue elements have been exhausted, raise
the number of queue elements and reboot the system. For very busy systems, set
this parameter to 1400.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
Γöé Table 4-19 Γöé
Γöé Full Buffer Datagram Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé OPTION Γöé DEFAULT Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Yes Γöé No Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
This parameter need not be changed to Yes unless you have an application that
uses NetBIOS datagrams to transfer large quantities of data. However, setting
it to Yes will not hurt anything even if there are no applications needing
this type of support. If you have this type of application, the increased
frame size will provide performance benefits, as discussed in the previous
sections on increasing the block size.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-20. Number of remote names Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 Γöé 0 Γöé 255 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
For most cases, set it to 8 or less. If more names are needed, NetBIOS will
simply use its broadcast capability. If that is not desired, set it to the
number of the most frequently used destinations.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-21. Datagrams use Γöé
Γöé remote directoryΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé OPTION Γöé DEFAULT Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Yes Γöé No Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Set it to Yes.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-22. Maximum transmits outstandingΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 (2) Γöé 2 Γöé 9 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
If increased performance for large data transfers is desired and all partners
and bridges have adequate Receive Buffer space, set it to 4. Otherwise, use the
default. Any number greater than 4 provides marginal benefit with the
increased risk of frames being discarded.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-23. Maximum receives outstandingΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 (1) Γöé 1 Γöé 9 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Leave it at 1. Do not increase it.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-24. Retry count(all stations) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 (8) Γöé 8 Γöé 255 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
For most applications on the LAN, use the default.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable. 4-25. Response Timer Interval (T1)Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 1 Γöé 5 Γöé 10 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
The effective value that the NetBIOS T1 will have is:
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-26. Effective T1 for NetBIOS Γöé
Γöé (in seconds) with IEEE Γöé
Γöé 802.2 defaults Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0.040 Γöé 3.0 Γöé 5.0 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Use the default value. If there will be remote bridges with speeds less than
or equal to 64 Kbps, refer to the IEEE 802.2 profile discussion on timers.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-27. Acknowledgement Timer Γöé
Γöé Interval (T2) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 1 Γöé 2 Γöé 10 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
The effective value that the NetBIOS T2 will have is:
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-28. Effective T2 for NetBIOS Γöé
Γöé (in seconds) with IEEE Γöé
Γöé 802.2 defaults Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0.120 Γöé 0.240 Γöé 2.0 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Use the default value. If there will be remote bridges with speeds less than
or equal to 64 Kbps, refer to the IEEE 802.2 profile discussion on timers.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-29. Inactivity Timer Interval Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 1 Γöé 3 Γöé 10 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
The effective value that the NetBIOS Ti will have is:
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-30. Effective Ti for NetBIOS Γöé
Γöé (in seconds) for IEEE Γöé
Γöé 802.2 defaults Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 10.2 Γöé 30.6 Γöé 51.0 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Leave it at the default.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
Γöé Γöé
Γöé Γöé
Γöé SNA Feature Configuration Γöé
Γöé Γöé
Γöé Γöé
Γöé SNA base profile... Γöé
Γöé Data Link Control (DLC) profiles... Γöé
Γöé APPC logical unit (LU) profiles... Γöé
Γöé APPC partner logical unit profiles... Γöé
Γöé APPC transmission service mode profiles... Γöé
Γöé APPC initial session limit profiles... Γöé
Γöé APPC remotely attachable transaction program Γöé
Γöé (TP) profiles... Γöé
Γöé APPC conversation security... Γöé
Γöé SNA gateway profiles... Γöé
Γöé SNA LUA profiles... Γöé
Γöé Γöé
Γöé Γöé
Γöé Γöé
Γöé Γöé
Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Esc=Cancel F1=Help F3=Exit Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Figure 4-6. Communications Manager SNA Feature Configuration Menu
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-31. Auto-activate theΓöé
Γöé attach manager Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé OPTION Γöé DEFAULT Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Yes Γöé No Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
If your system will execute APPC remote TP's, set it to Yes. Otherwise, save
system resources by setting it to No. RDS requesters should set it to No.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-32. Load DLC Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT Γöé OPTION Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Yes Γöé No Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
For systems with one connectivity, select Yes. If there are multiple
configured DLC's that are not used concurrently, select No. Loading DLC's that
are not used wastes memory and system resources.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-33. Free unused linkΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT Γöé OPTION Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Yes Γöé No Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Select No if used with the 5250 Work Station Feature. If this parameter is set
to Yes and there are auto-activate sessions (see 4.4.3.1 SNA Base Profile),
there may be unproductive use of system and network resources from
continuously taking down and bringing up links. For environments where an
application's communication is active only for short periods of time, it might
be worth the overhead of selecting Yes. Otherwise, select No. If there are
connection price considerations for switched connections and applications are
active only for a short time, select Yes. Again, this has to be balanced
against the overhead of taking down and bringing up links frequently.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-34. SDLC Maximum RU Size Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 256 Γöé 256 Γöé 4096 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
It is important that this value matches the system(s) with which this system
will communicate. SNA provides a mechanism called XID exchange by which
communicating DLC's exchange capabilities in order for certain values to match
between partners. If XID exchange occurs, then there is less concern about
this parameter matching the partner(s). If XID exchange does not occur, make
sure that this value matches the partner system(s) exactly. Communications
Manager always tries to do this XID exchange. For SDLC, the communications
carrier should be consulted concerning error rate considerations. Generally,
switched links will have higher error rates than non-switched links. Specific
considerations are beyond the scope of this section. IBM lab experience has
shown that for low error rate situations, RU sizes above 1KB don't gain that
much. (This is due to the fact that with line speeds no greater than 19.2
Kbps, Communications will spend most of its time waiting for the line, so
increasing RU sizes to save processing time isn't really worth the extra
memory.) Hence in these situations, setting the RU size to 1024 should provide
adequate levels of performance as far as line utilization and CPU utilization
are concerned.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-35. Send Window Count Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 1 Γöé 4 Γöé 7 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
This parameter is set correctly if XID exchange is used. The effective value
for the Send Window Count ends up being the Receive Window Count of the
partner. For best performance, it should be 7. Remember, if no XID exchange
will occur, set it to match the partner's Receive Window Count.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-36. Receive Window Count Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 1 Γöé 4 Γöé 7 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
If XID exchange will occur, set it to 7. Otherwise, set it to match the
partner's Send Window Count.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-37. Load DLC Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT Γöé OPTION Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Yes Γöé No Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
For systems with one connectivity, select Yes. If there are multiple
configured DLC's that are not used concurrently, select No. Loading DLC's that
are not used wastes memory and system resources.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-38. Free unused linkΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DEFAULT Γöé OPTION Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Yes Γöé No Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Set to No if you never want your links to be taken down. If you want to allow
links to be brought down under certain circumstances, set to Yes. Along with
this, use DLC Congestion control (Congestion Tolerance parameter), APPC Partner
LU Permanent Connection control, and SNA Gateway Auto-Logoff Timeout control.
This will let the user be more deliberate in choosing which links to keep up
and for how long.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-39. Percent of incoming calls (%)Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 Γöé 50 Γöé 100 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
For most configurations, set it to 0. Then, link stations are used based on
need, whether contacting a partner or being contacted by a partner. If the
system is running an application (or applications) that requires a minimum
number of links available for incoming connections, set this number
appropriately. By reserving link stations for incoming connections, the number
of link stations available for outgoing connections is restricted by the
percentage specified. This is a drawback in some environments.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-40. Congestion Tolerance (%) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 Γöé 80 Γöé 100 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Use the default or a higher number. Congestion tolerance is a good idea when
links are starting to become a scarce resource. Remember, the maximum number
of link stations for the LAN DLC is fixed. If this percentage is too low,
there can be a negative impact on system and network resources from taking
down and bringing up links frequently.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-41. Maximum RU Size Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé DLC Γöé MIN Γöé DEFAULΓöé MAX Γöé
Γöé Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Token Ring Γöé 256 Γöé 1024 Γöé 16384Γöé
Γöé Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé PC Network Γöé 256 Γöé 1024 Γöé 1920 Γöé
Γöé Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé ETHERAND Γöé 256 Γöé 1024 Γöé 1408 Γöé
Γöé Γöé Γöé Γöé Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
It is important that this value matches the system(s) with which this system
will communicate. SNA provides a mechanism called XID exchange by which
communicating DLC's exchange capabilities in order for certain values to match
between partners. If XID exchange occurs, then there is less concern about
this parameter matching the partner(s). If XID exchange does not occur, make
sure that this value matches the partner system(s) exactly. Communications
Manager always tries to do this XID exchange.
SQLLOO Considerations
SQLLOO uses this parameter for its transmit buffer size. SQLLOO does not do
XID exchange. Therefore this specification should match that of the partner
system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-42. Send Window Count Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 1 Γöé 2 Γöé 8 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
This parameter is set correctly if XID exchange is used. The effective value
for the Send Window Count ends up being the Receive Window Count of the
partner. If XIDs are not exchanged, this value should be greater than or equal
to the partner's Receive Window Count. Set it to 4 for improved performance in
most environments. Set it to 7 for improved performance in environments with
few partners doing large data transfers.
SQLLOO Considerations
SQLLOO does not do XID exchange. This value should be greater than or equal to
the partner's Receive Window Count. Set it to 4 for improved performance in
most environments. Set it to 7 for improved performance in environments with
few partners doing large data transfers.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-43. Receive Window Count Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 1 Γöé 1 Γöé 8 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
This parameter is set correctly if XID exchange is used. If XIDs are not
exchanged, this value should be less than or equal to the partner's Send
Window Count. Set it to 2 for improved performance in most environments. Set
it to 4 for improved performance in environments with few partners doing large
data transfers.
SQLLOO Considerations
SQLLOO does not do XID exchange. This value should be less than or equal to
the partner's Send Window Count. Set it to 2 for improved performance in most
environments. Set it to 4 for improved performance in environments with few
partners doing large data transfers.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
LAN DLC uses the default multiplier (1-10) provided by the 802.2 subsystem.
The effective timer is then computed using the settings of the Group 1 and
Group 2 multipliers set in the IEEE 802.2 profile for the appropriate timer.
Recommendation
See the section on Timers in the IEEE 802.2 profile.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-44. LU Session Limit Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 1 Γöé 255 Γöé 255 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Use the default value of 255. Use the Partner LU and Transmission Service mode
profile to limit the total number of effective sessions, if in a constrained
memory environment.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-45. Maximum number of transactionΓöé
Γöé programs Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 Γöé 0 (No Γöé 255 Γöé
Γöé Γöé Limit) Γöé Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Use the default value 0, which means no limit. For constrained environments,
it can be used to control the number of concurrently active TP's in the
system. Note that this is a per-Local-LU value. The maximum number of TP's
running concurrently is the sum of this value across all the active LU's.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-46. Partner LU session limit Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 1 Γöé 1 Γöé 255 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Set it to the maximum number of sessions required for the partner. For
constrained environments, this parameter can be used to limit the maximum
number of active sessions. The Transmission Service Mode limits can also be
used jointly to limit the maximum number of active sessions.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-47. Minimum RU Size Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 256 Γöé 256 Γöé 16384 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Leave it at the default, 256. This parameter should only be increased if the
user wants to ensure that data transfers over a session for this mode have a
minimum size, for performance reasons. If a session is successfully
established, that means that the RU size is at least the value specified in
this parameter.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-48. Maximum RU Size Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 256 Γöé 1024 Γöé 16384 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
For sessions that will be used for data transfers, choose the appropriate DLC
RU Size for this parameter (see Table 4-34. "SDLC Maximum RU Size" and Table
4-41. "Maximum RU Size"). For constrained environments and transactions
consisting of short message exchanges, select a lower value. This allows the
receiving system to limit the size of data transfer block that the sender will
use.
SQLLOO Considerations
SQLLOO uses this value for the *SQLLOO Mode to determine the RU size to use.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-49. Receive pacing limit Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 Γöé 8 Γöé 63 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
For most communications environments, if the system is not RAM constrained
(i.e. no swapping), setting pacing to 16 should be sufficient. If there is
swapping, set this value lower. For transactions exchanging only short
messages, even in constrained environments, also set pacing to 16. Choosing a
value greater than 16 will provide no significant benefit and might create
problems in constrained environments.
SQLLOO Considerations
For SQLLOO, use Pacing = 3, the Maximum RU Size from the *SQLLOO Transmission
Service Mode Profile, and plug these values into the preceding calculations.
During execution, SQLLOO will limit total memory allocation to 320K, which may
be a very important benefit for very busy servers.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-50. Session Limit Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 Γöé 1 Γöé 253 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
If the LU is configured as a dependent LU, this value must be set to 1. For
constrained environments, use this number to limit the number of sessions with
a partner on each specific mode.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-51. Minimum number of contentionΓöé
Γöé winners source Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 Γöé 0 Γöé 255 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
For most environments, leave it at the default, and it will be determined
dynamically. For specific needs, set it to the expected number of sessions on
which this system will be allocating conversations for a specific mode.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-52. Minimum number of contentionΓöé
Γöé winners target Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 Γöé 0 Γöé 255 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
For most environments, leave it at the default, and it will be determined
dynamically. For specific needs, set it to the expected number of sessions on
which the partner will be allocating conversations for a specific mode.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-53. Number of automatically Γöé
Γöé activated sessions Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 Γöé 0 Γöé 255 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
This parameter should be left at the default 0. For applications that cannot
tolerate latency or uncertainty on the first attempt to allocate a conversation
with the partner, modify the value to the number of sessions for which the
latency cannot be tolerated.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-54. Permanent connectionΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé OPTION Γöé DEFAULT Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Yes Γöé No Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Select Yes unless you have a very good reason not to.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-55. Auto-logoff timeout in minutesΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 1 Γöé 999 Γöé 999 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Only change the default if the gateway is servicing more workstations than
allowed. For workstations where availability is not crucial and traffic is
sporadic, the auto-logoff timeout can be used to provide shared access for
more workstations than there are LU's available.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-56. Data transfer buffer size Γöé
Γöé override Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 Γöé 0 Γöé 32 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
If a value of 0 is entered, an automatic calculation of the optimum size is
performed by the 3270 emulation at run time (see the following table).
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-57. Effective Buffer Size (if set to 0) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé CONNECTION Γöé SIZE Γöé MAX/ RECOMMENDED Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé SNA (SDLC, LAN, DFT) Γöé 12KB Γöé 32KB Γöé
Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Non-SNA (DFT-only) Γöé 7KB Γöé 7KB Γöé
Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé CICS Γö¼ Γöé 2KB Γöé 2KB Γöé
Γöé Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé NOTE: Γö¼ CICS is an application that runs on top of commu- Γöé
Γöé nications. However, it has buffer size requirements that   Γöé
Γöé differ from those suggested for the underlying protocol.   Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
For SNA environments, use 32KB for improved throughput. For non-SNA
environments, 7KB is recommended. If you experience problems (such as IND$FILE
level mismatches; see the IBM OS/2 System Administrator's Guide), try specifying
a low, non-zero value such as 2KB. For a CICS environment, 2KB is recommended.
For constrained environments, it can be reduced, since 3270 file transfer will
continue allocating memory as needed to service the file transfer.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Settings
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 4-58. Printer buffer size (bytes)Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé MIN Γöé DEFAULT Γöé MAX Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 1920 Γöé 1920 Γöé 15300 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Recommendation
Set it greater than or equal to the host mode table RUSIZES. Large print jobs
on printer sessions configured at the host with large RUSIZES and large pacing
windows can cause overcommitment problems in constrained environments.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
Γöé Γöé
Γöé Requester Γöé Server Γöé
Γöé ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇ Γöé
Γöé Γöé Γöé
ΓöéApplication Redirector Γöé Redirector File System Γöé
Γöé Γöé Γöé
ΓöéRead request ΓöÇΓöÇΓöÇ> Use Wrkbuf Γöé Γöé
Γöé512 bytes Build SMB Γöé Store in Reqbuf Γöé
Γöé Send WrkbufΓöé Interpret Γöé
Γöé Γöé Γöé SMB as Γöé
Γöé ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇ> DosRead ΓöÇΓöÇΓöÇΓöÇ> Read data (2KB)Γöé
Γöé Γöé in cache Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé DosRead <ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ Γöé
Γöé Γöé completed: Γöé
Γöé Γöé Modify SMB for Return Γöé
Γöé Γöé Data to Reqbuf Γöé
Γöé Γöé (512 bytes) Γöé
Γöé Γöé Γöé
Γöé Γöé Γöé
Γöé Receive SMB<ΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇ send Reqbuf Γöé
Γöé DosRead Γöé Γöé
Γöé completed Γöé Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé Γöé Γöé
Γöé pass data Γöé Γöé
Γöé (512 bytes) Γöé Γöé
Γöé to appl. Γöé Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé Γöé Γöé
ΓöéApplication <ΓöÇΓöÇΓöÇΓöÇΓöÿ Γöé Γöé
Γöérequest Γöé Γöé
Γöécompleted Γöé Γöé
Γöé Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Figure 5-4. SMB Protocol - Random Read of 512 Byte Record
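The requester-side cost shown above is easy to reproduce from an application.
The fragment below is a minimal sketch, not taken from the original document:
it assumes the 16-bit OS/2 file system API, and the drive letter, path, and
record layout are hypothetical. The point is that each small DosRead against a
redirected file costs one full SMB round trip to the server.

  /* Minimal sketch: one 512-byte record read through the redirector,    */
  /* as in Figure 5-4.  Path and record layout are hypothetical.         */
  #define INCL_DOSFILEMGR
  #include <os2.h>
  #include <stdio.h>

  int main(void)
  {
      HFILE  hf;
      USHORT usAction, cbRead, rc;
      CHAR   achRecord[512];              /* one logical record           */

      rc = DosOpen("X:\\DATA\\CUST.DAT",  /* file on a redirected drive   */
                   &hf, &usAction, 0L,
                   FILE_NORMAL,
                   FILE_OPEN,             /* open only if it exists       */
                   OPEN_ACCESS_READONLY | OPEN_SHARE_DENYNONE,
                   0L);
      if (rc != 0)
          return rc;

      /* This single call is the "Read request 512 bytes" of the figure:  */
      /* the redirector builds an SMB in Wrkbuf, and the server returns   */
      /* the data (read into its cache in 2KB blocks) in Reqbuf.          */
      rc = DosRead(hf, achRecord, sizeof(achRecord), &cbRead);
      if (rc == 0)
          printf("read %u bytes across the LAN\n", cbRead);

      DosClose(hf);
      return rc;
  }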
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
Γöé Requester Server Γöé
ΓöéΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇ Γöé
Γöé Γöé
Γöé Application Redirector Γöé
Γöé Γöé
Γöé Read request Γöé
Γöé large file ΓöÇΓöÇΓöÇΓöÇ> Build special Γöé
Γöé SMB, request Γöé
Γöé 4KB data + Γöé
Γöé poll server Γöé
Γöé for bigbuf's Γöé
Γöé Γöé Γöé
Γöé ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇ> Receive SMB Γöé
Γöé Get data from disk (4KB)Γöé
Γöé into Reqbuf (no caching,Γöé
Γöé assuming default Γöé
Γöé thresholds) Γöé
Γöé Γöé
Γöé If bigbufs available, Γöé
Γöé confirm to requester Γöé
Γöé <ΓöÇΓöÇ Send Reqbuf Γöé Γöé
Γöé Data to appli- Γöé Γöé
Γöé 4KB<ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇ cation Γöé Γöé
Γöé READ BLOCK RAW SMB Γöé Γöé
Γöé Γöé Γöé
Γöé Γöé
Γöé Use bigbuf Γöé
Γöé <ΓöÇΓöÇΓöÇ get 60KB data from diskΓöé
Γöé send data only (no SMB)Γöé
Γöé 60KB<ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇ Data to appli- Γöé Γöé
Γöé cation Γöé Γöé
Γöé Γöé Γöé
Γöé Γöé Γöé
Γöé Γöé Γöé
Γöé Γöé
Γöé Use bigbuf Γöé
Γöé <ΓöÇΓöÇΓöÇ get 64KB data from diskΓöé
Γöé 64KB<ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇ Data to appli- send data only (no SMB)Γöé
Γöé cation Γöé
Γöé . . Γöé
Γöé . etc Γöé
Γöé . . Γöé
Γöé . . Γöé
Γöé . . Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Figure 5-6. SMB Raw Protocol: Large File Read
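The large-block behavior above can be encouraged from the application side
simply by issuing large sequential reads. The fragment below is a minimal
sketch, not taken from the original document: the path is hypothetical, the
60KB request size is chosen only to stay within a single 16-bit segment, and a
compiler memory model whose malloc can return a 60KB buffer is assumed. When
bigbufs are free on the server, transfers after the first can then be shipped
as raw data with no SMB framing, as shown in Figure 5-6.

  /* Minimal sketch: sequential read of a large redirected file using     */
  /* a large application buffer.                                          */
  #define INCL_DOSFILEMGR
  #include <os2.h>
  #include <stdlib.h>

  int main(void)
  {
      HFILE  hf;
      USHORT usAction, cbRead, rc;
      PCHAR  pchBuf = malloc(60U * 1024U);   /* 60KB request buffer       */

      if (pchBuf == NULL)
          return 1;

      rc = DosOpen("X:\\IMAGES\\BIG.FIL", &hf, &usAction, 0L,
                   FILE_NORMAL, FILE_OPEN,
                   OPEN_ACCESS_READONLY | OPEN_SHARE_DENYNONE, 0L);
      if (rc != 0) {
          free(pchBuf);
          return rc;
      }

      /* Each large DosRead lets the server fill one bigbuf from disk     */
      /* and send the data without per-block SMB headers.                 */
      do {
          rc = DosRead(hf, pchBuf, (USHORT)(60U * 1024U), &cbRead);
          /* ... process cbRead bytes ... */
      } while (rc == 0 && cbRead != 0);

      DosClose(hf);
      free(pchBuf);
      return rc;
  }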
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
Γöé Requester Server Γöé
Γöé ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇ Γöé
Γöé Γöé
Γöé Application Redirector Γöé
Γöé Γöé
Γöé Read request Γöé
Γöé large file ΓöÇΓöÇΓöÇΓöÇ> Build special Γöé
Γöé SMB, (Read Block Raw) Γöé
Γöé request 4KB data Γöé
Γöé + poll server Γöé
Γöé for bigbuf's Γöé
Γöé Γöé Γöé
Γöé ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇ> Receive SMB Γöé
Γöé Get data from disk (4KB) Γöé
Γöé into Reqbuf (no caching, Γöé
Γöé assuming default Γöé
Γöé thresholds) Γöé
Γöé No bigbufs available, Γöé
Γöé <ΓöÇΓöÇ Send Reqbuf to requester Γöé
Γöé Data to appli- Γöé Γöé
Γöé 4KB<ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇ cation Γöé Γöé
Γöé + READ MULTIPLEX SMB Γöé Γöé
Γöé Γöé Γöé
Γöé Γöé
Γöé Use Reqbuf Γöé
Γöé <ΓöÇΓöÇΓöÇ get 4KB data from disk Γöé
Γöé send data + SMB Γöé
Γöé 4KB <ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇ Data to appli- Γöé Γöé
Γöé cation Γöé Γöé
Γöé + READ MULTIPLEX SMB Γöé Γöé
Γöé Γöé Γöé
Γöé Γöé Γöé
Γöé Γöé
Γöé Use bigbuf Γöé
Γöé <ΓöÇΓöÇΓöÇ get 4KB data from disk Γöé
Γöé 4KB<ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇ Data to appli- send data + SMB Γöé
Γöé cation Γöé
Γöé . + READ MULTIPLEX SMB Γöé
Γöé . etc Γöé
Γöé . . Γöé
Γöé . . Γöé
Γöé . Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
Figure 5-7. SMB Read Block Multiplex Protocol: Large File Read (No Bigbufs Avail.)
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 5-1. Data Message Sizes for Sequential File Transfers Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéRECORDΓöé NBS 4K, BBS Γöé NBS 4K, BBSΓöé NBS 4K, BBSΓöé NBS 1K, BBSΓöé
ΓöéSIZE Γöé 5K Γöé 8K Γöé 32K Γöé 32K Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Γöé WriteΓöé Read Γöé Write Γöé Read Γöé WriteΓöé Read Γöé WriteΓöé ReadΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé128 Γöé 4K Γöé 4K Γöé 4K Γöé 4K Γöé 4K Γöé 4K Γöé 1K Γöé 1K Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé512 Γöé 4K Γöé 4K Γöé 4K Γöé 4K Γöé 4K Γöé 4K Γöé 1K Γöé 1K Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé2048 Γöé 4K Γöé 4K Γöé 4K Γöé 4K Γöé 4K Γöé 4K Γöé 32K Γöé 32K Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé4096 Γöé 4K Γöé 4K Γöé 4K Γöé 4K Γöé 4K Γöé 4K Γöé 32K Γöé 32K Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé16384 Γöé 16K Γöé 16K Γöé 16K Γöé 16K Γöé 32K Γöé 32K Γöé 32K Γöé 32K Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé65534 Γöé 64K Γöé 64K Γöé 64K Γöé 64K Γöé 64K Γöé 64K Γöé 64K Γöé 64K Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 5-2. Server Priority to OS/2 Dispatching PriorityΓöé
Γöé Comparison. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé SERVER PRIORITY Γöé OS/2 PRIORITY CLASSΓöé LEVEL OF CLASSΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 Γöé 3 (=Fixed High) Γöé 31 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 1 Γöé 3 Γöé 23 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 2 Γöé 3 Γöé 15 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 3 Γöé 3 Γöé 7 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 4 Γöé 3 Γöé 0 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 5 Γöé 2 (=Regular) Γöé 31 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 6 Γöé 2 Γöé 23 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 7 Γöé 2 Γöé 15 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 8 Γöé 2 Γöé 7 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 9 Γöé 2 Γöé 0 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 5-3. Opportunistic LockΓöé
Γöé Timeout Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé VALUE Γöé TIME (SECONDS) Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 Γöé 35 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 1 Γöé 70 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 2 Γöé 140 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 3 Γöé 210 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 4 Γöé 280 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 5 Γöé 350 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 6 Γöé 420 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 7 Γöé 490 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 8 Γöé 560 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 9 Γöé 640 Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
[Example DOSLAN.INI File: the figure is truncated in the source; only the
opening fragment ("RDR PARIS STARTRAK ... /SRV ...") survives.]
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
Γöé; ---------- ETHERAND Protocol Definition ---------- Γöé
Γöé Γöé
Γöé[ETHERAND] Γöé
Γöé DriverName = OS2EE$ Γöé
Γöé Bindings = TCMAC2 Γöé
Γöé Γöé
Γöé; The modules listed below are valid choices for the Γöé
Γöé Bindings field. Γöé
Γöé; Γöé
Γöé; (1) TCMAC2 - 3Com** Micro Channel* Adapters. Γöé
Γöé; (2) TCMAC - 3Com IBM Personal Computer AT workstation Γöé
Γöé Adapters. Γöé
Γöé; (3) WDMAC - Western Digital** Micro Channel and IBM PC Γöé
Γöé AT Adapters. Γöé
Γöé; (4) UBMAC - Ungermann-Bass** Micro Channel and IBM PC Γöé
Γöé AT Adapters.                                               Γöé
Γöé; Γöé
Γöé; NOTE: If you choose UBMAC, please check the Adapter Γöé
Γöé; type definition to ensure that it is correct for your Γöé
Γöé; adapter. Γöé
Γöé Γöé
Γöé; ------- 3Com Network Adapter Definition ------- Γöé
Γöé Γöé
Γöé[TCMAC2] Γöé
Γöé DriverName = ELNKMC$ Γöé
Γöé MaxTransmits = 10 Γöé
Γöé[TCMAC] Γöé
Γöé DriverName = ELNKII$ Γöé
Γöé Interrupt = 3 Γöé
Γöé IOAddress = 0x300 Γöé
Γöé DMAChannel = 3 Γöé
Γöé MaxTransmits = 10 Γöé
Γöé Transceiver = Onboard Γöé
Γöé Γöé
Γöé; ------- Western Digital Network Adapter Definition -------- Γöé
Γöé Γöé
Γöé[WDMAC] Γöé
Γöé DriverName = MACWD$ Γöé
Γöé IRQ = 3 Γöé
Γöé RamAddress = 0xC400 Γöé
Γöé IOBase = 0x280 Γöé
Γöé ReceiveBuffers = 16 Γöé
Γöé ReceiveChains = 16 Γöé
Γöé MaxRequests = 10 Γöé
Γöé MaxTransmits = 10 Γöé
Γöé ReceiveBufSize = 256 Γöé
Γöé Γöé
Γöé Γöé
Γöé; ------- Ungermann-Bass Network Adapter Definition ------- Γöé
Γöé Γöé
Γöé[UBMAC] Γöé
Γöé DriverName = UBMAC$ Γöé
Γöé AdapterType = NIUps Γöé
Γöé MemoryWindow = 0xD8000 Γöé
Γöé IO_Port = 0x350 Γöé
Γöé IRQ_Level = 4 Γöé
Γöé MaxRequests = 10 Γöé
Γöé MaxTransmits = 10 Γöé
Γöé ReceiveBufSize = 600 Γöé
Γöé MaxMulticast = 20 Γöé
Γöé UseReceiveChain = Never Γöé
Γöé Γöé
Γöé Γöé
Γöé;**** END OF FILE **** Γöé
Γöé Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 5-4. PROTOCOL.INI Module Specifications Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Γöé REQUIRED DEVICE DRIVER ΓöéMODULE-SPECIFIC PARA- Γöé
ΓöéMODULE NAME Γöé PARAMETER ΓöéMETERS WITH SAMPLE VALUESΓöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé SYSTEM-LEVEL MODULES Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéPROTOCOL ΓöéDrivername = PROTMAN$ Γöé Γöé
ΓöéMANAGER Γöé Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéETHERAND ΓöéDrivername = OS2EE$ ΓöéBindings = WDMAC Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé ADAPTER-SPECIFIC MODULES Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéTCMAC(1, 2) ΓöéDrivername = ELNKII$ (orΓöéDMAChannel = 3 Γöé
Γöé(3Com) ΓöéELNKII2$)(3) ΓöéInterrupt = 3 Γöé
Γöé Γöé ΓöéIOAddress = 0x300 Γöé
Γöé Γöé ΓöéMaxTransmits = 10 Γöé
Γöé Γöé ΓöéTransceiver = Onboard Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéTCMAC2(4, 5)ΓöéDrivername = ELNKMC$ (orΓöéMaxTransmits = 10 Γöé
Γöé(3Com) ΓöéELNKMC2$)(6) ΓöéSlotNumber = 3 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéWDMAC(7) ΓöéDrivername = MACWD$ ΓöéIRQ = 3 Γöé
Γöé(Western Γöé ΓöéIOBase = 0x280 Γöé
ΓöéDigital) Γöé ΓöéMaxRequests = 10 Γöé
Γöé Γöé ΓöéMaxTransmits = 10 Γöé
Γöé Γöé ΓöéRamAddresses = 0xC400(9) Γöé
Γöé Γöé ΓöéReceiveBuffers = 16 Γöé
Γöé Γöé ΓöéReceiveBufSize = 256 Γöé
Γöé Γöé ΓöéReceiveChains = 16 Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéUBMAC(8) ΓöéDrivername = UBMAC$ ΓöéAdapterType = NIUps Γöé
Γöé(Ungermann- Γöé ΓöéMemoryWindow = 0xD8000(9)Γöé
ΓöéBass) Γöé ΓöéIO_Port = 0x350 Γöé
Γöé Γöé ΓöéIRQ_Level = 4 Γöé
Γöé Γöé ΓöéMaxRequests = 10 Γöé
Γöé Γöé ΓöéMaxTransmits = 10 Γöé
Γöé Γöé ΓöéReceiveBufSize = 600 Γöé
Γöé Γöé ΓöéMaxMulticast = 20 Γöé
Γöé Γöé ΓöéUseReceiveChain = Never Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
1. The second ETHERAND Network adapter in a workstation with a TCMAC adapter
can be (1) another TCMAC adapter, (2) a Western Digital adapter, or (3) an
Ungermann-Bass adapter.
2. The module name for the second TCMAC (3Com) adapter, if any, in a
workstation must be prefixed with S. Specify STCMAC for the second TCMAC
adapter in the workstation.
3. Whenever a second TCMAC adapter is present in a workstation, the second
device driver identifier is ELNKII2$. Under these processing
circumstances, a separate declaration is required for each adapter.
4. The second ETHERAND Network adapter in a workstation with a TCMAC2 adapter
can be (1) another TCMAC2 adapter, (2) a Western Digital adapter, or (3)
an Ungermann-Bass adapter.
5. The module name for the second TCMAC2 (3Com) adapter, if any, in a
    workstation must be prefixed with S. Specify STCMAC2 for the second
    TCMAC2 adapter in the workstation.
6. Whenever a second TCMAC2 adapter is present in a workstation, the second
device driver identifier is ELNKMC2$. Under these processing
circumstances, a separate declaration is required for each adapter.
7. A Western Digital ETHERAND Network adapter cannot be in the same
workstation with another Western Digital ETHERAND Network adapter.
8. An Ungermann-Bass ETHERAND Network adapter cannot be in the same
workstation with another Ungermann-Bass ETHERAND Network adapter.
9. A special consideration is required for ETHERAND Network adapters that use
shared RAM and reside in a Personal Computer AT workstation. The values
selected on the adapter (using jumpers) for the Shared RAM address field
must match the selections made during the Communication Manager
configuration (specified in the Shared RAM location field). The
Communication Manager configured values are not used by the ETHERAND
Network adapter support. These are only used to cross-verify that no
conflicts exist with shared RAM regions used by other adapters.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 5-5. PROTOCOL.INI Module Parameter Descriptions Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéPARAMETER ΓöéASSOCIATED ΓöéDESCRIPTION Γöé
Γöé ΓöéMODULES Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéAdapterType ΓöéUBMAC ΓöéSpecifies the type of Ungermann-Bass Γöé
Γöé Γöé Γöéadapter. Either NIUPC or NIUPS must Γöé
Γöé Γöé Γöébe entered to specify an adapter type Γöé
Γöé Γöé Γöéfor the Personal Computer AT work- Γöé
Γöé Γöé Γöéstation or the Personal System/2 Γöé
Γöé Γöé Γöéworkstation, respectively. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöébindings ΓöéETHERAND ΓöéSpecifies one of the following Γöé
Γöé Γöé Γöéadapter module name options: Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé Γöéo TCMAC Γöé
Γöé Γöé Γöéo TCMAC2 Γöé
Γöé Γöé Γöéo WDMAC Γöé
Γöé Γöé Γöéo UBMAC. Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé ΓöéIf the address for the adapter is Γöé
Γöé Γöé Γöé"adapter 0", this module name is Γöé
Γöé Γöé Γöéspecified as shown in the following Γöé
Γöé Γöé Γöéexample: Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé Γöé"ETHERAND bindings = TCMAC" Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé ΓöéIf the address for the adapter is Γöé
Γöé Γöé Γöé"adapter 1", this module name is Γöé
Γöé Γöé Γöéspecified as shown in the following Γöé
Γöé Γöé Γöéexample: Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé Γöé"ETHERAND bindings = ,TCMAC" Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé ΓöéTwo adapter module names can be spec- Γöé
Γöé Γöé Γöéified, as shown in the following Γöé
Γöé Γöé Γöéexample: Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé Γöé"ETHERAND bindings = TCMAC,STCMAC" Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé Γöé"ETHERAND bindings = TCMAC2,STCMAC2" Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDataTransferΓöéTCMAC ΓöéNot applicable for OS/2 Extended Ser- Γöé
Γöé ΓöéTCMAC2 Γöévices systems. Do not specify. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDemand_DMA ΓöéTCMAC ΓöéNot applicable for OS/2 Extended Ser- Γöé
Γöé ΓöéTCMAC2 Γöévices systems. Do not specify. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDMAChannel ΓöéTCMAC ΓöéSpecifies the direct memory access Γöé
Γöé Γöé Γöé(DMA) level used for communication Γöé
Γöé Γöé Γöébetween the adapter and the computer Γöé
Γöé Γöé Γöémain memory. The defined value must Γöé
Γöé Γöé Γöébe based on the jumper configuration Γöé
Γöé Γöé Γöéof the adapter. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéInterrupt ΓöéTCMAC ΓöéSpecifies the interrupt level used Γöé
Γöé Γöé Γöéfor notifications between the com- Γöé
Γöé Γöé Γöéputer and the adapter. Be sure that Γöé
Γöé Γöé Γöéno conflicts exist between the Γöé
Γöé Γöé Γöévarious system components (such as Γöé
Γöé Γöé Γöéother adapters) in their use of Γöé
Γöé Γöé Γöéinterrupts. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIOAddress ΓöéTCMAC ΓöéSpecifies the starting address of the Γöé
Γöé Γöé Γöéinput/output port for the adapter. Γöé
Γöé Γöé ΓöéThe defined value must be based on Γöé
Γöé Γöé Γöéthe jumper configuration of the Γöé
Γöé Γöé Γöéadapter. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIOBase      Γöé WDMAC т   ΓöéSpecifies the starting address of the Γöé
Γöé            Γöé           Γöéinput/output port for the adapter.    Γöé
Γöé            Γöé           ΓöéThe defined value must be based on    Γöé
Γöé            Γöé           Γöéthe jumper configuration of the       Γöé
Γöé            Γöé           Γöéadapter.                              Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIO_Port Γöé UBMAC ** ΓöéSpecifies the starting address of the Γöé
Γöé Γöé Γöéinput/output port for the adapter. Γöé
Γöé Γöé ΓöéThe defined value must be based on Γöé
Γöé Γöé Γöéthe jumper configuration of the Γöé
Γöé Γöé Γöéadapter. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIRQ         Γöé WDMAC т   ΓöéSpecifies the interrupt level used    Γöé
Γöé Γöé Γöéfor notifications between the com- Γöé
Γöé Γöé Γöéputer and the adapter. Be sure that Γöé
Γöé Γöé Γöéno conflicts exist between the Γöé
Γöé Γöé Γöévarious system components (such as Γöé
Γöé Γöé Γöéother adapters) in their use of Γöé
Γöé Γöé Γöéinterrupts. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéIRQ_Level Γöé UBMAC ** ΓöéSpecifies the interrupt level used Γöé
Γöé Γöé Γöéfor notifications between the com- Γöé
Γöé Γöé Γöéputer and the adapter. Ensure that Γöé
Γöé Γöé Γöéno conflicts exist between the Γöé
Γöé Γöé Γöévarious system components in their Γöé
Γöé Γöé Γöéuse of interrupts. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéMaxMulticastΓöé UBMAC ΓöéSpecifies the maximum number of Γöé
Γöé Γöé Γöémulticast addresses that can be in Γöé
Γöé Γöé Γöéeffect simultaneously. This value Γöé
Γöé Γöé Γöémust be greater than or equal to 2. Γöé
Γöé Γöé ΓöéA value of 20 is recommended. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéMaxRequests Γöé UBMAC ΓöéSpecifies the maximum number of Γöé
Γöé Γöé WDMAC Γöégeneral request queue entries that Γöé
Γöé Γöé Γöécan be concurrently outstanding. A Γöé
Γöé Γöé Γöévalue of 10 is recommended. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéMaxTransmitsΓöé TCMAC ΓöéSpecifies the number of TransmitChain Γöé
Γöé Γöé TCMAC2 Γöécommands that can be concurrently Γöé
Γöé Γöé WDMAC Γöéqueued by the MAC. A value of 10 is Γöé
Γöé Γöé UBMAC Γöérecommended. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéMemoryWindowΓöé UBMAC ** ΓöéSpecifies the starting address of the Γöé
Γöé Γöé Γöéarea in the PC memory space that is Γöé
Γöé Γöé Γöéshared between the PC and the Γöé
Γöé Γöé Γöéadapter. This value and the setting Γöé
Γöé Γöé Γöéon the adapter must be X'D8000'. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéNetAddress Γöé TCMAC ΓöéNot applicable for OS/2 Extended Ser- Γöé
Γöé Γöé TCMAC2 Γöévices systems. Do not specify. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöé
ΓöéRamAddress  Γöé WDMAC т   ΓöéSpecifies the starting address of the Γöé
Γöé Γöé Γöéarea in the PC memory space that is Γöé
Γöé Γöé Γöéshared between the PC and the Γöé
Γöé Γöé Γöéadapter. The value specified in this Γöé
Γöé Γöé Γöéfield must be the same as the value Γöé
Γöé Γöé Γöéspecified in the Shared RAM location Γöé
Γöé Γöé Γöéfield for the ETHERAND Network Γöé
Γöé Γöé Γöéprofile. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéReceive Γöé UBMAC ΓöéSpecifies the number of receive Γöé
ΓöéBuffers Γöé Γöébuffers (in bytes) that the MAC Γöé
Γöé Γöé Γöédriver will use. This value should Γöé
Γöé Γöé Γöébe large enough to hold the expected Γöé
Γöé Γöé Γöéreceived frames of average size. It Γöé
Γöé Γöé Γöéneed not be large enough to hold the Γöé
Γöé Γöé Γöélargest expected frame. Since the Γöé
Γöé            Γöé           ΓöéUseReceiveChain parameter is always   Γöé
Γöé Γöé Γöéspecified as Never, this parameter is Γöé
Γöé Γöé Γöénot necessary. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéReceive ΓöéWDMAC ΓöéSpecifies the number of receive Γöé
ΓöéBuffers Γöé Γöébuffers allocated in the host memory. Γöé
Γöé Γöé ΓöéA value of 16 is recommended. Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé ΓöéNOTE: The contents of this field Γöé
Γöé Γöé Γöéhave no relationship to the MINIMUM Γöé
Γöé Γöé ΓöéNUMBER OF RECEIVE BUFFERS field or theΓöé
Γöé Γöé Γöéactual number of receiving buffers Γöé
Γöé Γöé Γöéspecified in the associated IEEE Γöé
Γöé Γöé Γöé802.2 adapter profile. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéReceiveBuf- ΓöéWDMAC ΓöéSpecifies the size of the receive Γöé
ΓöéSize Γöé Γöébuffers (in bytes) that the MAC Γöé
Γöé Γöé Γöédriver will use. The value specified Γöé
Γöé Γöé Γöémust be large enough to hold the Γöé
Γöé Γöé Γöéaverage expected size of received Γöé
Γöé Γöé Γöéframes. Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé ΓöéNOTE: The contents of this field Γöé
Γöé Γöé Γöéhave no relationship to the RECEIVE Γöé
Γöé Γöé ΓöéBUFFER SIZE field specified in the Γöé
Γöé Γöé Γöéassociated IEEE 802.2 adapter profile.Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéReceiveBuf- ΓöéUBMAC ΓöéSpecifies the size of the receive Γöé
ΓöéSize Γöé Γöébuffers (in bytes) that the MAC Γöé
Γöé Γöé Γöédriver will use. The value specified Γöé
Γöé Γöé Γöémust be large enough to hold the Γöé
Γöé Γöé Γöéexpected received frames of average Γöé
Γöé Γöé Γöésize. It is not necessary that a Γöé
Γöé Γöé Γöévalue be specified to hold the Γöé
Γöé Γöé Γöélargest expected frame. The MAC Γöé
Γöé Γöé Γöédriver will handle large frames in Γöé
Γöé Γöé Γöémultiple receive buffers. Γöé
Γöé Γöé Γöé Γöé
Γöé Γöé ΓöéNOTE: The contents of this field Γöé
Γöé Γöé Γöéhave no relationship to the RECEIVE Γöé
Γöé Γöé ΓöéBUFFER SIZE field specified in the Γöé
Γöé Γöé Γöéassociated IEEE 802.2 adapter profile.Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéReceive- ΓöéWDMAC ΓöéSpecifies the number of entries in Γöé
ΓöéChains Γöé Γöéthe Receive Chain Header queue. The Γöé
Γöé Γöé Γöérecommended value for this parameter Γöé
Γöé Γöé Γöéis 16. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéSlotNumber ΓöéTCMAC2 ΓöéSpecifies the slot number (1 through Γöé
Γöé Γöé Γöé8) used by the adapter card in the Γöé
Γöé Γöé Γöécomputer. The system will default to Γöé
Γöé Γöé Γöéthe lowest numbered slot containing Γöé
Γöé Γöé Γöéan adapter of this type. Specify Γöé
Γöé Γöé Γöéthis parameter only when more than Γöé
Γöé Γöé Γöéone TCMAC2 adapter is present in the Γöé
Γöé Γöé Γöésame machine. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéTransceiver ΓöéTCMAC ΓöéSpecifies the transceiver configura- Γöé
Γöé Γöé Γöétion of the adapter. The specifica- Γöé
Γöé Γöé Γöétion of this parameter depends on the Γöé
Γöé Γöé Γöéspecific hardware configuration. Γöé
Γöé Γöé ΓöéThis parameter is used only for the Γöé
Γöé Γöé ΓöéEtherLink II** adapter for Personal Γöé
Γöé Γöé ΓöéComputer AT workstations, when using Γöé
Γöé Γöé Γöéthe Ethernet DIX Version 2.0 ETHERAND Γöé
Γöé Γöé ΓöéNetwork protocol. If used, specify Γöé
Γöé Γöé ΓöéOnboard. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéUseReceive ΓöéUBMAC ΓöéSpecifies to the MAC driver whether Γöé
ΓöéChain Γöé Γöéto use the ReceiveChain method of Γöé
Γöé Γöé Γöéreceived frame delivery. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Note: ** = Personal Computer AT workstation types only Γöé
Γöé        т = Personal Computer AT workstation types only (1)  Γöé
Γöé (1)The device driver for a Western Digital adapter Γöé
Γöé automatically differentiates between adapter types Γöé
Γöé for the Personal Computer AT and those for the Personal Γöé
Γöé System/2 workstations with Micro Channel architecture. Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 5-7. Conversion for SrvheuristicΓöé
Γöé 15 values Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé Digit 15 Γöé OPLOCK Γöé NetBIOS Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 0 Γöé 35 sec Γöé 18 sec Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 1 Γöé 70 sec Γöé 35 sec Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 2 Γöé 140 sec Γöé 70 sec Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 3 Γöé 210 sec Γöé 105 sec Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 4 Γöé 280 sec Γöé 12 sec Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 5 Γöé 350 sec Γöé 47 sec Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 6 Γöé 420 sec Γöé 82 sec Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 7 Γöé 490 sec Γöé 117 sec Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 8 Γöé 560 sec Γöé 24 sec Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
Γöé 9 Γöé 640 sec Γöé 64 sec Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Characteristics:
o Locks: All rows read and updated by this transaction
o Releases Locks: Only when rows are committed
o Reads: Only committed data
o Updates: Only committed rows.
Repeatable Read (RR) provides the highest level of data integrity and
stability. RR transactions hold locks on all rows they read and all rows they
update. A transaction that runs a query against a table will always have the
same results returned within the scope of a given transaction.
Although this provides the highest level of stability and integrity, the
tradeoff is concurrency. Since locks are held on data that is only being
read, other applications that are waiting to update the same data may be
slowed down.
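The "same results within a transaction" property can be made concrete with a
small embedded-SQL fragment. This is a minimal sketch, not from the original
document: the ORDERS table and its columns are hypothetical, and the fragment
is assumed to run inside a C program that has already connected to the
database and been precompiled and bound with the Repeatable Read isolation
level.

  EXEC SQL BEGIN DECLARE SECTION;
      long count1, count2;
  EXEC SQL END DECLARE SECTION;

  /* Under Repeatable Read, every row read by the first query stays       */
  /* locked until the unit of work ends, so the two counts must match.    */
  EXEC SQL SELECT COUNT(*) INTO :count1
           FROM ORDERS WHERE STATUS = 'OPEN';

  /* ... other work; applications trying to update these rows must wait ...*/

  EXEC SQL SELECT COUNT(*) INTO :count2
           FROM ORDERS WHERE STATUS = 'OPEN';

  /* count1 == count2 within this unit of work */
  EXEC SQL COMMIT;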
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Characteristics:
o Locks: The row currently cursored and all rows updated by this transaction.
o Releases Locks: For unchanged rows, at next FETCH (which moves the cursor).
For updated and deleted rows, when rows are committed.
o Reads: Only committed data.
o Updates: Only committed rows.
Under Cursor Stability, only committed rows of data are read from the database
(that is, the application will wait for another application to commit or roll
back a row it is changing). Also, Cursor Stability locks only the row it is
currently reading and the rows it has changed.
This level of data isolation provides better concurrency (and hence better
throughput possibilities) than Repeatable Read, since it locks only the rows it
changes and the row it is reading at that instant. The tradeoff for this higher
throughput is the overhead (and possible performance cost) of the frequent
locking and unlocking that is performed.
An example of this level of data isolation might be a reservation system.
When you call your travel agent and ask what seats are available on a
particular flight, they may tell you 11A and 13H. You decide to take 13H, and
they start the process of getting you a boarding pass; then you change your
mind and want a window seat, 11A. In the time that the travel agent was working with
seat 13H, someone else came in and took seat 11A. The system was not locking
both seats, only the one that was being worked with at that moment.
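The reservation scenario maps directly onto a cursor in embedded SQL. The
fragment below is a minimal sketch, not from the original document: the SEATS
table, its columns, and the host variable are hypothetical, and the fragment is
assumed to run inside a C program that has already connected to the database
and been precompiled and bound with the Cursor Stability isolation level.

  EXEC SQL BEGIN DECLARE SECTION;
      char seat[4];
  EXEC SQL END DECLARE SECTION;

  EXEC SQL DECLARE C1 CURSOR FOR
      SELECT SEAT_NO
      FROM   SEATS
      WHERE  FLIGHT_NO = 123 AND STATUS = 'FREE'
      FOR UPDATE OF STATUS;

  EXEC SQL OPEN C1;

  /* Only the fetched row is locked; all other free seats remain          */
  /* available to concurrent transactions (as seat 11A was above).        */
  EXEC SQL FETCH C1 INTO :seat;

  /* Updating through the cursor takes an exclusive lock on this row,     */
  /* which is held until the unit of work is committed.                   */
  EXEC SQL UPDATE SEATS
           SET   STATUS = 'BOOKED'
           WHERE CURRENT OF C1;

  EXEC SQL CLOSE C1;
  EXEC SQL COMMIT;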
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Characteristics:
o Locks: Nothing
o Reads: All data, committed or uncommitted
o Updates: Not allowed. If an update attempt is made, the isolation level
reverts to Cursor Stability.
Uncommitted Read reads any row of data in the database, regardless of whether
the row contains committed data, so it provides the best performance for
read-only transactions. This level of data isolation may be appropriate when
you want to get the current status of a database and do not care whether all
the data returned comes from committed transactions.
When Uncommitted Read is being used, COMMITs are still needed. This is
because the system catalog tables are always accessed under the Repeatable
Read isolation level, where every row that is read is locked with a Read (or
Share) lock. If these locks are not released, another application may be
prevented from performing an operation that requires a change to one or more
system tables.
There is an important caveat for using Uncommitted Read: never change data
elsewhere in the database based on the results of an Uncommitted Read
transaction; the data on which the change is based may be rolled back and
never committed to the database.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
1. Use the ARI when several database calls may be packed into a single
function on the server.
2. Use the ARI to take advantage of an under-used server or to off-load an
overused requester.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓöîΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÉ
ΓöéTable 6-28. Column overhead for data types Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö¼ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDATA TYPE ΓöéCOLUMN BYTEΓöéCOMMENTS Γöé
Γöé Γöé COUNT Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéSMALLINT Γöé2 Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéINTEGER Γöé4 Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDECIMAL(m,n)Γöé(m+2)/2 Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéFLOAT Γöé8 Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéVARCHAR(n) ΓöéCurrent ΓöéTo estimate average column size, use Γöé
Γöé Γöélength of Γöéthe average data size, not the maximumΓöé
Γöé            Γöédata item +Γöédeclared size.                        Γöé
Γöé Γöé4 Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéCHAR(n) Γöén Γöé'n' is the length of each data item. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéLONG VARCHARΓöé24 ΓöéThe table row contains a descriptor Γöé
Γöé Γöé Γöépointing to the LONG VARCHAR data in Γöé
Γöé Γöé Γöéthe separate .LF file. This descriptorΓöé
Γöé Γöé Γöéis 24 bytes in length. Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéDATE Γöé4 Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéTIME Γöé3 Γöé Γöé
Γö£ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö╝ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöñ
ΓöéTIMESTAMP Γöé10 Γöé Γöé
ΓööΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓö┤ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÿ
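As a hypothetical worked example (not from the original document): a row made
up of an INTEGER key, a CHAR(10) code, a VARCHAR(50) column whose data items
average 20 bytes, and a DATE contributes about 4 + 10 + (20 + 4) + 4 = 42
bytes of column data per row, before any fixed per-row overhead is added.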
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
access control
The means by which network administrators restrict access to network resources
and user programs and data.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
access control profile
A list of the access privileges assigned to users and groups for a particular
network resource in a domain. There are two types of access profiles. See
discrete profile and generic profile.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
access mode
A method of operation used to obtain a specific logical record from, or to
place a specific logical record into, a file assigned to a mass storage device.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
access path
In Database Manager, the path used to get to data specified in Structured Query
Language (SQL) statements. An access path can involve an index or a sequential
search, or a combination of the two.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
access plan
In Database Manager, a database object stored in the database that includes all
of the information needed to process the Database Services statements of a
single application program. An access plan is generated through processing of
the SQLBIND program, or through the pre-compile process if the bind option is
used. See access path.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
access priority
In the IBM Token-Ring Network, the maximum priority a token can have that the
adapter will use for transmission.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
access procedure
In a local area network (LAN), the procedure or protocol used to gain access to
the transmission medium.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
ACDI port
A serial port such as COM1, COM2, or COM3 that can be programmed for
asynchronous communications through ACDI.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
active monitor
A function in a single adapter that initiates the transmission of tokens and
provides token error recovery facilities. Any active adapter on the ring has
the ability to provide the active monitor function if the current active
monitor fails. Synonymous with token monitor.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
adapter address
The address of the Media Access Control Service Access Point (MSAP).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
adapter number
A specific number that identifies an adapter when more than one adapter is used
in a workstation.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Adapter Support Program
The DOS software used to operate IBM Token-Ring Network adapter cards in an IBM
personal computer and to provide a common interface to application programs.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
additional server
A server in a domain other than the domain controller. See server. See also
domain and domain controller.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
address
A value that identifies the location of a register, a particular part of
storage, or a network node.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Adjacent Link Station (ALS)
In Systems Network Architecture (SNA), a link station directly connected to a
given node by a link connection over which network traffic can be carried.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
administrative authority
In Database Manager, a level of authority giving a user a wide range of
privileges over a set of objects, such as DBADM, which provides privileges over
all objects in a database, or SYSADM, which provides privileges over all
objects in a system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
administrator
The person who may be responsible for designing, planning, installing,
configuring, controlling, managing, and maintaining a network, system, or
database. See system administrator, network administrator, and database
administrator.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Advanced Program-to-Program Communications (APPC)
An implementation of the Systems Network Architecture (SNA) logical unit (LU)
6.2 protocol that allows interconnected systems to communicate and share the
processing of programs. See also logical unit 6.2 (LU 6.2).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
alert
1. In communications, an error message sent to the system services control
point (SSCP) at the host system.
2. In OS/2 LAN Server, an error or warning specified in the IBMLAN.INI file
that is sent to the user. See also system services control point.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
alert focal point
The location in a network specified as the forwarding node to the host system.
An alert focal point is a subset of the problem management focal point.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
alias
1. An alternative name used to identify an object or a database.
2. A nickname set up by the network administrator for a file, printer, or
serial device.
3. In a LAN Server/Requester system, a name used to identify a network
resource to a domain. Aliases are similar to network names but can be
used only through the LAN full-screen interface. See also network name.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
allocate
1. To assign a resource to perform a specific task.
2. In Advanced Program-to-Program Communications (APPC), a verb used to
assign a session to a conversation for its use.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
application program
1. A collection of software components, such as Communications Manager and
Database Manager that a user installs to perform particular types of work,
or applications, on a computer.
2. A program written for or by a user to perform the user's work on a
computer.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
application program interface (API) trace
A method used to trace points of the interface where user programs interact
with an API.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
application programming interface (API)
A formally defined programming language interface between an IBM system
control program or a licensed program and the user of the program.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
application-embedded SQL
The Structured Query Language (SQL) statements coded within an application
program.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Asynchronous Communications Device Interface (ACDI)
An application programming interface (API) for asynchronous communications
provided by Communications Manager.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
attach
1. In programming, to create a task that can be executed asynchronously with
the execution of the in-line code.
2. To connect a device logically to a ring network so that it can communicate
over the network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
attach manager
The portion of Advanced Program-to-Program Communications (APPC) that manages
incoming allocation requests.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
attach queue
In Advanced Program-to-Program Communications (APPC), the queue of incoming
ALLOCATE requests that is managed by Attach Manager.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
auto-answer
A feature that enables a machine to respond without user action to a call it
receives.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
auto-start
In Communications Manager, a facility that starts communications features
without requiring the user to manually request the start.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
autocall
A feature that enables a machine to automatically dial a number to establish a
switched connection without user action. Contrast with autodial.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
autodial
A feature that enables a machine to dial a number automatically to establish a
switched connection when initiated by user action. Contrast with autocall.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
automatic bind
In Database Manager, a feature that automatically binds an invalidated access
plan without the user explicitly rebinding the application.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
baseband local area network (LAN)
A local area network (LAN) in which information is encoded, impressed, and
transmitted without shifting or altering the frequency of the information
signal. Contrast with broadband local area network (LAN).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
base operating system
The component of IBM OS/2 Extended Edition that manages system resources,
excluding Database Manager, Communications Manager, and LAN Requester.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Basic Input/Output System
In an IBM personal computer, code that controls basic hardware operations such
as interactions with diskette drives, fixed-disk drives, and the keyboard.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
basic transmission unit (BTU)
In SNA, the unit of data and control information passed between path control
components.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
baud
The unit of modulation rate of an analog signal transmitted between data
circuit-terminating equipment (DCE). In data communications each baud can
encode one or more binary bits of computer data. Typically one, two, or four
bits are encoded.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
beacon
In the IBM Token-Ring Network, a frame sent by an adapter indicating a serious
network problem, such as a broken cable.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
beaconing
To send beacon frames continuously.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
binary synchronous communication (BSC)
A form of telecommunication line control that uses a standard set of
transmission control characters and control character sequences for binary
synchronous transmission of binary-coded data between stations. Contrast with
Synchronous Data Link Control (SDLC).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
bind
1. In Systems Network Architecture (SNA), the request/response unit (RU)
   involved in activating a logical unit-logical unit (LU-LU) session.
2. The process whereby the output from the Database Services pre-compiler is
converted to an access plan.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
bind file
A file produced by the Database Services pre-compiler when the BIND option is
specified without the SYNTAX option. This file includes information on all
Structured Query Language (SQL) statements in the application program and is
used to later bind the application to a database.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
binding
The process of installing an application into a database. Binding is performed
either directly during an application program pre-compilation or through an
SQLBIND program execution that uses the output of a pre-compilation.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
BIOS
See Basic Input/Output System.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
block
1. A string of data elements recorded or transmitted as a unit.
2. To wait, usually for an input/output (I/O) event to complete or for a
resource to become available.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
bridge
In a local area network (LAN), a device that connects an IBM Token-Ring
Network and a PC Network. Contrast with gateway.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
bridge number
In LAN, a number that distinguishes parallel bridges (that is, bridges spanning
the same two rings).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
broadband local area network (LAN)
A LAN in which information is encoded, multiplexed, and transmitted with
modulation of a carrier. Contrast with baseband local area network (LAN).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
broadcast
A message sent to all computers on a network, rather than to specific users or
groups.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
broadcast topology
A network topology in which all attaching devices are capable of simultaneously
receiving a signal transmitted by any other attaching device on the network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
C & SM
See Communications and System Management.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
call accepted packet
A call supervision packet that the called data terminal equipment (DTE)
transmits to indicate to the data circuit-terminating equipment (DCE) that it
accepts the incoming call.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
call connected packet
A call supervision packet that the data circuit-terminating equipment (DCE)
transmits to indicate to the calling data terminal equipment (DTE) that the
connection for the call has been completely established.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
call request packet
A call supervision packet that the calling data terminal equipment (DTE)
transmits to request that a connection for a call be established through the
network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
callable programming interface (CPI)
The means by which other application programs can run Query Manager functions.
The execution results in return codes and status information being returned to
the application. Also, Query Manager screens may be presented to the user.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
called address
The network user address (NUA) of the called data terminal equipment (DTE).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
calling address
The network user address (NUA) of the calling data terminal equipment (DTE).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
candidate key
In Database Manager, a key that is a valid choice for a primary key.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
carrier
On broadband networks, a continuous frequency signal that can be modulated with
an information-carrying signal.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
cascading
The connecting of network controllers to each other in a succession of levels,
to concentrate many more lines than a single level permits.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
CCITT (Comite Consultatif International Telegraphique et Telephonique)
See the International Telegraph and Telephone Consultative Committee.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
character string delimiter
1. In Database Manager, the characters used to enclose character strings in
delimited ASCII (DEL) files that are imported or exported.
2. In Query Manager, the default is a double quotation mark.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
child process
In the OS/2 program, a dependent process that is created by another process.
Contrast with parent process.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
circuit switching
A process that, on demand, connects data terminal equipment (DTE) through
telephone switching equipment and permits the exclusive use of a data circuit
between them until the connection is released. Synonymous with line switching.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
class of service
In Systems Network Architecture (SNA), a designation of the path control
network characteristics, such as path security, transmission priority, and
bandwidth, that applies to a particular session. The end-user program
specifies the class of service when requesting a session by using a symbolic
name (mode_name) that Advanced Program-to-Program Communications (APPC) maps
into a list of virtual routes, any one of which can provide the requested level
of service for the session.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
clause
In Structured Query Language (SQL), a distinct part of a statement, such as a
WHERE clause.
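For example, the following statement contains SELECT, FROM, WHERE, and ORDER
BY clauses (a minimal sketch; the STAFF table and its columns are
illustrative names, not part of the product):

   SELECT NAME, SALARY          -- SELECT clause
     FROM STAFF                 -- FROM clause
     WHERE SALARY > 20000       -- WHERE clause
     ORDER BY SALARY DESC       -- ORDER BY clause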
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
clear to send (CTS)
A signal that is raised by the data circuit-terminating equipment (DCE) when it
is ready to accept data, usually in response to request to send (RTS) being
raised. ACDI will not transmit data without this circuit being raised. If
this circuit is lowered and remains lowered for more than 30 seconds, ACDI will
assume the connection is lost and will bring the connection down.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
clipboard
In Presentation Interface, an area of memory that holds data being passed from
one program to another.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
code page
1. A table that defines a coded character set by assignment of a character
meaning to each code point in the table for a language or country.
2. A mapping between characters and their internal (binary) representation.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
column data type
A data type used in Database Manager to specify the characteristics of a column
when defining a table for a database.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
column delimiter
In Database Manager, the character used to separate columns in delimited
ASCII (DEL) files that are imported or exported. The default is a comma.
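For example, a row in a delimited ASCII (DEL) file using the default
delimiters might look as follows (an illustrative sketch; the values are
hypothetical). The comma is the column delimiter and the double quotation
mark is the character string delimiter:

   "Sanders","Mgr",18357.50,20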
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
column function
1. In Database Manager, an operation performed on a column or columns that
produces one value from a set of values. A column function is expressed
in the form of a function name followed by an argument enclosed in
parentheses; for example, SUM(COMM+SALARY).
2. In Query Manager, the term expression is used.
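For example, the following statement applies the column functions SUM and AVG
to produce one value per department (a minimal sketch; the STAFF table and
its columns are illustrative):

   SELECT DEPT, SUM(COMM + SALARY), AVG(SALARY)
     FROM STAFF
     GROUP BY DEPT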
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
COM
A representation of one of the asynchronous serial communications ports
(COM1, COM2, and COM3) supported by the OS/2 program.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
commit
A process that causes data changed by an application or user to become a
permanent part of a database.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Common Services API
Application programming interface (API) verbs used to access services provided
by Communications Manager for user-written programs.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
common user access (CUA)
A part of Systems Application Architecture (SAA) that provides a set of
guidelines describing how information should be displayed on a screen and how
users interact with computers.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Communications and System Management (C & SM)
1. The process of coordinating operations over an entire communications
system.
2. In Communications Manager, the function that supports generating and
sending of alerts.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Communications Manager
A component of the OS/2 program that lets a workstation connect to a host
computer and use the host resources as well as the resources of other personal
computers to which the workstation is attached, either directly or through a
host. Communications Manager provides application programming interfaces
(APIs) so that users can develop their own applications.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
concurrency control
In Database Manager, a feature that allows multiple users to run database
transactions simultaneously without interfering with each other.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
concurrent sort
In Database Manager, a method of balancing sort memory usage in concurrent
environments so that resources and performance remain optimized during multiple
sorts.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
CONFIG.SYS
A file that contains configuration options for an OS/2 program installed on a
workstation. See also configuration file.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
configuration file
1. In Communications Manager, a file that describes the devices, optional
features, communications parameters, and programs installed on a
workstation.
2. In Database Manager, a file containing values that can be set to adjust
the performance of Database Manager.
3. For the base operating system, the CONFIG.SYS file that describes the
devices, system parameters, and resource options of a workstation. See
also CONFIG.SYS.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
configure
1. To prepare a workstation component or program for operational use.
2. To describe to a system the devices, optional features, and programs
installed on the system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
contention winner
The logical unit (LU) that can allocate a session without requesting permission
from the session partner LU. Contrast with contention loser.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
contention-loser polarity
In Advanced Program-to-Program Communications (APPC), the designation that a
logical unit (LU) is the contention loser for a session. Contrast with
contention-winner polarity.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
contention-winner polarity
In Advanced Program-to-Program Communications (APPC), the designation that a
logical unit (LU) is the contention winner for a session. Contrast with
contention-loser polarity.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
continuous carrier
On broadband networks, a condition in which a carrier signal is constantly
broadcast on a given frequency. No further information can be broadcast on
that frequency. Synonymous with hot carrier.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
control privilege
In Database Services, the authority to completely control a Database Services
object. This includes the authority to access, drop, or alter an object as
well as the authority to extend or revoke privileges on the object to other
users.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
conversation
In Advanced Program-to-Program Communications (APPC), a connection between two
transaction programs over a logical unit-logical unit (LU-LU) session that
allows them to communicate with each other while processing a transaction.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
conversation security
In Advanced Program-to-Program Communications (APPC), a process that allows
validation of a user ID or group ID and password before establishing a
connection.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
conversation security profile
The set of user IDs or group IDs and passwords that are used by Advanced
Program-to-Program Communications (APPC) for conversation security.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
conversation state
In Advanced Program-to-Program Communications (APPC), the state of a
conversation, which determines which verbs APPC allows a program to issue.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
conversation type
In Advanced Program-to-Program Communications (APPC), the designation of a
conversation as either a basic conversation or a mapped conversation.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
conversation verb
In Advanced Program-to-Program Communications (APPC), one of the verbs a
transaction program issues to perform transactions with a remote program.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Corrective Service Diskette (CSD)
A diskette provided by IBM to registered service coordinators for resolving
user-identified problems. This diskette includes program updates designed to
resolve problems.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
correlated reference
In Database Manager, the combined correlation name and column name referring to
a specific column within a SELECT statement.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
correlated sub-query
In Database Manager, a sub-query (part of a WHERE or HAVING clause) applied to
a row or group of rows of the table or view names in the outer SELECT
statement.
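For example, in the following statement the sub-query refers to the
correlation name S defined in the outer SELECT, so it is evaluated again for
each row of the outer table (a minimal sketch with illustrative names):

   SELECT NAME, SALARY
     FROM STAFF S
     WHERE SALARY > (SELECT AVG(SALARY)
                       FROM STAFF
                       WHERE DEPT = S.DEPT)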
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
CRC error detection
A system of error checking performed at both the sending and receiving
stations after a frame check sequence or block check character has been
accumulated.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
cursor stability
An isolation level that provides more concurrency than repeatable read. With
cursor stability, a unit of work holds locks only on its uncommitted changes
and the current row of each of its cursors.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
custom build
A feature of the OS/2 installation program that allows a user, database
administrator, system administrator, or network administrator to create a
custom build diskette for installing Database Manager, Communications
Manager, or LAN Requester. See also Custom Install mode.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
custom build diskette
A diskette created to be used for installing Database Manager, Communications
Manager, or LAN Requester.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
custom install diskette
A diskette created using the custom build feature of the OS/2 installation
program. A custom install diskette contains the specific features and device
drivers needed for installing Database Manager, Communications Manager, and LAN
Requester on one or more computers.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Custom Install mode
The mode used when installing any component of the OS/2 program with a custom
build diskette. See also custom build diskette.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Customer Information Control System (CICS)
An IBM-licensed program that enables transactions entered at remote terminals
to be processed concurrently by user-written application programs. It includes
facilities for building, using, and maintaining databases.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
data carrier detect (DCD)
This signal is raised by the data circuit-terminating equipment (DCE) when it
and the remote DCE have recognized each other's carrier signal and have
synchronized themselves. If this circuit is lowered and remains lowered for
more than 30 seconds, ACDI will assume the connection is lost and will bring
the connection down.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
data circuit-terminating equipment (DCE)
1. The equipment installed at the user's premises that provides all the
functions required to establish, maintain, and end a telephone connection
for data transmission, and which does the signal conversion and coding
between the data terminal equipment (DTE) and the line. See also modem.
2. For an X.25 packet switching network, the equipment in a data station that
provides the signal conversion and coding between the data terminal
equipment (DTE) and the line.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Data Definition Language (DDL)
In Database Manager, a series of Structured Query Language (SQL) commands used
to define objects.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
data interchange format (DIF)
A format that presents data in rows and columns.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
data link control (DLC)
1. In Systems Network Architecture (SNA), the protocol layer that consists of
the link stations that schedule data transfer over a link between two
nodes and perform error control for the link.
2. In Communications Manager, a profile containing parameters for a
communication adapter.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
data link layer
In Open Systems Interconnection architecture, the layer that provides the
functions and procedures used to provide error-free, sequential transmission of
data units over a data link.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
data set ready (DSR)
This signal is raised by the data circuit-terminating equipment (DCE) to
indicate that it is on-line and ready to begin communicating. Some DCEs use
this signal as a power-on indicator. ACDI expects this signal to be lowered
for a minimum of 100 ms after every connection is taken down. Failure to do
so may cause a warning message to be displayed instructing the user to ensure
the DCE has, in fact, gone on-hook.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
data terminal equipment (DTE)
The equipment that sends or receives data, or both.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
data terminal ready (DTR)
The on condition of the circuit connected to the RS232C modem indicating that
the terminal is ready to send or receive data.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
database
1. A systematized collection of data that can be accessed and operated upon
by an information processing system.
2. In Database Manager, a collection of information such as tables, views,
and indexes. With Query Manager, a database can also include such other
information as report forms, queries, panels, menus, and procedures.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
database administrator (DBADM)
1. An individual responsible for the design, development, operation,
security, maintenance, and use of a database.
2. In Database Manager, a user with database administrator (DBADM) authority.
   Such users can access, create, or alter database objects, and can grant or
   revoke the right to access these objects to other users or groups.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
database directory
A file maintained by Database Services that contains information about the
location of databases. A volume database directory exists on every OS/2 file
system where a database exists. A system database directory exists on the
drive where Database Services was installed. Synonymous with system database
directory. See indirect directory.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
database management system (DBMS)
A computer program that manages data by providing the services of centralized
control, data independence, and complex physical structures for efficient
access, integrity, recovery, concurrency control, privacy, and security.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Database Manager
A component of the OS/2 program consisting of Database Services and Query
Manager. Database Manager is based on the relational model of data and allows
users to create, update, and access databases.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
datagram
In NetBIOS, a particular type of information encapsulation at the network
layer of the adapter protocol. When a message is sent as a datagram, the
receiver of the message sends no acknowledgement for its receipt.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Datastream Compatibility (DSC) mode
The SNA LU 3 data stream used for printing.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
deallocate
To release a resource that is assigned to a specific task.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
dedicated server
A personal computer on a network that functions only as a server, not as both a
requester and a server.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
dependent logical unit (LU)
An LU controlled by a Systems Network Architecture (SNA) host system. A
dependent LU cannot send BIND commands.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
destination address field (DAF)
In Systems Network Architecture (SNA), a field in a FID0 or FID1 transmission
header that contains the network address of the destination. Contrast with
origin address field.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
device driver
The executable code needed to attach and use a device such as a display,
printer, plotter, or communications adapter.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
diagnostic tool
One of the OS/2 program utilities designed to gather and process data to help
identify the cause of a problem.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
direct privilege
In Database Manager, a privilege that is granted explicitly to a user.
Contrast with indirect privilege.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
disk operating system (DOS)
An operating system for computer systems that use disks and diskettes for
auxiliary storage of programs and data.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Distributed Function Terminal (DFT)
1. An operational mode that allows multiple concurrent logical terminal
sessions.
2. A hardware or software protocol used for communication between a terminal
and an IBM 3274/3174 control unit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
domain
1. A set of servers that allocates shared network resources within a single
logical system.
2. For database tables, the set of all possible valid values associated with
   a column.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
domain control database (DCDB)
A collection of information residing on the LAN domain controller that
describes the current domain.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
domain controller
A server within the domain that provides details of the OS/2 LAN Server to all
other servers and requesters on the domain. The domain controller is
responsible for coordinating and maintaining activities on the domain.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
domain definition
A list of network resources and users that can be printed out by a network
administrator.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
DOS
See disk operating system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
DOS mode
The mode that allows the base OS/2 operating system to run programs written for
DOS.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
double-byte character set (DBCS)
1. A set of characters in which each character is represented by 2 bytes.
2. A set of characters used by national languages such as Japanese and
Chinese that have more symbols than can be represented by the 256
single-byte positions. Each character is two bytes in length. Contrast
with single-byte character set.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
drive
1. The device used to read and write data on disks or diskettes.
2. In the OS/2 program, a diskette (created using the CREATEDD command) that
contains the contents of storage at a specified point in time.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
dump services
In Communications Manager, a menu-driven utility used to make a copy of a
portion of memory used by Communications Manager for analysis by IBM.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
duplex
Pertaining to communication in which data can be sent and received at the same
time. Synonymous with full-duplex. Contrast with half-duplex.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
dynamic link library (DLL)
A module containing a dynamic link routine (DLR) that is linked at load or run
time.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
dynamic link routine (DLR)
A program or routine that can be loaded by an application or as part of a
program.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
dynamic linking
In the OS/2 program, the linking of a program to a routine that is delayed
until load or run time.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
dynamic priority
In the OS/2 program, pertaining to a priority of a process that is varied by
the operating system. Contrast with absolute priority.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
dynamic SQL language (DSL)
The Structured Query Language (SQL) statements that are prepared and executed
within an application program while the program is running. In dynamic SQL, the
SQL source is contained in host language variables rather than being coded into
the application program. The SQL statement might change several times during
the application program's execution.
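For example, an application might build the statement text in a host variable
at run time and then prepare and execute it (a minimal sketch in embedded
SQL; the statement name, host variables, and table are illustrative):

   -- :stmt contains the text 'DELETE FROM STAFF WHERE ID = ?'
   EXEC SQL PREPARE S1 FROM :stmt;
   EXEC SQL EXECUTE S1 USING :id;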
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
embedded SQL
The Structured Query Language (SQL) statements embedded within a program and
prepared during the program preparation process before the program is executed.
See pre-compilation.
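For example, a static SQL statement coded in the application source and
processed by the pre-compiler might look as follows (a minimal sketch; the
table and host variables are illustrative):

   EXEC SQL SELECT SALARY
              INTO :salary
              FROM STAFF
              WHERE ID = :id;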
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
emulation
The imitation of all or part of one system by another so that the imitating
system accepts the same data, executes the same programs, and achieves the same
results as the imitated computer system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Emulator High-Level Language Application Programming Interface (EHLLAPI)
A Communications Manager Application Programming Interface that provides a way
for users and programmers to access the IBM 3270, IBM AS/400, or System/36 host
presentation space.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
end user
1. The ultimate source or destination of data flowing through an SNA network.
An end user can be an application program or a workstation operator.
2. The human user of a software program or product on a computer system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Enhanced Connectivity Facility (ECF)
A set of programs used for interconnecting IBM personal computers and IBM
System/370 host computers operating in the MVS/XA or VM/SP environment. These
ECF programs provide a method for sharing resources between workstations and
host systems.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
error log
A file that stores error information for later access. See log.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
exchange station ID (XID)
In Synchronous Data Link Control (SDLC), a control field command and response
for passing station IDs between a primary and secondary station.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
export
To copy data from Database Manager tables to an OS/2 file using PC/IXF, DEL, or
WSF formats. In addition, in Query Manager this refers to copying Query
Manager objects from a database to an OS/2 file. Contrast with import.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
extended binary-coded decimal interchange code (EBCDIC)
A coded character set consisting of 8-bit coded characters used by host
computers.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
external resource
A file, printer, or serial device resource supplied by a server outside the
current domain.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
external server
A server outside the domain that defines and controls domain resources.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
flow control
In communications, the process of controlling the flow of data that passes
between components of the network. See also pacing.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
foreign key
In Database Manager, a column or set of columns in a table whose values are
required to match at least one primary key value of a row of its parent table.
See primary key. See also referential constraint and referential integrity.
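For example, the following definition makes DEPTNO in EMPLOYEE a foreign key
whose values must match the primary key of the parent table DEPARTMENT (a
minimal sketch with illustrative names):

   CREATE TABLE EMPLOYEE
     (EMPNO  CHAR(6) NOT NULL PRIMARY KEY,
      DEPTNO CHAR(3),
      FOREIGN KEY (DEPTNO) REFERENCES DEPARTMENT)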
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
form
A Query Manager object containing the specifications for printing or displaying
a report.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
format identification (FID) field
A field in each transmission header (TH) that indicates the format of the TH.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
frame
1. In high level data link control (HDLC), the sequence of contiguous bits
bracketed by and including the opening and closing flag (01111110).
Frames are used to transfer data and control information across a data
link.
2. A data structure that consists of fields predetermined by a protocol for
the transmission of user data and control data. Synonymous with data
frame.
3. In X.25 packet switching data networks, the contiguous sequence of
eight-bit bytes delimited by beginning and ending flags. Frames are used
at the frame level (level 2) of the X.25 protocol to transport information
that performs control functions, data transfers, and transmission
checking.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
full-select
A SELECT statement (without ORDER BY or FOR UPDATE), or two or more SELECT
statements combined by using set operators (UNION, INTERSECT, or EXCEPT).
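For example, the following full-select combines two SELECT statements with
the UNION set operator (a minimal sketch with illustrative names):

   SELECT NAME FROM STAFF   WHERE DEPT = 20
   UNION
   SELECT NAME FROM MANAGER WHERE DEPT = 20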
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
gateway
In communications, a functional unit that connects two computer networks of
different network architectures. Contrast with bridge.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
graphics
A picture defined in terms of graphics primitives and graphics attributes.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
group access list
A list of groups and the associated access authorities for each group in the
list.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
group SAP
A single address assigned to a group of Service Access Points (SAPs).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
half-duplex
A mode in two-way communication where only one user transmits at a time.
Contrast with full-duplex.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
half-session
In SNA, a component that provides function management data services, data flow
control, and transmission control for one of the sessions of a network
addressable unit. See session, half-session, and secondary half-session.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
heap
An area of free memory available for dynamic allocation by a program. The size
of a heap varies, depending on the memory requirements of a program.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Help
A feature that provides assistance and information to the user.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
high-level language application programming interface (HLLAPI)
A software product that usually operates in conjunction with a terminal
emulator, such as 3270 terminal emulation, and allows interaction between a
host session and an application program running in a personal computer session.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
home fileset
1. In Database Services, the condition in which the system database directory
   is in the same OS/2 file system as the volume database directory. See
   database directory.
2. In OS/2 LAN Server, a files resource on a server that is automatically
   assigned when a user logs on. A home fileset is optional for OS/2 program
   users; IBM PC LAN program users are automatically assigned a home fileset.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
host computer
1. In a computer network, a computer providing services such as computation,
database access, and network control functions.
2. The primary or controlling computer in a multiple computer installation.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
IBM AS/400 PC Support
The IBM licensed program that provides AS/400 system functions to an attached
personal computer via the 5250 Workstation Feature (WSF).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
IBM Operating System/2 Extended Edition
A program that contains the features of OS/2 Standard Edition Version 1.2. In
addition, this program contains an advanced relational Database Manager
component, a Communications Manager component and a LAN Requester component
that provide inter-systems communications, improved connectivity, terminal
emulation, and access to shared network resources.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
IBM Operating System/2 LAN Server
A program that allows resources to be shared with other computers on the
network. See also server.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
IBM PC Network
IBM PC Network is a low-cost broadband network that allows attached IBM
personal computers, such as IBM Personal System/2, IBM 5150 Personal Computers,
IBM Personal Computer ATs, IBM PC XTs, and IBM Portable Personal Computers to
communicate and to share resources.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
IBM Token-Ring Network
IBM Token-Ring Network is a high speed, star-wired local area network to which
a variety of IBM products can be connected.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
IEEE 802.2 interface
An interface adhering to the 802.2 logical link control (LLC) Standard of the
Institute of Electrical and Electronics Engineers (IEEE). This standard is one
of several standards for local area networks approved by the IEEE.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
image
In OS/2 LAN Server, a binary file that is structured to look like the files
used during a normal machine initial program load (IPL). Images are used to
load software on machines that are not loaded from their own fixed disk or
diskette drives. Synonymous with IPL image.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
image definition
Details of an image that identify it to the domain and tell the local area
network (LAN) software that you intend to create an image.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
import
1. To copy data from OS/2 files into tables in a database. This can be done
using different options, such as INSERT, CREATE, or REPLACE_CREATE.
2. In Query Manager, import can be done using options such as APPEND or
REPLACE. In addition, Query Manager allows copying data from OS/2 files
into Query Manager objects. Contrast with export.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
IND$FILE
The default name for the IBM host file transfer program used by the host
computer to communicate with Communications Manager.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
independent LU
A logical unit (LU) that is not controlled by a Systems Network Architecture
(SNA) host system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
indirect directory
In Database Services, the condition in which the system database directory is
on a different OS/2 file system than the volume database directory. See
database directory.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
indirect privilege
In Database Manager, a privilege granted to users because they belong to a
group that has been explicitly granted the privilege. Contrast with direct
privilege. See also privilege.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
initial program load (IPL)
1. The initialization procedure that starts an operating system.
2. The process of loading programs and preparing a system to run jobs.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
input-output privilege level (IOPL)
A statement in the CONFIG.SYS file that enables certain application programs to
communicate directly with I/O devices.
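For example, a CONFIG.SYS file might enable I/O privilege globally or only
for named programs (an illustrative sketch; the program names are
hypothetical):

   IOPL=YES
   REM To grant I/O privilege only to specific programs, list them instead:
   REM IOPL=MYAPP.EXE,OTHERAPP.EXE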
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
interactive processing
1. A processing method in which each user action causes a response from a
program or the system.
2. In Database Manager, a method of processing that allows users to interact
with the Query Manager panels and menus while a procedure is running.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
intermediate node
A node that provides intermediate routing services in a Systems Network
Architecture (SNA) network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
International Organization for Standardization (ISO)
An organization of national standards bodies from various countries established
to promote development of standards to facilitate international exchange of
goods and services, and develop cooperation in intellectual, scientific,
technological, and economic activity.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
International Telegraph and Telephone Consultative Committee (CCITT)
An international organization that recommends and publishes standards for the
interconnection of communications equipment.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
interprocess communication
The exchange of information between processes.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
interrupt
A suspension of a process such as the execution of a computer program caused by
an event external to that process, performed in such a way that the process can
be resumed.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
join
In Database Manager, a relational operation that allows for retrieval of data
from two or more tables based on matching column values.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
join condition
In Database Manager, a condition where two tables are brought together and
compared; rows from one table are selected when columns from that table match
columns (over a condition) from the other table. In Query Manager Prompted
Query, this is called join tables.
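For example, the following statement joins two tables on matching department
numbers; the WHERE clause supplies the join condition (a minimal sketch with
illustrative names):

   SELECT E.NAME, D.DEPTNAME
     FROM STAFF E, ORG D
     WHERE E.DEPT = D.DEPTNUMB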
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
kernel
1. The part of an operating system that performs basic functions such as
allocating hardware resources.
2. In Database Services, the kernel is a relational command processor.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
keyboard mapping
A table or profile containing the definitions assigned to keys on a keyboard
for use in terminal emulation.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
keyboard remapping
A Communications Manager facility that allows users to change the key
assignments on the keyboard they are using in terminal emulation.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
kilobyte (KB)
A term meaning 1024 bytes.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
LAN adapter
A card installed in a personal computer to attach the computer to a local
area network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
LAN Requester
A component of the OS/2 program that allows users to access shared network
resources made available by OS/2 LAN Servers. See requester.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
LAN Server
See IBM OS/2 LAN Server.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
leased line
Synonym for non-switched line.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
link
1. The physical medium of transmission, the protocol, and associated devices
and programming used to communicate between computers.
2. To interconnect items of data or portions of one or more computer
programs, for example, the linking of object programs by a linkage editor,
or the linking of data items by pointers.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
link level
A part of CCITT's X.25 Recommendation that defines the link protocol used to
get data into and out of the network across the full-duplex link connecting the
subscriber's machine to the network node. Link access procedure (LAP) and link
access protocol-balanced (LAPB) are the link access protocols recommended by
the International Telegraph and Telephone Consultative Committee (CCITT).
Synonymous with frame level.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
link protocol
The rules for sending and receiving data at the link level.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Link Service Access Point (LSAP)
In the IBM Token-Ring Network, the logical point at which an entity in the
logical link control sub-layer provides services to the next higher layer.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
link station
1. In Systems Network Architecture (SNA), the combination of hardware and
software that allows a node to attach to and provide control for a link.
2. On a local area network (LAN), part of a service access point (SAP) that
enables an adapter to communicate with another adapter.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
local area network (LAN)
1. Two or more computing units connected for local resource sharing.
2. A network in which communications are limited to a moderate-sized
   geographic area, such as a single office building, warehouse, or campus,
   and do not extend across public rights-of-way.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
local database
A database physically located on the workstation in use. Contrast with remote
database.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
local initiation
In Advanced Program-to-Program Communications (APPC), a conversation allocated
by a local logical unit (LU).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
local logical unit
See logical unit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
local logical unit profile
See logical unit profile.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
local session identification
A field in a format identification 3 (FID3) field transmission header that
indicates the type of session and the local address of the directly attached
logical unit (LU) or physical unit (PU).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
local station address
In communications, the location of a station that is attached by a data channel
to a host node.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
local transaction program
In Advanced Program-to-Program Communication (APPC), the transaction program at
the local end of the conversation. Contrast with remote transaction program.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
local workstation
The workstation at which a user is sitting.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
lock
1. In Database Manager, (a) a means of serializing events or access to data;
(b) a Structured Query Language (SQL) statement used to acquire control of
tables prior to executing statements that use them.
2. In Communications Manager, a password-protection system that can be used
to prevent access to some advanced functions. Synonymous with keylock.
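For example, the SQL statement form of a lock acquires control of a table
before other statements use it (a minimal sketch; the table name is
illustrative):

   LOCK TABLE STAFF IN EXCLUSIVE MODE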
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
lock escalation
In Database Services, the response that occurs when the number of locks issued
exceeds the capacity specified in the database configuration. During a lock
escalation, locks are freed by converting many locks on records of a table
into a single lock on that table. This is repeated until enough locks are
freed by one or more processes.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
locking
The process by which Database Services ensures integrity of data. Locking
prevents users from accessing inconsistent data.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
log
1. A database object maintained by Database Services. It is a recovery log
created by the system that contains the information needed to rollback
cancelled transactions, complete import/export transactions and situations
in which a commit was started but not completed, and to rollback
transactions interrupted by system or application failures. See message
log and error log.
2. To record; for example, to log all messages on the system printer.
Synonymous with journal. See message log and error log.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
log record
In Database Services, a record of an update to a database performed during a
unit of work.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
logical connector
In Structured Query Language (SQL), a condition that connects expressions
within a WHERE or HAVING clause. The valid logical connectors are AND and OR.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
logical device
1. An input/output (I/O) device identified in a program by a label or number
that corresponds to the actual label or number assigned to the device.
Contrast with physical device.
2. In the OS/2 program, a redirected disk, file, printer, or other specific
device.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
logical link control (LLC)
The DLC.LAN sub-layer that provides two types of Data Link Control (DLC)
operation. The first type is connectionless service, which allows information
to be sent and received without establishing a link. The LLC sub-layer does not
perform error recovery or flow control for connectionless service. The second
type is connection-oriented service, which requires the establishment of a link
prior to the exchange of information. Connection-oriented service provides
sequenced information transfer, flow control, and error recovery.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
logical unit (LU)
In Systems Network Architecture (SNA), a port through which an end user
accesses the SNA network in order to communicate with another end user and
through which end users access the functions provided by system services
control points (SSCPs).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
logical unit profile
A logical unit profile is the set of parameters that define a local and partner
logical unit (LU).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
logical unit 6.2 (LU 6.2)
A particular type of Systems Network Architecture (SNA) logical unit (LU) that
provides a connection between resources and transactions programs running on
different network nodes. See Advanced Program-to-Program Communications
(APPC).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
lookup table
In Query Manager panel definition, a table from which columns can be presented
in the panel as output fields only.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
LU 0
An LU that uses the SNA Transmission Control and SNA Flow Control layers.
Higher-layer protocols are end-user and product defined.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
LU 1
In SNA, a type of LU defined for application program communication with
single- or multiple-device data processing workstations (printers or RJE
stations). The data stream conforms to SNA Character String (SCS) or Document
Content Architecture (DCA). See also LU 2 and LU 3.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
LU 2
In SNA, a type of LU for an application program that communicates with a single
display workstation in an interactive environment using the SNA 3270 data
stream.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
LU 3
In SNA, a type of LU that communicates with a single printer using the SNA 3270
data stream. See Datastream Compatibility (DSC) mode.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
LU 6.2
Also known as APPC. Supports sessions between two applications.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
LU-LU session
In Systems Network Architecture (SNA), a session between two logical units
(LUs) in an SNA network. It provides communication between two end users, or
between an end user and an LU services component.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
MAC service data unit (MSDU)
The MSDU consists of the LPDU (the DSAP and SSAP address fields, the control
field, and the LPDU information field, if present) and the routing information
field (if the destination station is located on a different ring).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
machine ID
In OS/2 LAN Server, a unique name of up to eight characters that identifies a
computer to the network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Main Frame Interactive (MFI) presentation space
The 3270 terminal emulation presentation interface.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
mapped conversation
In Advanced Program-to-Program Communications (APPC), a conversation between
two transaction programs using the APPC mapped conversation application
programming interface (API). In typical situations, end-user transaction
programs use mapped conversation and service transaction programs use basic
conversations. However, either type of program may use either type of
conversation. Contrast with basic conversation.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
mapped conversation verb
A verb that a transaction program issues when using the Advanced
Program-to-Program Communications (APPC) mapped conversation application
programming interface (API). Contrast with basic conversation verb.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
master key
An access password used to unlock the keylock for Communications Manager. A
user who has access to the master key can also change the service key. See also
lock.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Media Access Control Service Access Point (MSAP)
In the IBM Token-Ring Network, the logical point at which an entity in the
medium access control (MAC) sublayer provides services to the logical link
control sublayer. See adapter address.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
medialess requester
A requester without a disk drive.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
medium access control (MAC)
In local area network (LAN), the sub-component of IEEE 802.2 application
programming interface (API) that supports medium-dependent functions and uses
the services of the physical layer to provide services to logical link control.
Medium access control (MAC) includes the medium access port.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
medium access control (MAC) frame
The frame that controls the operation of the IBM Token-Ring Network and any
ring station operations that affect the ring.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
medium access control (MAC) protocol
In a local area network (LAN), the part of the protocol that governs access to
the transmission medium independently of the physical characteristics of the
medium, but taking into account the topological aspects of the network in order
to enable the exchange of data between data stations.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
medium access port
A hardware-addressable component (such as a communication adapter) of an
Systems Network Architecture (SNA) node by which the node has access to a
transmission medium and through which data passes into and out of the node.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
message log
A file used to save or log certain types of messages and status information.
See log.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
message queue
A sequenced collection of messages waiting to be read by the application.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
messaging name
A name under which messages can be received.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Micro Channel
The architecture used by IBM Personal System/2. Synonymous with advanced I/O
channel.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
migration
The process of converting an earlier released Database Manager database to a
current Database Manager database. This allows you to acquire the capabilities
of the current or new database without losing the data you created on the
earlier released database.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
mode name
In Advanced Program-to-Program Communications (APPC), a name that a program
uses to request a specific set of network properties of a session the program
wants to use for a conversation. These properties include, for example, the
highest synchronization level for conversations on the sessions, the class of
service for the sessions, and the session routing and delay characteristics.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
model profile
In Communications Manager, a supplied configuration profile with pre-configured
options intended for use in the creation of a new profile.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
modem
A device that converts digital data from a computer to an analog signal that
can be transmitted on a telecommunication line, and converts the received
analog signal to digital data for the computer. Contrast with digital data
service adapter. Synonymous with modulator/demodulator. See also data
circuit-terminating equipment.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
module
A discrete programming unit that usually performs a specific task or set of
tasks.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
module definition file
A file used at link-edit time that describes the attributes for the executable
file being built (for example, load-on-call or pre-load attributes for
segments).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
mouse
A device used to move a pointer on the screen.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
MSAP
See Media Access Control Service Access Point.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
multipoint line
In data communications, pertaining to a network that allows two or more
stations to communicate with a single system on one line.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
multistation access unit (MAU)
In the IBM Token-Ring Network, a wiring concentrator that can connect up to
eight lobes to a ring network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
multitasking
A mode of operation that provides for concurrent performance or interleaved
execution of two or more tasks.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
NAU
See network addressable unit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
NCP
See Network Control Program.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
negotiable BIND
In Systems Network Architecture (SNA), a request unit (RU) that can enable two
logical unit-logical unit (LU-LU) half-sessions to negotiate the parameters of
a session when the LUs are activating the session.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
NetBIOS
An application programming interface (API) between a local area network adapter
and programs.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
netname
The name used in conjunction with the server name to identify a resource on the
network when it is shared. See also universal naming convention (UNC).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
network
A configuration of data processing devices and software connected for
information interchange.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
network address
An address, consisting of subarea and element fields, that identifies a link, a
link station, or a network addressable unit. Subarea nodes use network
addresses; peripheral nodes use local addresses. The boundary function in the
subarea node to which a peripheral node is attached, pairs local addresses with
network addresses and vice versa.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
network addressable unit (NAU)
In Systems Network Architecture (SNA), a logical unit, physical unit, or system
services control point (SSCP). An NAU is the origin or the destination of
information transmitted through the path control network. See also network
name.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
network administrator
The person responsible for the installation, management, control, and
configuration of a network. The network administrator defines the resources to
be shared and user access to the shared resources, and determines the type of
access those users can have.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Network Control Program (NCP)
An IBM-licensed program that provides communications controller support for
single domain, multiple domain, and interconnected network capability.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
network management
In the IBM Token-Ring Network, the conceptual control element of a data station
that interfaces with all of the layers of that data station and is responsible
for the resetting and setting of control parameters, obtaining reports of error
conditions, and determining if the station should be connected to or
disconnected from the medium.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
network management vector transport (NMVT)
The format used for C & SM data, such as alerts and link statistics.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
network name
In Systems Network Architecture (SNA), a symbolic name by which end users refer
to a network addressable unit (NAU), a link station, or a link. See also
network addressable unit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
network printer
A printer that is recognized by the host even after a 5250 emulation session
for that printer is no longer active. A job to be printed can be sent to the
spooler of a network printer, and it will be printed when the session is
activated.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Network Problem Determination Application (NPDA)
A licensed program that helps the user identify network problems from a central
control point using interactive display techniques.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
network user address (NUA)
Up to 15 decimal digits that serve to identify data terminal equipment (DTE).
The first four digits (digits 0 to 3) of an NUA are known as the data network
identification code (DNIC); they identify the country and the service within
the country. Digits 4 to 12 indicate the national number. The final two digits
may be used for a subaddress.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
nickname
In Database Manager, an alternative name by which a cataloged database can be
referred to. A nickname allows a user with SYSADM authority to catalog a
database and give it another name for use on the local workstation.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
node
An endpoint of a communications link or a junction common to two or more links
in a network. Nodes can be processors, controllers, or workstations. See
peripheral node.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
node address
The address of a node in a network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
node directory
A directory that contains the entries for all nodes referenced in the
database directories on its particular node. The information in this
directory is used for all communication network connections.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
node type
A designation of a node according to the protocols it supports and the network
addressable units (NAUs) that it can contain. Five types are defined: 1, 2.0,
2.1, 4, and 5. Type 1, type 2.0, and type 2.1 nodes are peripheral nodes; type
4 and type 5 nodes are subarea nodes.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
non-delimited ASCII format
In Database Manager, a file format used to import data. It is a sequential
ASCII file with row delimiters used for data exchange with any ASCII product.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
non-switched line
A connection between computers or devices using telephone switching equipment
that does not have to be established by dialing. See leased line. Contrast with
switched line.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
NPDA
See Network Problem Determination Application.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
NUA
See network user address.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
null character
1. A character defined in Query Manager profiles and used throughout Query
Manager functions to indicate a field that has no value.
2. The character hex 00, used to represent the absence of a printed or
displayed character.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
numbered frames
The information segments that are arranged in numbered order for
accountability.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
OAF
See origin address field.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
object
A table, view, index, query, form, procedure, panel, or menu created or
manipulated by using Database Manager.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
object code
The output from a compiler or assembler, which is itself executable machine
code or is suitable for processing to produce executable machine code.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
object file
The machine-level program produced as the output of an assembly or compiled
operation. Synonymous with object module.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
object module
Synonymous with object file.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
object name
A sequence of characters identifying an object created by a Database Manager
user.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
object names menu
In Query Manager, a menu listing objects such as tables and views, queries,
forms, procedures, panels, or menus.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
octet
A byte composed of eight binary elements.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
odd parity
A data transmission attribute in which the parity bit of a character frame is
set so that the sum of the digits in the character with the parity bit is odd.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
OIA
See operator information area.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
one-byte checksum error detection
An error-checking method in which the sum of a group of data items is
associated with the group for checking purposes. The data items are either
numerals or other character strings regarded as numerals during the
computation of the checksum.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Operating System/2 Extended Edition (OS/2)
See IBM Operating System/2 Extended Edition (OS/2).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Operating System/2 LAN Server
See IBM Operating System/2 LAN Server.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
operational status
In Query Manager, the status of the system or of databases, requested by a
user with SYSADM or DBADM authority. This status information can be displayed
or printed to help the user diagnose database problems or tune database
performance.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
operator information area (OIA)
In 3270 terminal emulation, the bottom line of the screen where status about
the communication session is displayed.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
operator information line
In ASCII terminal emulation, the bottom line of the screen used to display
messages and status information.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
optimization
The determination of the most efficient access strategy for satisfying a
database access.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
origin address field (OAF)
In Systems Network Architecture (SNA), a field in a FID0 or FID1 transmission
header that contains the address of the originating network addressable unit
(NAU). Contrast with destination address field (DAF).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
originator
In Communications Manager, a component or user application reporting an error.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
OS/2 Extended Edition
IBM Operating System/2 Extended Edition.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
OS/2 LAN Server
See IBM Operating System/2 LAN Server.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
OS/2 screen group
See screen group.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
output rows
In Query Manager, the rows of data from a table or tables resulting from a
query.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
pacing
In data communications, a technique by which receiving equipment controls the
transmission of data by sending equipment to prevent overrun. See also flow
control. See receive pacing and send pacing.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
pacing character
An indicator that signals that the receiving component is ready to accept
additional data.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
pacing group
In Systems Network Architecture (SNA), the path information units (PIUs) that
can be transmitted on a virtual route before a virtual-route pacing response is
received, indicating that the virtual route receiver is ready to receive more
PIUs on the route.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
pacing interval
1. In Systems Network Architecture (SNA), the duration between requests that
can be transmitted on the normal flow in one direction on a session before
a session-level pacing response is received, indicating that the receiver
is ready to accept the next group of requests.
2. In asynchronous communications, the duration enforced by the sender
between successive lines of data.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
pacing response
An indicator that signifies the readiness of a receiving component to accept
another pacing group; the indicator is carried in a response header (RH) for
session-level pacing, and in a transmission header (TH) for virtual-route
pacing.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
pacing window size
In Systems Network Architecture (SNA), the number of request units (RUs) that a
program can send before getting permission to send more.
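The following C sketch, added only as a simplified illustration of the idea of
a send window, is not part of the original text; the functions send_ru and
wait_for_pacing_response are hypothetical placeholders, not calls from any SNA
or Communications Manager programming interface:
    #include <stdio.h>

    /* Hypothetical stand-ins for transmitting a request unit and waiting
       for a pacing response; they are not real API calls. */
    static void send_ru(int n)
    {
        printf("send RU %d\n", n);
    }

    static void wait_for_pacing_response(void)
    {
        printf("wait for pacing response\n");
    }

    /* Send total_rus request units, pausing for a pacing response after
       every window_size units have been sent. */
    static void send_with_pacing(int total_rus, int window_size)
    {
        int i;
        int sent_in_window = 0;
        for (i = 0; i < total_rus; i++) {
            send_ru(i);
            sent_in_window++;
            if (sent_in_window == window_size) {
                wait_for_pacing_response();
                sent_in_window = 0;
            }
        }
    }

    int main(void)
    {
        send_with_pacing(5, 2);   /* pacing window size of 2 */
        return 0;
    }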
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
packet
In data communication, a sequence of binary digits, including data and control
signals, that is transmitted and switched as a composite whole.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
packet level
A part of CCITT Recommendation X.25 that defines the protocol for establishing
logical connections between data terminal equipment (DTE) and data
circuit-terminating equipment (DCE), and for transferring data on these
connections.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
packet switching
The process of routing and transferring data by means of addressed packets so
that a channel is occupied only during the transmission of a packet. On
completion of the transmission, the channel is made available for the transfer
of other packets.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
packet switching data network (PSDN)
A communications network that uses packet switching as a means of transmitting
data.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
packet window
A specified number of packets that can be sent by the data terminal equipment
(DTE) device before it receives an acknowledgement from the receiving station.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
packet-level data circuit-terminating equipment
In a packet switching data network, the equipment at the exchange that manages
the network connection side of the protocol at the packet (or network) level.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
page
1. In Database Manager, a unit of storage within a table or index whose size
is 4KB.
2. In a virtual storage system, a fixed-length block that has a virtual
address and is transferred as a unit between memory and disk storage.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
parallel session
In Systems Network Architecture (SNA), two or more concurrently active sessions
between the same two logical units (LUs). Each session can have different
session parameters. Contrast with single session.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
parent process
A process that creates another process (called a child process).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
parent row
A row of a parent table that has at least one dependent row.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
parent table
The table in a relationship containing the primary key that defines the
relationship with a dependent table. A table can be a parent in an arbitrary
number of relationships.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
parity
The determination whether the number of ones (or zeros) in an array of binary
digits is odd or even.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
parity bit
A binary digit appended to a group of binary digits to make the sum of all the
digits, including the appended binary digit, either odd or even as
pre-established.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
partner logical unit (LU)
In Systems Network Architecture (SNA), the remote participant in a session.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
partner transaction program
An Advanced Program-to-Program Communications (APPC) transaction program
located at the remote partner.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
path
1. The route used to locate files on a disk or diskette, consisting of a
    drive and directories. Synonymous with absolute path name.
2. In the IBM Token-Ring Network, a route between any two nodes.
3. In Systems Network Architecture (SNA), the set of data links, data link
    control layers, and path control layers that a path information unit
    travels through when sent from transmission control of one half-session to
    transmission control of another half-session.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
path control layer
The layer that routes all messages to data links and half-sessions.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
path control network
The routing portion of a Systems Network Architecture (SNA) network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
path information unit (PIU)
In Systems Network Architecture (SNA), a message unit consisting of a
transmission header (TH) alone, or a TH followed by a basic information unit
(BIU) or a BIU segment. See also transmission header.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
PC Network
See IBM PC Network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
PC/IXF
See personal computer/integrated exchange format.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
PCLP
See personal computer local area network program.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
peer-to-peer
The communication between two Systems Network Architecture (SNA) logical units
(LUs) that is not managed by a host.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
peripheral node
In Systems Network Architecture (SNA), a node that has no intermediate routing
function, and is dependent upon an intermediate or host node to provide certain
network services for its dependent logical units (LUs). See node.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
permanent virtual circuit (PVC)
A virtual circuit that has a logical channel permanently assigned to it at each
data terminal equipment (DTE). A call establishment protocol is not required. A
permanent virtual circuit is the packet network equivalent of a leased line.
See switched virtual circuit and virtual circuit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
personal computer local area network program (PCLP)
This product provides the ability to share programs, data, and printer
resources among multiple personal computers connected to an IBM Token-Ring
Network, IBM PC Network, or IBM PC Network Baseband.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
personal computer/integrated exchange format (PC/IXF)
An OS/2 file format used to export and import table data.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
physical level
A standard that defines the electrical, physical, functional, and procedural
methods used to control the physical link running between the data terminal
equipment (DTE) device and the data circuit-terminating equipment (DCE) device.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
physical unit (PU)
In Systems Network Architecture (SNA), the component that manages and monitors
the resources of a node, as requested by a system services control point (SSCP)
using a system services control point-physical unit (SSCP-PU) session. Each
node of an SNA network contains a physical unit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
physical unit type
The classification of a physical unit according to the type of node in which it
resides. The physical unit type is the same as its node type; that is, a type 1
physical unit resides in a type 1 node, and so on.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
PIU
See path information unit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
plan
Synonymous with access plan.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
plan name
The name of an access plan. A Database Services access plan is the output from
the bind process.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
PLU
See primary logical unit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
point of consistency
In Database Manager, a point in time when all the recoverable data a program
accesses is consistent. The point of consistency occurs when updates, inserts,
and deletions are either committed to the physical database or rolled back (not
committed and discarded). Synonymous with commit point and sync point. See
rollback.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
point-to-point
Pertaining to data transmission between two locations without use of any
intermediate terminal or computer.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
point-to-point line
A communications line that connects a single remote station to a computer.
Contrast with multipoint line.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
poll
To determine if any remote device on a communications line is ready to send
data.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
polling
1. The process whereby stations are invited, one at a time, to transmit.
2. The process whereby a controlling station contacts the attached devices to
avoid contention, to determine the status of operations, or to determine
readiness to send or receive data.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Post Telephone and Telegraph Administration (PTT)
A generic term for a government-operated common carrier service in countries
other than the U.S. and Canada. Examples of the PTT are the Bundespost in
Germany, and the Nippon Telephone and Telegraph Public Corporation in Japan.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
precision attribute
In Database Manager, the total number of digits in a decimal type column. The
precision cannot be greater than 31, and it must be odd. If precision is
specified as even, it is rounded up to the next odd value by Database Services.
See scale attribute.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
pre-compilation
The processing of a program containing Structured Query Language (SQL)
statements that takes place before a compile starts. SQL statements are
replaced with statements that are recognized by the host language compiler.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
pre-compiler
A program supporting pre-compilation of application programs with embedded
Structured Query Language (SQL) statements.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
predicate
In Database Manager, an element of a search condition expressing a comparison
operation.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Presentation Interface
An application programming interface (API) that allows users to write graphics
applications.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Presentation Manager
The interface of the OS/2 program that presents, in windows, a graphics-based
interface to applications and files installed and running on the OS/2 program.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
presentation space
1. The space that contains the device-independent definition of a picture.
2. In EHLLAPI, an area in memory that corresponds to a screen image.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
presentation space ID (PSID)
Synonymous with short name.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
primary half-session
In Systems Network Architecture (SNA), the half-session on the node that sends
the session activation request. See also primary logical unit. See half-session
and secondary half-session.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
primary index
An index that enforces the uniqueness of a primary key. Only one primary index
may exist for any given table.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
primary key
A column or an ordered set of columns, containing non-null values, whose values
uniquely identify a row. To be unique, a value cannot be duplicated in any
other row. These columns are identified as the primary key in the table
definition. The values in these columns are known as primary key values. A
table cannot have more than one primary key. See foreign key. See also
referential constraint.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
primary link station
In Systems Network Architecture (SNA), the link station on a link responsible
for control of the link. Contrast with secondary link station.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
primary logical unit (PLU)
In Systems Network Architecture (SNA), the logical unit (LU) that contains the
primary half-session for an LU-LU session; it is the LU that sent the bind.
See secondary logical
unit. See also secondary half-session and primary half-session.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
printer nickname
In Query Manager, a file that specifies which printer ID, printer type, and
page size to use when printing.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
privilege
In Database Manager, the right or authority to access a specific database
object in a specific way. These rights are controlled by users with SYSADM
(system administrator) authority or database administrator authority, or
creators of objects. Privileges include rights such as creating, deleting, or
browsing tables, or connecting to a database. See also direct privilege and
indirect privilege.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
privilege level
1. Privilege level is a protection mechanism of the processor that provides
four hierarchical protection levels to ensure program reliability. At any
one time, a task executes at one of the four levels. A task executing at
one level cannot access data at a more privileged level, nor can it call a
procedure at a less privileged level. The most trusted service procedures
occupy the higher levels (levels 0, 1, and 2) while the less trusted
application programs are placed at the lowest level of privilege (level
3).
2. In Database Manager, the degree or extent to which a user can access
database objects. For example, a user with SYSADM (system administrator)
authority may have more privileges than a database administrator.
3. This term is also used by the base operating system to indicate the level
    of resources to which a piece of code has access. See also direct
    privilege and indirect privilege.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
procedure (PROC)
1. In a programming language, a block of code, with or without formal
parameters, whose execution is invoked by means of a procedure call.
2. In Database Manager, a set consisting of Query Manager commands, panel
commands, procedure language statements, or all of these. A procedure
allows a single command to initiate operations.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
procedure language statements
In Query Manager, the programming statements that are used in procedures.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Procedures Language 2/REXX
A superset of the SAA Procedures Language.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
professional office system (PROFS)
A facility that allows users to receive, create, send, store, and search for
information within an office environment.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
profile
1. An object that contains information about the characteristics of a
computer system or application.
2. In Communications Manager, a part of a configuration file.
3. In Query Manager, a file that contains Query Manager defaults.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
PROFS
See professional office system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
prompted interface
An interface consisting of messages, menus, pull-downs, and panels that guide
the user through the steps necessary to perform a task.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
prompted query
In Query Manager, a series of prompts, menus, panels, messages, and helps used
to define queries.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Prompted View
A Prompted View prompts the user through a fixed series of steps for creating a
view definition. A view is an alternative representation of data selected from
existing tables or other views. The view can rename and rearrange columns, omit
unwanted columns or rows, define columns by expressions, group results, and
combine more than one table.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
protocol converter
A device that allows terminals to communicate with host systems by converting
one data stream to another data stream; for example, ASCII data stream to IBM
3270 data stream.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
protocol handler
The programming in an adapter that encodes and decodes the protocol used to
send data over a network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
PSDN
See packet switching data network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
PSID
Synonymous with short name.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
PTT
See Post Telephone and Telegraph Administration.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
PU
See physical unit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
PUBLIC
In Database Manager, the authority for an object granted to all users.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
PVC
See permanent virtual circuit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Q-bit
See qualifier-bit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
QBE
See query-by-example.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
QLLC
See qualified logical link control.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
QMF
See Query Management Facility.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
qualified logical link control (QLLC)
A logical link control protocol that allows the transfer of data link control
information between two adjacent Systems Network Architecture (SNA) nodes that
are connected through an X.25-based packet-switching data network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
qualifier
In Database Manager, a short identifier used to logically group objects
together. In previous versions of Database Manager, this was known as
authorization ID.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
qualifier-bit (Q-bit)
A bit in a data packet header indicating the type of information contained in
the packet. A 1-bit indicates control information, and a 0-bit indicates data.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
query
A request for information from the database based on specific conditions; for
example, a request for a list of all customers in a customer table whose
balance is greater than $1000.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Query Management Facility (QMF)
An IBM database management tool that provides extensive interactive query and
report-writing support. It runs under the control of the Interactive System
Productivity Facility (ISPF), which in turn runs under Virtual Machine (VM/CMS)
or Time Sharing Option (TSO) on host computers.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Query Manager
The part of Database Manager that provides menus, panels, pull-downs, and
messages to assist, for example, in creating databases, editing data,
generating reports, and making changes to Database Services configuration
files. Query Manager also provides customization facilities such as panels,
menus, and procedures.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Query Manager Callable Interface
In Query Manager, the feature that provides application programs with a
mechanism for invoking Query Manager functions. This is done through the use
of a callable programming interface (CPI).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
query-by-example (QBE)
A language used to write queries graphically.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
queued attach
In Advanced Program-to-Program Communications (APPC), an incoming allocate
request that is queued by Attach Manager until the transaction program named in
the request issues a RECEIVE_ALLOCATE verb.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
RDS
See Remote Data Services.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
rebind
In Database Services, to create a new access plan for a program previously
bound. For example, if an index is added for a table accessed by a program
(and statistics have been updated), the program must be rebound in order for it
to use the new index.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
receive
1. To obtain a message or file from another computer. Contrast with send.
2. In Communications Manager, the command used to transfer a file from a
host.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
receive pacing
In Systems Network Architecture (SNA), the pacing of message units being
received by a component. See pacing. Contrast with send pacing.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Recommendation X.25
See X.25.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
record
1. A set of data treated as a unit.
2. In Database Manager, the storage representation of a single row of a
table.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
record format
The definition of how data is structured in the records contained in a file.
The definition includes record names, field names, and field descriptions, such
as length and data type.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
recovery
1. The act of resetting a system or data stored in a system to an operable
state following damage.
2. In Database Manager, the process of rebuilding databases after a system
failure.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
recovery log
A collection of records describing the sequence of events that occur while
running Database Manager. The information is used for recovery in the event of
a system failure while Database Manager is running.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
redirection
The assigning of a local device name to a remote shared resource on the
network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
referential constraint
In Database Manager, an assertion that non-null values of a designated foreign
key are valid only if they also appear as values of the primary key of a
designated primary table. See also foreign key, primary key, and referential
integrity.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
referential integrity
In Database Manager, the enforcement of referential constraints on insert/add,
update/change, and delete operations for database tables. See also referential
constraint.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
relational database
1. A database that is organized and accessed according to relationships
between data items.
2. A data structure perceived by its users as a collection of tables.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
relative path name
In Database Services, a path name to a resource that does not begin with a
drive designation. The resource is assumed to be relative to the home
directory.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Reliability, availability, and serviceability (RAS) programs
The programs that facilitate problem determination.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
remote
1. Pertaining to a system, program, or device that is accessed through a
telecommunication line.
2. In Advanced Program-to-Program Communications (APPC), indicates that the
partner logical unit (PLU) or transaction program is not at the local
node.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
remote controller
A device or system, attached to a communications line, that controls the
operation of one or more remote devices. Contrast with local controller.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Remote Data Services (RDS)
Remote Data Services enables an application using the Database Services
Application Programming Interface to access the Database Services and a
database on a remote workstation. The application does not need to know the
physical location of the database. Remote Data Services determines the
database location and manages the transmission of the request to the Database
Services and the reply back to the application.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
remote database
A database physically located on some workstation other than the one currently
in use. Contrast with local database.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
remote device
A device whose controller is connected to a system by a communications line.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
remote equipment
The modem and controller that provide the communications connection between a
communications line and a remote device or system. This remote equipment is at
the other end of a data link from the host system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
remote initial program load (RIPL)
The initial program load of a remote requester by a server on which the
appropriate image is located.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
remote initiation
In Advanced Program-to-Program Communications (APPC), a process by which a
conversation is allocated by a remote logical unit (LU).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
remote IPL
See remote initial program load.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
remote IPL requester
A DOS machine requiring its startup from a remote IPL (initial program load)
server.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
remote IPL server
A server that provides remote IPL (initial program load) support for one or
more remote IPL requesters.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
remote transaction program
In Advanced Program-to-Program Communications (APPC), the transaction program
at the remote end of the conversation. Contrast with local transaction program.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
remote workstation
A workstation that is indirectly connected to the system and needs data
transmission facilities. See also remote equipment. Contrast with local
workstation.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
repeatable read
In Database Services, the isolation level providing maximum protection from
other active application programs. When a program uses repeatable read
protection, rows referenced by the program cannot be changed by other programs
until the program reaches a commit point.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
report
In Query Manager, the displayed or printed data generated by a query and
formatted by a form.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
request
In Systems Network Architecture (SNA), a message unit that signals initiation
of an action or protocol.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
request header (RH)
A three-byte header preceding a request unit (RU). See request/response header.
Contrast with response header.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
request to send (RTS)
This signal is raised by ACDI prior to establishing a connection, and it is
lowered when the connection is brought down. This signal works in concert with
data terminal ready (DTR) in that it is always raised after DTR and lowered
before DTR.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
request unit (RU)
1. In Systems Network Architecture (SNA), a message unit that signals
initiation of a particular action or protocol.
2. A message unit that contains control information such as a request code,
function management header (FMH), end-user data, or a combination of these
types of information.
3. In Systems Network Architecture (SNA), a term used for either a request
    unit or a response unit. If positive, a response unit may contain
    additional information (such as session parameters in response to BIND
    SESSION); if negative, it contains sense data defining the exception
    condition. Contrast with response unit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
request/response header (RH)
Control information preceding a request/response unit that specifies the type
of request/response unit and contains information associated with that unit.
See also request unit. See request header and response header.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
requester
1. In Server-Requester Programming Interface (SRPI), the application program
    that relays a request to a host computer. Contrast with server.
2. A computer that accesses shared network resources made available by other
computers running as servers on the network. See LAN Requester.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
reset packet
A packet used to reset a virtual circuit at the interface between the data
terminal equipment (DTE) and the data circuit-terminating equipment (DCE).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
response header (RH)
A header, optionally followed by a response unit, that indicates whether the
response is positive or negative and that may contain a pacing response. See
request/response header. Contrast with request header.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
response unit (RU)
In Systems Network Architecture (SNA), a message that acknowledges a request
unit. Contrast with request unit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
restart procedure
A procedure used by data terminal equipment (DTE) devices or data
circuit-terminating equipment (DCE) devices to clear all virtual calls and
reset all permanent virtual circuits.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
RESTRICT
In Database Manager, a referential constraint that can be applied to the DELETE
rule. RESTRICT prevents deletion of any rows of the parent table that have
dependent rows.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
results table
In Database Manager, the set of rows resulting from a query on one or more base
tables or views.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
reverse charging
An X.25 optional facility that allows a data terminal equipment (DTE) device to
request that the cost of a communications session be charged to the DTE that is
called.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
reverse charging acceptance
An X.25 optional facility that enables data terminal equipment (DTE) to receive
incoming calls that request reverse charging.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
revoke
To revoke is to remove access or authority from a user or a group ID.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
rollback
In Database Services, the process of restoring data changed by Structured Query
Language (SQL) statements to the state at its last commit point. See point of
consistency.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
root table
In Query Manager panel definition, the table on which the panel is based. See
also sub-table.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
rounding rule
In Query Manager, the rules that determine whether decimal numbers are rounded
up or down. For example, the digits 1-4 may be rounded down and the digits 5-9
rounded up, or the digits 1-5 rounded down and the digits 6-9 rounded up,
depending on the currently active profile.
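The following C sketch, added only for illustration (it is not part of Query
Manager), applies the two rules to the first dropped digit of a non-negative
value:
    #include <stdio.h>

    /* Round a non-negative value to a whole number using its first dropped
       digit: with rule_b set to 0, the digits 5-9 round up ("1-4 down, 5-9
       up"); with rule_b set to 1, only the digits 6-9 round up ("1-5 down,
       6-9 up"). */
    static long round_whole(double value, int rule_b)
    {
        long whole = (long)value;
        int dropped = (int)((value - (double)whole) * 10.0);
        int threshold = rule_b ? 6 : 5;
        return (dropped >= threshold) ? whole + 1 : whole;
    }

    int main(void)
    {
        /* 2.5 becomes 3 under the first rule but 2 under the second. */
        printf("2.5 rounds to %ld or %ld\n",
               round_whole(2.5, 0), round_whole(2.5, 1));
        return 0;
    }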
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
routing
In communications, the assignment of the path by which data will reach its
destination.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
routing table
In X.25 support, a table that controls the routing of incoming calls to
applications.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
row
In Database Manager, the horizontal component of a table consisting of a
sequence of values, one for each column of the table.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
row lock
A row lock prevents two applications that are accessing the same database from
updating the same row at the same time.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
row pool
In Query Manager, a type of holding area or buffer for rows retrieved from a
database. Query Manager allows a user to set a parameter value representing
the amount of RAM (random access memory) allocated to the row pool for
temporarily storing rows retrieved from a database.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
RTS
See request to send.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
RU
See request unit and response unit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
RU chain
A set of related request/response units (RUs) that are consecutively
transmitted in one direction over a session. Each RU belongs to only one
chain, which has a beginning and an end indicated by control bits in
request/response headers within the RU chain.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SAA
See Systems Application Architecture.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SAP
See service access point.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
scalar function
In Database Services, a Structured Query Language (SQL) operation producing a
single value from another value and expressed in the form of a function name
followed by a list of arguments enclosed in parentheses. A scalar function
applies its operation to the argument or arguments for each row being returned
in the results table by a SELECT statement. Contrast with the column functions,
such as SUM, MAX, or AVG, described under summary functions.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
scale
1. To change the representation of a quantity, expressing it in other units,
so that its range is brought within a specified range.
2. In computer graphics, to enlarge all or part of a display image.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
scale attribute
In Database Manager, the number of digits in the fractional part of a decimal
type column. Scale can range from zero to the precision of the column. See
precision attribute.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
screen group
An OS/2 or DOS session. The OS/2 program allows multiple applications to run
concurrently, where each application can access the display screen. See also
session.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SCS
See SNA character string.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SDLC
See Synchronous Data Link Control.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
search
1. The process of looking for a specific item.
2. In Database Manager, used to locate rows or sets of rows in a table that
meet specific criteria.
3. To scan one or more data elements of a set in order to find elements that
have a certain property.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
search condition
In Database Manager, a criterion for selecting rows from a table. A search
condition consists of one or more predicates.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
secondary half-session
In Systems Network Architecture (SNA), the half-session on the node that
receives the session-activation request. See also secondary logical unit. See
half-session and primary half-session.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
secondary link station
In Systems Network Architecture (SNA), any link station that is not the primary
link station. A secondary link station can exchange data with the primary link
station, but not with other secondary link stations. Contrast with primary link
station.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
secondary logical unit (SLU)
In Systems Network Architecture (SNA), the logical unit (LU) that contains the
secondary half-session for a particular LU-LU session. A logical unit can
contain secondary and primary half-sessions for different active LU-LU
sessions. See primary logical unit. See also secondary half-session.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
self-referencing constraint
In Database Manager, a referential constraint that creates a relationship of a
self-referencing table.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
self-referencing row
In Database Manager, a row that is a parent and a dependent of itself. A row
of a self-referencing table in which the value of the foreign key in that row
matches the value of the primary key in that row.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
self-referencing table
In Database Manager, a table that is a parent and a dependent in the same
relationship.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
send
1. To send a message or file to another computer. Contrast with receive.
2. For Communications Manager, the command used to transfer a file to a host.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
send pacing
In SNA, the pacing of message units that a component is sending. See pacing.
Contrast with receive pacing.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
sequence
In Query Manager form definition, the order in which each column is displayed
or printed in a report.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
serial device
In OS/2 LAN Server, a resource (such as a modem or plotter) attached to an LPT
or COM port for direct input/output (I/O) use.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
server
1. On a local area network (LAN), a workstation that provides facilities to
other workstations.
2. An application on the host that processes Server-Requester Programming
Interface (SRPI) requests. Contrast with requester.
3. A computer that shares its resources with other computers on the network.
See also IBM Operating System/2 LAN Server.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
server alias
A locally known pseudonym (or nickname) for a Server-Requester Programming
Interface (SRPI) server.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Server-Requester Programming Interface (SRPI)
An application programming interface (API) used by requester and server
programs to communicate with the personal computer or host routers.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
service access point (SAP)
In a local area network (LAN), the logical point at which an (n+1)-layer
entity acquires the services of the n-layer. A single SAP can have many links
terminating in it.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
service transaction program (service TP)
1. A transaction program implemented by a transaction processing system.
Service transaction programs perform such functions as providing access to
remote DL/1 databases and remote queues.
2. Transaction programs that provide a system or generic service.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
session
1. A logical connection between two stations or network addressable units
(NAUs) that allows them to communicate.
2. The period of time during which a user can communicate with an interactive
system.
3. For the OS/2 program, a synonym for screen group.
4. In Database Manager, a group of processes (or tasks) associated with an
application.
5. In OS/2 LAN Server, a logical connection between a server and a requester
that begins with a successful request for a shared resource.
6. See half-session.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
session activation
In Systems Network Architecture (SNA), the process of exchanging an ACTLU
(activate logical unit) and a positive response between network addressable
units (NAUs). See also session deactivation.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
session deactivation
In Systems Network Architecture (SNA), the process of exchanging a session
deactivation request and response between network addressable units (NAUs). See
also session activation.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
session limit
In Systems Network Architecture (SNA), the maximum number of concurrently
active logical unit-logical unit (LU-LU) sessions a particular logical unit
(LU) can support.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
session partner
In Systems Network Architecture (SNA), one of the two network addressable units
(NAUs) participating in an active session.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
session security
A Systems Network Architecture (SNA) function that allows data to be
transmitted in encrypted form.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
session-initiation request
In communications, an Initiate (INIT) or logon request from a logical unit (LU)
to a control point that asks for the LU-LU session to be activated.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
session-termination request
In communications, a Terminate-Self (TERM_SELF) or Shutdown (SHUTD) request
from a logical unit (LU) to a control point or session partner, respectively,
that asks for the LU-LU session to be deactivated.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SET NULL
In Database Manager, a referential constraint that can be used with the DELETE
rule. SET NULL ensures that deletion of a row in the parent table sets the
values of the foreign key in any dependent rows to null.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
shared resource
A directory (files resource), printer, or serial device made available to users
on a network. The shared resources are directly attached to servers that share
them but are not attached to the requesters asking to use them.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
short name
In Communications Manager, the one-letter name (A through Z) of the host
presentation space or terminal emulation session. Synonymous with
short-session ID.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
short-session ID
See short name.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
shutdown
In OS/2 Task Manager, the procedure required before the computer is switched
off to ensure that data and configuration information is not lost. See also
Task Manager.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
single session
In Systems Network Architecture (SNA), a session that is the only session
connecting two logical units (LUs). Contrast with parallel session.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SLU
See secondary logical unit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SNA
See Systems Network Architecture.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SNA character string (SCS)
In SNA, a character string composed of EBCDIC controls, optionally intermixed
with end-user data, that is carried within a request/response unit. This
character string is an LU1 printing protocol.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SNA controller
A device in Systems Network Architecture (SNA) that directs the transmission of
information over the data links of a network. Its operation can be controlled
by a program executed in a processor to which the controller is connected or it
can be controlled by a program executed within the device.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SNA gateway
A feature that allows an OS/2 workstation to act as a communications controller
between a supported workstation, such as a personal computer on a LAN, and an SNA
host. To the individual workstation, the SNA gateway is transparent.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SNA LU session type 6.2 protocol
A Systems Network Architecture (SNA) application protocol for communications
between peer systems.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SNA network
The part of the user application network that conforms to the formats and
protocols of Systems Network Architecture (SNA). It enables reliable transfer
of data among users and provides protocols for controlling the resources of
various network configurations. The SNA network consists of network
addressable units (NAUs), boundary function components, and the path control
network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SNBU
See switched network backup.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
soft checkpoint
In Database Services, the process of resetting the Redo/Restart point in the
database log. This is the point where the forward-roll part of recovery will
begin. A soft checkpoint does not cause any data to be written to disk.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
source address
1. The address of the Media Access Control Service Access Point (MSAP) from
which a medium access control (MAC) frame is originated.
2. A field in the MAC frame.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SQL
See Structured Query Language.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SQL communication area (SQLCA)
In Database Services, a collection of variables that provides an application
program with information about the execution of its Structured Query Language
(SQL) statements or its Database Services request.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SQL descriptor area (SQLDA)
In Database Services, a collection of variables used in the processing of
certain Structured Query Language (SQL) statements. The SQLDA is intended for
dynamic SQL statements.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SQL escape character
In Database Services, the symbol used to enclose a Structured Query Language
(SQL) delimited identifier. This symbol is the quotation mark (").
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SQL query
A query using the Structured Query Language (SQL).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SQL statement
In Database Services, a statement written in Structured Query Language (SQL).
See also Structured Query Language.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SQL string delimiter
A symbol used to enclose a Structured Query Language (SQL) string constant.
This symbol is the quotation mark.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SQLCA
See SQL communication area.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SQLDA
See SQL descriptor area.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SRPI
See Server-Requester Programming Interface.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SRPI router
A component of SRPI that directs requests to the applicable server and directs
responses to the applicable requester.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SSCP
See system services control point.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SSCP-LU session
In Systems Network Architecture (SNA), a session between a system services
control point (SSCP) and a logical unit (LU); the session enables the LU to
request the SSCP to help initiate LU-LU sessions.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SSCP-PU session
In Systems Network Architecture (SNA), a session between a systems services
control point (SSCP) and a physical unit (PU); SSCP-PU sessions enable SSCPs to
send requests to and receive status information from individual nodes to
control the network configuration.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
standalone
Pertaining to operations that are independent of another device, program, or
system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
static SQL
The Structured Query Language (SQL) statements that are embedded within a
program, and are prepared during the program preparation process before the
program is executed.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
statistics
In Database Manager, the characteristic physical attributes of tables; for
example, the number of records, the number of pages, and so on. Statistics are
used during optimization as a basis for selecting accesses to tables. In Query
Manager, users can select Run Statistics to determine optimal access to data
within a table.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
stop bits
In asynchronous communications, the bit or bits used to end the character frame
transmission.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Structured Query Language (SQL)
An established set of statements used to manage information stored in a
database. By using these statements, users can add, delete, or update
information in a table, request information through a query, and display the
results in a report. See also SQL statement.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
subarea
In communications, a portion of the SNA network consisting of a subarea node
and its associated resources. Within a subarea node, all network addressable
units (NAUs), links, and adjacent link stations that are addressable within
the subarea share a common subarea address and have distinct element addresses.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
sub-query
A full-select that appears in the WHERE or HAVING clause of an SQL statement.
See also full-select.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
sub-select
A SELECT statement that is part of another statement, such as INSERT; that is,
the form of the SELECT statement that does not include ORDER BY, FOR UPDATE OF,
UNION, INTERSECT, or EXCEPT.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
sub-table
In Query Manager panel definition, a subordinate table connected to the root
table. See also root table.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
sub-vector
A sub-component of the medium access control (MAC) major vector.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
summary functions
Used in queries and views to apply column functions that work on several rows
together. Some column functions include SUM, AVG, MIN, MAX, and COUNT.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SVC
See switched virtual circuit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
switched line
A telecommunication line in which the connection is established by dialing.
Contrast with non-switched line.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
switched network backup
A feature of the modem that allows a non-switched line to be used as a switched
line, or allows a switched line to be used as a non-switched line, depending on
the characteristics of the modem.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
switched virtual circuit (SVC)
A virtual circuit that is requested by a virtual call. It is released when the
virtual circuit is cleared. See permanent virtual circuit and virtual circuit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
sync point
Synonymous with point of consistency.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
synchronization level
In Advanced Program-to-Program Communications (APPC), the specification
indicating whether the corresponding transaction programs exchange
confirmation requests and replies.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
synchronous
Pertaining to two or more processes that depend upon the occurrences of
specific events such as a common timing signal. Contrast with asynchronous.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Synchronous Data Link Control (SDLC)
A communications protocol for managing synchronous, code-transparent,
serial-by-bit information transfer over a link connection. Transmission
exchanges can be duplex or half-duplex, over switched or non-switched links.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
synchronous data transfer
A physical transfer of data to or from a device that has a predictable time
relationship with the execution of an input/output (I/O) request.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
synchronous transmission
In data communication, a method of transmission in which the sending and
receiving of characters are controlled by timing signals. Contrast with
asynchronous transmission.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
SYSADM (system administrator)
In Database Manager, a user with SYSADM authority. Such users can grant other
users or groups the right to access objects and can revoke such rights. Only a
SYSADM can create or drop a database and grant database administrator authority
to other users.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
system
A computer and its associated devices and programs.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
system administrator
In Communications Manager, the task of installing and configuring
Communications Manager, setting up local communications networks, and ensuring
its proper use on all supported hardware.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
system database directory
Synonymous with database directory.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
system name
1. An IBM-supplied name that uniquely identifies the system. It is used as a
network attribute for certain communications applications such as Advanced
Program-to-Program Communications (APPC).
2. An IBM-defined name that has a predefined meaning to the COBOL compiler.
System names include computer names, language names, device names, and
function names.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
system services control point (SSCP)
In Systems Network Architecture (SNA), a control point in a host node that
provides network services for dependent nodes.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
system trace
A historical record of specific events in the execution of the Extended
Edition. The record is usually produced for debugging purposes. See also
trace buffer.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Systems Application Architecture (SAA)
A set of software interfaces, conventions, and protocols that provide a
framework for designing and developing applications across multiple computing
environments.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Systems Network Architecture (SNA)
The description of the logical structure, formats, protocols, and operational
sequences for transmitting information units through and controlling the
configuration and operation of networks.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
table
In Database Manager, a named collection of data consisting of rows and columns.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
table fields
In Query Manager, the two types of fields found in panels: output and input.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Task Manager
In the OS/2 program, the function that controls the starting and stopping of
programs, and which program has the input focus. It also allows the user to
shut down the system gracefully. See also shutdown.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
telecommunication facility
Transmission capabilities, or the means for providing such capabilities, made
available by a communication common carrier or by a telecommunication
administration. Synonymous with transmission facility.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
telecommunication line
Any physical medium, such as a wire or microwave beam, that is used to transmit
data.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
terminal
In data communication, a device, usually equipped with a keyboard and display
screen, capable of sending and receiving information.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
TH
See transmission header.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
throughput class negotiation
A packet switching data network optional facility that allows data terminal
equipment (DTE) to negotiate the speed at which its packets travel through the
packet switching data network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
token
1. In a local area network (LAN), the symbol of authority passed among data
stations to indicate the station temporarily in control of the
transmission medium. It consists of a starting delimiter, a frame control
field, and an ending delimiter. The frame control field contains a token
indicator bit that indicates to a receiving station that the token is
ready to accept information. If the station has data to send along the
network, it appends the data to the token. The token then becomes a
frame.
2. A character string in a specific format that has some defined significance
in Structured Query Language (SQL).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
token monitor
Synonymous with active monitor.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
token-ring
A network with a ring topology that passes tokens from one attaching device to
another.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Token-Ring Network
See IBM Token-Ring Network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
topology
The schematic arrangement of the links and nodes of a network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
TP
See transaction program.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
trace
1. A record of data that provides a history of events that occurred in a
system.
2. The process of recording the sequence in which the statements in a program
    are executed and, optionally, the values of the program variables used in
    the statements.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
trace buffer
An allocation of space on a system for trace information. See also system
trace.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
trace services
In Communications Manager, a menu-driven utility used to trace application
programming interfaces (APIs) and data transmitted on communication links.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
transaction
1. In Database Services, a unit of processing that guarantees that all
requests within the unit of processing are either complete or undone.
2. In Communications Manager, an exchange between a workstation and a
program, between two workstations, or between two programs that
accomplishes a particular action or result.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
transaction program (TP)
A program that uses the Advanced Program-to-Program Communications (APPC)
application programming interface (API) to communicate with a partner
application program at a remote node.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
transaction service mode
Synonymous with mode.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
transfer file
To send a file from one computer to another.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
transfer request
A description of the file you want to transfer to your personal computer from
the AS/400 system or from your personal computer to the AS/400 system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
transmission control layer
The layer within a half-session that synchronizes and controls the speed of
session-level data traffic, checks sequence numbers of requests, and enciphers
and deciphers end-user data.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
transmission frame
In data transmission, the data transported from one node to another in a
particular format that can be recognized by the receiving node.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
transmission group
A group of links between directly attached nodes appearing as a single logical
link for routing messages.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
transmission header (TH)
In Systems Network Architecture (SNA), the control information, optionally
followed by a basic information unit (BIU), or a BIU segment, that is created
and used by path control to route message units and to control their flow
within the network. See also path information unit.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
transmission service mode
A circuit-switched, packet-switched, non-switched, or leased-circuit service
provided to the public by a communication common carrier or by a
telecommunication administration.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
transmission service mode profile
A set of parameters that define a transmission service mode when connecting to
remote workstations.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
transmit
To send information from one place for reception elsewhere.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
twinaxial data link control (TDLC)
A communications function that allows personal computers, attached to the
workstation controller by way of twinaxial cable, to use Advanced
Program-to-Program Communications (APPC).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
twinaxial feature
See twinaxial data link control.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Unbind Session (UNBIND)
A request to deactivate a session between two logical units (LUs).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
uncommitted read
The isolation level that provides maximum concurrency. With uncommitted read,
no locking is done, and no data changes are allowed by the "uncommitted read"
application. In addition, the unit of work can view data that is uncommitted
and locked by other transactions.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
unformatted system services (USS)
A communications function that translates a character-coded command, such as
LOGON or LOGOFF, into a field-formatted command for processing by formatted
system services.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
unique index
1. In Database Services, an index that assures that no identical key values
are stored in a table.
2. In Query Manager, this is referred to as duplicates not allowed.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
universal access authority
A portion of the access control profile that contains the level of authority
given to all users that are not covered by any user or group entries in the
profile.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
universal access control (UAC)
A portion of the access control profile that contains the level of authority
given to all users not covered by user or group entries in the profile.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
universal asynchronous receiver/transmitter (UART)
An integrated circuit chip used in asynchronous communications hardware that
provides all the necessary logic to receive data in a serial-in parallel-out
fashion and to transmit data in a parallel-in serial-out fashion.
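For illustration, the serial-in, parallel-out half of that behavior can be
sketched in software. This is a minimal sketch of the idea only; a real UART
does the work in hardware and also handles the start, stop, and parity bits
that are omitted here.

  #include <stdio.h>

  /* Shift eight received bits, least significant bit first (as in typical
   * asynchronous framing), into one byte that is then presented in parallel. */
  static unsigned char deserialize_byte(const int bits[8])
  {
      unsigned char value = 0;
      int i;

      for (i = 0; i < 8; i++) {
          if (bits[i])
              value |= (unsigned char)(1u << i);   /* LSB arrives first */
      }
      return value;
  }

  int main(void)
  {
      /* The bit pattern 1,0,0,0,0,0,1,0 (LSB first) is the letter 'A' (0x41). */
      int sample[8] = { 1, 0, 0, 0, 0, 0, 1, 0 };

      printf("received byte: 0x%02X\n", deserialize_byte(sample));
      return 0;
  }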
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
universal naming convention (UNC)
A name used to identify the server and netname of a resource, taking the form
\\servername\netname\path and filename or \\servername\netname\devicename.
See also netname.
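For illustration, the following C sketch pulls the servername and netname out
of a UNC name. The sample name and the parsing code are hypothetical and are
not part of any IBM API.

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      /* Hypothetical UNC name: \\server1\printq\reports\jan.txt */
      const char *unc = "\\\\server1\\printq\\reports\\jan.txt";
      char server[64], netname[64];
      const char *p, *q;

      if (strncmp(unc, "\\\\", 2) != 0)
          return 1;                          /* not a UNC name */

      p = unc + 2;                           /* start of servername */
      q = strchr(p, '\\');                   /* backslash after servername */
      if (q == NULL)
          return 1;
      sprintf(server, "%.*s", (int)(q - p), p);

      p = q + 1;                             /* start of netname */
      q = strchr(p, '\\');                   /* backslash after netname, if any */
      if (q == NULL)
          strcpy(netname, p);
      else
          sprintf(netname, "%.*s", (int)(q - p), p);

      printf("server:  %s\nnetname: %s\n", server, netname);
      return 0;
  }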
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
user ID
A unique name that identifies a user to the network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
user profile
In OS/2 LAN Server, an OS/2 command file containing commands that set
environment values and run programs automatically when a user logs on.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
User Profile Management
User Profile Management is automatically installed with the IBM Operating
System/2 program. It provides user ID validation and user and group management
facilities that are used by both Database Manager and Communications Manager.
Each installation of User Profile Management is local to the workstation on
which it is installed; it validates users who access controlled data or use
programs that reside on that workstation. It also provides the LOGON and
LOGOFF mechanism by which system users are identified and authenticated.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
user types
Users and network administrators. A user is any person who uses a resource on a
computer. See also network administrator.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
USS
See unformatted system services.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
V.24
In data communications, a recommendation of the CCITT that lists the
definitions for interchange circuits between data terminal equipment (DTE) and
data circuit-terminating equipment (DCE).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
V.35
In data communications, a recommendation of the CCITT that defines data
transmission at 48 kilobits per second using 60-108 kHz group band circuits.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
variable length string
A character or graphic string whose length is not fixed, but variable within
set limits.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
variable pool
A variable pool is a type of buffer area where variables are held for use by
objects. There are two separate variable pools; the procedure pool and the
Query Manager pool. A new level within a pool is created by an object each
time the object is run. The pool contains the variables created by the object
and exists until the object terminates at which time the variables are
discarded.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
view
An object, maintained by Database Manager, that represents a logical table.
The table does not exist in physical storage; its data is generated by a query
against one or more tables.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
virtual call facility
In data communication, a user facility in which a call setup procedure and a
call clearing procedure determine a period of communication between two data
terminal equipments (DTEs) in which user data is transferred in the network in
the packet mode of operation. All user data is delivered from the network in
the order it is received by the network. It is the packet network equivalent of
a dialed line. Synonymous with virtual call.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
virtual circuit
1. In packet switching, the facilities provided by a network that give the
appearance to the user of an actual connection. See permanent virtual
circuit and switched virtual circuit.
2. A logical connection established between data terminal equipment (DTE).
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
virtual machine (VM)
A functional simulation of a computer and its associated devices. Each virtual
machine is controlled by a suitable operating system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Virtual Machine/Conversational Monitoring System (VM/CMS)
A time sharing system control program (CP) that manages the resources of an IBM
System/370 computing system in such a way that multiple remote terminal users
have a functional simulation of a computing system at their disposal. It also
contains the Conversational Monitoring System (CMS) that provides general time
sharing, program development, and problem solving facilities.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Virtual Machine/System Product (VM/SP)
An IBM-licensed program that manages the resources of a single computer so that
multiple computing systems appear to exist. Each virtual machine is the
functional equivalent of a real machine.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
Virtual Telecommunications Access Method (VTAM)
A set of programs that control communications between nodes and application
programs running on a host (System/370) system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
VM
See virtual machine.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
VM/CMS
See Virtual Machine/Conversational Monitoring System.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
VM/SP
See Virtual Machine/System Product.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
VTAM
See Virtual Telecommunications Access Method.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
VT100
A Digital Equipment Corporation (DEC) ASCII terminal.
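VT100-compatible terminals are controlled with escape sequences. As a small
illustration, the widely documented sequences ESC [ 2 J (erase the screen) and
ESC [ H (move the cursor home) can be sent from a C program like this; treat
it as a sketch rather than a statement about any particular terminal setup.

  #include <stdio.h>

  int main(void)
  {
      printf("\033[2J");   /* erase the entire screen         */
      printf("\033[H");    /* move the cursor to row 1, col 1 */
      printf("Hello from a VT100-style terminal.\n");
      return 0;
  }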
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
WAN
See wide area network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
wide area network (WAN)
A network that provides data communication capability in geographic areas
larger than those serviced by local area networks.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
window procedure
In Presentation Interface, code that is activated in response to a message.
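For illustration, a minimal window procedure might look like the C sketch
below. It follows the style of the 32-bit Presentation Manager toolkit; exact
parameter types vary between toolkit levels, so treat it as a sketch rather
than a definitive template.

  #define INCL_WIN
  #include <os2.h>

  /* Called by the system with each message; handles what it understands and
   * passes everything else to the default window procedure. */
  MRESULT EXPENTRY ClientWndProc(HWND hwnd, ULONG msg, MPARAM mp1, MPARAM mp2)
  {
      switch (msg) {
      case WM_PAINT:
      {
          HPS   hps;
          RECTL rcl;

          hps = WinBeginPaint(hwnd, NULLHANDLE, &rcl);  /* presentation space */
          WinFillRect(hps, &rcl, CLR_WHITE);            /* repaint dirty area */
          WinEndPaint(hps);
          return (MRESULT)0;
      }
      default:
          return WinDefWindowProc(hwnd, msg, mp1, mp2);
      }
  }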
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
worksheet formats (WSF)
An OS/2 file format used to import and export data in worksheet formats
supported by the Lotus products.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
workstation
A terminal or personal computer, usually one that is connected to a mainframe
or within a network, at which a user can run applications. See also system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
workstation address
1. A number used in a configuration file to identify a workstation attached
to a computer port.
2. The address to which the switches on a workstation are set, or the
internal address assumed by the system, if no address is specified.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
workstation controller (WSC)
An input/output (I/O) controller card in the card enclosure that provides the
direct connection of local workstations to the system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
WSC
See workstation controller.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
WSF
1. See worksheet formats.
2. Work Station Feature, as in 5250 WSF.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
X.21
In data communication, a recommendation of the CCITT that defines the interface
between data terminal equipment and public data networks for digital leased
and circuit-switched synchronous services.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
X.21 bis
In data communication, a recommendation of the CCITT that defines the use on
public data networks of data terminal equipment (DTE) that is designed for
interfacing to synchronous V-series modems.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
X.21 feature
A feature that allows a system to be connected to an X.21 network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
X.25
In data communication, a recommendation of the CCITT that defines the interface
between data terminal equipment and packet switching networks. Recommendations
X.25 (Geneva 1980) and X.25 (Malaga-Torremolinos 1984) have been published.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
X.25 feature
A feature that allows a system to be connected to an X.25 network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
X.25 network
A service providing packet-switched data transmission that conforms to
Recommendation X.25 adopted by the CCITT.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
X.25 verb
In X.25, a support library routine provided by the X.25 application
programming interface (API) to manage, control, or use an X.25 network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
X.25 verb request control block (XVRB)
A data structure passed to the X.25 support to request the execution of an X.25
API verb.
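The actual XVRB layout is defined by the X.25 application programming
interface and is not reproduced here. As a rough illustration of the idea
only, a verb request control block generally pairs a verb code with parameter
and status fields, along the lines of this hypothetical C structure:

  /* Hypothetical illustration only -- not the actual XVRB layout. */
  struct hypothetical_xvrb {
      unsigned short verb_code;      /* which X.25 API verb is requested    */
      unsigned short connection_id;  /* virtual circuit the verb applies to */
      void          *param_buffer;   /* verb-specific parameters            */
      unsigned short param_length;   /* length of the parameter buffer      */
      unsigned short return_code;    /* filled in by the X.25 support       */
  };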
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
X.32
In data communications, a recommendation of the CCITT that defines the
interface between data terminal equipment (DTE) and packet switching networks
through a public switched network, such as a public telephone network.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
XID
See exchange station ID.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
xmodem
An asynchronous communications data transfer protocol in which data is
transferred in 128-byte blocks.
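For illustration, the classic checksum variant of the protocol frames each
block as a start-of-header byte, a block number and its one's complement, 128
data bytes, and an 8-bit arithmetic checksum. The C sketch below builds one
such block; the function name is ours and is not part of any xmodem library.

  #include <stdio.h>
  #include <stddef.h>
  #include <string.h>

  #define SOH        0x01   /* start-of-header byte         */
  #define BLOCK_SIZE 128    /* fixed xmodem data block size */

  static void build_block(unsigned char out[132], unsigned char blknum,
                          const unsigned char *data, size_t len)
  {
      unsigned char sum = 0;
      size_t i;

      out[0] = SOH;
      out[1] = blknum;
      out[2] = (unsigned char)(255 - blknum);
      for (i = 0; i < BLOCK_SIZE; i++) {
          unsigned char b = (i < len) ? data[i] : 0x1A;  /* pad short blocks */
          out[3 + i] = b;
          sum = (unsigned char)(sum + b);                /* running checksum */
      }
      out[3 + BLOCK_SIZE] = sum;                         /* checksum byte    */
  }

  int main(void)
  {
      const unsigned char msg[] = "hello, xmodem";
      unsigned char block[132];

      build_block(block, 1, msg, strlen((const char *)msg));
      printf("block 1 built, checksum byte = 0x%02X\n", block[131]);
      return 0;
  }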
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
XOFF
In communications, a flow control value used to signal the remote transmitter
to suspend data transmission.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
XON
In communications, a flow control value used to signal the remote transmitter
to start or resume data transmission.
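For illustration, the conventional ASCII values are DC1 (hex 11) for XON and
DC3 (hex 13) for XOFF. The self-contained C sketch below shows a sending loop
that honors these characters; the scripted array of incoming control bytes
stands in for data that a real program would read from the communication port.

  #include <stdio.h>
  #include <stddef.h>

  #define XON  0x11   /* DC1: resume transmission  */
  #define XOFF 0x13   /* DC3: suspend transmission */

  int main(void)
  {
      const unsigned char outgoing[] = "ABCDEFGH";
      /* Control bytes "received" from the remote end before each poll. */
      const unsigned char incoming[] = { 0, 0, XOFF, 0, 0, XON, 0, 0, 0, 0, 0 };
      size_t out = 0, in = 0;
      int paused = 0;

      while (out < sizeof(outgoing) - 1 && in < sizeof(incoming)) {
          unsigned char ctrl = incoming[in++];   /* poll the receive side */

          if (ctrl == XOFF)
              paused = 1;                        /* suspend transmission  */
          else if (ctrl == XON)
              paused = 0;                        /* resume transmission   */

          if (!paused)
              printf("sending '%c'\n", outgoing[out++]);
          else
              printf("paused (XOFF in effect)\n");
      }
      return 0;
  }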
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
XVRB
See X.25 verb request control block.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
3101
An IBM ASCII terminal.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
3270 terminal emulation
A feature of Communications Manager that emulates the function of a 3270
workstation.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
5250 Work Station Feature
A feature of Communications Manager that allows a personal computer to perform
like a 5250 display station and use the functions of an IBM AS/400, IBM
System/36, or IBM System/38 system.
ΓòÉΓòÉΓòÉ <hidden> ΓòÉΓòÉΓòÉ
5250 WSF
See 5250 Work Station Feature.