DBPERF.INF (OS/2 Shareware BBS: 14 Text / 14-Text.zip / DBPERF.ZIP)
OS/2 Help File | 1991-05-31 | 25KB | 774 lines
═══ 1. PURPOSE ═══
Provide a single point of reference for performance inspections.
ABSTRACT
GOALS
═══ 1.1. ABSTRACT ═══
Most customers are happy with the performance of IBM's database engine once it
has been optimally tuned. Getting to that optimal state, though, can be
difficult. Contributing factors include the complexity of the operating
system, the wide range of performance improvement techniques, and the lack of
performance tools and documentation. This document is intended to ease this
pain by consolidating many performance tips and techniques in one place.
═══ 1.2. GOALS ═══
1. Reduce the time required to gather resources for performance tuning.
2. Provide general information detailing the relative costs and benefits of
specific techniques that may be used to improve performance for the IBM
OS/2 EE Database Manager.
3. Provide a comprehensive guide of performance improvement techniques for
the IBM OS/2 EE Database Manager.
4. Provide general rules of thumb that apply to these techniques.
═══ 2. DATABASE DESIGN ═══
INDEX STRUCTURE
TABLE DESIGN
SINGLE DATABASE
═══ 2.1. INDEX STRUCTURE ═══
The impact of indexes cannot be overemphasized. In read-only situations, it
is usually worthwhile to create an index on fields which will be used for
joins or will otherwise appear in 'WHERE' or 'ORDER BY' clauses of an SQL
query. Indexes should always exist on the primary keys of larger tables.
Creating appropriate indexes can reduce response times by 90% or more in
certain situations. In other situations, table access may not be required at
all if all of the fields needed by a query exist in the index.
There are situations where creating indexes can degrade system performance.
Updating, deleting from, or adding to a table with indexes takes longer than
updating a table without indexes, once the record is found. This is true
because the indexes must also be updated. If a table is write-only or has more
writes than reads, then this may be a serious consideration. Most tables,
however, do not fit this description.
Index maintenance also has an interesting effect on imports. It is much
faster to import a table and then create the index than to create the index
and then import the table, because in the latter case the index must be
updated for every imported row.
The 'EXPLAIN' tool can be used to see if an index is used in a particular
query. Through intelligent use of this tool and good table design, significant
performance improvements may be realized.
Rules:
1) Create indexes on columns referenced in 'WHERE' and
'ORDER BY' clauses to speed reads.
2) Use 'EXPLAIN' to see how indexes are used (static SQL only).
3) Remove seldom-used indexes to improve updates, deletes, and
inserts.
4) Import tables before creating indexes.
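The index rules above can be demonstrated with a small, self-contained
sketch. The OS/2 EE Database Manager and its 'EXPLAIN' tool are assumed
unavailable here, so Python's built-in SQLite is used as a stand-in, and the
table and index names are invented for illustration only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER, name TEXT, city TEXT)")
conn.executemany("INSERT INTO person VALUES (?, ?, ?)",
                 [(i, "name%d" % i, "city%d" % (i % 10)) for i in range(1000)])

# Without an index, the only available plan is a full table scan.
before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM person WHERE city = 'city3'"
).fetchall()

# With an index on the 'WHERE' column, the optimizer can switch to an
# index search (rule 1); the query plan makes the change visible (rule 2).
conn.execute("CREATE INDEX ix_city ON person (city)")
after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM person WHERE city = 'city3'"
).fetchall()
```

Here SQLite's EXPLAIN QUERY PLAN plays the role that 'EXPLAIN' plays for the
Database Manager: it shows whether a query scans the table or searches an
index, before any timing is done.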
═══ 2.2. TABLE DESIGN ═══
This area must be considered at application design time. The idea here is to
minimize joins where possible. Though separating tables may be a good idea to
reduce redundancy, sometimes it is worthwhile to merge these tables into one.
If it is known that two tables will be joined frequently and the two tables
could logically be thought of as a single table, then merging the tables could
save many costly joins in the future. For example, consider two tables,
person and address. It may make sense to separate these tables to reduce
redundancy if more than one person may live at the same address; this keeps
the same address from being entered several times. However, if most reports
require information from both person and address (for example, home city and
state along with the person's social security and employee numbers), then it
may be better to merge these tables into one.
Rules:
1) Normalize tables to decrease redundancy, and then
2) Merge large tables that will be frequently joined.
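The person/address example can be sketched concretely. SQLite again stands
in for the Database Manager, and all table and column names below are
invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Normalized design: two tables, so every report pays for a join.
conn.executescript("""
CREATE TABLE person  (ssn TEXT PRIMARY KEY, emp_no TEXT, addr_id INTEGER);
CREATE TABLE address (addr_id INTEGER PRIMARY KEY, city TEXT, state TEXT);
""")
conn.execute("INSERT INTO address VALUES (1, 'Austin', 'TX')")
conn.execute("INSERT INTO person VALUES ('111-22-3333', 'E42', 1)")
row = conn.execute("""
    SELECT p.ssn, p.emp_no, a.city, a.state
    FROM person p JOIN address a ON p.addr_id = a.addr_id
""").fetchone()

# Merged design: one wider table with some redundancy, but no join.
conn.execute("""CREATE TABLE person_addr
    (ssn TEXT PRIMARY KEY, emp_no TEXT, city TEXT, state TEXT)""")
conn.execute("INSERT INTO person_addr VALUES (?, ?, ?, ?)", row)
merged = conn.execute(
    "SELECT ssn, emp_no, city, state FROM person_addr").fetchone()
```

Both designs return the same report row; the merged table trades duplicated
address data for the elimination of the join.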
═══ 2.3. SINGLE DATABASES ═══
It is important to carefully limit the number of databases on a server.
Significant overhead is involved whenever multiple databases are opened on a
single server. A far better strategy is to place all of the tables from
multiple databases within a single database.
Rules:
1) When possible, have only a single database on a server.
═══ 3. APPLICATION DESIGN ═══
LOCKING
COMMITS
APPLICATION REMOTE INTERFACE (ARI)
REMOTE DATA SERVICES ALTERNATIVES
STATIC vs DYNAMIC SQL
═══ 3.1. LOCKING ═══
The idea here is to choose the finest lock granularity possible for the sake
of concurrency. Any number of processes may hold locks to read the same
object, but a reading process and a writing process may not hold a lock on
the same object, and no two processes may hold locks to write the same
object. Obviously, if some process has a table write-locked, then no one else
can touch the table until that process commits or rolls back. As a general
rule, then, in a multi-user system it is better to lock records than tables.
Note that if the lock list parameter is not set high enough, row locks will
be escalated to table locks, significantly impacting concurrency.
In a situation where concurrent access will not be a problem, it is better to
take out a single table lock than many row-level locks, because there is less
overhead.
Rules:
1) In a multi user system, row level locking yields better
concurrency, thus better overall performance.
═══ 3.2. COMMITS ═══
Managing commits involves two conflicting goals. Commits incur a certain
amount of overhead, mostly in the form of log maintenance. Commits do,
however, free resources that may be used by other processes, most notably
locks. It is important to understand that locks are accumulated on the system
tables even by read-only SQL (selects) and even at the uncommitted read level
of isolation, so even in these situations it is important to commit, to free
up locks on the system tables.
Rules:
1) Reduce database operations between commits
(i.e., commit more frequently) to increase concurrency.
2) Never commit in the middle of a logical unit of work.
3) Reduce the number of commits to improve the performance within
a single process when the concurrency impact may be tolerated.
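The trade-off between rules 1) and 3) is usually handled with a batch-size
knob. The sketch below uses SQLite as a stand-in; the function and its
`ops_per_commit` parameter are invented for illustration:

```python
import sqlite3

def load_rows(conn, rows, ops_per_commit=100):
    """Insert rows, committing every ops_per_commit operations.

    A small batch frees locks quickly (better concurrency); a large
    batch reduces commit overhead (better single-process throughput).
    """
    cur = conn.cursor()
    for n, value in enumerate(rows, start=1):
        cur.execute("INSERT INTO t VALUES (?)", (value,))
        if n % ops_per_commit == 0:
            conn.commit()
    conn.commit()  # finish the last (partial) batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INTEGER)")
load_rows(conn, range(1000), ops_per_commit=100)
```

Note that the batch boundary must still respect rule 2): here each insert is
an independent unit of work, so committing between them is safe.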
═══ 3.3. APPLICATION REMOTE INTERFACE (ARI) ═══
The Application Remote Interface is one of the most important, yet least used,
of all performance techniques. It is used to execute a block of code on the
server. When several different queries may be combined into a single procedure
to be executed on the server, system performance is improved in several ways.
First, communication costs are reduced: SQLCAs and SQLDAs need only be moved
to and from the server once. Second, the faster microprocessor on the server
may be utilized; instructions that would normally be executed on the
requester are moved to the server, which is normally a much more powerful
machine. Finally, the intermediate control blocks required by Remote Data
Services for each SQL call do not need to be packed for every call. This
leads to further savings of requester CPU, and usually offsets the additional
load placed on the server.
To use the ARI, precompile, compile, and link the remote procedure to be called
into a DLL. Place the DLL on the server, and use the database manager API to
call the remote procedure. Data may be passed to and from the remote procedure
using SQLDA's.
Rules:
1) Use the ARI when several database calls may be packed into a
single function on the server.
2) Use the ARI to take advantage of an underused server or to
offload an overused requester.
═══ 3.4. REMOTE DATA SERVICES ALTERNATIVES ═══
For some platforms, Remote Data Services cannot handle enough concurrent
connections to perform the required service, because each process connecting
to the database uses an additional 120KB-150KB of memory on the server.
Eventually, memory runs out, and performance degenerates drastically. There
are several alternatives.
Consider a localized solution. This may seem obvious, but in many situations,
a local solution will suffice.
Use named pipes to pass requests to a service process on the server. This
process can then talk to the database through a single connection on the
server. This solution has been used quite effectively by many customers.
Implement a scheduler on the server. In this situation, there are several
service processes on the server, and each request is scheduled to be handled
by one of the available service processes.
Rules:
1) Use Remote Data Services when the platform will work.
2) Consider the above alternatives when it will not.
═══ 3.5. STATIC vs DYNAMIC SQL ═══
The database manager supports two different types of SQL: static and dynamic.
Static SQL is bound before the program is executed, either at precompile time,
or bind time. With static SQL, the statements that are to be run must be known
in advance. Dynamic SQL is bound at run time. It must be bound each time the
cursor for the statement is opened. Obviously, it is better to pay the bind
penalty when the ".exe" file is created instead of when the program is
executed. Therefore, when it is known which SQL statements will be run in
advance, it is better to use static SQL.
An exception arises when an SQL statement contains a predicate like "LIKE
:variable". At bind time the optimizer does not have enough information to
decide if an index is usable. With dynamic SQL the actual value is known
when the statement is bound at run time, so if the pattern turns out to be a
prefix such as "ABC%", an index is in fact usable and will be used.
Rules:
1) Use STATIC SQL when SQL statements are known in advance.
2) For statements in the form "LIKE :var", use dynamic SQL.
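The 'LIKE' exception can be seen directly in a query plan. SQLite stands in
for the Database Manager here; the `case_sensitive_like` pragma is a
SQLite-specific prerequisite for its LIKE optimization, and all names are
invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA case_sensitive_like = ON")  # SQLite-specific
conn.execute("CREATE TABLE t (name TEXT, other TEXT)")
conn.execute("CREATE INDEX ix_name ON t (name)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("ABC%03d" % i, "x") for i in range(100)])

# Prefix pattern: the fixed leading characters can be rewritten into an
# index range, but only once the optimizer can see the actual value.
plan_prefix = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE name LIKE 'ABC%'").fetchall()

# Non-prefix pattern: no fixed leading characters, so the index is unusable.
plan_suffix = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE name LIKE '%ABC'").fetchall()
```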
═══ 4. PS/2 HARDWARE CONFIGURATION ═══
SPLITTING THE LOG AND DATABASE
MEMORY
PROCESSING POWER
═══ 4.1. SPLITTING THE LOG AND DATABASE ═══
Some performance improvements may be made based on the disk configuration.
Better performance can be achieved by placing the database and the database
logs on different physical hard disks. This is true because some degree of
concurrency can be realized by accessing both drives at the same time. SCSI
devices can make the improvements even more pronounced. If only one physical
drive is available, then it is better to place the logs and the database on
contiguous partitions or the same partition. Also, the partition(s) should be
as close as possible to the center of the drive. For example, if a hard disk
were partitioned into drives C, D, E, and F, an acceptable configuration would be
to place the database on drive D and the log on drive E. This will minimize
the lateral movement of the disk head.
Rules:
1) Place the database and the database log on different physical
hard disk drives when possible to maximize concurrency.
2) If 1) is not possible, then place the database and log on
adjacent partitions, or the same partition.
3) Place the database and log on partitions as close to the center
of the disk as possible.
═══ 4.2. MEMORY ═══
If there is not enough memory on the server to adequately handle the number
of workstations, then performance will degenerate drastically as activity and
the number of database connections increase. Each requester will use an
additional 120KB-150KB of memory.
Rules:
1) Make sure that the server has adequate memory resources to
handle the load expected of it.
═══ 4.3. PROCESSING POWER ═══
Contrary to intuition, the database manager is a CPU-intensive application.
Therefore, improvements to the CPU will usually have positive effects on
transaction throughput. There are exceptions, such as abnormally low
concurrency or high update activity, which may reduce the impact of a
different CPU on any given database server.
Rules:
1) Make sure that the server has adequate processing resources
to handle the load expected of it.
═══ 5. PERFORMANCE RELATED FEATURES ═══
RECORD BLOCKING
ISOLATION LEVELS
SQLLOO
REORG/RUNSTATS
═══ 5.1. RECORD BLOCKING ═══
Record blocking can improve the performance of read-only queries which read
multiple database rows. For such a query, when the requester does a single
fetch, a block of data is moved to the requester. The requester then reads
from its private block instead of requesting additional rows from the server.
Record blocking is transparent to the user application. Record blocking is a
bind time parameter. To ensure that record blocking is used only where it can
be effective, use the '/K=UNAMBIG' option when binding, and specify cursors
which will not be updated, deleted, or inserted as 'FOR FETCH ONLY'. This
series of steps will ensure record blocking for all fetch transactions. The
default is 'UNAMBIG' which uses record blocking for all cursors which are
fetch-only. Record blocking is a way to significantly improve performance
without doing much work. Gains will be most pronounced when many database rows
are expected to be returned from frequently used queries. To ensure that
record blocking is used for dynamic SQL, set the record blocking parameter
to 'ALL' at bind time.
Rules:
1) Use record blocking whenever possible for FETCH cursors.
2) Set the record blocking parameter to 'ALL' for dynamic SQL.
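Record blocking is a client/server protocol feature, but the shape of the
saving can be illustrated with any blocked-fetch interface. The sketch below
uses Python's sqlite3 cursor purely as an analogy: `fetchmany` moves a block
of rows per call the way record blocking moves a block per network request.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(500)])

cur = conn.cursor()
cur.arraysize = 100            # rows moved per "block"
cur.execute("SELECT v FROM t")

blocks = rows = 0
while True:
    block = cur.fetchmany()    # one request per 100 rows, not one per row
    if not block:
        break
    blocks += 1
    rows += len(block)
```

With a block size of 100, the 500 rows arrive in 5 block requests instead of
500 single-row requests.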
═══ 5.2. ISOLATION LEVELS ═══
Isolation levels, from strongest to weakest, are Repeatable Read, Cursor
Stability, and Uncommitted Read. Isolation levels, from fastest to slowest,
are Uncommitted Read, Cursor Stability, and Repeatable Read. If the isolation
levels required are not clearly understood, then it is best to err on the side
of caution and use the strongest isolation level. Otherwise, it is best to use
the weakest (and thus fastest) isolation level that can be tolerated. It is
important to note that an uncommitted read can return a value that was never
committed, and thus never existed as far as the database is concerned!
In general, Repeatable Read will never yield inconsistent results. It
implements an industry-standard locking scheme known as two-phase locking.
Cursor Stability is acceptable unless there are dependencies between several
rows of the same table that must be preserved. Uncommitted Read is only
acceptable in situations which do not require consistent data, or in situations
in which it is known that the data being examined will not change.
Rules:
1) If isolation levels are not clearly understood, use the
Repeatable Read isolation level.
2) Use the weakest, and thus fastest, isolation level that may be
tolerated by the application.
═══ 5.3. SQLLOO ═══
SQLLOO (LAN Only Option) is a faster communication protocol than APPC.
However, SQLLOO is not always a viable alternative. SQLLOO is the default
communication protocol for RDS.
APPN may be as fast as SQLLOO for some applications. APPN is available
separately from IBM as an OS/2 EE extension.
Rules:
1) When possible, use SQLLOO as the communication protocol.
═══ 5.4. REORG/RUNSTATS ═══
After significant insert, delete, and update activity, a database table may
become significantly fragmented, and the statistics used by the optimizer
outdated, forcing bad optimization strategies and overall performance
degradation. Running statistics can update the optimizer statistics for a
table. Reorganizing the table resolves the fragmentation problems. It also
puts the database records in the order of a specified index, which
significantly improves performance when table scans are needed.
Performance benefits of a 'Reorganize' will not be realized unless statistics
are run afterwards. Running statistics, however, does not mandate a
reorganization. Finally, after a RUNSTATS, a rebind is necessary.
Rules:
1) When the system will be quiesced for a period of time, such as
overnight, reorganize the tables and run statistics.
2) Reorganize and run statistics after a significant number of
deletes, which cause fragmentation.
3) Run statistics after the size of a table significantly changes.
4) Run statistics and rebind dependent access plans after a reorg.
═══ 6. METHODOLOGY ═══
PERFORMANCE MANAGEMENT TOOLS
SIMULATION/PROTOTYPING
BENCHMARK AND TUNING METHODOLOGY
═══ 6.1. PERFORMANCE MANAGEMENT TOOLS ═══
To build the best-performing system possible, it is important to have the
best performance data possible. The best way to gather this data is from
trace points placed within the application. Each trace point should record a
unique identifier, the time at which it was encountered, and any other
pertinent information. To get the most information without impacting system
performance, this information should be written to shared memory, which can
be dumped on demand. The dump can then be analyzed and recorded. It is
important to save data from old runs, so that an accurate picture of the
application's performance history can be assembled and the performance
characteristics of all components of the current system assessed.
Rules:
1) Build performance gathering tools into the application.
2) Maintain an accurate and complete performance history.
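The trace-point scheme above can be sketched in a few lines. This is an
illustrative stand-in only: a bounded in-process buffer plays the role of
the shared-memory area, and all names are invented.

```python
import time
from collections import deque

class TraceBuffer:
    """Cheap runtime trace points, dumped on demand for later analysis."""

    def __init__(self, capacity=10000):
        # A bounded buffer keeps the cost of each trace point small and
        # prevents unbounded memory growth during long runs.
        self.events = deque(maxlen=capacity)

    def trace(self, point_id, **info):
        # Record a unique identifier, a timestamp, and any other
        # pertinent information, as the text recommends.
        self.events.append((point_id, time.monotonic(), info))

    def dump(self):
        return list(self.events)

tb = TraceBuffer()
tb.trace("open_cursor", table="person")
tb.trace("fetch_done", rows=42)
events = tb.dump()
```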
═══ 6.2. SIMULATION/PROTOTYPING ═══
One of the most common mistakes made in any field is relying on assumptions.
This is an especially dangerous practice with OS/2 EE. Because there are so
many tightly coupled variables, it is almost impossible to predict what the
overall system performance will be. For this reason, it is important to
prototype a working
benchmark system as quickly as possible. A good benchmark will represent the
real world as closely as possible in as many areas as possible, including the
number of users, size and composition of database requests, the size of the
database, and the LAN topology and system configurations. The prototype does
not have to LOOK like the final product, but merely to ACT like the final
product. For this reason, many people use a script of some type. Care should
be taken, though, to consider the amount of conflict that is to be expected in
the real world. For example, a script containing updates to the same record on
all machines would generate locking conflict that would not be expected in the
real world. Similarly, if none of the data sets overlapped, the amount of
conflict could be artificially low. Many people use some form of random number
stream which is recorded in a file, with a different stream on each machine.
This has the desired effect of being reproducible, and represents many real
world situations closely.
Rules:
1) Prototype early.
2) Prototype often.
3) Prototype late.
4) Represent the real world as closely as is practical.
═══ 6.3. BENCHMARK AND TUNING METHODOLOGY ═══
When tuning the database manager for performance, it is important to tightly
control the entire process. There are several things to keep in mind. First,
several performance runs must be made. The initial run should be discarded, as
DLL's must be swapped in, and the DBM initialized. Afterwards, from two to five
runs should be made under identical conditions and the average of all runs
taken. Second, only a single parameter should be changed between performance
runs. This is the only way that the results of each benchmark can be
accurately assessed. Finally, the parameters for which the largest gain is
expected should be changed first. In this way, the system may be tuned until
some goal is reached, and the tuning process completed with the confidence that
the most significant parameters have been refined.
Rules:
1) Control the environment.
2) Discard the first run.
3) Average more than one run.
4) Tune one parameter at a time, starting with 'heavy hitters'.
5) Set realistic goals, and stop when the goals are realized.
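Rules 2) and 3) amount to a small measurement harness. The sketch below is
generic; the `benchmark` helper and the canned timings are invented for
illustration:

```python
def benchmark(run, total_runs=5):
    """Measure total_runs identical runs, discard the first, average the rest.

    The first run is discarded because it pays one-time costs (DLLs
    swapped in, the database manager initialized).
    """
    times = [run() for _ in range(total_runs)]
    warm = times[1:]
    return sum(warm) / len(warm)

# Canned timings standing in for real measurements: the cold first run
# is much slower than the four warm runs that follow.
samples = iter([10.0, 2.0, 2.0, 2.0, 2.0])
avg = benchmark(lambda: next(samples))
print(avg)  # 2.0
```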
═══ 7. OS/2 EE CONFIGURATION ═══
TURN TRACE OFF
FILE SYSTEMS - HPFS vs FAT
QUERY MANAGER ROW POOL
SUGGESTED DATABASE MANAGER CONFIG PARAMETERS
SUGGESTIONS FOR LOG CONFIGURATIONS
SUGGESTED DATABASE CONFIG PARAMETERS
═══ 7.1. TURN TRACE OFF ═══
The OS/2 trace can significantly slow the system, sometimes by a factor of 2 or
3. If performance degenerates unexpectedly, first make sure that the OS/2
trace has not been turned on inadvertently.
Rules:
1) Make sure that the OS/2 Trace is not on.
═══ 7.2. FILE SYSTEMS - HPFS vs FAT ═══
The OS/2 database manager cannot take full advantage of all features of the
High Performance File System. This is particularly true of the cache. Since
data reliability is so important to the database manager, all log records must
be immediately written to the disk. This defeats the purpose of the cache,
which tries to avoid disk activity by keeping frequently accessed information
in memory. The database implements its own cache, called the buffer pool.
Even so, the database is slightly, but not significantly, faster on HPFS. The
log is faster on FAT.
Rules:
1) Use HPFS for the database and FAT for the logs.
═══ 7.3. QUERY MANAGER ROW POOL ═══
Query Manager scrolling performance can be improved by increasing the size of
the Query Manager row pool. Beginning with release 1.2, Query Manager began
formatting an entire row of data, instead of just a screenful of data. This
made horizontal scrolling smoother, but required a larger row pool. For this
reason, the default row pool is no longer adequate for many applications. Note
that this impacts the scrolling performance in Query Manager only. This is a
Query Manager startup parameter.
Rules:
1) To improve the scrolling performance in Query Manager,
specify a larger row pool.
═══ 7.4. SUGGESTED DATABASE MANAGER CONFIG PARAMETERS ═══
- Shared segments: 802
Since this is a 'high water mark' (i.e., the system takes only what it
needs, not to exceed 802), this parameter should be kept at the
maximum.
- Databases: (number of databases)
This parameter should be set to the number of DATABASES that may be
opened AT THE SAME TIME. This parameter should not be set higher than
the required number.
- Req I/O Block Size: 4096
The default is OK here.
- Srv I/O Block Size: 4096
The default is OK here.
- Remote Connections
This is used to estimate the optimal communications buffer size.
If the number is too small, the system will function, but with
slightly lower performance.
═══ 7.5. SUGGESTIONS FOR LOG CONFIGURATIONS ═══
A small number of large logs is better than a large number of small logs. The
size of the logs required will depend on the application. Some general
background on logs is helpful here. Logs in OS/2 1.2
are circular in structure. The same log space is used over and over. When a
piece of a log is no longer needed, the space is then reclaimed. In this
process, secondary logs are reduced to a file of size zero. Primary logs are
initialized. This reclaim process is most likely to occur when the entire log
space (primary + secondary) is completely full. At this point, users will see
a noticeable delay. The bigger the log space, the longer the delay. The
primary logs should be large enough to handle most situations, but secondary
logs should be allocated to avoid secondary log initialization delays.
Operations that require a large log space are reorgs and imports of large
tables. Read-only transactions are not logged. Update, delete, and insert
transactions are always logged.
As discussed previously, for performance, it is better to have the logs on a
different physical hard disk than the database. The database runs faster when
the log is placed on a FAT (non-HPFS) partition.
Rules:
1) Place the database on HPFS, and the LOG on a separate
physical FAT partition
2) A small number of large logs is better than a large number
of small logs.
═══ 7.6. SUGGESTED DATABASE CONFIG PARAMETERS ═══
- buffer pages >= 250 (server only)
This parameter is one of the most important parameters to consider when
tuning database performance. If it is not large enough, performance
will be significantly degraded. If the parameter is much too large,
then additional system resources (memory) will be used. When in doubt,
err on the side of making this parameter too large rather than too small.
- deadlock check interval = (default)
This parameter does not significantly impact system performance. It
can be set to the default value and increased if deadlock is a problem.
- Lock List Pages = 25
This parameter will impact concurrency if it is too small. If the
application is expected to hold many locks concurrently (i.e., long
transactions with Repeatable Read level of isolation; long transactions
that update, insert, or delete large numbers of records), then this
parameter must be increased. Increase if concurrency problems show up.
- Lock List Percent per Application = 6
This parameter should be large enough for each connected application
to have access to the locks that it requires.
- Max Open Files = 255 (high water mark)
- Max Open Files Per Application = 25 (high water mark)
- Application Heap = 5
The application will fail with a specific return code if this parameter
is not set high enough.
- Database Heap = 5
The application will fail as above if this parameter is not high enough.
- Communications Heap = (RQRIOBLK * m + 65535) / 65535 where m is the
maximum number of concurrently open cursors used by an application.
- Sort List Heap = 2
This should be increased if large tables are sorted or joined. This is
also an important parameter to tune.
- Statement Heap = (?)
This parameter has no impact on runtime performance. Only precompiler
performance is affected.
Rules:
1) 'Buffer Pages' and 'Sort List Heap' are the most significant.
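The communications-heap formula given above can be checked with a quick
computation. Integer (floor) division is assumed here, since the text does
not state the rounding rule:

```python
def comm_heap_segments(rqrioblk, max_open_cursors):
    # Communications Heap = (RQRIOBLK * m + 65535) / 65535, where m is
    # the maximum number of concurrently open cursors (see text).
    return (rqrioblk * max_open_cursors + 65535) // 65535

# With the default 4096-byte I/O block and 8 concurrently open cursors:
size = comm_heap_segments(4096, 8)  # (32768 + 65535) // 65535 == 1
```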
═══ 8. APPENDIX ═══
PERFORMANCE TECHNIQUES
PERFORMANCE PARAMETERS
═══ 8.1. PERFORMANCE TECHNIQUES ═══
┌───────────────────┬──────────────┬─────────────┬────────┬──────┐
│Technique          │Implementation│Expected     │Cost    │Heavy │
│                   │ Time         │Payoff       │        │Hitter│
├───────────────────┼──────────────┼─────────────┼────────┼──────┤
│Increase Memory    │ Any          │High         │Low ($) │ Yes  │
│Turn Trace Off     │ Any          │High (40-70%)│None    │ Yes  │
│Ap Remote Interface│ Early in Dev.│High (10-60%)│High    │ Yes  │
│Record Blocking    │ Any          │High (5-40%) │Low     │ Yes  │
│Index Restructure  │ Any          │High (0%-90%)│Low     │ Yes  │
│Split Log/2 Drives │ Any          │Medium(5-30%)│Low ($) │ Yes  │
│Commits            │ Development  │High         │High    │ Yes  │
│Query Mgr Row Pool │ Any          │High (5-60%) │Low     │      │
│Reorg/Runstats     │ In Production│Medium       │Medium  │      │
│Performance Bnchmk │ All          │High         │High    │      │
│Comm Mgr Xmit Block│ Any          │Medium       │Medium  │      │
│Table Restructure  │ Early        │Medium       │High    │      │
│Locking            │ Development  │Medium       │High    │      │
│SQLLOO             │ Any          │Low          │Low     │      │
│HPFS/FAT partitions│ Any          │Low (2-5%)   │Low     │      │
│Isolation Levels   │ Late in Dev. │Low          │Medium  │      │
│Static SQL         │ Development  │Low          │Medium  │      │
└───────────────────┴──────────────┴─────────────┴────────┴──────┘
Heavy hitters are parameters with high payoff/cost ratios.
Implementation Time deals with the point in the development cycle
that is most appropriate for the implementation of the technique.
═══ 8.2. PERFORMANCE PARAMETERS ═══
┌───────────────────┬──────────────────┬────────┬───────────┬──────┐
│Parameter          │Tradeoff          │Expected│Suggested  │Heavy │
│                   │(Give / Take)     │Payoff  │ Value     │Hitter│
├───────────────────┼──────────────────┼────────┼───────────┼──────┤
│Shared Segments    │None              │Low     │802        │      │
│Databases          │# db's / memory   │Low     │# databases│      │
│Req I/O block size │size Xmit / memory│Low     │default    │      │
│Srv I/O block size │size Xmit / memory│Low     │default    │      │
│Remote Connections │speed / memory    │Low     │# remote cn│      │
│                   │                  │        │           │      │
│Log File Size      │log KB/DASD,speed │Medium  │high       │      │
│Primary Logs       │log KB/DASD,speed │Medium  │low        │      │
│Secondary Logs     │init time/run time│Medium  │none       │      │
│Soft Check         │reclaim / speed   │Low     │default    │      │
│                   │                  │        │           │      │
│Buffer Pages       │memory/speed      │High    │250        │ Yes  │
│Deadlock Check Int │speed/check time  │Low     │default    │      │
│Lock List Pages    │memory/concurrency│Medium  │25         │ Yes  │
│Lock List % per App│                  │Low     │100/num_aps│      │
│Max Open Files     │                  │None    │Default    │      │
│Max Appl Open Files│                  │None    │Default    │      │
│Maximum Appls      │memory/# connectns│Medium  │Number Cons│      │
│Application Heap   │                  │None    │Default (*)│      │
│Communications Heap│memory / comm heap│None    │see text   │      │
│Sort List Heap     │memory/speed      │High    │6          │ Yes  │
└───────────────────┴──────────────────┴────────┴───────────┴──────┘
(*): (RQRIOBLK * m + 65535) / 65535
($): The solution will require additional hardware (GREENware).
Heavy hitters are parameters with high payoff/cost ratios.