*** BlueFish Copy (from Computer Library, November, 1988): Doc #15522 ***
Journal: PC Tech Journal June 1988 v6 n6 p44(7)
* Full Text COPYRIGHT Ziff-Davis Publishing Co. 1988.
--------------------------------------------------------------------------------
Title: Network complexity. (choosing a local area network) (includes a
related article on measuring LAN performance)
Author: King, Stephen S.
AttFile: Program: LAN PERFORMANCE MEASUREMENT LANPERF.C
Program: LAN PERFORMANCE MEASUREMENT LANPERF.EXE
Program: LAN PERFORMANCE MEASUREMENT LPTEST.ASM.
Summary: General-purpose office automation LANs can be evaluated on
the following criteria: interoperability; application programming
interfaces; communications protocols; network management; security;
costs; and market viability. Many current PC LAN solutions are flawed
because they were designed for older, single-user technology and future
products could come from larger computer-manufacturing interests with
more knowledge of distributed technology. There are two basic
approaches to network resource management: peer-to-peer and centralized.
High levels of storage and peripheral resources at the workstation level
and limited data sharing typify the peer-to-peer approach. Powerful,
centralized file servers and higher bandwidth leading to higher hardware
costs typify the centralized approach.
--------------------------------------------------------------------------------
Descriptors..
SIC Code: 7373; 3660.
Topic: Local Area Networks
Performance Measurement
Analysis
Hardware Selection.
Feature: illustration
table.
Caption: Network protocols.
Record#: 06 689 321.
--------------------------------------------------------------------------------
Full Text:
Network Complexity
Local area networks (LANs) may well be the ultimate platform for
production software, communications, and sharing computer resources, yet
today's PC LAN industry is in a profound state of disarray. Major
vendors are standardizing on different technologies; hardware/software
incompatibilities persist; and LAN applications are becoming more
difficult to integrate.
With this cover suite, PC Tech Journal launches a series of
in-depth LAN evaluations. Our intent is to apply consistent,
comprehensive criteria to distributed technologies that support
microcomputers as workstations. LAN performance is assessed using a
utility written in C (see the sidebar, "The LAN Performance Challenge,"
p. 46). The first network evaluated is Novell NetWare 2.1 (p. 58,
this issue).
The PC Tech Journal LAN series is designed to help developers and
integrators who create products for PC LAN platforms, as well as
end-user organizations that acquire, develop for, and maintain LANs.
Not long ago, this would have been a select group--few companies were
willing to invest in the new technology. Today, however, firms feel the
urgency of connectivity. Consequently, LANs are proliferating into
virtually every sector of the economy.
The focus of this LAN evaluation series is multipurpose office
automation networks supporting data management, communications, document
production, and group-productivity software. File servers,
workstations, and communications hardware are covered from the
standpoint of their interaction with the network software.
The LAN industry's priorities naturally correspond to the
ascending layers of the Open System Interconnection (OSI) protocol
stack. The higher the layer, the more the need for improvement in
existing protocols. The internationally acknowledged OSI model,
developed by the International Standards Organization (ISO), defines
seven layered communications protocols used by PCs, minicomputers, and
mainframes to converse across local and wide area networks (see table
1).
The OSI layers are represented in a vertical arrangement, with the
lower levels addressing hardware concerns; the middle layers covering
internet-working, routing, and flow control; and the upper layers
defining protocols for network applications and program-to-program
communications.
As the lower communications layers have improved, primary
technical concerns have migrated up the protocol stack. Thus, the
greatest deficiencies in the PC LAN industry are at the top of the
stack, in the session, presentation, and application areas.
In the early stages of the LAN industry's development, much
attention was directed toward the lower-level hardware concerns, such as
topology and media access--major elements of any LAN implementation.
But as the technology has matured, media-access methods have been
stabilized by the wide acceptance of IEEE's 802 model, which defines
specifications for Ethernet, StarLAN, and Token-Ring. Along with the de
facto standard ARCnet, the 802-derived topologies increasingly will
dominate the network landscape. (For more on LAN topologies, see "LAN
Hardware Standards," Art Krumrey and John Kolman, June 1987, p. 54.)
Technical concerns are substantial in the middle, subnet layers,
but adequate protocols are available, such as Transmission Control
Protocol/Internet Protocol (TCP/IP) and Xerox Network Systems (XNS),
both of which perform addressing, routing, and other internet-work
functions.
While work goes on defining the upper layers, the PC LAN industry
continues to depend heavily on NETBIOS, IBM's program-to-program
protocol for PC networks. Despite its wide use, NETBIOS barely
qualifies as a true session-level protocol, and it is by no means
adequate to support complex multiuser applications on an internet-work.
Upper protocol layers should support global name service,
authentication, and a rich set of interprocess communications routines.
PC LAN vendors tend to tack on these features at the application level
instead of including them in the communications subsystem where they
belong. In contrast, minicomputer and workstation vendors have made
substantial progress in the implementation of these advanced
functions--for example, Sun Microsystems Inc.'s Remote Procedure
Call/External Data Representation (RPC/XDR) and Digital Equipment
Corporation's (DEC) Session Control.
Ultimately, the quality of services obtained by client
applications is governed by the client/server protocol at the top of the
stack--the application level. The current industry-standard protocol,
Microsoft Network's server message block (SMB), does not support a rich
set of client services. Some successful LAN vendors do not rely on SMB
and have their own file system interface. Novell's NetWare Core Protocol
(NCP) and Sun's Network File System (NFS) are examples of robust
client/server protocols that are open to programmers and developers.
ACROSS THE LAN-SCAPE
The diversity of LAN applications and implementation techniques
makes it difficult to establish evaluation criteria. New LAN products
and even new classes of products appear at an amazing rate. Nearly as
many variations of LAN technologies have emerged as there are types of
installation sites and applications. It seems for every LAN technology,
there is a different LAN implementation philosophy.
One striking dichotomy in the LAN industry is the two separate
approaches to network resource management: peer-to-peer and centralized.
The peer-to-peer view is typified by high levels of storage and
peripheral resources at the workstation, thus handing large system
administration responsibilities to each user. The centralized view, on
the other hand, holds that network resources are best managed and
maintained on powerful, centralized file servers. With this approach,
workstations need high processing power, but are not the ideal location
for mass storage, peripheral, and backup resources.
Biases toward either of these approaches impact design processes
for LAN products and network application software. The ironic credo for
advocates of centralized resources is, "Distributed doesn't mean
decentralized." This translates to: processing power may be distributed
to the end users' workstations, but the responsibilities for network
administration should be centralized, as they are on minicomputers and
mainframe systems. Supporters of the peer-to-peer approach, in
contrast, believe that the workstation is the center of the automation
universe, both in terms of resources and management responsibilities.
Although any LAN can include elements of both strategies, most
vendors fall easily into one camp or the other. LAN vendors such as
Banyan Systems, Novell, and 3Com support high levels of centralized
management and resources. With systems from these vendors, the network
can be administered by any end-user node, but administrative
responsibilities typically are held by a select group with special
privileges. Hard drives, communications equipment, and tape backup
units are generally situated at dedicated servers, not at workstations.
Vendors with a peer-to-peer orientation include TOPS and Apple
Computer. Many low-end PC LAN products lack centralized hardware
support and management facilities, and consequently fall into the
peer-to-peer category by default. Representatives of the low-end or
workgroup LANs are Network-OS, from CBIS; Port, from Waterloo
Microsystems; and 10NET, from 10NET Communications.
Peer-to-peer LANs are not as reliant on heavy communication
between nodes, because the stations store much of their data locally.
One disadvantage to this approach is that shared data are fragmented
onto local drives, making access by many users difficult. In some
applications, fragmented data may not be an issue. Peer-to-peer LANs
rely on a higher level of effort and systems knowledge on the part of
end users who share each other's equipment. This is impractical in many
business environments. Centralized resources are probably the better
choice in a system where end users are not overtly computer oriented.
LANs with more centralized resources require higher bandwidth to
support regular file I/O and queuing of requests to the servers, and
this can mean higher costs for hardware. The payoff is that the
centralized resource helps ensure the availability, integrity, and
backup of shared data.
STANDARD BEARER
If the two approaches are different in most other aspects, they
are affected equally by the atmosphere of evolving standards. LAN
standards must advance if the industry is to realize its potential to
provide computer users with a uniformly high level of distributed
services for the spectrum of applications. Only when standards have
become well defined will vendors be able to differentiate themselves by
the quality, dependability, performance, and cost of their products.
Without standards, products providing the same services are not
interchangeable and vendors can lock buyers into sole-source
relationships that encourage neither innovation nor rapid progress for
the industry.
Standards are often at war with proprietary interests. The
struggle for international telecommunications standards involving IBM's
System Network Architecture (SNA), OSI, and Integrated Services Digital
Network (ISDN) is an example of this. Each of these interests wants
standards, but each would prefer standards that closely relate to its
own products. Witness the slim likelihood that protocol components from
vendors such as DEC and Hewlett-Packard (HP) will be freely
interchangeable in the near future--even if they support OSI. In the
minicomputer and mainframe industries, standards provide a common
language more often for efficient communications between dissimilar
systems, rather than open interchange of vendor components.
The PC LAN industry has similar examples, and worse, the standards
presently in place are based on older, single-user technologies and
address somewhat primitive network functionality. The most advanced
network functions are available only in proprietary technologies, and
the progress of LAN standards is far from keeping pace with product
development. A bigger problem with standardizing these advanced
technologies, even now, is that not all vendors will support them. For
example, major LAN vendors currently are adopting different electronic
mail (E-mail) and database engines, thus making standard development
platforms difficult to achieve.
The chances for the LAN industry to standardize rapidly on
much-needed distributed file systems, client/server interfaces,
store-and-forward capabilities, or session-layer protocols do not look
promising. As with the large systems industries, the best that can be
hoped for is that proprietary products from major vendors will talk to
each other in a relatively seamless fashion.
This problem is compounded by the possibility that LAN standards
may not evolve in "obvious" directions. Normally, standards are
developed by international organizations, or large industrial interests
in a particular industry. In some cases, though, when standards have
not provided what users require, a smaller company with a superior
product may spontaneously create a new standard against all odds--the
Hayes Smartmodem, for example, or Adobe's PostScript.
The LAN industry has reached the point where standards such as DOS
and NETBIOS have fallen too far behind user needs for advanced network
functions. If new products such as OS/2, LAN Manager, and advanced
program-to-program communication (APPC) do not fill the gap quickly,
products from other sources will.
One possibility is that LAN purchasers may become dissatisfied
with the slow development of PC LAN standards and instead buy their LANs
from workstation vendors such as Sun or Apollo. These vendors can
support DOS applications with DOS coprocessors or software emulation and
thus enjoy the benefit of powerful network services not available with
PC LAN industry products. In the short term, then, LAN products must be
evaluated conscientiously and with careful anticipation of the evolving
industry.
A GAUGE TO APPLY?
Despite this divergence of LAN philosophies, general-purpose
office automation LANs can be evaluated against a common set of
criteria. The following considerations were developed from study of the
services provided by PC LANs, UNIX workstation systems, and
minicomputers and mainframes.
Interoperability. With the virtual explosion of new and different
LAN architectures into the market, interoperability is fast becoming a
priority for many organizations. Applications on a given LAN must be
able to interact with applications on other systems that might include
dissimilar LANs, workstations, minicomputers, and mainframes. Often,
two or more dissimilar architectures are supported within the same
organization--sometimes at the same site.
Part of the interoperability requirement is support for multiple
client operating systems. Also important are gateways providing
transparent protocol conversion between user application processes
running on different systems. TOPS has found an excellent niche
providing interoperability between Apple-, DOS-, and UNIX-based clients.
Other vendors, such as Banyan, Novell, and 3Com, are directing major
efforts to achieve similar functions.
Application programming interfaces. LANs are becoming the
platform of choice for PC application developers. A full-featured LAN
should support a large number of callable software routines; at a
minimum, support of standard DOS 3.X function calls should be provided.
Important intersystem APIs include IBM's high-level language application
programming interface (HILLAPI), APPC, NETBIOS, and the Berkeley socket
interface for TCP/IP.
The DOS criterion requires that client applications write to and read
from network disk files in the same way they would with files on local
disks. Some of the APIs to be expected from a full-featured LAN include
services for file management, usage accounting, remote jobs, printing,
asynchronous servers, network diagnostics and management, name servers,
database servers, and interprocess communications. Advanced network
functions should be evaluated for the degree of programmer accessibility
provided by the vendor in the form of link libraries and well-produced
documentation.
Communications protocols. LANs depend on their communications
protocol stack for much of their performance and functionality.
Inefficient code at any protocol layer is a potential bottleneck. At
odds with performance is the need for well-structured protocol
interfaces between communications functions. Deficiencies in the stack
stifle a LAN's ability to grow and to internetwork. Many LANs need to
show improvement in this area.
Products based on older technologies tend to leave out layers or
combine different communications functions into a single component. Even
the high-end LAN vendors are ambivalent at times about highly layered
communications software. This is understandable when considering the
performance penalties associated with dividing communications processes
into discrete modules, each with its own I/O interface. In the long
run, however, the benefits of many independent layers outweigh the
drawbacks of any performance or development overheads.
Well-defined protocol components allow sections (modules) of the
stack to be modified or replaced without affecting the other layers.
Also important, a layered approach provides optional points of interface
between dissimilar systems. For example, if OSI-type layers are in
place, designers can interface systems at the data-link layers with a
bridge, at the internetwork layers with a router, or at the application
layers with a gateway. Ideally, a LAN vendor should provide more than
one type of protocol at each layer of the stack, allowing LANs to adapt
to different budgets, communications resources, and application demands.
Network management. Without strong network management
capabilities, local and wide area networks prove unreliable, even if
they are built with well-designed protocol stacks and mature distributed
operating systems. Network management tools should provide the system
manager with network monitoring and reconfiguration capabilities from
any node. Statistics on network traffic and server access patterns
should be available for off-line analysis. Full-featured network
management utilities are invaluable for diagnosing network faults,
managing connections, leveling loads, and planning capacity.
None of the current PC LAN offerings has fully developed network
management facilities, but the minicomputer and mainframe architectures
have had more time to develop sophisticated network management. DEC's
Digital Network Architecture (DNA) is a fine model for establishing
network management criteria. It allows a system manager or an automated
management process to monitor and configure components of the protocol
stack. DEC supplies a high-level network management command language so
that programs can enhance network management functions. These same
types of services are also necessary for the expansion of PC LANs.
Security. In many LANs, security is a priority and a constant
user concern. Users of stand-alone PCs are accustomed to the physical
security of storing their data on a local hard disk in their own
offices, which they lock up when they leave. When a stand-alone user
starts using a LAN, this type of security is not apparent.
LANs can provide excellent levels of security, but this involves a
host of concerns, including the vulnerability of data on shared hard
disks, the possible disclosure of data printed on network printers
located in common areas, the dangers of unencrypted passwords on the
wire, and the information dissemination provided by LAN-based wide area
E-mail systems.
Although all LANs have some security features, many are deficient
in this area, particularly those that use DOS to host their server
operating function. Some distributed systems, such as Sun Microsystems'
DFS, enhance their authentication and server security with techniques
based on the National Bureau of Standards' Data Encryption Standard
(DES). Shared resources, such as printers and directories, need more
protection than a simple password. A sophisticated LAN security system
includes functions such as login tracking, forced password change, file
and volume encryption, and audit trails.
Costs. The cost associated with an application on a LAN is
generally understood to be less than for the equivalent application on a
minicomputer. This widely held belief can be misleading, however. Cost
evaluations must take into account the many indirect costs associated
with a LAN.
For example, the direct hardware and software costs for a fully
loaded 80286 workstation connected to a shared file server may be less
than $5,000--especially if the file server costs are distributed across
many stations. But even if the shared equipment costs are factored into
the price of a workstation, the figure remains unrealistically low. The
actual "loaded" cost for a 286 workstation, including cabling,
maintenance, training, support, physical improvements for central
servers, and consulting, can, in some cases, exceed $10,000. This is
close to the cost for a minicomputer workstation, with a portion of the
CPU expenses factored in.
Cost calculations for LANs are not particularly straightforward or
self-evident. The short-term savings of implementing a low-speed
topology such as StarLAN, for example, can be wiped out in the long term
by losses in productivity associated with low application performance.
Market viability. Many firms from diverse commercial sectors are
seriously developing and marketing LANs. These manufacturers can be
organized into no fewer than five classes:
1. PC-centric firms, including Banyan, Microsoft, Novell, and 3Com
2. IBM and its many value added resellers
3. "Voice vendors," such as AT&T and Northern Telecom
4. Minicomputer vendors, such as DEC, HP, and Prime
5. Engineering workstation vendors, such as Apollo and Sun.
Minicomputer and workstation vendors are included in LAN considerations
because the distinctions between micro- and minicomputer-based systems
are diminishing. This process is similar to the way in which the
distinctions between voice and data systems are diminishing, as voice
vendors go digital and are able to support PCs in addition to
telephones.
Standard features on minicomputer networks are often what PC LAN
vendors wish they could offer. Some minicomputer vendors, such as
Prime, are offering 386-platform versions of their products that look
like high-performance file servers, but bring the advantage of terminal
support and facile connections to larger, shared processor systems.
Midrange processors from vendors such as DEC and HP increasingly are
configured as file servers. These minicomputers can support DOS clients
with SMB client/server protocols, and, in the case of the DEC VAX, even
Novell's network core services, running as a guest operating system.
Current solutions from PC LAN vendors are in many ways crippled by
the heritage of an older, single-user technology (the 8088 chip and
CP/M, for example). As a consequence of these deficiencies, portions of
the PC LAN-vendor market share may be swallowed up as UNIX workstation
manufacturers, minicomputer vendors, and other large
computer-manufacturing interests bring the full weight of their
distributed technologies to bear on the DOS LAN marketplace.
LANS OF TOMORROW
Although LANs have matured enough to become the business solution
for work-group and departmental systems, the industry is less than 10
years old. LAN standards, product interoperability, and vendor
stability may take years to develop fully. As the industry grows,
market forces will eliminate many LAN products and vendors--the normal
shakeout in any maturing industry. LAN evaluators should bear in mind
that the technology they embrace today may not be available in years to
come.
Evaluating LANs is an increasingly demanding undertaking
considering the diversity of technologies and the transitional state of
the LAN industry. LAN products that offer the best cost/performance now
may prove less viable in the future when issues such as wide area
interoperability, standardized communications APIs, and network
management become dominant concerns.
Computer science has created hardware and software advances that
could provide users with the services, dependability, and performance
they require now. Yet, a substantial gap remains between LAN
technologies and what users can purchase.
Although present in many industries, this effect is particularly
evident in the computer field where scientific research produces
advances in technology much faster than the commercial sector can bring
them to market. The gap between capabilities and deliverable products
is particularly acute in the PC network field as attempts are made to
build powerful distributed systems out of products based on older
single-user, single-tasking platforms.
The solution to this problem must lie with the companies that
manufacture and market network products. Remarkable opportunities exist
for firms that can adapt to the dynamic environment of the LAN industry
by delivering innovative products tailored to meet changing needs.
With this in mind, developers should be constantly aware of
changes in the LAN marketplace. Integrators and end-user organizations
must be equally cautious in assessing connectivity products. As the LAN
industry matures and LAN connections multiply, the best efforts of all
computer professionals will be required if LANs are to reach their
phenomenal potential.
THE LAN PERFORMANCE CHALLENGE
Quantifying LAN performance is difficult because myriad
variables affect network throughput. Network performance becomes an
issue to users only when it noticeably affects application tasks. Most
current LAN evaluation techniques have little relevance to the
performance of typical DOS LAN applications. Even determining what
constitutes acceptable performance is somewhat subjective. Complicating
this further are LANs that support diverse applications and users. A
network perceived as a good performer for word processing may do poorly
with demanding data-management tasks.
Manufacturers of LAN adapters and network test equipment view
performance in terms of network media utilization--a network moving 5
megabits per second (Mbps) on 10-Mbps media is 50 percent utilized.
This approach reveals much about the efficiency of low-level network
components, but says little about application performance.
Network vendors supporting large shared-processor systems often
use the speed of a standard software operation to represent performance.
This works for easily defined operations performed regularly, but such a
situation is not typical for most general-purpose LANs. Application
usage patterns are difficult to predict. Other techniques evaluate the
data-transfer rate or the delay associated with it.
To correlate network performance with application performance, it
helps to look at end-to-end network throughput. For PC LANs, end-to-end
throughput can be conceived as the rate of data transfer between a
workstation application and the server. If data transfer between
network nodes does not significantly impede application performance,
then end-to-end throughput is adequate. If data-transfer rates
adversely affect application performance, end-to-end throughput is
insufficient, and the data-path subcomponents require examination.
Many discrete components lie in the data path between two network
nodes, any of which could introduce latency and limit throughput. The
path from a client application to the server's disk travels down the
client's communications stack, through the network transmission media,
up the server communications stack, and arrives at the server's
operating system and disk channel. The return trip takes the same route
in reverse. A highly layered network might include the following
components in the data path: operating system, redirector or shell,
router software, data-link software, network card, cable system, caches,
and disk channel.
Ideally, LAN performance evaluations should account for end-to-end
throughput and the throughput of data-path components. End-to-end
throughput is limited by the slowest component. For example, a network
without disk caching may be limited by the throughput of its disk
channel. If caching is enabled, the disk channel is eliminated as a
bottleneck; other possibilities are the workstation's processor speed,
media-access software, or network interface card.
High-performance networks such as Token-Ring or Ethernet are not
generally thought to restrict throughput, but the advent of 80386
workstations and servers introduces this potential. Token-Ring's
maximum throughput is less than 350 KB per second (KB/s) when the
overhead of the network card and driver are considered. This is well
below maximum 386 throughput and may even limit some fast 80286
machines.
INTRODUCING LANPERF
PC Tech Journal has developed the performance utility, LANPERF, to
measure the throughput in KB/s of applications making DOS calls. LANPERF
can be downloaded from PCTECHline, PC Tech Journal's online service. It
may be run on one or more network stations and synchronizes testing for
multiple stations. Portions of LANPERF are written in assembly language
to minimize latencies introduced by the test code. Unlike most DOS
applications, LANPERF operates continuously, so its throughput
approaches the maximum data-transfer rate for DOS operations.
LANPERF can be used to compare the performance of different
configurations of a single network or of two different networks. It
reveals the effects on throughput made by changing components such as
caches, drives, network cards, workstations, server processor, and so
on, so that adjustments can be made to improve LAN performance. Some of
these components impact throughput substantially, so changing them can
yield striking results.
The LANPERF program should run on any network that supports DOS
3.1 or later. Because it uses DOS calls, LANPERF reports throughput
statistics for local diskette drives, hard disks, and RAM drives, in
addition to network drives. The typical throughput for a 286 machine
writing to a fast, local hard disk is approximately 100 KB/s; in this
test, the disk channel was the limiting factor. A 16-MHz Compaq Deskpro
386 running LANPERF measured a throughput of 2,067 KB/s when reading
from a local RAM drive.
LANPERF performs a read test and a write test, each of which runs
for a user-specified number of seconds. The read test creates a
temporary file containing random data and reads it repeatedly; the write
test writes random data to a temporary file. For both modes, the block
size and file size may be set from the command line. To achieve
overlapped reads or writes, file size is set equal to block size;
otherwise operations will be sequential. The DOS extended open mode for
reads and writes may be specified, including deny-read, deny-write,
deny-read/write, deny-none, or compatibility mode. These parameters
allow LANPERF to simulate standard file operations made by DOS
applications. Changing any of them can impact network performance.
Varying block size, for example, has a dramatic impact on
application throughput. Block sizes of 512 bytes or larger are close
to the capacity of a network packet and result in high throughput
figures; small blocks (such as 1 byte) are processed individually and
require a packet per block. In testing, the throughput for 1,024-byte
reads by a 10-MHz AST Premium/286 was 98.13 KB/s. The same
configuration tested with 1-byte blocks yielded 3.58 KB/s, revealing the
cost of working in small block sizes.
File-open modes affect performance if network software supports
local caching in the client station. Local file caching lets
applications perform small read and write operations to local cache
buffers rather than the server. The contents of local buffers are
updated periodically on the server. Local caching will not work for
large data transfers because buffer size is typically small. Operations
that open files nonexclusively cannot use local caches due to
data-synchronization needs on the server.
Because applications do not generate continuous traffic, as
LANPERF does, the KB/s metric must be interpreted into application
terms. One approach is to measure times for loading network
applications into memory and standard application operations, such as
searches, sorts, indexing, copying, and so on. Then, using the same
configuration, run LANPERF to determine the throughput figures. This
reveals the levels of throughput needed to perform application tasks in a
specific elapsed time. Application performance for another network
configuration can then be predicted by comparing LANPERF throughput
results for both networks.
Although LANPERF provides a reliable method of comparing
throughput for various configurations, correlating application
performance with throughput measurements is not an exact science. No
standard profile for application demands exists; consequently, the LAN
evaluator has the burden of identifying application usage patterns and
interpreting LANPERF throughput measurements.
PERFORMANCE TRANSACTIONS
Although it is a useful measurement, continuous throughput is only
one aspect of LAN performance. A data-transfer operation does not reach
full throughput levels instantly--a session must first be established
between two network nodes. This may involve exchanging session IDs,
sockets, or handle numbers, and negotiating transfer parameters. The
management of sessions takes place on more than one protocol level, with
various layers conducting handshaking and initialization sequences.
The DOS request-response architecture introduces significant delay
into file operations when combined with the natural latency of the network
link. Every DOS request to the file server must wait for a response
before the next operation is performed. Opening and closing files also
incurs delay.
Many network applications involve more than sustained transfer
operations and require smaller operations with high administrative
overhead (databases, for example). Such an operation does not achieve
throughput equal to the capacity of the session's link. Consequently,
the maximum throughput figure as measured by LANPERF is not the only
meaningful representation of network performance.
For repeated operations in which administrative overhead is
substantial, a transaction is a more significant representation. A
transaction correlates well to functionally related activities, such as
open-read-close, that are bounded by initialization and completion
sequences. For performance modeling purposes, many types of
transactions could be considered, each with different levels of
administrative overhead. A transaction for a message-passing session
comprises all operations required to establish the session, transfer
data, and terminate the session. A transaction for a database session
includes opening, locking, reading, writing, and closing files.
The performance characteristics of transactional operations in
network environments require a more sophisticated utility that can
introduce delays into operations and model complex application traffic
patterns. However, the transactional approach is not a fully developed
methodology. The LANPERF utility, as introduced, is designed to measure
throughput for sustained file operations involving many iterations.
Future enhancements to the program are anticipated.