$Unique_ID{BAS00018}
$Pretitle{}
$Title{The Changing Game}
$Subtitle{}
$Author{
Felber, Bill}
$Subject{Changing Game}
$Log{}
Total Baseball: The History
The Changing Game
Bill Felber
Introduction
In the season of 1906, at the arguable heights of their careers, the Hall
of Fame-bound trio of Joe Tinker, Johnny Evers and Frank Chance completed
approximately 50 double plays. Eight decades later, the deservedly anonymous
New York Mets infield of Kevin Elster, Gregg Jefferies, and Dave Magadan
turned roughly twice that number. May we infer that the finest middle infield
of a bygone era would be rejected as unfit for duty on a perfectly nondescript
modern pro team?
For the five-year period between 1921 and 1925, Rogers Hornsby batted
better than .400. In the past 45 years, not a single major league hitter has
reached that level of excellence for as much as one season, let alone half a
decade. May we conclude that, were he in his prime today, Hornsby would shame
Wade Boggs into anonymity?
The answers to those questions are, if not two resounding calls of "no,"
at least two very cautious offers of "not necessarily."
Baseball is not played in a time capsule, and neither its record book nor its archives can be read as if they were. The game played on the artificial turf of Busch Stadium that you watch today on television holds the same lure as the contest your grandfather rode a carriage to Chicago's old West Side Park to see.
Teams contest for the same end, using fundamentally the same objects in a format prescribed by basically the same rules. But technological, sociological,
strategic and cultural forces have over decades refined those elements so that
today's performances cannot be accurately measured relative to yesterday's,
nor judgment as to the superiority of either made with precision, save in the
mind's eye, which is the only pertinent arbiter of such standards anyway.
Baseball today is different from the game of the turn of the century in
as many ways as American culture is different from the horse-and-buggy era.
Imagine paying a quarter for admission to the ballpark, another quarter for
access to the grandstand, and a third quarter for a seat. Imagine games played
before audiences of a few hundred, or maybe a thousand. Imagine visiting teams
arriving in town on trains, bunking two to a bed, then caravaning to the ball
yard in a grand parade through the town's streets. But never at night. And
never, ever on Sunday. Now imagine baseball as the only sport of widespread
popularity. No football to speak of, no basketball, no hockey; no golf, no
tennis, no track of consequence, horse racing only for the elite, and boxing
only for the disreputable. There was such a time only about a century ago.
In many ways, the game of baseball has changed precisely because America
itself has changed. Whether all that change has been for the good may be
argued. You might contend, for instance, that a part of laudable Americana
died out when the practice of uniformed players publicly trolleying to the
game--a means of stirring fan attention--was halted in the first decade of the
twentieth century. But most, if not all, aspects of baseball's growth
alongside society were inevitable. The 50-cent admission charge established
by the National League in 1876 held forth for many years, but so did the
rather unsavory practice of treating players as peons, to the point of
doubling up sleeping arrangements. Philadelphia Athletics catcher Ossie
Schreckengost once actually had it written into his contract that teammate
(and bunkmate) Rube Waddell would be barred from eating animal crackers in bed
. . . because the crumbs irritated the catcher! Players today sleep in the most
modern of hotels, and they do not always even share rooms, much less beds. And
as the cost of living and the cost of operating a franchise both have
increased strikingly in the interim, so have the size of the grandstand and
the cost of a general admission ticket, the latter about fifteenfold.
In any era and at any price, a great championship battle has always held
the populace in relative thrall. Millions of fans watched on their living room
televisions in October 1993 as the Phillies and Blue Jays waged their World
Series struggle long into the night. Those fans studied every decisive play
from a half dozen angles on instant replay; they second-guessed managerial
moves and controversial umpire decisions. But was that excitement any greater,
measure for measure, than the grip in which the cities of Boston and Baltimore
were held during the final days of the race for the 1897 National League
pennant?
There was neither television nor radio then, but that did not stay the
enthusiasm of hundreds of thousands of rooters nationwide as the pulsating
battle for supremacy wound to a close. The principals were the two most
dominant sporting teams of their generation: The Boston Beaneaters and
Baltimore Orioles had divided the previous six pennants. Now, with less than a
week remaining in the 1897 season, they were locked in a virtual tie for
first, each having won better than seven of every ten games played and fated
by the schedule to meet for three conclusive games in Baltimore. So
all-encompassing was interest in the games' outcomes that Associated Press
telegraphers dispatched play-by-play accounts to every major subscribing
newspaper east of the Rockies. More than three dozen correspondents--an
unheard-of number for the era--covered the games in person. Twenty more
telegraphers tapped out accounts to cities where fans had gathered in theaters
or outside newspaper offices to follow the events on chalkboards. In Boston,
fan interest was so great that the game reports received triple the front page
space accorded to the activities of President William McKinley, who was in
Boston at that very time! Throngs numbering in the thousands massed daily
along Washington Street, Boston's Newspaper Row, to watch mechanical
re-creations. There was a published report of 4,000 fans jamming Boston's
Music Hall to watch a similar simulation. The games at Baltimore's Union
Grounds drew as many as 25,000 spectators--more than twice the previous record
attendance for that facility!
The excitement of a great pennant race is a constant. Only the modes of
sensing that excitement change. Consider only a few of the more obvious
changes: The player pool has changed, albeit at times tardily, to
reflect the nation's accepted ethnic population base. And when that pool has
expanded to encompass Southerners, Irish, Jews, Latins or blacks, it has done
so in fundamental reaction to changes in national acceptance: the gradual
dying out of post-Civil War prejudice; the assimilation of the immigrant
population; the eventual willingness of white society to acknowledge blacks as
equals.
Technology has worked on the sport in ways as basic as improved methods
of construction of the ball and glove; as grandiose and obvious as the
abandonment of the unfenced pasture in favor of the comparative luxury of the
wooden park, thence to the brick-and-concrete stadium, and finally to the
multipurpose facility of recent decades.
Sociological alterations, as exemplified by population shifts from the
cities to the suburbs and the replacement of the trolley in favor of the
automobile, have leveled inner-city ballparks like Pittsburgh's Forbes Field
and replaced them with fringe-area facilities readily accessible only by car.
Attitudinal adjustments of the nation have been mirrored in the game on
the field. We were a prim and proper country in 1908, and our baseball was a
prim and proper game, heavy on the bunt and complete game, and very short on
the home run. We were a comparatively profligate bunch in the late 1920s,
winning and losing with abandon on Wall Street, and we liked baseball heroes
like Babe Ruth and Hack Wilson, who hit 'em far during the day and swigged 'em
often at night. The difference of only two decades is strikingly underscored
in a baseball statistic that might also speak volumes about off-field
attitudes: for the five years between 1906 and 1910, the Chicago White Sox hit
a total of only 27 home runs. Ruth hit more than that by himself in every
season save one between 1919 and 1933.
Baseball's labor-management machinations have at least generally mirrored
national patterns. The present major leagues can trace their ancestry back to
the 1870s, an age when even the legality of organized labor was questioned.
The motivation behind the organizers of the National League was to take
control of the competition away from a players' cooperative. The 1890s, the
era of some of the most violent union-management conflicts (the Haymarket
riot, the Pullman strike, et al.), also witnessed the last direct player
challenge to the authority of ownership, the Brotherhood War, which produced
the Players League. Unionization very gradually gained favor, although both
nationally and in baseball that process took decades. And true player free
agency, as with worker rights, often arrived only through the intervention of the courts, if indeed it even exists today.
Finally, as the educational level of America itself has changed, the
strategies of baseball have evolved. The dominant function of today's "relief
ace" could hardly have been envisioned by the game's greatest minds as little
as two or three decades ago. The stolen base, home run, sacrifice--all have
come and gone, and in some cases come again, as strategic coups. It is as much a matter of judgment to speculate whether the changes that make the game of 1993 different from the game of 1907 are for the better as it is to posit whether Joe Tinker was a better shortstop than Kevin Elster.
No one would contend that baseball has been, or is today, any more than a
general mirror of its times. But neither can it seriously be suggested that
the national pastime has failed to change with the changing years. For
purposes of this discussion, it is vital to recognize both of those realities.
For paradoxically, only by appreciating the game's evolution can you begin to
sense the marvelous continuum represented therein.
So by what context does one measure Hornsby's feats of the 1920s relative
to Boggs' of today? By the context of the technological, strategic, societal
and cultural changes that have wrought both of them. Could Joe Tinker play
shortstop for the Cubs of today? For that matter, could Ozzie Smith have
adapted to the scrub fields, primitive travel demands, incompetent training
aides and all-but-useless equipment of eighty years ago? These questions
cannot be answered with finality. But without considering the many changing
aspects of the game, attempts to even provide an answer become frivolous.
What follows is an effort--by examining some of the major causes of
change--to provide context to discussion of the evolving nature of baseball, a
sport that through more than a century has possessed only one enduring and
vital characteristic: it has, from the outset, been America's national
pastime.
The Leagues
On St. Patrick's Day of 1871, representatives of ten teams met in New
York City to organize what became known as the National Association. The
delegates set a $10 entry fee, elected a gentleman from Troy, N.Y., named
James Kerns, as president, and drew up a set of rules calling for each team to
play every other in a best-of-five-game series. The champion would receive a
pennant.
The National Association survived only a brief five years. But it is
remembered today as the first league organized for the conduct of professional
baseball in America. And its legacy proved lasting. Since that St. Patrick's
Day meeting in 1871, not a day has passed without at least one major league in
operation in the United States.
The National Association collapsed in 1876, both because of problems that
were inherent in its structure, and because of what was perceived at the time
as the unwholesome atmosphere surrounding some of its games. The latter was
readily recognizable at many game sites. Open betting on the day's results
often took place at the park, and players in uniform were among those making
wagers. The result was suspected bribery and open intimidation of umpires and
players. Heavy wagerers directed profanity at players whose errors jeopardized
their stakes. The open selling of liquor further exacerbated ill feelings in
what remained a temperate national climate. But the Association faced other
problems as well. Players refused to honor contracts, often jumping from team to team without penalty. Or they simply deserted. No wonder it was a
comparatively simple matter for one of the owners of the Chicago team, William
Hulbert, to foment what would prove to be a revolution during the winter of
1875-76.
Hulbert, in company with Albert Spalding, who was recognized as the top
pitcher of the day, formulated a list of rules designed to obviate the major
problems plaguing the National Association. Among steps taken was the drawing
up of a formal player contract designed to stifle once and for all the
practice of "jumping." In January of 1876, Hulbert met secretly with
representatives of the Cincinnati, Louisville and St. Louis clubs to outline
his plan. They agreed. Hulbert then confronted the more powerful Eastern bloc
of teams with his intention of establishing a reform league. He also outlined
the principles. Bookmaking and liquor selling would be banned from parks.
Sunday baseball was frowned upon, and the penalty for players taking bribes,
or betting on games, was established as expulsion, with no hope of
reinstatement. The membership fee was raised tenfold, to $100, in an attempt
to guarantee stability. A 50-cent admission price was established. The
National League had been born.
The rise of the National League and the several challenges to its
supremacy are amply documented in this volume, so this essay will defer to
Messrs. Voigt, Dellinger, Mann and Pietrusza, Hailey, Tygiel, et al. Let's
proceed to the changes in baseball on the field as distinct from those in
league structures, legal issues, franchise locations, and race relations.
Night Ball
By the early 1930s, talk was being heard of the revolutionary prospect of
playing major league games under artificial lighting. That the concept was
feasible there could be no doubt: a baseball game had been played at night as
long ago as 1880, only two years after the introduction of electric light, and
the Des Moines, Iowa, club of the Western League installed lights to play
league games in 1930. The idea, which caught on in a Depression era of
dwindling attendance, was to stave off financial collapse. It was Cincinnati
executive Larry MacPhail who finally advanced the notion of staging big league
games around the normal working fan's hours. MacPhail had good reason to lobby
for the change; his Cincinnati franchise had drawn an anemic 206,000 fans in
all of 1934, not enough to offset expenses. MacPhail and Reds' club owner
Powel Crosley petitioned the National League for the right to play seven 1935
games at night, and the league reluctantly agreed, taking note of the
extenuating circumstance of the depressed attendance in Cincinnati. The first
of those games, played May 24, pitted Cincinnati against Philadelphia, and
skeptics were moved to silence when it attracted an audience of better than
20,000 to what proved to be a 2-1 Reds' victory. By 1941, night ball was a
fact in the majority of major league parks, and by shortly after the war's end
only Wrigley Field in Chicago among all major league stadia lacked lighting.
Today, most major league games--and virtually all minor league games--are
played during evening hours. Attendance figures partially reflect the reason:
prior to the advent of night baseball, it was considered exceptional if a
ballclub drew a half million fans for a season, and the entire National League
schedule of 1933 attracted only about 3.1 million fans. Today, several clubs
can anticipate drawing more than that many in a single home season, and a
minimum attendance of about 1.8 million is required merely to break even.
The Bat, Ball, and Glove
The bat, the ball, and the glove are baseball's utensils. Virtually every
child old enough to root, root, root for the home team owns at least one of
each. Their omnipresence serves as immutable evidence of the game's cultural
popularity.
Yet today's equipment is as changed from its predecessors of generations
ago as is baseball itself. Even the seemingly simple functions of each have
been redefined, in part a cause and in part an effect of the changing game.
Only a few of those changes are reflected in the rulebook; to most of them the book has proven adaptable. Examination of the adjustments made to baseball's
basic tools illuminates the courses of change that the game of baseball itself
has taken.
For obvious reasons, rulemakers have always felt the need to at least
broadly define how the ball shall be made. Curiously, that definition has
changed very little over the course of more than a century. Notice how similar
are the two definitions that follow, the first from an 1861 convention of the
National Association of Base Ball Players, and the second taken from the
Official Baseball Rules of 1993:
1861. "The ball must weigh not less than five and one-half, nor more than
five and three-fourths ounces avoirdupois. It must measure not less than nine
and one-half, nor more than nine and three fourths inches in circumference. It
must be composed of India rubber and yarn, covered with leather."
Present. "The ball shall be a sphere formed by yarn wound around a small
core of cork, rubber or similar material, covered with two strips of white
horsehide or cowhide, tightly stitched together. It shall weigh not less than
five nor more than five and one quarter ounces avoirdupois and measure not
less than nine nor more than nine and one quarter inches in circumference."
How greatly has the ball changed in a century and a quarter? It is about
5 percent smaller, about 9 percent lighter. Rather than an India rubber
center, it may have--and in professional ball does have--a cork center. The
stitching must be tight. . . but precisely how tight is not defined. And
that's it. In every other respect, the ball put in play in the amateur games
of 1861 would pass muster by modern rules.
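As a check on those figures, the arithmetic can be run against the midpoints of the ranges quoted in the two rule texts; the brief sketch below is an illustration only, using nothing but the rules' own numbers.

    # Checking "5 percent smaller, 9 percent lighter" against the
    # midpoints of the ranges quoted in the two rule texts above.
    old_weight = (5.5 + 5.75) / 2   # ounces, 1861 rule
    new_weight = (5.0 + 5.25) / 2   # ounces, present rule
    old_circ = (9.5 + 9.75) / 2     # inches, 1861 rule
    new_circ = (9.0 + 9.25) / 2     # inches, present rule

    print(f"lighter by {100 * (old_weight - new_weight) / old_weight:.1f}%")  # ~8.9
    print(f"smaller by {100 * (old_circ - new_circ) / old_circ:.1f}%")        # ~5.2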
That is not to say that the baseball of Civil War days and the Rawlings
Official model of today are virtually identical. Today's ball is far more
resilient and travels greater distances. This is due to several factors.
Most obviously the modern baseball undergoes far less wear and tear. For
many years it was customary for a game ball, even a mushy, discolored or
lopsided one, to be kept in play until it was irretrievably lost. And the key
word was "irretrievably." In the nineteenth century, if a ball was hit into
the stands, it was obtained by ushers for continued play. If hit out of sight,
it was searched for. . . for as long as five minutes. Then and only then might
the host team be required to furnish a second ball. The idea of going through
a few dozen balls per game--common today--would have seemed frivolously
wasteful to Great-Grandpa.
That policy moderated with the passing years, but it was not until 1920
that league officials stipulated the use only of clean and new baseballs, both in an effort to enhance offense and out of concern for player safety (since worn and discolored balls frequently were hard to control or even see). Those
directives lent a new measure of consistency to the game, so that the ball a
batter swung at in the bottom of the ninth was not different from the one used
in the first-pitch ceremonies.
The only rules change of significance affecting the ball came in 1910,
and it authorized the use of a cushioned cork center as an alternative to the
rubber-centered ball that had been in vogue until that time. The cork-centered
ball was found to be more lively, an especially desirable trait considering
the depressed (and, to the baseball-going public, depressing) batting
averages. The cork-centered ball was introduced in time for the 1910 World
Series between the Philadelphia Athletics and Chicago Cubs, and the two clubs
batted .272, which was about twenty points higher than the regular season
league average. For the 1911 regular season, both leagues used the
cork-centered ball: National League averages rose by only four points; but in
the American League the climb was a heady thirty points, and the league
leader, Detroit's Ty Cobb, hit a stunning .420. A total of twenty-one American
League regulars bettered .300 that season; only eight had done so the year
before. In the National League, Chicago's Frank Schulte hit 21 home runs.
Schulte had tied for the home run title in 1910 with 10.
All other changes in the makeup of the ball itself--tighter winding of
the yarn, introduction of different and supposedly better kinds of yarn,
raised or depressed stitches--have been products of technology, not of the
rulemakers. About 1920--as batting averages soared and Babe Ruth began to crash home runs in unheard-of profusion--there was controversy over the
substitution of Australian wool for the generic type in making baseball yarn.
Surely, fans speculated, this new wool must be the reason behind the livelier
ball. In fact, the explanation probably had more to do with improved methods
of winding the wool than with the wool itself.
The same rulebook that has licensed virtually no change in the parameters
of the baseball itself has brooked only minor adjustment with the bat, and
then, generally, only by way of greater specificity. Again, compare the rules
governing play in 1861 with the slightly more elaborate section from the
modern rulebook:
1861. "The bat must be round and must not exceed two and one half inches
in diameter in the thickest part. It must be made of wood, and may be of any
length to suit the striker."
Present. "The bat shall be a smooth, rounded stick not more than two and
three quarter inches in diameter at the thickest part and not more than 42
inches in length. The bat shall be one piece of solid wood, or formed from a
block of wood consisting of two or more pieces of wood bonded together with an
adhesive in such way that the grain direction in all pieces is essentially
parallel to the length of the bat. Any such laminated bat shall contain only
wood or adhesive."
The modern rule also contains an allowance for a small "cupping" of up to
one inch at the bat's end, and for use of a grip-improving substance on the
bat handle. But again, the stipulated differences of more than a century of
development are comparatively minimal. There is a length limit where once
there was none, but, at least in practice, the limit is functionally
irrelevant. In today's major leagues, it is virtually unheard-of for a bat to
exceed 36 inches in length, much less 42. The modern bat has gained
one-quarter of an inch in girth over its ancestor, and it need no longer
necessarily be of a single piece of wood, if laminated in such a way that the
effect of a single piece of wood is achieved.
More so than with the ball, changes in the bat have tended to develop
stylistically, and under the influence of the batters themselves. Bats, of
course, always have been highly personalized objects. With such a broad
allowance by the rules (no weight limit, no functional length limit), hitters have tended to individualize their preferences within widely recognized norms.
Place hitters of the years before 1920 coveted heavy "wagon tongue"
models with thick barrels capable of driving the ball over the infield, even
at the expense of bat speed (the flat-sided bats permitted in some years in
the 1880s never caught on). Cap Anson, legendary star of the Chicago White
Stockings, used just such a bat, reputedly weighing in at a manly three pounds
and then some. In the 1920s Babe Ruth menaced opposing pitchers with a 48-ounce model. But Ruth saw to it that the bat handle was tapered to
accommodate his smaller than normal hands. Heinie Groh, third baseman of the
Cincinnati Reds and New York Giants, was no slugger of Ruthian proportion. Yet
Groh's innovative "bottle" bat--with its narrow handle expanding precipitously
at the hitting area to a broad surface--not only served as a personal
trademark, but helped him to a .292 lifetime average and a starting role on
four pennant winners.
The modern bat bears no resemblance to any of those models. It is
sleeker, usually no more than thirty-five inches in length and no heavier than
thirty-three ounces. The reason is simple: batting instructors, who once
looked upon mass as the key factor behind a mighty poke, now focus on bat
speed instead. The faster a batter can swing a bat through the strike zone,
the greater the force applied to the ball. And the greater the force applied,
the farther the ball travels. Presto, light bats.
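The physics behind that trade-off is worth a moment: the kinetic energy a bat carries grows linearly with its mass but with the square of its speed, so a hitter gains more by swinging faster than by swinging heavier. The sketch below is a simplified illustration only; the bat weights come from the text, but the swing speeds are hypothetical round numbers.

    # Kinetic energy grows linearly with mass but with the SQUARE of
    # speed, which is why light, fast bats win out. Swing speeds here
    # are hypothetical; only the bat weights come from the text.
    def kinetic_energy(mass_oz, speed_mph):
        mass_kg = mass_oz * 0.0283495    # ounces to kilograms
        speed_ms = speed_mph * 0.44704   # mph to meters per second
        return 0.5 * mass_kg * speed_ms ** 2

    heavy_slow = kinetic_energy(48, 60)  # Ruth-style 48-ounce bat, slower swing
    light_fast = kinetic_energy(33, 75)  # modern 33-ounce bat, faster swing
    print(f"48 oz at 60 mph: {heavy_slow:.0f} joules")  # ~490
    print(f"33 oz at 75 mph: {light_fast:.0f} joules")  # ~526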
As for gloves. . . well, in the game's early days they did not exist.
Players were expected to catch the ball barehanded. For a time, they received
something of an aid in that effort by a rule recording an out if a ball was
caught on the first bounce. That made things a little easier. The use of
gloves was never formally barred, as, for instance, was the use of black
players in the old National Association rulebook; it simply was looked upon as
sort of sissified. There is no clear record of who first conceived the notion
of fielding with a glove. Al Spalding wrote that the first to don a glove was
an 1875 player for the National Association's St. Louis team named Charlie
Waitt. In a game that year Waitt donned a street-dress leather glove on his
fielding hand. Waitt, reportedly, was ridiculed league-wide. But as more
prominent players adopted Waitt's concept, the notion gradually came to be
accepted. It was not until the retirement in 1894 of Jeremiah Denny, however,
that the era of the barehanded fielder passed.
Two points ought quickly to be made about the use of early-day gloves.
First, their function was utterly different than it is today. The first
gloves, lacking webbing and lacing, merely provided protection for the hands,
which fielded the ball. Today's larger, better-padded, webbed, laced and
pocketed gloves might more appropriately be described as "fielding devices,"
because it is they, not the fielder's hands, that do much of the actual
fielding work.
Second, and as verification of the first point, players of the
nineteenth century often wore gloves on both hands. For the throwing hand,
they would simply snip the glove at the fingers for dexterity. Those
photographs that remain of players of the era--and especially the ones
portraying fielding sequences--confirm that unusual tendency.
It was not until 1895 that stipulations concerning use of gloves were
included in the rules: those limited gloves to ten ounces in weight and fourteen inches in circumference for all players except catchers and first
basemen, who were permitted to use any size glove. (Today's rulebook,
conversely, takes a page and a half to specify dimensions, materials, lacings
and webbings for gloves. Today there are thirteen different size limitations
on the standard fielder's glove alone, ranging from palm width to the length
of each separate finger.) The transition from the glove as protection to the
glove as a tightly defined fielding aid came gradually but inexorably.
The first advance was development of a "pocket," that spot in the palm of
the hand where the ball was most easily and most naturally caught. As with the
origination of the glove itself, there is no hard-and-fast date for the pocket's appearance: it simply sort of happened. And it did not happen immediately. To the contrary, for several years after the introduction of the glove, fielders adopted a sort of "reverse pocket": they would excise the leather from the palm area and leave that area bare, presumably for more
"touch." In all probability, the "pocket" was not invented by glove makers,
but by players themselves, taking advantage of the natural stretching the
glove's leather underwent with use. Today we call this "breaking a glove in."
Today, however, "pockets" are preformed by the manufacturer.
Credit commonly is given to a pitcher, spitballer Bill Doak of the St.
Louis Cardinals, for advancing glove technology from the primordial state. In
1920, Doak approached a glove manufacturer with a plan for a new personalized
glove. Many players liked personalized glove models, but Doak's was different.
It envisioned a pre-formed pocket, not one that would be fashioned through
constant wear. And it included a square of reinforced webbing between the
thumb and finger sections as an additional aid to fielding. Previously, if the fingers were not left to act independently, they simply had been tied together. So advanced was Doak's model, by the way, that it remained
popular for almost thirty years. And every subsequent advance in glove design,
whether it be the hinged heel, short or long-fingered design, or advanced
webbing, can be traced to a concept originated by Doak.
In the 1930s, rulemakers mandated the use only of leather in the making
of gloves--the first change in rules on the subject since the initial size and
weight limitations were set in 1895. And in 1939, acting in response to Hank
Greenberg's introduction of an oversized mitt with a netted webbing, they
outlawed the use of netting, limited webbing to four inches from thumb to palm
(the present rule is four and one-half inches), and restricted the size of
first basemen's gloves as well. Weight restrictions were dropped in 1950, and
size limitations further defined.
To that date, no limitation had ever been placed on the size of the
catcher's mitt; after all, the larger the catcher's mitt, the harder it was
for a catcher to dig the ball out and make a throw to base. But in 1960
Baltimore manager Paul Richards discovered that there was something at least
potentially worse than having catchers who could not evict the ball from an
oversized mitt. And that was having catchers who could not catch the ball at
all.
Richards' problem was that his most effective pitcher was Hoyt Wilhelm,
and Wilhelm's most effective pitch was a knuckleball that proved as difficult
to catch as it was to hit. Baltimore catchers soared to the top of league
passed-ball rankings. Richards' solution was to devise a catcher's mitt of
nearly fifty inches in circumference, a huge thing perhaps twice the standard
size. If Baltimore catchers could not throw out base stealers with the new
mitt, they could at least have a fighting chance at halting Wilhelm's pitch.
Shortly after the appearance of Richards' oversized mitt, the rule was amended to restrict catchers' mitts as well, to a limit of thirty-eight inches in circumference and fifteen and one-half inches from top to bottom.
Even after catchers' gloves were restricted in size, however, questions
remained about enforcement of the 1950 size limits. So in 1972 the rules
committee drafted the present thirteen-point measuring system. Fortunately,
there is no record of a game ever being halted while a manager challenged the
legality of a fielder's glove on all thirteen points.
Spring Training
The precise origin of spring training, that marvelously contrived ritual
that today amounts to a one-month paid vacation in the sun for athletes,
media, and club officials, is unknown. With few exceptions, early-day
ballplayers trained at home on their own. It is known that in 1870 the Chicago
White Stockings organized a trip to New Orleans, but that may have been mere barnstorming rather than preparation for the coming season. The
generally accepted beginning of spring training is 1886, when the White
Stockings and Philadelphia Phillies traveled, respectively, to Little Rock and
Charleston. The precise regimen of spring training has varied greatly from
decade to decade. Today, for instance, little actual "training" is done at
spring training, since players, many of whom earn millions of dollars, are
expected to report in shape, and the emphasis is on narrowing a roster of
forty players to the requisite twenty-four for opening day. Spring training
today amounts in large measure to an extended advertisement for the season to
come, with a bit of tryout camp thrown in for effect.
But that was not always the case. Early-day players commonly received
salaries of a few hundred or a few thousand dollars, supplementing that with
off-season jobs--like bartending--of questionable value to their athletic
careers. These players literally required a period of a month or so to work
back into shape. In the early 1900s, the New York Giants trained in the little
Texas town of Marlin, and their training was, in the strictest definition,
training. Each day began and ended with what amounted to a two-mile forced
march along the railroad tracks from the hotel to the park. The routine
consisted of batting and fielding practice, along with drills on the
fundamentals of play. If there was a scrimmage, it usually was an intrasquad
effort, or perhaps a game against a local team or minor league club. In 1906
the sixteen major league teams trained in ten different states as far north as
Illinois; the notion of grouping in Florida and Arizona to make practice games
between major league aggregations more convenient would not gain full currency
for the better part of another decade. In 1911, the Yankees set up their
spring camp in Bermuda. At most early camps, conditioning was overseen by the
players themselves, since as a rule teams employed only the manager and a
single coach--if that.
The Pitcher
How prized is the pitcher?
Consider that of the nine positions, candidates for eight are winnowed
principally by their skill with the bat. Shortstops and catchers may progress
a few levels through the professional ranks on the strength of superior range;
outfielders may emerge by dint of speed, or catchers thanks to a God-given
arm. But fundamentally, not even an Ozzie Smith or a Benito Santiago moves far past rookie league ball until he establishes at least minimal offensive ability.
The only exception is the pitcher.
And pitchers always have been the exception, even before the designated
hitter rule legislated most of them out of that terra incognita known as the
batter's box. In any analysis of Ty Cobb's value as a player, the first thing
that comes up is his lifetime .366 batting average. But no one would think of
discussing Sandy Koufax's value to the Dodgers in terms of his .097 batting
average.
In fact, the pitcher is the one and only player whose defensive
contribution is so vital that the ability to hit is considered irrelevant. Red
Ruffing, the fine righthander for the New York Yankees of the 1930s and 1940s,
compiled one of the best batting records of any pitcher in the past three
quarters of a century, including a .268 career average. But when he was voted
into the Hall of Fame in 1967, it was on the strength of a 273-225 record,
3.80 earned run average, and on his status as the leading moundsman for seven
pennant winners.
Pitching has been the staple of the successful big league franchise
virtually since there has been big league baseball. Connie Mack is variously
quoted as having called it anywhere from 70 to 90 percent of the game. The
precise figure is not important. What is important is that Mr. Mack's axiom
remains generally accepted today.
And yet, despite the constancy of the importance of quality pitching,
both pitching styles and the rules governing pitching have undergone more
major changes than any other aspect of on-field play--so much so that the best
pitchers of 1993 have virtually nothing in common with the best pitchers of a century ago. . . and bear little resemblance to their predecessors of as little as two or three decades back.
Much of the evolution took place during the game's formative years, and
came via efforts by the rulemakers to settle on the proper balance of batting
to pitching. In the early years of professional ball--the 1860s and
1870s--pitching bore more similarity to the style employed today in fast-pitch softball than to that of baseball. The ball was delivered underhand and without a
wrist snap--although pitchers fudged so much on the latter point that by 1872
the wrist movement was legalized--from a box set at a distance of 45 feet from
the plate. Legalization of the wrist snap spawned the development of various
"trick" pitches, notably the curveball, which is commonly credited to William
"Candy" Cummings, a much-traveled moundsman of that era who compiled a 124-72
record in the only six seasons he played in top-level professional baseball.
Whether Cummings or any of several other pitchers of his era first perfected
the art of making a ball curve, Candy generally got the credit--enough to have
been elected to the Hall of Fame in 1939 for that accomplishment.
Nineteenth-century pitchers worked under virtually ever-changing
conditions. For instance, the pitcher's "box" was moved back to 50 feet from
home plate after the season of 1880; then in 1893 it was eliminated altogether
in favor of the "rubber" placed at 60 feet, 6 inches. The underhand delivery
requirement gradually was modified to allow what in effect was a sidearm pitch
in 1883, and a full overhand delivery the following year. Rules governing the
ball-and-strike count--at one time nine balls were required to give the batter
a walk--changed frequently until they were stabilized at four and three,
respectively, in 1889. At various times pitchers were required to deliver a
high or low pitch, as requested by the batter; windups were banned, then
permitted again; the size of the "box" was altered almost routinely before
being consigned to extinction.
It would be difficult to generalize as to whether all of those changes
helped or hurt pitchers. Certainly batting averages tended to improve as the
distance between the mound and plate increased. Yet the underhand pitching
style--much easier on the arm--enabled most teams to play an entire schedule
with only one or two pitchers. And the best of them attained results that
would be unthinkable today.
By way of illustration, compare the statistics Providence's Hoss Radbourn
compiled in 1884 with the record of the last pitcher to win thirty or more games, Detroit's Denny McLain, in 1968, and with that of the pitcher with the best statistics in 1993, Atlanta's Tommy Glavine.
                     Radbourn   McLain   Glavine
Games                      75       41        36
Innings Pitched         678.2      336       239
Victories                  59       31        22
Complete Games             73       28         4
Earned Run Average       1.38     1.96      3.20
Strikeouts                441      280       120
Shutouts                   11        6         2
Radbourn's statistics seem even more impressive when it is noted that his
Providence team played only a 112-game schedule. Yet of course the comparisons
are fair only as illustration of how greatly the pitching environment--the
rules, conditions and strategies--changed between 1884 and 1968 or the
present.
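One back-of-the-envelope calculation, using only the table and the 112-game schedule noted above, gives a sense of that gulf: Radbourn personally absorbed roughly two-thirds of his team's pitching workload. The sketch below is approximate, since it ignores extra-inning and shortened games.

    # Radbourn's share of Providence's 1884 workload, from the table
    # above; "678.2" is baseball shorthand for 678 2/3 innings.
    radbourn_ip = 678 + 2 / 3
    team_innings = 112 * 9   # approximate: ignores extra innings, etc.
    share = 100 * radbourn_ip / team_innings
    print(f"Radbourn threw about {share:.0f}% of his team's innings")  # ~67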
At least as dynamic a force as the rulebook in the evolution of the
modern pitcher has been the development of pitching strategy, notably new
pitches. For while the broad regulations under which pitchers work today are
not vastly different from 1893, the arsenal of pitches that have come into
vogue--and occasionally passed from it--has ranged widely, and sometimes
wildly.
Cummings's introduction (if, indeed, it was Cummings) of the curveball
marked the first major deviation toward finesse from what up to that time had
fundamentally been a power pitcher's game. Phonnie Martin threw a drop, or
slowball, and Al Spalding and Tim Keefe were masters of the change of pace.
These innovations took hold, but bolder experimentation was limited to a
handful of hurlers. While pitchers of the latter part of the nineteenth
century occasionally dabbled in "outshoots" or "rises," the best continued to
build their reputations with speed. "Cyclone" Young in Cleveland and Amos
Rusie, New York's "Hoosier Thunderbolt," were the best and in all likelihood
the fastest of them. Young won 27 games for Cleveland in 1891, his first full
season, and then over a twenty-two-year career accumulated 511 victories, a
total whose magnificence is best illustrated by the fact that the all-time
runner-up, Washington's Walter Johnson, trails by roughly a hundred. Young's
2,799 strikeouts--a record when he retired--further testify to his velocity.
As for Rusie, he won 36 games in 1894 and led the league in strikeouts five
times between 1890 and 1895. He also led five times in walks, initiating the
popular linkage between hard throwers and control trouble.
By the mid-1890s, earned run averages had risen in reaction to the shift of the mound back to 60 feet, 6 inches. The legendary Baltimore Orioles of Wee
Willie Keeler had batted .343 as a team in 1894, and did not even lead the
league--Philadelphia did, at .349! In response, pitchers began to experiment
more readily with changes of speed, and with the ball itself. In Chicago,
Clark Griffith scraped the ball against his spikes and discovered that the
scuffs added to the break of his curve. Griffith became a twenty-game winner
for six consecutive seasons. Philadelphia's Al Orth, a "one-pitch wonder,"
mastered the art of changing speeds and won 203 games in fifteen years.
Just as significant as the changes in pitching approach was the increase in the number of pitchers needed. In 1876, Chicago's Albert Spalding
had been able to pitch in all but five of his team's sixty-six games. But by
the early 1880s the top teams were using two pitchers, and within another
decade--as the increased pitching distance, longer playing schedules, and more
tiresome overhand motion became accepted--staffs of fewer than four or five pitchers were uncommon. The Detroit team of the 1884 National League utilized perhaps the first pitching "staff" per se, with four hurlers--Frank Meinke, Stump Weidman, Charley Getzien, and Dupee Shaw--each working between 147 and 289
innings. Detroit's strategy did not count for much--the club finished
last--but within a decade Baltimore rode what amounted to a
four-to-six-pitcher rotation to the league championship. That staff's ace,
Sadie McMahon, pitched only about one-quarter of the total number of innings
worked by the sextet. In 1876, the eight National League teams basically
employed a total of thirteen pitchers; by 1886 that number was twenty-four; by
1896, for twelve teams, it was fifty-one.
By the turn of the century, the repopularization of two theretofore
lightly used pitches helped reestablish the pitcher as the game's dominant
player. Christy Mathewson, a fresh-faced college graduate from Bucknell,
brought with him to the New York Giants a pitch he called the "fadeaway,"
actually a reworked version of something known in the 1880s as an "outshoot."
Today it is called the screwball. The pitch, which acts like a reverse curve,
when thrown by a righthanded pitcher breaks toward a righthanded batter.
Mathewson might very well have become a great pitcher even without the
fadeaway, but with it he won 373 games, four times winning thirty or more, and
five times helping the Giants to pennants. So difficult was the pitch to throw
and control that no other major league pitcher of the era could master it.
The other dominant pitch of the first part of the twentieth century was
the spitball, popularized principally by two men, Jack Chesbro and Ed Walsh.
Chesbro came to the major leagues with Pittsburgh in 1899, and by 1901, when he incorporated the spitball into his routine--it would not be illegal to doctor a baseball with a foreign substance for two more decades--he became a twenty-game winner. He won 28 games with the pennant-winning Pirates in 1902,
so greatly increasing his value that he became one of a cadre of early day
"free agents" who were recruited to the fledgling American League during a
three-year interleague "war." With the New York team of the young league in
1904, Chesbro's spitball took him to a twentieth-century record 41 victories,
although it also set up one of the most ironic finishes to any pennant race.
Because of its wild break, the spitball was considered one of the least
predictable of pitches. Chesbro had walked only 88 batters that season, fewer
than two every nine innings. His control of the devious delivery was
impeccable.
On the final weekend of that season, Boston and New York--virtually tied
for first--engaged in a five-game series, with the winner of three games to be
the champion. Chesbro's forty-first victory came in the series opener, but
Boston claimed the ensuing two. In the climactic fourth game--the opening
contest of a last-day doubleheader--Chesbro, seeking his forty-second win,
held a 2-2 tie entering the ninth. An infield hit, a sacrifice, and a groundout moved Boston's pennant-winning run to third base with two out. The great
pitcher had been masterful to that point, walking just one and striking out
five. But in that most pivotal of situations, a Chesbro spitball bounced in
the dirt and skipped toward the backstop, a wild pitch that cost New York a
pennant.
Walsh, like Chesbro, perfected control of the elusive spitter, and
parlayed that to remarkable feats. A moundsman of modest ability prior to
employing the pitch in 1906, he won 17 games that first season, 24 the next,
and an astonishing 40 the year after that. Yet irony played a central role in
Walsh's career as well, for perhaps his best performance in that 40-win season
of 1908 came in defeat. At the climax of a peripatetic three-team race
involving Cleveland, Detroit and Chicago, Walsh's White Sox came to Cleveland
needing a victory to remain in contention. Walsh pitched a four-hitter and
struck out 15 batters. . . but Cleveland's Addie Joss achieved a rare perfect
game and won 1-0. The only run scored--no, not on a wild pitch--on a passed
ball.
Other so-called "freak" pitches came into vogue during that era as well.
Pitchers altered balls not only with spit or spikes, but with emery paper,
paraffin, mud, slippery elm, and who knows what else. But the ranks of
pitchers who relied on tampering for their success still constituted a
minority. Most, like Washington's Walter Johnson, continued to rely on the
basic fastball.
Of course, most pitchers did not have a fastball the caliber of Walter
Johnson's to rely on.
And on that basis, pitchers and batters lived in happy coexistence for
about a decade, pausing only to occasionally admire the ascendancy of a new
star like Philadelphia's Grover Cleveland Alexander. Master both of the
fastball and curve, Alexander emerged in 1911 as a rookie 28-game winner, and
by 1915 he was leading the Phillies to the National League pennant on the
strength of a 31-victory season. With Philadelphia and later with the Chicago
Cubs, he led the league in victories six times between 1911 and 1920, becoming
generally acknowledged as the pre-eminent pitcher of the latter half of what
is commonly called baseball's "dead ball" era.
Alexander, along with Walter Johnson, continued to pitch in top form beyond
1920, but that was not true of major league pitchers as a whole. A series of
factors, some mechanical, some societal, reshaped the game again following
World War One, and in most instances it was pitchers who suffered in the
reshaping.
The catalyst for much of that reshaping, ironically, was a former
pitcher. And a very good one. As a twenty-year-old rookie in 1915, Babe Ruth
won 18 games to help the Boston Red Sox to the world's championship. By the
following season, Ruth, a 23-game winner who added a 24th in the World Series,
was coming to be recognized as the Sox's ace. He led the American League in
earned run average (1.75), starts (41), and shutouts (9), and the following
season paced it in complete games (35) as well.
But by 1918 Ruth the pitcher was recognized as less of a hero than Ruth
the slugger. He pitched in 20 games that season--and won 13 of them--but
started nearly three times as often in the outfield, a response both to his
hitting and to the fans' clamoring to see him hit. Although by no means an
everyday player, the Babe tied for the league lead in home runs that season
(his total was a modest 11). But more significantly, he drew crowds, both to
Fenway Park and on the road. So in 1919 Boston manager Ed Barrow converted him
almost exclusively to the outfield. Ruth's response was to break the all-time
record for home runs--with 29--and to lead the league in runs scored, runs
batted in and slugging average as well. Sold to New York in 1920, Ruth
almost immediately became the most celebrated player in the game's history. He
hit a then-unthinkable 54 home runs, broke existing records for runs scored,
runs batted in, bases on balls and slugging average. To the public, Ruth was
"the Sultan of Swat," "the Bazoo of Bang," "the Infant Swatigy," "the Colossus
of Clout." Batting averages and home run production rose league-wide as other
players strove to imitate him. American League batters who hit .248 with 136
home runs in 1917 had raised those figures to .292 and 477 by 1921. In the
National League, the increases for the same period were from .249 and 202 to
.289 and 460. Part of that 150 to 200 percent increase in the home run could,
perhaps, be attributed to the banning--enforced gradually as of 1920--of the
spitball and other so-called "doctored" pitches, part to improved
craftsmanship on the part of the baseball makers, and part to the desire of
league officials to replace soiled, scuffed balls with cleaner, whiter ones. But in large measure, the change was simply a strategic one: batters swung
harder and tried to drive the ball farther than ever before. Once a
poke-and-run contest, baseball had become--thanks in good measure to Ruth--a
slugger's game. And the fans loved it: American League attendance soared from
1.7 million in 1918 to more than 5 million in 1920.
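The home run figures just cited can be reduced to percentages, which is where the rough "150 to 200 percent" estimate comes from: run league by league, the increases actually bracket that range, with the two leagues combined rising about 177 percent. The sketch below simply reruns the text's own numbers.

    # Home run increases, 1917 to 1921, from the totals given above.
    al_1917, al_1921 = 136, 477
    nl_1917, nl_1921 = 202, 460
    print(f"AL: +{100 * (al_1921 - al_1917) / al_1917:.0f}%")  # ~251
    print(f"NL: +{100 * (nl_1921 - nl_1917) / nl_1917:.0f}%")  # ~128
    combined = 100 * (al_1921 + nl_1921 - al_1917 - nl_1917) / (al_1917 + nl_1917)
    print(f"Combined: +{combined:.0f}%")                       # ~177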
Unfortunately for pitchers, they proved less than capable of adapting to
the new and more thrilling style. The rule change barring use of the spitball,
emery ball, shine ball and other similar pitches removed at least a potential
weapon from all arsenals, save those of seventeen men who had used the spitter
in the major leagues prior to its being banned. (They were permitted to
continue throwing the pitch, which did not actually die out until the last of
those seventeen, Burleigh Grimes, retired in 1934.) New pitches were not
effectively developed to fill the void. A few toyed with a knuckleball, and in
the late 1920s a nondescript pitcher for the St. Louis Browns named George
Blaeholder devised what eventually came to be known as the slider. But for the
most part, pitchers relied on the fastball, curve and a very occasional
changeup. With pitchers as with batters, raw power replaced guile and cunning
as the chief weapon.
The result was predictable: for the better part of two decades, batting
averages, home run totals, and earned run averages soared. Look at the table
of league earned run averages for the American and National Leagues between
1920 and 1930:
YEAR     NL ERA   AL ERA
1920       3.13     3.79
1921       3.78     4.28
1922       4.10     4.03
1923       3.99     3.99
1924       3.87     4.23
1925       4.27     4.39
1926       3.84     4.02
1927       3.91     4.12
1928       3.98     4.04
1929       4.71     4.24
1930       4.97     4.65
In the National League, earned run averages increased by a full 59
percent in that one decade alone. Home run totals more than tripled.
Strikeouts, the pitcher's logical counterweapon against the big flailer, also
increased, but by a far less imposing 6 percent. The differences are less
dramatic in the American League, but still large. And although pitchers
reasserted their competitiveness, if not their mastery, during the 1930s, the
average ERA by 1940 had fallen only to the 4.00 level. By 1940, however,
bat-happy baseball society had been conditioned to view a 4.00 ERA as good.
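The "59 percent" figure can be recovered directly from the table; the short sketch below, again using only the numbers printed above, also shows why the American League's rise, while less dramatic, was still substantial.

    # League ERA increases, 1920 to 1930, taken from the table above.
    nl_1920, nl_1930 = 3.13, 4.97
    al_1920, al_1930 = 3.79, 4.65
    print(f"NL: +{100 * (nl_1930 - nl_1920) / nl_1920:.1f}%")  # ~58.8
    print(f"AL: +{100 * (al_1930 - al_1920) / al_1920:.1f}%")  # ~22.7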
The era between 1920 and 1960 produced some exceptional pitchers, but few
changes in pitching style. In the mid-1930s, a rookie righthander in Detroit
named Eldon Auker bothered batters with an underhand delivery reminiscent of the style of the 1870s. Auker's "submarine" pitch was
necessitated by an arm injury that made it difficult for him to throw in the
normal overhand fashion. He won 130 games in a 10-year career, pitched on two
pennant winners and one world champion, and his style would be resurrected in
the modern era by relief pitchers like Ted Abernathy, Kent Tekulve, Dan
Quisenberry and Gene Garber. In the National League, the New York Giants' Carl
Hubbell also reached back into time for a cudgel. Hubbell resurrected
Mathewson's fadeaway, renamed it the screwball, and mystified National League
opponents sufficiently to record five straight twenty-victory seasons between
1933 and 1937, leading the Giants to three pennants.
A more conventional, and more overpowering, form belonged to Lefty Grove,
who pitched 17 years for the Philadelphia Athletics and Boston Red Sox.
Grove's trademarks were a fastball many have called the swiftest ever and a
surly disposition. Four times a league leader in victories and nine times the
ERA king, Grove was the only pitcher to win 300 games in the hot-hitting era
of the 1920s and 1930s, an achievement often cited by those who point to him
as the best ever. His career ERA of 3.06 is more than one full point lower
than the league average for the years (1925-1941) in which he worked.
Pitching rules, which had remained virtually untouched since 1920,
underwent several adjustments between 1950 and 1969, and all of them appeared
to bear on the relative effectiveness of pitchers. The strike zone was
tightened in 1950--the new boundaries being the armpit and bottom of the knee
(they had been the top of the shoulder and bottom of the knee). When home runs
climbed to record levels by 1961, the old strike zone was restored, and earned
run averages decreased sharply, to a post-1920 low of 2.98 in the American
League in 1968. Rulemakers responded to that by lowering the mound several
inches and reducing the strike zone again. Averages and home runs climbed
again, as they did in the American League in 1973 when the designated hitter
rule was introduced.
But it would be overly simple and wrong to point merely to the rule book
as the fulcrum for all variations in pitching performance in the past three
and a half decades. Probably the most significant factor was the development
of relief pitching. Beyond that, pitchers perfected pitches they had only
toyed with before. The knuckleball was not new--it had been thrown since the
early part of the century, and in the 1940s the Washington Senators employed a
foursome of knuckling starters. But no one used it as effectively as Hoyt
Wilhelm and then Phil and Joe Niekro. Wilhelm pitched in an unprecedented
1,070 games over twenty-one years and established what at the time was the
career record for saves, with 227. Phil Niekro won over 300 games and, in tandem with his brother Joe, broke the record in 1987 for most victories by members of one family.
A sort of variation on the knuckleball, also developed years ago and
resurrected recently, was the forkball or "split fingered fastball." Credit
for its development generally is given to 1940s New York Yankee pitcher Ernie
Bonham, but the first famous exponent was Elroy Face, a relief pitcher for the
Pittsburgh Pirates of the 1950s and 60s. In 1959, Face compiled an 18-1 record
by the simple expedient of jamming the ball between his fingers before
releasing it: this gave it an unnatural dip as it crossed home plate. In the
late 1970s, another reliever, Bruce Sutter of the Chicago Cubs, re-invented
the same pitch, which he termed a "split fingered fastball." Sutter saved 37
games for the 1979 Cubs, an accomplishment of no small measure when it is
recognized that his team only won 80 times that season. In Sutter's wake,
entire pitching staffs began learning the split fingered pitch. Roger Craig
became a one-man traveling demonstration of the pitch's success. As Detroit
pitching coach, he taught it to the Tigers in 1984 and they responded by
winning the world championship. Then Craig taught it to journeyman Houston
righthander Mike Scott, and he blossomed into an 18-game winner capable of
recording over 300 strikeouts while leading his team to a divisional flag in
1986. Craig himself became manager at San Francisco, where his staff of
split-fingerers helped the Giants win the NL West title in 1987 and the
National League pennant in 1989.
The most widely used new pitch, however, was the one invented by
Blaeholder fifty years before--the slider. Acting much like a fastball but
with a sharp break, the slider supplanted the more leisurely curve ball in the
repertoires of dozens of major leaguers. Perhaps the pitch's most famous
exponent was Steve Carlton, who used it to become the second-winningest
lefthander of all time, behind only Warren Spahn. So disarming was Carlton's
slider that he staged a dramatic contest in the early 1980s with fastballer
Nolan Ryan to see which man would become the first pitcher in history to
record 4,000 strikeouts. Ryan eventually won that race, and went on to surpass
5,000. But Carlton himself finished with 4,136 strikeouts, more than any
pitcher in history except Ryan.
If the evolution
of pitching suggests anything, it is that no one style, no single delivery,
and no simple rule change is perpetually dominant. In the 1960s, no two
pitchers could have been more stylistically different than Juan Marichal, the
high-kicker of the San Francisco Giants, and Sandy Koufax, the stylish
lefthander of the Los Angeles Dodgers. Marichal dabbled in every move, every
trick ever devised. He threw the fastball, the curveball, the slider, the
changeup, the screwball, and he delivered each of them overhand,
three-quarter, or sidearm, almost at his whim. Koufax relied on a fastball, a
curve, and exemplary control. Yet in 1963, for instance, each man won 25
games, and the name of each appeared among the league leaders in winning
percentage, earned run average, strikeouts, complete games, and innings
pitched. Between 1963 and 1966, Marichal averaged better than 23 victories,
Koufax 24.
Perhaps the most frequently debated question is whether today's pitcher
throws harder than his predecessor. It is, of course, almost impossible to
answer. To the degree that improved training and conditioning programs
encourage greater speed, it is logical to believe that the fastest hurlers of
recent years--Nolan Ryan, Rob Dibble, Randy Johnson--must be swifter than
Grove, Bob Feller, Walter Johnson, or Cy Young. Ryan's fastball, like those of
Dibble and Johnson, was clocked on radar guns at about 100 miles per hour.
Old-timers, of course, did not have the advantage, or disadvantage, of
pitching to radar guns, so assessments of their speed must necessarily be more
crude. Feller's fastball, for instance, once was clocked against a speeding
motorcycle. The finding? About 100 miles per hour. The testimony of veteran
observers varies. Many picked Walter Johnson, but Johnson himself picked Smoky
Joe Wood. Billy Herman selected Van Lingle Mungo. Contemporaries like Wes
Ferrell said Lefty Grove was faster than Feller, but numerous sportswriters
sided with Feller as the fastest ever. Connie Mack, who played and managed
across seven decades, opted for Amos Rusie, the old-time "Hoosier
Thunderbolt." But Mack's opinion could have been colored by nostalgia: he
batted against Rusie. Nolan Ryan was generally considered the fastest of the
1980s pitchers, but for a time it was not even presumed that he was the
fastest Houston Astro. Until his crippling stroke, J. Rodney Richard was
conceded that title by at least some who saw both.
The Playing Field
Charles "Hoss" Radbourn was a pitcher of considerable note in the
National League of the 1880s. . . and a hitter of no special renown. In 1882
he won 31 games for Providence, and hit only one home run. But this story
isn't about any of his 31 victories. It isn't really even about his home run.
It's about playing conditions.
On August 17 of that year Radbourn was playing right field--as he
occasionally did when not hurling--for Providence, which was at home against
Detroit. Now the Providence field was not unlike most baseball fields of the
day: it was, in the literal sense, a field. There was little groundskeeping
and often no outfield barriers; even where barriers existed, well-heeled fans
who wished to watch simply pulled their carriages up onto the far reaches of
the playing surface and viewed the game from there.
On this particular date, the game developed into what cliche-prone
sportswriters of a later era might refer to as a "tight pitchers' battle"
between John Ward of Providence and Stump Weidman of Detroit. Through
seventeen innings each man held the opponent scoreless. When Radbourn advanced
to the plate with one man out in the eighteenth, the sky was growing dark.
In his then-brief big league career, the Hoss had never hit a home run.
He was not alone in that distinction, for four-base hits were a rare sight.
(That season's league leader, George Wood of Detroit, hit only 7; the league
record was 9.) But Radbourn lashed at Weidman's pitch and sent it scurrying
past Wood in left field. As some witnesses reported it, the ball rolled close
to the leg of an especially spirited black horse hitched to a wagon.
Wood, of course, raced to the spot and reached for the ball. He was
stopped by, of all things, the horse's hind hoof, which swished through the
air and barely missed conking him. Wood reached again; again the horse kicked.
Radbourn, meanwhile, raced past second.
Desperately, Wood grabbed for a handful of grass, hopeful of appeasing
the critter. That did not work. Finally Ed Hanlon obtained a stick, reached in
and swatted the ball clear of danger. It was too late; as Hanlon prepared to
throw, Radbourn was being carried from the field in triumph.
The mere concept of what ought to constitute a major league ballpark has
evolved through at least five distinct transformations, each markedly
different from its predecessor, and each spurred by changes both in the game's
strategy and in the nation's sociology. The conditions attending Charley
Radbourn's home run in Providence in 1882 may seem bizarre to us. But no more
bizarre, perhaps, than artificial turf will seem three generations hence.
The first parks, used in the first few decades of professional ball, were
simple open spaces with ruts worn by the players marking the baselines. At
games that attracted large crowds, the playing area often was defined by the
fans themselves, who formed a cordon around the circumference. In 1871, the
Forest City team representing Rockford, Ill., in the National Association
played on a field called by ballpark expert Phil Lowry "the strangest in major
league history." Trees virtually lined the baselines, so players chasing
pop-ups took their chances with physical peril if they watched the ball rather
than their step. Third base was on a hill, home plate in a depression, and the
outfield framed by a gutter draining an adjacent horse racing track.
There were few of the niceties we presently associate with a ballpark for
several reasons, not the least of which was that, since the game itself was
new, club owners often lacked the capital necessary to develop the grounds
beyond a rudimentary level. A grandstand might hold up to about 1,500
customers if it was expansive, but usually it held fewer. It was desirable,
but by no means certain, that the ground be level and free of gravel. Horse
droppings might literally pockmark areas of play. Except in Rockford, trees
were not much of a hazard, but even at the best of diamonds infields were
poorly sculpted and ill cared for. There was rarely such a thing as a scoreboard
or dugout, and where outfield fences existed--they first came into being at
Brooklyn's Union Grounds--they might be as close as 180 feet from home or as
distant as 500 feet at all points. Some of the fields--Brooklyn's for
instance--doubled in the wintertime as skating rinks, when they were
deliberately flooded. Imagine that happening at Yankee Stadium today!
Gradually over a span of years, ballfields assumed a more standardized
and slightly more familiar appearance. By the mid-1880s, most playing fields
had attained at least a semienclosed status. But distances to the fences
commonly were dictated as much by topography as any other consideration.
Chicago's Lake Front Park was, when built in 1883, considered the archetypical
modern facility, seating almost 10,000. Yet its cramped site near the lake
permitted only a 180-foot carry to left field, only 300 feet to dead center.
Such a field would be considered inadequate for fifteen-year-olds today. But
at Boston's spacious Huntington Avenue Grounds of a few years later, the
barrier to left was a comfortable 440 feet from home plate; it was 635 feet to
the fence in center. For part of the 1896 season, St. Louis' Robison Field did
not even have a fence entirely circling the grounds. At one point in right
field that year it was possible to hit a ball (in play) through a gap in the
barrier, and a ball that got through could roll unimpeded for more than 600 feet. . .
to a lake!
If there was a single, overriding concern about ballparks in the game's
first few decades, it was the danger of fire. Because wood was the common
building material, facilities were susceptible to that danger, and fire
intruded on games more than once, sometimes with dire results.
Baltimore's Union Park was damaged by fire in 1894, the same season a blaze
destroyed Boston's South End Grounds in the third inning of a game between the
Orioles and Beaneaters. A game was halted by fire at Chicago's West Side Park;
several years earlier a contest actually had continued at the nearby 23rd
Street Grounds while fire consumed the grandstand. Brooklyn's Washington Park
fell to flames in 1889; New York's Polo Grounds was virtually destroyed in
1911.
With all of its inherent and obvious disadvantages, the wooden ball park
may seem to have been anachronistic as early as 1910 or so; furthermore, its
role in the development of the game may seem to have been quite fleeting. Was
it really anachronistic? Yes. Was its role fleeting? No. The era of wood, from
the opening of Brooklyn's Union Grounds in 1862 until the closing of the last
wooden grandstand, at Philadelphia's Baker Bowl in 1938, encompasses three
quarters of a century, or better than half the lifespan of the professional
game to date.
The demise of the wooden park was occasioned by a number of factors, fire
hazard being not the least of them. Some wooden parks were deemed to be
particularly dangerous. In 1903, a wooden rail gave way at Baker Bowl in
Philadelphia, and hundreds of fans fell, twelve to their deaths. In 1907 and
again in 1908, the building inspector for the city of Cincinnati submitted a
detailed bill of particulars on the hazards at the Palace of the Fans. Cracked
girders, decayed supports, unsafe flooring, and a defective bleacher platform
were only some of the problems. Construction problems were documented in St.
Louis and other cities as well. But the gradually widening acceptance of
baseball as a cultural event also played a part in the transition to more
permanent structures. The average attendance climbed from 100,000 per
franchise in 1890 to 365,000 in 1905. Larger, stronger and more durable
stadiums were needed. Because of the game's growing popularity, club owners
were able to provide such facilities. Motivation also came from the fact that
as new parks were constructed, the clubs could increase the number of
costlier box seats, thus increasing potential revenues.
Concrete and steel became the materials of choice. In Philadelphia in
1909, club owner Benjamin Shibe conceived and executed plans for a baseball
plant at the site of a former brickyard at the corner of Twenty-first and
Lehigh. The facility would be easily accessible from the city's center by
trolley line and would supplant the old, wooden Columbia Park, which had the
added disadvantage of being located near several breweries, thus subjecting
patrons to the constant odor of hops and yeast.
But Shibe Park would not only smell better, it would be the grandest
facility of its type ever conceived. A French Renaissance-style dome at the
home plate entrance gave the stadium a distinctive, almost churchlike,
appearance. The concrete grandstand and bleachers followed the first and third
base foul lines, with seating for 20,000. A huge scoreboard was installed in
left field. The facility's price tag was placed at a breathtaking half million
dollars.
The opening of Shibe Park set a standard that was soon and widely
matched. In Pittsburgh, Barney Dreyfuss already had begun construction of a
replacement for the old Exposition Park, the riverfront facility that had been
in use since 1890. The park Dreyfuss named Forbes Field opened June 30 at the
site near Schenley Park, and included elevators, lighting in the grandstand,
telephones, and even maids in the ladies rooms. He also conceived of providing
access to the upper levels of the triple-decked grandstand by means of ramps
rather than stairs, a practice that has continued to this day. The larger
capacity of Forbes paid almost immediate dividends when the Pirates celebrated
the park's inaugural season by winning the world's championship.
If there is one hallmark of the concrete and steel stadiums raised in a
dozen different cities between the years 1909 and 1923, it is their
individuality. When Charles Comiskey developed plans for his new concrete and
steel structure at Thirty-fifth and Shields in Chicago in 1910, he asked his
own star, pitcher Ed Walsh, to take a hand in the work. It may not be
surprising that Comiskey Park, both at its opening and for decades afterward,
was considered one of the most taxing layouts for hitters, with 363-foot foul
lines, 382-foot power alleys, and a center field distance of 420 feet that,
year by year, enlarged to 455 feet. Particularly in the dead ball era, the
center field fence may as well not have existed at all. In Brooklyn's
22,000-seat Ebbets Field, which opened in 1913, the original carry to the barrier in
left was 419 feet. Yet a street limited the distance to the fence in right
field to a mere 301. (Construction of bleachers in the 1930s brought the left
field wall within a more manageable distance as well.)
Of course, the most unusual design of all the old parks was New York's
bathtub-shaped Polo Grounds, which replaced the wooden facility of the same
name after it was damaged by fire in 1911. The "new" Polo Grounds featured
foul poles only about 260 feet distant from the plate, with a center field
that arced to distances of nearly 500 feet.
With a few exceptions, the classic-era parks served their host teams well
for generations. But gradually in the 1940s and 1950s, and increasingly so in
the 1960s, interior wear and exterior conditions rendered many of those parks
unsatisfactory, at least in the eyes of their tenants. Those conditions were
varied, but they can be summarized as follows:
Access: The classic-era parks had been dependent on trolley, subway or
bus lines to deliver fans to their gates. But by the 1950s, America was a
motorized nation, and club owners came to feel the need for proximity to
modern freeways, as well as expansive parking lots. Brooklyn club owner Walter
O'Malley moved his team out of Ebbets Field and to Los Angeles when the
borough failed to deliver on such a facility. The Giants, beset at the Polo
Grounds by many of the same problems, fled the same year to San Francisco.
Size: When most of the "classic era" parks were constructed, crowds of
30,000 were considered exceptional. By the mid-1960s, however, operational
costs forced some clubs to average that many fans per home date just to show a
profit. Neither Forbes Field in Pittsburgh, Shibe Park in Philadelphia nor
Crosley Field in Cincinnati was capable of seating much more than 35,000; when
new and larger multi-purpose stadiums were built in those cities, the clubs
hastened to move into them.
Cost: Without exception, classic-era parks had been constructed using
private capital. By the 1960s, the cost of developing the kind of 50,000-seat
stadium required by a major league team was virtually prohibitive. But
municipalities, which had come over the years to view teams as community
assets, proved willing in many cases to support the construction. This
happened as early as the 1930s in Cleveland, and again in 1954 when the city
of Baltimore captured the former Browns from St. Louis. Since Dodger Stadium
opened in Los Angeles in 1962, there have been twenty-four new stadiums opened
for major league use, and the construction or renovation of all but one--the
exception is Joe Robbie Stadium in Miami--was financed by government.
Oftentimes, that public involvement has taken place as one part of a larger
urban-development effort, with the new park situated on once-blighted or
undeveloped land near the core city and forming the centerpiece of a massive
redevelopment project. This has been the case in cities like St. Louis,
Seattle, Minneapolis-St. Paul, Baltimore, and Pittsburgh.
But concurrent with that last trend, a new and significant factor has
been introduced. Whereas in the past ballparks were forced by the exigency of
private construction to conform to their surroundings, thus imbuing each park
inevitably with an individual flavor, public involvement reversed the
equation. Since the opening of Shea Stadium in New York and the Astrodome in
Houston, surroundings were altered to conform to the concept of an "ideal"
park, rather than the opposite. Freed from the constrictions of neighborhood
geography--and in an effort to maximize utility--designers gave their parks a
symmetry bordering on sameness. The result: Many have said it is almost
impossible to distinguish Riverfront Stadium in Cincinnati from Three Rivers
Stadium in Pittsburgh or Veterans Stadium in Philadelphia.
In truth, neither stadium designers nor club owners fell headlong into
the new age of modern "superpark" design, with whatever advantages or
shortcomings the era may contain. In fact, the postclassic era dawned with a
two-decade transitional period during which the factors noted above were
gradually assimilated into the classic motif.
Cleveland's Municipal Stadium provided the introduction to this
transitional period. Constructed in 1932 by the city of Cleveland, it was vast
(potentially holding more than 80,000), virtually symmetrical, yet situated
close to the central city on the lakefront. Evidence that the concept of
coexistence between a private ballclub and public stadium had not yet taken
firm hold is the fact that for about fifteen years after Municipal Stadium was
built, the Indians occupied it only in fits and starts, generally playing
their weekend games there, but maintaining the staid old League Park as their
weekday habitat. Not until 1947 did the Indians become full-time tenants of
the big ballpark.
In 1953, and again in 1954 and 1955, public facilities were developed for
the first time with the specific aim of attracting major league teams. It
worked in all three cases: to lure the Braves from Boston to Milwaukee, the
Browns from St. Louis to Baltimore, and the Athletics from Philadelphia to
Kansas City. The moves were the first franchise shifts in half a century, yet
sensible in that all three teams left cities which had proved incapable of
supporting two clubs. The stadiums in Milwaukee and Baltimore were constructed
basically from scratch; in Kansas City, Municipal Stadium had served for many
years as a minor league facility, but extensive renovation was undertaken. In
none of the three cases did the stadiums abandon the city for the open
country, but neither were they reliant on mass transit.
The era of the modern public superstadium probably dates, ironically, from
the opening of the last private stadium, Dodger Stadium in Los Angeles in
1962. Yet the species' zenith was achieved in 1965, when the Harris County
Domed Stadium, the Astrodome, opened in Houston. A multimillion-dollar project
in an era when that was a breathtaking sum, the Astrodome broke from so many
traditional rules and patterns of stadium design that it literally changed the
way the game was played.
The first and most obvious change, of course, was the roof that covered
the facility. Baseball had come indoors. No more would rain, wind or other
weather be a factor in a game's outcome. Beyond that, since grass would not
grow under the dome's roof, an artificial turf had to be installed. This
"Astroturf" as it came to be called, was faster and more durable than grass
and also was harder on the players' legs, so it required adaptations in team
strategy. Swifter, more agile fielders replaced slow-footed but hard-hitting
predecessors. Speed, whether for base stealing or cutting off base hits,
supplanted brawn in the game played inside on artificial turf.
Within a span of little more than a decade, artificial turf became the
most copied aspect of any single new ballpark built in America since the
owners of the Union Grounds in Brooklyn fenced in their lot. Not only did it
not wear out, not only was it easier to maintain during rain, but it stood up
better under the strain of multipurpose use for such nonbaseball occasions as
football games and musical concerts. Municipalities installed the stuff
virtually everywhere a stadium was built for use by more than one team: in
Philadelphia, Cincinnati and Pittsburgh in the span of one year alone. The
city of St. Louis originally built a new Busch Stadium in 1966 with a grass
surface, but replaced it with turf after a few years. So faddish had
artificial turf become that in 1970, when Kansas City officials developed
plans for separate and individually designed football and baseball stadiums,
they still installed artificial turf on the baseball field. In fact, of all
the municipally funded ballparks opened since the Astrodome in 1965 and still
in use, fewer than half use a natural-grass surface today.
For purists there is some hope: when the new parks open in Cleveland,
Texas, Colorado, and Atlanta, it will make seven consecutive parks that have
debuted with natural surfaces, including Camden Yards, new Comiskey, and Joe
Robbie Stadium (Atlanta's park will be constructed for the 1996 Olympics, and
then converted). And following the 1993 season the Kansas City Royals
announced plans to rip up their turf. The countervailing trend toward indoor
stadiums, by the way, is only slightly less dramatic. There are presently five
such, four having opened in little more than a decade (in Toronto, Seattle,
Montreal, and Minneapolis-St. Paul).
The trend toward multiuse stadiums, which at one stage not long ago
appeared inexorable, may also be abating. Of the facilities opened since
Dodger Stadium in 1962, all but two (Arlington Stadium and Royals Stadium)
have seen some multisports use. But three others (Shea, Busch, and
Oakland-Alameda) have reverted to baseball-only status and all the newest
parks--in Baltimore, Chicago, Atlanta, Texas, and Colorado--are intended as
baseball-only stadiums. Although final plans for some of the parks are not yet
available, they appear to be following a trend toward asymmetrical designs
that are suggestive of turn-of-the-century parks. With the exception of the
park in Texas, they will be built in "downtown" areas.
These trends will reverse some design features common to the
superstadium boom of the late 1960s and early 1970s. Parks built during that
era tended to share most if not all of the following characteristics: they
altered the landscape to conform to the "ideal" of a park rather than vice
versa; they were built on large open areas that included acres of parking;
they were symmetrical and predictable in design; they were proximate to
interstate highways; they were designed to be multipurpose; they eliminated
pillars but in doing so sacrificed proximity of upper deck seats to the
playing field; and they used artificial surfaces.
The future of ballpark design matters because it is the most
consequential nontalent factor affecting play. The parameters of that factor
may change: from trees or no trees in foul territory, to real or artificial
grass. But it is and always has been up to the individual clubs to adjust
successfully to windless King County Stadium or windy Candlestick Park, to
bandbox Wrigley Field or the capacious Astrodome. What can be said, and what
could always be said, is that in baseball, more so than in any other sport,
the term "home field advantage" is meant to be taken literally.
Strategy Before 1920
There is no single "correct" way to win a pennant. If a club can hit the
cover off the ball, it might have a chance. If it can field with the best,
that might be enough. And if its pitchers are dominant, that, too, might do
it.
Then again, maybe not.
If the history of major league baseball demonstrates anything, it is that
the search for a single winning formula is as elusive as the search for a
rainbow's end.
Since the National League of Professional Baseball Clubs first organized
for play in 1876, there have been 231 recognized "major league" seasons
played, most by the two currently operating leagues, but also including a
handful of "third majors." The table below lists the number of times the team
leaders in five major performance categories--which may be read as indicative
of a particular basic strategic bent--also won the league or divisional
pennant. The five categories (and the strategies they may represent) are
batting average (batting), slugging average (slugging), stolen bases (speed),
fielding average (fielding), and earned run average (pitching). (The figures
for stolen bases are measured against only 208 seasons, rather than 231, since
no reasonably accurate records of stolen bases were kept until the mid-1880s.)
Category               Winning league leaders   Percentage
Batting Average                  83                37.8
Slugging Average                 94                42.9
Stolen Bases                     44                21.6
Fielding Average                 70                31.9
Earned Run Average               96                43.8
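The arithmetic behind such a table is simple division: the number of
seasons in which a category's leader also won its league or division, divided
by the number of seasons measured. A minimal sketch in Python--added here for
illustration, not part of the original study--shows the method, assuming the
leader counts above and the season totals cited in the text (231 overall, 208
for stolen bases):

    # Share of seasons in which each category's leader also won its
    # league or divisional pennant. Leader counts are those printed in
    # the table above; the season denominators (231 overall, 208 for
    # stolen bases) are assumptions taken from the surrounding text.
    categories = {
        "Batting Average":    (83, 231),
        "Slugging Average":   (94, 231),
        "Stolen Bases":       (44, 208),
        "Fielding Average":   (70, 231),
        "Earned Run Average": (96, 231),
    }
    for name, (leaders_won, seasons) in categories.items():
        pct = 100.0 * leaders_won / seasons
        print(f"{name:<20} {leaders_won:>3}  {pct:5.1f}%")

(Dividing by 231 yields figures a shade lower than those printed, which
evidently rest on the season count of an earlier tally; the ranking of the
categories is unaffected.)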
It stands to reason that if, through the seasons, ballclubs had found one
strategy to be more important than any other, that finding should be indicated
in a superior correlation between league leadership and pennants won. In fact,
as the table above indicates, any such superiority is quite minimal, if it
exists at all. The 44 percent correlation between leadership in earned run
average and championships is only about 1 percent higher than the correlation
between winning and slugging. The chart does seem to suggest that base
stealing speed has a weaker correlation to pennants than the other four
skills.
In recent years, the correlations have become even weaker and less
marked. Since 1961, there is only a 32 percent correlation between slugging
average or earned run average--still the most common trait of champions--and
pennants.
The table certainly cannot be read as conclusive. No one would affirm,
for instance, that raw stolen-base totals are the sole measurement of
emphasis on speed; that fielding average is the only gauge of defensive
ability; or that ERA is the one yardstick by which to assess pitching
strength. Yet if achievement in those five categories can at least be read as
a barometer of strategic superiority, then what the table does suggest is that
the least productive strategy contributes to victory roughly one fifth
of the time, the most productive less than half the time.
Why do strategies change? Why don't the modern Mets approach the
challenge of winning in the same fashion as the White Stockings of bygone
days? Many of the reasons are obvious. Plainly, changing conditions and rules
dictate some of the strategic adjustments. The White Stockings and their
counterparts of the 1880s would, for instance, have considered it folly to pay
more than one or two pitchers and an equal number of substitutes. Rules
regulated the appearances of nonregulars, and in a time of 80-game schedules
and underhanded deliveries, more bodies simply were not required. Night
baseball and modern-day transcontinental travel demands, too, place greater
strains on players.
Changes in park sizes, styles and equipment contribute to strategic
alterations as well. When, in the first quarter of the twentieth century,
improved manufacturing techniques made for a better grade of ball, managers
found it more productive to eschew the erstwhile popular sacrifice in favor of
waiting for a home run. The increasing popularity of artificial turf created
an intensified interest in defensive range and speed. And sociological
adjustments played a part as well. The 1920 outlawing of the spitball and
other "trick" pitches that involved defacing of the ball--occasioned, at least
in good measure, by sociological factors--plainly contributed to generally
higher batting averages throughout the 1920s and 1930s. The de facto banning
of the beanball and its first cousin, the knockdown pitch, in recent years
resulted to some degree from public complaints about the pitch's potential
danger.
But another, less obvious contributor to the constant ebb and flow of
baseball strategy is simple managerial practice. If a particular team employs
a new, or more often resurrected, strategy to success, the prospect is great
that competitors will incorporate it into their own plans. Often, these
strategic adjustments are more transitory, but in terms of their
impact on individual pennant races they can be just as important.
It is overly simplistic to equate particular strategies with specific
time periods--to suggest, for instance, that because earned run averages
tended to be lower during the first decade of the 1900s, the emphasis at that
time was on pitching. Or to argue that teams stressed offense in the 1920s and
1930s because batting averages swelled, or to suggest that speed has become
the dominant force of the present generation. In fact, between 1900 and
1919--the commonly recognized dead-ball era--the league batting champion won
20 pennants, the slugging champion 19, and the earned run average champion
only 16. Conversely, between 1920 and 1949--the period of unbridled
hitting--32 pennants were won by clubs that led their league in ERA, only 24
by slugging leaders, and only 22 by batting average leaders. And only 10 of
the 36 league stolen base leaders since 1969 have won divisional pennants, a
correlation that is slightly greater than the average for all time, but hardly
compelling in making the argument for a contemporary strategic shift to speed.
Those numbers do not render the era labels meaningless, but they do
suggest that the successful managers--of every generation--may be following
their own strategies, rather than the obvious ones.
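To put those era-by-era counts on a common footing, divide each by the
number of league seasons in the era--roughly 40 from 1900 through 1919 and 60
from 1920 through 1949, counting both leagues. A brief sketch in the same
illustrative spirit (the era denominators are assumptions; the leader counts
are those cited above):

    # Era-by-era share of pennants won by each category's leader.
    # Leader counts come from the text above; the league-season totals
    # (about 40 for 1900-1919, 60 for 1920-1949) are assumptions.
    eras = {
        "1900-1919 (dead ball)":   {"seasons": 40, "Batting": 20, "Slugging": 19, "ERA": 16},
        "1920-1949 (lively ball)": {"seasons": 60, "Batting": 22, "Slugging": 24, "ERA": 32},
    }
    for era, data in eras.items():
        for cat in ("Batting", "Slugging", "ERA"):
            share = 100.0 * data[cat] / data["seasons"]
            print(f"{era:<26} {cat:<9} {share:5.1f}%")

Run, the sketch makes the inversion plain: batting leadership tracked
pennants best in the pitchers' era, ERA leadership best in the hitters' era.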
The art of strategy probably is as old as the game itself. The first
player in the first game approached the batting area for the first time and,
scanning the defense, wondered whether it would be wiser to take a strike, go
to right, or rip one over the left fielder's head. When Candy Cummings
discovered--if Candy Cummings indeed discovered--that he could make a baseball
curve, he was engaging in the development of strategy. So was the anonymous
manager who--faced with the dilemma of none out in the ninth and the winning
run at third--brainstormed bringing both his infield and outfield in to a
shallow depth, the better to cut off the run at home. When, in the 1880s,
Chicago's legendarily innovative "King Kelly"--perhaps apocryphally--dashed
from his seat on the bench, yelled "Kelly now substituting," and snagged a
foul fly to save the game, he was enhancing strategy: at least, he was until
that particular practice was outlawed, and substitutions permitted only during
time-outs.
Perhaps the first recognized employer of what we might today consider
strategy on a prolonged basis was Ross Barnes, the second baseman of the
champion Chicago White Stockings of the National League's inaugural season in
1876. The league at the time had a rule that stipulated as fair any ball which
landed in fair territory, irrespective of whether it subsequently rolled foul
before passing a base. By that standard, many of the "bunts" of today--and a
number of chops as well--would be fair balls. Barnes developed the skill of
striking such "fair-foul" hits, and he did it so well that he led the league
in batting that first season, with a .429 average. Alas for Barnes, as would
be the case for some subsequent strategists of later ages--notably
spitballers--rulemakers reacted to his achievement by outlawing the strategy
that made it possible. And when in 1877 the requirement was established that a
ground ball pass first or third base in fair territory to be legitimately
fair, Barnes' average plummeted by over 150 points, to .272.
As would be expected, the development of strategy during the game's first
decades occurred in very broad and general terms. There was, for instance,
little thought given to the strategic advantages of relief pitchers,
platooning, pinch hitting, pinch running, or defensive replacement, for the
simple reason that until the late 1880s substitutions--save for injury--were
not even permitted. Naturally, the growing awareness of the value of
maintaining a reserve of players first focused on the pitcher's box.
As early as 1876, the first season of club-based professional leagues,
managers employed diverse approaches to pitching strategy. Four of the eight
teams, including the Chicago champions, stayed fundamentally with a single
hurler. In the case of Chicago manager Al Spalding, that pitcher was Spalding
himself, who pitched in 61 of the team's 66 games. But three other clubs
divided the mound work roughly equally between two men of reasonably balanced
skills. In the case of third-place Hartford, for instance, Tommy Bond pitched
408 innings with a 1.68 ERA, while old-timer Candy Cummings curved his way
through 216 innings with a 1.67 ERA. And fourth-place Boston went so far as to
divide the work among three pitchers, each pitching between 170 and 220
innings. Boston manager Harry Wright might have seemed very much the trend
setter had he stuck with that notion. But the very next year Wright jettisoned
all three of his 1876 arms and signed Bond away from Hartford to pitch 58 of
the club's 61 games. It was the "Wright" move; Boston won the 1877 flag.
Cincinnati employed a three-man staff that year. . . and finished last.
If we define a pitching "staff" as consisting of at least four pitchers,
each sharing a roughly equivalent part of the responsibility, then credit for
devising the first one probably belongs to Jack Chapman, who directed the
fortunes of several early-day National League teams. Chapman found himself in
Detroit in 1884, surrounded by little offense and even less in the way of
reliable pitching. The team's earned run average in 1883 had been 3.56, second
worst in the league and considerably higher than the overall 3.13 average.
This was still very much an era when a single hurler could carry a team's
fortunes: In Providence, Charles Radbourn would win 60 games and pitch 679
innings, the equivalent of 75 complete games. Other mound stars included Pud
Galvin (46-22) in Buffalo, Larry Corcoran (35-23) in Chicago, and Mickey
Welch (39-21) in New York. Chapman had no one who could hope to match such
standouts head-on day after day, so he did not try. Instead, he rotated five
men, none pitching more than 30 percent of the team's innings. The result:
well, it wasn't much. Detroit still finished last. Chapman took his approach
to Buffalo in 1885, where the notion of a four-man pitching rotation lasted
longer than Chapman himself; it remained throughout the season, but he was
dismissed after a 12-19 start.
From the mid-1880s, experimentation with multipitcher staffs became more
common, but no team won a pennant utilizing such an approach until Chapman's
successor in Detroit, Bill Watkins, resurrected the notion in 1887. That club,
too, featured five pitchers, none of whom did very much more than a third of
the work. Like Chapman, Watkins plainly was trying to mask a weakness. His
everyday lineup featured some of the game's greats: outfielder Sam Thompson
won the batting (.372) and RBI (166) titles, and the team led the league in
runs scored, doubles, triples, batting average, slugging, and fielding
percentage. But as usual, all of the great pitchers toiled for other teams:
Tim Keefe and Welch in New York, John Clarkson in Chicago, Galvin in
Pittsburgh. Watkins had only two proven arms--Lady Baldwin (42-13 in 1886) and
Charles Getzien (30-11)--and two lightly used reserves, Pete Conway, acquired
from Kansas City, and a twenty-three-year-old named Larry Twitchell. When
Kansas City's team folded after the 1886 season, Watkins signed the team's top
pitcher, Stump Weidman. He also promoted Twitchell to a semiregular status,
and those moves, combined with the availability of Conway for a full year,
left Watkins so deep in pitching that he could actually afford to let Weidman
go to the New York team of the American Association at midseason. Following is
the record of that Detroit staff for 1887:
PITCHER     GAMES   INNINGS   WINS-LOSSES    ERA
Getzien       43      367        29-13       3.73
Baldwin       24      211        13-10       3.84
Weidman       21      183        13-7        5.36
Conway        17      146        8-9         2.90
Twitchell     15      112        11-1        4.33
Suddenly the names of Detroit pitchers began showing up in the strangest
of places, like among the league leaders in key pitching categories. Getzien
led in percentage and was third in wins, Conway ranked second in ERA and
allowed fewer hits per nine innings pitched than anyone. The next season a
very funny thing happened: several teams ditched their reliance on a single
pitcher in favor of a staff. There remained a few holdouts: Boston's John
Clarkson pitched 483 innings in 1888, 620 in 1889, and 460 as late as 1891.
But within a decade of the Detroit staff's accomplishment, Boston's Kid
Nichols could lead the league in innings pitched with a comparatively modest
368. The era of a team asking one man to pitch as many as 400 innings was not
quite dead yet--it would surface here and there through the first decade of
the twentieth century--but it was dying.
The change to a multiple-pitcher staff may have been hastened by
Detroit's inability to snare one of the strong arms--a Clarkson or a
Galvin--but changing conditions and rules would have made it inevitable even
so. Occasionally, a strategy works so well that it must be legislated against.
Ross Barnes' was one such. But the all-time champions, both in devising new
strategies and in getting them banned, were the Baltimore Orioles teams that
flourished under manager Ned Hanlon in the 1890s.
Hanlon's Orioles achieved that mastery by a singular combination of
remarkable skill and superior innovative capacity. Among the strategies team
members are credited with devising or popularizing:
The hit-and-run play: Stories as to the origin of the stratagem, whereby
according to a preconceived plan a runner breaks for the next base while the
batter attempts to drive the ball through a hole vacated by the fielder
covering the steal effort, are both numerous and hoary, and no definitive
judgment can be rendered. Cap Anson, longtime manager of the Chicago White
Stockings, is among those purported to have claimed this strategy as his own.
But the best available evidence tends to support the claim of the Orioles'
chief contemporary rivals, the Boston Beaneaters, and their manager, Frank
Selee. John McGraw, the famous manager who played for the Orioles, insisted on
the validity of Baltimore's claim. But even if Hanlon's Orioles cannot be
established as the originators, they certainly brought the play to its first
and lasting popularity. As worked most frequently by Baltimore on frustrated
opponents, John McGraw, leading off, would reach base, and then Willie Keeler,
a superlative hitter (lifetime .345 batting average) whose principal asset was
bat control, would direct the ball to the appropriate defensive weak spot,
often resulting in runners at first and third with none out.
The Baltimore chop: There is no question as to the origin of this play,
which has waned in strategic significance with the advent of the home run as a
game factor. But Orioles hitters mastered it and used that mastery to advantage.
The chop was deceptively simple: a hitter would employ an exaggerated downward
swing to drive the pitch almost directly into the ground in front of the
plate. On the hard Baltimore dirt, the result would be a simple infield
bouncer, but one recoiling so high off the ground that there would be no
defense; infielders could merely wait in vain frustration for the ball to
descend while the batter scampered to first unchallenged.
The bunt single: The sacrifice, of course, had been around for many years
prior to the emergence of the Orioles. But Baltimore players like McGraw,
Hughie Jennings, and Joe Kelley were among the first to use the bunt as an
offensive weapon, a means for reaching base. Dickey Pearce and Tom Barlow of
the old Brooklyn Atlantics pioneered in this regard, and Ross Barnes followed.
But McGraw especially was brash in his use of the bunt: it was occasionally
remarked in awe that he might even attempt to lay one down on Boston third
baseman James Collins, then considered the standard for measurement of
excellence at the position.
The Orioles weren't the only innovators of the 1890s. In Boston, the
Beaneaters honed their skill at the double steal, wherein the runner at first
broke for second, and when the catcher attempted to retire him the runner on
third tried to score. This rather daring technique required not only nerves
and teamwork but superior speed, and the Beaneaters had plenty of the latter
commodity with the likes of Billy Hamilton, whose more than 900 career stolen
bases represented the all-time record until Lou Brock's day. The Brooklyn club
of the same era is generally credited with originating the tricky cutoff play,
in which an infielder intercepts an outfielder's throw to the plate in an effort
to retire the batter if he, thinking the ball will be thrown through, attempts
to advance an extra base.
But the Orioles devised other, less gentlemanly strategies as well. Their
first baseman, "Dirty Jack" Doyle got his nickname by tripping, jostling or
holding opposing runners by the belt; Jennings at shortstop or McGraw at third
were equally as likely to obstruct a runner. Oriole outfielders were known for
hiding extra balls in the tall grass to be put in play in emergencies. It was
said that catcher Wilbert Robinson always kept his pockets full of pebbles,
which he dropped in the shoes of batters as he squatted behind them. On
offense, the Orioles were by no means above cutting bases when an umpire's
back was turned. They could do all of those things because most games of the
era were officiated by a single arbiter, who could not hope to watch
everything taking place on the broad expanse. Ultimately, public disgust at
the Orioles' open flouting of rules caused league officials to authorize
umpiring teams. Over time, the practice grew to using four umpires. The trend
started with the Orioles.
Possibly the most convincing evidence of the prominent role played by
Hanlon's Orioles in the development of baseball strategy is the fact that the
two superior minds of the subsequent generation of baseball officials were
former Orioles: McGraw and Jennings. It was they who, while piloting teams of
the first few years of the twentieth century to pennants, popularized
strategic innovations that would eventually assume permanent, prominent roles
in the planning of every major league franchise.
Jennings took over leadership of the new American League's Detroit Tigers
in 1907 following his retirement as an active player, and he became an
immediate success. The Tigers, a 71-78 team the previous year under Bill
Armour, leaped immediately to 92-58 and the championship. They followed that
up with pennants in 1908 and 1909 as well, the first American League club to
win three years running. In part, Jennings' success was a product of being in
the right place at the right time; his managerial star ascended in almost
precise concert with the development of Ty Cobb, who came up as an
eighteen-year-old rookie in 1905, and won batting titles in twelve of thirteen
years from 1907 through 1919. But give Jennings some credit as well for
analyzing his team's strengths and weaknesses, and for inventing methods of
overcoming the latter.
The prime example of that trait involved his handling of the Tigers'
catchers. Even in the years of their first two championships, catching was a
comparative liability for them. The regular, the lefthanded-batting Boss
Schmidt, hit just .244 and .265, and he seemed especially bedazzled by
lefthanded pitchers. Jennings had dealt summarily enough with other weak links
by releasing them. But he did not want to dispatch Schmidt because of his
still sharp defensive skills and above-average throwing arm. Instead, Jennings
replaced Schmidt in the lineup against lefthanders with righthanded Ira
Thomas and then with Oscar Stanage after Thomas was traded. What Jennings was
using was a platoon system, and it gradually caught on. New York Highlander
manager George Stallings applied the notion with outfielders Willie Keeler and
Birdie Cree in 1909, then took the idea with him to Boston when he assumed
control of the Braves in 1914. There, his judicious mixing of a half dozen
outfielders helped bring him a pennant.
Research compiled by Bill James for his "Historical Abstract" suggests
that the notion of platooning actually started with Armour, Jennings'
predecessor at Detroit in 1906, rotating Schmidt and Freddie Payne. But
Armour's platoon system attracted little notice, and he himself was soon
fired. It plainly is Jennings who deserves the credit for popularizing the
idea by demonstrating over a period of several seasons that it could work with
a pennant contender. (A solid case might also be made for manager Frank
Bancroft as the father of platooning, way back in the 1880s. Nothing new under
the sun, perhaps.)
In the National League, McGraw pioneered strategy of a very different,
but equally lasting type. There came to the Giants in 1908 a twenty-year-old
rookie pitcher named Otis Crandall--the players called him "Doc"--who showed
exceptional potential. Crandall won 12 games, but he lacked overpowering
speed and stamina, was hit hard in the later innings of games, and did his best
work in relief of other pitchers. To minimize the weakness and take best
advantage of the strengths, McGraw in 1909 designated Crandall as the club's
"relief" pitcher, chosen to enter in mid-game and rescue a faltering
teammate. In an era when starting pitchers were rarely removed--about
two-thirds of all starts that year were complete games--the concept of a
pitcher actually specializing in midgame appearances seemed demeaning. Yet
that is precisely what "Doc" Crandall did: starting only 8 games, entering 22
in relief of other Giants, and winning 5 of those games, with 4 of what would
come to be classified as saves and just 1 defeat. Over the next three seasons,
two thirds of Crandall's 120 appearances were in relief. He won 20 times,
saved 11 more, and lost just 6. He was not considered the equal of Christy
Mathewson, Rube Marquard, or Jeff Tesreau, but he would have ranked in value
with any of the other Giants starters.
As intriguing as it was, Crandall's success did not spur an immediate
flood of imitators. Managers, who found quality starting pitching difficult
enough to locate, could not bring themselves to isolate one or more of their
better arms for emergency duty. One of the few mimics was Patsy Donovan of the
Boston Red Sox, who in 1910 converted righthander Charley Hall from an
occasional, and ineffective, starter into a reliever of fairly consistent
quality. Between 1910 and 1913, Hall made 136 pitching appearances for Boston,
just 51 of them as a starter, and in relief he won 20 of 24 decisions, saving
11 others. Fittingly, the 1912 World Series pitted Hall's Red Sox against
Crandall's Giants. Hall was the more widely used, pitching 10 2/3 innings in
two games with a 3.38 ERA. Crandall saw action in just one game, and he did
not allow a run, as the Giants eventually lost, four games to three.
While McGraw and Jennings innovated, game strategy during the dead-ball
era between 1901 and 1920 appeared to stress strong pitching, baserunning,
playing for a single run, and an emphasis on one or two players who, had 1990s
hyperbole been in fashion then, would have been called "superstars." The most
obvious of the latter was Cobb, who batted .350 in 1907, .385 in 1910, .420 in
1911, and .410 in 1912. In 1910, for example, Cobb's batting average was
nearly 100 points higher than that of any of his teammates, his slugging average 125
points superior. He was not the only early 1900s example of the near "one-man
team." In Cleveland in 1911, outfielder Joe Jackson, batted .408, slugged
.590, collected 233 hits, with 45 doubles, 19 triples and 126 runs scored. The
second highest totals on the team in each category were: .304, .396, 142, 25,
9, and 89. In 1909, Pittsburgh's Honus Wagner led his team to the pennant with
a .339 average. The second highest average among the Pirate regulars belonged
to player-manager Fred Clarke, at .287.
With the home run not yet developed as a viable option, and with league
earned run averages ranging between 2.30 and 2.70, managers often resorted to
the sacrifice or the stolen base, mindful of the importance of every run. It
is not possible today to reconstruct totals of sacrifices, but stolen base
records are available, and their counts rose higher and faster than at any
other period of the game until the 1970s, as the evolution of the individual
and team stolen base records indicates. In 1900, two years after the modern
system for counting steals was developed (prior to that, any baserunner's
extra-base advance--whether via a pitched or batted ball--had been counted),
Brooklyn led the majors with 274 steals. St. Louis's Patsy Donovan and New
York's George Van Haltren set the individual standard with 45. Frank Isbell of
the new American League's Chicago team broke the modern individual record in
1901 with 52, and Isbell's Sox stole 280. In 1903 Frank Chance of the National
League Chicago Cubs and Jimmy Sheckard of the Dodgers upped the individual
mark to 67, and in 1904 the New York Giants raised the team record to 283. The
Giants broke their own record in 1905, stealing 291, and Cobb shattered the
modern individual record in 1909 with 76 steals.
Neither record lasted one season: in 1910, Eddie Collins of the
Philadelphia Athletics stole 81, and the Cincinnati Reds 310. And even those
standards were erased within one year, Cobb stealing 83 and the Giants 347 in
1911. Clyde Milan of Washington broke Cobb's record with 88 in 1912; then Cobb
broke Milan's mark in 1915, stealing 96.
There you have it: the individual record broken seven times, the team
record five times, all in a span of 15 seasons. Cobb's record did not fall for
forty-seven years, until Maury Wills stole 104 bases for Los Angeles in 1962.
And the Giants' team record of 347 steals is unsurpassed to this very day.
Wouldn't it be natural for a record in a newly established category to be
broken several times in quick succession, then finally to reach a
comparatively unattainable plateau? Yes. But consider that even the original
team mark of 274 set by the 1900 Dodgers would have stood into the 1970s. The
first twenty years of the century were not a case of a record gradually being
raised beyond reach, they were a case of teams simply stressing the running
game. The very worst team at accumulating stolen bases in the century's first
decade, the 1906 Boston Braves (who stole 93), would have won either the
American or National League stolen base championships thirty-eight times
between 1925 and 1960.
Strategy: 1920-Present
The reasons behind the switch--which occurred about 1920--from a
strategy based on the sacrifice and stolen base to one focusing on the
extra-base hit are numerous and complex. Changes in rules, park design,
equipment, and fan interest all played a part. The impact of those factors on
the changed game is underscored in the dramatically altered statistics of the
game in the 1920s and later, as compared with its predecessor. The raw number
of runs being scored provides the clearest contrast.
Prior to the season of 1920, the major league record for runs scored in a
season by an individual was 147, Ty Cobb scoring that many in 1911. The record
for total runs scored in the majors since the establishment of the 16-team,
154-game schedule, set in 1912, was 11,164. But in 1920, New York's Babe Ruth
easily broke Cobb's individual
record by scoring 158. He broke it again in 1921 with 177, establishing the
standard that still exists. In all, Cobb's former record was broken thirteen
times in the American League alone between 1920 and 1940. Meanwhile, the total
runs scored record rose to 11,935 in 1921, then broke through the 12,000
barrier the following year, to 12,059. It was broken again in 1925 (12,592),
again in 1929 (12,747), and again in 1930 (13,695). And that record stood for
more than three decades, until it was surpassed in 1962, by which time each
major league had added two teams and eight more games to the playing schedule.
Power records similarly surged. Tris Speaker's dead-ball-era record for
doubles--53, set in 1912--fell to Speaker himself in 1923 (59), and was
surpassed in eight more seasons during the 1920s and 1930s; in 1936 alone five
players matched or bettered that pre-1920 record. The pre-1920 record for home
runs--Ruth's 29 in 1919--bears no comparison, of course, with subsequent
achievements. It had been raised three times by Ruth himself by 1927, and was
bettered in every single American League season until the war year of 1944,
when New York's Nick Etten led the league with only 22. Slugging averages,
which ranged between .310 and .340 during the dead-ball era, jumped by an
average of more than 20 points in both leagues in 1920 alone, and by 30 more
points the following year. The increase in the American League alone was
nearly 14 percent between 1919 and 1921. The league slugging average soared to
.421 by 1930 in the American League, to .448 in the National.
With the increase in power came a concurrent acceptance of the base on
balls as occasionally strategically prudent. Managers, operating on the theory
that discretion might be the better part of valor, instructed or allowed
pitchers to "work around" certain hitters like Ruth who were capable of doing
far more damage with a home run than a walk. For the pre-1920s, when pitchers
looked on the base on balls as a pariah, the Chicago Cubs' Jimmy Sheckard held the
record by drawing 147 of them in 1911. That lasted only as long as it took
Ruth to be walked 148 times in 1920. The Babe raised that standard to 170 in
1923. The league record of 4,282 walks issued in the National League in 1911
lasted until 1925, when American League pitchers walked 4,315 batters. The
record was hiked biennially to 4,402, 4,611, 4,855 and 4,924 in the same
league between 1932 and 1938.
Finally, in the 1920s and 1930s, the notion of
the relief pitcher as defined by McGraw years earlier first gained true
prominence. In 1919, the St. Louis Cardinals' Oscar Tuero had become the first
pitcher used primarily in relief to lead the league in appearances; he pitched in 45
games, 28 out of the bullpen. The achievement drew little notice, primarily
because his team finished seventh. But in 1923, the pennant-winning Giants'
Claude Jonnard and Rosy Ryan tied for the league lead in appearances, each
with 45. Ryan started 15 games that season, Jonnard but 1. The following
season, Firpo Marberry of the AL champion Washington Senators led the league
with 50 appearances, only 15 of them starts. Marberry repeated as the most called
upon in 1925 with 55 appearances, all in relief, in another pennant-winning
year. Marberry's role was by no means yet established; he would lead the
league three more times in appearances, twice as a reliever, once as a
starter. But the idea of a specialist in quality relief pitching for
first-rank teams had at last begun to gain acceptance. When the 1927 New York
Yankees blitzed the American League to win 110 games, their most frequently
called upon pitcher was rookie Wilcy Moore, who won 19 games despite starting
only 12. Moore pitched 38 times out of the bullpen.
In 1901, National League
pitchers had completed 976 games, representing nearly 90 percent of the
schedule. By 1919, that figure had fallen to about 60 percent. In 1922, for
the first time in history, National League pitchers completed fewer than half
of all their starts. By 1930, the mark had fallen to 43 percent, and it held
at roughly that level through the 1930s and 1940s. As the perceived importance
of the complete game waned, a sort of temporary strategic miasma ensued;
managers, more willing to turn to the bullpen, had not yet perfected effective
strategies for its use. That began to change in the early 1940s when
Brooklyn's Leo Durocher developed the notion of a bullpen "ace," a late-inning
stopper capable both of helping his team regain leads and of buttressing a
successful but tiring starter's work by holding his advantage through the
final innings. For the first time, a manager appeared not to expect his
starter to finish, or at least not to mind if he didn't. Dodger starters
completed only 66 games in 1941--one of the lowest totals ever by a pennant
winner--and only 67 more the following season. But Durocher used the
hard-throwing Hugh Casey to win 14 games and save 20 those two years. The
notion was copied. Boston's Joe Cronin won the 1946 American League pennant
thanks in good measure to the relief pitching of Bob Klinger, who appeared 27
times in relief and saved a league-high 9 games. The New York Yankees' Joe
Page won 21 games and saved 33 in virtually exclusive bullpen action in
1947-1948. The notion of relievers as failed starters was gradually eroding.
As late as 1946, more than half the major league mound staffs were led in
appearances by a starter, and it was still possible for a starter, Cleveland's
Bob Feller, to lead the league in that category. But the trend was plain. In
1947, relievers led in appearances on ten of the sixteen staffs. By 1952, the
figure was thirteen of sixteen. In Philadelphia in 1950, a relief pitcher, Jim
Konstanty, won the Most Valuable Player Award by pitching in a record 74
games, saving 22 of them and leading his team to the National League pennant.
In Brooklyn, Joe Black won 15 games and saved 15 more for the pennant-winning
1952 Dodgers. In Cleveland in 1954, Al Lopez presented a relief duo:
lefthander Don Mossi and righty Ray Narleski appeared in 82 games between
them, saving 20. The major league save record (the save did not become an
official statistic until 1969), which had stood at 22 since Marberry set it
in 1926, swelled to 27 under Page in 1949. Boston's Ellis Kinder matched that
in 1953, and New York's Luis Arroyo topped it in 1961. Prior to 1949, only
Marberry in all of baseball history had saved 20 games in a year. Between 1949
and 1961, ten pitchers did it.
Player platooning, largely dormant after the early 1920s, was revived
in the late 1940s, principally by Stengel. A platoon player himself under
McGraw with the 1920s Giants, Stengel in 1949 and 1950 alternated third
basemen Bobby Brown, a lefthanded hitter, and righty Bill Johnson.
In 1951, Gil McDougald supplanted Johnson as the righthanded half of the
platoon. The Yankees won the world championship all three years. By 1955,
Stengel had expanded his platoon system, alternating righthanded Bill Skowron
with lefty Joe Collins at first base, and subbing righthanded Elston Howard
for Irv Noren occasionally in the outfield. Howard and utility man Tony Kubek
both were platooned at several positions in 1957 and 1958. Again, successful
managers took their cues from Stengel. Fred Haney's use of the first base
platoon of Joe Adcock and Frank Torre helped the Braves to the 1958 pennant.
The only manager to beat out Stengel for the AL pennant between 1949 and 1960,
Al Lopez, used a platoon system to do so in Chicago in 1959, alternating
righty Bubba Phillips and lefty Billy Goodman at third base, and righty Jim
McAnany and lefty Jim Rivera in right field. In Pittsburgh in 1960, manager
Danny Murtaugh often alternated at three positions: Hal Smith or Smoky Burgess
at catcher, Dick Stuart or Rocky Nelson at first, and Gino Cimoli or Bill
Virdon in center. But for Fred Hutchinson's use of platoons at three positions
in 1961 (Jerry Zimmerman and John Edwards at catcher, Elio Chacon and Don
Blasingame at second, Wally Post and Jerry Lynch in left), Cincinnati might
well not have held off the Dodgers to win by four games. By the
mid-1960s, most teams were platooning at least one position.
The other significant change in strategy to evolve during the 1950s (and
early 1960s) was a growing acceptance of the strikeout as a tolerable price
to pay for home run power. In retrospect, that acceptance can clearly be seen
as a delayed reaction, for home run totals had begun to mount sharply in 1953.
For the previous two decades, major league batters had averaged between 1,300 and
1,700 home runs; in 1953, they hit 2,076, a record 1,197 of them coming in the
National League alone. That represented a 22 percent increase over the
previous season. From 1953 through 1960, the annual total rose only about 10
percent more, and in fact the raw number of home runs flattened and
occasionally declined between the decade's peak season, 1956, and 1960. But strikeouts rose
sharply. In 1953, major league batters struck out 10,220 times; by 1960, that
total had risen steadily to more than 12,800, a climb of more than 25 percent.
The strikeout explosion continued unabated through the 1960s, whether home
runs rose (as they did in 1961 and 1962) or fell. In fact, between 1961 and
1966, home run production remained virtually level in the major leagues,
despite the addition of two expansion teams. But strikeouts rose by more than
25 percent over the same period. The increase (part of which was attributable
to strategic concessions and part to rules changes) showed itself in the
individual strikeout totals as well. Until 1956, the record for most
strikeouts in a season was Vince DiMaggio's 134, set in 1938. Washington's Jim
Lemon broke it that season with 138. In 1961, Detroit's Jake Wood broke it
again, fanning 141 times. Harmon Killebrew of Minnesota raised the mark to 142
the following season; then Dave Nicholson of the Chicago White Sox increased
it to 175 in 1963. By 1969, when San Francisco's Bobby Bonds whiffed 187
times, the record was thought unassailable. Not so--Bonds himself beat it in
1970 with 189 strikeouts. Since 1966, there have been only six seasons in
which a league leader has struck out as few times as DiMaggio did when he set
the old record.
Recent Strategic Changes
The game since 1970 features at least three more easily identifiable
refinements: two of them strategic, the third brought about by one of the
game's rare major rules changes. Those three are the further specialization
of the relief pitcher's role, the renewed emphasis on the running game, and
the implementation of the designated hitter.
The use of the bullpen as a strategic factor has progressed steadily
from Luis Arroyo's days in New York until the present. Again, clear evidence
is found in the record book. Arroyo's previously unmatched total of 29 saves
was surpassed in 1965 by Chicago's Ted Abernathy, who saved 31. Kansas City's
Jack Aker broke the record again the following season with 32. But that record
lasted only until 1970, when Cincinnati's Wayne Granger saved 35 games. The
Reds' Clay Carroll raised the record to 37 in 1972, and Detroit's John Hiller
saved 38 in 1973. In 1983, Kansas City's Dan Quisenberry saved 45. Since then,
three pitchers have saved more than 50 games in one season, including Bobby
Thigpen, whose 57 saves in 1990 established the current record.
Discounting the strike year of 1981, Arroyo's remarkable 29 saves in 1961
would have led the major leagues in no season since 1976.
The primary role of the bullpen had changed from rescuing incompetent
starters to executing a carefully worked-out--and often rigid--strategy for
victory. Relievers now have more clearly defined jobs: a typical pen contains
a lefthander to retire lefthanded batters, a righty for the same purpose
against righthanders, a "closer" whose task is to record the final three
outs, and a setup man whose job is to get the game to the closer. Only in
emergencies are pitchers used in roles different from their specialties.
With these changes has come additional recognition for the very best
relievers. In 1974, Mike Marshall of the Los Angeles Dodgers became the first
relief pitcher to win the Cy Young Award; his credentials included 15
victories, but also a league-leading 21 saves (and an incredible 106 games).
Three years later, New York's Sparky Lyle would win the AL award with 13 wins
and 26 saves. By 1979, a reliever's victory total had become extraneous: Bruce
Sutter was recognized as the National League's top pitcher that season despite
winning just 6 games. He saved 37. Rollie Fingers did the same thing for
Milwaukee in 1981, winning just 6 but saving 28--and he was named both Cy
Young Award winner and league MVP! Willie Hernandez won the 1984 Cy Young
Award for Detroit on the basis of only 9 victories, but 32 saves.
Dennis Eckersley did the same thing in Oakland in 1992 when he worked
just 80 innings and won just 7 games, but saved 51. By the time of Eckersley's
ascendancy as the game's premier reliever, the role of the "super saver"--a
pitcher who appeared only in the ninth inning and only to protect a lead--had
been defined to the point of rigidity. Managers almost never went to their
best relievers except in ninth-inning "save situations," even when the game
hung in the balance earlier.
It is difficult to say with precision why the stolen base, treated with
apathy by power-happy major league teams for decades, was restored to favor.
The simplest of explanations may hold:
that players entered the major league ranks who could run, but not hit with
power. The changing playing conditions, notably the wider use of artificial
surfaces, may well have played a part, as may have the infusion of black and
Caribbean Basin players. Managerial choice certainly had something to do with
it: men like Al Lopez, Chuck Tanner and Whitey Herzog found it easier to
succeed with the versatility and greater athleticism afforded by the
baserunning threat than by the occasionally lumbering, one-dimensional
slugger.
But if the reasons behind the stolen base's surge are speculative and
complex, affixing the date of its arrival as a mainstream stratagem is less
so. It came from Venezuela to Chicago in the person of Luis Aparicio in 1956.
Prior to Aparicio, there had not been a genuine base-stealing threat--a
fellow capable of swiping 50 or more bases in a season--in more than a decade.
And the efforts of the comparative handful of fellows who could perform such
feats of speed in those days--George Case in Washington and Snuffy Stirnweiss
in New York--got lost in the glare of the home run era. Consequently, major
league stolen base totals had sagged to inconsequence. From 1920, when more
than 1,700 bases had been stolen, these were the figures for five-year
intervals through 1955:
YEAR     NL     AL   TOTAL
1925    672    711   1,383
1930    481    598   1,079
1935    403    477     880
1940    475    478     953
1945    525    453     978
1950    372    278     650
1955    377    317     694
Aparicio sounded the call to the new emphasis on speed. As a rookie in
1956, he led the American League in steals. His total of 21 was certainly
nothing special, even for the sluggish 1950s; the notion of the baserunner
as a weapon had not yet caught on, even in Chicago. The following season,
1957, Aparicio won the stolen base title again, this time with 28 thefts. In
1959, he stole 56 (Willie Mays, the National League champion, stole 27).
Aparicio would go on to win the stolen base crown in nine successive seasons,
topping 40 steals in four of the next five years.
But by 1959, Aparicio no longer was the whole story. Stolen base totals
had turned upward virtually league-wide. National Leaguers stole 439 bases
that season, their highest total in nearly a decade. In 1960, Los Angeles
shortstop Maury Wills joined Aparicio at the 50-steal plateau and National
Leaguers stole more than 500. Then in 1962 Wills eclipsed Aparicio and all
base stealers in the game's history by running successfully 104 times,
breaking Cobb's record of 96 that had stood since 1915. The Dodgers as a team
stole 198, the most by any major league club since 1918.
Baserunning seemed to bear a cause-and-effect relationship to
pennants, notably in the National League. Wills was a key factor in the
Dodgers' championships in 1963, 1965 and 1966. St. Louis obtained Lou Brock in
midseason of 1964 and promptly took off from mediocrity to the world
championship as he stole 43 bases (including 33 for the Cards). With Brock at
the heart of the Cardinals' offense, they won again in 1967 and 1968. In 1965,
major league runners stole nearly 1,450 bases, a level that hearkened back to
the dead-ball era for comparison. And that figure was no fluke: In 1969, more
than 1,000 bases were stolen in the American League alone, more than had been
swiped in all major league games as recently as 1960. By 1975, the major
league tally had surpassed 2,500; by 1987, it was over 3,500. In 1992, major
leaguers stole approximately 3,800 bases.
As was the case in the pre-1920s, every team had its "rabbit," and there
were so many that speed alone was no longer a guarantor of team success. In
1974, Brock broke Wills' single-season record by stealing 118 bases. But his
Cardinals finished second in the NL East. In 1976, Chuck Tanner's Oakland A's
stole 341 bases, missing by only six the all-time record McGraw's Giants had
established in 1911. Bill North stole 75 that season, Bert Campaneris 54, and
Don Baylor 52. Nine different Oakland players stole 20 or more. Yet the A's
finished second behind Kansas City, which stole "only" 218.
Today, the stolen base records--along with those for saves--are being
rewritten routinely. In 1992, Rickey Henderson became the first player to
steal more than 1,000 bases in a career.
The era of the designated hitter certainly has its strategic
implications, but not even the game's deepest thinkers could have devised a
way to use it had rulemakers not seen fit in 1973 to legalize it--on
a league-option basis. The premise of the DH is as simple as the realization
that most pitchers are miserable batters. It allows one player to bat
repeatedly for the pitcher without requiring the pitcher's removal from the
game. The two major leagues split over the DH when it was written into the
rulebook, and they have remained divided ever since--the American League
adopting it, the National League remaining with the traditional nine-player
format. In time, the rule has also been adopted by virtually every other
professional and college league. It has likewise been accepted as a part of
the World Series, used in alternating years from 1976 through 1985, and since
1986 used only in games played in the home ballpark of the American League
champion.
Most American League teams have used the DH position as a refuge for
older, perhaps slower ballplayers who are no longer capable of measuring up
to the daily demands of the field but are still considered effective
offensively. In
that sense, the very first DH, the New York Yankees' Ron Blomberg, was an
accurate precursor: Blomberg batted .293 over eight major league seasons, but
he never earned a defensive reputation at first base or in left field.
Blomberg's legacy was exemplified in the fourteen men who served as their
teams' principal DHs during 1993: Harold Baines, George Bell, George Brett,
Chili Davis, Andre Dawson, Julio Franco, Kirk Gibson, Reggie Jefferson, Edgar
Martinez, Paul Molitor, Troy Neel, Kevin Reimer, Danny Tartabull, and Dave
Winfield. Several--Winfield, Brett, and Molitor--are Hall of Fame candidates.
Others--notably Baines, Bell, Davis, Dawson, Franco, and Gibson--had enjoyed
above-average careers. Martinez was coming off a serious injury. But in 1993,
all of them had one thing in common--they were defensive liabilities.
The designated hitter rule was adopted by the American League in an
effort to increase offensive production and thereby spur fan interest. As
noted earlier, the late 1960s had been a pitcher-dominated phase of the game.
The AL earned run average, which had previously ranged between 3.67 and
4.16, fell through the mid-1960s to a low of 2.98 in 1968; by 1972 it had
moderated only to 3.07. (National League earned run averages had fallen in
the same fashion through the 1960s, but by 1972 had regained much of their
normal levels.) AL attendance fell as well: from a league-wide 12.1 million in
1969 to just over 12 million in 1970, to 11.9 million in 1971 and 11.4 million
in 1972. In simple terms, club owners worried that pitchers like Oakland's
Catfish Hunter and Vida Blue, Baltimore's Jim Palmer, and Detroit's Mickey
Lolich might through their very brilliance be stifling fan interest. The
change seemed to accomplish its task of injecting more offense into the AL
game; in 1973, the league earned run average climbed three quarters of a
point, to 3.82, one of the most dramatic one-season shifts in the game's
history. Other statistics reflected the rules change as well: the league
batting average rose from .239 to .259; teams scored 29 percent more runs, and
hit 32 percent more home runs.
For the two decades since the inception of the DH, the American League
has been the "offensive" league, the National League the "pitching" league.
In every season since 1973, both the final ERA and the batting average have
been higher in the American League than in the National. Logic suggests that the
DH was in large measure responsible. Through most of the 1980s, the
differences could be dramatic. In 1988, for instance, National League batters
hit .248, while American Leaguers hit .259. In 1985 the National League ERA
was 3.59, but the American League ERA was 4.15.
The principal debate concerning the DH has been twofold: whether it
inappropriately undermines one of baseball's appealing tenets, that all
participants be complete athletes; and whether it diminishes the game's
strategic interplay.
Its detractors argue that, logically, the DH must detract from strategy
by removing one of the questions a manager must repeatedly consider during
the course of a game: whether to lift a reasonably effective but trailing
pitcher for a pinch hitter. The fewer decisions the
manager must make, the reasoning goes, the more muted become baseball's
strategic nuances. And as strategy dulls, so does the game.
Among those arguing against that reasoning has been Bill James, who in
his "Historical Baseball Abstract" contended that the DH actually enhances
strategy. With a normally inept-hitting pitcher at bat, James argued, managers
were forced into a series of obvious moves that could hardly be viewed as
options at all. A reasonably competent DH, by contrast, gave
managers some choice in the decision whether to bunt, steal, swing away, or
hit and run.