# Rickey and Roth and Lindsey and Cook

*This week marks the second edition of the SABR Analytics Conference, in Phoenix (http://sabr.org/analytics). I had planned to attend, as I had last year, but a bothersome health blip has held me back east. I am something of a sabermetrician emeritus, anyway, and would have been an interested spectator rather than a contributor of fresh insight. My best work in the statistical line, in collaboration with Pete Palmer, is long behind me. All the same, I continue to approach the game analytically and I appreciate the great advances in sabermetrics in recent years.*

*To honor those advances and to place them in some historical perspective, I offer an excerpt from *The Hidden Game of Baseball*, published nearly thirty years ago—so far back that Pete and I were not yet ready to embrace the newfangled term “sabermetrics”! Let’s look at four baseball pioneers who plied their statistical trade before the world had heard of Bill James or my brilliant collaborator.*

Although the impulse to improve our understanding and appreciation of baseball through the laying on of numbers had been present from the game’s beginnings, it was not until August 2, 1954, in of all places *Life* magazine, that the New Statistics movement was truly born. On that date there appeared an article by the game’s designated guru Branch Rickey, supported considerably by statistician Allan Roth, which was optimistically titled “Goodby to Some Old Baseball Ideas.” With the aid of some new mathematical tools, it sought to puncture long-held misconceptions about how the game was divided among its elements (batting, baserunning, pitching, fielding), who was best at playing it, and what caused one team to win and another to lose. This is a pretty fair statement of what the New Statistics is about.

Although the old ideas remained in place despite his efforts, Rickey had shaken them to their foundations. He attacked the batting average and proposed in its place the On Base Average; advocated the use of Isolated Power (extra bases beyond singles, divided by at bats) as a better measure than slugging percentage; introduced a “clutch” measure of run-scoring efficiency for teams, and a similar concept for pitchers (earned runs divided by baserunners allowed); reaffirmed the basic validity of the ERA and saw the strikeout for the insubstantial stat it was; and more. But the most important thing Rickey did for baseball statistics was to pull them back from the wrong path they had taken at the crossroads long ago: to strip the game and its stats to their essentials and start again, this time remembering that individual stats came into being as an attempt to apportion the players’ contributions to achieving victory, for that is what the game is about.

“Baseball people generally are allergic to new ideas,” Rickey wrote. “We are slow to change. For fifty-one years I have judged baseball by personal observation, by considered opinion and by accepted statistical methods. But recently I have come upon a device for measuring baseball which has compelled me to put different values on some of my oldest and most cherished theories. It reveals some new and startling truths about the nature of the game. It is a means of gauging with a high degree of accuracy important factors which contribute to winning and losing baseball games…. The formula, for so I designate it, is what mathematicians call a simple, additive equation:
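The equation itself appeared as a graphic in *Life* and does not survive in this transcription. Reconstructed from the segment-by-segment description given below (all quantities in the second parenthesis are opponents’ totals, with ER the pitchers’ earned runs and F fielding), it runs:

```latex
G = \left(\frac{H+BB+HP}{AB+BB+HP} + \frac{3\,(TB-H)}{4\,AB} + \frac{R}{H+BB+HP}\right)
  - \left(\frac{H}{AB} + \frac{BB+HB}{AB+BB+HB} + \frac{ER}{H+BB+HB} - \frac{SO}{8\,(AB+BB+HB)} - F\right)
```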

“The part of the equation in the first parenthesis stands for a baseball’s team offense. The part in the second parenthesis represents defense. The difference between the two—G, for game or games—represents a team’s efficiency.”

What we have here is the first attempt to represent the totality of the game through its statistical component parts. Another way of stating the formula above is to say that if the first part—the offense, or runs scored—exceeds the second part—the defense, or runs allowed—then G, the team efficiency or won-lost percentage, should exceed .500. This is a startlingly simple (or rather, seemingly simple) realization, that just as the team which scores more runs in a game gets the win, so a team which over the course of a season scores more runs than it allows should win more games than it loses—and by an extent correlated to its run differential!

How did Rickey and Roth come up with the formula? “Only after reverting to bare ABC’s was any progress noted. We knew, of course, that all baseball was divided into two parts—offense and defense. We concluded further that weakness or strength in either of these departments could be measured in terms of runs.” Once mathematicians at M.I.T. confirmed for them that the correlation of team standings with run differential was 96.2 percent accurate over the past twenty years, the task became to identify the component parts of runs.
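The 96.2 percent figure is a correlation coefficient. As a toy illustration of what the M.I.T. mathematicians confirmed (the six team records below are invented for the example, not the historical data), correlating run differential with winning percentage looks like this:

```python
# Toy illustration: correlate run differential with winning percentage.
# The six team lines are invented for the example, not historical data.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# (runs scored, runs allowed, wins) over a 162-game season
teams = [(850, 650, 98), (800, 700, 90), (760, 740, 84),
         (720, 760, 77), (680, 800, 70), (640, 850, 62)]

diffs = [rs - ra for rs, ra, _ in teams]
wpcts = [w / 162 for _, _, w in teams]

r = pearson(diffs, wpcts)
print(round(r, 3))  # close to 1.0 for this near-linear invented data
```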

In the [preceding] formula, the first segment of the offense, (H + BB + HP) ÷ (AB + BB + HP), is the On Base Average. The second segment is Isolated Power, multiplied by .75. The third segment, applicable to teams but not to individuals, is percentage of baserunners scoring, or run-scoring efficiency (“clutch”); RBIs were not, Rickey stated, a suitable measure of individuals’ clutch ability.

In the defensive half of the formula, the first segment is simply opponents’ batting average. The second is opponents reaching base through pitcher’s wildness. (Rickey divided the opponents’ On Base Average into these constituent parts in an attempt to isolate “stuff” from control.) The third segment indicates a pitcher’s “clutch” ability, and the fourth, his strikeout ability, multiplied by only .125 because it was not very important. The fifth segment of the defense, F for fielding, was deemed unmeasurable. “There is nothing on earth anyone can do with fielding,” Rickey declared, but he did indicate that fielding was far less significant than pitching as a proportion of total defense: He ventured that while good fielding might account for the critical run in four or five games a year, it was worth only about half as much as pitching.
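With the segments as just described, Rickey’s equation can be sketched as a function of team totals. This is a paraphrase, not Rickey’s notation: the unmeasurable F term is simply dropped, and the sample figures are invented.

```python
def rickey_g(h, ab, bb, hp, tb, r,
             opp_h, opp_ab, opp_bb, opp_hb, er, so):
    """Rickey's team-efficiency G: offense minus defense (F omitted)."""
    # Offense: On Base Average + 0.75 * Isolated Power + run-scoring "clutch"
    offense = ((h + bb + hp) / (ab + bb + hp)
               + 0.75 * (tb - h) / ab
               + r / (h + bb + hp))
    # Defense: opponents' BA + wildness + pitchers' "clutch" - strikeouts / 8
    defense = (opp_h / opp_ab
               + (opp_bb + opp_hb) / (opp_ab + opp_bb + opp_hb)
               + er / (opp_h + opp_bb + opp_hb)
               - so / (8 * (opp_ab + opp_bb + opp_hb)))
    return offense - defense

# Invented season totals for one club and its opponents
g = rickey_g(1400, 5500, 550, 40, 2100, 700,
             1380, 5480, 520, 35, 640, 900)
print(round(g, 3))
```

Note the asymmetry Rickey intended: the offensive “clutch” is runs per baserunner, while the defensive one is earned runs per baserunner allowed.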

Rickey and Roth’s fundamental contribution to the advancement of baseball statistics comes from their conceptual revisionism, their willingness to strip the game down to its basic unit, the run, and reconstruct its statistics accordingly. The Rickey formula (though perhaps Roth deserves even more credit) has been superseded in terms of accuracy. The method of correlating runs with wins has been improved in recent years, and the formula for analyzing runs in terms of their individual components has, too. But the existence of the space shuttle does not tarnish the accomplishment of the Wright brothers (Orville and Wilbur, not Harry and George).

In recognizing that traditional baseball statistics did not give an adequate sense of an individual’s worth or of a team’s prospects of victory, Rickey anticipated the future. Twenty-eight years later, a writer for *Discover* magazine, surely unaware that he was echoing baseball’s Mahatma, described the impetus to the New Statistics: “Sabermetricians have tackled this problem [the inadequacy of traditional offensive measures] by devising a new statistic, one that directly measures a player’s ability both to score and to drive in runs. The number has been calculated by various analysts under various designations: batting rating, run productivity average, runs created, and batter’s run average, to name a few. It usually comes down to this simple fact: The total number of runs a team scores in a season is proportional to some combination of its hits, walks, steals, and other factors that result in batters getting on base or advancing other runners. Although the number of runs scored by a particular hit depends on how many men were on base, the differences tend to cancel themselves out over a season.”

This understanding did not evaporate in the years between Rickey’s article and the dawn of sabermetrics by that name. In 1959 the scholarly *Operations Research Journal* published an article by George R. Lindsey titled “Statistical Data Useful for the Operation of a Baseball Team.” As far as baseball people were concerned, Lindsey might as well have been writing in Icelandic. Lindsey and his father had recorded play-by-play data of several hundred baseball games in order to evaluate such long-standing perplexities as whether in facing a righthanded pitcher, a lefthanded hitter did possess an advantage over his righthanded counterpart, and if so to precisely what extent (he did, by about 15 percent); whether a team in the field should set its infielders for an attempted double play with the bases loaded early in the game and no outs (it should); whether a man’s batting average can serve as a predictor of future performance in a given at bat or game or season (at bat and game, no; season, yes); and more.

Lindsey followed this article with one that is even more central to the issues raised by Rickey and revived by the New Statisticians. In 1963, again in *Operations Research*, he published “An Investigation of Strategies in Baseball.” He wrote in his abstract, or summary, of the article:

*The advisability of a particular strategy must be judged not only in terms of the situation on the bases and the number of men out, but also with regard to the inning and score. Two sets of data taken from a large number of major league games are used to give (1) the dependence of the probability of winning the game on the score and the inning, and (2) the distribution of runs scored between the arrival of a new batter at the plate in each of twenty-four situations and the end of the half-inning. . . . [Note: the twenty-four situations are all the combinations of baserunners, from none to three, and outs, none, one, and two.] By combining the two sets of data, the situations are determined in which an intentional base on balls, a double play allowing a run to score, a sacrifice, and an attempted steal are advisable strategies, if average players are concerned.* ***An index of batting effectiveness based on the contribution to run production in average situations is developed.*** [Emphasis ours.]

Where Rickey had added the On Base Average and Isolated Power to arrive at a batter rating—and it was a good one, far more accurate in its correlation to run production than was the batting average—Lindsey employed an additive formula based on the run values of each event: .41 runs for a single, .82 for a double, 1.06 for a triple, 1.42 for a home run. (These values are not quite right, but they’re close….) To illustrate how Lindsey’s method was applied, let’s look at the 1983 records of three substantial National League players, Dale Murphy, Mike Schmidt, and Andre Dawson. Note that Lindsey’s method is to express all *hits* in terms of runs, but not the *outs*; these he brings into the picture through the traditional averaging process, dividing the run total by at bats. Yet an out has a run value, too, though it is a negative one.
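Lindsey’s additive rating reduces to a few lines of arithmetic. The batting line below is invented for illustration (the book’s table gave the actual 1983 figures for Murphy, Schmidt, and Dawson):

```python
# Run values per event, from Lindsey (1963)
LINDSEY_RUNS = {"1B": 0.41, "2B": 0.82, "3B": 1.06, "HR": 1.42}

def lindsey_rating(singles, doubles, triples, hr, ab):
    """Sum of Lindsey run values for all hits, averaged over at bats."""
    runs = (LINDSEY_RUNS["1B"] * singles + LINDSEY_RUNS["2B"] * doubles
            + LINDSEY_RUNS["3B"] * triples + LINDSEY_RUNS["HR"] * hr)
    return runs / ab

# An invented line: 110 singles, 30 doubles, 4 triples, 36 homers in 589 at bats
rating = lindsey_rating(110, 30, 4, 36, 589)
print(round(rating, 3))  # 0.212
```

As the text observes, outs enter only through the at-bat divisor; the hits alone carry explicit run values.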

How did Lindsey arrive at these values? It is a bit complicated for the general reader, but those with an appetite for probability theory we refer to the bibliographical citations at the back of the book. In brief, Lindsey devised a table, based on observation of 6,399 half innings (all or part of 373 games in 1959-60); he recorded how many times a batter came to the plate in any one of the twenty-four basic situations. Moreover, he deduced what the run-scoring probability became after the batter had hit a single, double, whatever, by computing the difference between the run-scoring value of the situation that *confronted* the batter—for example, man on first and nobody out—and that of the situation which prevailed after the batter’s successful contribution. That difference represents the run-scoring value of that contribution.
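In code, Lindsey’s derivation is a difference of expected runs before and after the event, plus any runs that actually scored on the play. The expectancy figures below are invented placeholders, not Lindsey’s observed table:

```python
# Expected runs to the end of the half-inning from each base-out state.
# These values are invented placeholders, not Lindsey's observed figures.
RUN_EXPECTANCY = {
    ("empty", 0): 0.46, ("1st", 0): 0.81, ("2nd", 0): 1.10,
    ("empty", 1): 0.24, ("1st", 1): 0.50,
}

def event_value(before, after, runs_scored):
    """Run value of an event: runs scored plus the change in expectancy."""
    return runs_scored + RUN_EXPECTANCY[after] - RUN_EXPECTANCY[before]

# A leadoff single: no runs score, but the state improves
single = event_value(("empty", 0), ("1st", 0), 0)
# A leadoff home run: one run scores and the state is unchanged
homer = event_value(("empty", 0), ("empty", 0), 1)
print(round(single, 2), round(homer, 2))  # 0.35 1.0
```

Averaging such differences over all observed situations yields the per-event values (.41 for a single, and so on) quoted above.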

With these new values, proper weighting became possible, in, say, the slugging percentage. A home run was demonstrably not worth as much as four singles, nor a triple as much as a single and a double, and so on. What Lindsey did not account for were such offensive elements as the base on balls or hit by pitch; this had been done the year before in a formula proposed at a conference at Stanford University by Donato A. D’Esopo and Benjamin Lefkowitz. This formula, which they called the Scoring Index, is too complicated to go into, but in any event it was only marginally an improvement on Rickey’s, which similarly had accounted for walks and hit by pitch as well as total bases. The Scoring Index over-credited these events, to the extent that in ranking the top hitters of the National League in 1959, Joe Cunningham, whose slugging percentage was .478 to Henry Aaron’s .636, rated higher than Aaron, just as he did in On Base Average.

The term Scoring Index reappeared in 1964, but was defined differently by Earnshaw Cook in *Percentage Baseball*, a book which created considerable media stir for its controversial suggestions to revise baseball strategy in line with probability theory. Among these suggestions were to start the game with a relief pitcher and pinch-hit for him his first time up; to realign the batting lineup in descending order of ability; and to restrict severely the use of the intentional base on balls and the sacrifice bunt. Indeed, Cook’s Scoring Index did not appear in a form intelligible to the layman until the appearance of his next book, *Percentage Baseball and the Computer* (1971), in which the “DX,” as he abbreviated it, was represented by:

The first component is simply On Base Average; the second is a bizarre amalgam of power and speed in which, in effect, baserunning exploits are averaged by plate appearances in the same manner as total bases are. The rationale, evidently, is that net stolen bases (steals minus times caught stealing) add extra bases in the way that doubles add to singles. This is not quite so, but in any event, the formula works pretty well in spite of its logical shortcomings. At the time of its introduction, the DX was the most accurate measure of total offensive production yet seen and the first to combine ability to get on base in all manners; to move baserunners around efficiently through extra-base hits; and to gain extra bases through daring running.
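From that description, the shape of the DX can be sketched as the product of two averages. The denominators here are guesses from the prose, and Cook’s published formula may differ; treat this as a gloss on the description, not the formula itself:

```python
def dx(h, bb, hb, ab, tb, sb, cs):
    """Sketch of Cook's DX: On Base Average times a power-and-speed average.
    Denominators are guesses from the prose, not Cook's published form."""
    pa = ab + bb + hb
    on_base = (h + bb + hb) / pa
    # Net steals credited as extra bases alongside total bases
    power_speed = (tb + sb - cs) / pa
    return on_base * power_speed
```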

The original Cook book was highly abstruse in its detail and, despite the hubbub which met its publication in 1964, it is regarded today as perhaps a setback to the cause of improving baseball’s statistics. If the job was going to be *that *much trouble, why bother?

If *Percentage Baseball*, despite its brilliance, was not an open sesame to the unlocking of baseball’s secrets, a genie came forth in 1969 with the appearance of *The Baseball Encyclopedia*, compiled for Macmillan by Information Concepts, Inc. (ICI). But that is a story for another day.


John, I have finally started reading The Hidden Game of Baseball. In fact I am reading the newest release, thank God for Kindle.

Last night, in fact, I read the above in the book and decided to pull my dust-collecting Macmillan off the shelf. My question is, how much of the statistical information has been found to still be wrong? I’m looking for a ballpark swag (i.e., 10%, etc.). I guess my question has to do with how accurate it was and still is.

Thanks

Jim

Baseball has recorded its activity more copiously and accurately than has any other human endeavor. That errors have lurked within the records should be unsurprising. One may trust the major-league record as rendered today (say on baseball-reference.com) to a 99+ percent degree.