Offense vs. Defense article

Postby maligned » Thu Jul 31, 2008 3:44 am

elderjefferson,
The point we were making was this: raw offensive value per PA is NOT the best measure, because PAs are not predetermined in real-life play. The only thing that is predetermined in real baseball is outs. To get the best value ratings, you must consider a player's offensive value per out within the context of his team. Look back through Bbrool's three-post description to understand one way to do this.
If you consider value per PA, you will always overvalue SLG and undervalue OBP, because you neglect the benefit a high OBP provides by not consuming outs within the team context (and thus creating MORE PAs for the team over a season). Bbrool and I both gave examples of two player cards with the same NERP per plate appearance. In both examples, the player with the better OBP creates many more opportunities for his team over a season, and his team ends up scoring significantly more runs because of the extra PAs created by his lower out consumption.

Postby maligned » Fri Aug 01, 2008 5:40 am

elder,
I'm not sure what your ultimate aim is, but here are a couple of tools for whatever you're working on. This article contains two very good tables (Tables 1 and 4) that tell you 1) the average number of runs expected in an inning given the number of outs and the baserunner situation, and 2) the percentage chance of scoring at least one run from a given situation. It covers all possible base/out situations over a 15-year span.

http://baseballanalysts.com/archives/2006/07/empirical_analy_1.php

Second, if you go to Deantsc's original article that started this string (page 1) he gives an excellent formula called NERP that predicts how many runs any player will contribute to his team based on his raw data. He outlines how to do this with Strat data in his article (our recent discussion on this string is related to how his ideas could be tweaked a bit, but his paper will give you a great foundation for evaluating players).

Hope this helps.

Postby childsmwc » Fri Aug 01, 2008 11:37 am

Dean,

I had never noticed your original article on the methodology you are using for pricing. I am actually using a slightly different version of Paul Johnson's formulas (I believe an earlier version). I played around with the different permutations he offers, and I found that, when applied to Strat data output, the original formula was a bit more accurate.

I think I might revisit which version I use.

We are both on the same page in how we value the players, but there are some interesting articles that take some of these linear models to the next step.

Here is a good read:

http://www.baseballthinkfactory.org/btf/scholars/furtado/articles/Why_Do_We_Need_Another_Player_Evaluation_Method.htm

Postby MARCPELLETIER » Mon Jul 27, 2009 11:58 pm

I just wanted to underline the effort from Dean and congratulate him for applying to Strat a model based on some of the best tools available to sabermetricians.

I truly appreciated the simplicity of the article, even though it discusses topics that can be surprisingly complex.

One limitation I see is that we base our evaluations of Strat cards on values generated from real-baseball analyses. I believe the next step is to base the evaluation of Strat cards on values generated from models of Strat itself.

One easy example is clutch. In real life there is virtually no clutch effect, whereas in Strat the effect of clutch is obvious. A perfect evaluation of Strat cards MUST include the value of clutch.

Another less obvious example is the value of the events themselves. Take walks: their value in the NERP formula is 0.33, a value estimated from real games. In real baseball, the frequency of walks is not random. Non-intentional walks are roughly 40% more frequent with men on base and first base open than in other situations. In part, this strategy from pitchers makes sense, because those are precisely the situations in which a walk is worth the least. For example, the value of a walk with two outs and a man on third is only 0.18, mostly because such a walk advances no runner and puts a man on base who is unlikely to score with two outs already recorded. In comparison, the value of a walk leading off an inning is 0.41, and the value of a walk with the bases loaded is obviously exactly 1 run. The 0.33 in the NERP formula is in fact a weighted average: the sum, over all situations, of (the value of a walk in that situation) x (the frequency of that situation).
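
To make that weighted average concrete, here is a minimal Python sketch. The three situation values are the ones quoted above; the situation names and frequencies are hypothetical placeholders, not measured data, and in a real calculation every base/out situation would be listed and the frequencies would sum to 1.

[code]
# Weighted average walk value = sum over situations of (value x frequency).
walk_value = {
    "man on 3rd, two outs": 0.18,   # value quoted in the post
    "leading off an inning": 0.41,  # value quoted in the post
    "bases loaded": 1.00,           # value quoted in the post
    # ... every other base/out situation would go here
}
walk_frequency = {                  # hypothetical share of all walks
    "man on 3rd, two outs": 0.05,
    "leading off an inning": 0.25,
    "bases loaded": 0.02,
    # ... remaining situations (all frequencies together sum to 1)
}

average_walk_value = sum(walk_value[s] * walk_frequency[s] for s in walk_value)
print(round(average_walk_value, 3))
[/code]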


In Strat (the "pitch around" rule set aside), the frequency of walks is roughly random: they happen equally often in all situations. You'll find fewer walks worth 0.18 in Strat, but more walks worth 0.41 or 1.00. The net result is that walks are worth more than 0.33 in Strat. I haven't calculated it, but it would be easy to do, and I wouldn't be surprised if it's closer to 0.37 than to 0.33.

But then again, maybe not, because of the "pitch around" rule, which we cannot actually set aside. I'm not sure of all the adjustments made by Strat's creators, but perhaps they slightly reduced the frequency of walks in other situations (through the MAX rules) in order to increase it through the PITCH AROUND rules, which could bring the value of walks back to roughly 0.33.

The only way we could be certain is to generate data, for example with the CD-ROM game, and calculate empirically, ourselves, the value of walks, singles, and doubles. Alternatively, we can build a model that reflects what goes on in Strat and generate NERP formulas based on such a model.
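
As a sketch of what that empirical calculation might look like: take play-by-play records generated from simulated games, and estimate each event's run value as the average change in run expectancy it produces (plus any runs scored on the play). Everything below (the run-expectancy numbers and the sample plays) is a hypothetical placeholder; the real inputs would be tabulated from the generated data.

[code]
from collections import defaultdict

# (outs, bases) -> expected runs for the rest of the inning (placeholder values)
run_expectancy = {
    (0, "---"): 0.52, (0, "1--"): 0.90, (0, "1-3"): 1.80,
}

# Play-by-play records from simulated games (placeholders):
# (event, outs_before, bases_before, outs_after, bases_after, runs_on_play)
plays = [
    ("BB", 0, "---", 0, "1--", 0),
    ("1B", 0, "1--", 0, "1-3", 0),
]

totals, counts = defaultdict(float), defaultdict(int)
for event, outs1, bases1, outs2, bases2, runs in plays:
    re_before = run_expectancy[(outs1, bases1)]
    re_after = 0.0 if outs2 >= 3 else run_expectancy[(outs2, bases2)]  # inning over -> 0
    totals[event] += (re_after - re_before) + runs
    counts[event] += 1

for event in sorted(totals):
    print(event, round(totals[event] / counts[event], 3))
[/code]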

Of course, one may simply say that we only want a rough estimate of the Strat cards, and that the values estimated by "real" analysts are close enough for that purpose. I believe Dean makes that point in the article.

But think of all the events whose values we are perhaps slightly misestimating when we rely on real-life numbers: the value of a single in Strat, given the new running rules, which sharply increase the chances of retiring a runner on the basepaths compared to real life; the value of doubles, particularly in ATG leagues loaded with centerfielders with negative arms; the relative value of home runs in small ballparks, which is correlated with the presence of the best pitchers, both of which make the run environment entirely different from real life; the value of wild pitches, which happen in Strat as often with a man on third as with a man on first (in real life, of course, it's much easier to advance from first to second than to score from third); the value of "super-relievers" throwing 200 strong innings; the value of clutch; the value of gbA vs gbC.

That said, Dean has already started such analysis with his Strat-based estimates of OF arms and catcher arms. A nice start.

*********
On the controversy raised by maligned: whether or not Dean sufficiently penalizes players who make outs (and thereby prevent their teams from getting additional at-bats).

My understanding is that the answer depends on which formula Dean is using. The first formula, which includes a -0.25 value for outs, does include the detrimental value of NOT getting on-base. The second formula, which includes only a -0.085 value for outs, does not.

[b]EDIT: THE PREVIOUS PARAGRAPH AND THE ARGUMENT THAT FOLLOWS ARE ILL-FOUNDED. THE TWO FORMULAS EXPRESS TWO DIFFERENT MEASURES THAT HAVE NOTHING TO DO WITH THE PROBLEM RAISED BY MALIGNED AND BBROOL.[/b]


FIRST FORMULA
BR (BATTING RUNS) = .47 * SINGLES + .78 * DOUBLES + 1.09 * TRIPLES + 1.4 * HOME RUNS + .33 * (WALKS + HBP) - .25 * OUTS


SECOND FORMULA (NERP)
NERP (New Estimated Runs Produced) = .318 * TB + .333 * (BB + HBP - (gbA * .1875)) + .25 * H - .085 * AB

which is equivalent to:
NERP = .48 * SINGLES + .80 * DOUBLES + 1.12 * TRIPLES + 1.44 * HOME RUNS + .333 * (WALKS + HBP) - .085 * OUTS - .333 * (gbA * .1875).
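
For reference, here is a minimal Python sketch of the two formulas quoted above. The sample batting line is hypothetical, and "outs" is taken to mean batting outs (AB minus hits), which is an assumption about how the formulas are applied. Note that, per the EDIT above, the two formulas measure different things, so their outputs are not directly comparable.

[code]
def batting_runs(singles, doubles, triples, homers, walks, hbp, outs):
    # First formula (Batting Runs), with a -0.25 value per out
    return (0.47 * singles + 0.78 * doubles + 1.09 * triples
            + 1.40 * homers + 0.33 * (walks + hbp) - 0.25 * outs)

def nerp(singles, doubles, triples, homers, walks, hbp, outs, gbA=0):
    # Second formula (NERP), in the TB / H / AB form quoted above
    hits = singles + doubles + triples + homers
    total_bases = singles + 2 * doubles + 3 * triples + 4 * homers
    at_bats = hits + outs  # assumes "outs" = batting outs (AB - H)
    return (0.318 * total_bases + 0.333 * (walks + hbp - gbA * 0.1875)
            + 0.25 * hits - 0.085 * at_bats)

# Hypothetical 600-AB batting line: 100 1B, 30 2B, 3 3B, 25 HR, 60 BB, 5 HBP
print(round(batting_runs(100, 30, 3, 25, 60, 5, 442), 1))
print(round(nerp(100, 30, 3, 25, 60, 5, 442), 1))
[/code]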


Roughly, the value of any event can be broken into three parts: the value of getting on base, the value of advancing runners, and the "inning-killer" value. For an out, the three parts are respectively: 0 (at least when double plays are considered separately); -0.10 (a negative value is expected here, because some "outs" result in the lead runner being retired, for example when the runner on third is chased down at home plate while the batter reaches first safely); and -0.16 (the cost of reducing the chances of the other hitters getting to bat).

see http://www.tangotiger.net/rc2.html

The second formula ignores that last piece of negative value; the first formula incorporates both negative pieces, thus yielding a value of roughly -0.26 per out.
Last edited by MARCPELLETIER on Wed Aug 05, 2009 12:10 am, edited 1 time in total.

Postby Mean Dean » Sun Aug 02, 2009 3:47 pm

BTW, I have revised my article since this original post (I don't even think the original link works anymore); see my blog :)

I honestly was never sure what the objection to the value of the out was, other than "it seemed wrong." The value of the out is based on the formula, and the formula is based on accurately predicting how many runs real baseball teams score. Unless there is some reason why an out is much more damaging to an SOM offense than it is to a real offense, I don't see what the problem is.

[b]lucky[/b], there is definitely a lot of potential in the type of research you describe. It is time-consuming, but it can be done. Some of it is very straightforward, assuming again that you put in the time. E.g., to measure the significance of the speed rating, run 50-100 Mets seasons with Reyes at a 17 speed, and then run 50-100 more with him at a 9 speed. Compare how many runs the team scores in each scenario, and that should give you a very good idea of the maximum effect that one player's speed can have on the offense.
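
A minimal sketch of how the comparison at the end of that experiment might be tallied; the season run totals below are hypothetical placeholders standing in for the 50-100 replayed seasons of each version.

[code]
from statistics import mean, stdev

runs_speed_17 = [812, 798, 825, 804, 819]   # hypothetical season run totals
runs_speed_9 = [791, 783, 802, 788, 795]    # hypothetical season run totals

diff = mean(runs_speed_17) - mean(runs_speed_9)
# standard error of the difference between two independent sample means
se = (stdev(runs_speed_17) ** 2 / len(runs_speed_17)
      + stdev(runs_speed_9) ** 2 / len(runs_speed_9)) ** 0.5

print(f"Average gain from the higher speed rating: {diff:.1f} runs (SE ~{se:.1f})")
[/code]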

Doing that with something like clutch hitting would be more complex, because it would require you to "reconstruct" the entire card. But since the PC game does let you edit the cards, it is still potentially possible.

Postby maligned » Tue Aug 04, 2009 6:08 pm

Lucky, nice to have you back. Again, I too believe Dean has done some amazing work, and I too find the NERP formula very valuable. I will continue, however, to challenge the notion that future performance can be calculated with a linear equation applied to each card's 216 die rolls. As many have stated before, [b]offensive output is not linear[/b].
You can DEFINITELY calculate how many runs a team scored in the past with their raw data by applying the NERP formula. This has never been disputed. You cannot, however, calculate the value of a player's future performance compared to other players without including a real-life team context parameter of some kind because of the opportunity-creating nature of non-out results.

A simple example using this formula:

NERP = .48 * SINGLES + .80 * DOUBLES + 1.12 * TRIPLES + 1.44 * HOME RUNS + .333 * (WALKS + HBP) - .085 * OUTS - .333 * (gbA * .1875)

Player A: one single, one double, 2 outs; NERP value 1.11
Player B: one homerun, 3 outs; NERP value 1.185

Player B has a higher NERP value for these 4 plate appearances. But we would all obviously prefer Player A's results in real future situations. Over the same number of plate appearances in a season, Player B's stats suggest he produces about 12-13 more runs, but the outs Player A avoids will give his team 225 to 250 more plate appearances--meaning he'll help an average team score about 25 extra runs through those extended opportunities.
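
A quick sketch that checks the two four-PA lines above and shows the per-out contrast driving the argument (coefficients taken from the NERP formula quoted above):

[code]
def nerp_line(singles, doubles, triples, homers, walks_hbp, outs, gbA=0):
    return (0.48 * singles + 0.80 * doubles + 1.12 * triples + 1.44 * homers
            + 0.333 * walks_hbp - 0.085 * outs - 0.333 * gbA * 0.1875)

player_a = nerp_line(1, 1, 0, 0, 0, 2)   # single, double, 2 outs  -> 1.11
player_b = nerp_line(0, 0, 0, 1, 0, 3)   # home run, 3 outs        -> 1.185

print(round(player_a, 3), round(player_b, 3))           # B "wins" per 4 PA...
print(round(player_a / 2, 3), round(player_b / 3, 3))   # ...but A wins per out
[/code]
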
You cannot, obviously, create a player rating using NERP/out because that assumes a 9-player team of the same person.
But you CAN compile ratings that assume average results from the 8 other players combined with the Strat data of the player in question, then calculate NERP per out for the nine and find a true, predictive value for each player based on the differences in the nine-player output.
NERP alone, even the latest and greatest version, is designed to tell us past results or results per plate appearance--not results per out. Baseball games are based on outs--not plate appearances.

Postby Mean Dean » Wed Aug 05, 2009 1:23 am

I don't disagree with the principle you're laying out; I agree that the ideal measure of a player's runs created would be to figure out his actual team's total runs scored with him and without him.

I do disagree with the idea that it's necessary to go to such lengths in order to get an acceptable level of precision. The example you give is very extreme: one player with a .500 OBP/.750 SLG, the other with a .250 OBP/1.000 SLG. If there were actual teams with .250 or .500 OBPs, then yeah, I could buy that the value of an out would be far different on one team than on the other. But given that pretty much any team that will realistically be put together will have an OBP in the .325-.375 range, I really think it's perfectly acceptable for us to use the same value of an out for all of them. (And that's why it does work with the real-life data. It's back-engineered, no doubt... but the back-engineering should be as applicable to SOM as it is to real life, as far as I can tell.)

Additionally, in order to do what you're describing, you would need to figure out the "actual" runs scored of a fictional team, which I'm not sure how one would do. You seem to suggest basing it on the players' Diamond Dope "actuals", but by the same theory you're describing, wouldn't those values themselves then change by virtue of putting the players on a specific team?

So, I don't disagree with your theory as a theory, but I think applying it is likely to be impractical and kind of killing a fly with a bazooka.

Postby maligned » Wed Aug 05, 2009 4:06 am

Dean,
I'm not saying you have to know a team of fictional players to create a rating; I'm saying it's not that difficult to use average output to represent the performance of ALL of the other 8 players in an offense.
For example, you can simply find the average offensive data from 10 or 12 80M leagues. Create an "average player" from this and plug it into your spreadsheet. Find the NERP/216 for each player you want to rate PLUS the NERP/216 times eight for your average player data, then DIVIDE all this by the number of outs recorded by the nine, and finally MULTIPLY by the number of outs in the season.

(player NERP/216 + 8 x average-player NERP/216) / (outs recorded by the nine) x (outs in the season) = team runs scored for the season

You will then have the number of runs scored by your player on a team of average players in 80M leagues. Compare this with other players run through the same formula to see their relative value. You obviously have to include injury-variable data and all the rest, but it's honestly not much more work than NERP alone. It's also quickly adjustable if you want to consider a stronger or weaker offensive environment.
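
A minimal Python sketch of that team-context rating as described above. Everything is per 216 chances except the season out total; the average-player numbers and the sample players are hypothetical placeholders, not measured league data.

[code]
SEASON_OUTS = 162 * 27   # rough number of batting outs in a team season

def team_runs(player_nerp_216, player_outs_216, avg_nerp_216, avg_outs_216,
              season_outs=SEASON_OUTS):
    # Runs scored by a lineup of eight average players plus the rated player
    team_nerp = player_nerp_216 + 8 * avg_nerp_216
    team_outs = player_outs_216 + 8 * avg_outs_216
    return team_nerp / team_outs * season_outs

# Hypothetical inputs: baseline (nine average players) vs. a high-OBP player
baseline = team_runs(26.0, 144, 26.0, 144)
high_obp = team_runs(28.0, 132, 26.0, 144)
print(round(baseline), round(high_obp), round(high_obp - baseline))
[/code]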

You are correct that in many cases it doesn't make a big difference, but this simple philosophical switch does make a big difference in the ratings of .400/.400 guys versus .325/.475 guys. You will see in the final results that OBP is respected much more, as it should be, and SLG is devalued compared to straight NERP values per 216 plate appearances.
