**Brian 1:** Rodgers' new deal is a fantastic bargain. He's one of the truly elite QBs in the league today, and guys like that don't grow on trees. But more scientifically, just look at this super scatterplot I made of all veteran/free-agent QBs. The chart plots Expected Points Added (EPA) per game versus adjusted salary cap hit. Both measures are averaged over the veteran periods of each player's contracts. I added an Ordinary Least Squares (OLS) best-fit regression line to illustrate my point (r=0.46, p=0.002).

Rodgers' production, measured by his career average EPA per game, is far higher than what the trend line says would be worth his $21M/yr cost. The vertical distance between his new contract point ($21M/yr at about 11 EPA/G) and the trend line illustrates the surplus performance the Packers will likely get from Rodgers.

(This plot includes all free-agent or veteran extensions since 2006. Cap figures are averaged over each player's career and, to account for cap inflation, are adjusted for the overall league cap ceiling by season. Only seasons with 7 or more starts were included.)

According to this analysis, Rodgers would be worth something like $25M or more per season. If we extend his 11 EPA/G number horizontally to the right, it would intercept the trend line at $25M. He's literally *off the chart*.

**Brian 2:** Brian, you ignorant slut. Aaron Rodgers can't possibly be worth that much money. No NFL player is worth an entire fifth of a team's salary cap. That's insanity--and not like the Insanity Workout kind of insanity, either. More like the Vicky Mendoza kind.

I've made my own scatterplot and regression. Using the *exact same methodology* and *exact same data*, I've plotted average adjusted cap hit versus EPA/G. The only difference from your chart above is that I swapped the vertical and horizontal axes. Even the correlation and significance are exactly the same.

As you can see, you idiot, Rodgers' new contract is about twice as expensive as it should be. The value of an 11 EPA/G QB should be about $10M.

**Brian 1**: I think you're the insane one. There isn't a team in the league that would pass up the opportunity to lock up an all-pro QB for $21M/yr. Look, I made a graph and did a regression that was statistically significant. It's science. Rodgers is a bargain. What do you have against science? What are you, a Republican?

**Brian 2**: A graph and a regression don't make something scientific! Besides, I made my own regression, and the facts back up my perspective.

**Brian 1**: So you're saying it's all about perspective? Deep.

**Brian 2**: Right. If you're perspective is that of a total idiot, then you'd be correct.

**Brian 1**: It's your, not you're.

**Brian 2**: Whatever.

**Brian 1**: I'll give you the last word.

**Brian 2**: Rodgers is not a bargain.

**Brian 1**: You're an idiot.

So, which Brian is analytically correct? Whatever you think about Rodgers (and there's not much debate he's one of the very best), which analytic approach is right?

You are asking which direction the causation flows. Clearly salary is driven by performance. Otherwise, giving a quarterback a raise would make him a better quarterback. Ergo, a vote for Brian 2.

Shouldn't the answer be neither? Without understanding how much a marginal EPA *should* be worth, it's kind of a meaningless exercise.

In today's market, who are you losing to free agency because you can't afford them any more and their EPA/G is substantial? What free agents are you losing out on signing? I have a feeling that if you didn't do this deal, all you'd end up with is a worse QB and a whole bunch of cap room with nobody to spend it on.

ASG: Agreed. Brian 1 is correct when he says that no team would pass on a QB like Aaron Rodgers for $21M/year.

You can replace Aaron Rodgers with five or six decent starting players, but only so many players can take the field at once. QB is a position where depth is less valuable than at any other, since the only time you use a backup QB is in games where the outcome is already decided or when the starting QB is injured. You don't benefit from having two healthy QBs.

Basically, go back to your posts about the Gladiator Effect. That's the problem with Brian 2's argument. You have to look not only at what you save by not paying Aaron Rodgers, but at what you can spend that money on instead, and compare those two things.

This is my favorite thing ever. Well, not ever. This year, anyway.

I'm still trying to figure out what's going on. One question that would help: what do the curved lines represent?

The curved lines are standard errors for the OLS regression line.

I assume the curved lines are 95% confidence intervals for the regression lines.

The OLS assumes no error in the independent (x) variable, and thus minimizes only the squared error (distance to the regression line) in the dependent (y) variable. This is why the regression line changes when Brian flips the axes.

Brian is making something of a joke when he comments on the R value not changing when the axes are flipped -- the correlation doesn't depend on the regression line, just on the variability in the data.

To decide which method is correct, we need to see which method is violating its assumptions. Since the OLS method assumes no error in the x-variable, it seems relatively clear that the second plot (where EPA/G is on the x axis) is incorrect. We know the cap hits exactly (as far as I'm aware), so there's no reason to minimize the error with respect to that variable -- the first regression line seems ok. If for some reason we were concerned w/ error in both variables we'd have to do a reduced major axis (RMA) fit, which minimizes the sum of squares error in both variables, but that doesn't seem to apply here.
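Peter's point about the axis swap is easy to verify numerically. Here is a minimal sketch with made-up synthetic data (illustrative numbers only, not the actual QB dataset): the correlation is symmetric in the two variables, but the OLS fit is not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: cap hit ($M) and EPA/G, moderately correlated.
# These numbers are invented for illustration, not the real QB sample.
cap = rng.normal(10, 4, 40)
epa = 0.4 * cap + rng.normal(0, 3, 40)

# Correlation is symmetric: same value whichever variable you call "x".
r_xy = np.corrcoef(cap, epa)[0, 1]
r_yx = np.corrcoef(epa, cap)[0, 1]
assert np.isclose(r_xy, r_yx)

# The OLS line is NOT symmetric: regressing y on x and x on y give
# different lines unless |r| = 1.
slope_epa_on_cap = np.polyfit(cap, epa, 1)[0]  # Brian 1's direction
slope_cap_on_epa = np.polyfit(epa, cap, 1)[0]  # Brian 2's direction
assert not np.isclose(slope_epa_on_cap, 1.0 / slope_cap_on_epa)
```

The two fitted slopes only coincide (as reciprocals) when the data lie exactly on a line; with r around 0.5 they differ by a large factor.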

As a final note -- I wouldn't necessarily draw the conclusion that Rodgers is "off the chart" at this price, if only because the new point still lies within Brian's confidence interval. This is not considering impacts like the gladiator effect.

-Peter

Brian 2's chart is correct (as said above, $$ is a result of talent), but Brian 1's argument is correct. Rodgers is worth $30M.

"...The only difference from your chart above is that I swapped the vertical and horizontal axes..." Changing the dependent variable in the regression is really quite a bit more than that.

"...which analytic approach is right?"

Can I say neither?

One thing that they should be looking at is performance over replacement and cost over replacement. An eyeball average is $7.5M salary and 3 EPA/G, so the Packers are spending $13.5M of cap for 8 EPA/G.

The salient question is whether the Packers could get more expected EPA for that $13.5M in salary cap elsewhere. (My gut says this is a good move for the Packers. Would you give up Rodgers in exchange for David Garrard and Calvin Johnson?)

Assuming, for the sake of discussion, that it makes sense to rehire Rodgers, another question to ask is whether they could have retained him for less, and it's pretty clear that Rodgers was in a strong negotiating position as a free agent coming off the Super Bowl win.

Don't get hung up on the 'above replacement' concept. That's addressed by the intercept (constant term) in each plot.

I'm not sure I agree with Peter's analysis. Both EPA/G and $/yr are known (subject to model uncertainty converting exactly known play results into EPA), but the "true" EPA/G and $/yr are both unknown. If we run imaginary seasons in our NFL simulator, players are not going to negotiate the same contracts each time. I don't really see a reason to treat the two variables differently in the fit.

I do think we should try to think more carefully about what the uncertainties are for each of these data. Generally, any time you see data without error bars, you should hear alarm bells in the back of your mind.

X, I'm not convinced by my own analysis either... if nothing else, I wanted to get my thoughts down so people were on the same page as to why the regression slope changed.

I agree that the key to this is thinking about the uncertainties in the data, and what exactly we want to know at the end of the analysis.

I guess what we could say is: "What is the empirical relationship between the 'value/production' of a QB and his 'cost', and how does Rodgers' new contract compare?" Then we can make the assumptions that (a) EPA/G gives an estimate of a QB's 'value' with Gaussian error and (b) cap hit gives an estimate of a QB's 'cost', either without error or with Gaussian error (we have to assume Gaussian errors to perform the standard fits since the sample size is small). If we assume cap hit gives 'cost' with no error, that's the analysis from Brian 1. If we are interested in the relationship of 'value/production' with some more nebulous 'cost' (that I can't really articulate at the moment) that cap hit is merely an estimate of, then we'd need to perform the fit minimizing the sum of squares error in both variables.

If you are trying to wrap your head around these different regression slopes, think about it this way geometrically, looking at the first plot (where EPA/G on the y-axis):

You get the Brian 1 result by minimizing the squared distance in the y direction from the regression line to each point. (Error only in EPA/G)

You get the Brian 2 result by minimizing the squared distance in the x direction from the regression line to each point. (Error only in cap hit)

You get a third result (which will fall between the other two, and is called the reduced major axis) by minimizing the squared distance perpendicular to the regression line from the regression line to each point. (Error in both variables.)
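The three fits described above can be written down directly from the sample moments: the y-on-x slope is r·sy/sx, the x-on-y slope (re-expressed in the same axes) is sy/(r·sx), and the reduced major axis slope is sign(r)·sy/sx, the geometric mean of the other two. A sketch with synthetic data (illustrative numbers only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data standing in for the QB sample: x = cap hit, y = EPA/G.
x = rng.normal(10, 4, 60)
y = 0.4 * x + rng.normal(0, 3, 60)

r = np.corrcoef(x, y)[0, 1]
sx, sy = x.std(ddof=1), y.std(ddof=1)

slope_b1 = r * sy / sx            # minimize vertical (y) error: Brian 1
slope_b2 = sy / (r * sx)          # minimize horizontal (x) error: Brian 2, same axes
slope_rma = np.sign(r) * sy / sx  # reduced major axis

# For 0 < r < 1 the RMA slope always lies between the two OLS slopes,
# and it is exactly their geometric mean.
assert slope_b1 < slope_rma < slope_b2
assert np.isclose(slope_rma, np.sqrt(slope_b1 * slope_b2))
```

Note that the RMA slope does not involve r at all, which is why it comes out the same regardless of which variable you put on which axis.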

Just my thoughts between meetings....

-Peter

OK, here's what I think.

Brian 1's chart asks the following question: if you know a team decided to pay a player $X, what does that tell us about the player's eventual performance? The chart shows that, for instance, when a player was paid $10 million, he returned about 5 EPA/g for his team.

Brian 2's chart asks the following question: if you know a player returned X EPA/g, what does that tell you about what the team paid the player? The chart shows that, for instance, when a player returned 5 EPA/g, he was paid about $7 million.

Those two numbers are different -- in one case, $10 million, and in the other case, $7 million. This is normal, because they're asking two different questions.

---

It's easier to see that they SHOULD be different with a more obvious example. Say, lottery tickets.

Suppose there's a $1 Powerball-type lottery with a 50% payout rate. There are a bunch of different prizes, from $5 to $5 million. People buy as many tickets as they like.

Brian 1 is answering: if you know a person bought 200,000 tickets, how much do you expect them to win? The answer: $100,000.

Brian 2 is answering: if you know a person won $100,000, how many tickets do you expect they bought? The answer: I dunno, maybe, 10 or 20? Because, hardly anyone buys 200,000 tickets, but *someone* has to win the big prizes.

So, Brian 1 finds that $100K is associated with 200K tickets. Brian 2 finds that $100K is associated with 20 tickets.

They're both right, for their respective questions.

The question we really want to ask is: given that Joe Blow won $100K with his portfolio of winning tickets ... how much should we pay for that portfolio of tickets for the next lottery? The answer is NOT $200,000 (Brian 1). The answer is $10 (Brian 2).
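Phil's lottery intuition can be sketched in a quick simulation. The ticket counts and prize odds below are toy numbers I made up (not Phil's exact setup); the point is just that the two regression directions give wildly different answers on the same data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy lottery: $1 tickets, each with a 1-in-2000 shot at a $1000 prize,
# i.e. a 50% payout rate. Numbers are illustrative assumptions.
n_people = 200_000
tickets = rng.integers(1, 100, n_people)            # how many tickets each person buys
winnings = rng.binomial(tickets, 1 / 2000) * 1000   # total prize money per person

cov_tw = np.cov(tickets, winnings)[0, 1]

# Regress winnings on tickets: each ticket is worth ~$0.50 in expectation.
b_win_on_tix = cov_tw / np.var(tickets, ddof=1)

# Regress tickets on winnings: a dollar won tells you almost nothing about
# how many tickets were bought, because big totals are mostly luck.
b_tix_on_win = cov_tw / np.var(winnings, ddof=1)

assert 0.3 < b_win_on_tix < 0.7   # near the $0.50/ticket expectation
assert b_tix_on_win < 0.1         # far below the naive 1/0.5 = 2 tickets per $
```

Same data, same covariance, but the slope depends entirely on which variance you divide by, which is the whole Brian 1 vs. Brian 2 disagreement in miniature.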

---

Now, what if teams KNOW that Aaron Rodgers is going to stay at 11 EPA/g? In that case, maybe they SHOULD pay him $21 million. But they don't know that. How do we know they don't know that? Because, look how far they were off on all the other QBs. They thought Matt Hasselbeck was as good as Tony Romo! They thought Tom Brady was about the same as Jay Cutler! Clearly, QB performance is unpredictable (probably mostly from luck). That means you have to regress Rodgers' past performance back to the mean, just like you have to regress lottery tickets back to the mean.

-----

Another way of putting it: Brian 1 is asking, how much would a team have to spend on a QB and *expect* 11 EPA/g? The answer to that one is, indeed, $21 million. But Brian 2 is asking, how much should a team spend for a player who *previously produced* 11 EPA/g? The answer to that one is, around $10 million, because he's probably not truly an 11 EPA/g player.

Brian 2 is the question we actually want answered.

BTW, I found a season's worth of baseball team salary data (I don't know which year). Same kind of situation happens.

For every $6.1 million a team spent, it won an extra game. But for every extra win a team had, it spent only $1.75 million extra.

Same idea as the lottery example.

OK, thought of an easier argument.

A regression shows how a change in X implies a certain change in Y. NOT the other way around. For instance, buying a Chevrolet is associated with having one extra car in your driveway. But having one extra car in your driveway is NOT associated with one extra Chevrolet. (It's associated with, maybe, 0.1 extra Chevrolets, because there are other kinds of cars too.)

Brian 1 says, "A team choosing to pay $21MM is associated with 11 extra points per game." But that doesn't work the other way -- it does NOT mean a player associated with 11 extra points per game is associated with $21MM. So, throwing that extra "Rodgers" point on the graph is invalid. Only when a team chooses to pay $21MM for Rodgers can you do that. And that hasn't happened.

Brian 2 says, "A player performing at the rate of 11 extra points per game is associated with the team having paid $11MM for him." That one is OK, because, yes, Rodgers does qualify as having performed at 11 extra points per game. (Technically, you can only say that's associated with *having been paid* $11MM, but you can argue further from there what that should mean for his future.)

So, Brian 1 loses, under the "If X implies Y, it doesn't follow that Y implies X" rule.

Both Brian 1 and Brian 2 are ignorant sluts! Actually, I'm not a skilled statistician, but I question the use of EPA as the measure of production. Teams care about wins more than they care about points. Points are a means to the end, but it seems to me that EPA doesn't fully account for a player's "clutch" potential, which is the ability to make a positive play in a high-leverage situation.

My subjective view is that Rodgers is still incredibly valuable in this regard and therefore WPA/G would be the better measure of production.

Brian, you're eventually gonna tell us the answer, right?

they are both wrong, because they use EPA/$ as the criterion. :)

Clearly, the first one is minimizing the deviations in epa/g, whereas the second is minimizing the deviations in salary. These are not the same.

Typically in data analysis your x axis is a "known" quantity like a timestamp, and you fit your measurements on the y axis to minimize the error of the ordinate.

What caliber targets and protection does a $25+ million QB get under the current cap?

Phil, Peter, X, and all the commenters---thanks. Excellent insight. The truth is I saw this apparent paradox and it confused the heck out of me. I had some similar insights as in your comments above, and quant-extraordinaire Eugene Shen helped clarify things for me. But in all honesty, I don't know the 'right' answer, and was hoping smart folks like you guys would do all the hard thinking for me. It worked!

Here are my thoughts:

Another consideration is the normality of the data. Ideally the data follow a Gaussian distribution. EPA/G is very normal, but salary is not. It's very power-law-ish: just a few rich guys and lots of poor guys.

OLS works for Gaussian distributions because it minimizes the square of the errors. The square function is not chosen arbitrarily, but is derived directly from the Gaussian likelihood. So when the y (dependent) variable is non-normal, OLS fits lose their special meaning and are not sacrosanct.

There are error-minimization functions other than OLS that could be applied. For example, least absolute error produces a symmetrical fit, so that you get the same results regardless of how the axes are configured. Peter mentioned RMA (regressing to both axes) above.

The cause/effect consideration is hard to untangle. I think it really is a matter of perspective... For example, from the player's perspective, if he reliably performs around 11 EPA/G (independent x), how much money can he expect in return on the FA market (dependent y). But from the team's perspective, if they buy $21M worth of QB on the FA market (independent x), how much performance can they expect (dependent y)?

You might say (as I think someone above did), arbitrarily paying a person a lot of money does not "cause" him to play well at QB, as the Jets proved with Mark Sanchez (zing!). Case in point--if you paid me $20M to be an NFL QB, I'd average -100 EPA/G.

...BUT I've left an important systematic linkage out of the discussion: The Market. Paying someone $20M to play QB doesn't cause someone to be skilled, but purchasing a $20M asset in a competitively priced market provides a systematic linkage from pay to performance. Like buying a race car...all other things being equal, paying $100k for a car rather than $50k for a car in a competitive market means I should expect a faster car. Money does "cause" performance, but only indirectly via the market process.

So, my hunch right now is that Brian 1's analysis is the useful/meaningful one. Here's why: We know cost with certainty, with no error, but EPA/G is variable with a Gaussian distribution. If we accept my argument on causation above, then performance should be the y (dependent) axis and pay should be the x (independent).

Therefore, I'm thinking Rodgers is a bargain at $21M--assuming he continues as an 11 EPA/G guy. But perhaps GB is smartly regressing that a bit, saying he's a "true" 8 EPA/G guy going forward, which would make his value lie right on the regression line.

Just my current opinion. Not 100%...

I'm thinking that, unless you know something I don't about front offices using statistical analysis in their decision-making processes (you certainly do) that invalidates this, QBs are primarily paid based on perceived market value and nothing else. I doubt it'd matter if he was getting way overpaid based on relative value per EPA/G. If perceived market value doesn't match value per EPA/G, a QB would be correct in passing up a contract extension that was in line with value per EPA/G, and waiting for a better offer to come his way.

How do your numbers take into account the value of denying opponents a good quality QB? In other words, the EPA/G a QB contributes to his team denies that EPA/G to a potential opponent that might sign him. Do either of the above charts account for that somehow? I think it might be reasonable to add a little to the value of keeping someone like Aaron Rodgers just off of that alone.

Thanks for your excellent analysis on this.

I normally hate these large contracts, but Rodgers is probably the one player where I wouldn't complain. It's certainly more sensible than Calvin's megadeal, simply because of positional value. Either way, I think both Brians would agree that Rodgers' contract makes this Flacco contract look even worse.

I really wonder if we have finally hit the peak for how much teams are willing to pay players, or if Rodgers did indeed give the team a break.

I think the "correct" value is somewhere between the two ideas. It's worth remembering not all the QBs in those graphs are free agents today. Many are locked in at salaries below what they'd be paid if they could void their contracts and renegotiate them.

Brian,

One can fit data where there are errors in both x and y. Or one can simply look at the correlation and see that EPA/G and salary do not have much to do with each other (it was below 0.5).

You are wrong in thinking that salary has no error; it most certainly does (see your Mark Sanchez example, for example).

As to the fitting, BIP is actually close to the answer, an approximation to that is indeed simply averaging the two fits.

but like i said before, the biggest problem is using EPA/g.

For the anon that said WPA/G is a better measure of QB talent, I bet that EPA/G is a more reliable predictor of future WPA/G than past WPA/G, much like a team's offense outside of the red zone is a more reliable predictor of future red zone performance than past red zone performance.

Math version: The slope for the first graph is the sample version of cov(EPA/G,caphit)/var(caphit). The slope in the second graph is cov(EPA/G,caphit)/var(EPA/G).

If you put the two best-fit lines on the same graph, say graph 1 then you have these two slopes (the second one is the reciprocal since you flipped the axes):

Brian 1's slope: cov(EPA/G,caphit)/var(caphit)

Brian 2's slope: var(EPA/G)/cov(EPA/G,caphit)

If you do some algebra you can see that

Brian 1's slope / Brian 2's slope = cov(EPA/G,caphit)^2 / (var(caphit)·var(EPA/G)) = R^2 < 1

So Brian 2 is always going to have a steeper slope. So what we see is that Brian 1 says you should expect 0.33 EPA/G per $1 mil, while Brian 2 thinks you should get a return several times higher (since with R = 0.46, 1/R^2 ≈ 4.7).
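The ratio-of-slopes algebra can be checked numerically. A minimal sketch with synthetic data (invented for illustration, not the real QB sample):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=500)             # stand-in for cap hit
y = 0.5 * x + rng.normal(size=500)   # stand-in for EPA/G

cov_xy = np.cov(x, y)[0, 1]
b1 = cov_xy / np.var(x, ddof=1)      # Brian 1: y-on-x slope
b2 = np.var(y, ddof=1) / cov_xy      # Brian 2's line drawn in the same axes
r2 = np.corrcoef(x, y)[0, 1] ** 2

# The ratio of the two slopes is exactly R^2, so Brian 2's line
# (plotted in Brian 1's axes) is always the steeper one.
assert np.isclose(b1 / b2, r2)
assert b2 > b1
```

The identity b1/b2 = cov²/(var(x)·var(y)) = R² holds exactly for any sample, so the gap between the two lines is entirely a function of how weak the correlation is.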

Phil Birnbaum explained the intuition for this really well. But I think he's under the impression the graphs show the average EPA/G on the previous contract versus the new contract's cap hit, while I'm under the impression it's EPA/G on a given contract versus the average cap hit under that contract. That flips the reasoning around from "graph 2 makes more sense" to "graph 1 makes more sense."

You can think of graph 1 as predicting EPA/G based on expectations about EPA/G (reflected in willingness to pay). Graph 2 predicts your past expectations about EPA/G based on realized EPA/G, which is about as uninteresting/hard to interpret as it sounds. Graph 1 corresponds to predicting how much you'll win in the lottery based on the number of tickets; graph 2 predicts how many tickets you bought based on how much you won.

Anyway, I wouldn't use either of these as a way of assessing if Rodgers is "worth it." The assumption embedded in graph 1 is that the QB market is efficient. The implicit model is that teams offer contracts of x $/year based on expectations about EPA/G that are unbiased but have error. But if that is true, then the natural interpretation is that Rodgers' $21 mil/year doesn't mean he is a good value at 11 EPA/G for just $21 million; it means that they don't expect him to continue to be an 11 EPA/G guy and are expecting something more like 8 EPA/G.

In practice we know the NFL labor market is far from efficient. If it were efficient and we plotted value against caphit for running backs and for QBs we'd see the same slope. But RBs are paid a lot more per unit of value (say EPA/G) than QBs (right?). So we need another model of how contracts are drawn up and shouldn't assume salaries reflect unbiased expectations about performance.

fellas,

both graphs are wrong. Both LS fits are wrong. You can always calculate a LS fit, but sometimes you shouldn't. When one performs statistical analysis, there are assumptions made when certain techniques are applied. These assumptions cannot be ignored - doing so leads one to this apparent "problem".

It is an ill-posed question. Please do not "vote" on which fit you "feel" is best.

Just occurred to me. Maybe try repeating this exercise, but with a much smaller sample size: say, 20% of all the QB plays (take every fifth snap, for instance, and ignore the four in between). That should make it clear which interpretation is correct. One of them will be so wildly off, I'm guessing, that it will become obvious.

You're forgetting that all regression inference is conditioned upon X (not Y). Thus, your statement: "According to this analysis, Rodgers would be worth something like $25M or more per season." is simply wrong. What that regression tells you is that "given an elite quarterback has a cap hit of $12M, we expect them to have about 5 EPA/G".

Moral of the story, never make inference for X conditional on a value of Y!

Linear regression assumes that the X variable is known with certainty and the variance (aka error term, random fluctuations, measurement error, "shit happens") is all associated with the Y variable.

In this case, we know his salary with certainty; he's going to get paid $21MM. His expected performance is something like 8 EPA/G based on what he will be paid, with uncertainty associated with the performance. The uncertainty is random variance in performance as well as teams or agents making mistakes regarding performance and paying a player the "wrong" amount of money (but salary is still locked in).

On the flipside, the alternative is to use his performance (with no error) to predict what you should pay him. In this case, 11 EPA/G is worth about $10MM. The problem with this approach is that it removes the risk of performance variations. If you know exactly what you're getting then you don't have to pay extra to get it. There's also the implication that salary has a random variance term that it really doesn't.

Linear regression also falls on its face a bit in cases like this since performance variation isn't normally distributed, and the variation is more likely to be something like lognormal (small chance of a great season, greater chance of something closer to expected).

I upvote Steve's statistical explanation (Phil's intuitive example is good as well). Although I'm not sure that "RBs are paid a lot more per unit of value (say EPA/G) than QBs" necessarily implies an inefficient market.

Brian is correct. Extending Rodgers @ $11M a yr is completely unrealistic.