In hockey, scoring the first goal is important. Last season, every single team in the league had a better record in games where they scored first compared to games where they did not.
However, one has to wonder: is scoring the first goal as important as it's made out to be? Hockey media types love to harp on the importance of scoring first, invariably citing team A's record when managing to do so, or explaining team B's losing streak through its tendency to surrender the lead early in the game. Not only does this emphasis conflate cause and effect, but it's insufferably repetitive and trite. One would intuitively expect the team that scores first to have a higher probability of winning, and it's fairly obvious that such a relationship exists. In fact, I suspect that the probability of winning when scoring first is not significantly different from what would otherwise be expected on a mathematical basis.
Moreover, scoring in general is important, whether it be the first goal of the game or the last. Any given goal is more or less significant and potentially determinative of the game's outcome, and distinguishing the first goal from any other goal scored during the game smacks of arbitrariness. Scoring first is probably more correlated with winning than, say, scoring second, but I'd be surprised if the difference were large, and shocked if it were large enough to warrant the special attention.
Thus, this post seeks to answer two questions:
1. When a team scores first, what percentage of the time does it win the game? Is this value any different from what probability theory predicts it to be?
2. How much more important is the first goal than the second goal?
The first question can be answered through application of the Poisson distribution. As Alan Ryder explains in this paper, goal scoring in hockey is essentially a Poisson process. Ryder has determined that a team's probability of winning by z goals in regulation, at any particular point during the game, can be found through the following Microsoft Excel formula:
Pr(Win by z) = EXP(-(mt+vt)) * (mt/vt)^(z/2) * BESSELI(2*SQRT(mt*vt),ABS(z))
where:
m = that team's average goals for per game*
v = that team's average goals against per game*
t = the time remaining in regulation divided by 60
z = the margin of victory
* Adjusted goals ought to be used here, as they provide the best measure of a team's true ability to score and prevent goals.
Through use of this formula, the theoretical probability of winning in regulation for a team that scores first can be determined.
In 2007-08, there were 1222 games in which at least one goal was scored in regulation. On average, the first goal was scored just prior to the 12-minute mark of the 1st period. Thus, our value for t is 0.803.
The m and v values are, for the purposes of the formula, 2.639 and 2.545. These figures are adjusted to reflect the following:
1. That the team that scores first is, on average, slightly better than the average team.
2. That the team that gives up the first goal is, on average, slightly worse than the average team.
3. That the home team is more likely to score the first goal.
For the probability of winning, z ranges from 0 up to a sufficiently high n (~10), as the team that scores first need only maintain the existing margin -- or increase it -- in order to win. For the probability of losing, z ranges from -2 down to a sufficiently low n (~-10), as the trailing team must outscore the opposition by 2 or more goals in order to win. For the probability of a tie, z is -1, as this restores the original margin of zero.
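These sums are straightforward to compute directly. Below is a minimal Python sketch of the calculation; the modified Bessel function is implemented as a power series rather than via Excel's BESSELI, and the function names are mine, not Ryder's. With the inputs above, it lands on a theoretical win probability of roughly 0.615.

```python
import math

def bessel_i(n, x, terms=40):
    # Modified Bessel function of the first kind, integer order n >= 0,
    # computed from its power series (converges quickly for moderate x).
    return sum((x / 2) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def pr_margin(z, m, v, t):
    # Skellam probability of outscoring the opposition by exactly z goals
    # over the remaining fraction t of regulation, given per-game rates
    # m (goals for) and v (goals against).
    mt, vt = m * t, v * t
    return (math.exp(-(mt + vt)) * (mt / vt) ** (z / 2)
            * bessel_i(abs(z), 2 * math.sqrt(mt * vt)))

# Inputs from the post: adjusted rates for the team that scores first,
# with t = 0.803 (first goal just before the 12-minute mark).
m, v, t = 2.639, 2.545, 0.803

p_win  = sum(pr_margin(z, m, v, t) for z in range(0, 11))    # hold or extend the lead
p_tie  = pr_margin(-1, m, v, t)                              # opponent erases the one-goal edge
p_loss = sum(pr_margin(z, m, v, t) for z in range(-10, -1))  # opponent wins by 2+

print(round(p_win, 3), round(p_tie, 3), round(p_loss, 3))
```

Dividing p_win by (p_win + p_loss) gives roughly 0.744, the regulation-only theoretical figure.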
Thus, all of the necessary input variables having been determined, the theoretical probability of winning when scoring first can be computed. Below is a comparison of the theoretical probability against the actual probability.
As can be seen, the actual and theoretical probabilities more or less mirror one another, save for the fact that probability theory predicts ties to occur less frequently than they actually do. This reflects the fact that teams do, to some degree, play to the score, particularly as the end of regulation nears. The upshot of there being more ties in reality is that the first goal is somewhat more valuable than the values in the chart would otherwise indicate. For example, while the actual probability of winning is nominally lower than its theoretical counterpart (0.599 vs 0.615), if one examines only those games resolved in regulation, the actual probability of winning is slightly higher than the theoretical probability (0.764 vs 0.744). Nonetheless, the important part is that the theoretical and actual values are essentially equivalent. If the actual probability of winning were substantially higher than the theoretical probability, the large amount of emphasis placed on scoring first might be justifiable. However, the fact that they are virtually the same means that the advantage conferred by scoring first is neither surprising nor contrary to expectation, thus making it unworthy of special mention.
What about the importance of the second goal vis-a-vis the first goal?
Scoring second is very nearly as highly correlated with winning as scoring first. And yet, it is the latter that -- rather unfairly -- receives all of the attention. In this sense, the emphasis that's placed on scoring first seems more than a little arbitrary. It's simply not very accurate to accord the first goal special status when, in actual fact, the vast majority of goals scored throughout the course of a hockey game are significant.
Saturday, December 20, 2008
Sunday, December 14, 2008
Worst post-67 Cup Winning Team?
The columns in the above list show, from left to right, the season, the cup winning team during that season, that team's adjusted winning percentage (AW%) during the regular season, and how that team ranked in the league in terms of AW% during that season. The teams that I've highlighted are teams that I feel are arguably the worst post-67 teams to win the cup, or teams that are generally included in that discussion by others.
A few general comments:
1. Those Hab teams of the late 1970s were very, very good.
2. The Oilers dynasty teams, despite putting up some gaudy offensive totals, don't appear to be much better than the average cup winning team.
3. The '89 Flames were probably the best non-dynasty team of all time.
Now, the analysis:
The '91 and '92 Penguins
While their AW% is pretty unspectacular for a team that managed to win the cup in two consecutive years, a lot of this probably has to do with the fact that Lemieux managed to play only 90 regular season games in total during those two seasons. That probably explains the discrepancy between their regular season and playoff success. With a healthy Lemieux, neither of those teams is close to being the worst post-67 team to win it all. Not even remotely.
The '86 Canadiens
Contrary to popular belief, the '86 Canadiens were not a mediocre team that Roy carried to the cup. Despite receiving average goaltending for the majority of the regular season (sv% = 0.873), they had the third best AW% in the league. Considering that they, rather fortuitously, managed to avoid playing both the Oilers and the Flyers during their road to the cup, it's not really surprising that they managed to win. Not the worst post-67 team to win the cup.
The '95 Devils
Admittedly, their regular season numbers were pretty underwhelming, finishing 10th in a 26-team league in AW%. However, a lot of this, I think, had to do with bad luck. They averaged 30.1 SF/G and 25 SA/G during the regular season and, despite playing tight defensive hockey, had a team save percentage of only .901. Presumably, they just weren't getting the bounces. All of that changed in the playoffs, though. They went 16-4, scored 67 goals while allowing 33, and averaged 30.4 SF and 23.2 SA, all the while starting every series on the road against tough competition (DET, PHI, PIT, BOS). Highly impressive.
The '93 Canadiens
Like the '86 team, the '93 Habs benefited from not having to play the truly elite teams during their cup run (PIT, DET, CAL, BOS). The difference is that the '93 team was much more reliant on goaltending and luck (12-1 in one-goal games) to do it. Also, their regular season was fairly mediocre by cup-winning standards. Still, they only lost 4 games en route to winning it all. Probably not the worst post-67 cup winner, but we're getting warmer.
The '90 Oilers
I don't know too much about this team, but the fact that Ranford won the Conn Smythe suggests that they, like the '93 Canadiens, were pretty dependent on goaltending. However, they were still the 5th best team that year and were only one year removed from their dynasty. Not the best post-1967 cup winner by any stretch of the imagination, but not the worst either.
The '04 Lightning
The '04 Lightning were one of the best teams in the league during the regular season. While one might point out that they played in the league's worst division that year, AW% takes schedule difficulty into consideration. They were the 4th best team despite regularly playing the likes of FLA, ATL, WAS, and CAR. Their shot differential was impressive too (30 SF/G, 25.3 SA/G), so it's not as if their success was being driven by the percentages. Why, then, have I chosen to include them in the discussion? Well, that team was extraordinarily fortunate on the injury front, losing only some 35 man-games to injury, the majority of which belonged to Andre Roy. It can be safely assumed that this had a lot to do with their success that season, and the fact that they were fairly average in both the preceding and following seasons lends support to this. Still, they're not the worst.
The '06 Hurricanes
It's no secret that the Hurricanes were the recipients of tremendous good fortune in terms of their opponents sustaining bizarre and debilitating injuries to key players throughout the postseason. Injuries to Koivu, Roloson, and virtually the entire Sabres defence contributed more to that victory than any single Hurricane player. What will surprise most, though, is how ordinary Carolina was during the regular season that year. While their 112 points might give the impression that they were an elite team, this was largely the product of: a) playing one of the easiest schedules in the league, b) doing well in the shootout, and c) outperforming their goal differential by winning close games. Their AW% was 13th in the league -- barely above average. This was, without question, the worst post-expansion team to win the cup.
EDIT: As requested, here are the Top 50 post-expansion teams according to AW%.
Saturday, December 13, 2008
Parity
Since the 2005-06 season, there’s been a lot of talk in the media about the amount of parity that currently exists in the NHL. While I’m inclined to agree with this, I have a feeling that people are simply looking at the (presumably diminished) spread in point totals and making their conclusions on that basis. This is, of course, completely misguided and incorrect.
Point totals themselves are not necessarily indicative of reduced parity. For in order to measure parity, you first have to measure team strength, and point totals do not adequately measure team strength.
To be sure, point totals are correlated with team strength. Hockey would be a very strange game if this were not true. However, there are certain problems with point totals that preclude their use as a proxy for team quality.
For one, point totals are influenced by overtime and shootout success, and I would argue that overtime and shootout success have very little to do with how strong a team is. When I use the term ‘team quality’, I’m referring to how good a team is at actually playing hockey. And when I use the term ‘actually playing hockey’, I’m basically referring to how good a team is at winning in regulation. The distinction between regulation and extra-regulation results might seem arbitrary at first, but there's good reason for it. For one, overtime and shootout success has almost nothing to do with regulation success. Observe:
Moreover, extra-regulation results are not very repeatable across seasons, especially compared to regulation results.
The fact that extra-regulation results have virtually nothing to do with regulation results and have little to no repeatability suggests that they are largely the product of randomness. If something is largely random, then it cannot be thought of as an underlying ability. And if something cannot be thought of as an underlying ability, then it ought not to be part of a metric that ostensibly measures team strength. And yet, shootout and overtime success does have a sizable effect on point totals. Hence, my reluctance to use point totals as a metric for team strength.
However, the inadequacy of point totals goes much deeper than this. Even before the advent of 4-on-4 overtime and the shootout, points were not the best metric for team strength. The reason for this is that point totals only reflect wins and losses while completely ignoring the margin of victory. If there are two teams with similar point totals, one of them tending to win convincingly and lose narrowly, the other tending to win narrowly and lose convincingly, then the former team is, in almost all cases, the better team. The concept is an intuitive one. If you disagree with the assertion that a team’s goal differential conveys its ability better than its point total or place in the standings does, then you’re probably at the wrong site.
Granted, goal differential per se, while better than points, is not the best available metric. Several corrections need to be made to it for this to be true. Firstly, shootout and empty net goals should be excluded from the totals, as they provide no useful information. Secondly, raw goal differential is problematic in that not all teams play identical schedules. Some teams, usually by virtue of playing in a stronger division or conference, are burdened with a more difficult schedule than average. If you thought that the 2005-06 Phoenix Coyotes and the 2005-06 Carolina Hurricanes had equally difficult schedules, then you would be mistaken. Thus, some attempt should be made to correct for schedule difficulty. Finally, it is not so much a team’s absolute goal differential that is important, but its GF:GA ratio. A team that scores 200 goals and concedes 100 is better than one that scores 400 and gives up 300. Furthermore, simple goal differential is too sensitive to scoring context to provide any useful information on league parity, as it would lead to the spurious conclusion that there was less parity in higher scoring seasons. These two problems are avoidable by using each team’s Pythagorean expectation instead: essentially, its theoretical winning percentage, determined through the following calculation:
(Adjusted goals for)^2 / [(adjusted goals for)^2 + (adjusted goals against)^2]
The resulting metric can be termed adjusted winning percentage.
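The Pythagorean calculation above is trivial to compute. A minimal sketch (the function name is mine):

```python
def adjusted_win_pct(adj_gf, adj_ga):
    # Pythagorean expectation with an exponent of 2, per the formula above.
    return adj_gf ** 2 / (adj_gf ** 2 + adj_ga ** 2)

# The GF:GA ratio argument from above: 200 GF / 100 GA beats 400 GF / 300 GA,
# even though both teams have a +100 goal differential.
print(adjusted_win_pct(200, 100))  # 0.8
print(adjusted_win_pct(400, 300))  # 0.64
```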
AW% is important as it provides us with a suitable metric for assessing team strength. By computing the standard deviation in AW% in any particular season, we’re essentially measuring parity.
What, then, does AW% tell us about the amount of parity in the NHL over the last ten years?
A few comments. Firstly, parity in the pre-lockout NHL was pretty invariant on a year to year basis (mean: 0.094, SD: 0.008). Only 1996-97 is anomalous, with all of the remaining values falling between 0.092 and 0.101. Secondly, there is clearly more parity (read: the standard deviation in AW% is smaller) in the post-lockout NHL relative to the pre-lockout NHL. The difference may not seem like much, but the 2005-06 and 2006-07 values are each about one SD from the pre-lockout mean, and the value for 2007-08 is 4 SD(!) from it. That's a fairly significant difference.
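To put the "4 SD" claim in concrete terms, the gap can be expressed as a simple z-score against the pre-lockout baseline. Note that the 2007-08 figure below (0.062) is back-derived from the stated 4 SD gap for illustration; it is not read off the chart:

```python
# Pre-lockout baseline for the season-level SD of AW% (from the post).
pre_mean, pre_sd = 0.094, 0.008

# Hypothetical 2007-08 value, back-derived from the "4 SD" claim.
sd_2007_08 = 0.062

# How many pre-lockout standard deviations below the baseline mean?
z = (pre_mean - sd_2007_08) / pre_sd
print(round(z, 2))
```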
Parity in the new NHL seems to be more reality than fiction. Teams really are less separated in ability now compared to five or ten years ago. I find this interesting as the purpose of having the shootout and three point games seems, to me, like a ploy designed by the NHL with the intention of creating the illusion of parity. However, the fact that the new NHL is characterized by genuine parity has in some sense obviated this purpose. That considered, perhaps the NHL should do away with three point games and the shootout. I certainly wouldn't complain.
Monday, December 1, 2008
Shot Quality Part Two: Shot Quality For
In a previous post, I analyzed the utility of shot quality against [hereafter SQA]. The purpose of this post is to explore the utility of its counterpart -- shot quality for [hereafter SQF]. Now, if the data for SQA is reliable and valid, then the data for SQF ought to be as well. It just wouldn't make sense otherwise. Nonetheless, I think that it's necessary to show that SQF is a legitimate construct in its own right, not only to further affirm the utility of shot quality as a whole, but to allow SQF to be identified as one of the main components of team offence.
The fact that shot quality is fraught with arena bias is no less relevant to SQF than it is to SQA. One way to get around this is to simply use the data on road shot quality. The problem is that, of the three seasons of data for SQF (2003-04, 2005-06, 2006-07), only one decomposes the information into home and road situations. Nonetheless, the 2006-07 data allows for estimations of the degree of bias present at each arena. These estimations of bias can then be used to adjust the 2003-04 and 2005-06 data so as to reduce the distortion. The result is what I've termed 'adjusted shot quality' -- a relatively bias-free estimation of the relative dangerousness of shots that a team directs at the opposition net.
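The post doesn't spell out the exact adjustment, so the following is a hypothetical sketch of the general idea: estimate each rink's recording bias from the 2006-07 home/road splits, then deflate the 2003-04 and 2005-06 figures accordingly (team labels and numbers are invented):

```python
# Hypothetical shot-quality figures; a real adjustment would use the
# 2006-07 home/road splits to estimate each rink's recording bias.
home_sq_0607 = {"A": 1.08, "B": 0.95}   # SQF recorded in each team's rink
road_sq_0607 = {"A": 1.00, "B": 1.00}   # SQF the same teams post on the road

def bias_factors(home, road):
    # A rink recording higher shot quality at home than its team manages
    # on the road is assumed to be over-counting dangerous shots.
    return {t: home[t] / road[t] for t in home}

def adjust_sqf(raw, bias):
    # Deflate (or inflate) raw SQF figures by each rink's estimated bias.
    return {t: sq / bias[t] for t, sq in raw.items()}

bias = bias_factors(home_sq_0607, road_sq_0607)
raw_0304 = {"A": 1.10, "B": 0.90}       # earlier-season figures to correct
print(adjust_sqf(raw_0304, bias))
```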
Adjusted SQF is a clearly superior metric to unadjusted SQF. For one, it has more construct validity. If the shot quality data is actually measuring what it purports to, then there should be a positive correlation between SQF and shooting percentage at the team level. As the graph below shows, the correlation between shooting percentage and SQF is higher for the adjusted SQF data than it is for the unadjusted data.
But what about reliability? It goes without saying that the values for unadjusted SQF are going to be reliable on a year to year basis due to the element of arena bias. However, will the same be true of adjusted SQF? That is to say, once the effect of arena bias is removed, is SQF still a reliable phenomenon?
[In the event that the information that the graph is supposed to convey isn't apparent, the blue columns show the correlation between a team's SQF in 2005-06 and a team's SQF in 2006-07. The red columns show the same correlation, except for the years 2003-04 and 2005-06. The columns on the right are for unadjusted SQF; those on the left are for adjusted SQF.]
The answer is clearly yes. The interyear reliability for adjusted SQF is almost as high as that for unadjusted SQF. Thus, SQF is a substantially reliable metric, and this effect is largely independent of measurement bias.
Thus, just as SQA is a reliable and valid component of team defence, so too is SQF a valid and reliable component of team offence.
Sources:
2003-04 shot quality data -- study done by Ken Krzywicki
2005-06 shot quality data -- study done by Ken Krzywicki
2006-07 shot quality data -- study done by Alan Ryder
Wednesday, November 19, 2008
Shotblocking and Save Percentage
In a recent post, the Contrarian Goaltender collectively analyzed data from 1999-00 to 2007-08 in order to determine the relationship between shot attempts against and save percentage. One of his findings was that there was a negative correlation between blocked shots and save percentage over this period.
To what extent, however, is this effect mediated by shot quality, as measured by Alan Ryder? On the one hand, if the majority of blocked shots are those coming from the point and other peripheral areas of the offensive zone, then the shots that "get through" would tend to be those from areas closer to the net. One would expect this effect to be reflected in the shot quality data. On the other hand, if blocking shots has the effect of interfering with the sightlines of the goaltender, then this effect would not necessarily be reflected in the shot quality data. Both of these effects, if real, could account for the fact that teams that block more shots tend to have lower save percentages, on average. However, without analyzing the data, it's unclear to what extent each process is operative.
I think that it goes without saying that both processes are, to some degree, at work here. It would be unreasonable to suspect that the relationship between save percentage and shot blocking can be entirely accounted for by one single factor; indeed, most causal relationships that exist in complex phenomena, hockey included, are multifactorial. What I'd like to determine, however, is how much of the correlation can be accounted for by the shot quality data.
What I did was analyze the relationship between team blocked shots, team save percentage, team shot quality, and team shot quality neutral save percentage for each season between 2002-03 and 2007-08. I determined the correlations for each season individually because the total number of blocked shots is not uniform over time. Specifically, the pre-lockout values are significantly lower than the post-lockout values, and the 2002-03 values are much lower than those for 2003-04 (and every other season, for that matter). Thus, analyzing the data as a whole would preclude interpreting the results with any degree of confidence. It is worth noting that 2002-03 is no arbitrary cutoff -- there is simply no data on shot quality prior to that season. As is the case with the shot quality data, there is a clear arena bias with respect to the recording of blocked shots. This is evidenced by the fact that the standard deviation in home blocked shots is much higher than that for road blocked shots. The data:
What I did, then, was incorporate the figures for both total blocked shots and road blocked shots. Presumably, road blocked shots provide a better indication of the 'true' number of shots blocked, as they are less subject to arena bias. Here are the inter-variable correlations:
It should be noted that the shot quality data for 2006-07 and 2007-08 is for road shot quality only -- thus, I was unable to include the correlations between road shot quality neutral save percentage and road blocked shots for any season prior to 2006-07. The correlations between road save percentage and road blocked shots were also excluded for these seasons.
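The inter-variable correlations referred to above are ordinary Pearson coefficients. A self-contained sketch, with invented team figures chosen to show a negative blocked-shots/save-percentage relationship:

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient: covariance over the product
    # of the two standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (made-up) team figures: blocked shots vs save percentage.
blocked = [520, 610, 575, 640, 555]
sv_pct  = [0.912, 0.905, 0.908, 0.901, 0.910]
print(round(pearson_r(blocked, sv_pct), 3))
```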
Several points:
1. There are slight, yet consistently negative, correlations between total blocked shots and save percentage. Although I doubt that any of the specific correlations are statistically significant, the fact that all of them are in the same direction is suggestive of an underlying relationship. Also, this accords with The Contrarian Goaltender's finding of a correlation of -0.30 between blocked shots and save percentage over a larger data sample.
2. The correlations between total blocked shots and shot quality neutral save percentage show a similar trend in that, for every season, the relationship is mild yet inverse. Thus, shot blocking seems to have a residual effect on goaltender save percentage that cannot be accounted for by the shot quality data. Of course, it's also clear that shot quality is partly driving the relationship. For one, there seems to be somewhat of a positive correlation between blocked shots and shot quality against. Additionally, the fact that the relationship between blocked shots and SQN % is not as strong as the relationship between blocked shots and save percentage proper also shows how shot quality partially mediates the correlation. If it did not, these two groups of correlations would be nearly identical.
3. The correlations for the 'road' data are generally weaker than the overall correlations, which is counterintuitive as both road shot quality and road blocked shots are less distorted by bias. This can probably be explained by the attenuated sample size for road games in each individual season (41 vs 82 games), which diminishes the resolution of the data. Indeed, this also occurred in previous posts where the shot quality figures themselves were analyzed.
Sunday, October 26, 2008
Shot Quality
I think that in the case of the vast majority of hockey fans, there is a considerable amount of resistance to non-mainstream hockey statistics. Shot quality is no exception to this.
I'm not going to go into specifics in terms of what shot quality actually is. There are multiple articles at Alan Ryder's site that describe in detail what shot quality is and how it is measured. Furthermore, if you're reading this blog, there is a good chance that you're already acquainted with the idea of shot quality.
Rather, the purpose of this post is to explore the construct validity of shot quality, as well as its reliability. Because the figures for shot quality against (hereafter SQA) are more readily available than those for shot quality for, this post will deal with the former.
Firstly, the issue of validity. The biggest issue with shot quality, I think, is concern over the degree to which the shot quality figures actually mirror reality. Do teams with higher SQA (i.e. worse) really give up more dangerous shots against, or is it merely an artifact produced by arena bias or some other element of the measurement process? I think that the best way to answer this is to look at the relationship between team SQA and team save percentage. If teams with lower save percentages tend to have higher SQA, then that's fairly solid evidence that shot quality is measuring something genuine. The data:
For every single season, there is a negative correlation between SQA and save percentage at the team level, meaning that, on average, the teams that give up more dangerous shots against tend to have lower save percentages. Granted, not all of the correlations are statistically significant, but I think the fact that all of the correlations are in the same direction is suggestive of a reasonably strong relationship. Interestingly, the correlation is somewhat lower post-lockout than pre-lockout. I suspect that part of this might have to do with the fact that the figures for 2006-07 and 2007-08 are for road games only, which was done in order to reduce the effect of arena bias. The save percentage figures I used for 2006-07 and 2007-08 are also for away games only, so it can't be caused by my attempting to correlate road SQA with overall save percentage. Rather, the cause probably relates to the fact that the sample size has been halved, thus reducing the resolution of the data.
But what about its reliability? Is there a correlation between SQA at Time A and SQA at Time B? The easiest way to determine this would be to look at its split-half reliability at the team level, which would involve, for example, correlating a team's SQA in odd-numbered games with the corresponding figure for even-numbered games. However, because I only have figures for entire seasons, I'm not able to do this. What I can do, though, is determine its inter-year correlation at the team level. Is SQA repeatable in this sense?
The data is fairly unequivocal. For each season pairing, there is a considerable correlation, and the degree of relationship is highly consistent across seasons. Indeed, the strength of the correlation is remarkably similar to the inter-year correlation for shots against over the same period. Admittedly, the pre-lockout values could be partly driven by arena bias, but that seems unlikely considering that the values for 2006-07 and 2007-08 are for road shot quality only and the correlation is equally strong. Therefore, it can be said with some degree of confidence that a team's SQA for any given season is highly correlated with the values for both the preceding and following seasons, and that this relationship does not appear to be mediated by arena bias.
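The inter-year check can be sketched the same way: pair each team's SQA in one season with its SQA the following season and correlate the two lists. Again, the team names and numbers below are invented for illustration:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical team SQA in two consecutive seasons: (year 1, year 2).
sqa_by_team = {
    "Team A": (1.04, 1.02),
    "Team B": (0.96, 0.97),
    "Team C": (1.01, 1.03),
    "Team D": (0.94, 0.95),
    "Team E": (1.00, 0.99),
}

year1 = [v[0] for v in sqa_by_team.values()]
year2 = [v[1] for v in sqa_by_team.values()]

# A strongly positive value indicates SQA is repeatable year to year.
print(round(pearson_r(year1, year2), 3))
```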
While shot quality is not likely to be embraced by mainstream hockey fans anytime soon, I feel that it has tremendous utility in that it quantifies one of the two components - the other being shots against - of team defense. Based on these findings, SQA can be said to be:
1. A real measurement of the average relative dangerousness of the shots allowed by any given team, as manifested in the lower team save percentages of teams that allow relatively more dangerous shots on average.
2. A reliable, enduring element of team defense, as manifested in the substantial inter-season correlation at the team level.
Wednesday, October 22, 2008
Save Percentage and Shots Against
Recently, at Hfboards and elsewhere, I've come across the assertion that, among NHL goaltenders, there is a positive relationship between save percentage and shots against. Some examples.
Naturally, I was skeptical of the claim, not least because no compelling evidence was ever offered in support of it. Moreover, there doesn't appear to be any reason why this should be so. Conceivably, there could be some degree of trade-off between shot quality and shots against at the team level, in that teams that are better at preventing shots do so at the expense of allowing proportionately more quality shots, and vice-versa. However, there doesn't appear to be much of a correlation between the two.
The chart shows the correlation between shots against and shot quality against (at the team level) in the right column, with the corresponding season in the left column. In only two of the seasons (2002-03 and 2003-04) was there something of a relationship. What's interesting, however, is that those correlations are positive, meaning that, in those particular seasons, the teams that were better at preventing shots also tended to allow lower quality shots. There is absolutely no evidence of a trade-off between shots against and shot quality.
Of course, it's possible that there could be a relationship between save percentage and shots against independently of shot quality. It's often been suggested that goaltenders play better when they're frequently tested and poorer when underworked. If this were true, a correlation between save percentage and shots against should emerge at the team level. But what does the data say?
As can be seen, there isn't much of a pattern between the two variables. In some years, the correlation is positive; in other years, negative. What's important, I think, is that all the values are fairly close to zero. The only year where the correlation is even remotely significant at the 5% level is 1999-00, but the direction is contrary to what would be expected. The important point is that teams that give up more shots against do not tend to have higher save percentages.
There is absolutely no evidence that high shot totals have an inflationary effect on goaltender save percentage.
Why, then, is it often argued that such a relationship exists? More than anything else, the phenomenon seems to be driven by wishful thinking on the part of the claimants, a disproportionate number of whom belong to certain fan bases. I'll say no more.