(2) Detroit vs Pittsburgh (4)

There isn't much that I can say about these two teams that hasn't already been said.

This is a matchup between two quality teams.

While it was not obvious that the Pens were the best team in the East at the outset of the playoffs, I think that it would be difficult to argue that point now.

They've been dominant since the start of the second round, after a less-than-impressive opening-round showing against the Flyers.

I feel confident in saying that this year's team is better than last year's, if only for the fact that they're now able to consistently outshoot the opposition.

Needless to say, the Wings are also an excellent team that, like Pittsburgh, is clearly deserving of its place in the Finals.

What concerns me about Detroit is their injury situation. They missed a few regulars in several of the games against Chicago and, while some of those players have returned to the lineup, I doubt that any of them have fully recovered at this point. I understand that Datsyuk is out for game 1, and that will hurt them.

This series will be closer than last year's final -- if not in the outcome, then certainly in terms of the play. Last year, the Pens were decisively outclassed in all three games at Joe Louis and marginally outclassed in their own building. The Wings outshot the Penguins in all six games, and had an aggregate shot advantage of roughly +80. I just don't see that happening again this year.

Even though the gap between these two teams has narrowed over the last 12 months -- or, more accurately, since Valentine's Day -- I still think that Detroit will win. The Wings are, fundamentally, the best team in the league (well, either them or San Jose) and I just can't pick against them. I also want them to win, and this has surely influenced my decision.

I think there's also something to be said for the competitive imbalance between the two conferences. Back in 2007, the Senators had, not unlike Pittsburgh, advanced to the final without too much difficulty, and had looked good doing it.

Of course, as is currently the case, the West was the stronger conference that year. The Ducks had a more difficult road to the finals than Ottawa by virtue of facing tougher opponents. And yet, it seemed that few people took that into account when judging the relative strength of the teams involved. I'm not going to make that same mistake.

Detroit in 7.

## Saturday, May 30, 2009

## Monday, May 18, 2009

### PP S%: Correction

Well, I feel like a bit of an idiot.

In my post examining powerplay shooting percentage, I concluded that a sizable component of the team-to-team variance in powerplay shooting percentage was non-random.

For the 2008-09 season, the standard deviation in team powerplay shooting percentage was about 0.018. I determined that the predicted standard deviation -- that is, the standard deviation that one would expect if powerplay shooting percentage was entirely random -- was roughly 0.011. Had that figure been correct, that would have meant that a mere 1/3 of the variation in powerplay shooting percentage could be accounted for by randomness.

However, I had erred in calculating that figure. The mistake is an embarrassing one: instead of using each team's shot totals, I used each team's PP S% multiplied by 1000 (the shooting percentages at behindthenet are expressed in this manner). Those familiar with Excel are aware that an incorrectly typed formula can sometimes have profound consequences.

The result was that each team's shot total came out to roughly double its actual value, which in turn made the predicted standard deviation much lower than it should have been.

Upon doing another set of simulations -- this time hopefully correctly -- it seems that the predicted standard deviation is somewhere in the area of 0.016.

This means that about two-thirds of the variance in team powerplay shooting percentage can be explained through randomness, which is considerably different from my original (flawed) estimate.

However, as identified by Vic Ferrari in the comments section, there was another reason why my study was somewhat flawed.

Vic correctly pointed out that the standard deviation in team powerplay shooting percentage will vary around the 'true' mean standard deviation from year to year. This means that using the standard deviation from any single season as an approximation for the true standard deviation is problematic.
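Vic's point is easy to demonstrate by simulation: even in a world where every team shoots at the identical true rate, the observed between-team standard deviation bounces around from season to season. A sketch, reusing the same assumed inputs (12.5% on ~430 shots, 30 teams) rather than real data:

```python
import random
import statistics

random.seed(42)
season_sds = []
for _ in range(300):  # 300 simulated seasons of pure luck
    pcts = []
    for _ in range(30):  # 30 teams, all shooting at the same true rate
        goals = sum(random.random() < 0.125 for _ in range(430))
        pcts.append(goals / 430)
    season_sds.append(statistics.stdev(pcts))

# The single-season between-team SD is itself a noisy quantity:
print(round(min(season_sds), 3), round(max(season_sds), 3))
```

Any one season's observed SD can sit a fair distance from the long-run value, which is why a single year makes for a shaky estimate of the 'true' spread.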

I made the mistake of representing my findings as more or less definitive when that wasn't necessarily the case.

This methodological problem also applies to my earlier study that dealt with even strength shooting percentage.

Obviously, this somewhat limits the extent to which my findings are generalizable.

What I've done, then, is to apply this methodology to other seasons to see if the contribution of randomness to the overall variance is similar. I also did this for even strength shooting percentage (5-on-5 shooting percentage, strictly speaking, as the numbers are from behindthenet), as well as even strength shooting percentage when the score is tied.

The results:

Some observations:

1. It appears that all of the team-to-team variation in EV shooting percentage when the score is tied is due to randomness.

I don't think I've erred this time -- there was absolutely no interyear correlation between 07-08 and 08-09 for team EV shooting percentage when the score is tied (r=0.004).

2. By contrast, the contribution of randomness to EV shooting percentage in general is much lower, and appears to be on the order of 50%.

This implicates the 'playing to the score' effect as one of the non-random causes of team-to-team variation in EV S%.

3. Contrary to what my last post on the subject would have one believe, team powerplay shooting percentage actually appears to be more random in its distribution than EV shooting percentage, not less.

This seems counterintuitive to me and I'm not sure how much confidence can be placed in these findings. Perhaps there is some other flaw in my methodology that I've overlooked.
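As an aside, the inter-year correlation cited in the first observation is just the standard Pearson coefficient. A minimal sketch (the 'seasons' here are randomly generated placeholder numbers, not the actual behindthenet data) shows how two independent draws with no persistent team skill behave:

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two independent draws of 30 team shooting percentages from the same
# distribution -- all luck, no carryover from one season to the next.
random.seed(7)
season_a = [random.gauss(0.08, 0.012) for _ in range(30)]
season_b = [random.gauss(0.08, 0.012) for _ in range(30)]
print(round(pearson_r(season_a, season_b), 3))
```

One caveat: with only 30 teams, the r between two genuinely independent seasons wanders around zero with a standard error of roughly 1/sqrt(28), or about 0.19, so an observed r of 0.004 is consistent with pure randomness but doesn't pin it down precisely.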

### Playoff Predictions -- Third Round Part Two

(4) Pittsburgh vs Carolina (6)

That the Penguins-Capitals series went seven games was hardly surprising. What was surprising was the degree to which the Penguins outplayed the Capitals.

The Penguins outplayed the Capitals in at least five of the seven games, and had the edge in shots in all seven. The aggregate shots for the series were 256-180 in favor of Pittsburgh. It's not often that one team manages to outshoot the other so decidedly over the course of a best-of-seven series. When it does happen, there can be no doubt as to which of the two teams was better, especially when the outshooting team is also the victor.

The Capitals had no difficulty at all in outshooting the opposition over the course of the regular season, and the fact that they were so visibly outplayed calls for an explanation. I'm not sure whether Washington played extremely poorly or Pittsburgh played extremely well. It's likely that both are true to some degree. In any event, the Penguins were clearly the better team and deserved to win.

I've written about Boston at length on this blog and frequent readers are well acquainted with my thoughts on the team. At the risk of belaboring the point, I'll keep my comments brief.

The Bruins are a reasonably good team that were eliminated by another reasonably good team. Some people have commented that the Bruins picked the wrong time of year to play their worst hockey, but I don't think that's accurate at all. The Bruins were trading scoring chances with the opposition all season long, yet came out ahead on the basis of strong goaltending and luck. Against Carolina, the Bruins again traded scoring chances in an entertaining and evenly-matched affair. They did not play poorly. The difference was that, unlike during the regular season, they didn't receive an undue share of the bounces. It's as simple as that.

It's funny -- if you asked the average hockey fan which of the two Eastern series went according to expectation, and which of the two did not, they'd likely identify Pittsburgh-Washington as the former, and Carolina-Boston as the latter. I couldn't disagree more.

As for the series in question, I'm forced to go with the Penguins. The Penguins, as I mentioned earlier, were very impressive against the Capitals. If they play even somewhat similarly against the Hurricanes, they should advance. Carolina, having defeated two quality opponents, is obviously a good team, and I don't anticipate that this will be an easy series for the Penguins. At the same time, I don't see them losing.

Pens in 6.

## Sunday, May 17, 2009

### Playoff Predictions -- Third Round

(2) Detroit vs Chicago (4)

The fact that the Wings only narrowly got by Anaheim might be of concern to some people, given that Chicago is probably a better team than the Ducks.

On the other hand, the Wings-Ducks series was only close in terms of the results. I'm not suggesting that the Ducks are a weak team or that they played poorly -- that's not my position. Nonetheless, it was clear to me -- both from watching that series and from looking at the underlying numbers -- that one team was easily better than the other. And the fact that the series went seven games and was decided by a mere goal doesn't change that.

In the other Western semi-final, it seemed that the Blackhawks were the slightly better team. While the Canucks had the better special teams, Chicago was better at controlling the play at even strength. Prior to the series, some appeared to pick Vancouver on the basis of the Luongo factor. Although it was reasonable to expect Vancouver to have some advantage in goaltending going into the series, that advantage never really materialized. Perhaps that was the difference.

There isn't really too much that can be said about this series. Chicago is a strong team that can probably compete -- at least broadly speaking -- with the Red Wings at even strength. However, I think that the Wings' powerplay might be the difference-maker here, and if the Hawks fail to stay disciplined, this series might be over quickly.

Wings in 6.

I'll do the Pens-Canes series tomorrow.
