Hockey may be trailing baseball and the other major sports when it comes to advanced statistical analysis, but the gap is starting to close, and Gabriel Desjardins, lead man of Behind The Net and contributor to Puck Prospectus, is one of the people at the forefront.
"I have to give a lot of credit to the guys at Citizen Sports/Protrade," said Desjardins via e-mail. "My original idea for Behind the Net was a Win Probability model, which I pitched to Jeff Ma just before the lockout. Bad timing. I worked with the Protrade guys on other sports for a couple of years, but Roland Beech (of 82games) was a huge supporter of my hockey work and gave me the push I needed to get back into hockey analysis."
He continued: "Hockey is so far behind the other sports on the analysis side that it has a bit of a wild west feel to it, and a sense that some random blogger somewhere might have figured out something that many GMs and coaches don't know. That's no longer true in baseball, for sure. I really respect the Oilers blogger community for their contributions to hockey analysis, and trying to bring some of their brilliance to the rest of the league keeps me going."
For years, the only metrics hockey fans had to evaluate and compare players were the raw numbers (goals, assists, points) and what I consider a heavily flawed statistic: plus/minus. The issue with plus/minus, to me, is that it punishes players for playing on bad teams and rewards players for playing on good teams. An example I use, and one I proposed to Desjardins, is Boston Bruins defenseman Zdeno Chara. During his final year with the Islanders in 2000-01, he was a team-worst minus-27. The following season, after being acquired by the Ottawa Senators in the now-infamous Alexei Yashin trade, Chara instantly improved to a plus-30, a 57-goal swing in one year. Did he suddenly get that much better overnight? There has to be a better way to measure players.
Two of the concepts that Desjardins uses at Behind the Net are "quality of teammates" and "quality of competition."
We know that if you're an NHL winger, there's a difference between having Joe Thornton as your center and having a guy like Jordan Staal. One of them (Thornton) is significantly better at distributing the puck and setting up his teammates.
But how much better? Can we put a number on it? Apparently, we can.
"Everything starts with the idea of relative plus/minus, which I've called a player's 'rating,'" said Desjardins. "This is just his team's plus/minus when he's not on the ice subtracted from his plus/minus. It's an attempt to account for a player's performance relative to his team so that players on bad teams aren't unjustly penalized for their teammates' performances. It's not perfect, but it's rare that you build a perfect metric by subtracting A from B."
"Quality of Teammates is pretty straightforward," he said. "It's just the average rating of a player's teammates. If he only plays with players who outperform their team, then it'll be high. You can identify which line a player plays on almost immediately by looking over a roster's worth of Quality of Teammates. I also wanted to make these metrics computationally simple so that anybody could reproduce them if they wanted to."
And about that Thornton or Staal question?
"The difference between playing with Staal and playing with Thornton is approximately one goal per 60 minutes of 5-on-5 time on ice," said Desjardins. "So that would change your plus/minus by about 15-20 goals per season. So while Marleau-Thornton-Setoguchi was plus-16, Marleau-Staal-Setoguchi would be around even. That sounds about right to me -- Thornton is probably worth 2-3 more wins per season than Staal. It's difficult to say exactly since these statistics are simple first-order metrics and they don't separate out the impacts of the other players on the ice."
Below is the rest of my chat with Gabriel Desjardins:
Quality of Competition is a slightly different animal. It is just the average "rating" of all opponents faced by a given player, weighted by ice time. It behaves differently from Quality of Teammates because player ratings tend to vary by team. So it is a very good metric for determining who faced the toughest competition on a given team, and, in a general sense, it can isolate which players faced the toughest competition overall (Bouwmeester, Lidstrom, Jan Hejda, Seabrook/Keith, Anton Volchenkov, Sami Pahlsson, etc.). But it has some quirks -- the Flames and Oilers, for example, came out with overwhelmingly negative ratings -- which come from not having an iterative adjustment for strength of competition, like what's done with power rankings, to make sure teams aren't dragged down by facing tough opponents.
Hockey has made some good steps in collecting dynamic information, but we're still missing too much of the action. We have no idea if passes connect, if shots are screened, or where players are positioned. The Blues scored a goal on the Sharks this season because Dan Boyle went to the bench while the Blues had possession in his end. If you're trying to figure out why that goal happened, the data collected at the game is of little use. The NHL needs to put tracking devices on every jersey and the puck; once that happens, analysts will be able to get the same sense of the game you can get from watching it closely in person.
Defense. How can we measure it?
It's difficult. Ultimately, you need to look at how many goals were scored against an individual player and who was on the ice with and against him, as well as whether he started out in his defensive zone or in the offensive zone. This last point is a brilliant observation by Oilers blogger Vic Ferrari. It seems to me that if we knew which team/player had the puck at any given moment, we'd be able to make some big strides on that issue. But watching film and making a subjective judgment is probably the best way to do it at the moment.
How does all of this compare to the SABR explosion in baseball in recent years, and what are the most useful statistical tools developed most recently?
I think the single most brilliant realization anyone has made in the last few years is that coaches have a huge impact on player opportunities based on who they send out for faceoffs. Vic Ferrari deserves a lot of credit for noticing this: Some guys always go out for offensive zone faceoffs; some guys start in the defensive end almost all the time, and once they win the face-off or break out of the zone, they go back to the bench and somebody else gets their offensive opportunity.
It's like the proverbial problem with football: the guy who moves the ball the first 99 yards gets no credit, while the guy who runs the ball in from the one-yard line gets big credit for a high-percentage play. So much of the rest of what's involved in analyzing a player depends on whether he starts in his own zone or in the other team's. The other thing you notice is that most coaches seem to know what they're doing. In baseball, you'll still see a manager bring in his 5th-best right-handed reliever to face the other team's lefty-hitting 3-4-5 hitters in the 7th inning of a tie game. I like to think of this crappy Joe Mantegna movie, Comrades of Summer. Mantegna is managing the Russian national baseball team and one of his coaches is a Cuban guy who seems to know the game pretty well. Every time a player makes the wrong decision, the Cuban guy tells him what to do: "Manuel says pitcher covers first base." Mantegna assumes Manuel is some brilliant Cuban baseball guru. But eventually Manuel is wrong about something and Mantegna finds out the truth: "Manuel" is the Russian Baseball Manual, and the assistant coach was just reading from it. Managers make such gross strategic errors in baseball that many fans can point them out.
But how many hockey fans would know whether it was a good idea for Columbus to send Manny Malhotra out for every defensive face-off in the last two minutes of the game and then get a change if he won the draw and broke out? How many fans would even notice this? I've played hockey for 25 years and I feel like I barely know how to watch a hockey game. I don't think this contradicts my earlier assertion that a random blogger could figure out something that many GMs don't know -- teams are good at analyzing their own players, but not necessarily the guys on other teams, especially in other divisions.
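The deployment pattern Desjardins credits Ferrari with noticing can at least be counted from faceoff data. A rough sketch of the idea, with hypothetical numbers; the function name and the inputs are mine, not something from Behind the Net:

```python
def zone_start_pct(off_zone_starts, def_zone_starts):
    """Share of a player's non-neutral-zone faceoff starts taken in the
    offensive zone -- a rough proxy for how a coach deploys him."""
    return off_zone_starts / (off_zone_starts + def_zone_starts)

# A sheltered scorer vs. a defensive specialist like the Malhotra
# usage described above (numbers hypothetical):
print(zone_start_pct(300, 100))  # → 0.75
print(zone_start_pct(80, 320))   # → 0.2
```

A raw scoring line can't distinguish these two players' jobs; the zone-start split can.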