Evaluating our ECAC Model for 2021-22

The final ECAC standings from the season (photo: College Hockey News)

Before the season, there was a ton of uncertainty about how the ECAC would play out. Two-thirds of the league had just cancelled their seasons and not played at all, and those teams had two years' worth of roster turnover to go along with the missed year. On top of that, the transfer portal waiver that granted all transfers immediate eligibility caused chaos across NCAA sports and roster turnover unlike anything we've seen before. The sentiment was that you might as well throw darts at a board. We posted our ECAC standings predictions for the 2021-22 season based on a model to try to make sense of it all. While models are by no means perfect, they're an objective way to rank teams using data and make a prediction. With the question marks surrounding the ECAC going into the season, it was definitely useful to have a more informed prediction. The purpose of this article is to evaluate what the model got right and wrong in order to improve it for this coming season. Let's get into it!

Overall

Before we delve into specifics with each individual team, let’s take a look at how the model performed overall. Here were the projections.

  1. Quinnipiac
  2. Clarkson
  3. Cornell
  4. Harvard
  5. St. Lawrence
  6. RPI
  7. Colgate
  8. Union
  9. Brown
  10. Dartmouth
  11. Princeton
  12. Yale

And here were the actual results.

  1. Quinnipiac
  2. Clarkson
  3. Harvard
  4. Cornell
  5. Colgate
  6. RPI
  7. Union
  8. St. Lawrence
  9. Brown
  10. Princeton
  11. Dartmouth
  12. Yale

Not too shabby, eh? We nailed almost half the teams in exactly the right spot: Quinnipiac, Clarkson, RPI, Brown, and Yale were the 5 teams we got perfect. For 5 other teams, we were off by only 1: Harvard and Cornell were flipped, Dartmouth and Princeton were flipped, and Union was one spot higher than projected. Lastly, Colgate was off by 2, and St. Lawrence was off by 3. That's still a great performance. I'd say we nailed the 10 teams we had within one spot, did well with Colgate, and St. Lawrence is the only team I'd call a real miss.

To put the model’s performance in the proper context, it needs to be compared to other predictions. Luckily for us, I entered the model into a fan prediction contest run on USCHO by Lugnut92 (shoutout to him for that). The contest also included the media poll, the coach poll, and the fan average (among some other fun ones).

The model won the contest with a score of 18. Scores were computed by taking the difference between each team's predicted and actual placement in the standings, squaring that difference, and summing across all teams (lower is better). For our prediction that meant: 0 for Quinnipiac, 0 for Clarkson, 1 for Harvard, 1 for Cornell, 4 for Colgate, 0 for RPI, 1 for Union, 9 for St. Lawrence, 0 for Brown, 1 for Princeton, 1 for Dartmouth, and 0 for Yale. 1 + 1 + 4 + 1 + 9 + 1 + 1 = 18. That score is actually the 2nd best in the contest's history, which dates back to the 2002-03 season. Seems like the model did the trick. For any fans who also feel like tooting their own horn, the fan average outperformed both the media poll and the coaches poll, so you can say you obviously know more than both of them!
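If you want to score your own predictions the same way, the math fits in a few lines. Here's a minimal Python sketch of the contest's scoring method, using the two orderings from above:

```python
# Contest scoring: for each team, square the difference between its
# predicted and actual finish, then sum across all 12 teams.
# Lower is better; a perfect ballot scores 0.
predicted = ["Quinnipiac", "Clarkson", "Cornell", "Harvard",
             "St. Lawrence", "RPI", "Colgate", "Union",
             "Brown", "Dartmouth", "Princeton", "Yale"]
actual = ["Quinnipiac", "Clarkson", "Harvard", "Cornell",
          "Colgate", "RPI", "Union", "St. Lawrence",
          "Brown", "Princeton", "Dartmouth", "Yale"]

actual_rank = {team: spot for spot, team in enumerate(actual)}
score = sum((spot - actual_rank[team]) ** 2
            for spot, team in enumerate(predicted))
print(score)  # 18
```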

Team By Team

Now, let’s go team by team to review predictions more in-depth.

Teams We Nailed

Quinnipiac

The Bobcats were a team I was always high on due to returning the most scoring in the league, having the 2nd best recruiting class, and having by far the most experience on the blue line. Here's what I said about them in my opening paragraph: "Simply put, Quinnipiac will be a national championship contender this year. Their roster is that good, and the model rates them as by far the best team in the ECAC (and I agree with it)." I got some pushback on this from various ECAC fans/media who were high on Clarkson and thought they were equal or better. Well, here I am to rub it in your face that I was right. Just kidding… sorta. Quinnipiac pretty much lived up to the expectations the model laid out for them. They definitely were championship contenders, putting up a good fight against the natty favorites in Michigan after winning their first-round matchup in the NCAA tournament. They played Michigan pretty evenly throughout the game, but Michigan finished its chances and Quinnipiac didn't (until the 3rd at least). Anyways, the model predicted they would have both the best offense and best defense in the league. It wasn't too far off offensively, as Quinnipiac and Clarkson were tied for 2nd with 3.31 goals per game, and Harvard was first by only 0.01 goals per game at 3.32. It was spot on defensively: the Bobcats weren't just the best defensive team in the ECAC; they were the best defensive team in the country, allowing only 1.26 goals per game, which is truly astounding.

Clarkson

Clarkson was a team that the model and I were also high on. They returned the 2nd most scoring in the conference and had a good recruiting class, which gave them the 3rd-ranked projected offense. The reason they weren't higher is that the model viewed Quinnipiac and Harvard as having more firepower at the top. Since Clarkson finished tied for 2nd with Quinnipiac, I'd say the model was pretty right there. Defensively, they returned the 2nd most experience on the blue line plus the ECAC Rookie of the Year in net. The model ranked their defense 3rd as well, though extremely close to Cornell. They ended up having the 3rd-ranked defense, so the model was pretty much right on there too. The only thing I was really wrong about with Clarkson was predicting them to make the NCAA tournament. They were definitely good enough, but the teams that struggled after missing a year really deflated the top ECAC teams in the pairwise. Clarkson also lost some non-conference games they probably should have won (Alaska, swept by ASU, UNH); winning those would have pushed them into the NCAAs.

RPI

RPI had more roster turnover than any team going into the season. 8 seniors graduated after 2020, and then, on top of that, 8 players transferred after the season got cancelled. In all, they brought in 18 new players. This led to both the coaches and the media being pretty low on the team: the coaches put them 8th and the media put them 7th. The model had them 6th, which is where they ended up finishing. Despite all those losses, RPI's returning scoring ranked 8th in the conference, which is probably what tripped up everyone else; by sheer numbers, you would expect their returning scoring to be more towards the bottom. The model also really liked RPI's recruiting class (specifically the transfers) and ranked it 3rd in the conference. It put RPI's projected offense at 7th, and they finished 6th with a lot of the transfers playing key roles. Defensively, the model liked RPI returning Kjellberg, Johnson, and Hallbauer plus bringing in Baxter as a transfer, and it ranked their defense 4th. They finished 5th, but that was because Harvard did better than the model expected, not because RPI's defense underperformed. Overall, the model saw them as the middle team, and RPI ended up being right there. On their best nights, they'd compete with some of the best teams in the country; on other nights, they'd lose to teams like Brown and Yale.

Brown

Brown was a team expected to be pretty bad, with 12th and 10th place predictions from the polls. The model had them higher at 9th, and Brown delivered there. The Bears were returning the 10th most offense in the league, which isn't great, but their recruiting class was ranked 7th, which was solid. That propelled them up to a tie for 9th in projected offense. The defense was expected to be worse, though, due to having to turn to backup goalie Luke Kania as the starter and not much experience returning on the blue line. The model ranked their defense 10th, which ended up being spot on. It was way off on the offense, though, with Brown putting up not just the worst offense in the league but the worst in the entire country. It was still good enough for a 9th place finish overall, but this feels more like an instance of Brown benefitting from all of the bottom 4 teams being extremely bad than of the model nailing them.

Yale

Yale was a team that I was blasting all pre-season as the worst in the ECAC and one of the worst in the country. They returned the least offense in the league by far. To put into perspective how bad it was, they were returning 20 total goals, and 14 of those came from Justin Pearson. To make matters worse, they had the 11th ranked recruiting class, with very little talent or impact expected from the newcomers. Their projected offense was ranked last by the model. On defense, they returned only one experienced defenseman and had to start a rookie in net. With their defense having been really bad in 19-20, the model put their defense last in the league as well, and with good reason. The end result had them as the worst team by far, and that's where they finished. They also ended up as the 2nd worst team in the country by the pairwise. The model nailed Yale for sure.

Teams We Just Missed

Harvard

There was a lot the model got right about Harvard. Their offense was expected to be explosive given both what they were returning (3rd in the league) and what they were bringing in with their recruiting class (1st in the league). They definitely delivered there with the league's best offense. The model had their offense projected for 2nd, but considering they were first by only 0.01 goals per game, I think we can say the model got them right. Defense is where the model mainly missed on Harvard. After a mediocre defense in 19-20 and returning only a few experienced blue liners, the model thought their defense would be about the same despite returning Mitch Gibson in net, and it ranked their defense 7th in the conference. Harvard's defense finished well above that in 4th. The reason they were able to do that was a change in their style of play. Harvard was known as a run-and-gun team that would score a ton offensively but give a lot back on the defensive side. This year, they became much more of a possession-based team, which really improved their defensive performance. Kudos to them for that, especially since they weren't returning a lot of defensemen going into the year. Unless I could get corsi numbers for junior and college hockey players, I don't think this is something the model could have predicted, but luckily I wasn't too far off on the team overall since they were projected for 4th and finished 3rd.
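For anyone unfamiliar with the stat, corsi is just shot-attempt share, used as a rough proxy for possession. A quick sketch of the calculation, with made-up attempt totals purely for illustration:

```python
def corsi_pct(attempts_for: int, attempts_against: int) -> float:
    """Corsi% = a team's share of all shot attempts (on goal,
    missed, or blocked); above 50% suggests they controlled play."""
    return 100 * attempts_for / (attempts_for + attempts_against)

# Hypothetical season totals, just to show the formula:
print(round(corsi_pct(620, 560), 1))  # 52.5
```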

Cornell

Cornell lost a lot from the team that was ranked #1 in the country when COVID hit, returning only the 6th most scoring. While the model ranked their recruiting class 4th, that was only good enough to bump them up to 4th in total projected offense. They finished 4th in the league in actual offensive production, so the model was right on there. Despite the losses, the model still liked Cornell's defense a lot and ranked it 2nd in the conference, which is where they finished defensively too. The reason the model was wrong overall is that Cornell's defense was only barely better than Clarkson's and Harvard's, while those two had better offenses by a wider margin. This makes a lot of sense because the model had Clarkson, Cornell, and Harvard all neck and neck for the 2-4 spots in that order. This doesn't feel like much of a miss since Harvard just barely surpassing Cornell was well within an acceptable margin of error.

Union

Union was an interesting team for the model to predict. They hadn't released their roster when the season was about to start, so I had to use a source for their roster info. The source was right about everything except Christian Sanda returning to Union after entering the transfer portal. Their returning scoring ranked 11th before that correction but 8th after it. They also had a solid recruiting class that ranked 6th in the conference, but overall, their projected offense ranked 8th. Their offense ended up being 7th in the ECAC, so the model was pretty right there. Defensively, the model liked their returning experience on the blue line, and it really liked their addition of Connor Murphy out of the transfer portal from Northeastern. It ranked their defense 6th, but basically neck and neck with RPI and SLU for the 4-6 spots. They finished tied for 7th, so the model was fairly accurate there too. Overall, it was pretty spot on with Union. The reason Union finished 7th instead of the model's projected 8th had more to do with SLU really underperforming and falling below Union than with Union overperforming.

Princeton

Princeton had a clear roster-construction strategy: bring back a ton of 5th years. This really boosted their returning scoring, which ranked 7th best in the conference. It came with a huge sacrifice in terms of their recruiting class, though, which the model ranked worst in the conference. The model didn't really like this strategy because, even though they returned most of their roster, that roster had really struggled to score in 19-20. Their total offensive projection was 11th, but they finished 8th in actual offense. That strategy clearly worked better than the model thought it would. For defense, the model liked that they were returning Jeremie Forget in net and thought their defensemen's experience was okay. It ranked their defense 9th and thought it would improve from a poor year in 19-20. It was way off there too, as the Tigers proceeded to have the 2nd worst goaltending and 3rd worst defense in the country, both league worsts. Princeton was predicted for 11th but finished 10th. While the model was only slightly off on Princeton overall, the process to get there was pretty flawed. It actually showed me a couple of things. First, player development and returning players are probably more important than incoming freshmen, who are less likely to contribute. Second, goaltending is very volatile year to year, and even the best projections won't hold if the goaltending fluctuates a lot. This second point comes back up later with other teams.

Dartmouth

Dartmouth was a team I always felt the media and coaches were too high on. They were going through a coaching change, returned little offense (9th in the league), and returned little experience on the blue line. I would have been shocked if they finished above the bottom 4, and the model had them 10th. Their projected offense was tied for 9th because, even though their recruiting class ranked 8th, it wasn't enough to make up for how little they returned from 19-20. The model was right, and their offense finished 9th. Defensively, the model absolutely loved Clay Stevenson as a rookie goaltender, but their defense was so atrocious in 19-20 that they were still projected for 11th defensively. That's where they finished as well, but they could have been a lot better if they hadn't mishandled their goaltending situation at the beginning of the year. They kept giving Justin Ferguson starts despite him going into the season with a career SV% below 0.800. He didn't do well this season either with a 0.872 SV%, while Stevenson was consistently over a 0.920 SV% throughout the year. Despite Dartmouth finishing 11th instead of 10th, this feels like a team the model nailed, and the main reason it was off seems to be that it was wrong about Princeton.

Teams We Missed

Colgate

Colgate was the team that defied the model more than any other, in my opinion. They were in a similar mold to Princeton: pretty solid returning scoring but an underwhelming recruiting class. They had the 5th most returning scoring but a 10th ranked recruiting class. This is a team where the player development was very clear. Colton Young went from 4 points in 21 games to 32 points in 38 games. I have no idea how a model is supposed to predict something like that, but obviously, changes like that make a huge difference. While that's definitely the most extreme example, basically every Colgate forward improved from a year ago. A team projected to be 6th offensively ended up 5th, and with how close the model had SLU, RPI, and Colgate, that made a bigger difference than you'd think. Defensively, the model wasn't that high on Colgate, mostly because it thought Carter Gylander was underwhelming as the starter his freshman year, and it ranked them 8th defensively. What happened instead was that Mitch Benson played his way into the starting role and had a great year. Their defense ended up 6th with the goaltending boost. Those two factors combined propelled Colgate over both SLU and RPI relative to the predictions. They provide another example of underrating returning scoring and of the volatility of goaltending (this time in the other direction).

St. Lawrence

Technically, we were off on SLU more than any other team, but I think that has more to do with luck. They were returning the 4th most scoring in the conference with pretty much all of their important players coming back. Their recruiting class was only 9th, but the strength of their returning scoring had them projected for 5th in total offense. They missed that mark by quite a bit with the 10th ranked offense in the ECAC. So what happened? Basically the opposite of Colgate. Cameron Buhl went from 15 points in 17 games to 8 points in 32 games. David Jankowski went from 12 points in 15 games to 16 points in 33 games. It keeps going too. Luc Salem: 10 in 17 to 10 in 37. Nicholas Trela: 8 in 17 to 4 in 28. Greg Lapointe didn't play at all after being diagnosed with cancer (our best wishes to him for a full recovery). I'm not sure if those players had their numbers inflated by a small sample size or if they just struggled, but as you can see, it was essentially across the board as a team.

Defensively, the Saints were also projected for 5th. Funnily enough, the model really disliked their defense for the most part, but they were returning Emil Zetterquist in net. He had a 0.926 SV% the season before, and the model loved him and thought he'd really boost their defense. Volatile goaltending was once again a downfall as he fell to a 0.904 SV%, and the model was right to dislike the defense in front of him.

This gets into another discussion, though: luck. Below the surface, SLU had good metrics. Their corsi was above 50%, and their shot share was just below 50%. The two most luck-based factors (the two components of PDO) are shooting % and save %. We already went over how goaltending can fluctuate a lot year to year even with a good returning starter (Jeremie Forget, Emil Zetterquist). Shooting % usually goes along with the talent and skill on the roster, but there's still a lot of luck in it. St. Lawrence had one of the lowest shooting percentages in the country at 7.1%. Obviously, they lacked finishing talent, but that's still below where I would have expected a team that was supposed to be solid to land. If they had gotten even below-average finishing, I bet they finish higher than Union at least.
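To make the PDO point concrete, here's a minimal sketch of the calculation. The 7.1% shooting figure is SLU's number from above; the save % uses Zetterquist's 0.904 as a stand-in for the team figure (the true team number would also include backup minutes):

```python
def pdo(shooting_pct: float, save_pct: float) -> float:
    """PDO = shooting % + save %. League average sits around 100,
    so a team well below that is usually running into bad luck."""
    return shooting_pct + save_pct

# 7.1 is SLU's shooting % from above; 90.4 uses Zetterquist's
# 0.904 SV% as a stand-in for the team save %.
print(pdo(7.1, 90.4))  # 97.5 -- well under the ~100 baseline
```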

Since the two teams the model was most wrong about both had lots of returning scoring but went in opposite directions, it's tough to know how to adjust the model for that. Player development isn't linear, and players sometimes have bad years after great ones. The common theme of goaltending fluctuating a lot for the teams where we really missed the mark, though, will definitely lead to a change designed to accommodate that. Overall, the model did great and got a lot right, and there isn't much that I think needs to be changed.
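As a sketch of what that goaltending change could look like (one common approach, not necessarily what the model will actually use), you can regress each returning goalie's save percentage toward the league mean, trusting their own number more the larger their shot sample:

```python
def projected_sv_pct(goalie_sv: float, shots_faced: int,
                     league_sv: float = 0.905,
                     prior_shots: int = 1000) -> float:
    """Shrink an observed SV% toward the league mean. The league
    mean and prior weight here are illustrative assumptions, not
    numbers from the model."""
    w = shots_faced / (shots_faced + prior_shots)
    return w * goalie_sv + (1 - w) * league_sv

# Zetterquist's 0.926 on a hypothetical 800-shot sample regresses
# to about 0.914, much closer to the 0.904 he actually posted.
print(round(projected_sv_pct(0.926, 800), 3))  # 0.914
```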
