The 90th annual Academy Awards have come to a close. It was an exciting night overall — there was much-needed discussion of the industry’s treatment of women and people of color and the state of inclusion behind the camera, along with a bold proposal for inclusion riders in contracts. And also, a costume designer won a Jet Ski from host Jimmy Kimmel for brevity in an acceptance speech.
Meanwhile, we’re left feeling pretty good about our Oscar predictions. Our model — which handicaps the Oscar race by looking at the precursor awards, assigning numerical weights to each award based on its recent success in predicting the Oscar winner — did well this year. We issued projections for eight categories, and the model got seven or eight right (depending on how you count). It successfully indicated the winners for best picture, best actor, best actress, best director, best supporting actor, best supporting actress and best animated feature. And one of the two films we had tied in this year’s gnarly best documentary category won as well.
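For readers who want the gist of how that weighting works, here is a minimal sketch in Python under the description above: weight each precursor award by how often its winner has recently gone on to win the Oscar, then total those weights over each nominee’s precursor wins. The award names, hit rates, nominee labels and the normalization step are illustrative placeholders, not our actual inputs or formula; the real bookkeeping is more involved.

```python
# Illustrative sketch of a precursor-weighted handicap for one Oscar category.
# All names and numbers below are made-up placeholders, not the model's data.

# Hypothetical share of recent years in which each precursor award's winner
# went on to win the corresponding Oscar.
PRECURSOR_HIT_RATE = {
    "Producers Guild": 0.7,
    "Directors Guild": 0.6,
    "BAFTA": 0.5,
    "Critics' Choice": 0.4,
}

# Hypothetical precursor wins for each nominee this season.
NOMINEE_WINS = {
    "Nominee A": ["Producers Guild", "Critics' Choice"],
    "Nominee B": ["Directors Guild", "BAFTA"],
    "Nominee C": [],
}


def score_nominees(hit_rates, nominee_wins):
    """Sum each nominee's precursor wins, weighted by the award's recent
    track record, then normalize so the scores read like rough shares."""
    raw = {
        nominee: sum(hit_rates.get(award, 0.0) for award in wins)
        for nominee, wins in nominee_wins.items()
    }
    total = sum(raw.values()) or 1.0
    return {nominee: score / total for nominee, score in raw.items()}


if __name__ == "__main__":
    scores = score_nominees(PRECURSOR_HIT_RATE, NOMINEE_WINS)
    for nominee, share in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{nominee}: {share:.0%} of the weighted precursor points")
```

In this toy version, a nominee that sweeps the most historically predictive awards ends up with the largest share of the weighted points, which is the sense in which the model names a front-runner.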
Models are just a handy way of trying to bring order to a complicated world, and we’ve always been upfront about the limitations in how we forecast the Academy Awards. (This isn’t like our presidential election model, which has all kinds of empirical study behind it; Oscar voters are a small, insular group whose members aren’t so kind as to consent to being polled.) This Oscar season, I was really worried that the world had left our approach behind.
The assumption at the heart of our model’s design is that Oscar voters aren’t super dissimilar from the people who vote in other award shows, and that by looking at enough of what those people prefer, we can approximate what Oscar voters themselves prefer. My worry lately has been that as the academy expands, the organization could become more dissimilar from those other groups — the Screen Actors Guild, the Producers Guild, the Writers Guild, the Directors Guild, the big-city critics and the Hollywood Foreign Press Association — and over time the predictive power of the precursor awards would erode. Or maybe it’s the opposite, and the academy is becoming more like the bigger guilds, so the model is plausibly improving. I don’t know.
Our model remains really good at calling actors, actresses and directors. I’ve run the model for the past four iterations, and in that time, it’s called 15 out of 16 acting races correctly. (In 2016, it gave a mild edge to Sylvester Stallone over actual best supporting actor winner Mark Rylance.) It’s gotten three out of four best director races right. I consider that track record pretty outstanding for just some simple weighting of award shows.
It’s best picture that gets me (even though we called it right this year).
On the one hand, the model was designed to show the state of the race — who’s on track to win, and who’s in the lead — and in that regard it’s been good, as every recent winner has been flagged as either the front-runner or a top contender. On the other hand, it’s batting .500. That’d be great in baseball, but it’s underwhelming in forecasting the Oscars. Given the recent changes to the academy, I’d be seriously reconsidering how we handle best picture had this year gone badly. But the model’s identification of “The Shape of Water” as a particularly strong favorite in a crowded field has eased my worry slightly, though not eliminated it.
There is no foolproof, ironclad way to predict best picture, barring an elaborate heist of PricewaterhouseCoopers that I in no way have been planning for five years. We’ve got something decent that points us pretty consistently in the right direction. But models don’t exist to solve the world, just to mimic it and help us understand it, and sometimes only briefly. There may come a time when we’ll have to ponder pulling the plug on this model. One really good year is enough to convince me that it’s not today. But that doesn’t mean it’s too early to think about what a successor could look like.