Inside the Process of Ranking the Top 100 NBA Players
Want to know how Sports Illustrated ranked the top 100 NBA players of 2021? We take you behind the process.

The methodology behind Sports Illustrated’s Top 100 list has remained much the same since its inception in 2014: evaluate every player for the coming season, assess their quality independent of their role and usage within their team, and then rank them. The concept is simple, but the science is inexact. And no matter how much time, effort and thought goes into it, the task remains somewhat flawed. How do we eliminate the role of context when drawing distinctions about athletes in such a heavily contextual team game?

Well, unfortunately, it’s the only way to execute a task like this with any sense of fairness. Just as we can’t punish a player for ending up on a bad team, we shouldn’t overvalue one in a vacuum simply because he’s in a perfect spot. What we can do when looking at players as individuals is ask the basic question of how malleable and translatable their skills might be across a variety of contexts. This is easier to discern when a player is the focal point of his team, and less so when looking at complementary pieces.

In the same way that the league’s best players tend to define the way their teams play and win (they not only work in the system—oftentimes, they are the system), the truly elite role players are the ones we can trust to provide much the same quality and breadth of skills no matter where they land, even when the fit isn’t perfect. That’s the type of thought process I applied when considering each player. Oftentimes, I found myself slightly devaluing the league’s nonelite scorers, who thrive under more specific circumstances and usage, while giving a bump to multifaceted, all-around contributors who seem to find ways to fit in regardless, or who are so good at one specific thing that it stands out no matter where or with whom they might play.

Naturally, athletes in any team sport are beneficiaries or victims of circumstance, and sometimes both. This isn’t a video game, where we can swap, say, Zach LaVine and Victor Oladipo, simulate the season and see what happens to their teams’ results (although that’s kind of a fun idea). We can incorporate analytics and consider all the information we want, but this type of list will always, to some degree, be determined in the abstract. It’s one of the flaws of the process. In the end, tackling any task like this—requiring the list maker to make subjective decisions in an objective fashion—boils down, at some point, to personal taste, particularly when splitting hairs.

When you step back and look at the list as a whole, there’s a basic structure that made some degree of sense while working through it. You start with the superstars, you quickly move into All-Star territory, followed by a group of players on the cusp of stardom mixed with former All-Stars beginning to decline, as well as some of the NBA’s most reliable, all-around impactful players. You’re never dealing in absolutes here, but that type of thought process gets you to about 50 to 55 players. (Mike Conley and John Wall at Nos. 50 and 51 is roughly where that part of the list ends.)

Assembling the list is much easier working down from the top. Thanks to raw and advanced statistics, awards and accolades, and oftentimes many years of elite performance, there’s little dispute about who the best players in the NBA are. After that, a player’s relative quality becomes much harder to project and discern. The lines start to blur. Because this list is purely for the upcoming season, at some point you begin having to place promising talents still approaching their peak (think Shai Gilgeous-Alexander, Jaren Jackson Jr. and Deandre Ayton) relative to the NBA’s most consistent, unshakable role players (like Marcus Smart or Robert Covington) and older guys with more in the tank (Al Horford, JJ Redick, etc.). This is where things get especially tough—trying to guess at the probability of improvement and decline relative to how good a player was when we last saw them.

As you work down into the 85 to 100 range, oftentimes you have to ask yourself whether a player should be on the list at all and why (which leads you to a second list of 25 designated snubs). It’s not perfect, and it’s never going to end up exactly how you expected. The emphasis on performance history—and perhaps my own de-emphasis on what we saw in the bubble, at risk of overreacting—led to a lot of these fringier decisions, and also helped when splitting hairs. As a result, I essentially chose to opt out of making a true value judgment on bubble breakouts like Tyler Herro and Michael Porter Jr., hoping to see them expand their games defensively and, at the very least, produce for an entire season before placing them ahead of more established talent. Rather than classify them as true snubs, I deferred Herro and Porter to the Watch List instead.

It would be foolish to act like any of this is definitive, when it’s very much a “feel” exercise. There’s no perfect way of doing this. This list can never account for the situational luxury all 30 NBA teams have when making evaluations. GMs know what they have, what they need, who their own best players are and how to accentuate their skills, and can subsequently decide which players are best relative to their team’s own situation. They never have to make any decision in a vacuum. That’s one of the strengths and also one of the flaws of this type of thought process: Here, we get to throw out trade value and contracts to form an educated, thoughtful hierarchy of talent, but in a real NBA context, no choice, big or small, can be properly executed absent those pieces of information. Here, we’re just asking who the best players are and giving credit.

In attempting to level the context and consider players on their own merits, we can at least gain a better appreciation for the depth and quality of talent all around the league. I hope you’ll enjoy the Top 100 with that in mind.