Published December 4, 2023

A long, long time ago, when he wasn’t cooking meals for his wife, tafokints had a simple idea: rank the top 100 Melee players in the world. The community had seen rankings made before, but none of them had ever reached the scale of SSBMRank: his pet project for the scene. Ten years later, this idea has now become an established part of Smash. Barring one year, in which the community genuinely seemed in apocalyptic circumstances, and another year, which barely had events, the Top 100 has been an annual mainstay.

However, the rankings are undeniably controversial. From how the panel system works, the lack of an event qualification system, the shadowy “illuminati” nature of the rankings themselves, to even the actual order of the list, there’s a lot to dive into. In many ways, it’s become a source for projected emotions for all sectors of the scene. Are you a tournament organizer? “The rankings” are why top players aren’t going to your event. Are you a top player? “The rankings” are a source of stress. Are you a commentator or content creator? Well, you need “the rankings,” because what else are you supposed to talk about? Are the rankings in this figurative room with us right now?

In today’s column, I’m going to break down what the rankings are and what they do well. This will be the first of a multi-part series I’m running on the rankings. Next week, I’ll discuss a few commonly raised alternatives to the rankings, then talk about why rankings are so important and offer some suggestions that don’t fundamentally destroy SSBMRank, yet still address common concerns and fit a strong long-term vision for community growth.

How Do The Rankings Work?

To be transparent, this is not the first column I have written about the rankings. If you read my piece last year, you’ll know that I’ve covered the history of the Top 100 and shed some light on the actual process. While I’ll retread some of that ground here, this time I’ll also explain a few key differences between this year’s process and last year’s.

Firstly, no one really “owns” the rankings. Technically speaking, anyone can release a Melee Top 100. SSBMRank is only as legitimate as the community deems it to be. Full disclosure: Melee Stats only “runs the rankings” because the community has socially recognized us as three dudes who can do it. Theoretically, anyone could make a Top 100. Now, calling it “SSBMRank” or “the community Top 100” might be kind of a dick move, but that’s a story for another time.

Getting back to SSBMRank itself, it is a representative panel-based power ranking of the 100 Melee players with the best results. The way it works is that we offer “ballots” to a select group of people, who range from tournament organizers to top players to seeders of majors. This group of people is what we call the panel. Each panelist receives one ballot, which contains about 120 to 130 qualifying players to assign scores to, with each score being given based on how the panelist grades the quality of any player’s results. This is done in a spreadsheet, which also has links to separate spreadsheets and databases for head-to-heads for each qualifying player, as well as their placements. The head-to-heads and placements of the “ballot” are collected from a mix of majors (as defined by Liquipedia) and regionals (which we broadly define as weekend events featuring a significant number of ballot players).

NOTE: There’s a longer process behind the actual selection of the “ballot” players. It involves some variant of the following steps: we track the results of the previous period’s Top 50/100 players, see who’s beaten them, and then note down who’s beaten the people who have beaten the previously ranked players, as well as people who have made major Top 32s. Following that, players are sorted into different ranges of their potential spots on the list, based on their head-to-heads. We then fine-tune the last group of players until we come up with a final ballot. While the criteria for what counts as “significant enough for Top 100 consideration” will vary person to person, from my experience as a panelist, 120 people, if anything, is fairly generous. Most years usually have about 105 players whose results are close enough together in terms of quality and quantity to make them stand out from everyone else. Importantly, this does not mean that there are 105 players who are clearly better in skill – this is strictly on a resume-by-resume basis. Other panelists may not agree with me here; this is only my opinion.
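The expansion steps described above can be sketched in code. This is a hypothetical illustration only — the function name, inputs, and data shapes are my own assumptions, not the actual Melee Stats tooling — but it shows the basic logic: start from last period’s ranked players, add anyone who beat one of them, add anyone who beat one of *those* players, and fold in major Top 32 finishers.

```python
def candidate_pool(prev_ranked, wins, major_top32s):
    """Build a rough candidate pool for a ballot (hypothetical sketch).

    prev_ranked:  set of last period's Top 50/100 players.
    wins:         dict mapping player -> set of players they beat this period.
    major_top32s: set of players who placed Top 32 at a major.
    The head-to-head fine-tuning described in the article happens
    afterwards, by hand, and is not modeled here.
    """
    # Who beat a previously ranked player?
    beat_ranked = {p for p, beaten in wins.items() if beaten & prev_ranked}
    # Who beat one of the players who beat a ranked player?
    beat_the_beaters = {p for p, beaten in wins.items() if beaten & beat_ranked}
    return prev_ranked | beat_ranked | beat_the_beaters | major_top32s
```

In practice this pool would then be pruned and sorted into potential ranges before becoming the final 120-to-130-player ballot.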

The panelists are then given the following instructions (or some variant of them):

  • Based on the quality of results from X to Y, rate each player on a scale of 1 to 10 (10 being best).
  • Exhibitions, online tournaments, and tournaments that did not complete are NOT to be counted.
  • You can consider results at locals and invitationals if you choose, but you should know that the environment surrounding these events may not always be the most serious in nature.
  • If you, as a panelist, are a player who’s on the ballot, any score you use for yourself will not be counted.

Panelists also have three additional options for ballot players: “excluding” them for insufficient event attendance; choosing “I don’t know,” meaning the panelist leaves a player’s score blank and doesn’t wish for it to be counted; or “scoring low,” meaning the panelist has deemed that player sufficiently worse than everyone else they’ve scored. However, panelists are still required to give “exclusions” a vote, in case the rest of the panel doesn’t agree. Once all the panelists have submitted their ballots, everyone’s scores are put together, normalized, and, broadly speaking, combined into a final list that becomes the Top 100.
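The aggregation step can be illustrated with a minimal sketch. To be clear, the real SSBMRank pipeline is not public, so everything here is an assumption: I’m representing each ballot as a dict of player-to-score (with `None` for blank “I don’t know” votes), normalizing each panelist’s scores to z-scores so harsh and generous graders become comparable, then averaging across panelists.

```python
from statistics import mean, pstdev

def aggregate_ballots(ballots):
    """Combine panelist ballots into one ordered list (hypothetical sketch).

    ballots: list of dicts mapping player -> score (1-10),
             with None for "I don't know" / blank entries.
    """
    normalized = {}  # player -> list of per-panelist z-scores
    for ballot in ballots:
        scores = [s for s in ballot.values() if s is not None]
        mu, sigma = mean(scores), pstdev(scores)
        for player, score in ballot.items():
            if score is None:
                continue  # blank votes are left out entirely
            z = (score - mu) / sigma if sigma else 0.0
            normalized.setdefault(player, []).append(z)
    # Average each player's z-scores and sort best-first.
    return sorted(normalized, key=lambda p: mean(normalized[p]), reverse=True)
```

Normalizing per ballot before averaging means a panelist who scores everyone between 4 and 6 carries the same weight as one who uses the full 1-to-10 range; that is one plausible reading of “normalized,” not a claim about the actual method.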

What’s Good About The Rankings?

Any discussion about rankings has to acknowledge the fact that Melee doesn’t have a traditional league or a scene-wide circuit. Without developer support (which we now know is a dead end), we don’t have professional infrastructure comparable to other esports. The vast majority of people with deep investment in the scene – especially Top 100 players and hopefuls – face logistical roadblocks to continued involvement. Travel is expensive, and travel for Melee especially so. But it turns out that when you promise people social recognition for their efforts, those same people are more likely to show up.

I have zero doubt that the rankings are a unique source of motivation for the vast majority of the Top 100 – broadly speaking, the bottom three quarters of the list. Without SSBMRank, I don’t think the rise of players like Joshman, Pipsqueak, Salt, or Zuppy would have been equally possible. Although it’s not something I could concretely prove, it is worth noting that the lack of official rankings during the pandemic was such a common source of complaints that we, Melee Stats, were asked multiple times to create online rankings. Why us? Because the official administrator of the rankings had a conflict of interest with Nintendo and could not acknowledge the existence of Slippi.

The rankings are also inherently flexible in how they incorporate events, thanks to their loose eligibility requirements. Although there are occasional events with controversy surrounding whether they count, don’t count, or the extent to which they count, it’s worth realizing that this has been an inherent feature of SSBMRank – not a bug. Because Melee doesn’t have a centralized league or circuit, attendance rates for tournaments vary drastically across the competitive field. It would be easier if we had events that consistently drew every notable player, but we don’t. Our two most prestigious supermajors can barely get the entire Top 10 under the same roof without a DQ.

Think about it for a second. What benefit, and what actual change on the list, would come from saying “Genesis sets count more than GOML sets” or “Big House sets count more than Smash Con sets”? When it comes to tournaments like locals, The Off-Season, or Redemption Rumble, the responsibility lies on the panelists to fully contextualize results as they deem fit. It has never quite been the rankings’ place to determine the weight of every individual event.

If anything, the decision to prioritize certain events may end up creating a pay-to-be-ranked list. Using Liquipedia and last year’s Top 100 as my frame of reference, each of Trif, Suf, Medz, Colbol, Palpa, Wevans, 404Cray, Gahtzu, and Faceroll attended only one major, yet had enough regional activity to make the list. At the same time, you had Hungrybox attending 17 majors. On top of that, the average attendance for smashers ranked 51 to 100 on last year’s list was 3.1 majors per person. If the attendance requirements were higher or stricter (say, 8 majors, with no regional-activity path), there’s a good chance that half of the entire Top 100 would not even make it onto the list.

I also don’t buy the idea that ‘dodging’ events is in your best interest at the top level. If your goal is to be No. 1, which I believe it should be for the entire Top 10, a lack of events can only hurt you in achieving this goal. This year, Cody Schwab is currently the leading contender for No. 1, and he’s by far the most active player in the top five. He’s probably the best example of how high activity at both the regional and major level can give you a better shot at No. 1. With all respect to Zain, who is also one of the most active top players in the scene, it is hard to imagine that he benefited in any way from Cody attending Arcamelee and winning it over Jmook.

NOTE: I decided to look up the attendance rates of every single Top 10 player since 2017. For what it’s worth, the only years with significant disparities within the Top 10 that, in my mind, came close to inactivity ever benefiting a player were 2019 Leffen and 2018 Armada. The former, I truly believe, was a rare miss on the part of the panel, who valued Leffen’s more stable head-to-heads over Mango winning three majors while attending with noticeable lows. The latter, however, involves the most untouchable player in the history of the game. All in all, that’s one possibly “unfair,” indirect punishment of activity in five years, and it was for Leffen, whom the panel chose to accommodate for his success at a small number of events, given his unique distance from the rest of the scene.

More than these points, however, I genuinely believe that the rankings represent the community. Far from being only friends and favorites of Melee Stats, the Top 100 involves people from all spheres of the scene with a shared interest in ranking players. The whole point of a representative panel isn’t only to give the nerds power. The panel is supposed to reflect the community coming together, given Melee’s lack of traditional support, to celebrate our best players. Besides, the entire point of allowing panelists to vote “exclude” is to offer a fail-safe in case the rankings’ loose eligibility criteria prove inadequate on their own. If the goal was to give the nerds control over the scene, it would be only me and the rest of Melee Stats. Now, in all fairness, that’s exactly how I wanted the All-Time Top 100 to happen, but that’s because I purely wanted to establish Melee Stats as the authoritative voice on the topic. SSBMRank is not like this at all. The list genuinely reflects the community, and it has a uniquely strong claim to do so – more than any other content piece in the scene.

With that said, I can acknowledge that there’s room for improvement. Many of these areas stem from fundamental ‘dilemmas’ that SSBMRank will always run into, forcing it to pick a “less bad” option. For example, the loose criteria around events and what qualifies as a major, though convenient for players and panelists, don’t necessarily help bolster the events ecosystem. There’s also a widespread belief that dodging tournaments, though not in any individual’s best long-term interest for number one, can still be functionally rewarded and incentivized at the top level as a way for someone to protect their rank. Above, I wrote that this wasn’t typically the case for people in contention for number one, but I do think it’s worth exploring in a bit more detail and examining whether it could apply this year.

Some of these issues truly come down to philosophical differences. What do you believe a global ranking should accomplish for the scene? What oversteps the boundaries of creating the list, and what enhances it for a better future for the scene? Are there sources of complaints which, while understandable, are truly not the ranking’s problem? In next week’s MMM, I’m going to break these issues down, explore a few commonly offered alternatives, try my best to put everything in context, and propose some suggestions.
