There are two ways to get involved in the Melee community. The first is to play the game. The second is to watch other people play it. Do the second one enough times, and you’ll get a sense for which players are good. If you’re lucky, you’ll build a mental model of how good each player is relative to their peers. And if you’re good enough at that, you’ll get a ballot for the Top 100 every year.
For those of you who live under a rock, the Top 100 is an annual ranking of the Melee players who, more or less, had the hundred best resumes of a given year. It’s one of the hottest topics of discussion within Melee, alongside the ruleset, alternative controllers, wobbling, and Hungrybox’s popoffs. You know what? People love rankings so much that they hate them. It’s confusing, I know.
Bro I knew you were gonna quote RT and not just reply LMAO more minions this way tho I get it
The whole context of the convo is about you asking for an algorithm as if it would solve the problem, which based on examples like this I don’t think it would
— Aiden (@aidencalvin) August 11, 2022
In today’s column, I’m giving a brief summary of “the rankings.” First, I will explain exactly what “the rankings” entail. Then, I will provide the pros and cons of the annual Top 100 as it exists today. Finally, I’ll give my own opinion on the topic, as well as talk about why we care so much about it.
A Brief History of SSBMRank (How The Rankings Work)
Every edition of the Melee Top 100, from 2013 to now, has been decided by a panel. Whoever has had editorial control of the rankings, be it tafokints or PracticalTAS, has reached out to community members they deemed knowledgeable or otherwise appropriate to evaluate a set of players within a given year. In the original SSBMRank, which tafokints headed, voters were asked to rank players based on skill. It was not a particularly scientific process.
This wasn’t his fault – it just came with the territory of starting something new and exciting for the scene. For example, in 2013, Mew2King had a ballot, and he infamously ranked Hungrybox lower because his method of assessing players took multi-character expertise into consideration. He wasn’t the only one. On some ballots, aMSa received high scores simply because of how he performed in friendlies. Time showed that one to be weirdly predictive, but I think everyone can agree that ‘friendlies’ isn’t a reason to rank someone higher or lower in 2022. This was far from the only problem with SSBMRank. The following year, the panel simply forgot to include Professor Pro on the ballot. Yet in spite of SSBMRank’s numerous issues, it served as an entry-level resource for anyone interested in the scene. For all its flaws, it was a great marketing tool for the community. As long as the Top 10 or so “made sense,” everyone could move on.
Today, I found everyone's 2013 and 2014 SSBMRank ballots in my archives.
Adorably, @MVG_Mew2King wrote a comment on each player he rated for his 2013 ballot.
This made my day.
— Tafo (@tafokints) December 19, 2019
Obviously, the Top 100 remained an annual mainstay of the scene. It also developed a more structured process so that it could carry more ‘official’ legitimacy. Voters were eventually given head-to-head spreadsheets of qualifying players, who were decided on by people like Wheat and Tafo. In 2018, PracticalTAS took the reins of the Top 100 as its new project manager, with Panda obtaining the rights to it. Even the prompt was changed from a skill-centered focus (rank the players in order of how well you think each player would perform in 10 hypothetical events) to the results-oriented one (assess who performed the best at events within a given time frame) we know today.
One thing, however, remains: we continue to use a community-panel process for determining the Top 100. Here’s how it works, broadly speaking:
- If you’re interested in having a ballot, you beg PracticalTAS for one.
- If you’re accepted, you will receive a spreadsheet of 120 or so players, hand-picked by Wheat, who have had the best years. The majority of qualifying players come from the most recent rankings – summer and annual. Then, typically, Wheat scours their major and regional head-to-head results to find the remaining ‘borderline’ candidates: previously unranked players whose names show up the most. Without going into specific situations, that’s broadly how the ballot is determined.
- You, as a panelist, will then be asked to put a number to each player’s results on a scale of 1.0 to 10.0 – 10.0 being the best, 1.0 being the worst – assessing their major, regional, and local results as you see fit. This is due at a date set by PracticalTAS.
- PracticalTAS compiles all the ballots, pooling each panelist’s scores together, and pulls some nerd shit (basically accounting for outliers, as well as weighting each player’s numerical values on individual ballots) to come up with the panel’s cumulative ranking of each player. There’s a sketch of what that might look like after this list.
- The community has its final Top 100 rankings.
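To make that last step less hand-wavy, here is a minimal sketch of what the compilation could look like, assuming two common techniques: standardizing each ballot so harsh and generous graders count equally, and trimming each player’s highest and lowest scores to blunt outliers. These are my illustrative choices, not PracticalTAS’s actual formula, which hasn’t been published in this detail.

```python
from statistics import mean, stdev

def aggregate_ballots(ballots: dict[str, dict[str, float]], trim: int = 1) -> list[tuple[str, float]]:
    """Combine panelist ballots into one cumulative ranking.

    ballots maps panelist -> {player: score on the 1.0-10.0 scale}.
    Hypothetical aggregation; not the real Top 100 formula.
    """
    # Standardize each ballot so a harsh grader and a generous grader
    # contribute on the same scale.
    standardized: dict[str, list[float]] = {}
    for scores in ballots.values():
        mu, sigma = mean(scores.values()), stdev(scores.values())
        for player, score in scores.items():
            standardized.setdefault(player, []).append((score - mu) / sigma)

    # Trimmed mean per player: drop the `trim` highest and lowest scores.
    results = []
    for player, zs in standardized.items():
        if len(zs) > 2 * trim:  # only trim when enough panelists scored this player
            zs = sorted(zs)[trim : len(zs) - trim]
        results.append((player, mean(zs)))

    # Highest average standardized score ranks first.
    return sorted(results, key=lambda pair: pair[1], reverse=True)
```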
For the purpose of this discussion, the argument for and against “the rankings” rests on the following assumptions:
- The Top 100 rankings, as we know them, are panel-based.
- The qualifying process for which players are on the ballot stays the same, handpicked by an administrative leader within the ranking team.
- Topics of “what counts” between locals, regionals, and majors remain sorted out by each individual panelist’s subjective assessment rather than explicitly defined by whoever is managing the rankings.
- Arguments for and against the panel are made independently of who specifically runs it – Panda, MeleeItOnMe, or any other entity or individual. Discussions about these groups, or any other groups, are beyond the scope of this column.
What’s Wrong With The Rankings
The Top 100 is an insane process. On the administrative end, it is a pain in the ass to write blurbs, sift through tons of ballot applications, gather head-to-heads, and collect placements for 120 players – let alone decide who qualifies and who doesn’t. Individual panelists, meanwhile, have to go through more than a hundred players’ results and submit a ballot, only to see a final product that does not necessarily reflect their views. By the way, this is all while they watch the entire process get mocked or flamed online by random shitters. I’m not trying to milk sympathy points here – clearly I do it because it’s more fun than it hurts my feelings or whatever – but it’s obviously not some flawless process.
Top players talk all the time about how the fear of “dropping in the rankings” scares them away from attending tournaments. For tournament organizers, this is just about the worst thing you could do for a ranking’s legitimacy, let alone the scene. If the goal is to rank players by some metric of “skill” and “results,” why would you do something that potentially incentivizes winning players to skip tournaments on the off chance that they’d lose? Good luck trying to get iBDW to a Northeast event when he’s convinced that it will hurt his rank to play anyone who isn’t in the Top 5. Whether or not that belief is true, if a top player decides an event isn’t worth their time, a panel-based ranking system has no mechanism to tell them otherwise. The closest thing we have today is panelists subjectively deciding to punish “dodgers” or adjudicate which reasons for skipping events are legitimate. With that said, come on. Are you really going to drop Leffen below KoDoRiN for attending fewer tournaments when he’s across the world? If you talk to most panelists, the answer is “of course not.”
A potential byproduct of this process is a bunch of tournaments whose prestige and importance are constantly undermined by spread-out attendance. In other words, you get a lot of “Mickey Mouse” majors and little crossover. This obviously screws over tournament organizers who aren’t leading Genesis, The Big House, or Summit. Hell – even those majors are somewhat diminished when somebody DQs or decides not to play Melee there. It’s not rocket science to say that having more good players attend events is a net positive for the scene. That’s not just because “it feels good,” but because notable players make it easier for events to pitch themselves to sponsors and draw good stream viewership, one of the few areas that majors can monetize.
The most commonly suggested alternative is an algorithm. Importantly, we have to acknowledge the basics: an algorithm is not, by any means, an “objective” ranking, because algorithms are designed by individuals. But to steel-person the idea for a second, it is “deceptively objective” in that enough people believe the presence of an algorithm is more “objective” than a panel. The argument goes something like this: fewer people will complain. Less manpower is needed to fine-tune an existing algorithm. There’s built-in transparency into “what counts” for the algorithm and “how much.” If anyone complains about their rank, point to the almighty algorithm.
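To put one concrete shape on that idea: the simplest version people usually have in mind is an Elo-style rating updated after every set. Here is a minimal sketch, and note that every number in it – the K-factor, the 400-point scale, the starting ratings – is a design choice some human has to make, which is exactly why “algorithmic” doesn’t mean “objective.”

```python
def elo_update(r_winner: float, r_loser: float, k: float = 32.0) -> tuple[float, float]:
    """One Elo-style rating update after a completed set.

    Illustrative only: k (how fast ratings move) and the 400-point
    scale are arbitrary human choices.
    """
    # Probability the winner was expected to take the set.
    expected = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected)
    return r_winner + delta, r_loser - delta

# An upset moves ratings far more than an expected win does.
print(elo_update(1800.0, 1500.0))  # favorite wins: ratings barely move
print(elo_update(1500.0, 1800.0))  # underdog wins: big swing
```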
Another frequently raised option is Blur’s suggestion of tennis-style, placements-based rankings. This type of ranking has precedent in Smash, as it was used during the MLG era. It places less emphasis on subjective assessments of skill, focusing instead on results and volume of attendance. A community unified around a placements-centered ranking would constantly incentivize top players to go to more tournaments. From the standpoint of selling the scene to sponsors and creating an infrastructure around competitive Smash, this is obviously very important. If someone complains that they’re better than a player ranked above them, too bad. Go to events. This type of ranking would also hypothetically tie into a circuit of sorts: a series of selected events that everybody knows “counts most” ahead of time. The goal is to get notable people to events, sell those events to sponsors, and bring money into the scene.
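For a feel of the mechanics, here’s a minimal sketch of how such a system might score a season, assuming a hypothetical points table and a “best N finishes count” rule. The tier values and the cap are invented for illustration – they are not Blur’s numbers or the old MLG formula.

```python
# Hypothetical points table: event tier -> {placement: points}.
POINTS = {
    "major":    {1: 1000, 2: 700, 3: 500, 4: 360, 5: 240, 7: 160, 9: 100},
    "regional": {1: 250,  2: 175, 3: 125, 4: 90,  5: 60,  7: 40,  9: 25},
}

def season_score(finishes: list[tuple[str, int]], best_n: int = 8) -> int:
    """Sum a player's best_n finishes over a season, tennis-style.

    finishes is a list of (tier, placement) pairs; placements outside
    the table earn nothing.
    """
    earned = sorted(
        (POINTS[tier].get(placement, 0) for tier, placement in finishes),
        reverse=True,
    )
    return sum(earned[:best_n])

# A frequent attendee can outscore a more selective player with higher peaks.
grinder = [("major", 5), ("major", 7), ("regional", 1), ("regional", 1), ("regional", 2)]
sniper = [("major", 2), ("major", 4)]
print(season_score(grinder), season_score(sniper))  # 1075 vs. 1060
```

The load-bearing design choice is the best-N cap: extra attendance can only help a player’s score, which is precisely the incentive a panel-based system struggles to create.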
In Defense of the Rankings
Melee has operated for so many years independently of our game’s developer. We’ve had to build everything, more or less, through our own blood and sweat. It sounds dumb, but there’s a beauty to the unhinged style of representative democracy that we have in so many sectors of our community. The rankings are no different. We love them because, in a sense, we “own” them. Where else can you find a communal and celebratory process like the Top 100 but in a community like Melee, where we do everything together?
Furthermore, it’s not fair to attribute the prominence of “Mickey Mouse” majors to the pressure of the rankings themselves. Top player streamers constantly talk about how much easier it is to stream Melee from home – and how much money they typically lose in subscribers when they attend tourneys. That sounds like a problem that goes way beyond rankings. And even if it didn’t, is it relevant? A panelist’s job is to evaluate results and subjectively put a number to the resume attached to someone’s name. A panel is full of panelists with different numbers for different resumes. If we’re gonna call this process throwing shit at a wall, that’s just a byproduct of democracy. By the way, if the rankings were so bad, why hasn’t there been a long-sustained alternative to the PGR or SSBMRank? Well, I guess there sort of was, but on that point, I will simply say, as my dear friend Ambisinister once said, “lol.”
-blur when you mention BlurRank in your video
— D (@The_D_ssb) March 9, 2022
Besides, just because players complain about their ranking doesn’t mean it isn’t accurate – not even if they win an event right after a given ranking period. What if the players are just full of shit? I’m obviously being glib here – panel rankings are not perfect – but on a serious note, it’s important to take complaints on a case-by-case basis. If ten players are unhappy with their spot on the Top 100 but 90 players are happy, is it really worth overturning the whole process? What about players who are unhappy, but for completely different reasons? The mere presence of dissatisfaction isn’t, by itself, a justification to re-examine the way we do things.
It’s also not clear-cut that the problems that come with a panel-style system would be solved by either the proposed algorithm-style approach or the tennis-style approach. As far as the former is concerned, I don’t see the benefit of functionally replacing the entire panel with one person’s continuously fine-tuned opinion on which events count more than others and who makes the Top 100. At worst, it swaps a panel’s opinion for an individual’s. You could “democratize” the process by having panelists determine numerical values for sets and events – but that seems like an even bigger pain than ranking players. And though it could be gradually fine-tuned to avoid this, I think an algorithm in practice is more likely, if anything, to make top players extremely selective about which events they attend, especially if there’s an activity requirement and the player is capable of winning majors.
The tennis-style rankings approach is interesting to think about, though its current feasibility is questionable. The tradeoff could be worth it in a scene that is actually unified, rather than fragmented into different formalized spheres. In an ideal world with multiple events that people want to attend, one centralized production company, and a “league” of TOs willing to work together, I could see this working. Of course, that’s not the world we live in now. Could we take steps to get there? Sure, but one of the first steps shouldn’t be abandoning a community tradition that people, through all their complaining, still see as legitimate. In practice, this approach could really hurt international players, or people in isolated regions who don’t get the chance or have the resources to travel.
Maybe if you’re ruthless, you can tell them “too bad” and prioritize top player attendance over inclusion. However, as simply as I can put it, I disagree. It’s better to have a healthier international scene than it is to do something that would alienate so many, just for the sake of possibly getting a top player to an event. If the sustainability of the community depends on being willing to throw so much promising talent under the bus for the sake of appeasing a small group of people, then we’ve lost our heart.
Why This All Matters & My Opinion
From your average r/smashbros poster to iBDW, everybody has an opinion on rankings. Why? Why does this feel so important? If you talk to panelists about why they take the process so seriously, one thing you’ll hear a lot is that rankings affect professional opportunities for the players involved. The difference between No. 50 and No. 51 on an annual ranking doesn’t sound like much, but to a sponsor, the difference between “Top 50” and “not Top 50” affects salary negotiations. There is real pressure on panelists to be fair to player resumes because the outcome can genuinely affect a player’s chances of turning Melee into a career.
But I think it’s deeper than that. Random people in the scene – and I mean this non-derogatorily – create their own Top 100 ballots and share them with their friends. “Who’s in your Top 25?” is a question that everyone talks about and clearly tries their best to answer. I really think it boils down to us collectively valuing the process of communal recognition – feeling like we’re part of something together. On the players’ end, it’s seeing your effort toward the game appreciated by other community members. On the “fan” or panelist’s end, it’s telling the players, “I see you.” There’s a sense of belonging that comes with the existence of the Top 100 because it’s run by members of the scene itself rather than an official entity or a predefined process.
Close your eyes for a second. Imagine if a rolling ranking entirely replaced the Top 100. Or if there were, heaven forbid, an algorithm. Imagine we all just stopped having an opinion on who was Top 25, 50, 100, etc. – or that we gave up having opinions because the process was completely outside the average community member’s hands. Wouldn’t it feel awfully lifeless? Wouldn’t something be fundamentally missing from the scene?
To me, that “something” is the participatory element of communal ownership over how we recognize the contributions of our most dedicated players. I am clearly biased as a panelist myself, let alone as someone who isn’t convinced that the other approaches to rankings fundamentally solve any of the issues inherent in the current one. But I truly think we’d lose part of our soul if we replaced the Top 100. We should keep the rankings and recognize them as the community’s most sincere attempt to express our gratitude to the players whose play has inspired all of us – all while retaining the competitive spirit that keeps our game alive.