Last Friday, Ludwig announced “The Match”: a first-to-ten exhibition between Cody Schwab and Zain to determine the No. 1 spot on SSBMRank. It will take place on December 15, marking the first time in the Top 100’s history that such an exhibition has had direct rankings-related stakes, let alone for the highest position. I mention this as context because – and this is not to brag – I have known about this event for the last couple of months. If anything, my knowledge of “The Match” has, in part, inspired me to write many columns about the rankings themselves, their history, and even Cody-Zain.
In last week’s Monday Morning Marth, I examined what I believe the Top 100 rankings do well and addressed some common misconceptions. Today, I’m going to talk about areas where I believe there is more legitimate ground for concern about SSBMRank, and dive into the real issues that each of those problems hints at. I will then bring up some commonly proposed alternatives that I don’t believe address these concerns.
Is SSBMRank Too Powerful?
SSBMRank carries the weight of a league-wide standings, one which prospective sponsors, in practice, treat as legitimate. As a result, the rankings have outsized influence over the careers of multiple people, ranging from up-and-comers to actual major contenders. In fact, they have such an influence that a notable player straight up asked to be left off the list. For whatever it’s worth, I don’t say that to criticize this person for doing so, but as an example of the innate pressure that SSBMRank is capable of putting on the players.
https://twitter.com/Swiftssbm/status/1732744649989562403
While I believe the system around qualifying is ultimately accommodating to the players, I can also understand the belief that it’s much more stressful to navigate a subjective system than it is to follow a concrete set of rules. The reality is that if you asked top players which they prefer, they would overwhelmingly say concrete, inflexible guidelines.
Now, if the Top 100 were only a subjective content piece meant to hype up players, I would not actually care about this problem. I would say, “too bad, it’s our opinion.” In a sense, I said a much politer version of that in response to Swift. But simultaneously, the level of SSBMRank’s community recognition carries the authoritative weight of a league standings. It isn’t just an opinion; it has official status in the scene.
Although it sometimes truly is the best way to handle conflict, it may not always be the play to tell the top players “too bad.” It’s objectively not good for the scene to have its best players afraid of attending events. SSBMRank ultimately has a responsibility to players, one that goes beyond being a list of cumulative opinions, no matter how accurate they may be. Clearly, there is a messaging problem here.
Does SSBMRank Reward Dodging?
In last week’s column, I talked about why I don’t really think dodging events is beneficial to a player’s chances of getting the top spot on SSBMRank. I even brought up Cody Schwab as a great example of how attending can only benefit you. While I still think it’s overwhelmingly in someone’s best interest to attend tournaments and try to win them, I do want to talk about the real problem at hand. It’s less that avoiding tournaments helps top players and more that a player could preserve the impact of prior accomplishments by not attending other events.
Granted, this is not something the panel should encourage, but it could happen. In fact, it may actually happen this year, with Leffen and Plup likely to receive the fourth and fifth spots on the Top 100 despite having attended fewer events combined than moky.
as frustrating as rankings can be as a competitor they are incredibly beneficial
that said i very often see "top players are the enemy!" kinda talk from a lot of ranking people which is disheartening, and im sure that them seeing people mald about rankings is disheartening
— moist | moky (@moky_dokie) December 7, 2023
To be clear, I understand that these dilemmas don’t necessarily have clear-cut answers. Given that Leffen won a whole major and Plup won two big regionals, it is not totally unreasonable for someone to believe they had distinct accomplishments. But even if the panel’s decisions on their actual spots perfectly captured what the community wanted to reward, they would still have the direct impact of rewarding players whose résumés include significantly few events. Dodging may not directly help, but it can preserve a favorable status quo for certain players. Maybe the community truly doesn’t care about this, and the panel reflects that, but I don’t agree that it should.
Does SSBMRank Count The Right Events?
Discussions about what counts and what doesn’t are a constant in the community. It used to be locals that everybody argued about, then Smash Camp, then The Off-Season – and now it’s all three. I won’t lie: this is one of the more frustrating things to talk about. As I mentioned, event eligibility is purposefully ambiguous. Although the Top 100 has an internal system for counting majors and regionals in the head-to-head table, it has not typically been the rankings’ place to outright say a tournament doesn’t count.
I’ve thought about this for some time, and there’s a much bigger picture here. The real issue is the belief that the rankings are too outwardly neutral on the tournament ecosystem. Now, I don’t think this has always been a bad thing. Because Melee is loosely decentralized, I can understand wanting to prioritize flexibility over structure. You could also argue that this is truly not the Top 100’s problem: if tournaments want to be taken seriously, they ultimately have to make that case to the public and the players. For a long time, the rankings have prioritized reflecting the public’s and players’ perceptions of events rather than proactively putting their foot down.
However, the irony is that the Top 100’s established neutrality means ‘major eligibility’ for the rankings is essentially determined after the fact. If you ask most people about the difference between a “major” and a “not a major,” it comes down, broadly speaking, to how many Top 5 or Top 10 players were in attendance and actually competed. Long story short: the reason Riptide isn’t a major is that Zain DQ’d. This particular case speaks to the disproportionate leverage that top players have over event prestige, which also factors into how SSBMRank counts majors and regionals for head-to-heads.
unsure how to solve the growing issue of top seed DQs into Mickey Mouse brackets in a way that doesn’t involve locking wheat in a tiny tiny room until top 32 has started
but it’s getting a little ridiculous
— chrome (real) (@chromeohnine) July 17, 2022
Does the lesser prestige of certain tournaments have any concrete impact on attendance or viewership? I can’t claim to know the answer, but at the very least, the Top 100’s neutral approach toward event prestige does nothing to improve the lopsided dynamic that the players have over events – and, to an extent, over the rankings themselves.
Is SSBMRank Not Transparent Enough?
I talked to a few people behind the scenes about issues they had with the Top 100, and a “lack of transparency” was another one that came up. When I pressed them on how public everything already is, from qualifying eligibility to weekly content about tournament results, they acknowledged those elements but noted that they still had no idea where their current “spots” were at any given time, or which tournaments would “count” for them. At least among the top players I’ve spoken to, the sentiment seems to be that the entire SSBMRank process is shadowy.
To be honest, I’m not really sure what to make of this concern. On one hand, I don’t think SSBMRank “owes” it to the players to offer updates on where they sit on the list at any given point. I also just don’t believe the process is entirely shadowy – particularly because I’ve written about it numerous times here, sent it to people, asked for their thoughts, and received hardly any response. It is hard to be told “you’re not transparent enough” when you have been attempting to be as transparent as you possibly can, particularly by people with large platforms.
But on the other hand, I can understand the frustration that someone may feel at not knowing their position on a list until after the fact. In a league-wide standings, you could theoretically see your spot relative to other competitors and have a clear goal. In a panel system, however, and in most cases, the best you can do is guess a range of where others may place you. That may not necessarily be the worst issue in the world, but it is one I’ve heard enough times to where I’d bring it up as a common complaint.
Alternative 1: A Tennis-Style Points System
One of the most commonly suggested alternatives to the current panel-based system is a placements-based one. This would assign points to each event and to the placements at it, with the final list composed of the players with the highest cumulative points across the entire year. The pros of this system are easy to understand: there’s no ambiguity, it incentivizes attendance, and it’s entirely transparent. Don’t like your spot on the list? Go to more events and win them. With all that said, if Melee had used a tennis approach for last summer, here’s an example of what our Top 10 would look like.

Now keep in mind – this is one particular tennis-based system. But I’m less concerned about the actual order of the list and more about the process behind it. If you ask most panelists what their problems with a tennis-style system are, they will mostly complain that the list will become “bad.” While I totally understand that angle, it misses the forest for the trees. The issue is less that the list itself will be “bad” and more that the process behind creating it will be entirely different from what the community values through the panel.
What makes the current Top 100 great is that multiple different perspectives on how to evaluate players come together and clash to make a list that reflects what the community thinks as a whole. Casting all of that aside for one concrete, unchanging system is a very big change, and one that doesn’t fit a decentralized grassroots scene. Worse yet, this system would financially gatekeep being a top player.
To be clear, I am not opposed to this idea being implemented in some other capacity to complement the rankings. I must also confess there’s a higher chance of me being wrong about this than in the past. If Ludwig were to put $100,000 into the community and start treating up-and-coming players like Rolex watches, that would certainly be one possibility. But would the community ultimately be willing to step up to support a league with BTS-style content around it and tennis-style standings? Is it worth taking that risk while overturning the existing process? Is SSBMRank the place for this to happen? I don’t believe it is.
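For concreteness, the mechanics of a placements-based system are simple enough to sketch in a few lines. Everything here – the point values, the event names, and the placeholder players – is invented for illustration; a real system would tune the points table carefully.

```python
# A minimal sketch of a tennis-style cumulative points system.
# Point values and results are invented for illustration only.

POINTS_BY_PLACEMENT = {1: 100, 2: 70, 3: 50, 4: 35, 5: 25, 7: 15}

# event -> {player: placement}; hypothetical data
results = {
    "Event A": {"P1": 1, "P2": 2, "P3": 3},
    "Event B": {"P2": 1, "P4": 2, "P1": 3},
}

def season_standings(results):
    """Sum each player's points across all events, highest total first."""
    totals = {}
    for placements in results.values():
        for player, placing in placements.items():
            totals[player] = totals.get(player, 0) + POINTS_BY_PLACEMENT.get(placing, 0)
    return sorted(totals.items(), key=lambda kv: -kv[1])
```

Note how every design question the column raises lives in that tiny points table: how steep the drop-off between placements is, and whether all events award the same points, entirely determines who ends up on top.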
Alternative 2: The Almighty Algorithm
I will not hide how much I hate the idea of an algorithm, but to steel-person it a little bit, it does offer a superficial appearance of ‘authority’ and ‘objectivity’ in the scene. An algorithm would also be easy to maintain, as it would essentially involve one person or group, and could therefore flexibly evolve with that person’s values and ranking philosophy. On a marketing level, it’s pretty easy to sell this idea to the community. In fact, people try it all the time.
Ran an Elo ranking on 2022 Melee data just to prove how bad it would be… oh boy pic.twitter.com/jTklsUI229
— Jeremy (@ETossed) February 13, 2023
Overwhelmingly, though, it leads to a terrible list. People have tried this idea numerous times, and practically every time, they’ve had to go through painstaking measures to make their particular algorithm “good” before eventually giving up or asking someone else to do free work for them. The reality is that algorithms don’t really work on Melee results without insanely granular weighting of different head-to-heads (in order to filter out the Hanky Pankys and Lukademus’ of the world).
Even if SSBMRank were to run an algorithm-based system that churned out a perfectly accurate list, I’m fundamentally opposed to the process. I cannot stand the idea of one person lording over the entire scene and treating their values as “objective” ones to rank players by. In a panel system, you at least get opportunities for dissent. The algorithm wouldn’t be “objective” at all; the best-case scenario is a more complicated process for either the panel or one person to churn out the same list as before.
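To make concrete what these community experiments actually run, here is a minimal sketch of a standard Elo update applied to set results. The starting rating and K-factor are arbitrary choices (every one of those choices is a value judgment someone made, which is the point), and the player names are placeholders.

```python
# Minimal Elo update over individual set results.
# START rating and K-factor are arbitrary, illustrative choices.

START, K = 1500.0, 32.0

def expected_score(r_winner, r_loser):
    """Win probability of the first player under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))

def apply_set(ratings, winner, loser):
    """Update both players' ratings in place after one set."""
    rw = ratings.get(winner, START)
    rl = ratings.get(loser, START)
    e = expected_score(rw, rl)
    ratings[winner] = rw + K * (1.0 - e)
    ratings[loser] = rl - K * (1.0 - e)
```

Even this toy version shows where the “painstaking measures” come in: every tweak to K, to the starting rating, or to which sets are fed in materially reshuffles the list, and each tweak is one person’s judgment call dressed up as objectivity.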
Alternative 3: The Mango System
This is a new system proposed by none other than Mango. At its core, the idea is that every player gets to submit their best performances of the year to the panel (Mango said their six best, but noted that the exact number mattered less than the concept). The panel would then make a Top 100 based on those performances only. It’s easy to get the appeal of this idea. From the players’ perspective, they can only benefit from attending events, and they have much more control over their results. It turns the process toward them, letting them submit their best results to the panel akin to a student being graded only on their best work. This system, in particular, is very popular among top players. It would admittedly be very politically convenient to run it for that reason alone.
But I had rarely seen this idea actually modeled, so I asked a dear friend, s-f (a Melee Stats contributor and panelist), to use last year’s summer period as the basis for a “Mango System Summer Rank” – at least for his best guess at what each player would choose as their best six results. Here’s what a Top 10 from that period would look like, in comparison to what the summer rankings actually were. Note: s-f did not count Mango, Leffen, or Plup as active due to Mango’s criteria.
| SSBMRank Summer 2023 | The Mango System Summer 2023 |
| --- | --- |
| Zain | Zain |
| Cody Schwab | Cody Schwab |
| Jmook | Jmook |
| Leffen | moky |
| moky | aMSa |
| Mango | Hungrybox |
| aMSa | KoDoRiN |
| Hungrybox | Aklo |
| Plup | Axe |
| KoDoRiN | Polish |
Discounting the actual order of this list once again – since the whole appeal involves its impact on a player’s attendance rather than being applied to an already existing set of performances – I’m genuinely uncertain about what problem this system solves. If anything, it would formalize the players having greater leverage over events while attending fewer tournaments. If someone decides they have already gone to enough events to secure a high rank, they have no incentive to attend anything else, essentially robbing other people of the ability to defeat them and leaving panelists in the same rank-protecting dilemma that players are unhappy about now. It might even leave the panelists in a tougher spot: does a win over Mango or Skerzo “count” if it came at an event that neither of those two players counted toward their own résumé?
Although I agree that a ranking should incentivize people to attend events and ultimately grow the scene, let’s be real: with all due respect, this specific system coddles the players. No sport outright ignores losses; that would be fundamentally against the values of competition. The best-case outcome is something similar to what exists now, just with a more winding path and maybe something a bit more convenient for the top echelon.
Having spoken to Mango about this, I was hit with the “keep 11-100 the same, but use the Mango System for the Top 10” alternative. To that, I’d say I don’t think it’s fair to essentially determine who qualifies for this system before the actual tournaments of a given ranking period; that feels like favoritism. But I did keep it in mind for my next system.
Alternative 4: An Eldritch-Hybrid System
Although these ideas each have different pros and cons on their own, one immediate suggestion would be to create a system that implements features from all of them. It could be built in a variety of ways, so I’m going to do my best to sketch one edition of it.
The first thing: keep the vast majority of the Top 100 (21-100) exactly the same – a panel-based vote on the quality of results. But as the list heads into its more “visible” part, we could reach out to each of the eligible players with the 20 highest panelist scores and privately ask them to submit their four highest-placing major results for the year. After this process, there would be a SEPARATE ranking within SSBMRank called “The SSBMRank Premier Standings.” This would be a placements-based system that finalizes the Top 20, as those players have been deemed “worthy” of entering the premier standings by their panel scores. However, as a check on the panel by the executive team (in this case, myself), we would, to everyone’s knowledge, predetermine 14 events that count for the premier standings.
EDITOR’S NOTE: You could also do this a different way – if a player’s final score ends up in the Top 20 but the panel simultaneously deems them inactive, they could become an “honorable mention” or “hidden boss” within the premier standings, with perhaps two open spots for events that could retroactively be considered majors.
Just for the sake of argument, I’m going to post what a Top 20 would look like with a version of this system for 2023. The only difference is that I will be using my arbitrary pick of 20 players rather than the panel’s, as the 2023 rankings are not complete yet. I will also pick the following events to “count”: Genesis, Collision, Major Upset, Battle of BC, Tipped Off, LACS, CEO, Fete, GOML, Smash Con, Riptide, Shine, Big House, and Arcamelee. I wrote this before Santa Paws, so I am not counting that.
| Player | Total Points |
| --- | --- |
| Cody Schwab | 572 |
| Zain | 444 |
| Jmook | 362 |
| Plup | 282 |
| Mang0 | 216 |
| moky | 198 |
| Leffen | 186 |
| Hungrybox | 162 |
| aMSa | 162 |
| Aklo | 90 |
| Wizzrobe | 78 |
| lloD | 42 |
| Axe | 36 |
| Zuppy | 34 |
| KoDoRiN | 33 |
| Trif | 24 |
| Polish | 24 |
| Salt | 16 |
| Joshman | 13 |
| Morsecode762 | 10 |
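The premier-standings computation described above reduces to a small filter-and-sum. The event names, point values, and placement data in this sketch are invented stand-ins, not the actual values behind the table.

```python
# Sketch of the hypothetical "premier standings" scoring:
# only predetermined events count, and only each player's four
# best placements among them are summed. All data is invented.

COUNTED_EVENTS = {"Major A", "Major B", "Major C", "Major D", "Major E"}
POINTS = {1: 100, 2: 70, 3: 50, 4: 35}
BEST_N = 4

def premier_score(player_results):
    """player_results: list of (event, placement) tuples for one player."""
    scores = [
        POINTS.get(placing, 0)
        for event, placing in player_results
        if event in COUNTED_EVENTS
    ]
    return sum(sorted(scores, reverse=True)[:BEST_N])
```

The `BEST_N` cap is what distinguishes this from the pure tennis system: results at non-counted events, and anything past a player’s four best counted placements, contribute nothing.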
Full disclosure: I don’t think it’s a good system. Although it keeps most of the list the same, at the very top it gives players way more “leverage” over the panelists, while giving events more “leverage” over the players to get them to attend. And lest you think this is only a problem with my specific hybrid, any hybrid system would be long and complicated, with specifics that are hard to fine-tune on the fly.
But remember, I’m not worried about the actual order – it could theoretically be a perfect list with the right adjustments. The important part is that maintaining such a list, purely selfishly, would just not be fun for me and, in all likelihood, the rest of the team, which brings me to my next point.
The Key Rule of Smash
Top players want to be able to attend events, compete, and have fun without the pressure of “their rank” lingering in the back of their minds. Tournament organizers want the rankings to formalize their ‘authority’ in the scene so that they can make their events more successful, get players to attend, and enjoy running them. But everyone knows this – what I haven’t really seen is a public articulation of why the rankings actually exist. I’m going to do my best to offer one.
I genuinely love ranking players. The process of going through their results, no matter how time-consuming it can be, is genuinely fun for me and several other people. I love learning about results that I otherwise would have never noticed. The order of each player is important, but more than anything else, the list is an overarching structure for sharing some of the best stories in the scene. Everybody understands, to a degree, that the rankings reflect tournament performances and are ultimately “content,” but I don’t think that quite captures what makes them special: the stories.
The last few months have shown me that the passion people have for this project is proof of its legitimacy within the scene, even as people criticize it. My involvement in the Top 100 has genuinely been a privilege and a joy. Even though it has also come with stress and a lot of ugliness, both privately and publicly, the attention this series receives is not the worst part of it – at least not without also being the best part. Here’s the key: every single argument over the rankings is our community trying to figure out what’s fun, and how the rankings can match their version of fun – not a war between opposing factions. We are all on the same team here, and we just want what’s best for Melee.
For next week’s column, I’m going to offer four proposals for SSBMRank that I believe reconcile everyone’s version of fun the best. It is ultimately not my decision to enact anything new, but that’s okay. I will jump into these ideas as a show of good faith and an attempt to offer a potentially fun future for everyone.
