Published March 18, 2024

In September 2021, I was riding high. Mere days after I called my shot and predicted that Cody Schwab (then known as iBDW) would win Riptide, he went and did it. This, mind you, was around the same time that I routinely farmed side bets against other people. Little did I know, however, how bad things were about to get for me in the predictions department.

Although the money kept rolling in on overall side bets, my hit-rate on major predictions within the Monday Morning Marth space has been quite terrible. At this point, the “Curse of Monday Morning Marth” has become a running gag: my semi-official endorsements have become like the Grim Reaper to any prospective player’s chances of winning a tournament. It’s now gone on for two-and-a-half years.

In this period of time, I’ve started to wonder about the impact of my words. Do they truly just damn the players whom I think will do well? In today’s column, I’m going to perform a statistical analysis on where I’ve gone wrong, how bad it’s gotten, and determine if the Curse of Monday Morning Marth actually exists.

Methodology

Now, it’s important to note that I have not made official predictions for every single major. While the events I did preview form a pretty good data set, I could, in theory, have also evaluated the average placements of major contenders at events I didn’t preview, perhaps to see whether my bullish takes on certain players correlated with their results down the line. In the end, though, I think the events I have previewed since Riptide offer a pretty good look at how off the mark I ended up.

Anyway, the first thing I did was review each of my Monday Morning Marths following the original Riptide. I then decided to track every event I gave previews and – importantly – predictions for. I went into this a little bit last week, but as I talked about above, this was a problem well before aMSa won The Big House 10. For each of my predicted players, I then tracked roughly how many sets away they were from winning the event, making sure to exclude results in which my predicted players ended up DQ’ing (which accounts for my “Cody wins Full Bloom 2024” and “Zain wins Riptide 2023” calls). I just thought those made for messier data, and pre-tourney DQs are too big of a variable to account for.
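For reference, a "sets away from winning" figure can be derived mechanically from a player's final placement. Here's a minimal sketch under simplifying assumptions of my own (a standard double-elimination bracket, and ignoring grand-finals bracket resets), which is not necessarily how the column's numbers were tallied:

```python
# Standard double-elimination placement cutoffs: 1st, 2nd, 3rd, 4th, 5th, 7th, ...
PLACEMENTS = [1, 2, 3, 4, 5, 7, 9, 13, 17, 25, 33, 49, 65, 97]

def sets_away(placement: int) -> int:
    """Rough number of sets a player was from winning the bracket,
    given their final placement. Ignores grand-finals resets, so
    treat the result as an approximation."""
    if placement not in PLACEMENTS:
        raise ValueError(f"{placement} is not a standard double-elim placement")
    # A champion is 0 sets away; each elimination round further back adds one.
    return PLACEMENTS.index(placement)
```

Under this scheme, `sets_away(2)` returns 1 (a grand-finals loss) and `sets_away(5)` returns 4.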

| Tournament | My Pick | Sets Away From Winning |
| --- | --- | --- |
| Mainstage 2021 | Hungrybox | 2 |
| Smash Summit 12 | Zain | 3 |
| Genesis 8 | Plup | 8 |
| Smash Summit 13 | Zain | 5 |
| Battle of BC 4 | aMSa | 4 |
| GOML 2022 | Zain | 3 |
| Smash Summit 14 | Leffen | 6 |
| Collision 2023 | Cody | 3 |
| Battle of BC 5 | Moky | 7 |
| LACS 5 | Cody | 3 |
| Fete 3 | Hungrybox | 3 |
| GOML 2023 | Jmook | 11 |
| Super Smash Con 2023 | Mango | 5 |
| Shine 2023 | Zain | 1 |
| The Big House 11 | Zain | 4 |
| Genesis X | Mango | 7 |

After collecting this information, I went through each of these events, identified the other “major contenders” in attendance, and recorded their average sets away from winning the event. What qualifies as a major contender is, of course, inherently subjective, but I believe the way I chose to proceed is fairly defensible. I treated Zain, Cody, Jmook, Leffen, Plup, Mango, aMSa, Hungrybox, Wizzrobe, and moky (2023 onward) as my group of major contenders.

| Tournament | Major Contenders Present (Non-MMM Picks) | Sets Away From Winning (Average) |
| --- | --- | --- |
| Mainstage 2021 | 2 | 2.5 |
| Smash Summit 12 | 6 | 3.7 |
| Genesis 8 | 6 | 4.4 |
| Smash Summit 13 | 6 | 3.8 |
| Battle of BC 4 | 4 | 2.5 |
| GOML 2022 | 5 | 3.8 |
| Smash Summit 14 | 8 | 4.1 |
| Collision 2023 | 5 | 4.4 |
| Battle of BC 5 | 7 | 3.7 |
| LACS 5 | 6 | 3.7 |
| Fete 3 | 3 | 1.7 |
| GOML 2023 | 6 | 3.2 |
| Super Smash Con 2023 | 5 | 3.6 |
| Shine 2023 | 4 | 4.8 |
| The Big House 11 | 8 | 4.8 |
| Genesis X | 8 | 4.0 |

Once again, I excluded pre-tournament DQs from my calculations, as well as whichever member of this group I had predicted to win. I then made a scatter plot comparing, for each tournament, the performances of the major contenders I predicted to win against those of the major contenders I did not.

The implication of this graph is that my predictions have gotten marginally worse over time, and that my picks finish about a full set further from winning than the rest of the contender field. To see how the two numbers directly compared, I then examined them side by side.

There definitely is a noticeable difference between how the players I pick perform and how those I don’t pick perform, but it’s small enough that I believe it falls within the margin of error. More or less, my typical pick to win a major still finishes in the middle of top eight, while the other realistic contenders finish only slightly better.
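To put a rough number on "within the margin of error," the paired per-tournament values from the two tables can be run through a quick paired comparison. A minimal sketch using only the standard library (the numbers are transcribed from the tables above; the paired t-test framing is my own addition, not the column's):

```python
import math
from statistics import mean, stdev

# Sets away from winning, per tournament (from the tables above):
# my pick's result, and the non-pick contender field's average.
picks = [2, 3, 8, 5, 4, 3, 6, 3, 7, 3, 3, 11, 5, 1, 4, 7]
field = [2.5, 3.7, 4.4, 3.8, 2.5, 3.8, 4.1, 4.4, 3.7, 3.7,
         1.7, 3.2, 3.6, 4.8, 4.8, 4.0]

diffs = [p - f for p, f in zip(picks, field)]
n = len(diffs)

# My picks average about one set further from winning than the field.
print(f"pick mean:  {mean(picks):.2f}")   # ~4.69
print(f"field mean: {mean(field):.2f}")   # ~3.67
print(f"mean gap:   {mean(diffs):.2f}")   # ~1.02

# Paired t-statistic: mean difference over its standard error.
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
print(f"t = {t:.2f}")  # ~1.5, below the ~2.13 critical value for df = 15
```

With only sixteen tournaments, even a one-set gap doesn't clear a conventional 5% significance threshold, which lines up with the margin-of-error read.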

Considerations

  • Is it worth noting the performances of major contenders at events I don’t make predictions for?
  • What other potential ‘independent’ variables could I measure against a major contender’s performance and compare to my having selected them to succeed? I have currently only compared my picks to my “non-picks,” which makes for binary categories.
  • Is anyone else predicting majors in a formalized fashion that I could learn from and measure to see how they make predictions?
  • Are there certain events I tend to make “better” predictions for than others? How do the different “prestige” levels of majors (Genesis/Big House vs. Smash Con/GOML, for example) factor into my accuracy?

Conclusion

I do not think there is evidence to prove that predictions make players perform worse. As a result, I do not think the Monday Morning Marth Curse exists. However, it is very funny. I may be unlucky or, perhaps, extremely bad at predicting who will win Smash tournaments. Let’s go with the first one.
