TL;DR: you can skip to the tier list below if you want to.

A Tier List is a ranking of the characters based on how “good” they are. The problem comes from defining “good”.

Let’s take Rock-Paper-Scissors. Rock beats Scissors, Paper beats Rock, and Scissors beats Paper. Is any option better than the others? Of course not: they are all equally strong because, if both players pick randomly, each option wins an equal number of times on average.

On the other hand, let’s say that Rock has a 55% (5.5 in Match-Up terms) probability of winning against Scissors, Paper has a 53% probability of winning against Rock, and Paper also has a 52% probability of winning against Scissors.

Now it is pretty clear that Rock is the strongest option **if both players are picking randomly**, since it leads to the largest number of wins.

If players can instead choose which option to use, the mindgames start: yes, Rock is stronger, but precisely because it is stronger, its counterpick, Paper, is also strong. Scissors is strong because it is the counter-counterpick to Rock, and Rock gets a bit stronger still because it is the counter-counter-counterpick to itself.

We can write this relationship in the form of a matrix, called the payoff matrix, where each entry is the row option’s win% against the column option:

|       | R  | P  | S  |
|-------|----|----|----|
| **R** | 50 | 47 | 55 |
| **P** | 53 | 50 | 52 |
| **S** | 45 | 48 | 50 |

Let’s imagine a round-robin tournament in which players choose one option and stick with it until the end.

Which option will the players choose the most? How many players will choose each option?

To answer this question, we’ll need a bit of a digression.

Let’s say you go to Rock city, where everyone plays Rock because they think it’s the strongest. If you go there and consult your payoff matrix, you can see that your best option is playing Paper.
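If you want to check this yourself, here is a tiny Python sketch (the matrix values are the ones from the example above): with an all-Rock population, each option’s expected win% is just its win% against Rock, and Paper’s 53 is the highest.

```python
# Payoff matrix from the example: each row is that option's win% vs (R, P, S).
M = [
    [50, 47, 55],  # Rock
    [53, 50, 52],  # Paper
    [45, 48, 50],  # Scissors
]

rock_city = [1.0, 0.0, 0.0]  # everyone plays Rock

# Expected win% of each option against this population (a matrix-vector product).
strength = [sum(m * p for m, p in zip(row, rock_city)) for row in M]
print(strength)  # [50.0, 53.0, 45.0] -> Paper is the best pick
```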

Amazed by how well you played in the tournament, the players in Rock city wonder how you chose your option. You explain your method to them, and at the next tournament they all use it.

At that next tournament there will be a lot of Paper, because it was the counter to the most played option, but also some Scissors, because it was the counter to the second most played option.

If you repeat this process enough times, you’ll eventually end up in a state in which the percentage of players picking each option stays constant (although the percentages differ from one another). This is called a Nash Equilibrium.

The method we are talking about works as follows (warning: it requires some basic linear algebra; you can skip this part if you want):

Let’s call r, p and s the percentages of Rock, Paper and Scissors players in the last tournament, and collect them in a vector v = (r, p, s). If we multiply the payoff matrix M by v, we get a new vector, x1, whose first value is the expected strength of the Rock option, whose second value is Paper’s, and whose third is Scissors’.

If we then compute x2 = M*x1, x3 = M*x2 and so on (re-normalizing at each step so the values stay comparable), we eventually reach a point where x stops changing meaningfully. THAT is the Nash Equilibrium.

By the way, if you run the math on the RPS example, you get 50.6%, 51.6% and 47.6% for Rock, Paper and Scissors respectively.
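The iteration above can be sketched in a few lines of Python (one assumption on my part: the population vector is re-normalized after every multiplication so the shares stay comparable between steps, since the raw products grow without bound otherwise):

```python
# Payoff matrix from the RPS example: each row is that option's
# win% against (Rock, Paper, Scissors).
M = [
    [50, 47, 55],  # Rock
    [53, 50, 52],  # Paper
    [45, 48, 50],  # Scissors
]

def step(pop):
    # Expected strength of each option against the given population shares
    # (this is the matrix-vector product M*v from the text).
    return [sum(m * p for m, p in zip(row, pop)) for row in M]

def normalize(x):
    total = sum(x)
    return [v / total for v in x]

pop = [1 / 3, 1 / 3, 1 / 3]  # first step: every option equally played
for _ in range(100):
    new_pop = normalize(step(pop))
    if max(abs(a - b) for a, b in zip(new_pop, pop)) < 1e-9:
        break  # shares stopped changing: we've reached the equilibrium
    pop = new_pop

strengths = step(pop)  # expected win% of Rock, Paper, Scissors
print([round(v, 1) for v in strengths])  # -> [50.6, 51.6, 47.6], as quoted above
```

Each `step(pop)` is one of the x1, x2, x3… computations, with a normalization squeezed in between so the loop can detect convergence.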

Now, to get back to Yomi: I took the data from the Yomi Composite Chart and ran the math with that MU chart (the MU chart being what we call the payoff matrix).

The resulting Tier List is:

S: Degrey, Zane

A: Grave, Lum, Troq, Setsuki, Valerie, Geiger, Onimaru

B: Argagarg, Menelker, Persephone, Gloria

C: Jaina, Rook, Quince, Gwen, Vendetta, Midori, Bal-Bas-Beta

This study makes a couple of assumptions:

First of all, it doesn’t take counterpicking into account. This isn’t really an issue if selection is double-blind at every step of the game, or if players stick with the same character the whole time.

Secondly, it assumes that at the first step of the computation every character is equally played. More skewed initial conditions will require more steps to converge.

Thirdly, it assumes that characters are picked based only on their power. We know that’s not true: there are many more Rook players than our Tier List would predict.

The first assumption isn’t too big of a problem: our objective is optimizing the first game, because even if you get counterpicked you can counter-counterpick in the next game to level the field, and so on.

The second assumption, again, is not a problem: in my calculations, 3 steps were enough to converge, and with real-world data you should converge to the same results in more or less the same number of steps.

The third assumption, instead, is the main problem. Players are not perfectly logical, and they may resist changing their main even when the meta is unfavorable. If you take this into account, and your objective is maximizing the wins you’ll get in your next tournament, you can take recent data and figure out the best characters to play against that field.

For example, using the Historical Chart, which features data since 2014 (you might want to use only data from 2018 onwards to be more accurate, but that might leave too few data points), the new “Tier List” (which isn’t really a Tier List, but rather a ranking of the characters you could pick, from best choice to worst) is:

S: Troq

A: Grave, Degrey, Lum, Zane, Setsuki, Valerie, Geiger, Onimaru

B: Argagarg, Quince, Menelker, Persephone

C: Jaina, Rook, Gloria, Gwen, Vendetta, Midori, Bal-Bas-Beta

The beauty of this method is that you can change the MU chart freely and even tweak it for specific tournaments. For example, let’s say you want to guess how the meta will be shaped for an 18xx tournament. Some characters that are considered “good” may have been good only because the top played characters were favorable match-ups for them; with those out of the picture, the standings change.

To compute the results for 18xx, you can simply set a starting population in which the banned characters have 0 coverage. The first step of the computation will then ignore the banned characters when finding the best options, and you can simply pick the top-rated non-banned character from the tier list the method outputs.
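Here is a minimal Python sketch of that idea, reusing the small RPS matrix as a stand-in for the real Yomi MU chart (the ban list is purely hypothetical, just to show the mechanics):

```python
# Stand-in payoff matrix (the RPS example from earlier, NOT real Yomi data).
M = [
    [50, 47, 55],  # Rock
    [53, 50, 52],  # Paper
    [45, 48, 50],  # Scissors
]
names = ["Rock", "Paper", "Scissors"]
banned = {"Paper"}  # hypothetical ban list for this tournament

# Starting population: equal coverage, except banned options get 0.
pop = [0.0 if n in banned else 1.0 for n in names]
total = sum(pop)
pop = [p / total for p in pop]

# First step of the computation: expected strength vs this population.
strength = [sum(m * p for m, p in zip(row, pop)) for row in M]

# Pick the top-rated non-banned option.
best = max((n for n in names if n not in banned),
           key=lambda n: strength[names.index(n)])
print(best)  # -> Rock
```

Swapping in the real MU chart and the actual 18xx ban list is just a matter of replacing `M`, `names`, and `banned`.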

And, finally, here’s the Sheet I used for these computations. It’s not the cleanest sheet, since it was meant for quick testing, but here’s a quick overview:

In the Tier List sheet you can see the results of the computations. There isn’t much to play around with here, except that you can change the reference in the a-priori winning% column to X if you want the first step, Y for the second step, or Z for the third.

Computations is where the magic happens.

Firstly, the MU chart. If you don’t think the chart I used is a good one, you can simply change those numbers. Heck, you can even plug in your own values if you have an especially hard time against certain characters (but if you do so, keep in mind that only the first step is meaningful, since from the second step onwards you’d be assuming that everyone else shares your problems with the same characters).

Secondly, the historical data. You can change it to whatever you want. If you set every value in the games column to 1, you are computing the results under the assumption of equal initial popularity.

Last but not least: the number of steps. If you want to compute more steps, simply add another column on the right and clone what is now the second-to-last column into the newly created one.

In conclusion: keep in mind that this is just maths. Psychology plays a big role in how a metagame evolves, and that can only partially be accounted for in computations.

Also, despite all this talk about S and C tiers, there’s actually an extremely small difference between the worst character and the best, so play whoever you like the most.