
Codex AI - Has anyone started on one or thought about it?

Hey All,

I’m currently working on a program that has two AIs play each other repeatedly so that they eventually get smarter and smarter at playing Codex. I have a few goals for this program:

  • Determine how fair this game is. So far, it looks to be about as fair as Chess at tournament levels. With machine learning being as accessible as it is nowadays, I figure we can do for Codex what we’ve done for Chess and Go.
  • Build an assist tool that lists all possible moves. One of the things I have to build anyway for the AI to play is a generator of every legal move, so I figure I’ll wrap a nice UI around that part of the program so players new and old can see all possible moves (and know whether the move they just planned is legit or not).
  • Unrelated to Codex, but I want to see how multiple AIs can evolve differently from each other. Some can favor extreme aggression, while others might find success dragging games out. Seeing what the AIs do to each other might give us more insights into how else we could be playing this game.
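As a toy illustration of the move-enumeration piece (in Python rather than the OP’s planned R, and with a drastically simplified turn model where a “move” is just a set of affordable card purchases; the card names and costs are illustrative):

```python
from itertools import combinations

def legal_plays(hand, gold):
    """Enumerate every legal set of card purchases for a toy turn model.

    `hand` is a list of (name, cost) tuples; `gold` is the gold available.
    Real Codex has many more action types (workers, patrol assignments,
    hero moves, attacks), but the same enumerate-then-filter idea applies.
    """
    plays = [()]  # buying nothing is always legal
    for r in range(1, len(hand) + 1):
        for combo in combinations(hand, r):
            if sum(cost for _, cost in combo) <= gold:
                plays.append(combo)
    return plays
```

A real enumerator would have to cover every action type and their orderings, but wrapping a UI around this kind of function is exactly the assist-tool idea above.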

I tried to go through the topics as well as I could, but I never saw anyone working on a project like this. The closest I found was the Codex Tracking Spreadsheet and the Google Sheet that has all the cards. Let me know if there’s anything beyond that I’m missing.

For the first iteration, I’m just going to focus on Bashing vs. Finesse. This is about my third attempt at handling all the special rules that show up on almost every card, so even once I’m past the Neutral cards I’ll just be adding one hero at a time. The programming language will be R. Once I get something working I’ll post what I can on GitHub.


No one has worked on this that I know of. I look forward to seeing the progress you make, and if I can I will try to contribute.


I love watching AIs play games. Please post about your progress often, and game logs if they’re readable.


Hey dude, I have an abandoned rules implementation. I’ll maybe post again when people can use it to play Bashing vs. Finesse.


I wish someone would make an app that helps streamline pbf on mobile.

Using Google Sheets is as close as I can get you, MVashM, for now.

Seriously, nothing there yet besides loading all the cards from the Google Sheet.

I’ll be flying to Germany soon and coming back on Friday. I’ll see what I can do from the hotel at night. My current goal is just to deal valid hands; after that, purchasing cards from the hand (I’ll deal with heroes later).
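For what it’s worth, the dealing step fits in a few lines. A sketch (Python for illustration, assuming the standard 10-card starter deck and 5-card opening hand):

```python
import random

def deal_hand(deck, hand_size=5, rng=None):
    """Shuffle a copy of the deck and deal an opening hand.

    Codex starts each player with a 10-card starter deck and a 5-card
    hand; returns (hand, remaining_deck) without mutating `deck`.
    Pass a seeded `random.Random` for reproducible deals.
    """
    rng = rng or random.Random()
    shuffled = list(deck)
    rng.shuffle(shuffled)
    return shuffled[:hand_size], shuffled[hand_size:]
```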


I wrote an implementation in Excel VBA, but the tedium of having to code every card individually made me stop after finishing most of Bashing vs. Finesse.

Making a rules-enforced version of Codex is a long and slow slog of a first step.

Machine learning isn’t really my specialty, but I think it would take much more effort than that to make an AI that could apply logic and/or past results to the infinitely many possible board states.

My RL opponent is working on a rules-enforced version of Codex as well, but it’s 99% likely there will be zero effort toward programming an AI that would be a capable opponent.

My plan for a Codex AI is more along the lines of classical AI. For a basic game plan to guide tech choices, randomly pick one hero and one of your three specs for Tech II. For each Tech II choice, I would have a set of tech-order lists, with slots for hero spells (from the first choice), and bias those based on both your heroes and the enemy heroes (rules like “no Might of Leaf and Claw vs. Future” and “have Midori? Consider Rhino”). The individual turns would be a standard alpha-beta minimax search through game states. I haven’t thought hard enough about how I would prune the search tree from considering every possible hand from the opponent (including each possible tech choice), but I would probably compile a set of likely techs (opponent went Present Tech II? Assume every turn could produce Hyperion). After that, refinement would just be improving the various heuristics that guide the search and adding more/better tech-order strategies.
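The per-turn search being described is standard alpha-beta. A generic sketch in Python, where the `moves`, `apply`, and `evaluate` hooks are placeholders that would have to be filled in with actual Codex rules and heuristics:

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Generic alpha-beta minimax over game states.

    `moves(state)` yields legal moves, `apply_move(state, move)` returns
    the successor state, and `evaluate(state)` is the leaf heuristic
    (higher is better for the maximizing player).
    """
    ms = list(moves(state))
    if depth == 0 or not ms:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for m in ms:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False,
                                       moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if beta <= alpha:
                break  # prune: the opponent will never allow this line
        return best
    else:
        best = float("inf")
        for m in ms:
            best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, True,
                                       moves, apply_move, evaluate))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best
```

The pruning is what makes the high branching factor survivable at all; move ordering and the opponent-hand assumptions discussed above decide how much of it you actually get.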

In my experience, the biggest difficulty in creating an AI for a game like Codex is devising an adequate heuristic to evaluate game states. The trade-offs between board control, hand size, and economy are difficult even for humans to understand, so I imagine designing a good algorithm that enables an AI to choose between different possible plays would be really hard. It’s made more difficult by the fact that the Codex decision tree branches faster than in any game I’ve ever tried to write an AI for (the number of possible game states at the end of a turn of Codex is very, very large).
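Such a heuristic usually ends up as a weighted feature sum. A minimal sketch, where both the feature set and the weights are invented for illustration and would need heavy tuning:

```python
def evaluate(state, weights):
    """Score a game state as a weighted sum of simple features.

    `state` is a dict of raw numbers; the features encode the trade-offs
    discussed here: board presence vs. hand size vs. economy vs. base
    damage. Positive scores favor "my" side.
    """
    features = {
        "board_power": (state["my_attack"] + state["my_health"]
                        - state["their_attack"] - state["their_health"]),
        "hand_diff":   state["my_hand"] - state["their_hand"],
        "worker_diff": state["my_workers"] - state["their_workers"],
        "base_diff":   state["my_base_hp"] - state["their_base_hp"],
    }
    return sum(weights[k] * v for k, v in features.items())
```

Tuning those weights, by hand or by self-play, is where most of the difficulty described above actually lives.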


Yeah, the branching factor is really high, and it’s not always clear which state is strictly better. On the other hand, if the goal is a fun opponent rather than a perfect Codex bot, those choices about how to order game states are opportunities to give each AI some personality. One flavor might choose never to go down on cards, while another will only go down on cards if it results in either a deployed upgrade or a reduction in the enemy board, unless its hand size is two at the beginning of the turn.

Go has an even higher branching factor, and Google got a computer to play that pretty well. The machine learning he is proposing just needs a list of all possible moves; in theory it will eventually learn all that underlying logic after playing a billion games (or more).

It is theoretically simpler than the classical AI you suggest: once you have a way to ensure the AI only takes legal moves, you can just let it run until it learns.
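A miniature of that idea: self-play on a toy Nim game, where the learner is handed nothing but the legal-move list and the win condition (Python for illustration; the learning rate, exploration rate, and episode count are all arbitrary):

```python
import random

def train_selfplay(episodes=5000, eps=0.2, seed=0):
    """Tabular self-play on toy Nim: 5 stones, take 1 or 2 per turn,
    taking the last stone wins.

    Q[(stones, take)] estimates the win probability for the player about
    to move. The zero-sum backup Q(s, a) = 1 - max Q(s') is the only
    "logic" the learner is given besides the legal moves themselves.
    """
    rng = random.Random(seed)
    Q = {}
    q = lambda s, a: Q.get((s, a), 0.5)
    legal = lambda s: [a for a in (1, 2) if a <= s]
    for _ in range(episodes):
        s = 5
        while s > 0:
            moves = legal(s)
            # epsilon-greedy: mostly exploit, sometimes explore
            if rng.random() < eps:
                a = rng.choice(moves)
            else:
                a = max(moves, key=lambda m: q(s, m))
            s2 = s - a
            if s2 == 0:
                target = 1.0  # this move wins outright
            else:
                target = 1.0 - max(q(s2, m) for m in legal(s2))
            Q[(s, a)] = q(s, a) + 0.1 * (target - q(s, a))
            s = s2
    return Q
```

After enough games the table converges on the optimal opening (take 2 from 5, leaving the opponent a losing position). Scaling this same loop to Codex’s state space is exactly the hard part being debated here.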


Are you sure Go has a higher branching factor?


I admit I’m not actually sure, but Go is orders of magnitude beyond Chess, and I was assuming Codex was similar to Chess in that regard.

Well, it depends on how much ‘help’ you give the AI. But if you purely give it a list of possible moves, I would expect you’ll find that Codex is orders of magnitude beyond Go. The combinatorics in Codex are crazy.

I could see that. The point about machine learning I was getting at is that it handles those high branching factors better than a classical approach.


Not sure how this factors in, but Go and Chess both have zero luck. I think there is one best move (or several equally good moves) for every single board position. In Codex, risk, bluffing, and luck all have to be considered.

Discounting symmetry, my gut says there are more individual Go board states than Codex game configurations. The Go board is 19×19, and each intersection can be in one of three states, plus you have to remember a little bit of history to avoid a ko loop. Any given game of Codex only requires tracking the state of 85 cards per player, plus any tokens that materialize, and the first turn for each player only has to track 13 cards.

While I concede that machine learning (assuming a valid fitness function) may be more likely to develop a perfect player, I believe a heuristic-guided classical AI is more likely to produce fun results first. Besides, if you are programming for the fun of it, each heuristic in the search offers potential for infinite noodling.


You mean there are more Codex board states, right? The per-position counts multiply rather than add: with 19×19 intersections each in one of three states, Go has on the order of 3^361 (about 10^172) board states. As you said, there are 85 cards per player, each of which can be in any of 12 different states (Codex, hand, discard, deck, in play, patrolling in one of 5 zones, workered, or removed from game). That alone is 12^85 (about 10^92) configurations per player, before multiplying in the 8 possible states for tech buildings and 5 for the add-on. And that’s without even considering floating gold, damage done to units, heroes and buildings, hero levels, and the various runes, each of which multiplies the count by several more orders of magnitude.

(Technically I suppose there are infinite possible Codex states, as there are no limits to how many runes can be on a unit…)
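Treating each card or intersection as an independent choice (which overcounts reachable states on both sides, and sets aside the infinite-rune caveat), the counts multiply rather than add, and the comparison can be checked directly:

```python
from math import log10

# Go: 19x19 intersections, each empty / black / white (ko history ignored).
go_states = 3 ** (19 * 19)

# Codex, toy model: per player, 85 cards each in one of 12 zones,
# times 8 tech-building states and 5 add-on states; two players.
codex_states = (12 ** 85 * 8 * 5) ** 2

print(round(log10(go_states)))     # 172 -> roughly 10^172 Go boards
print(round(log10(codex_states)))  # 187 -> the toy Codex count is larger
```

Even this crude model puts Codex ahead before counting gold, damage, hero levels, or runes.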