A week or two ago, after several years away, I became interested in Chess 2 again. I remember that back in the day, while the Steam version of the game, developed by Ludeme Games, had terrible AI, there were some people working on Chess 2 AI, and I had a lot of hope for a playable AI opponent - whether through an eventual update to the Steam version (which didn’t look likely even at that point) or a new computer implementation, maybe without the fancy 3d graphics. But unfortunately, that never happened, and the people who used to post about Chess 2 theory look like they haven’t posted here in almost 5 years.
After those 5 years of waiting, I guess that if I want to play Chess 2 without a human opponent (something I often prefer, because it lets you work out strategy without time pressure or feeling stupid when you blunder), I am going to have to roll up my sleeves and do it myself.
So, the first thing I checked was the installed game directory to see if there was a settings file where you could adjust values (even increasing the AI’s time-to-think, which I later found out actually does effect a noticeable improvement in the AI’s play). But no such luck. However, the names of the dll files looked promising, and a quick download of dotPeek confirmed my good fortune - the project was written in C# and was easy to decompile. Better still, there was a dll called Chess2.AI.dll, so with luck I could just replace that one dll (just the small matter of designing a functioning Chess 2 AI first).
There are two things I found very quickly here, one good and one less good:
- Chess2.AI.dll contained the start of an implementation of a Minimax algorithm, complete with move generator and alpha-beta pruning. It did not yet have an Evaluation function, but that is not the hardest part of the process.
- Chess2.AI.dll was not referenced from anything in the main program. The fact that Zac included this dll in the deployment at all was pure luck, because it wasn’t used by Chess 2 in any way. Chess2.dll actually includes its own AI, which is a Monte Carlo implementation. I didn’t dig into it TOO much, but from what I can tell, it generates candidate moves at random and then plays out the game with random moves until it reaches a checkmate or runs out of time, and then picks whichever starting move produced the best random series. It’s not an invalid way of writing an AI, but I think it’s less effective than Minimax, which is why the Training Dummy is so dumb.
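To illustrate the idea (in Python rather than the game’s C#, and with a made-up minimal board interface - `to_move`, `is_checkmate()`, `legal_moves()`, `apply()` are stand-ins, not the game’s actual API), a pure random-playout AI of this kind looks roughly like:

```python
import random

def random_playout(state, max_plies=200):
    """Play random moves from `state`; return +1/-1/0 from the mover's view."""
    mover = state.to_move
    for _ in range(max_plies):
        if state.is_checkmate():
            # The side to move is the one who got mated.
            return 1 if state.to_move != mover else -1
        moves = state.legal_moves()
        if not moves:
            return 0  # stalemate / no result
        state = state.apply(random.choice(moves))
    return 0  # ran out of plies: call it a draw

def monte_carlo_move(state, playouts_per_move=100):
    """Pick the move whose random playouts score best on average."""
    best_move, best_score = None, float("-inf")
    for move in state.legal_moves():
        child = state.apply(move)
        # Playouts score from the opponent's perspective after our move,
        # so we negate each result.
        score = sum(-random_playout(child) for _ in range(playouts_per_move))
        if score > best_score:
            best_move, best_score = move, score
    return best_move
```

The appeal of this approach is that it needs no evaluation function at all - the playouts themselves estimate how good a position is - but with a purely random policy the estimates are noisy, which fits the Training Dummy’s play.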
So I want to pause here to mention my biggest concern in working on this project. I know that I’m not going to get in trouble for modifying the game on my own computer for my own purposes (to create an AI that I can play against), but if I want to allow anyone else access to this AI, then I’m going to have to distribute the .dll for them to play with. And while I was originally hoping to just modify Chess2.AI.dll, it turns out that at a minimum, I will also have to modify Chess2.dll. Additionally, the main guts of the program (the thing that starts up games and instantiates the AI for Bot Games) is hosted in something called “Assembly-CSharp.dll”, so if I want to actually reference Chess2.AI.dll, I will have to modify that one as well. So for anyone else to use my AI, they will have to modify multiple dlls in their Chess 2 install directory, some of which could drastically modify their game if I screwed something up. I could certainly use an installer and launcher for this, so they don’t have to touch the game files directly, but they’d have to trust that I’m not doing anything shady or stupid with those files.

Now, one of the ways to ease people’s concerns is to post the source code in a public GitHub repository, so anyone who wants to can examine the source code and compile it themselves. And people who don’t want to do that can still download the binaries, trusting that SOMEONE has validated the project. But I’m not sure that I can legally put decompiled code from this project in a public GitHub repository. I’m fine with putting MY code in a public repository, but most of this is someone else’s code, even if it’s a decompiled version rather than the original. It was at this point that I sent an email to Ludeme Games to ask for source code access and permission to work on it.
Even at the time, that seemed unlikely to bear fruit (though worth a try), and I’ve since been informed that he’s already made a public post about this sort of thing and has denied permission. So, I’m probably not getting a response to that email anytime soon.
That said, I’m still going to work on this AI. It should be fun.
So, back to the algorithm. First, the algorithm lacks any sort of Evaluation function. A Minimax algorithm has 3 main parts:
- A Move Generator takes a board state and finds all legal moves. This part is implemented in Chess2.dll. I assume it was needed for the Monte Carlo AI and possibly for just regular 2-player play. This is the hardest part of building an AI, in my opinion, so I’m glad it’s already done.
- An Evaluation function. This is a function that looks at a board and determines who is winning. Basically, you iterate moves until a certain depth and then say, “well, I can’t look ahead any further because it would take too long, so let’s just see who looks like they’re winning here.” This can be very simple, like just counting up piece values, or can include any number of additional innovations to help increase the accuracy and complexity of the evaluation. This was missing entirely, but I was able to throw together a rudimentary one pretty quickly.
- The Minimax algorithm itself. This algorithm uses the Move Generator to generate moves for the current board state, then goes through the resulting board states sequentially. For each one, it recursively calls itself, which means that it generates moves for THAT board state and goes through them sequentially, and so on. It does this to a certain depth, and when it’s as deep as it can go, it calls the Evaluation function to generate a score. If at any time it finds a checkmate, then that move is obviously best for the checkmating player and worst for the other player. There’s also a concept of alpha-beta pruning, which I won’t go into here (partially because I don’t understand it quite as well as I should, so I need to research it some more). This algorithm was implemented in Chess2.AI.dll, but there are some problems with it. For one, it is VERY slow. Searching to a depth of 4 (meaning White moves, then Black, White, Black, and then evaluate the board state) takes somewhere on the order of 1 minute, and a depth of 5 is essentially unusable (because the search scales exponentially with depth).
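To make those three parts concrete, here’s a minimal sketch in Python (the real code is C#, and the board interface - `pieces()`, `to_move`, `legal_moves()`, `in_check()`, `apply()` - and the piece values are stand-ins I made up, not the game’s actual API). It’s written in the “negamax” style, where both players maximize and you negate the score at each level:

```python
# Rough material values for illustration; Chess 2's armies would need
# their own table.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(board):
    """Crude material count: positive means the side to move is ahead."""
    score = 0
    for piece in board.pieces():  # hypothetical accessor
        value = PIECE_VALUES.get(piece.kind.upper(), 0)
        score += value if piece.color == board.to_move else -value
    return score

def alphabeta(board, depth, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning, negamax form.
    Returns the score of `board` from the side-to-move's perspective."""
    moves = board.legal_moves()
    if not moves:
        # No legal moves: checkmate is the worst outcome for the player
        # to move; otherwise call it a stalemate (score 0).
        return float("-inf") if board.in_check() else 0
    if depth == 0:
        return evaluate(board)
    best = float("-inf")
    for move in moves:
        # The child's score is from the opponent's view, so negate it,
        # and swap/negate the alpha-beta window to match.
        score = -alphabeta(board.apply(move), depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # prune: the opponent won't allow this line anyway
    return best
```

The `alpha >= beta` cutoff is the whole trick of alpha-beta: once a reply refutes a move, the remaining replies to that move don’t need to be searched.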
Why is it so slow? Well, a couple of reasons. The easiest to fix is to use Iterative Deepening. This means that first we evaluate to a depth of 1, then 2, then 3, then 4. At each depth, we reorder the moves from best to worst and re-evaluate them in that order at the next depth. Because of the way that alpha-beta pruning works, this should allow it to prune more branches faster. It sounds like we’re repeating work this way, but since each depth is so much more expensive than the one before, the repetition should be balanced out by the savings from pruning more branches. I did implement this, but one of the problems I ran into here is related to the next point.
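The iterative deepening loop itself is short. Here’s a Python sketch of the shape of it (the `board` interface and the `search(board, depth)` scoring function are assumed stand-ins, not the real code):

```python
def iterative_deepening(board, max_depth, search):
    """Search depth 1..max_depth, reordering the root moves by the
    previous depth's scores so alpha-beta prunes more aggressively.
    `search(board, depth)` returns a score from the side-to-move's view."""
    moves = board.legal_moves()
    if not moves:
        return None
    for depth in range(1, max_depth + 1):
        scored = []
        for move in moves:
            # Each child's score is from the opponent's view, so negate.
            scored.append((-search(board.apply(move), depth - 1), move))
        scored.sort(key=lambda sm: sm[0], reverse=True)
        moves = [move for _, move in scored]  # best-first for next depth
    return moves[0]  # best move at the deepest completed depth
```

A nice side effect is that you can stop the loop whenever a time budget runs out and still have a reasonable answer from the last completed depth.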
Which is that the algorithm, as it was fleshed out, passes around some heavy data structures and thus has to do some intensive computing at each step. For checking the piece count, it iterates through each piece type and then looks it up in a grid of board squares in order to count it. I spent some time representing the board as a string instead of a Grid data structure, so that counting the rooks is just a count of “R” within that string (“r” for Dark). This optimization helps a little, but there is more optimization that can be done there. Additionally, once a board position is evaluated, I’m storing it in a position cache so that if we ever have to evaluate that position again (through transposition), we can just look it up. We have a limited amount of memory, so we can’t store every possible board position (even assuming that we could convert all matter in the universe into RAM for our Chess 2 AI), so the cache uses a clever little data structure called an LRU Cache. Whenever the LRU Cache reaches its capacity, it removes the least recently used evaluation. And it manages to do this with O(1) lookups, O(1) insertions, and O(1) deletions.
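Here’s a minimal version of that cache in Python (mine is C#, but the structure is the same): the classic O(1) LRU construction is a hash map paired with a doubly linked list, which is exactly what Python’s `OrderedDict` is under the hood. The string board representation makes a natural cache key.

```python
from collections import OrderedDict

class LRUCache:
    """Evaluation cache with O(1) lookup, insertion, and eviction.
    OrderedDict pairs a hash map with a doubly linked list, so moving an
    entry to the 'recent' end and popping the 'oldest' end are both O(1)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, position):
        """Return the cached score for `position`, or None if absent."""
        if position not in self._data:
            return None
        self._data.move_to_end(position)  # mark as most recently used
        return self._data[position]

    def put(self, position, score):
        """Store a score, evicting the least recently used entry if full."""
        if position in self._data:
            self._data.move_to_end(position)
        self._data[position] = score
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # drop least recently used
```

Keyed by the string board (plus whose turn it is, and in Chess 2’s case things like stones and king positions for Two Kings), a `get` before each evaluation and a `put` after it is all the search loop needs.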
However, even with that, we still have a long way to go with optimizations. The Move Generation algorithm works by taking the current board state, telling the game we’re ready to make a move, generating one move, resolving that move, and then rewinding the board state to get the next move. I’m pretty sure this is very inefficient, but I don’t yet know how to improve it. I’m working on that part.
Another problem we have is that Minimax is a depth-first algorithm. So by default, it works serially. It takes one possible move and analyzes it down to max depth before moving on to the next possible move. This approach works great for alpha-beta pruning, but is very difficult to parallelize. Which means that it doesn’t take advantage of multiple processors and ends up using like 10-20% of the CPU power that it could be using, depending on how many cores your PC has. There are some existing strategies for multi-threading a Minimax algorithm, which I am looking into.
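The simplest of those strategies is “root splitting”: hand each top-level move to its own worker and search them concurrently. A hedged Python sketch of the shape (the `board` interface and `search` function are stand-ins; in the actual C# these would be real threads, whereas in CPython the GIL limits the speedup for pure-Python work, but the structure is the same):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_root_search(board, depth, search):
    """Root splitting: search each root move in its own worker.
    Simple to implement, but each worker starts with fresh alpha-beta
    bounds, so some pruning is lost compared with schemes that share
    bounds between threads."""
    moves = board.legal_moves()
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(search, board.apply(move), depth - 1)
                   for move in moves]
        # Each child's score is from the opponent's view, so negate it.
        scores = [-f.result() for f in futures]
    best_index = max(range(len(moves)), key=lambda i: scores[i])
    return moves[best_index]
```

The lost pruning is the real cost here: a cutoff discovered while searching one root move can’t shrink the windows of the searches already running in parallel, which is why fancier schemes exist.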
And that… seems like a long enough first post for this thread, so I will leave it there for now.