AI Advances

I thought about posting this video in Off Topic because it is about AI in fighting games, but the approach could be adapted to card games like Yomi. I found it very interesting because the AI is able to adapt to situations and player tendencies. Check it out.

4 Likes

In my (not very copious) spare time, I’ve been working (very slowly) towards applying some of the DeepMind-style advances in board game AIs to a Yomi AI. Sadly, that requires implementing the Yomi rules as an engine first. However, I’m working towards what would effectively be a super-human AI (and thus one that would tell us what the limits of good play look like, rather than necessarily being a fun AI to play against).
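For a sense of what "the rules as an engine" means here, this is a rough Python sketch of the kind of interface a DeepMind-style self-play trainer needs from the rules. Every name and field in it is hypothetical and hugely simplified; nothing comes from a real Yomi implementation.

```python
from dataclasses import dataclass

# Hypothetical skeleton of a rules engine a self-play trainer could drive.
# The real Yomi state (decks, combos, character abilities, phases) is far richer.
@dataclass(frozen=True)
class YomiState:
    to_move: int = 0                   # whose decision it is (0 or 1)
    life: tuple = (90, 90)             # remaining life totals
    hands: tuple = ((), ())            # hidden information per player

    def legal_moves(self):
        """Enumerate the moves the current player may choose this phase."""
        raise NotImplementedError

    def apply(self, move):
        """Return the successor state after `move` resolves."""
        raise NotImplementedError

    def is_terminal(self):
        return any(hp <= 0 for hp in self.life)

    def winner(self):
        """Only meaningful once is_terminal() is true."""
        return 0 if self.life[1] <= 0 else 1
```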

4 Likes

One of the things that caught my attention during these guys’ implementation of the AI is that they calculate the optimal moves from the game state, and the AI then chooses from the moves within that range. My concern with this is that in Yomi it is sometimes optimal to do the sub-optimal. I was hoping someone in attendance would ask about this, but sadly nobody did.

2 Likes

I’m expecting that once it’s working, the AI will basically produce a probability distribution over all of the moves available at the time, and then select one based on that distribution. By optimizing that range of probabilities, rather than optimizing for a single move, I think it’ll capture more of the notion of playing a range in Yomi.
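A minimal sketch of what I mean, with made-up move names and probabilities standing in for what a trained policy would actually output:

```python
import random

def sample_move(policy):
    """Pick a move by sampling from a probability distribution over moves.

    `policy` maps each available move to its probability, i.e. the mixed
    strategy the AI plays rather than a single "best" move.
    """
    moves = list(policy)
    weights = [policy[m] for m in moves]
    return random.choices(moves, weights=weights, k=1)[0]

# Placeholder distribution for one combat reveal.
policy = {"attack": 0.45, "throw": 0.25, "block": 0.20, "dodge": 0.10}
print(sample_move(policy))
```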

3 Likes

You can account for a good deal of the sub-optimal by artificially generating levels of thinking. Player A’s AI can look at player B’s discard pile, revealed cards and hand size to figure out player B’s possible options and their optimal play weights. Player A then weighs their decision based on both player A’s optimal move and player B’s most probable options, without actually cheating and looking at player B’s hand. Likewise, player B factors player A’s optimals into defining their own range, so both AIs are able to skew their combat reveals based on the opponent’s most dangerous options.

This should in theory mean that the AI players will add sub-optimal plays to their range, or increase those plays’ weights from turn to turn based on the game state, over and above whatever data was fed into the DeepMind-style calculations. I wouldn’t say I’m an AI expert or anything, just throwing out the suggestion.
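Something along these lines, maybe. Everything in this sketch is made up for illustration: the four-option space, the “beats” table, and the adjustment constants are placeholders, not Yomi card data or anything from a trained model.

```python
from collections import Counter

# Toy option space and counter relationships, purely illustrative.
OPTIONS = ["attack", "throw", "block", "dodge"]
BEATS = {"attack": "throw", "throw": "block", "block": "attack", "dodge": "attack"}

def estimate_opponent_weights(discard_pile, revealed, hand_size):
    """Estimate the opponent's option weights from public information only.

    Options they have already burned through (discarded) get nudged down,
    revealed cards nudge their option up; the hidden hand is never read.
    `hand_size` is accepted but unused in this simplified version.
    """
    counts = Counter({opt: 1.0 for opt in OPTIONS})      # uniform prior
    for card in discard_pile:
        if card in counts:
            counts[card] = max(counts[card] - 0.2, 0.1)  # fewer copies left
    for card in revealed:
        if card in counts:
            counts[card] += 0.5                          # telegraphed option
    total = sum(counts.values())
    return {opt: counts[opt] / total for opt in OPTIONS}

def skew_own_policy(base_policy, opponent_weights, blend=0.5):
    """Shift our own mixed strategy toward counters to the opponent's likely moves."""
    counter = {opt: 0.0 for opt in OPTIONS}
    for my_opt, beaten in BEATS.items():
        counter[my_opt] += opponent_weights.get(beaten, 0.0)
    total = sum(counter.values()) or 1.0
    return {opt: (1 - blend) * base_policy.get(opt, 0.0)
                 + blend * counter[opt] / total
            for opt in OPTIONS}
```

Running both players’ policies through something like `skew_own_policy` each turn is what would let the weights drift game by game, rather than staying fixed at whatever the offline training produced.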