Ha! Lawrence, I had forgotten midway through my first comment above that the comment box stripped html and actually linked to your project (as “over here”). Glad you showed up.
- “I wonder if you could use Markovian chains to write a decent chess AI?!?!”
Well, here you have a problem of putting together a corpus, and one of defining a game state. Finding a corpus shouldn’t be too hard – there are many chess sites, and loads of historical games out there as well, so let’s call that a solved problem for the sake of argument.
(For the sake of counterargument, there is a corpus-size problem: do you want to feed it only excellent play, or are you willing to feed it a great deal more mediocre play? Markov does better with bigger corpuses, but is bigger and less-skilled worth the tradeoff?)
But, yes, so: you have your corpus of, say, many games of chess, each with a few dozen moves on either side. Call it a million total moves that you use to train the model.
Since a Markov table is essentially a series of state-move pairs, we need to define what a state is and what a move is in order to build the table from our corpus of moves.
The obvious answers, I think, are that a state is a given configuration of the board – the location of (up to) 32 pieces, some black and some white, on the 8x8 grid of the chessboard; and a move is a name of a piece and a pair of coordinates, say black pawn from king 2 to king 3 (forgive my lay nomenclature, I’m not up on my chess lingo).
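To make that concrete, here's a rough sketch of what the table might look like in code. This is purely illustrative: states and moves are opaque strings (a real implementation would use an actual board representation), and all the names are mine, not anything canonical.

```python
import random
from collections import defaultdict

# The Markov "table": maps a board state to the moves observed from that
# state in the corpus, with counts. States and moves are just strings here
# (e.g. a board-placement string and "black pawn e7-e6").
table = defaultdict(lambda: defaultdict(int))

def train(pairs):
    """pairs: iterable of (state, move) pairs drawn from the corpus."""
    for state, move in pairs:
        table[state][move] += 1

def pick_move(state):
    """Sample a move in proportion to how often it followed this state."""
    moves = table.get(state)
    if moves is None:
        return None  # never-seen state: fall back to some other heuristic
    options, weights = zip(*moves.items())
    return random.choices(options, weights=weights)[0]
```

The `None` return is the crux of what follows: it's what the AI does when it lands on a state the corpus never recorded.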
Which, okay, great. But here's a problem: among our million recorded moves, how much duplication of the board layout do we actually end up with? If (past the first seven or eight moves) we see very few duplications of a given state, then at most junctures after the opening the table offers the AI no real choice of moves. In which case you have an AI that can play a few moves and then has to give up and fall back to a different heuristic.
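You could actually test this worry empirically before building anything: count how often each state recurs in the corpus. A quick sketch (the function and its inputs are hypothetical; states are again opaque strings):

```python
from collections import Counter

def duplication_report(corpus_states):
    """corpus_states: list of every state encountered across all games.
    If most states appear only once, the table has no choices to offer."""
    counts = Counter(corpus_states)
    repeated = sum(1 for c in counts.values() if c > 1)
    return {
        "distinct_states": len(counts),
        "states_seen_more_than_once": repeated,
        "fraction_repeated": repeated / len(counts) if counts else 0.0,
    }
```

My guess is that for full board positions, `fraction_repeated` would crater to nearly zero once you're out of the opening book.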
In other words, a more complicated state means less novelty, and the explicit organization of a chessboard is a very complex state indeed in this sense. (Doing this with Go would be worse yet.) To get novel behavior out of your Markov chess AI, you need to find some way to reduce the complexity of the state to guarantee interesting choices even well into a game. Probably not good choices, but interesting, surprising, novel.
One approach: treat state not as a record of the explicit configuration of the board, but as, say, the type of piece last moved by each player. Then analyze your corpus of moves like this, perhaps, stocking a 2-order table that picks the next move based on the previous two:
State: (I moved King, he moved Queen)
Moves:
(I move Pawn: probability = 0.27)
(I move Bishop: p = 0.13)
(I move Rook: p = 0.10)
…
That may be too simple for an interesting AI; perhaps you would want to incorporate move-to-capture vs move-that-doesn’t, or direction of movement, or other generalized aspects of a move. Finding a sweet spot that results in novel but not utterly random play would be the challenge and the fun, I think.
That’s just one approach, and off the top of my head as well so there may be some truck-sized holes in the idea. My gut says that applying Markov to chess at a tactical level is probably a doomed notion, but I’m neither an AI researcher nor a chess wonk, and so I wouldn’t be surprised to find that such models do have a role somewhere in existing AIs.