Markov and You

Very interesting, could also come in handy for unit testing scenarios

Regarding Markov chains and music, didn’t the old Sid Meier 3DO game “CPU Bach” use them? I loved that game/toy, wish Sid would put it out on XBLA or something.

MegaHAL is quite impressive. I really enjoy what it produces.

Uh, Jeffy? The rest of us heard about Markovs in like 1980-something. Really those “buzzword bingo” cards work on the same principle. Are you telling us this is your first introduction to them?

For more fun, try the Emacs Dissociated Press feature (M-x dissociated-press). Run it with a buffer full of text and get back delightful gibberish that almost makes sense. Try it on a buffer with code pasted from two different languages, and watch it invent a new programming language before your eyes!

This is really interesting. I work at an internet harvester/analyzer and we use Bayesian spam filtering. This makes me want to look more into Markovian techniques.

A Markov generated CodingHorror parody :wink: :

This year’s strongest contender comes to multicore performance, choice of language is no silver bullet.
How else can we explain the massive disparity between the fastest and slowest versions of Garfield strips.
If you look at the same time, or in the same words, or random gibberish, and that’s what you’ll get back.
But it’s sure tough to imagine a more ideal input corpus for Markovian techniques than the original article.
Of course, as with all other useful things, there is a program to count the most common HTTP GET URL entries in a dozen places rather than making all the key is the many-way communication of the code size, the code in this post, make it this one.

Everything old is new again: http://faculty.dwc.edu/wellman/Soluble-Fish.html#Sol

Jeff, the thing I love about your site is I can spend two weeks in the Australian outback learning how to predict the weather by studying the entrails of hand-wrestled alligators and come back to civilization to find your blog still here, filled with new posts, each one of which is guaranteed to be interesting. You are one of the most consistent bloggers out there and considering how long you’ve been blogging that’s saying a lot.

Now, any ideas on using Markov chains to simulate chat text at an online poker table? :slight_smile:

What’s your conclusion on the original comment? Do you now believe it was created by a bot using Markov chains?

Pretty interesting. Like a couple others, I see spam comments on my site from time to time that sure look like they came out of something like this. Pretty tricky, and probably damned near impossible to filter out.

I’m actually curious whether Markov-generated text would make for an effective attack on a Bayesian filter. I don’t know much about the latter, but my general impression is that the strength of a Bayes spam filter is its ability to evaluate the raw probability of a given set of words occurring in known-spam email vs. all email, independent of the order of those words in a given email.

In other words, a Bayes filter as I understand it wouldn’t care about the sentence structure or order of clauses – it’s not set off by specific fixed phrases so much as by the occurrence or not, period, of unlikely-to-be-legit individual words in a candidate email.

The reason I’m doubtful about Markov as a means to defeat Bayesian filtering in specific is that a Markov regurgitation of source text doesn’t change the probability of any given word in that corpus appearing in the output. So if you feed “clean” text into Markov, you’re getting clean text out; feed “dirty” text (containing spam-flavored tokens) and you’ll get dirty text out. If Bayes just counts it all up as unsequenced tokens, you haven’t gained anything.
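
Just to make the point concrete, here’s a minimal sketch of an order-blind, bag-of-words score that doesn’t budge when the same tokens are reshuffled (the per-word spam probabilities are made up purely for illustration):

```python
import math

# Hypothetical per-word probabilities that a given word signals spam.
# These numbers are invented for illustration only.
SPAMMINESS = {"viagra": 0.95, "refinance": 0.90, "meeting": 0.10, "lunch": 0.05}

def spam_score(text, default=0.4):
    """Order-blind score: sum per-word log-odds over the bag of words."""
    score = 0.0
    for word in text.lower().split():
        p = SPAMMINESS.get(word, default)
        score += math.log(p / (1 - p))  # word order never enters the calculation
    return score

original = "refinance viagra before the meeting"
shuffled = "meeting the before viagra refinance"  # same tokens, Markov-style reshuffle
assert math.isclose(spam_score(original), spam_score(shuffled))  # identical score either way
```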

That’s not to say Markov spam can’t fool some filters. A filter that depends on matching a known-bad fixed string of words to reject spam may well be fooled by a Markov model that churns out unique syntheses of Jane Austen instead of just quoting the same verbatim passages again and again. So it makes sense as a gambit (spambit?), but as I understand it Bayes is pretty much the strongest general anti-spam technique going right now and Markov shouldn’t be a panacea in that sense.

Arbuckle is more than just strips that have been redrawn. They’re strips that are redrawn from John Arbuckle’s point of view!
They’re a reminder that Garfield CANNOT TALK. When you remember that Arbuckle cannot know what Garfield is thinking, the strip gains a new level of ridiculousness.
You don’t need to redraw them for this, of course. Most Garfield strips are improved greatly by simply removing Garfield’s thought-balloons.

I just wanted to share some randomly written text I found on one of my websites –

like the malfunctioning, spastic, wet-nightmare of an artificial omni-intelligent acid fried cyborg, battle of the bits is this mother larva queen spewing out spawned offspring in the form of bit crunchy pockets of noise paintings. elegant pulp data paradigms and musical notions assigned to snippets of life going in and out of relevance. algorithmic musical vessels of meaning for that which we assign… and then sometimes robocop assigns it for us! because we all know in our hearts that if robocop made electronic music, he’d kick all our asses so badly. THINK ABOUT IT. he’s pretty much an electronic man. if we let robocop learn how to use a drum machine, he’s going to destroy what we love the most. help me fight against this rising threat. show your support by calling 1-800-328-4475. if we work together we can preserve our silly human notions of what we think music should be. love, blood mr.

My friends and I will often sit around and ramble nonsense at each other. Markov can replace beatnik ideals but it’s not as much fun. I’m not going to say that engineered creativity doesn’t have its place. It’s just sad, for me, how often people would rather believe that such a presence was not the work of a fellow human. ;D/

@Graham

Or Python/Java/etc. bytecode…

Markov chains we’re talking about? One example of Garkov. The best description of material to Turn a Bayesian spam filtering. They’re even better! The most notable example of the states that page back to work on the links between pages. As a hard time we produce output so many times every letter follows an A, how unbelievably simple they work almost uncannily well, a Definition of the text corpus for Markovian models – are the original 1998 PageRank paper, titled The PageRank sites circa 1996 in this paper have come to a very free bottlemarkable, By

I wonder if you could use Markovian chains to write a decent chess AI?!?! I can imagine it now, Grand Master kooky :slight_smile:

I once wrote a Markov chain program that could interpolate between two graphs:

http://www.teamten.com/lawrence/projects/markov/

Ha! Lawrence, I had forgotten midway through my first comment above that the comment box stripped html and actually linked to your project (as “over here”). Glad you showed up.

  • “I wonder if you could use Markovian chains to write a decent chess AI?!?!”

Well, here you have a problem of putting together a corpus, and one of defining a game state. Finding a corpus shouldn’t be too hard – there are many chess sites, and loads of historical games out there as well, so let’s call that a solved problem for the sake of argument.

(For the sake of counterargument, there is a corpus-size problem: do you want to feed it only excellent play, or are you willing to feed it a great deal more mediocre play? Markov does better with bigger corpuses, but is bigger and less-skilled worth the tradeoff?)

But, yes, so: you have your corpus of, say, many games of chess, each with a few dozen moves on either side. Call it a million total moves that you use to train the model.

Since a Markov table is essentially a series of state-move pairs, we need to define what a state is and what a move is in order to build the table from our corpus of moves.

The obvious answers, I think, are that a state is a given configuration of the board – the location of (up to) 32 pieces, some black and some white, on the 8x8 grid of the chessboard; and a move is a name of a piece and a pair of coordinates, say black pawn from king 2 to king 3 (forgive my lay nomenclature, I’m not up on my chess lingo).

Which, okay, great. But here’s a problem: among our million recorded moves, how much duplication of the board layout do we end up with? If (past the first seven or eight moves) we see very few duplications of a given state, then at most junctures after the opening the model has seen the current state only once, if at all, and there are no real choices available to the AI. In which case you have an AI that can play a few moves and then has to give up and fall back to a different heuristic.

In other words, a more complicated state means less novelty, and the explicit organization of a chessboard is a very complex state indeed in this sense. (Doing this with Go would be worse yet.) To get novel behavior out of your Markov chess AI, you need to find some way to reduce the complexity of the state to guarantee interesting choices even well into a game. Probably not good choices, but interesting, surprising, novel.

One approach: treat state as, not a record of the explicit configuration of the board, but as, say, the type of piece that was last moved by an opponent. Then analyze your corpus of moves like this, perhaps, stocking a 2-order table that picks the next move based on the previous two:

State: (I moved King, he moved Queen)
Moves:
(I move Pawn: probability = 0.27)
(I move Bishop: p = 0.13)
(I move Rook: p = 0.10)

That may be too simple for an interesting AI; perhaps you would want to incorporate move-to-capture vs move-that-doesn’t, or direction of movement, or other generalized aspects of a move. Finding a sweet spot that results in novel but not utterly random play would be the challenge and the fun, I think.
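
(Here’s a rough back-of-the-envelope sketch of that 2-order table idea; the toy corpus and piece names are invented purely for illustration:)

```python
import random
from collections import defaultdict, Counter

# Toy corpus: each game is just the sequence of piece types moved, alternating sides.
# (Made-up data, standing in for a real corpus of recorded games.)
games = [
    ["Pawn", "Pawn", "Knight", "Knight", "Bishop", "Pawn", "Queen", "Bishop"],
    ["Pawn", "Knight", "Pawn", "Pawn", "Bishop", "Knight", "Knight", "Queen"],
]

# 2-order table: key = (my last piece moved, opponent's last piece moved),
# value = counts of which piece type I moved next.
table = defaultdict(Counter)
for game in games:
    for i in range(2, len(game)):
        state = (game[i - 2], game[i - 1])
        table[state][game[i]] += 1

def next_piece(my_last, their_last):
    """Pick the next piece type to move, weighted by observed frequency."""
    counts = table.get((my_last, their_last))
    if not counts:
        return None  # unseen state: fall back to some other heuristic
    pieces, weights = zip(*counts.items())
    return random.choices(pieces, weights=weights)[0]

print(next_piece("Pawn", "Knight"))
```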

That’s just one approach, and off the top of my head as well so there may be some truck-sized holes in the idea. My gut says that applying Markov to chess at a tactical level is probably a doomed notion, but I’m neither an AI researcher nor a chess wonk, and so I wouldn’t be surprised to find that such models do have a role somewhere in existing AIs.

I wrote a little command-line program to do this as part of a CS223 assignment. It’s still good for a laugh. It’s fun to run IM conversation logs through it :slight_smile: Each line starts out with a prefix indicating who’s talking, such as “Rick: blah blah blah\nAndy: blah blah blah”. The generator always correctly predicts that a carriage return is followed by one of these prefixes, so the generated text still looks like an IM conversation.

You could take all of your blog posts and put them through a generator program like that. Crank up the ‘order’ until you generate a new blog post that makes sense and is also original … then post it and see if anyone can tell :slight_smile:
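
(In case anyone wants to try it, here’s a minimal word-level sketch with an adjustable order; the input filename is just a hypothetical stand-in for your concatenated posts:)

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each tuple of `order` consecutive words to the words seen following it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=50):
    """Walk the chain from a random starting state, emitting one word per step."""
    state = random.choice(list(model.keys()))
    output = list(state)
    for _ in range(length):
        choices = model.get(state)
        if not choices:
            break  # dead end: this state only appeared at the very end of the corpus
        output.append(random.choice(choices))
        state = tuple(output[-len(state):])
    return " ".join(output)

corpus = open("all_blog_posts.txt").read()  # hypothetical file of concatenated posts
print(generate(build_model(corpus, order=3)))
```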

Is that what they have been doing?

For the last several years (about 2 years after Bayesian email filters were put into Thunderbird), I’ve been getting a lot of spam with what looks like randomly selected sections of various random works of literature. I collect some of my favorites into what I like to call “spam haikus.” Is this what they have been doing?

It never works of course, as the words and phrases used in random literature never match very well with the vocabulary used by my real email partners.