Marius, owing specifically to its strong grammaticality, the Svada thing looks to be a different beast from a Markov model (at least at the word-by-word level).
Markov’s greatest strength is its simplicity – it is a tiny wisp of a principle from which all kinds of exciting and surprising things can emerge, including, e.g., novel, semi-coherent text. But you can never depend on anything more than semi-coherence on the whole.
The model’s greatest weakness is its lack of context. It knows nothing of sentence structure, or of any logical relationship between the words in its tables beyond sheer proximity. Part of what makes Markov language output amusing is its tendency toward sudden surreal shifts across unexpected points of transition; that such a transition is sometimes grammatical or even semantically plausible is a stroke of luck, not a feature of the model.
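To make that concrete, here is a minimal word-level Markov generator sketch in Python (the toy corpus and the order-1 setting are mine, purely for illustration). Note that the chain consults only the most recent word(s) when picking the next one; any grammaticality in the output is incidental.

```python
import random
from collections import defaultdict

def build_table(words, order=1):
    """Map each length-`order` window of words to the words seen to follow it."""
    table = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        table[key].append(words[i + order])
    return table

def generate(table, length=20):
    """Walk the chain: each step depends only on the current window, nothing else."""
    state = random.choice(list(table.keys()))
    out = list(state)
    for _ in range(length):
        followers = table.get(tuple(out[-len(state):]))
        if not followers:
            break  # dead end: no observed successor for this window
        out.append(random.choice(followers))
    return " ".join(out)

# Toy corpus, invented for the example.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
print(generate(build_table(corpus)))
```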
A pure Markov model is in that sense literally more context-free than even a Context-Free Grammar.
It’s possible that the Svada code depends on a CFG for generating its sentences; or it may rely on some more structurally complex model, or on templates. It’s very neat work regardless, but I don’t believe, at a glance, that it is Markovian at all at the level of sentence construction.
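For contrast, here is a toy CFG generator sketch (the grammar below is invented for illustration; I have no idea what grammar, model, or templates Svada actually uses). Unlike the Markov chain above, every sentence it emits is grammatical by construction rather than by luck.

```python
import random

# A toy context-free grammar: nonterminals map to lists of productions.
# Purely illustrative -- not Svada's actual grammar.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["model"], ["sentence"], ["grammar"]],
    "V":   [["generates"], ["parses"]],
}

def expand(symbol):
    """Recursively expand a nonterminal; terminals pass through unchanged."""
    if symbol not in GRAMMAR:
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    out = []
    for sym in production:
        out.extend(expand(sym))
    return out

print(" ".join(expand("S")))  # e.g. "the model generates a sentence"
```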