I see what you did there…
def change_the_world():
    if platform == inconsistent or platform == irritating:
        incite_the_community(platform)
        wait(changes)
    return feeling_accomplished

def incite_the_community(platform):
    article = f"complaining about the shortcomings of {platform}"
    for person in influential_people:
        article += helpful_suggestion
        article += f"{person} should get involved"
    blog += article
Do you still program computers, or did you get bored and decide to dedicate yourself to full-time social hacking? It must be nice to reach the level of accomplishment in life where you can focus your energy on coming up with good ideas rather than being inundated with implementing them.
If your thought process even remotely follows the ideas that Joel Spolsky presented at his Google Talk back when StackOverflow was still a ‘new thing’, your suggestions subtly hint at social engineering.
Please don’t take my remarks the wrong way; I mean that with the sincerest respect.
As for standardizing Markdown, it’s more of a social than a technical problem. Already, a very loosely defined pseudo-standard has been released into the wild, and a greater ecosystem revolves around the many independent implementations (obviously). What you’re seeing is a result of the ‘bike shed’ effect. While it’s somewhat difficult to create a complete end-to-end Markdown solution, it’s much easier to create a 90% solution with custom exceptions. I wouldn’t be surprised if John Gruber is unreachable because he’s fed up with hearing color suggestions for the shed.
No matter what you do the majority of the shed painters will never fall in line with a standard. I’d bet that you have heard of ‘Confirmation Bias’ before. Let me present exhibit A.
To break through that, the specification and the implementation need to be of a high enough caliber to hijack the ‘Markdown’ namespace. If that can be achieved, every other iteration will be considered ‘just another copy’ and the non-standard branches will wither. W3C did it with the HTML spec, Apple did it with electronics design, Google does it with everything they can. I’m not a Sci-Fi enthusiast but even I know that ‘2001: A Space Odyssey’ is the ubiquitous reference for all Sci-Fi. That raises the question: why is that?
The interesting thing about OSS projects vs commercial ones is that in OSS the community becomes the currency. The larger the community, the better the feedback, and the faster the code quality improves. Conversely, the better the quality, the more people the project attracts. After a certain point the success of a project becomes a runaway effect — at least until somebody screws up (and the project gets forked) or the platform the project is built on becomes obsolete.
I see your inspirational troll but I like technical pissing contests as much as the next guy…
First, for all the people who advocate the use of LL(*), ANTLR, or equivalent parser generators, take a minute to consider the excessive amount of overhead those approaches create. You’re talking about building a complete AST (Abstract Syntax Tree) with a ton of intermediate memoization for what should essentially be a simple top-down parser.
It turns out that Chomsky was a pretty smart guy.
That may ‘work’ on local/browser implementations but on the server-side it won’t scale for shit.
I would argue that Markdown has a simple enough grammar that it should be possible to parse it with a Type 3 parser using a single-char regex matching + FSA (Finite State Automaton) scheme. We’re talking no AST and very, very little overhead. The only memoization overhead expected is equal to the number of chars accumulated between state transitions (ie one string, no complex data structures necessary).
We’re talking a no frills implementation but it should be lightweight enough that further optimizations (ex inlining) will be rendered unnecessary.
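To make that concrete, here’s a minimal sketch of the scheme — single-character transitions, one accumulator string, no AST. The state names and the two inline rules (code spans and emphasis) are my own illustration, nowhere near a complete Markdown grammar:

```javascript
// Type 3 (regular-grammar) scanner sketch: one pass over the input,
// single-char transitions, and the only memoized data is `buf` — the
// chars accumulated since the last state transition.
function scan(input) {
  var state = 'text'; // current FSA state: 'text' | 'code' | 'em'
  var buf = '';       // accumulator: flushed on every transition
  var out = '';
  for (var i = 0; i < input.length; i++) {
    var c = input.charAt(i);
    if (state === 'text') {
      if (c === '`')      { out += buf; buf = ''; state = 'code'; }
      else if (c === '*') { out += buf; buf = ''; state = 'em'; }
      else                { buf += c; }
    } else if (state === 'code') {
      if (c === '`') { out += '<code>' + buf + '</code>'; buf = ''; state = 'text'; }
      else           { buf += c; }
    } else { // 'em'
      if (c === '*') { out += '<em>' + buf + '</em>'; buf = ''; state = 'text'; }
      else           { buf += c; }
    }
  }
  return out + buf; // flush whatever is left over
}

// scan('use *less* `memory`') -> 'use <em>less</em> <code>memory</code>'
```

Note there’s no tree and no backtracking — just a state variable, one string, and a constant amount of bookkeeping per character.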
The only exception to this is where output needs to be further processed, such as the numbered-link style that SO uses (which I really like) and syntax highlighting.
For syntax highlighting, it’s trivial to add an inline parser hook that can be leveraged for additional processing. For the numbered links you can do a mark-and-replace through a second pass, which could be further optimized by marking string index positions on the first pass.
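A hypothetical sketch of that two-pass scheme: pass one collects the numbered definitions and strips them, pass two resolves the references. The regexes and function name are illustrative assumptions, not from any real implementation:

```javascript
// Two-pass numbered-link resolution (illustrative sketch).
function resolveNumberedLinks(text) {
  var defs = {};
  // Pass 1: collect definition lines like "[1]: http://example.com"
  // and strip them from the body.
  var body = text.replace(/^\[(\d+)\]:\s*(\S+)\s*$/gm, function (m, n, url) {
    defs[n] = url;
    return '';
  });
  // Pass 2: replace references like "[jQuery][1]" with real anchors;
  // unresolved references are left untouched.
  return body.replace(/\[([^\]]+)\]\[(\d+)\]/g, function (m, label, n) {
    return defs[n] ? '<a href="' + defs[n] + '">' + label + '</a>' : m;
  });
}
```

The first-pass callback is exactly the kind of place where you could record string index positions instead, so the second pass becomes a handful of splices rather than another full scan.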
In lower-level languages this could probably be optimized even further using non-null-terminated strings (ie ones that carry a length prefix), but I’m no prolific C hacker.
If you’d like to see a Type 3 parser in action, feel free to browse the source @ jQuery-CSV. I created it because I wanted to complete the first 100% RFC 4180-compliant parser written in pure javascript. jQuery isn’t strictly a dependency, but if I’m going to go through all the effort of hijacking a namespace, I might as well go for the biggest one.
It contains two minimal CSV-specific parser implementations, $.csv.parsers.splitLines() and $.csv.parsers.parseEntry() (the names should indicate what they do). Also, the library includes hooks to inline code into the parser stages for further processing (ex auto-casting scalar values).
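For illustration, a value-casting hook in that spirit might look like the following — a standalone sketch, not jQuery-CSV’s actual hook API (check the library’s docs for the real option names):

```javascript
// Hypothetical per-value hook: turn CSV string values into native
// scalars where the text unambiguously looks like one.
function castScalar(value) {
  if (value === 'true') return true;
  if (value === 'false') return false;
  // Guard against '' because Number('') is 0, not NaN.
  if (value !== '' && !isNaN(Number(value))) return Number(value);
  return value; // leave everything else as a string
}

// castScalar('42')   -> 42
// castScalar('true') -> true
// castScalar('foo')  -> 'foo'
```

The point of inlining a hook like this into the parser stage is that the cast happens as each value is emitted, instead of requiring a second walk over the parsed output.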
I can’t really take credit for the idea, though. The newest parser implementation was inspired by some very good suggestions from the author of jquery-tsv. I didn’t even know what a Type 3 parser was a month ago. I have zero formal education in programming; I just have a talent for picking this stuff up along the way.
Will all of the half-assed CSV parsers that can be found on literally thousands of blogs disappear overnight? Of course not. They will still exist, but the power of branding is that a name can propagate much faster than a concept.
I’m not sure if somebody is measuring but I think we have a winner (me). Either that or my ‘confirmation bias’ is being a douche again. lol…