JavaScript and HTML: Forgiveness by Default

Have to agree with Adam on this,

“Unfortunately, the Draconians won: when rendering as strict XHTML, any error in your page results in a page that not only doesn’t render, but also presents a nasty error message to users.”

That’s why it’s called STRICT. If there’s a problem on my page I’d like to know about it so I can fix it, rather than relying on some browser behaviour to kick the issue under the carpet.

“forgiveness by default is what works”

…but makes us lazy.

“forgiveness by default is absolutely required for the kind of large-scale, worldwide adoption that the web enjoys.”

That’s a rather magnificent leap to a conclusion, I think. All things considered, had the tools forced developers to follow the specification, the ecosystem of web development might actually be in a much healthier state today.

Take a look at web browsers, for instance. If implementations stuck to the specification instead of making their own interpretations, the barrier to entry for creating new HTML/CSS/Javascript-capable clients would be much, much lower. We’d see an environment where the technology was driven by innovation rather than hamstrung by the quirks of other dominant implementations.

Of course, this situation serves certain vendors just fine. Those vendors are much more interested in customer lock-in than in truly pushing technology forward and enabling us to do MORE with the existing foundation. A certain vendor missed the Internet boom so completely that they’ve done nothing since other than try, at an ever-increasing pace, to keep things from moving forward.

Same from the developer’s perspective: imagine being able to actually produce functionality for the web instead of wasting most of your time debugging the “forgiving” (read: wrong) implementations of the various clients out there. Imagine that wasted effort spent on innovation instead, or being able to code that neat UI feature rather than discovering that a certain client doesn’t implement the spec correctly, and giving up.

You’re arguing that the web would not have come as far as it has if the shitty “forgiving” browsers didn’t exist. I’m wondering how many years behind we’ve fallen from where we could have been, had the early implementations come with at least a modicum of quality and with an expectation that developers are able to produce code that actually meets the specifications. Had the mass-market tools for producing web content actually created documents that validate correctly against the specs, how much more could we have achieved by today?

To me the answer seems clear. Forgiveness in this case has led to a situation where the developers are stuck waist-deep in a mud of “forgiveness” instead of being able to create something greater and better.

Dis Gruntled: Um, the web is pretty much, by definition, platform- and user-agent-independent.

If you want to say “this practice is breaking IE” then say so. Don’t claim it’s breaking “the web”, as it’s not. It’s breaking a single browser.

Yes, IE has a lot of market share. But IE is not “the web”. If you want to complain about /de jure/ standard javascript breaking IE, complain about it breaking IE.

Given that the number of different platforms that IE is not even available for is increasing as time passes (OS X, mobile phones, Linux desktops, etc…), and that those platforms are getting more popular, writing for IE only is becoming less and less prudent.

Personally I think Opera handles the XML errors the best… it will render as much of the page as possible (up to the error) and show a huge message explaining why it cannot go any further.

However, when it comes to the ‘real world’, I develop my websites using Firefox (because of all the developer tools) and have the pages delivered with the XML header… that way, I cannot even see the page until I get my errors fixed… then on the live site, the configuration file (which detects which server it is on) will drop the pages into the more forgiving HTML rendering.

Saying that XHTML Strict should go ahead and display is like saying your compiler should have known what you meant and tried to run the code anyway. But by the standard of the XHTML Strict spec, the document is not conforming.

What I find broken about the example given is: “Unfortunately, the Draconians won: when rendering as strict XHTML, any error in your page results in a page that not only doesn’t render, but also presents a nasty error message to users.”

I have never seen the w3.org validator pass a page that no browser could then handle. Perhaps the validator he was relying on is the one that’s broken?

Adam Bosworth calls this sloppy:
http://www.adambosworth.net/archives/000031.html

Tim wrote:
“Javascript is the single reason I’m looking to move away from web programming as a career. It is the cause of 90% of my testing and debugging. I want to debug logic, not fiddly cross browser issues.”

You are confusing two different issues, namely Javascript development and multi-vendor platforms. Developing solely for a single platform (i.e. Firefox) using Javascript is totally pain-free. Extending that support to Opera, Safari and especially IE is where the pain appears.

None of this pain is due to Javascript.

So, if, say, I decided to abandon web development and build everything in Javascript/Silverlight, the only reason it would be less stressful would be due to the single-vendor platform.

I started a comment here, and when I got to three paragraphs, turned it into a blog post: http://www.nedbatchelder.com/blog/200704.html#e20070429T080859

Once standards cease to be hijacked, extended, and bastardized by manufacturers, we might get something workable and ubiquitous.

Also, I shudder at exhortations to use add-on junk like Flash. Code for the Web! Don’t use proprietary add-ons! One man’s ActiveX is another man’s blank page!

@Grant: The problem is, naturally, that your code is not quite correct for the test, since ‘i + “&lt;br&gt;”’ runs the risk of converting i implicitly to a string, which plays holy heck with the loop. Writing ‘i.toString()’ in place of i is the correct way to do it.

I ran both cases in IE7 and FF2.0, and I didn’t see a problem.

And those of you who agree with Dave Murdock unfortunately disagree with ECMA; ECMA-262 12.6.3 explicitly permits for() without var in the first expression list. Were IE and FF to get sticky over that point, they’d not be compliant with the specification. In a world where everyone insists on “compliance” with this or that RFC or specification (resistance is futile), you have to start asking yourself how good the RFC or specification actually is.
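
For reference, a minimal sketch of the construct in question (the loop bounds and the document.write call are my own, purely for illustration):

    // ECMA-262 (3rd edition), section 12.6.3: the first expression of a for
    // statement may be any expression, so a bare assignment without var is legal.
    // The only side effect is that i ends up in the enclosing/global scope.
    for (i = 0; i < 5; i++) {
      document.write(i + "<br>");
    }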

As for all those error boxes popping up on web pages*, what I’ve seen is that most of the errors have to do with annoying pop-up ads that are trying to violate sandbox constraints. The “developers” of those snippets really ought to learn about try/catch and about playing nice with end-users. Of course, since they’re writing pop-up ads, they probably have no interest at all in playing nice. Either that, or it’s years-old scripting that hasn’t been updated in recognition of the security holes closed over the last couple of years.

* I see them, too, as about half my development time for the last decade has been spent writing Javascript, and I keep debugging turned on AT ALL TIMES (everyone developing scripted pages should). CNN’s website is particularly bad for this.
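
For what it’s worth, the “playing nice” pattern amounts to something like the sketch below; popupWindow is just a hypothetical reference to a window the ad script opened.

    try {
      // Cross-window access may be blocked by the browser's security sandbox.
      popupWindow.document.title = "Advertisement";
    } catch (e) {
      // Permission denied: fail quietly instead of throwing an error box at the user.
    }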

Hi man,
I like reading your blog. One problem, however: I am stuck on IE 5.13 (Mac OS9) - no dosh for an upgrade to X - and I have to call up the source in order to be able to read anything inside “pre” tags on your page. While reading through the source I found that you don’t close your paragraphs. Close them - like you should do with block objects - stick the pres in between as block objects in their own right, and everybody will be able to read everything on your page. I tested it. Yeah, some browsers are sticklers.
Carry on the good work!

Grant is right, the JS “example” simply isn’t true.
Run this in Firefox Error Console:
startIndex = 10; endIndex = 20; rv = Array(); for (i = startIndex; i < endIndex; i++) { rv.push(i) }; rv.toString()
And guess what you get?
rv = [10,11,12,13,14,15,16,17,18,19]
Didn’t you recently complain about Bloggers not being Journalists? :P

Sloppy HTML has its advantages, and strict XHTML does as well.
XHTML is XML, so it is more “portable” between different applications.
You may work with XSLT on it; you may embed other XML languages (such as SVG/MathML) into it; you may use any XML parser to read it; those parsers/writers supporting DOM may also manipulate it easily.
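
A minimal sketch of that last point, using DOMParser merely as a stand-in for whatever XML parser happens to be at hand:

    // Well-formed XHTML is plain XML, so a generic XML parser can consume it.
    var markup = '<html xmlns="http://www.w3.org/1999/xhtml">' +
                 '<head><title>Example</title></head>' +
                 '<body><p>Hello</p></body></html>';
    var doc = new DOMParser().parseFromString(markup, "application/xml");
    var p = doc.getElementsByTagNameNS("http://www.w3.org/1999/xhtml", "p")[0];
    alert(p.firstChild.nodeValue); // "Hello"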

And not being able to be lazy is not a bad thing. Developers need to write better markup, which usually leads to more compatible code as the browser simply doesn’t have to “guess” that much.
Most bigger sites use a CMS anyway, so that “strict” shouldn’t be a problem.

See feeds (RSS/Atom), for example. They are XML documents too, and need to be strict.
And yet, even though almost every major site has a fancy RSS icon somewhere nowadays, I rarely see a broken (XML parser error) feed.
Compare this with the relative number of sites “broken” due to IE-only fixation. Laziness sucks.

So, in my opinion, HTML is a good tool for starters, HTML Strict (yep, there is such a thing) is for those wanting sites that work, and XHTML is the logical step to interoperability.
Sloppy JS, on the other hand, is a tool for nobody; strict JavaScript (addressing the browser quirks as well) is the only solution. You don’t write sloppy Whatever™ code after all, do you?

Brook Monroe:
@Grant: The problem is, naturally, that your code is not quite correct for the test, since ‘i + “&lt;br&gt;”’ runs the risk of converting i implicitly to a string, which plays holy heck with the loop. Writing ‘i.toString()’ in place of i is the correct way to do it.

Which implementation actually behaves like this? That would be a WTF.
You’re not assigning i. You pass the intermediate/temporary result, which is indeed a string, as a function parameter.
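
A quick sketch of why the coercion is harmless, for anyone following along:

    var out = "";
    for (var i = 0; i < 3; i++) {
      out += i + "<br>"; // i is coerced to a string only inside this expression
      // typeof i is still "number" on every pass, so the loop is unaffected
    }
    // out is now "0<br>1<br>2<br>"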

“While I read through the source I found that you don’t close your paragraphs.” - bernhard
The p doesn’t need to be explicitly closed, even in HTML 4.01 Strict. According to the validator results (http://validator.w3.org/check?uri=http%3A%2F%2Fwww.codinghorror.com%2Fblog%2Farchives%2F000848.html), none of the errors seems to be a show-stopper. It might be a CSS issue, but I don’t have IE 5.x to test with.

“The thing I miss from XHTML is auto-closed tags, mainly just because looks a lot cleaner.” - Foxyshadis
Are you sure that it’s XHTML? The following is HTML 4.01 Strict according to “http://validator.w3.org/”: ‘<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"><title>HTML 4.01 Strict</title><p>This is an example of a strict HTML 4.01 page. The html, head and body are optional and p doesn’t need to be explicitly closed.’

Oh wait, you probably meant that you “miss from HTML”.

Jeff, in reference to http://www.codinghorror.com/blog/archives/000846.html, perhaps you should allow visitors to use basic HTML or BBCode so that our comments would be a lot more readable. It’s very hard to read several pages of plain comments.

The ability to quote, link and emphasise would be nice.

You can’t compare the two. Apples and Oranges.

If you write malformed HTML, the browser attempts to make the best sense of it possible and render something.

If you write JavaScript with syntax errors, the interpreter tells you, somewhere, and then doesn’t run any more scripts on the page. Logical/runtime errors are handled about the same as in any other language.

Real XHTML didn’t fail because it’s draconian, it failed because Internet Explorer doesn’t support it. It never had a chance.

Absolutely agreed. Fortunately, we’ll most probably have HTML5 (aka Web Applications 1.0) instead of XHTML2.
http://xhtml.com/en/future/x-html-5-versus-xhtml-2/

BTW, IBM’s PL/I compiler was quite forgiving (because in those days you often had just one compilation of your card deck per day; can you imagine that now?) and tried to do a lot of work to fix programmers’ errors and to produce as much compilable code as possible. And many times it was right and helped a lot. E.g. it added semicolons if they were missing, and nowadays JavaScript does the same. Most of the time, correctly.
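
A small sketch of that JavaScript behaviour (automatic semicolon insertion), for the curious:

    var a = 1     // no semicolon: the parser inserts one for you
    var b = 2     // ...and here as well
    alert(a + b)  // 3
    // It usually guesses right, but not always: "return" followed by a line
    // break becomes "return;", which silently returns undefined.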

Even on the Microsoft websites (MSDN, etc.) there are a bunch of JavaScript errors in IE7.