a companion discussion area for blog.codinghorror.com

The Principle of Least Power


This is how I see it:

I do understand the point this article is trying to put forward, but it very much depends on the problem we are trying to solve with the software being built. In the end, we all design software to solve a certain problem, whether personal or at work.

I believe that the way in which we represent information is simply a design choice made in the design phase, and will depend on how "open" the software is and for what other purposes it will be used. The more closed-source and "secret" the information needs to be, the more complicated or obfuscated we will make it.

If we assume we are dealing with open source software, then I strongly support using formats which are simple to understand and reuse, and which follow some kind of standard where possible. It makes everyone's lives simpler.

One of the reasons for using a Java applet may well be to try to hide the source as much as possible. I don't think that a website which uses applets really intended for other users to use that website as a source of weather data. In the end, a well-designed piece of software will keep the GUI separate from the data store/representation, in which case it doesn't matter that a Java applet was used.

I agree with Tom: “Use the right tool for the job.”


HTML is not code, it’s markup (data). Dublin Core Metadata and “the content of most databases” is, uh, data. ACLs are still data!

All Tim is doing is lumping code and data into the same category, then pointing at the data and saying it’s easier to analyze because it’s simpler. No, it’s easier to analyze because it’s content, not instructions on what to do with the content.

Ugh, Aaron, then what is a well-written LISP program? The whole point of LISP bottom-up programming is that you boost the language itself with higher-order functions and sometimes syntactic macros, up to the point where, when you actually get to write the application logic, what you are actually using is hardly more than an extremely handy domain-specific markup language. It just has a more succinct syntax than XML, but (customer (field 'name) (field 'address)) isn't that much harder to analyse programmatically than its XML counterpart. It's a trivial transformation to XML.
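To make the "trivial transformation" concrete, here is a minimal sketch. The nested-array representation of the s-expression is my own assumption for illustration, not something from the comment:

```javascript
// Sketch: the s-expression (customer (field 'name) (field 'address))
// modeled as nested arrays, plus a trivial recursive transform to XML.
function sexpToXml(node) {
  if (typeof node === "string") return node; // leaf: plain content
  const [tag, ...children] = node;           // head is the element name
  return `<${tag}>${children.map(sexpToXml).join("")}</${tag}>`;
}

const sexp = ["customer", ["field", "name"], ["field", "address"]];
console.log(sexpToXml(sexp));
// → <customer><field>name</field><field>address</field></customer>
```

A dozen lines suffice because both forms are the same tree; only the surface syntax differs.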

So Tim is wrong, but not for the reason you mentioned. What Tim actually forgot is that power IS simplicity: the power you have at hand when you write your libraries is simplicity when you define your business logic.


The problem with understanding what Jeff is trying to convey is that you really need to be in a position to compare different languages to each other.

Remember that no language is idiot-proof (an idiot can write bad code in any language), although some seem to be genius-proof (I'm no genius, so I'm not sure, but it seems to me that certain ideas in certain languages enforce complexity). Very often, people use "generated" arguments against stuff they have no personal experience with, which is detrimental to any debate.

So I would humbly recommend those of you who have not done any non-trivial project in JavaScript (preferably using a modern client-side framework like Dojo, et al.) to do so, so you come into a good position to compare what (I think) Jeff is referring to when he mentions simplicity.

For me, the two major bullet points when it comes to the simplicity of JavaScript are duck typing (not unique in any way) and the über-simple object model: every object is enumerable over its properties, everything is an object (except scalars), and functions are first-class objects.

This means that, combined with the ability to create objects (including anonymous functions) on the fly for use as arguments or return values, bloat gets cut by orders of magnitude.

That’s simple for you.
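The features just described can be sketched in a few lines; the names here are purely illustrative:

```javascript
// Everything is an object, methods are just properties, and
// properties are enumerable with an ordinary for...in loop.
const point = { x: 3, y: 4, length() { return Math.hypot(this.x, this.y); } };

const keys = [];
for (const key in point) keys.push(key);

// Functions are first-class: pass one as an argument, return one
// as a value, create both anonymously on the fly.
const scale = (factor) => (p) => ({ x: p.x * factor, y: p.y * factor });
const doubled = [point].map(scale(2))[0];

console.log(keys);           // ["x", "y", "length"]
console.log(point.length()); // 5
console.log(doubled.x);      // 6
```

No class declarations, interfaces, or type annotations are needed to get any of this working.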

Also, if static typing and enforced exception catching were as important as they seem, it would never have been possible to build something as large and powerful as Dojo, or indeed Yegge's yet-to-be-released Rhino on Rails.

I have a sneaking suspicion (after eight years as a Java/JEE coal-miner) that those two features (among others in the same camp) may just have been costing us a wallop more in complexity than they have actually given us in usable 'security' in the application.



@Peter Svensson, I'd agree that Java's type system and exception declarations contribute to the complexity of codebases. But static typing isn't necessarily as bad as Java's. It's good when the compiler catches real type errors. Java's types make you 1) spell out types all the time (the compiler can't work them out) and 2) invent meaningless types to make common situations work (for example, if I want a list that can contain As and Bs, I need to invent some common superclass that A and B inherit, whereas in dynamically-typed Ruby or better-statically-typed Haskell, I can just put As and Bs into the list).
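The heterogeneous-list point is easy to see in any dynamically-typed language; here is a sketch in JavaScript with invented class names:

```javascript
// Unrelated types share a list with no declared common supertype.
class Circle { area() { return Math.PI * 1 * 1; } }
class Square { area() { return 2 * 2; } }

const shapes = [new Circle(), new Square(), "not even a shape"];

// Duck typing: call area() wherever it exists; no cast, no superclass.
const areas = shapes
  .filter((s) => typeof s.area === "function")
  .map((s) => s.area());

console.log(areas.length); // 2
```

In Java, the equivalent would force either a shared `Shape` interface or a `List<Object>` plus downcasts; here the list simply holds whatever you put in it.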

Of course, all that's tangential to Tim BL's main point: prefer formal languages that are lower down the Chomsky hierarchy when deciding on a representation for the web.


“all programming will be web programming” Honestly Jeff?

The web can never replace several large sectors of software, such as systems level work and, especially, embedded work.

Take a look here for some numbers. http://www.embedded.com/columns/barrcode/218600142?printable=true



More powerful languages are far more conducive to easy analysis than less powerful ones if “power” implies “expressive power”. Try analyzing some old C code that deals with strings and compare it to the C# equivalent. JavaScript is a powerful language, compared to, say, C or VBScript/Classic ASP. It’s easier to analyze JavaScript code than C code, all else being equal.
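The contrast can be sketched in one language: below, the first function mimics C-style manual buffer walking, while the second line uses the expressive built-in. Both produce the same result, but only one is easy to read and analyze at a glance:

```javascript
// C-style: walk the string by index, tracking slice boundaries by hand.
function splitManual(s, sep) {
  const out = [];
  let start = 0;
  for (let i = 0; i <= s.length; i++) {
    if (i === s.length || s[i] === sep) {
      out.push(s.slice(start, i)); // emit the field we just walked past
      start = i + 1;
    }
  }
  return out;
}

console.log(splitManual("a,b,c", ",")); // ["a", "b", "c"]
console.log("a,b,c".split(","));        // ["a", "b", "c"]
```

The expressive version states intent directly; the manual version forces the reader to reverse-engineer the intent from index bookkeeping.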

Then again, he didn’t really explain what he meant by power. If it’s some arbitrary definition of power that includes pointer arithmetic or Windows API calls, then yeah, I guess JavaScript isn’t very powerful.


@Andy: It might not be Duke Nukem Forever, but what about a Doom(esque) POC?



Not sure if I agree with this statement in this blog post. Language power has nothing to do with how information is transferred, displayed, and formatted.

Practically any language can parse, format, retrieve, or forward information. This has nothing to do with whether or not the information is accessible.

I might write a program in VB, Perl, ASP, C, Java, or .NET to create an HTML page that displays the weather. People can then look at the data. HTML is open and the data readily accessible.

I might also write a similar program that displays the weather in a Flash control, a Java applet, or maybe an ActiveX control written in VB, etc. In these cases, the information is no longer accessible due to the container it is now in.

In the first instance, the container was the browser and the format was HTML. This is an open means of rendering and displaying the data.

In the second case, the container was proprietary, or additional code was written not to display the raw data but to format and convert it so that it was no longer accessible.
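A quick sketch of why the open container matters. The HTML snippet below is invented for illustration; the point is that plain markup stays machine-readable, whereas the same number rendered inside an applet or Flash control would be opaque to any scraper:

```javascript
// Weather served as plain HTML: any client can recover the value
// with ordinary text processing.
const html = '<div class="weather">Temperature: <span id="temp">21</span>&deg;C</div>';

const match = html.match(/<span id="temp">(\d+)<\/span>/);
const temperature = match ? Number(match[1]) : null;

console.log(temperature); // 21
```

With a binary container there is no text to match against; the data exists only as pixels on screen.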

In early versions of the web, text and images were about it. Now it is much more complex. People want bells and whistles; you can't just display the temperature and whether or not it is raining, you have to put a 3-D or 2-D map behind it and show the location that way. The user needs to be able to manipulate the map as well. It is much more than just the temperature. The added complexity usually hides the underlying data in some fashion.

If JavaScript becomes the de facto standard, then most web applications will implement it, as it is a powerful scripting language.

Maybe HTML just isn't good enough for the web anymore; after all, it is TEXT markup. Maybe a new protocol is needed for the more complex forms of data being presented on the web.


HTML should be a dead markup language. The web has grown far beyond what it should be doing with HTML, but due to backwards compatibility, we instead force HTML to do things it was never intended to do.

CSS and manipulating the DOM using JavaScript are crude hacks to create dynamic web content.

The problem is that there is no real solid replacement. It's certainly not Flash, and Silverlight will probably not gain the across-the-board support it needs.

So we keep hacking away, mixing markup with logic, in a vain attempt to create a rich user experience.


Oh, the pain. There are a lot of seeming missteps in Tim Berners-Lee's logic, but this helps explain the problems with HTML and the XML family of technologies.

HTML makes a crappy application programming language. Javascript + XML + HTML makes a crappy application programming platform. Why did programmers give up everything we had built up to this point for easy networking?

The secret to the success of the Internet was e-mail, HTML for documents, and networking. These were all successful, especially the networking part. HTTP made it rather easy to get a resource from the server to the client and TCP/IP made connections work.

HTML's simplicity was a powerful factor back in the day when everyone had to implement it. Being easy to implement is much more important to early adoption than expressiveness, but unfortunately, despite the fact that we're no longer there, TBL is stuck there mentally.

HTML + XML + Javascript replaces one headache (understanding existing code) with another (having to invent the code yourself), or it simply has the same problems as other languages. Javascript is the nicest piece of the trio, but it can only do so much to prop up a language designed for documents in an application world (HTML) and a data language not suited for large amounts of data (XML).


Ok, Atwood’s law is starting to accelerate at a geometric rate.

My Twitter feed has been full of things like JavaScript projects for LDAP servers and clients, GUI bindings, encryption, binary file decoding, WebGL bindings, etc.

Good call, dude.


Luis Montes left out Node… now we’re writing entire web servers in Javascript.


It’s bigger, and taking hold faster than any of us could have imagined. Jeff, you were way ahead on this.



And now it's not only React and Vue.js for the GUI; let's add GraphQL to rethink RESTful APIs. Important implementations of GraphQL servers are in JavaScript.


10 years on and it's more true than ever. Hail to Atwood's Law!



Whoever claims that everything is implemented in JavaScript (mainly @EdgarCerecerez) because of how weak it is, is wrong. This has nothing to do with backend services, but rather with data transparency and how it's being sent to the client. If anything, over the years since the death of applets and Flash, JavaScript became much more complex and convoluted at the same time, because of how many tools and trap choices there are to accomplish the same thing. Being a language that was designed over the course of 20 days, it had a lot of nuances, but that's far worse when there are hundreds of tools trying to solve or reintroduce problems that were already solved. Being a JavaScript-first developer, this makes me sad, because instead of building on the experience provided by other languages like Java, Ruby, Python, etc., that experience was ignored and everything went south very quickly.

Important implementations were never in JavaScript. Even Node itself is written in C/C++; the main tooling components are written in C/C++, and only the scripting parts (for example, sets of tasks) are written in JavaScript.


It just keeps getting better

It runs real Windows 95 as a JavaScript-based desktop app (using Electron), on top of a virtual x86 machine written in JavaScript.