Worse Is Better

Although it's a little hard to parse through, I was blown away by The Rise of "Worse is Better", because it touches on a theme I've noticed emerging in my blog entries: rejection of complexity, even when complexity is the more theoretically correct approach.


This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2004/08/worse-is-better.html

In my opinion x86 succeeded through marketing and positioning, not implementation.

COM is simple in principle but for quite a while remained very complicated in implementation. Consider what VB6 did for writing COM components. ATL also made it easier to write COM components in C++. Don’t forget that .Net Framework was originally planned as COM+ 2.0.

Jeff, I agree in principle that simple is better, but I think a few of your examples are way off.

x86 architecture (crap or otherwise) is VERY complex compared to a RISC architecture. It happened to win out in the marketplace because the suppliers of RISC architectures (DEC, HP, Sun, SGI, etc) were never interested in the consumer market. Intel, meanwhile, kept increasing complexity in order to maintain backward compatibility (remind you of Windows?), thus solidifying its market share.

The reason we abandoned INI files in favor of the Registry was that they were too simple to be useful and the parsing API did not make it easy to add complex functionality. The Registry solved that problem but introduced the others you mention. XML is not necessarily simple (think about what it takes to parse it), but the tools are now in place that make it simple, so the Registry no longer has an advantage.
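
To make that concrete, here is a minimal sketch (hypothetical section, key, and path names; Windows-only) contrasting the single-call INI read with the open/query/close sequence the Registry requires:

    /* Sketch: reading one setting the INI way vs. the Registry way.
       Section, key, and path names here are hypothetical. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        char value[256];
        HKEY key;

        /* INI: a single call against a plain text file. */
        GetPrivateProfileStringA("Display", "Theme", "default",
                                 value, sizeof(value), "C:\\myapp.ini");
        printf("INI says: %s\n", value);

        /* Registry: open the key, query the value, close the key,
           checking for errors at each step. */
        if (RegOpenKeyExA(HKEY_CURRENT_USER, "Software\\MyApp", 0,
                          KEY_READ, &key) == ERROR_SUCCESS) {
            DWORD size = sizeof(value);
            DWORD type;
            if (RegQueryValueExA(key, "Theme", NULL, &type,
                                 (LPBYTE)value, &size) == ERROR_SUCCESS
                && type == REG_SZ) {
                printf("Registry says: %s\n", value);
            }
            RegCloseKey(key);
        }
        return 0;
    }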

COM. COM is not more complex than Web Services. Again, it’s just that the tools for developing Web Services are much nicer than the tools for doing COM. That, and the fact that Web Services are much more open than COM and give you better interop opportunities.

Thanks for your comments.

I think x86 vs. RISC is still firmly in the camp of “worse-is-better” vs. “the-right-thing”. Clearly x86 is worse than RISC from a technical standpoint, but it has triumphed for other, more practical reasons (e.g., implementation).

“COM is not more complex than Web Services”-- is this a serious statement? I know people far smarter than myself who struggled with COM for years. On the other hand, the concepts behind Web Services can be grasped in a day by the most average developer.

Good point on the .INI file-- and I agree that XML parsing is a lot of overhead-- but I think the general theme of “plain text” vs. “complex proprietary binary format” is still a valid illustration of how simplicity always wins in the long run.

I never understood why people think COM objects are complicated. I have a friend whom I hired as a Delphi programmer, whose first task was creating a COM object to do some processing (our company mainly writes COM - not even COM+ - objects for a living). He started writing the COM object on his first day, and I had to specifically warn him: “at some point, you will read some article on the web about how complicated COM objects are; ignore that part, it’s because they are writing them in C or C++”.

“…the concepts behind Web Services can be grasped in a day by the most average developer” – here’s how I explained the idea of a COM object: “it’s like a regular Delphi object, except that it derives from TAutoObject and you use the type library editor to add/modify methods and properties”. That was all he ever needed. Sure, basic level stuff - he never wrote aggregated objects or custom factories - but on the other hand, 95% of our projects don’t need those.

It’s like saying “programming with Windows is incredibly complicated”. Sure, the first program I saw (for Win 3.11) that actually created a window with a button on it was 80 lines of C. Tools have advanced a little since then; trying to write a SOAP-compliant web service in K&R C would be quite a difficult task.
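
For anyone wondering what that C flavor actually looks like, here is a minimal sketch (a hypothetical object; Windows-only, and you’d link against uuid.lib for IID_IUnknown - the real thing also needs a class factory and registration) of just the IUnknown boilerplate that Delphi’s TAutoObject generates for you:

    /* Sketch of a bare COM object in plain C: nothing but the
       IUnknown reference-counting plumbing. */
    #include <windows.h>
    #include <objbase.h>
    #include <stdlib.h>

    typedef struct MyObject {
        IUnknownVtbl *lpVtbl;  /* every COM object starts with a vtable pointer */
        LONG refCount;
    } MyObject;

    static HRESULT STDMETHODCALLTYPE My_QueryInterface(IUnknown *self,
                                                       REFIID riid, void **ppv)
    {
        if (IsEqualIID(riid, &IID_IUnknown)) {
            *ppv = self;
            self->lpVtbl->AddRef(self);
            return S_OK;
        }
        *ppv = NULL;
        return E_NOINTERFACE;
    }

    static ULONG STDMETHODCALLTYPE My_AddRef(IUnknown *self)
    {
        return (ULONG)InterlockedIncrement(&((MyObject *)self)->refCount);
    }

    static ULONG STDMETHODCALLTYPE My_Release(IUnknown *self)
    {
        ULONG n = (ULONG)InterlockedDecrement(&((MyObject *)self)->refCount);
        if (n == 0)
            free(self);  /* last reference gone: destroy the object */
        return n;
    }

    static IUnknownVtbl g_vtbl = { My_QueryInterface, My_AddRef, My_Release };

    IUnknown *MyObject_Create(void)
    {
        MyObject *obj = malloc(sizeof *obj);
        if (!obj) return NULL;
        obj->lpVtbl = &g_vtbl;
        obj->refCount = 1;
        return (IUnknown *)obj;
    }

In Delphi or VB6 every line of this is generated for you, which is exactly the point.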

I figured it must have been an April Fools’ thing.

  • “COM is not more complex than Web Services”…
  • “I never understood why people think COM objects are complicated”

It MUST be an April Fools’ joke. These comments must have been changed to appear as if they were from the past. Did you use GMail Custom Time somehow?

http://mail.google.com/mail/help/customtime/index.html

I don’t agree that the New Jersey solution was better, at least not as it is presented. The way I look at it, you can handle that condition once in the OS, or thousands of times in applications all over the place. Which is simpler? How many times will you have programmers who are new to the environment and don’t put that check in? Or forget? Or maybe the required check will be different in a future release, requiring changes to how many programs?

I do a lot of iSeries (AS/400) programming. It is common to see date-format routines written in different programs scattered all over the place. Someone needs to convert a date; easy enough, I’ll write a conversion. I’m used to PC programming and standard libraries, so my first inclination is to write a standardized implementation and start pointing those programs to it instead.

For me this is “right” because if there is a problem or a new format is desired, the change required is minimized. Most dates I see are still handled as decimal or packed fields even though a date type has been available for years. Ever wonder why Y2K was such a pain on midrange and mainframe systems?
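
As a sketch of the idea (assuming a hypothetical 8-digit YYYYMMDD decimal field - real formats vary from shop to shop), the point is one shared routine that every program calls:

    /* Sketch of the "one standard routine" idea: a single, shared
       conversion from an 8-digit YYYYMMDD decimal field into a
       struct tm, instead of a copy in every program. */
    #include <string.h>
    #include <time.h>
    #include <stdbool.h>

    bool ymd_to_tm(long ymd, struct tm *out)
    {
        int y = (int)(ymd / 10000);
        int m = (int)(ymd / 100 % 100);
        int d = (int)(ymd % 100);

        if (m < 1 || m > 12 || d < 1 || d > 31)
            return false;            /* reject obviously bad dates */

        memset(out, 0, sizeof *out);
        out->tm_year = y - 1900;     /* struct tm counts years from 1900 */
        out->tm_mon  = m - 1;        /* ...and months from 0 */
        out->tm_mday = d;
        return true;
    }

If the format ever changes, or a bug turns up, the fix happens in exactly one place.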

The simplicity here should not be letting the OS developer be lazy at the expense of the app developer. It should be to minimize the effort to handle a condition. And the way to do that is handle it in one piece of code in one place (in this case, the OS).

Any idiot can make something complex. It takes genius to make it simple. But applying the wrong solution just because it’s simple is cheating, and it will come back to bite you.

I recently discovered the “Worse is Better” article and backtracked into this. I agree, up to a point.

First, there is a life-cycle issue: simple design does not equal simple maintenance. Decoupling requires much effort and will increase development time, but it will pay back at modification time and testing time, and even during development if the system is large.

Second, the real issue is deciding on a criterion for what is simple. Clearly Occam’s razor is one criterion, but recall that the most correct rendering of this principle is “Do not multiply entities beyond necessity.” Assessing the extent to which a design is simple in this way requires us to consider what is or is not necessary, and to what purpose. Recall as well Einstein’s injunction to “keep it as simple as possible, but no simpler.” Again our attention is called to trade-offs in design. Note further, and with appropriate alarm, the extent to which this insight is now almost universally debased into the “keep it simple, stupid” principle.

Third, a design heuristic is a heuristic, not a law, and thus must always guide judgment, not supplant it.

My takeaway from this issue is this: complexity has a cost. Sometimes it has a corresponding benefit. If the cost is offset by the benefit, then the complexity is worth the trade-off. In some cases a simple design will win out over a complex one because the added benefit is not worth (or, in the case of the market, not seen as worth) the cost.

I think the best example of this is Ethernet vs. Token Ring. Even so, there are cases for which Token Ring is worth the extra cost, but it lost in the market because such cases are not dominant. Never forget that when weighing the costs and benefits of a design.

Interesting site.

Doesn’t it depend on the situation? Sometimes correctness is required; sometimes simplicity (for all its benefits) IS the correct choice.

I would agree with Sean Cosgrove, in the OS kernel example, that the interrupted-system-call case should be handled by the OS. However, if the purely correct solution is too complicated, why not abstract the simple solution with a wrapper that automates what the programmers would otherwise have to do (loop until successful)?
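
Something like this minimal sketch, assuming the interrupted-read case from the original essay, is all that wrapper would need to be:

    /* Minimal sketch of the wrapper idea: retry a read() that was
       interrupted by a signal (the "PC loser-ing" case from the essay)
       so every caller gets the "right thing" semantics without writing
       the loop itself. Error handling beyond EINTR is omitted. */
    #include <errno.h>
    #include <unistd.h>

    ssize_t read_retry(int fd, void *buf, size_t count)
    {
        ssize_t n;
        do {
            n = read(fd, buf, count);
        } while (n == -1 && errno == EINTR);  /* interrupted: try again */
        return n;
    }

The OS keeps its simple implementation, and application programmers only ever see the “right thing” behavior.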