The Greatest Invention in Computer Science

And to hit on the first comment’s complaint: it’s disassembled code – but even the comments suggest the existence of routines.

It would not surprise me to hear that routines predate the compiler or perhaps even the assembler. There’s not much “computer sciency” stuff behind the idea.

How long should this routine be? How long is too long?

Well, to give a real-life example from a codebase I just got thrown into:

line 121 : "int main (int argc, char* argv[])"
line 2177: } /* main */

This is most definitely “too long”. And rest assured, in that piece of junk called “software”, this is not even the longest routine. No, I’ve already seen a 7000-liner, and I can’t tell what other neat surprises may still be hidden in there.

(BTW, 7000 lines: weren’t there times when we wrote a whole operating system in less than that?)

How short is too short?
When is code “too simple” to be in a routine?

Actually, never. Such short routines get inlined these days anyway, so even a zero-liner (sic!) can have its use without any “performance impact”. Especially in object-oriented programming, where a lot of routines just do nothing until they’re overridden. :wink:
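To make the “no performance impact” point concrete, here is a minimal sketch (the names are mine, purely illustrative): a one-line accessor that any modern optimizing compiler will typically inline, so the call vanishes entirely.

    #include <stdio.h>

    struct point { int x, y; };

    /* A one-liner routine. Marked inline as a hint; at -O2 most compilers
       will inline a body this small even without the keyword. */
    static inline int point_x(const struct point *p) { return p->x; }

    int main(void) {
        struct point p = { 3, 4 };
        printf("%d\n", point_x(&p));  /* compiles to the same code as printing p.x */
        return 0;
    }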

http://en.wikipedia.org/wiki/Turing_Award

I used to work in the computer gaming industry, and there was a time when unrolling loops was imperative, so we often had functions that were several thousand lines long. I think the longest one I ever had was around 150,000 lines where we had to do 2 operations per pixel on a 320x240 screen. But luckily we didn’t really have to maintain that, we just wrote programs that wrote the C files.
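For anyone who hasn’t seen that trick, here is a toy version of the idea in C (not the actual game code; the two per-pixel operations are placeholders): a small generator program that prints the unrolled C file, so no human ever has to maintain the thousands of generated lines.

    #include <stdio.h>

    /* Toy code generator: emits an "unrolled" routine with one statement
       per pixel, so the generated code has no loop overhead at runtime. */
    int main(void) {
        const int width = 8;  /* the real case described above was 320x240 */
        printf("void blit_row(unsigned char *dst, const unsigned char *src) {\n");
        for (int i = 0; i < width; i++) {
            printf("    dst[%d] = src[%d];\n", i, i);   /* placeholder op 1 */
            printf("    dst[%d] ^= 0xFF;\n", i);        /* placeholder op 2 */
        }
        printf("}\n");
        return 0;
    }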

Actually, I believe the single greatest invention in C.S. is the Data Type. Seriously, think about it: the array, the binary tree, the hash table… How would you do anything useful without them?

Of course, programming without routines is a pain in the… somewhere, but how would you manage information without arrays? Or how would you search without trees?

Oops, my mistake, I meant to write “data structure”, not “data type”.

Sorry

With routines, it comes down to whichever chipset was the first to support JMP or CALL, as those instructions formed the basis for early routines in assembly programming.

The higher-level languages that supported routines were compiled down into those same instructions, with various techniques used to pass parameters (since the hardware didn’t support them directly). These early languages didn’t simply paste the routine’s body out at every point it was used in the code; rather, they compiled each routine into a lower-level assembly routine.

With jump and return (these basic instructions had many different names in different architectures) support in chipsets, you have the foundations of what a routine is - so routines are very very old and very fundamental.
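A minimal C sketch of that foundation (the mnemonics in the comments describe the general shape a compiler emits on a CALL/RET-style machine, not the output of any particular toolchain):

    /* Every C function call ultimately rests on those two primitives. */
    int square(int x) {        /* reached via CALL: push return address, jump here */
        return x * x;          /* result left in a register                        */
    }                          /* RET pops the return address and jumps back       */

    int twice(int x) {
        return square(x) + square(x);   /* two CALLs, two returns */
    }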

I think you have come way too late. Hopper’s whole concept of a “compiler” is the greatest invention in computer science. Everything else is just arguing over the paint job.

Oh yeah, and instructions themselves can be considered routines; it’s just that they’re hard-wired… very meta (and it depends how far you want to go ;))

http://en.wikipedia.org/wiki/A-0_programming_language

I would agree with Robert on that one. Although routines certainly came before the compiler, the compiler is probably much more important in the advancement of computer science.

Perfect pick - you just have to look at the assembly code of Microsoft’s BASIC for the Commodore 64 to see how things are not always arranged as routines, in order to save bytes. It was definitely not written with maintenance in mind.

With this post I feel obliged to explain how the computer does routines. Basically, in memory there is the stack, and in the processor there are the registers. One of the registers is called the Program Counter (PC); it tells you which instruction is being executed and gets incremented after each instruction is done. When you call a routine, you basically push the value of the PC and all the other registers onto the stack, so that all the registers are available to your subroutine. Then you JMP to the subroutine’s code in memory (JMP basically puts a value into the PC register; that value is the address of the instruction you want to execute next), and when the subroutine finishes you pop all the values back off the stack into the registers. The last one to be restored is the PC, and it gets incremented, so the next instruction to be executed is the one in the program right after the subroutine call.
The point of all of this? The part where you push all the values onto the stack and pop them back off is what causes (besides the occasional stack overflow) the function call overhead. So use your subroutines wisely, and use C-like macros (#define) whenever possible, because those are simply a copy-and-paste of the code (so no overhead, well, except for the preprocessor); the code itself will be bigger but faster. And by all means try not to use recursion.
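A minimal sketch of that trade-off, assuming nothing beyond standard C (the names are hypothetical): the macro expands to the expression in place, so there is no stack traffic and no jump, while the function version goes through the call-and-return sequence described above unless the compiler decides to inline it.

    #include <stdio.h>

    #define SQUARE_MACRO(x) ((x) * (x))   /* textual copy-and-paste: no call at all   */

    static int square_func(int x) {       /* a real call: save, jump, compute, return */
        return x * x;
    }

    int main(void) {
        printf("%d %d\n", SQUARE_MACRO(5), square_func(5));
        return 0;
    }

(As a later comment points out, the macro has costs of its own, and modern optimizers mostly erase the call overhead anyway.)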

Nice plug for the board game Othello. Wasn’t ‘A minute to learn, a lifetime to master’ writ large across the box? Very ’80s.

Wrapping my head around the use of ‘gosub’ in BASIC was my very first challenge in learning about programming, also in the ’80s.

Nanu nanu!

I was just reading through some of the code for Pac-Man (http://cubeman.org/arcade-source/pacman.asm) and wishing it was written in something a bit more readable than assembly language.

@Scott Jackson: Look at what it says at the bottom of that page: “Disassembled 9289 instructions.” In other words, it wasn’t necessarily written in assembly.

Also: IMO assembly isn’t a language, it’s just an instruction set…

“So use your subroutines wisely, and use C-like macros (#define) whenever possible, because those are simply a copy-and-paste of the code (so no overhead, well, except for the preprocessor); the code itself will be bigger but faster. And by all means try not to use recursion.”

Unless you’re programming for embedded systems where clock cycles are still at a premium, you probably want to avoid macros unless you have a really, really good reason. The few clock cycles you gain by using macros instead of functions are no longer worth the loss of type and scope safety in most projects.
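A sketch of the kind of hazard being described; the MAX macro below is the classic textbook example, not code from this thread.

    #include <stdio.h>

    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    static int max_fn(int a, int b) { return a > b ? a : b; }

    int main(void) {
        int i = 4, j = 3;
        int m = MAX(i++, j);   /* i++ is evaluated twice: m is 5, not 4, and i ends at 6 */
        int n = max_fn(i, j);  /* a real function evaluates each argument exactly once   */
        printf("%d %d %d\n", m, i, n);
        return 0;
    }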

“Also: IMO assembly isn’t a language, it’s just an instruction set…”

It’s a low-level language. It allows you to express the instruction set in a somewhat human-readable manner, e.g. MOV EAX,DWORD PTR SS:[ESP+34h] instead of having to express it as a sequence of binary or hex digits representing opcodes, operands and offsets.

How about the invention of high level programming languages themselves??

Why “the routine”? Why not the microchip, which rendered buildings full of vacuum tubes obsolete? Why not the keyboard which made programming a lot easier compared to punch cards? Even if you restrict your choice to software: Why not the compiler (or even just the lexer)? – not having to write machine instructions directly is arguably a bigger improvement than getting rid of line numbers.

The point everyone in the comments seems to miss is that, without functions, none of these other things would have been possible. From the point of view of abstracting electronics into algorithms, being able to do useful things with computers, and building upon prior work, the routine is clearly the most important.

Of course, it’s a bit mistaken to give CS credit for functional algorithms. Mathematicians had been doing that for centuries; it just took us a little while to figure out how to express them in a way machines could use.

All this talk of hardware is silly.

Dijkstra said it best, as usual: “Computer science is no more about computers than astronomy is about telescopes.”