Learning on the Battlefield

I occasionally get emails from people asking how to prepare for a career in software development. Some are students wondering what classes they should take; others have been bitten by the programming bug and are considering their next steps.

This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2007/03/learning-on-the-battlefield.html

You learn from failure too. Arguably more so…



I’m currently going to school for a CS degree… however, it seems to be a fairly useless thing. I’ve done a lot of programming outside of school, and I feel that has helped me far more than school ever will. The classes I take seem pretty useless… I taught myself threading the summer before I took any classes on it. Even then, the class didn’t really go into any of the problems that come up when you actually use threads.
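For instance, here’s a minimal Python sketch (my own example, not anything from a course) of the classic lost-update race that bites people in practice but rarely gets class time: an unsynchronized read-modify-write on shared state versus the same loop guarded by a lock.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # Each `counter += 1` is a read, an add, and a write -- another
    # thread can interleave between those steps and increments get lost.
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    # Holding the lock makes the read-modify-write sequence atomic.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n=100_000, threads=4):
    global counter
    counter = 0
    ts = [threading.Thread(target=worker, args=(n,)) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter

print(run(safe_increment))  # always threads * n = 400000
```

The unsafe version may happen to produce the right answer on a lightly loaded machine, which is exactly why this class of bug is so hard to learn from a lecture.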

I had a job last summer working with databases… that probably taught me more than I’ll learn in my upcoming “Database Systems” class.

I’ll be starting my third year of classes in a few months… So far, I haven’t learned anything about decent development practices (source control? No one here knows it exists). Nor have I learned anything about program design, other than hearing about the advantages of object-oriented code on a continuous basis.

Schools are not taking the correct approach to things. At my school, students going for an Architecture degree spend a ton of time working with the latest tools in their field. They have people come in from industry to grade their projects. By the end of school, they will probably know what they are doing. In contrast, CS majors have two classes where Java/C++ is taught. These two classes are easily the majority of the programming that will be done in school… yet they only have students programming for 1.5 hrs/week. I’m no longer surprised when students who have only done what’s required of them in school have major problems understanding what’s going on in class, and problems with the pretty simple programs assigned to them.

If we stick with the battlefield analogy for a moment, I would claim that you had better be prepared for the battle. Entering the battlefield without proper preparation will only get you killed. If someone else relies on you winning the battle to survive, you might get them into trouble too. Don’t get me wrong, I strongly believe that experience is critical to improvement, but IMHO it is important to enter the “battlefield” with a solid foundation. You need to spend some time learning and exploring outside the battlefield to get to know your strengths and weaknesses. Once that is done, head for the field.

Kim, the consequences of software failure almost never involve loss of life, only loss of money. This is strikingly different from other engineering disciplines, such as the oft-quoted “bridge building” metaphor.

While I don’t think people should enter the work force unprepared, most of the nitty-gritty learning will be done on the job anyway. Military rigidity is wholly inappropriate for the fluid nature of software.

In software, you never stop learning how to learn. Some might argue that you have to run as fast as you can to merely stay in the same place…


“Learning how to learn is vital to being able to work, and that’s something a good university will teach you.”

I often find myself saying the exact same thing. What’s the point in learning everything about a certain subject? You’ll end up forgetting it in the end anyway. What hopefully sticks is some kind of “intuition” that guides you through similar problems in the future.

I’m not sure that university can make a programmer, but I do believe a good degree can instill good habits. I think degrees can also filter out those who aren’t inclined to work in the industry. That’s not to say there aren’t good programmers without degrees; the best programmer I’ve worked with didn’t have one.

Oddly enough, the best and worst programmers I know are the ones ‘bitten by the bug’. The best ones know how to channel what they have learnt, and have strived to learn through practice and exploration. The worst ones get idealistic about the small amount they have learnt, and will push that along (a lot like attacking someone with a blunt sword).

I think the balance requires a solid education and a large amount of trial and error - think of the samurai who trains but never fights, versus the samurai who charges into battle not knowing how to conduct himself.

Loved the article (as usual) but I don’t have a response.

Just wanted to point out that images aren’t showing up on any of the blog articles in Firefox… works fine in IE. Not sure if it’s just me, or if anyone else experiences this as well.

I have just finished my CS degree which I have been doing on and off for the last 10 years part time. For the last 5 of those years, I’ve been working full-time as well.

I have to say that almost nothing I have been taught directly at university has been of use to me in my professional life. Having said that, being at university exposed me to several different ways of thinking which I have found to be of use.

If you aren’t going to go to university, I think the best thing you can do is play with as many languages as you can. Learn the “weird” ones like Scheme, Prolog, Haskell, and Lisp. Pick up Python, Perl and Ruby. Try out a few different environments such as ASP.NET, CGI, Ruby on Rails, PHP, Windows Forms, Tcl/Tk, GTK and command-line interfaces. Each one will teach you something new about the way programmer intent can be expressed, or how to interact with your intended audience.

Never stop coding, never stop learning and try not to make the same mistake over and over. Being a programmer is about passion.

As an employer, I much much much prefer to hire people with at least 2 years of a comp.sci degree under their belt. That’s enough for them to have learnt the basic structures and algorithms. We’ve had people who were self-taught (a lot of them, as we initially favoured that), but we’ve found that it leads to some awkward blind spots. Far better that people have a formal grounding, then go out and learn the messier bits of reality.

In terms of office dynamics, it also helps to have a mix of people - the more varied the experiences of the programmers, the more they have to share with each other, which is good all round. We have programmers who were sysadmins, programmers who are formally taught, and programmers whose experience comes mostly from the field - and they all have a slightly different view on any given problem.

Overall, though, I would really strongly recommend against picking work over school, at least until you’re past the first year or two of uni. Oh, and from here, a uni degree means you can travel and find jobs - that’s not to be discounted lightly.

My recommendation is to learn another subject in addition to programming. A solid understanding of the problem domain goes a long way.

The main goal of a college education is to increase the mental maturity of students and to soak their minds in the basics of a subject. Although college may cover only the basics, remember that they are an important foundation for the rest of your life. Perry’s scheme of intellectual development is worth a read.

“It appears to me that software development is happening in industry, not in the universities.”

This is like saying that cars aren’t built by the materials science people. Damn right they aren’t.

Universities have a focus on research. The industry has a focus on producing products. The fundamental mistake people make is thinking that a university education is preparation for work. It isn’t, and it can’t be. In order to expose a student to everything in a software development company, it would have to be one.

What a university can do is allow you to widen your horizons. You will explore many areas of computer science - different languages, paradigms, and aspects of computers.

Many (but not all) of those who go straight to work end up with a narrower view of programming. For example, if they get a job as a PHP coder, then they will never ever learn anything except PHP, and when PHP is “out” and something else is “in”, well, they’re out of a job. (In the same way, many navel-gazing researchers forget that the wonderful new thing they have invented can’t be built in practice.)

Universities can provide you with breadth. They can keep you from being stuck in one small part of the programming world, and will let you see what may come out of the labs. That’s why you should stay in touch with the CS research community even after you’ve graduated.

Remember, the samurai who fought with swords were eventually slaughtered by the ones who got guns.

University = research
Software development company = commerce
Battlefield = you, commerce, research, social skills, coding, balancing work and free time, your family, life itself, …

For me it’s about solving real problems. By real I mean actually filling a need right now. In University, you tend either to solve non-real problems, or to solve very narrow slices of real problems.

Quite often in industry you find yourself taking a week to solve the core of a problem and 6 months to actually finish the application. This is because in the real world you have many other peripheral and non-functional requirements to deal with. The thing needs to be usable, fast, scalable, secure, deployable, upgradable, pretty, stable, profitable, maintainable, etc etc. The devil is in the details, so to speak. But solving these problems efficiently and cooperatively is the mark of a great developer (and development process).

Agree 100%. Baptism By Fire is the only way.

Get data structures and algorithms down in school. After that, you learn best by coding, and by being exposed to those with experience who code well. You’ll find such people most frequently in environments where code is moved to production quickly, and frequently.

I like this analogy. The point, to me, is that you cannot get better at writing AJAX-enabled web “applications” by reading about what others have done. I will tell you with 100% certainty that there are non-obvious problems you WILL encounter in any non-trivial application.

I was once tasked with writing an example application. The only requirement was that it “uses struts”. This was the only application I’ve worked on that I would consider an (almost) complete failure. Every time I tried to inject a functional requirement, we got tied up in architecture details (should it be an interface or an abstract base class? explain) and never actually delivered anything.

It was amazing to me that the “leads” on the team were 100% happy to sit around for 10 hours per day and talk about the reasons why one solution or another might be better, but actually refused to TRY anything. In my experience, if you don’t actually try to implement a design, you have no idea whether it’s going to work or not.

I ended up getting my job without a degree. Granted, I only make 50k (this is my first year there, though). And not having the CS degree has actually made things easier. The guys I work with who have them always seem to make things MUCH more complicated than they need to be… thus logging more hours… thus making more money… damn, is that what college taught them? :p

You mentioned CodeProject, but you forgot an important caveat: don’t actually use anything that you find on CodeProject. It may be alright for getting your own stuff rated, but due to the quality of most of the code there, Alex at Worse Than Failure had to make a special rule that code from that site not normally be featured on his site; otherwise he wouldn’t need any outside contributions for years.

I learnt a lot on the job as well as at uni. To say one replaces or supersedes the other is nonsense.

Get practical experience while in Uni, it’ll help you learn better.
Get theoretical experience while at work, it’ll help you code better.

Excesses of one lead to mental m@sturbation and analysis paralysis, excesses of the other lead to shoddy copy+paste code that lacks expression. Life’s a balance.