How Should We Teach Computer Science?

Greg Wilson recently emailed me the following question:

Algorithms and data structures, the very core of CS, power all of today’s top-notch software products and services. Without them, we would have no search engines that crawl billions of documents, no databases, no sophisticated network routing protocols, iPods, video games, or distributed computing. Basically, all that’s left is the logic that powers today’s crappy applications that your boss wants to outsource.

Without a basic four-year education in CS, you’re left with a college graduate who really has no hope of ever producing anything of merit, unless they’re the rare exception who is self-driven and has at least studied the fundamentals on their own.

What’s left is software engineering as you mention, which is quite important, but not to the detriment of a proper CS education. Just as a variety of other posters have mentioned, the university isn’t really a good place to learn this. A university will never replicate politics, the stress, the budgets, the variety of coworkers (all with different life goals), the customers, etc. All of which, in my humble opinion, form the majority of problems and issues you’ll face as a software engineer (see your blog a few entries back). Any attempt at a simulation will be just that. I can land a 747 in Microsoft’s Flight Simulator, but do you trust me to do the same with your life at stake?

As for technical topics like source control, buy your employees a good book, have them read it, discuss the concepts as a team, and try to relate the material to your current projects. The command-line switches and the like can be learned from a man page.

I really hope you post a follow up blog entry.

Well, I agree that there should be courses that teach source control, but it is not that hard to understand and learn on your own.

I think it is more critical to teach the programmer what the user thinks and wants.

You’re making a common mistake here. Computer science is not software engineering. Believe it or not, some people do want to learn the abstract for its own sake, and there is reason to learn it; that’s what computer science is. If you want to learn development, take up software engineering.

I personally believe that the best programmers understand things from the bottom up: starting with logic gates, microprocessors, and assembly language, and then moving to natively compiled high-level languages like C++ or D. If you have written a native Windows application in C or C++, programming a .NET Windows app is a walk in the park, and you know what’s going on under the hood if you need a P/Invoke to work around the framework’s limitations. After a few years of learning how everything works under the hood, I think the focus should shift to real-world application development, from the requirements-gathering and usability stages through coding, deployment, and documentation. The high-level languages are great, but a computer science program needs to start with the basics. It should most definitely end with real-world application development, including internships or mentoring programs. I think companies would love to bring in students at a low hourly rate for mentoring, because finding loyal, experienced help is really hard these days.

If there is one thing I wish they had taught me more of, it is human interaction. To get a feeling for what I mean, there is an excellent webtunnel.

I’ve done so many technically correct projects that still failed, due to a lack of attention to the human aspects.

Also, interaction is rarely learned when taught ex cathedra.

My 2 eurocents.

I’m on a computer games course, and I’ve found that nobody knows anything about versioning or repositories, and there seems to be no intention of teaching it either.

Imagine the look of horror on my group’s faces when I mentioned the word ‘Subversion’.

The ANU’s Bachelor of Software Engineering and Bachelor of Information Technology programs do this to a limited extent, but not for all assignments. In second year, there’s a project worked on in pairs which is submitted through SVN. Third year (and also fourth year for BSEng, who do the projects a second time but manage a team instead of just working in one) include a year-long team project which is all managed with SVN. The third/fourth year projects are sourced from groups in industry, which means that the final product had better be deployable if it’s to be useful.

College CS classes tend to be so dry and academic that you must spend your summers working in industry.

Today, with the advent of bachelor’s/master’s degrees and fees (in Germany), most people are earning money during their summers. It’s somewhat different at Fachhochschulen (more practice, more specialization). Furthermore, you have to consider something important: a college (in the USA) is not a university. Some colleges are comparable in quality to a university, but some are closer in level to a German Gymnasium! At the moment I’m pursuing a second ‘career’, or rather a second qualification (a B.Sc. in CS), so I’m confronted with these differences between the ‘traditional’ way of studying (Diplom, Magister) and the ‘new’ way (bachelor, master). You don’t have the time anymore for such things (but you have to do it somehow).

This is not to say that computer science programs should neglect theory.

Apart from practical considerations like those above, there are certain things that have to change. Computer science needs more bandwidth: more focus on ‘reality’ without losing its foundations.

Getting a literature degree doesn’t prepare you for a job in publishing. A chemistry degree doesn’t prepare you for a job at Glaxo. So why should a CS degree prepare you for a job in the software industry?

You’re confusing academia, which deals solely with the thirst for knowledge, with the vocational.

I think the major problem is that there are very rarely vocational training courses available for programmers starting a new job. You can bet that the chemists at Glaxo, particularly those taken on as graduates, are sent on extensive training courses on how to apply their knowledge and work with the company’s equipment. The programmers are probably shoved in the basement and told to get on with it. They’re expected to pick up SVN as they go.

I agree with Sauron?! here. Software Engineering and Computer Science are two very different subjects. I’m in the fortunate position of studying both, and I’ve been in the industry for two years, so these concepts aren’t new to me.

But even so, the Software Engineering course has us doing group projects, and we opted to use SourceSafe ourselves; we weren’t taught it. Deployment was also not taught, or was only briefly glanced at.

University courses seem to lean towards academic pursuits, not industry. If I wanted to learn programming, I would take a tech course.

Yeah, don’t call it computer science if what you’re really teaching is computer engineering. Us computer scientists get all uppity about that…

Seriously, I’ve taken probably 3/4 of the theoretical computer science classes that Caltech has to offer. One of the missing classes in that remaining 1/4 was algorithms, which I dropped a few weeks in, because it was just too alien to my thinking to be writing pseudocode as opposed to proving things about complexity classes and computability. Hmm, maybe I’m more of a “computer mathematician,” but still, my point stands.

(Wait, did I have a point? Now I’m not sure.)

I’m a computer science student, attending a local liberal arts college that just happens to have a decent CS department. While the department is pretty standard, they do have an absolute requirement that you intern at a company for at least a year before graduation.

In my case this presents some problems, since I already have a rather lucrative job in contract web development. I can program circles around most of the people in the course, but I’m still going to be forced to take time out of my well-paid and enjoyable contract work to work at an “official company”.

However annoying this may be, the college does do one thing right: they have one course per semester devoted to “special topics”–and that can be any serious topic the students vote for. This semester the course topic is C#/ASP.NET. Last semester, it was PHP. Every semester the course will be updated and a new topic picked, keeping it relevant and fresh.

What properties should a single line of code have, and what do those properties require of the programmer? At a minimum:

  • secure - the programmer should know about security, best practices, their environment, and other factors;
  • efficient - the programmer shouldn’t write an O(n^6) algorithm if they can help it;
  • readable - the programmer writing the code won’t be the only one to ever read it;
  • tested - the line has to be right, right?;
  • documented - some record should exist of who wrote the line and why;
  • commented - not the same as ‘documented’, since the ‘why’ here is different; and
  • styled - the programmer should follow best practices, use appropriate libraries, and follow convention.
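The ‘efficient’ and ‘tested’ bullets can be made concrete with a small sketch (the `has_duplicates` names here are hypothetical, invented purely for illustration):

```python
# Two ways to check a list for duplicates: an O(n^2) nested-loop
# version versus an O(n) version that remembers seen items in a set.

def has_duplicates_slow(items):
    """O(n^2): compares every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates(items):
    """O(n): set membership is O(1) on average."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

# The 'tested' property: a line has to be right, so check it.
assert has_duplicates([1, 2, 3]) is False
assert has_duplicates([1, 2, 2]) is True
assert has_duplicates_slow([1, 2, 2]) == has_duplicates([1, 2, 2])
```

Both functions answer the same question; only the second scales, which is the whole point of the ‘efficient’ bullet.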

From this perspective it takes both disciplines and lots of education to write a single line of correct, functioning code. But that’s only part of the problem, since after writing that line you have to worry about people, process, requirements, design, and constraints, as well as tools, languages, and environments.

I guess my core point is this: there’s a lot (~10 years worth of education) you have to know to write a single line of code, and there’s even more (taking the rest of your life) to know about dealing with the people that pay you for that line.

In the end, arguing over Computer Science versus Software Engineering seems silly to me, especially once you add in the people and management skills you need. It’s like chopping a cup in half and then arguing over which half is more important. You need both to get a drink!

Being in charge of a software engineering course, I have been wondering about the same question. In my experience, the key element is the “boredom factor” of the course.

I discussed this a while ago.

For example, it takes roughly 30 minutes to get started with revision control, but much more effort to learn all the good practices that come along with it. The teacher’s job is to open the path; students, if interested, will make their own way.

My 2cts on the question,

Universities are good at teaching abstract concepts and at evaluating whether those concepts have been learned. They are not so great at teaching the practical aspects of software engineering. However, everybody seems to observe this and think “Oh my god, universities are useless! We must make them better at teaching people exactly what it is like to work in industry.” I contend, however, that this is a waste of time.

Practice can be learned on the job, very effectively. Theory, however, is much harder to learn without lectures, workshops, and assessments, which universities are well suited to provide. If someone doesn’t get a strong grasp of the theory at university, they may never get it at all. If they don’t learn the practice but at least have the theory, they will certainly learn on the job. I believe that university courses should focus on what they can do well, the theory, and not the day-to-day experience of being a software engineer, for which they are particularly unsuited.

Source control is an example of something that, while indisputably critical to the practice of software engineering, is hard to teach at university, but easy to learn on the job. Hard to teach because you really need to see it working on a sizable project with many members and multiple branches and merges going on. Easy to learn on the job because there are no particularly difficult concepts involved, and no particular creativity is required in the process.

What I find much more important to teach is how to learn and figure out new things. Tell the students, “To solve today’s homework, you’ll have to find out how to do xxx. It’s not in the books or the scripts for this lesson. Good luck.” Because that’s how it works in real life. Better to learn how to Google and how to use your results.

Easy to learn [source control] on the job because there are no particularly difficult concepts involved, and no particular creativity is required in the process.

This has not been my experience when working with clients. Most programmers never fully grasp source control beyond the most absolute basic of “I’ve got a lock” concepts. I also think source control is a much deeper and far more complex subject than you allow.
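To the point that source control runs deeper than “I’ve got a lock”: even the core idea behind merging repays study. Here is a toy sketch of a three-way merge (the `merge3` function is hypothetical, and real version-control merges are far more involved, aligning lines rather than assuming all three versions are the same length):

```python
# Toy three-way merge over lines of text. For each position, compare
# the base version against 'ours' and 'theirs': keep agreed lines,
# take whichever side changed a line, and flag a conflict when both
# sides changed the same line differently.

def merge3(base, ours, theirs):
    """Return (merged_lines, conflict_indices)."""
    merged, conflicts = [], []
    for i, (b, o, t) in enumerate(zip(base, ours, theirs)):
        if o == t:            # both sides agree (or neither changed it)
            merged.append(o)
        elif o == b:          # only 'theirs' changed this line
            merged.append(t)
        elif t == b:          # only 'ours' changed this line
            merged.append(o)
        else:                 # both changed it differently: conflict
            merged.append(o)
            conflicts.append(i)
    return merged, conflicts

base   = ["a", "b", "c"]
ours   = ["a", "B", "c"]   # we edited line 2
theirs = ["a", "b", "C"]   # they edited line 3
merged, conflicts = merge3(base, ours, theirs)
# merged == ["a", "B", "C"], conflicts == []
```

Even this stripped-down version shows why merging is conceptually richer than lock-based thinking: two people edited the “same” file concurrently, and most of the time their work combines cleanly.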

I also dispute the idea that you can’t “work on a sizable project with many members and multiple branches and merges” in a university setting. Perhaps they could try having students contribute to an existing open source project of some kind, even in a small way?

I recently developed a very small app for a friend of mine in Visual C++. The app took me about 3 hours to create, debug, and test. It took me about 2 weeks to deploy.

I realised then that I’ve spent 3.5 years at university (2 separate courses) and never learnt how to deploy an application properly. What’s worse, there seems to be no great literature or step-by-step instructions on deployment! Why .NET doesn’t have a simple ‘deploy’ button is beyond me.

University is so behind the times. Recently doing an Internet Client Side computing, we were taught that there’s over 1,000,000 web pages. NO WAY!! THAT MANY?!

However I’m looking forward to this year - teams of 4-6 are to develop an application over the whole year for actual clients. Industry grade. All documentation, user-guides, manuals, charts etc. Let’s just hope they require source control and really teach us deployment while they’re at it! Oh oh, and finally… a Design Patterns subject! About time.

I agree with many of the comments here, in that the science of computing has nothing to do with competently writing software.

What the bejesus does “web hosting … source control system … angry bug reports from your users every hour” have to do with science? That all sounds like some kind of practical software writing to me. Having a firm grasp of the science of computation would certainly help to develop your algorithms, but source control, bug tracking, and all that gubbins are not part of the science. If that’s the way the student wants to go, they should take software engineering instead.

To paraphrase Dijkstra, calling it computer science is like calling surgery knife science, and gives completely the wrong idea. Perhaps it should have been called “Computational Mathematics”; take the emphasis away from this tool called a computer.