Vampires (Programmers) versus Werewolves (Sysadmins)

Heh, this is a very interesting question. I think it depends on why they need the access. If they need the ability to tell why applications are failing, etc., then I think they should be able to access the production systems to view that information, or if they need examples or sample data to see what is happening on a daily basis. Granted, I do not believe that they should have unlimited access all the time; but if there are issues going on with their application(s) and they need to see what is going on and why, then they need to have that access.

I currently work in a position where I am in a sysadmin/data-mining/developer role. Our development group programs the application that we use in-house to provide services to other companies. Our sysadmins control the systems that their applications reside on, and many times they/we have to defer to the developers to tell us why something might be happening.

In our line of work our developers are working on application-level (as opposed to web-application-level) work (Java and C). The main problem for us is emulating the situations that we have: 10 billion inserts/updates/deletes (that’s billion with a B for EACH of those SQL types) per day. These numbers are staggering and come in over a distributed system across the internet; what makes it very interesting is that we cannot duplicate this type of input. We don’t have the hardware, nor would we want to buy an extra 4-5k machines to duplicate this.
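Just to put that in perspective, here is a rough back-of-envelope sketch (my own arithmetic, assuming the load were spread evenly across the day, which it almost certainly isn’t):

```python
# Rough arithmetic for the scale described above; an even spread over the day is an assumption.
statements_per_day = 10_000_000_000        # ~10 billion, for EACH of insert/update/delete
seconds_per_day = 24 * 60 * 60             # 86,400

per_type_rate = statements_per_day / seconds_per_day
combined_rate = 3 * per_type_rate          # inserts + updates + deletes together

print(f"{per_type_rate:,.0f} statements/sec per type")    # ~115,741
print(f"{combined_rate:,.0f} statements/sec combined")     # ~347,222
```

Hundreds of thousands of statements per second is not something you reproduce on a spare test box.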

So in our environment, it is important for our developers to SEE how the changes that they have made or are going to make will affect the system’s performance. Even adding a single line (although this goes for almost any system, it seems like ours is especially sensitive) can cause drastic performance changes and even client-impacting changes. I’ve seen issues where someone made a change that they didn’t even think would be big, and in the code it wasn’t; but it changed the entire way we had to use the system from that point forward, and not in a good way.

This is, of course, my view being in the middle of all the different sections. I am a user of the system that I am a developer of and I am one of the people trying to keep it running on the system that is built by our sysadmins.

Alkampfer, you are dead right. The only reason developers need access to production is when they want to hack in last-minute changes for problems that weren’t found during testing and QA. If you have a proper process then these situations won’t arise.

I am a programmer who thinks that developers should NOT have access to production. Even beyond that, I think application deployment/patching should be done when developers are not even in the room. Let me defend this position with my reasons.

  1. The “handoff” to the sysadmin team is an important step that causes documentation to be not only written but USED. I can’t stress this enough. It is easy to “write” documentation, but the only way you can check whether your deployment checklist is correct is to have someone ELSE actually USE the documentation you wrote, without you sitting nearby to hand-hold… in most cases, the first try won’t work – but the documentation will get better (use a wiki!) as you go. I also recommend that on larger teams you have different members of the sysadmin team do the deploy, so you don’t end up with pockets of knowledge.

  2. Beyond the benefits of documentation and corporate knowledge, it places the responsibility and control with the person who gets the 4AM call when the site goes down. Developers simply put a patch or deploy “in the pipe” and the sysadmins can decide when to deploy that patch… they can do any extra testing they want; it puts control in the right hands.

  3. It creates a need for robust, realistic development environments, which is vital and leads to higher-quality products. Developers work in a good clone of production because they require it! It forces good behavior.

  4. It allows developers to not have to worry about what happens after the work is done. You develop the product, you put the lotion in the basket and lower it down to the sysadmins, and move on to the next product (or feature).

Basically, giving developers access to production is lazy, and it encourages bad documentation, poor testing, and binds them to a single environment they shouldn’t be thinking about… developers should live in development/beta environments, never in production. If you aren’t doing this, IMHO, you are doing it wrong.

I kind of like the “call for collaboration” at the end of the post, although I think it is also a matter of people’s attitude: you don’t necessarily HAVE TO give them more to do or they would be bored. If they get bored, they are doing something wrong. If they have more time on their hands, they should probably use it to study and grow… maybe learn a bit more about what the other guys are doing. Maybe that ITPro who is scared of scripting should try it and see that it won’t kill him… it might actually open his mind.
I generally agree with many of the comments up here: Brandon Black, as well as bdunbar, to name a couple.
You can’t be a good programmer if you don’t know how systems work, and you can’t be a good sysadmin if you can’t code a little (or a little more).
The slides mentioned by ejunker are also an awesome example.
Although you only have limited time in your days, you should be striving to increase your knowledge and know more of both worlds. Because, in fact, I think that the whole division between those two worlds has historically been a marketing artefact - a strategy to create audiences to “target” and to sell products and services to.
In the Unix world, you have “programs” that do complex and useful things which really are big Perl scripts - written by sysadmins, to solve system administration headaches.
In the Windows world the division is a lot more visible or frequent… but lately a lot more people have understood that those two audiences must talk to each other and understand each other more. Think of PowerShell, for example - I see it as a step in that direction.
bdunbar is also making a good point about the evolution to the cloud and the higher level of skills required.
The whole “division” between the two becomes a lot more blurry over time. Which is a good thing. Let’s hope it eventually fades away.

Sysadmins are programmers too. They might use different languages, and their aims and jobs might be different, but they are programmers nonetheless.

Even people who design computer hardware are programmers. Chip designers at Intel probably know more hard Computer Science than the majority of people on Stack Overflow.

The difference between so-called “programmers” and “sysadmins” is that their roles and aims differ. They both write new software, they both worry about performance and stability, and they both worry about security.

The ones performing sysadmin tasks, however, stake their reputation and paycheck on “the systems are up” and the “client-feature programmers” stake their reputation and paychecks on the wide adoption of their feature by business end users (business because of internal budget allocation, direct payment, or advertising revenue).

If you don’t think sysadmins are programmers, you’ve never worked with good ones. I say this as a client-feature developer who sysadmins his own systems (passably at best).

Manual-override.blogspot.com made some very good points.

When I started in IT in the 80s, the IT department had 5 people in it and we did the lot (we even laid the cables). So I found it odd to be isolated from the production systems when I worked in bigger companies with separation between SysAdmin and Development. Especially when I had to jump through tons of hoops to get a change made.

However, all the points made by Manual-override.blogspot.com are very valid. It’s better not to be able to hack away at the production system at 3am, as it prevents suspicion when something breaks, and any credible company should minimise the number of people who can access production data.

That said, it depends on the size of the company. Nowadays I tend to run my own systems (web sites), and work with small clients, and generally I don’t have any SysAdmins to control anything and I have to do it all myself. It gives me some freedom, true, but I miss the furry little blighters (and yes I have been coding all night).

I’m installing MySQL this weekend. :frowning:

I need to know who the zombies are :slight_smile:

I think most IT departments at companies do not let developers access production; instead they have to make a production request of some sort to get anything deployed. In principle, this is good.

However, the biggest issue is that QA on the code itself is often non-existent. People do functional testing but not code testing. In that case it doesn’t matter whether you directly change something in production or deploy it using a strict process; either way the code goes through.
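To illustrate the distinction with a minimal, hypothetical sketch (the function and its tests are made up, not from any real codebase): functional testing clicks through the application, while code testing exercises individual units directly, which gives a strict deployment process something concrete to gate on.

```python
# A made-up example of "code testing": unit tests that exercise one function's
# edge cases directly, rather than verifying behaviour by clicking through the UI.
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent, never below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return max(price * (1 - percent / 100), 0)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_boundaries(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)
        self.assertEqual(apply_discount(100.0, 100), 0.0)

    def test_rejects_bad_input(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```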

The Zombies are the users. They do not know or understand what goes into making their applications, nor do they care or want to understand it. They also keep reporting bugs even when it isn’t a bug, just something that they didn’t know worked in a certain manner. “Brains” becomes “bugs”.

Yup, Suroot is right that users fit the Zombie analogy in this ridiculous fantasy, because they don’t know anything other than what they are told and they, quite rightly, have no interest in our stuff either, other than how it helps them do their job in the day-to-day running of the business.

However, you have to expect them to report things they don’t understand, and it’s up to the professionals to be professional about it and fix it if it needs fixing, improve it if it can be improved, or identify and support enhanced training if that’s the resolution required. We have to accept that they don’t share our fascination with our art, and not take it personally if they conclude that something doesn’t work when in fact the problem is that they don’t understand it, which is probably due to a lack of training, or possibly because the system is too complex or not intuitive. Life would be easier without users, but then we wouldn’t get paid.

bdunbar, I apologize, I should’ve been more specific. I was positing the scenario of a company opting to host its enterprise systems in an external cloud. As cloud platforms mature, a lot of smaller companies might look to them as a way to control costs. Pushed to its logical conclusion, perhaps internal production environments will one day become an unnecessary liability.

My company migrated from SharePoint 2003 to 2007. We allocated 3 servers for it, spent a huge pile of money on licenses, and another pile to have our sysadmins and developers install, configure, and customize it. Now MS has come out with yet another version. I don’t relish going through this again.

But suppose the next version of SharePoint is implemented as a set of separate (but interoperable) services on Windows Azure. If companies could migrate to “SharePoint Azure”, we wouldn’t need nearly as much production hardware. And if the “Software-as-a-Service” model takes off (hosted externally), internally maintained production environments would slowly become a liability. Let somebody else worry about it. All that stuff was just a means to an end anyway. And if I can get there without it, so much the better.

I’m aware that sysadmins do a lot more than manage hardware. But the focus of this discussion is about the intersection between sysadmins and developers: the production environment. So my question is: Could external clouds some day make internal production environments obsolete? Would this eliminate one of the main intersections between sysadmins and developers?

Wonderfully said. I guess that puts me (being both sysadmin and dev) in the abomination category :slight_smile:

As a developer who also spends a lot of time helping out with deployment of software to different environments, I feel that in most cases developers should not have any access to production. If you can’t do a proper deployment on your identical acceptance environment, you should not deploy at all.
And if something goes really wrong (involving in-house code, which is extremely rare), I want the sysadmin to be there, and we should be looking at production together.

So if programmers are vampires, and sysadmins are werewolves, what are test engineers?

Forgotten about, I suppose.

I do agree, programmers are vampires! But I admit, I get along with our sysadmin quite well and he’s definitely a werewolf. Maybe it’s because in my first weeks at the company I was kind of a handyman for administration issues. But I’m not the kind of Daywalker that Vaida Dan described himself as…

While dev access to prod is always an interesting and lively debate, in many situations the question is moot, as governmental regulations such as Sarbanes-Oxley strictly limit developer access. As someone who has worked largely in the health care and financial industries, I, as a developer, have almost never had more than read-only access to production. And in many instances, I haven’t even had that, except for the occasional temporary access for a couple of hours here or there to troubleshoot a production problem that the admins couldn’t fix on their own.

Solved that one, I became both sysadmin and developer. Now the biggest problem is keeping myself out of production…

As a Developer (in terms of HR’s interpretation of my job description), I believe I should always have as much access to Prod as I need in order to fulfill my duties. I do a tremendous amount of troubleshooting of live code, and am expected to support an application’s behavior in a live environment, so I expect at least read-only access to the data and the logs.

Where things get particularly frustrating is when I’m “politically” responsible for an app, but a certain component is someone else’s “technical” responsibility, and he or she isn’t carrying the appropriate amount of weight. I hate to sound like A Jerk, but if you can’t do it better than I can, and you’re not inclined to learn how, then just let me do it.

This is, of course, a two-way street: if my app is a resource hog and requires much bigger iron than a better-designed app would, then an administrator should certainly be able to hold my feet to the fire. :slight_smile:

Why do developers need some sort of access to production machines?

I’ve worked in environments where they don’t and it was a miserable experience.

When the developer has no access to production they have to forward all of their requests to sysadmins or DBAs.

Trying to debug programs becomes almost impossible if you have to forward a request to a DBA to check a permission, set a permission, change a procedure, etc.

The developer inevitably ends up waiting for the sysadmin to complete the task before moving forward. The sysadmin gets upset with the developer for wasting his time.

I would end up waiting for days, or even weeks for the request to come back as the DBAs had work of their own to do as well.

Then after thinking it couldn’t possibly get any worse it did.

A new IT manager was hired who decided all requests had to be formally requested as a “change request” and be entered and tracked in a database.

Don’t want to give programmers access to the production system? Then fine, debug it yourself :slight_smile:

Funny how an article about sysadmins and programmers ends up with a few reader comments describing users as dumb, uneducated, and stupid. I guess it’s the only thing we both seem to agree on. It’s also the very thing we are both wrong about.

As for the article, as a programmer myself, I definitely agree that programmers shouldn’t have access to production servers. Ideally, in-house code should even be deployed on production servers like any other 3rd-party code, i.e. built and packaged. And any in-house project maintenance should follow the same procedures as 3rd-party code, through more or less formal downstream bug reporting.
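As a minimal sketch of what “built and packaged” could look like, assuming a Python shop (the project name inhouse_reports and everything in it is hypothetical): the in-house project ships its own packaging metadata, so sysadmins install a versioned artifact the same way they would any 3rd-party dependency instead of copying files onto the box.

```python
# setup.py -- hypothetical in-house project "inhouse_reports" (all names made up).
# Built into a versioned artifact (e.g. `python setup.py sdist` or `pip wheel .`)
# so sysadmins deploy it like any other 3rd-party package:
#   pip install inhouse_reports==1.4.2
from setuptools import setup, find_packages

setup(
    name="inhouse_reports",
    version="1.4.2",
    packages=find_packages(exclude=["tests*"]),
    install_requires=[
        "requests>=2.0",   # declared dependencies travel with the package
    ],
    entry_points={
        "console_scripts": [
            # ops gets a normal command on PATH, not a loose script to copy around
            "generate-reports=inhouse_reports.cli:main",
        ],
    },
)
```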

But lack of personnel, the usually high maintenance requirements of in-house code, and very short response-time requirements mean this “ideally” is rarely feasible. Programmers end up having to shortcut their way onto the production servers for faster identification of causes or implementation of patches/fixes, or sysadmins end up having to script their way to a responsive and functional program while they wait for a proper fix from the developers.

So, I look at Jeff’s post more as a warning. A gentle push. A reminder that, while not entirely possible under most circumstances, we still should always try to move to an environment where programmers stay away from production servers and sysadmins don’t mess with development/test servers. The sin is not in failing to accomplish this, but in getting used to it.