Code Access Security and Bitfrost

The One Laptop Per Child operating system features a new security model: Bitfrost. It's an interesting departure from the traditional UNIX and Linux security model.


This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2007/03/code-access-security-and-bitfrost.html

Bitfrost and CAS are good security models. The problem is not that they are too hard, it’s that they are not worth the effort. Having the operating system enforce them would *make* them worth the effort.

Most applications can be easily classified into categories such as “no network or file I/O”, “no network I/O, only file I/O to files in its own resource directory”, “network I/O on port N only, no file I/O”, and so on. A minimal amount of effort by the developers could go a very long way towards increasing security.
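The declaration could be as small as a dictionary per program. A hypothetical sketch of such a manifest, not any real OS or .NET format:

```python
# Hypothetical capability manifests -- invented names, just a sketch of
# how little a developer would need to declare per category.
MANIFESTS = {
    "solitaire.exe": {
        "network": None,              # no network I/O at all
        "filesystem": ["own-dir"],    # file I/O only in its own resource directory
    },
    "sync-tool.exe": {
        "network": {"ports": [443]},  # network I/O on port 443 only
        "filesystem": None,           # no file I/O
    },
}
```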

The reason developers don’t make the effort is that they feel they don’t have to, and indeed, they don’t. Users are so used to giving programs full access to everything. OLPC has a chance to make its users expect otherwise.

Bravo, Dave!

The last thing we need is more dialog boxes. I could rant for hours about the evils of dialog boxes. As programmers, we’re more apt than most to read a dialog box, but users simply DON’T read them. A dialog box with a message is worthless.

That’s the point. A dialog box isn’t going to make the computer safer. It’s going to irritate users, and then they’ll click “yes, please download the porn-spyware-trojan-horse.”

CAS can be a bit confusing. When should you add a security demand attribute to your code, and what exactly should go into your assembly info file? How do you explain to your users when they need a custom CAS policy for your application?

Confusing, but not beyond comprehension; at least not for most developers, if they took the time to understand the security framework. But, as stated by Stewie, they don’t make the effort because they feel they don’t have to. Most users will install the application and grant the thing full trust, even if it comes with an untrusted certificate.

If developers don’t have the motivation to understand the security model, what makes us think the average user will?

Garret: Right on. I call dialog boxes the frenemy of the programmer, the lazy man’s way of informing the user of trivial information. There are times when a dialog box is an absolute necessity, and I think under those circumstances the box should pop itself up and refuse dismissal for at least three full seconds, if not as many as ten, so the user can’t accidentally dismiss it with a stray keystroke or click.

Message (dialog) boxes are relied on by programmers far too much, to deliver information that is meaningless, trivial, or genuinely important, all in the same way. With the abuse of the message box, the user never knows when an application truly needs their attention or is just trying to call something to the fore. I especially blame malware, spyware, annoyingly bad programming habits, and idiot web programmers who think the JavaScript alert() function is for everything.

I’ll stop ranting, now. But I stick by my message: message boxes are the frenemy of programmers. Useful tool and baneful crutch.

It seems to me the OLPC project is being misused by someone eager to test out his or her theories on software design. What use is an OLPC laptop if it locks kids into a proprietary software stack, knowledge of which will not serve them at all in the rest of the world? Is this the political intention? As in: give ’em laptops, but better not make ’em too smart or they’ll take our jobs!

Sandboxes. [sarc]Didn’t Java solve this problem ages ago? Oh, then .NET solved it. Funny that it’s still an issue.[/sarc]

The problem is the tools. I, being a .NET programmer, can’t be bothered to figure out how to modify every single config file for every app I want to use. It’s all so awkward, so my sandbox is a VM instead. If I can’t be bothered, how is my mother going to know what to do?

Wish list.

  1. Programs should be able to describe, in a clear way, what they need.
  2. The OS (or runtime framework, whatever) should:
    a. clearly identify what sorts of things the program says it can and will do, in a nice GUI with wording my mother can understand.
    b. let users allow the program to do more than it originally requested.
    c. let users lock the program down to do less.
    d. stop and notify the user when the program attempts to do anything other than what it has been allowed to; a nice wizard that explains that allowing certain things could be harmful would be good as well.

I also want to say things like

  • ‘allow this program to communicate once per day to (their website) and limit to no more than 1k upload and 10k download’ (I’d also like stats and counters saved by the framework that let me see the activity of said program)
  • ‘allow this program to access files in their installation folder as well as files of type .log in the folder c:\gpslogs’

'We’re sorry, but the program “notavirus.exe” is attempting to perform an action that is prohibited in its manifest. It is attempting to communicate over HTTP with “www.downloadtrojans.com”. Would you like it to continue?

  • Yes / No / Just once.
    (click here for more detailed information)’
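Enforcing wishes like those would mostly be bookkeeping. A minimal sketch, assuming a hypothetical per-program manifest; none of these names are real Windows or .NET APIs:

```python
import time

# Hypothetical manifest with a daily network quota, in the spirit of the
# wish list above. Nothing here is a real OS API.
manifest = {
    "host": "example.com",       # the only endpoint the program may contact
    "max_upload_bytes": 1_024,   # 1k upload per day
    "max_download_bytes": 10_240,  # 10k download per day
}

usage = {"day": None, "up": 0, "down": 0}

def allow_transfer(host, up, down):
    """Return True if the OS should permit this transfer today."""
    today = time.strftime("%Y-%m-%d")
    if usage["day"] != today:          # reset the counters each day
        usage.update(day=today, up=0, down=0)
    if host != manifest["host"]:
        return False                   # not in the manifest: deny (or prompt)
    if usage["up"] + up > manifest["max_upload_bytes"]:
        return False
    if usage["down"] + down > manifest["max_download_bytes"]:
        return False
    usage["up"] += up                  # the counters double as the stats
    usage["down"] += down              # and activity log requested above
    return True
```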

Windows and other OSes are so far from this it’s not even funny. So instead I use a VM for testing and isolating programs, try to be careful, and reinstall everything yearly.

Agree with AC. CAS isn’t just extremely poorly explained by the MSDN Library and the standard .NET literature; it’s also virtually unsupported by the OS and the .NET loader. If your application has a CAS violation, the user doesn’t get any information to that effect, let alone a chance to change permissions. Instead he gets an incomprehensible error dialog, as if the application had crashed!

On top of that, .NET Framework methods might demand any sort of wacky, unpredictable security permissions, which of course aren’t documented anywhere. Hit one of them while running under less than full trust, and you get the incomprehensible crash dialog again. With a binary stack trace, because that’s so helpful to the user!

Also, while the permissions are numerous enough to be confusing, they aren’t granular enough to be useful. If your application lets the user create and manage files (not exactly a rare requirement), you have to grant global write permission on the file system.

Sure, you could restrict the permission to “My Documents”. But then any user who wishes to create a file in their own custom document folder would be unable to, and they would hate your application. You CANNOT restrict file manipulation to specific file types, which in Windows are identified only by an easily changed extension anyway.
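The dilemma in miniature (a hypothetical sketch, not actual CAS evaluation logic): a path-prefix grant is either too wide or too narrow, with nothing in between:

```python
import os

MY_DOCUMENTS = os.path.expanduser("~/Documents")

def can_write(path, scope):
    """Evaluate a write request under one of the two available grants."""
    if scope == "everything":
        return True                 # global write permission: works, but unsafe
    # scope == "my-documents": safe, but breaks custom document folders
    return os.path.abspath(path).startswith(MY_DOCUMENTS + os.sep)

print(can_write("/data/custom-docs/report.txt", "my-documents"))  # False -> angry user
```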

Bottom line: CAS is a feature that would work well only with comprehensive OS support which Windows does not provide. As it stands it’s fairly useless.

In the Linux ecosystem, where distribution is largely separated from development, this sort of system could be made to work.

Yes, the UNIX security model is archaic. Yet people understand how to program to it, and that’s more than you can say for its competitors. Attempts to bolt on a “better” security model have foundered: even though both Linux and Solaris have ACL support, I’ve yet to see a shop that actually uses it. SELinux, as pushed by Red Hat Linux, is the closest to being successful; it works pretty well if you just use the software RH gives you, but woe to you if you need to make changes to the security model.

This goes for Windows too. In theory, Windows (even before Vista) had a rich security model that could do more than the UNIX model. In practice, few people could figure out how to use it, so the usual answer to file permission problems is to give “full control” of a directory to everybody. The kind of privilege splitting that’s routine on UNIX (create a special user for each daemon) is unusual in the Windows world.

Not to rag on Windows, but it shows that security models have a complex ecology, and it can be hard to make something that really works in the real world.

Hazar: The author probably wrote it on March 20 and didn’t publish it until March 22.

I’ll tell you why I never used it, although I have spent, I believe, over ten hours total researching CAS: the fucken documentation SUCKS. Plus, there are extremely few books and other resources on CAS. It took me the longest time just to understand what the hell was going on. Am I asking for permission, or am I just declaring what I want to do? With CAS being so confusing, and with the lack of good documentation, using CAS is like jumping off a roof: it doesn’t seem like it should be hard from the ground, but when you get up there…

Captcha: orange. Always orange. Never peach.

Here’s my parallel: Why did Canada manage to make everyone move to one and two dollar coins when the US failed to do the same thing?

The answer: because in Canada, we didn’t get a choice. They just took the one and two dollar bills out of circulation and shredded them all. In the US no one HAD to switch, so they didn’t.

Why haven’t MS developers adopted the CAS security model? Because they don’t have to. And until they’re forced to, they won’t.

OLPC has the advantage of forcing people into a more complex model that, while it’s complex, will succeed because unlike CAS, it’s MANDATORY.

Ethan: “OLPC has the advantage of forcing people into a more complex model that, while it’s complex, will succeed because unlike CAS, it’s MANDATORY.”

It’s mandatory on an experimental, obscure system. Being mandatory on that might just force people AWAY from that system instead of forcing them to use the policy.

I believe the security should come at the OS level, not at the application level. The application developer should not have to do anything with regard to security. Let me repeat that… the application developer should not have to do anything with regard to security.

When an application is installed, it should be installed by the OS with a default set of permissions (perhaps defined in the installer for the application) that are displayed at install time to the system administrator, who can then check or uncheck permissions. If, while running, the application makes a request outside of its permissible actions, the OS should prompt the user, explaining what the application is trying to do, and then allow the user to enter the administrator password and choose to allow the application to perform the requested action once, or every time thereafter (it should also allow the admin to view all permissions and make changes immediately as needed). There are firewalls that do this to some extent for network access; I can’t imagine it would be impossible to do for file access as well.
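A minimal sketch of that install-time review, with invented names, since no mainstream OS worked this way at the time:

```python
# Hypothetical installer flow: defaults ship with the application, the
# administrator reviews them once, and the OS stores the approved set.
DEFAULTS = {"read-own-dir": True, "write-own-dir": True, "network": False}

def install(app_name, defaults):
    granted = {}
    for permission, on_by_default in defaults.items():
        prompt = "Y/n" if on_by_default else "y/N"
        answer = input(f"{app_name}: allow '{permission}'? [{prompt}] ").strip().lower()
        granted[permission] = (answer or ("y" if on_by_default else "n")) == "y"
    return granted  # the OS, not the application, persists and enforces this

permissions = install("notavirus.exe", DEFAULTS)
```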

Microsoft always has a tendency to overcomplicate matters for developers. I give you Passport as a good example… good idea, poor implementation, as OpenID is proving.

Well, the first mistake is making it “allow everything except…”

Security works best if you start from “deny everything except…”. Would be shorter for the developers as well (you only have to specify what you want your program to do, not everything that you don’t want it to do).

(For the Solitaire example: no one is going to add all the code to say “no, it can’t go on the network; no, it can’t use the microphone; no, it can’t delete your files; no, it can’t delete anyone else’s files,” and so on. But what if you just had to say “yes, I use the mouse, I use the keyboard, I use the monitor, and I use the sound-out,” and be done?)

What works is making things secure out of the box.
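In allowlist terms, the whole Solitaire declaration collapses to a few lines. A hypothetical sketch:

```python
# Default-deny: anything not listed is refused. The developer only has to
# write down what the program *does* use.
ALLOW = {"mouse", "keyboard", "monitor", "sound-out"}

def check(capability):
    if capability not in ALLOW:        # deny everything except...
        raise PermissionError(f"solitaire.exe may not use {capability!r}")

check("mouse")    # fine
check("network")  # raises PermissionError
```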

It boils down to who secures it. If the application developer is solely responsible for securing it, then Mr. B. Cracker will simply grant his application the permissions it needs to read your bank statements. Ultimately the system administrator has to approve the security settings the application needs. The best way for this to occur is for the OS to intercept requests and compare them against the application’s ACLs; if the application doesn’t have enough permissions, the user/administrator should be prompted to grant them. Of course, there also needs to be an option to say “never prompt me” for a given situation (i.e. don’t grant network access and don’t prompt me ever again about this application not having network access). Hopefully most of the needed permissions would be granted upon installation.
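Sketching that interception loop (a real OS would hook this in the kernel; every name here is invented):

```python
# Remembered decisions per (app, action): True = always allow,
# False = never ask again (deny silently).
decisions = {}

def intercept(app, action, acl):
    """Hypothetical OS hook, called on every request an app makes."""
    if action in acl:
        return True                      # covered by install-time grants
    if (app, action) in decisions:
        return decisions[(app, action)]  # "always" or "never prompt me again"
    answer = input(f"{app} wants to {action}. Allow [o]nce / [a]lways / [n]ever? ")
    if answer == "a":
        decisions[(app, action)] = True
    elif answer == "n":
        decisions[(app, action)] = False
    return answer in ("o", "a")
```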

Totally agree with you Phil. The only thing CAS should be doing, from a development perspective, is preventing code from blowing up because a permission hasn’t been set.

I really see it doing things like putting a little orange triangle next to a command (i.e. a menu item): if you click it, it tells you that, because of administrator-set policies, the following will not function or may prompt you.

I agree that it needs to be enforced at the operating system level.

One thing I’d REALLY like to see in Windows is the ability to mark a file as accessible ONLY by applications signed with a specific key. This would allow me to easily hide keys and passwords in my own files. How wonderful would it be to be able to create a file that could only be accessed by your own applications?
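One way to picture it (a sketch only; Windows has no such ACL entry): the file records the fingerprint of the one signing key allowed to open it, and the loader checks the requesting executable’s key against it:

```python
import hashlib

def key_fingerprint(public_key_bytes):
    return hashlib.sha256(public_key_bytes).hexdigest()

# Hypothetical extended attribute stored with the file by its owner.
FILE_ACL = {"secrets.dat": key_fingerprint(b"...my code-signing public key...")}

def may_open(path, requesting_app_key):
    required = FILE_ACL.get(path)
    # Files without the attribute behave normally; otherwise only code
    # signed with the matching key gets through.
    return required is None or key_fingerprint(requesting_app_key) == required
```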

I think a pretty simple but still good approach would be to describe the basic workflows that should happen in such a program. These workflows would basically be a regex, possibly with parallel characters, where the characters are interrupts the OS has to answer.

Example I: I have a program that reads a number of URLs out of a file in a subdirectory of my home directory, downloads those sites, and stores them in another subdirectory of my home directory.
Thus, a workflow would be:
1 or more times: (

  1. Read a file in $home/$subdir1
  2. Access the internet for 1 HTTP request (this can be capped in size, because we know it is an HTTP request and the size won’t be infinite)
  3. Write a file in $home/$subdir2
    )

Or, if we have a simple recorder:
1 or more times: (
1: GUI action
2: parallel { use microphone, write to $homedir }, 1 or more times
)

These workflows are known at implementation time (or should be), so stating them should not be that hard (and it would encourage proper planning first).

Or, with Solitaire:
1 or more times: (
1: GUI action
2: write to a Solitaire config file
)

Thus, a takeover should be a lot harder, because if the Solitaire program suddenly tries to access a file in the user’s home directory, the OS can say, “Hey, this is not stated in the current workflow, I will kill that program!”, and if the program tries to access the network, the OS says again, “Hey, this is not stated in the workflows, let’s terminate the thing.”

I think a solution like this should work pretty nicely for developers, because they know those workflows already, and it should be pretty easy to implement in a given OS, because you can build a DFA that evaluates the interrupts the program issues and maybe ends up in a “malicious” state. A sketch of that idea follows.
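A minimal sketch, with invented action names: the declared workflow becomes a little state machine, and any interrupt without a matching transition terminates the program.

```python
# The URL-downloader workflow from Example I as a tiny DFA. Each state maps
# permitted actions to the next state; a missing entry means "malicious".
WORKFLOW = {
    "start":     {"read:$home/$subdir1": "have-urls"},
    "have-urls": {"http-request": "have-page"},
    "have-page": {"write:$home/$subdir2": "start"},  # loops: 1 or more times
}

def run(actions):
    state = "start"
    for action in actions:
        if action not in WORKFLOW[state]:
            raise SystemExit(f"{action!r} is not in the declared workflow -- killing program")
        state = WORKFLOW[state][action]

run(["read:$home/$subdir1", "http-request", "write:$home/$subdir2"])  # completes
# run(["http-request"]) would be killed before any network access happens
```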

Regards