Code Access Security and Bitfrost

The fucken documentation SUCKS. Plus, there are extremely
few books and other resources on CAS.

I will note as an aside that if a feature is so hard to explain that both the docs AND third-party discussions are unclear, the fault lies to a great extent with the feature itself. A feature that’s extremely hard to explain is a feature with design flaws, including (as noted in the thread) being over-engineered. Stated another way: if you create a feature that you have trouble explaining to users, think about redesigning it so that you CAN explain it. Because if they don’t understand it, users won’t use it …

I’ve had an idea for OS-level security in mind for a while now that may or may not reflect what is currently being done. The basic idea builds on the current concept of a root user plus administrators, users and guests. The key change, however, is a central application installation process. This process vets all incoming applications against an approved-application database that holds file authentications such as MD5 checksums.

If the application does not match any known vetted application, or matches a vetted application but fails the checksums, the installer throws a warning that blocks any user-level operator from installing further, but allows an administrator or root-level user to continue if they so choose.
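A minimal sketch of the vetting step in Python (the database contents are purely illustrative; I said MD5 above, but a real deployment would want a signed database and probably a stronger hash like SHA-256):

import hashlib

# Hypothetical database of vetted applications: package name -> known-good
# MD5 digest. In a real system this would be signed and centrally updated.
VETTED_APPS = {
    "solitaire-1.0.tar.gz": "9e107d9d372bb6826bd81d3542a419d6",
}

def md5_of(path):
    # Compute the MD5 checksum of a file, reading in chunks.
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def vet_package(name, path, is_admin):
    # Return True if installation may proceed under the proposed policy.
    known = VETTED_APPS.get(name)
    if known is None:
        print(f"WARNING: {name} is not a known vetted application.")
    elif md5_of(path) != known:
        print(f"WARNING: {name} does not match its vetted checksum.")
    else:
        return True  # Known application and the checksum matches.
    # Unvetted or mismatched: only administrators/root may push on.
    if is_admin:
        return input("Install anyway? [y/N] ").strip().lower() == "y"
    print("Installation blocked for non-administrative users.")
    return False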

I guess I never understood why all the applications get to make up their own directories, their own install paths and run their own installers. Seems pretty insecure to me. You only ever install one thing at a time, so why not have a centralized installation process that already knows all the pertinent information about your system?

Anyhow, I’m sure there are loopholes and things I’m simply not considering, but on the surface it seems like a good idea.

Jae: I think VMS worked like that. Unfortunately we’ve had some “progress” in the meantime…

Aaron G: "I actually tried to set up all the CAS policy assertions on my application a while back after FxCop whined to me that they didn’t exist. But after a while I just gave up. […]

The only way CAS could ever work is if the permissions were automatically deduced from the source code - but then I guess that defeats the purpose, doesn’t it?"

Great post, completely agree. CAS is one of those features that were dreamt up without any thought whatsoever of real-world conditions and consequences. Nice intention, now please go back to the drawing board and design something useful! For example: deduce permissions automatically from the source code at install time – then check against those permissions at any future execution to protect against virus infection or runaway plug-ins. That might work.
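To make “deduce permissions automatically” concrete, here is a toy sketch; the module-to-permission mapping is entirely made up, and real deduction would have to go much deeper than import statements:

import ast

# Made-up mapping from imported modules to coarse-grained permissions.
MODULE_PERMISSIONS = {
    "socket": "network",
    "urllib": "network",
    "subprocess": "spawn-process",
    "shutil": "filesystem-write",
}

def deduce_permissions(source):
    # Naively deduce required permissions from a module's imports.
    needed = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module.split(".")[0]]
        else:
            continue
        needed.update(MODULE_PERMISSIONS[n] for n in names
                      if n in MODULE_PERMISSIONS)
    return needed

# Record this set at install time; at each later execution, deduce again and
# refuse to run if the program suddenly wants more (virus, runaway plug-in).
print(deduce_permissions("import socket\nimport shutil\n"))
# -> {'network', 'filesystem-write'} (set order may vary)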

Despite a few variations over time, ‘rwx’ permissions have stuck because of their simplicity.

So, why can’t it just be something like this for program access?
Just, instead of “user, group, others,” try “owner, local, network.”
“Owner” for the current user’s files, “Local” for other files on the machine, and “network” is fairly obvious.

And, for enforcing this, have it assume “no access” if it isn’t specified.
Maybe even give the user the option to override this during installation.
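Here is a toy model of that triplet with default-deny, assuming made-up names throughout:

from enum import Flag, auto

class Access(Flag):
    NONE = 0
    READ = auto()
    WRITE = auto()
    EXECUTE = auto()

class ProgramPermissions:
    # Mirrors "user, group, others" rwx, but as "owner, local, network".
    def __init__(self, owner=Access.NONE, local=Access.NONE, network=Access.NONE):
        # Anything not specified defaults to no access.
        self.scopes = {"owner": owner, "local": local, "network": network}

    def allows(self, scope, access):
        return access in self.scopes.get(scope, Access.NONE)

# Example: a text editor may read/write the current user's files, read other
# local files, and touch nothing on the network.
editor = ProgramPermissions(owner=Access.READ | Access.WRITE, local=Access.READ)
assert editor.allows("owner", Access.WRITE)
assert not editor.allows("network", Access.READ)   # default: no access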

I’m sure it’s possible to implement, considering it would probably be a few steps back, in complexity, from the current system. I just don’t know how open MS is to accepting an idea with such a strong Unix/Linux base.

Why does this have the date of March 20 when it was posted (AFAIK) today, the 22nd?

I quite like the blog, keep it up :)

The example is a bit too simplistic for my taste.

Of course Solitaire doesn’t need access to much of anything. However, every other non-trivial application will need some kind of I/O to disk and/or the network to be of use. So once such an application is compromised, an attacker can do everything he needs to do to gain further access to the system.

In this security model, there will be dozens of vulnerable applications for each locked-down solitaire.

On the subject of partial trust and sandboxing, especially in .NET, OWASP member and security guy Dinis Cruz has been tilting at windmills about this for quite some time. There’s some fascinating (and depressing) reading to be found in the archives of the WEB SECURITY and SECURE CODING mailing lists, where he discusses some of these issues and attempts to get Microsoft to notice.

I agree completely with Aaron G and Dave.

I think the problem is that we’re trying to define “good” behavior as a set of rules. What we really want to say is:

Allow: Good
Deny: Evil

This is what all holy books have tried for the past 6000 or so years, and from what I understand, religious scholars are still debating on whether some actions are permitted or not.

Plus, what all these schemes ignore is the fact that if the user comes to a website that says “See celebrity NUDE!” and decides that yes, I want to see that, then it doesn’t matter if a dialog box pops up that says “The application watch_celeb_nude.exe wants full trust”. The click on “Allow” will come immediately.

We’re stuck with either inflexible rules (don’t allow the user to allow or deny permissions) or presenting the user with questions that they can’t really answer. (“Permit read access to file \Winnt\config\asdhfjk2.log?” - what will a regular user make of that?)

Hi folks,

there’s a whole lot of misunderstanding, disinformation and nonsense in this thread. Let me see what I can clear up.

John Nilsson writes:
This sounds like it would be trivial to implement through
a simple chroot jail.

It’s not. Go down the list of protection mechanisms offered by Bitfrost, and you’ll find you can recreate very few with chroot() alone.
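For the skeptical, this is essentially everything a chroot jail gives you (Unix only, and it must run as root):

import os

# Confine the process's view of the filesystem to /var/jail. That is all
# chroot() does: it will not stop the jailed program from opening network
# sockets, spying on other processes, hogging CPU and RAM, or grabbing the
# camera and microphone, all of which Bitfrost addresses separately.
os.chroot("/var/jail")
os.chdir("/")
# ...drop privileges here, then exec the untrusted program...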

Rixor writes:
Of course Solitaire doesn’t need access to much of anything.
However, every other non-trivial application will need some
kind of I/O to disk and/or the network to be of use. So once
such an application is compromised, an attacker can do
everything he needs to do to gain further access to the system.

Of course not. Every application is granted the ability to perform disk I/O, but applications can’t access user documents arbitrarily. Furthermore, there’s a big difference between applications needing network access, and applications needing to act as servers. There are comparatively very few of the latter, and it’s only if you exploit one of those that you can create an avenue for re-entry to the system. Bitfrost won’t let a non-server application bind and listen to a port.
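As a rough illustration of the server restriction, here is a toy wrapper; the real enforcement lives in the OS, and none of these names come from the spec:

import socket

APP_PERMISSIONS = {"network-client"}   # this app never declared "network-server"

class GuardedSocket(socket.socket):
    def bind(self, address):
        # Outbound connections are untouched; only acting as a server is gated.
        if "network-server" not in APP_PERMISSIONS:
            raise PermissionError("application is not permitted to act as a server")
        return super().bind(address)

s = GuardedSocket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("0.0.0.0", 8080))   # trying to listen for re-entry: denied
except PermissionError as e:
    print(e)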

Stewie writes:
Bitfrost and CAS are good security models. The problem is not
that they are too hard, it’s that they are not worth the effort.
Having the operating system enforce them would -make- them worth
the effort.

Bitfrost permissions are enforced by the OS.

Tim Binkley-Jones writes:
If developers don’t have the motivation to understand the
security model, what makes us think the average user will?

Bitfrost makes it exceedingly simple for the developer, for one thing. For another, if developers aren’t willing to invest the few minutes of effort to specify the permissions their application requires, the application simply won’t have that functionality available without the user explicitly granting it through the security center; the user will not be prompted to grant missing permissions at runtime.
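To illustrate (the manifest format and permission names are invented for this example, not Bitfrost’s actual syntax):

MANIFEST = {
    "name": "word-processor",
    "permissions": ["documents-read", "documents-write"],
    # No "network" entry: the developer never asked for it.
}

def require(permission):
    if permission not in MANIFEST["permissions"]:
        # No dialog box, no runtime prompt: the feature is simply unavailable
        # until the user grants the permission in the security center.
        raise PermissionError(f"{MANIFEST['name']} did not declare {permission!r}")

def check_for_updates():
    require("network")

try:
    check_for_updates()
except PermissionError as e:
    print(e)   # functionality silently unavailable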

AC writes:
The problem is the tools. I, being a .net programmer, can’t be
asked to figure out how to modify every single config file for
every app I want to use.

Within its own VM, an application is free to modify whichever configuration file it wants; the system will not get in the way.

Stunningly, AC then proceeds to say:
Wish list:

  1. Programs should be able to describe, in a clear way,
    what they need.
  2. The OS (or runtime framework, whatever) should:
    a. identify what sorts of things the program says it can
    and will do, in a nice GUI my mother can understand.
    b. let users allow the program to do more than originally
    requested.
    c. let users lock the program down to do less.
    d. stop and notify the user when the program attempts to do
    anything it has not been allowed to.

This is almost EXACTLY what Bitfrost is doing! I’m not sure where the disconnect lies; either people aren’t reading the spec and are just commenting for the sake of commenting, or the spec isn’t sufficiently clear, and people should use one of the many avenues (my e-mail address in the spec, the public security@ mailing list at OLPC, or the OLPC wiki) to explain how to make it clearer.
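For what it’s worth, AC’s wish list reduces to a few lines; every name below is illustrative:

declared     = {"documents-read", "documents-write"}   # item 1: what it needs
user_granted = {"network"}                             # item 2b: allow more
user_revoked = {"documents-write"}                     # item 2c: lock down

effective = (declared | user_granted) - user_revoked

def attempt(action):
    # Item 2d: stop and notify on anything outside the effective set.
    if action not in effective:
        print(f"BLOCKED: program attempted {action!r}; notifying user.")
        return False
    return True

attempt("documents-read")    # allowed: declared and not revoked
attempt("documents-write")   # blocked: the user locked it down
attempt("network")           # allowed: the user granted it explicitly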

Chris Nahr:
If your application allows the user to create and manage files
– not exactly a rare request – you have to grant global write
permission on the file system.

Of course you don’t. You might wish to read the spec more carefully; this is explained.

Dave Solomon writes:
If we’re shipping responsibility to decide what permissions an
app needs to function off to the user, we’re screwed.

I agree, and the entire point of Bitfrost is to not weasel off any responsibility to the user.

Matt writes:
One thing I’d REALLY like to see in Windows is the ability to
mark a file as accessible ONLY by applications signed with a
specific key.

Bitfrost lets you accomplish the same end goal via the document store protection mechanism.
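Roughly, the end result looks like this toy sketch; the fingerprint scheme and function names are mine for illustration, not the spec’s:

import hashlib

document_acl = {}   # path -> fingerprint of the signing key allowed access

def fingerprint(public_key_bytes):
    return hashlib.sha256(public_key_bytes).hexdigest()

def store_document(path, data, app_key):
    # Remember which signing key created the document.
    document_acl[path] = fingerprint(app_key)
    with open(path, "wb") as f:
        f.write(data)

def open_document(path, app_key):
    # Only applications presenting the same signing key may open it.
    if document_acl.get(path) != fingerprint(app_key):
        raise PermissionError("document is locked to a different signing key")
    return open(path, "rb")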

Garret writes:
The last thing we need is more dialog boxes. I could rant for
hours about the evils of dialog boxes. As programmers, we’re
more apt than most to read a dialog box, but users simply
DON’T read them. A dialog box with a message is worthless.

Well said! For a second, I thought you were quoting the Bitfrost spec, since it similarly derides dialog boxes and puts a lot of work into getting rid of them completely in the security context. Read the spec.


All in all, from what I can tell, this devolved into a bitterfest about CAS, not constructive criticism of Bitfrost. If people would like to offer the latter, they have a number of ways to do so, and I’m more than willing to listen and discuss.

Cheers,
Ivan
Bitfrost lead architect.

Ivan,

thank you for your reply. Wish you best of luck with Bitfrost and future development on the OLPC platform.

Actually, the Symbian capabilities model is sort of along these lines. In order to do stuff, a program (specifically, an EXE or DLL) needs to have the right capabilities. So, for example, a Solitaire equivalent would lack the capability to do network access, or mess with arbitrary files. In order to get capabilities, you need your installation file to be signed (by Symbian). In extreme cases (for capabilities that grant Astounding God-Like Powers) you need the phone manufacturer (Nokia or Sony Ericsson) to do the signing.

Of course, the flip side of this is that the platform is tightly controlled by the OS and hardware vendors, though it’s not actually that difficult or costly to get an application signed with reasonable capabilities.
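A toy model of those signing tiers; the capability names are illustrative rather than the real Symbian set:

TIERS = {
    "unsigned":     set(),
    "symbian":      {"NetworkServices", "ReadUserData", "WriteUserData"},
    "manufacturer": {"NetworkServices", "ReadUserData", "WriteUserData",
                     "AllFiles", "DeviceManagement"},   # god-like powers
}

def grantable(requested, signer):
    denied = set(requested) - TIERS.get(signer, set())
    if denied:
        raise PermissionError(f"capabilities {sorted(denied)} require a higher signer")
    return set(requested)

grantable({"NetworkServices"}, "symbian")    # fine with a Symbian signature
try:
    grantable({"AllFiles"}, "symbian")       # needs the manufacturer's signature
except PermissionError as e:
    print(e)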

“For another, if developers aren’t willing to invest the few minutes of effort to specify the permissions their application requires, the application simply won’t have that functionality available without the user explicitly granting it through the security center; the user will not be prompted to grant missing permissions at runtime.”

Glad they got that the right way around. I think the biggest issue with .NET CAS is that if you don’t bother specifying any permissions, you get all permissions by default. That gives the developer very little incentive to specify their permissions.

I wonder whether, if Microsoft were developing .NET now (after their big security push), they would still set CAS up to be open by default.

Ivan: “Of course you don’t. You might wish to read the spec more carefully; this is explained.”

What spec? Do you realize that I was talking about .NET security permissions, specifically the FileIOPermission class? The MSDN Library only lists options for setting permissions on specific absolute directory paths, which is useless for applications that allow users to create and manage their own files – like I said.

“All in all, from what I can tell, this devolved into a bitterfest about CAS, not constructive criticism of Bitfrost.”

Surprise – people were talking about .NET CAS, which is much more widely known than Bitfrost. And given the prima donna attitude you’re putting on show here, it will probably stay that way.

Chris: I misread your comment, sorry; any prima donna attitude you detect is fueled more by a multi-day lack of sleep than anything else :)

Anyway, I interpreted Jeff’s post to be primarily about Bitfrost, with CAS being used as an example of why Jeff expects it to fail. This could be an incorrect interpretation on my part, but addressing your comment in this context, unlike CAS, Bitfrost doesn’t require you to give global permissions to applications in any but the rarest of cases. Cheers, Ivan.

How about having the developer’s framework force them to make choices about what access their application needs? It would not allow global access, but would force them to choose each and every conceivable kind of access their application will need.
system.security.permissions.AccessRequiresAppPath = True
system.security.permissions.AccessRequiresUserDocuments = True
system.security.permissions.AccessRequiresUserRegistry = True
system.security.permissions.AccessRequiresHTTP = True
system.security.permissions.AccessRequiresFTP = True
system.security.permissions.AccessRequiresNone = True

system.security.permissions.AccessRequiresGlobal - doesn’t exist

Then, during the standard application installation wizard, after you choose what directory to install to, it gives a list of all access types, with the application’s requested access checked off. Most users will leave it alone, but it would at least let other people see what the developers planned.

CAS (and UAC, and other security mechanisms) are complex because fine-grained security is complex. That just seems to be the nature of the beast.

Now you could move to a more coarse-grained security model, which would be easier to understand and implement. But then the trade-offs between security, convenience, and backward compatibility will become even more apparent to both developers and users.

Although I must admit that MS screws itself because the tools for working and interacting with CAS/UAC/etc are truly and completely brain-dead.

Ivan,

thank you for your comment. Good to talk to someone who has spent significantly more time thinking about this than myself.

The only question I have is how the complete security model works: In Bitfrost, the application must declare what permissions it requires, and based on those declared (and enforced) permissions, the user can choose to install / run the application or not. (If I understand this correctly.)

My question is then - how does the user know that the application doesn’t do anything naughty with the resources it has access to? For example, suppose a cryptography program requires net access (to download public keys from a keyserver) and file access (to sign and encrypt files). What is to keep the crypto program from not just signing the sensitive files, but also uploading them to a server where its author can collect them?

Is this where I just have to trust the program?

tcliu: You’re welcome. Out of the box, Bitfrost will make it so that the program can only sign or encrypt those files you ask it to while running the program; it can’t go arbitrarily opening up your documents without your permission. Past that, if the program does request network access, then yes, it can upload the files you explicitly let it open (but no others) to a rogue server.

I should say that developers of this kind of software will be under strong pressure – from OLPC and the community – to request finer-grained permissions. For instance, if the extent of your program’s desired network access is talking to a PGP keyserver, the developer can easily have Bitfrost enforce communication only with a specific set of servers on specific ports (unless the user authorizes some other keyserver). If this pressure works, that’s great; everyone will be more secure for it. But I’m not depending on it: even without it, I believe the out-of-the-box experience will be quite a bit more secure than mainstream desktops. Cheers, Ivan.
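To make the finer-grained grant concrete, a toy sketch; the endpoint list and the check are illustrative, not Bitfrost syntax:

# The program declares exactly which servers it talks to. 11371 is the
# standard HKP keyserver port; the hostname is made up for the example.
ALLOWED_ENDPOINTS = {
    ("keyserver.example.org", 11371),
}

def check_connect(host, port):
    if (host, port) not in ALLOWED_ENDPOINTS:
        raise PermissionError(f"network access to {host}:{port} not declared")

check_connect("keyserver.example.org", 11371)   # declared: allowed
try:
    check_connect("evil.example.org", 80)       # undeclared: refused
except PermissionError as e:
    print(e)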

As long as the user (owner of the machine) can do whatever they want, the PC will only be as secure as the user is informed. It does not matter what the developer or the OS does: the user (owner) will always have the ability to override a permission. This is exactly why, sooner or later, all software applications will live in a secure hosted online environment. Has anyone here created a spreadsheet or document online using Google’s web apps? This is the future, my friends; get used to it. The days of installing programs on your computer are numbered. When users no longer need to ‘install’ applications on their computer, the current models will work just fine. And NO, I do not use IE.

The concept of an “operating system” that is supposed to “manage” the collected resources and services that most people call “the computer” is getting in the way here. Has it escaped everyone’s notice that the most popular devices of the last century all had basically boot loaders with no real OS at all? PDAs, game boxes, Macs up to System 7, PCs up to Windows 98. We’re shifting to over-featured phones now and calling what they run an “OS” is really stretching the term. And the assertion that OLPC itself, as an institution, is going to take no responsibility whatsoever for tracking and responding to threats, is mind-boggling considering what could happen to the users involved.

Strict execution control could work if all executions of all programs were logged the first time they happened, and anomalies detected in later runs (say, a single instance of a program with the same name as another program doing radically different things, like downloading a lot of unfamiliar content). What would that take? An automatic upload of all the execution logs once per day from each machine? With a billion users that would cost less to track than any conceivable follow-up study that relied on watching the kids working, or some more complex protocol of getting their permission – if they even know what that means. Privacy issues can be dealt with by obscuring the identities and scrambling them before they leave the country. It’s not necessarily any more intrusive than anti-virus systems.
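A rough sketch of the mechanics; every field name and the salting scheme are purely illustrative:

import hashlib
import json
import time

SALT = b"per-country-secret"   # scrambling key that never leaves the country

def log_execution(log, user_id, program, bytes_downloaded):
    # Obscure the identity before it is ever written down.
    log.append({
        "user": hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16],
        "program": program,
        "downloaded": bytes_downloaded,
        "ts": int(time.time()),
    })

def daily_upload(log):
    payload = json.dumps(log)
    # ...send payload to the collection servers, where anomaly detection can
    # flag, say, a "solitaire" that suddenly downloads unfamiliar things...
    log.clear()
    return payload

log = []
log_execution(log, "student-4711", "solitaire", bytes_downloaded=0)
daily_upload(log)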

In other words, this is a problem for systems administrators, not for programmers. And those systems administrators have the information they need to make decisions only if they have some access to the usage records for that application, as it’s executed elsewhere. The information is so useful for other purposes (figuring out which of many programs should get a major upgrade, which should the kids be trained on, figuring out if there are particular patterns of abuse or educationally-inappropriate use developing) it’s hard to imagine any objection outweighing that.

As for the OS, well, as Alan Kay said himself, it’s a collection of tools and utilities that didn’t fit in the programming language. In other words, there shouldn’t be one. Having an OS to communicate with on the box adds an additional level of absurd complications.

When there’s a problem, remember, the data has already (according to the spec itself) been stored elsewhere. There’s no personal data of any kind on the particular OLPC having the problem. And if its usage profile is known, then it’s relatively easy to know which boot image to restore. How long will it take to swap boot images and get a copy of the user’s data to that machine? Not long. And this capability makes it easy to give someone a new OLPC and then give someone else their old one. Tying users to the hardware only makes sense if they really assume they are going to fail and the hardware will not become ubiquitous. If the project succeeded and OLPCs were as common as, say, pencils and blackboards, it would make far more sense to gather up and redistribute old ones to new users while more sophisticated users got new ones with capabilities they needed. In which case, the data and boot image is what matters, not what particular OLPC you run it on. This also works much better when OLPCs are shared or in library settings, which (whether its founders admit it or not) is more likely as a deployment scenario initially.

While there’s nothing wrong in principle with a PKI infrastructure to make sure hardware is in fact equitably distributed to the persons who it was intended for, and nothing wrong with relying on peer to peer interactions to do the vast majority of the software monitoring and administration, I see no evidence that Bitfrost or the access control lists could eventually be passed on to the users themselves to administer. Starting with a centralized model that can easily be decentralized for each country, region or village that wants to learn how to administer and monitor the OLPC network itself, seems wiser to me than assuming that the programmers, children and schoolteachers are fit to handle the many scenarios of failure that could arise.

In other words, there’s a role for experts who monitor events from a technologists’ perspective, and respond to those events in real time, rather than anticipate them (as a programmer does) or interpret them pedagogically (as a teacher does) or as their own fault (as a kid does). Trying to rule systems administration out of the picture is like letting the Norse gods get fat and slack and forget to guard the bridge, which they did. I suggest a systems administrator interface called Heimdall (the Norse guard of the bridge) needs to be defined.

[http://www.mainlesson.com/display.php?author=keary&book=asgard&story=children]

“Heimdall was a tall, white Van, with golden teeth, and a wonderful horn, called the Giallar Horn, which he generally kept hidden under the tree Yggdrasil; but when he blew it the sound went out into all worlds…[he said]… the son of nine sisters am I. Born in the beginning of time, at the boundaries of the earth, I was fed on the strength of the earth and the cold sea. My training, moreover, was so perfect, that I now need no more sleep than a bird. I can see for a hundred miles around me as well by night as by day; I can hear the grass growing and the wool on the backs of sheep. I can blow mightily my horn Giallar, and I for ever guard the tremulous bridge-head against monsters, giants, iron witches, and dwarfs.” (Note no mention of trolls. Evidently this is a social and not a security problem!)

Now THAT sounds like a sysop. Especially the “blow mightily” part!