Understanding User and Kernel Mode

Please, let’s not cargo-cult the idea that exceptions are /always/ bad. As with any other technique, if performance is critical you should benchmark.

For language exceptions the performance costs break down into two broad categories:

  1. The cost of setting things up so it’s possible to throw an exception.

  2. The cost of actually throwing an exception.

The first cost may be unavoidable - it depends on the language. For example, some C++ implementations generate slightly faster function entry code if the compiler knows that it won’t subsequently have to unwind the stack due to an exception. Higher-level languages tend to incur the ‘being able to handle an exception’ cost whether you actually throw an exception or not - and in general that cost will be small compared to whatever else the language is doing.

Actually throwing an exception will also take some time - how much depends on the implementation.

Measure then decide. Don’t cargo cult.

Jeff,

I’m not buying it. A one-liner in a blog post that says “Probably take a trip through the OS kernel. Often take a hardware exception.” isn’t enough proof to say why exceptions are so slow, especially when the rest of the things that definitely happen explain almost all of the performance issues with exceptions.

Yeah, you shouldn’t use exceptions in performance-intensive code, that we know. Perhaps you should trace the entire execution of a few types of exceptions (C#, C++, SEH, hardware) to actually see what happens.

So… you think that exceptions in .NET involve the CPU executing in kernel mode?

I’m not sure, but it seems reasonable enough to be true; possibly some of the .NET libraries are used in low-level OS operations. Is this true?

This is what we extra-crashy-code-writing programmers like to call “progress”.

Haha! And to celebrate progress, I’ll write a couple (more) crashing bugs this afternoon :slight_smile:

I believe he’s talking about Win32 exceptions.

Jeff, just to be clear: not all language exceptions require user/kernel mode transitions. There are three somewhat confusing ideas of “exceptions” here:

-Exceptions are a language feature in certain programming languages such as C++, C#, Java, and many scripting languages, for transferring control and triggering automatic stack unwinding. In general there’s no need for the kernel to get involved when a thrown exception is caught by a try/catch block; the program can just save some information and jump to the handling code. The slowness of throwing and handling exceptions generally comes from implementations that trade off the speed of entering a try/catch/finally block against the speed of looking for exception handlers and handling a thrown exception.

-Exceptions (more generally, structured exception handling or SEH) are a Windows operating system feature. They allow structures mirroring some languages’ try/catch or try/finally blocks to be used to handle things like page faults or memory access violations, which are raised from kernel mode, and also to handle application-raised conditions if desired. The exception handling blocks can be nested on the stack as deeply as you like. It’s not necessary to implement language exceptions using SEH, but you can. Visual C++ does. Visual C++ and the .NET CLR also translate certain kernel-raised SEH exceptions into language exceptions that can be detected and handled by try/catch blocks. For example, the .NET NullReferenceException is sometimes raised like this. Obviously, kernel-raised exceptions originate from kernel mode and need a mode transition, but user-raised SEH exceptions don’t: see http://www.nynaeve.net/?p=201 for an explanation of how this all works. Other operating systems use different mechanisms to communicate back to user mode; for example, Unix-like systems use “signals” for this purpose.

-Exceptions are the name for certain conditions detected by the processor, such as executing an illegal instruction, dividing by zero, or accessing memory “illegally”. A “machine check exception” is another example of this, where the CPU detects a hardware error. These cause the processor to immediately switch to kernel mode and run a piece of kernel code to handle the situation. In Windows, they may eventually result in an SEH exception being raised to user mode. There’s no idea of nested scopes or anything here; the processor just saves the address where the exception was triggered, and starts running kernel-mode code.

As you can see, the last category is the only one that requires an unavoidable user/kernel mode transition, but it can be translated all the way back into the first category of exception.

Actually, .NET exceptions are implemented in terms of SEH exceptions, which means that they do go on a trip through kernel mode. See http://blogs.msdn.com/cbrumme/archive/2003/10/01/51524.aspx for Chris Brumme’s explanation.

And it’s not fair to say that ‘most’ drivers are moving to user mode. Things like fixed disk drivers are still fundamental to system operation and will stay in kernel mode. But USB devices (e.g. Windows Mobile device sync) are starting to move to user mode, and audio devices did in Windows Vista.

Throwing exceptions in application code has nothing to do with processor rings. It’s all the same ring anyway.

First and foremost, throwing exceptions is a software architecture decision. In general, you throw them when something goes wrong, which is a point where you don’t really care if it takes a bit longer to display an error message to the user.

That said, exceptions in .NET are slow as hell, and that might be why .NET programmers tend to use them less than, say, Java programmers. I once saw a speed benchmark comparing throwing exceptions in different languages, and .NET was 5 or 6 times slower than all other languages, including C++ and Java.

rim’s example will make your process run with SYSTEM credentials and will thus give you complete control of the system (SYSTEM has more privileges than any Administrator), but it doesn’t mean you’ll run in kernel mode. Wouldn’t even work, as Kernel/User mode are pretty different environments (you can’t do Win32 API calls from kernel mode, for instance).

Jeff: exception handling doesn’t necessarily take a detour through the kernel, it depends on your programming environment as well as stuff that’s project-specific. For C++ code, it typically won’t detour through the kernel, try tracing code in a debugger. Notice that kernel32.dll != kernel-mode code.

And throwing an exception in a programming language should never involve any hardware exceptions…

Note: it seems the Visual C++ runtimes do call kernel32.RaiseException, which calls ntdll.RtlRaiseException, which calls ntdll.ZwRaiseException, which eventually does switch to ring 0.

I don’t have a copy of GCC installed on Windows right now, but I think it uses a different route that doesn’t depend on OS support?

If anyone is wondering, View → Show Kernel Times in Task Manager’s Performance tab does the trick.

What has throwing exceptions got to do with kernel vs. user mode? Switching to kernel mode is done by a “trap” call (which is, strictly speaking, an interrupt handled by a kernel process/thread), using the terminology I was taught in “Operating Systems” at school. A runtime exception in a high-level language may be a trap call, or an interrupt handled by some error-handling code in the OS or interpreter when trying to read/write protected memory, etc., but an “exception” doesn’t need to involve kernel traps (I think)…

“User mode is clearly a net public good, but it comes at a cost. Transitioning between User and Kernel mode is expensive. Really expensive.”

  • Not always!
    Tip: Windows GUI methods are executed in kernel mode!

Most drivers are shunted to the User side of the fence these days,
with the notable exception of video card drivers, which need
bare-knuckle Kernel mode performance. But even that is changing; in
Windows Vista, video drivers are segmented into User and Kernel
sections. Perhaps that’s why gamers complain that Vista performs about
10 percent slower in games.

A good guess might be that the “real” driver is running in kernel mode, while DRM shenanigans tied to the driver are taking place in user mode.

“Windows GUI methods are executed in kernel mode!”

Which is why crashing the GUI can crash Windows …

Unix always seems to have had the policy of everything in user mode unless absolutely required.

Windows seems to have had the policy of … well, none at all, really: user/kernel/whatever, as long as it’s Windows itself?

Unix is another world. There are good structures and good communication across APIs and modules.

Windows is full of patches and hacks!

“This is what we extra-crashy-code-writing programmers like to call “progress””

Hey now, I never write crashy or buggy code…never!

:wink:

Good post, Jeff! I love user mode too.

In User mode, the executing code has no ability to directly access
hardware or reference memory. Code running in user mode must delegate
to system APIs to access hardware or memory.

I can’t read memory without making an API call?

Hey Now Jeff,
I just learned some info about CPU modes (Kernel/User).
Coding Horror Fan,
Catto