The bloated world of Managed Code

Mark Russinovich recently posted a blog entry bemoaning the bloated footprint of managed .NET apps compared to their unmanaged equivalents. He starts by comparing a trivial managed implementation of Notepad to the one that ships with Windows:


This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2005/04/the-bloated-world-of-managed-code.html

So I guess you think Steve Gibson is crazy?
http://www.grc.com/smgassembly.htm

That’s pretty much a given…

Minimizing a window always causes the working set to be cleared (Windows Explorer calls the OS function SetProcessWorkingSetSize(h,-1,-1) when it minimizes the main application’s window).
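For the curious, here’s roughly what that call looks like from managed code: a minimal P/Invoke sketch (mine, not anything from Explorer) that should show the reported working set collapse the same way minimizing does.

```
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class TrimWorkingSet
{
    // Passing -1 for both sizes asks the OS to page out as much of the
    // process as it can, which is what empties the reported working set.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetProcessWorkingSetSize(IntPtr hProcess,
        IntPtr dwMinimumWorkingSetSize, IntPtr dwMaximumWorkingSetSize);

    static void Main()
    {
        Console.WriteLine("Before: {0:N0} bytes", Environment.WorkingSet);

        SetProcessWorkingSetSize(
            Process.GetCurrentProcess().Handle, (IntPtr)(-1), (IntPtr)(-1));

        Console.WriteLine("After:  {0:N0} bytes", Environment.WorkingSet);
    }
}
```

Note that this only changes what Task Manager reports; the pages get faulted right back in as soon as they’re touched again.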

Those baselines seem OK to me too [for an application with a button and a label].

I agree completely about the productivity gains, but let’s not ignore the elephant in the room. Developers of commercial software may not be able to justify the resource needs vs. productivity gains.

Right back at you, Jeff! You feel the love? :slight_smile:

http://viewfusion.com/blog/jcarlisle/archive/2005/04/22/366.aspx

BTW, I’m calling Uncle Bill about you showing off performance metrics on a beta product!

I didn’t think you were supposed to release performance metrics of pre-release products? In any case, expect memory usage to go down.

I would expect a significant drop in disk and memory footprint in the final product(s), of both the framework and VS2005.

The current builds are not optimized, and probably have a significant amount of debugging information…

I’m worried about some of the same things as Mark is. About 5 years ago, I worked on a distributed near-real-time system for our company’s trading arm. We ended up with 1 process per desk, 3-4 processes for each of the intermediate steps in processing the data, and a couple of large data caches. This was running on 2 servers with 4 processors each and 2 GB of RAM each. We ended up with over 100 processes per server, at ~1.7 GB memory utilization per server.
Most of our applications were in Delphi and C++, with a couple of Java processes (which had to be tuned for memory performance). I’m pretty sure that if we had used managed code, the additional memory footprint of just the code would’ve pushed us over the 2 GB mark. Our choices would’ve been to stick everything in the GAC or write monolithic applications that made heavy use of threading. Neither one is particularly appealing.
Also, you’re comparing building an app in C# vs. C++. I think that you’re really comparing it with MFC, which is notoriously difficult to use. I don’t find C# any easier to use than Borland’s Delphi and only a little better than C++ Builder. There’s also Sybase’s ill-starred Optima++, which also made GUI programming easy.

Marty, I noticed the same things:

  1. minimizing the form reduces the WinForms hello world app’s working set drastically.

  2. the footprint of any additional code, after you’ve “paid” for the baseline .NET runtime costs, is relatively small.

In the end, you have to compare real apps to real apps; there are too many variables for the “managed sample notepad app to unmanaged notepad” to be a valid comparison. Not to mention that unmanaged notepad kinda sucks at any size and speed.

Well, in an ideal world you’d be able to compile to native Win32, .NET, Linux, and even Win16 (at a pinch), all from the same source code…

So you could support paranoid fat corporate workstations, cheap people like me still running on a Pentium II, and the oddballs on Unix. :slight_smile:

You guys ever heard of Delphi…??

Try www.borland.com

Code once, compile to all the above platforms: one codebase, one IDE…

Still, probably better to stick with Microsoft; after all, weren’t they the ones that looked after your compatibility so well when we went from 16-bit to 32-bit (I feel old)… and they’ve done a super job with VB6 to VB7…

Delphi is written in Delphi, therefore if they break your code they break theirs too… kinda gives me a warm feeling, that.

"The goal of the .NET runtime is not to squeeze every drop of performance out of the platform-- it’s to make software development easier. "

Umm, what part became easier? I think “Managed” is just an acronym for “Mainly A New Avoidance of Good Engineering Design”.

ASP.NET compared to ASP is interesting. ASP.NET requires the VS environment rather than a simple text editor, and it creates immense code, most of which is only required by the aforementioned VS environment. If you have to make a simple textual change, you have to recompile the application rather than just make the change and keep going. ASP.NET apps start slower and are inflexible with CSS. The list just goes on and on.

Well said, Jeff. There seem to be people in this world who think that they are ‘hard-kore coderz’ cuz they squeeze an extra K of memory on the stack cuz ‘managed-code sucks, man’. I wonder who pays these hackers for the extra time it takes them to do that? Do people realise it costs less than $0.01 to buy that 1 KB of memory? Do they realise the memory is also likely to have fewer defects than geek-boy’s code?

Well well, managed code is for kids who don’t know how to write programs. At the professional level it simply sucks. I wonder why people come to programming, or call themselves programmers, if they can’t write high-performance programs. A skilled unmanaged programmer finishes an entire project well before a skilled managed programmer could finish the same project (provided the project is big enough that coding will continue for several months). It takes time only for kids who don’t know how to write programs.

Hi Jeff,

At the moment I’m not inclined to agree that the baseline memory footprint of .NET is acceptable. I have been working on a Windows Forms application for a client, and the memory bloat has the client believing the application is leaking memory. I can’t really prove that the app isn’t leaking, so I am going to end up on a wild-goose chase to see if I can reduce memory consumption because of the bloat. I am sure I will find a leak or two. I never thought I would have to call Dispose so much when I started .NET development. Where is the garbage-collected paradise?
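FWIW, the usual defense is to lean on using blocks so the Dispose calls can’t be forgotten. GDI+ types like Bitmap, Graphics, and Font hold unmanaged memory the GC can’t see, and in my experience that’s what the “leak” often turns out to be. A minimal sketch (the thumbnail scenario is invented for illustration):

```
using System.Drawing;

class Thumbnailer
{
    // Bitmap wraps unmanaged GDI+ memory, so dispose it deterministically
    // instead of waiting for the garbage collector to finalize it.
    public static Bitmap MakeThumbnail(string path)
    {
        using (Bitmap original = new Bitmap(path)) // disposed on exit, even on exception
        {
            return new Bitmap(original, new Size(64, 64)); // caller disposes this one
        }
    }
}
```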

Oh, and fix the comment form so I can put my Google blog address in the URL box. The domain is getting flagged as questionable content!!!

I am seeing a great many misconceptions regarding the way the CLR allocates memory.

What you are calling memory bloat is actually by design and for very good reason.

.NET allocates as much memory as the application may need for initial compilation (don’t forget that your nifty code is JIT compiled from IL into machine code at runtime) and for running.

The CLR allocates this memory upfront because it is faster to allocate one large block than to keep allocating small blocks of memory (hence the roughly 7 MB base footprint of all .NET applications). This is common practice for high-performance applications, and many game engines do the exact same thing.

As you near that limit, the CLR allocates roughly twice as much memory as is actually needed, in case you allocate more in the near future; this avoids the performance impact of another allocation and the readjustment the GC would have to make to cope with it.
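You can watch the same grow-by-doubling trade-off in the base class library itself. To be clear, this isn’t the GC’s segment reservation, just the identical policy applied to ArrayList capacity:

```
using System;
using System.Collections;

class DoublingDemo
{
    static void Main()
    {
        // ArrayList grows its backing store by roughly doubling it,
        // trading spare memory for fewer reallocations and copies.
        ArrayList list = new ArrayList();
        int lastCapacity = -1;
        for (int i = 0; i < 100; i++)
        {
            list.Add(i);
            if (list.Capacity != lastCapacity)
            {
                lastCapacity = list.Capacity;
                Console.WriteLine("Count {0,3} -> Capacity {1}", list.Count, lastCapacity);
            }
        }
    }
}
```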

Should the system run low on memory, the CLR automatically frees any unused memory for use by the operating system. You can test this by starting the “one button” sample form discussed above and then loading up the system with other applications while keeping the form maximised. You will notice that the footprint of your CLR application gets reduced at this point.
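You can also see how little of the footprint is live managed data by comparing the GC’s own number against the process working set, something like:

```
using System;

class HeapVsWorkingSet
{
    static void Main()
    {
        // What the GC believes is allocated on the managed heap
        // (passing true forces a collection first, so the number settles).
        long managedBytes = GC.GetTotalMemory(true);

        // What the process actually has mapped: the runtime, JITted code,
        // and the heap segments reserved ahead of need.
        long workingSetBytes = Environment.WorkingSet;

        Console.WriteLine("Managed heap: {0:N0} bytes", managedBytes);
        Console.WriteLine("Working set:  {0:N0} bytes", workingSetBytes);
    }
}
```

On a trivial app the working set dwarfs the managed heap number, which is exactly the up-front reservation described above.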

This, guys, as it happens, IS in the Rotor/GC docs, should you wish to read them.

The whole concept that managed code gets the job done quicker, or, in my opinion, that the use of one programming language over another does, is misleading at best. Those working in a particular space would be best served by investing their time in a good initial design that considers the entire product lifecycle of what they are building, and by finding the necessary common supporting libraries to get the job done effectively.

The idea that you don’t need to worry about managing resources is fine on some level, but it quickly breaks down in any serious application. If you don’t know what I’m talking about, it’s because you haven’t worked on one. Memory is cheap; however, connections, synchronization objects, threads, transactions, and various OS handles are not. You simply cannot elect not to manage resources and expect a serious application to continue to operate in a sane and reliable manner.

When it comes down to it: yes, you still have to manage resources on some level regardless of what modern marvel of a programming system you are using, and no, if handled properly this does not have an appreciable effect on the time it takes to get a particular job done.
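In managed code that management usually takes the shape of an IDisposable wrapper, so the scarce resource’s lifetime stays explicit whatever the GC does with the memory. A hypothetical sketch (the TradeFeed class and its connection are invented for illustration):

```
using System;
using System.Data.SqlClient;

// Memory is the GC's problem; the connection is not.
class TradeFeed : IDisposable
{
    private readonly SqlConnection connection;

    public TradeFeed(string connectionString)
    {
        connection = new SqlConnection(connectionString);
        connection.Open();
    }

    public void Dispose()
    {
        // Returns the connection to the pool immediately, rather than
        // whenever a finalizer happens to run.
        connection.Dispose();
    }
}
```

Consumed with using (TradeFeed feed = new TradeFeed(connectionString)) { … }, the connection is released at block exit every time, exception or not.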

I’ve written entire Perl applications that use nothing but automatic variables (none explicitly defined or referenced) in a few minutes that would have taken hours to do in any other language I’m aware of. That’s well and good, but we all know full well that serious medium-to-large-scale applications can’t be written this way.

I happen to be in Mark’s camp. I think .NET is a step in the wrong direction. Microsoft technologies have always been difficult and inefficient, and .NET is probably the worst.

By constantly saying that you need 2 GB of memory to run the latest and greatest, you almost make people forget to ask why. Why does the new application have to be 10 times bigger AND 10 times slower, thereby offsetting the advantage of the improved hardware? Just so that you can have 10 such applications? How many applications do you really need (per task, of course)? One good one, or 10 sluggish ones?

Secondly, as far as rapid application development is concerned, even there .NET is a failure. Its languages are poorly designed, and hence will keep on changing. Somehow we have come to treat newer versions of languages (CLR 1.0, CLR 2.0, DXD1 - 10) as an acceptable criterion for judging the ‘maturity’ of a product. Well, these are not products. These should be thought out properly to begin with, so that they stay stable and people can use them to develop products. Products developed by using the tools are what should have versions, with features added and improvements included.

And finally (sorry, this is getting long), the oft-touted rhetoric that you can develop in ANY language is nonsense, because all these languages are really not different. There is only one underlying language, the CLR, and all the .NET languages are really just syntactic sugar coating around the CLR. They cannot offer anything above and beyond what is there in the CLR. A far better alternative is to differentiate languages based on their ‘class’: machine level (assembly), main programming (C/C++, Fortran), and then scripting (Perl, Python, Ruby), etc. .NET languages are much more difficult than the above-mentioned scripting languages, and much less powerful too. And they are too slow for the main-programming category as well. A newer approach is to use a scripting language for the shell and logic of your application and to implement the main engine in a main programming language. I know Python lets you do that, and I believe Python (or Lua, or Ruby) plus components developed in C/C++ is the way to program CORRECTLY.

Obviously spoken by someone who has never developed anything in .NET.

The sheer fact that you say things like it needs 2 GB to run and that it’s ten times slower tells me you don’t know the first thing about .NET; try looking up “ngen” in the documentation.
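For anyone who hasn’t seen it: NGen precompiles an assembly’s IL to native images at install time, so the JIT cost goes away and the generated code pages can be shared across processes. Roughly (MyApp.exe is a placeholder; this is the 2.0 syntax, while 1.x just takes the assembly path directly):

```
ngen install MyApp.exe
```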