About the performance issues…
I work at AMD (where the 64-bit extensions to x86 were invented) in the group that, among other things, handles liaison with the compiler vendors, working with them on code generation, etc., so I have some real basis for what I’m saying…
Your average large C/C++ program will probably run 10%-15% faster just from recompiling it for 64-bit, and the reason is almost entirely the fact that there are twice as many registers in the ISA, so the compiler can keep more stuff in registers and eliminate spills to memory.
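To make the register point concrete, here’s a toy sketch (the function name and shape are made up, not taken from any real program): with a dozen or so values live at once, a 32-bit x86 build has only 8 general-purpose registers to play with and will likely spill some of them to the stack, while an x86-64 build has 16 and can usually keep them all in registers. You can see the difference by comparing the assembly from, say, gcc -O2 -m32 -S versus gcc -O2 -m64 -S.

    #include <stddef.h>

    /* Hypothetical example: eight accumulators, two pointers, a count and a
     * loop index are all live inside the loop.  That fits comfortably in
     * x86-64's 16 general-purpose registers, but not in x86-32's 8, so a
     * 32-bit compiler has to spill some of them to the stack. */
    long sum_of_products(const long *a, const long *b, size_t n)
    {
        long acc0 = 0, acc1 = 0, acc2 = 0, acc3 = 0;
        long acc4 = 0, acc5 = 0, acc6 = 0, acc7 = 0;
        size_t i;

        for (i = 0; i + 8 <= n; i += 8) {
            acc0 += a[i]     * b[i];
            acc1 += a[i + 1] * b[i + 1];
            acc2 += a[i + 2] * b[i + 2];
            acc3 += a[i + 3] * b[i + 3];
            acc4 += a[i + 4] * b[i + 4];
            acc5 += a[i + 5] * b[i + 5];
            acc6 += a[i + 6] * b[i + 6];
            acc7 += a[i + 7] * b[i + 7];
        }
        return acc0 + acc1 + acc2 + acc3 + acc4 + acc5 + acc6 + acc7;
    }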
A second advantage is that where 32-bit has Lord knows how many different calling conventions (__stdcall, __cdecl, FORTRAN, PASCAL, WINAPI, …), 64-bit has only one calling convention, and it passes most parameter values in registers, not on the stack.
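A quick sketch of what that means in practice (toy function names; the register details below are the Windows x64 convention): annotations like __stdcall and __cdecl select genuinely different argument-passing schemes in a 32-bit build, but a 64-bit build just ignores them and uses the one x64 convention, where the first four integer/pointer arguments arrive in RCX, RDX, R8 and R9 instead of being pushed on the stack.

    #include <stdio.h>

    #ifndef _WIN32
    /* __stdcall/__cdecl are Windows keywords; stub them out so this toy
     * still compiles elsewhere. */
    #define __stdcall
    #define __cdecl
    #endif

    /* In a 32-bit build these two use different conventions (callee-pops
     * vs. caller-pops, both with arguments on the stack).  In a 64-bit
     * build the annotations are ignored and both get the same
     * register-based calling sequence. */
    int __stdcall add_stdcall(int a, int b) { return a + b; }
    int __cdecl   add_cdecl(int a, int b)   { return a + b; }

    int main(void)
    {
        printf("%d %d\n", add_stdcall(1, 2), add_cdecl(3, 4));
        return 0;
    }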
Sure, your mileage will vary, and there are some pathological cases where performance actually decreases, but on average it’s a modest win, about 1 processor speed bin.
Of course, you don’t notice much difference on most desktop apps, since most desktop apps are not aggressively optimized for speed. A 3GHz processor used to surf the web is idle most of the time waiting for your page to download, so who would notice the difference?
As others have pointed out, only pointers and size_t are expanded to 64-bit; int, long, float, double, short, and char are still the same size they were for 32-bit builds (at least under Windows’ LLP64 model; Linux-style LP64 does grow long), so the program .exe size and data space requirements do increase, but only modestly. Your data structures don’t double in size, unless they’re all pointers and size_t.
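If you want to see that for yourself, something like the snippet below will do (struct node is a made-up example, and the size comments assume the Windows LLP64 model):

    #include <stdio.h>
    #include <stddef.h>

    /* Made-up structure: only the pointer member changes size between
     * 32-bit and 64-bit builds, so the struct grows by one word (plus
     * whatever padding the alignment rules add), not by 2x. */
    struct node {
        int          id;      /* 4 bytes in both builds */
        double       weight;  /* 8 bytes in both builds */
        struct node *next;    /* 4 bytes in 32-bit, 8 bytes in 64-bit */
    };

    int main(void)
    {
        printf("int:    %zu\n", sizeof(int));         /* 4 -> 4 */
        printf("long:   %zu\n", sizeof(long));        /* 4 -> 4 on Windows (LLP64) */
        printf("size_t: %zu\n", sizeof(size_t));      /* 4 -> 8 */
        printf("void*:  %zu\n", sizeof(void *));      /* 4 -> 8 */
        printf("node:   %zu\n", sizeof(struct node)); /* grows by one pointer, not 2x */
        return 0;
    }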
You could argue that the switch from ASCII to UNICODE, by doubling the size of all the text strings, probably did a lot more to increase program size and memory footprint, but nobody seems to be too upset by that.
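If you want to see that doubling, here’s a trivial check (assumes Windows, where wchar_t is 2 bytes; most Unix compilers make it 4, so the wide string grows even more there):

    #include <stdio.h>
    #include <wchar.h>

    int main(void)
    {
        char    narrow[] = "Hello, world";   /* 13 bytes, counting the terminator */
        wchar_t wide[]   = L"Hello, world";  /* 26 bytes where wchar_t is 2 bytes */

        printf("narrow: %zu bytes, wide: %zu bytes\n",
               sizeof(narrow), sizeof(wide));
        return 0;
    }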