9

While reading through the 70-536 training kit, it states:

The runtime optimizes the performance of 32-bit integer types (Int32), so use those types for counters and other frequently accessed integral variables.

Does this only apply in a 32 bit environment? Does Int64 take over in a 64 bit environment, or is Int32 still the better choice?


5 Answers

6

That's a funny way to put it. The runtime doesn't have much to do with it. The CPU is designed for processing 32-bit integers, which is why they're the most efficient to use.

In a 64-bit environment, it again depends on the CPU. However, on x86 CPUs at least (which, to the best of my knowledge, is the only place .NET runs), 32-bit integers are still the default. The registers have simply been widened so they can hold a 64-bit value, but 32 bits is still the default.

So prefer 32-bit integers, even in 64-bit mode.

Edit: "default" is probably not the right word. The CPU just supports a number of instructions, which define which data types it can process and which it cannot. There is no "default" there. However, there is generally a data size that the CPU is designed to process efficiently, and on x86, in both 32- and 64-bit mode, that is 32-bit integers. 64-bit values are generally not more expensive, but they do mean longer instructions. I also believe that at least the 64-bit-capable Pentium 4s were significantly slower at 64-bit ops, although on recent CPUs that part shouldn't be an issue (but the instruction size may still be).

Values smaller than 32 bits are somewhat more surprising. Yes, there is less data to transfer, which is good, but the CPU still grabs 32 bits at a time, which means it has to mask out part of the value, so these can actually be slower.

Answered 2009-02-10T00:48:03.443
2

Scott Hanselman posted an article on his blog today addressing the differences between 32-bit and 64-bit managed code. The summary is basically that only pointers change size; integers remain 32 bits.

You can find the post here.

Answered 2009-02-11T19:56:08.257
1

Unless you plan on having the value exceed 2 billion, use an integer value. There is no reason to use extra space for a perceived performance benefit.

And contrary to what other people in this thread may say, until you measure the benefit of such a small thing as this, it is only a perceived benefit.

Answered 2009-02-10T00:47:52.947
1

http://en.wikipedia.org/wiki/64-bit suggests (you might find a more authoritative source, this one is the first one that I found) that Microsoft's "64 bit" offerings use 64-bit pointers with 32-bit integers.

http://www.anandtech.com/guides/viewfaq.aspx?i=112 (and I don't know how trustworthy it is) says:

In order to keep code bloat to a minimum, AMD actually sets the default data operand size to 32-bits in the 64-bit addressing mode. The motivation is that 64-bit data operands are not likely to be needed and could hurt performance; in those situations where 64-bit data operands are desired, they can be activated using the new REX prefix (woohoo, yet another x86 instruction prefix :)).

Answered 2009-02-10T00:51:08.040
-2

A 32-bit CPU handles 32-bit integers faster, and a 64-bit one handles 64-bit integers faster. Just think about it: you either have to shift bits by 32 all the time, or waste 32 bits for every 32 bits, which is essentially the same as using a 64-bit variable without the advantages of a 64-bit variable. Another option would be building extra circuitry into the CPU so that shifting would not be necessary, but obviously that would increase production costs. The same applies to 32-bit CPUs handling 16-bit or 8-bit variables.

I'm not sure, but I wouldn't be surprised if the 64-bit variant of the .NET Framework were a bit more optimized to use longs; that's just speculation on my part, though.

Answered 2009-02-10T00:52:44.667