Frank Heckenbach wrote:
Scott Moore wrote:
To reach 32 bit operations in AMD64, you have to use a mode prefix (from 64 bit mode). Hence, there is an instruction length penalty.
That's already bad. Even worse if it can only read/write 64 bit words, so in particular 32 bit writes get quite cumbersome. (Do you know if this is so?)
Not really. The design (which is really just the 32 bit method extended) was to flip all operations to 64 bits, then provide lots of new move modes to make up for the loss of efficiency in 32 bit operations. So there is a direct load/store of byte, word (16 bit), 32 bit, and 64 bit. A 64 bit value can be moved as an immediate, which is required because the immediates on common instructions stayed at 32 bits. For example, in add rax,imm, the imm is 32 bits only. The loads, of course, have various signed and unsigned extension modes.
So the short answer is that the processor is getting more RISC-like, relying more on loads and stores, while the instructions with built-in load and store modes, like add, get penalized.
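For concreteness, here is a small C sketch of those widths (the function names are mine, and the assembly in the comments is what gcc -O2 typically emits for x86-64; treat it as an illustration, not a reference):

    #include <stdint.h>

    /* Each body compiles to a single move of the matching width;
       the narrow loads get zero/sign extension for free. */
    uint64_t load_u8  (const uint8_t  *p) { return *p; } /* movzx  eax, byte [rdi]  */
    int64_t  load_s8  (const int8_t   *p) { return *p; } /* movsx  rax, byte [rdi]  */
    uint64_t load_u16 (const uint16_t *p) { return *p; } /* movzx  eax, word [rdi]  */
    uint64_t load_u32 (const uint32_t *p) { return *p; } /* mov    eax, [rdi] (zero-extends) */
    int64_t  load_s32 (const int32_t  *p) { return *p; } /* movsxd rax, dword [rdi] */
    uint64_t load_u64 (const uint64_t *p) { return *p; } /* mov    rax, [rdi]       */

    /* The one instruction that takes a full 64-bit immediate:
       mov reg, imm64 ("movabs" in gas syntax). */
    uint64_t big_imm (void) { return 0x123456789abcdef0u; }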
Keeping the common immediate modes at 32 bits was one of the secrets that allowed x86-64 code to be as small as x86 code (in my compiles with GCC, the difference in size for large programs was practically nil between the two modes). There is a price: an AMD64 program cannot practically be larger than 2 GB (the range of a relative jump). There is no way to encode a direct 64 bit jump; you have to do something like load the target into a register, then jump through that.
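A quick C sketch of that workaround (call_far and fn_t are illustrative names I made up, not anything from a real API; the commented assembly is the typical gcc output):

    #include <stdint.h>

    typedef void (*fn_t)(void);

    /* A direct call/jmp only encodes a 32-bit relative displacement
       (about +/-2 GB of reach), so an arbitrary 64-bit target has to
       be materialized in a register first. */
    void call_far (uint64_t target)
    {
        fn_t f = (fn_t)target; /* movabs rax, target -- 64-bit immediate   */
        f();                   /* call   rax         -- indirect, full range */
    }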
In any case, the ISO 7185 standard is clear: the range of integer is -maxint to maxint, and all calculations shall occur in that range, i.e., integer is the maximum range type on any system. The program is supposed to use maxint to adjust its behavior to the target as required, or to specifically declare a subrange, perhaps so that a subrange larger than the target is capable of gives a compile time message.
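For what it's worth, the same adjust-to-the-target idea looks like this in C, with INT_MAX from limits.h playing the role of maxint (the two code paths are made up for illustration):

    #include <limits.h>
    #include <stdio.h>

    int main (void)
    {
    #if INT_MAX >= 2147483647
        puts ("int is at least 32 bits; taking the wide code path");
    #else
        puts ("int is narrower than 32 bits; taking the narrow code path");
    #endif
        return 0;
    }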
That's fulfilled whatever we make `Integer' to be, since `MaxInt' will always refer to this particular type. The longer types are disabled in standard modes (and they have `MaxLongInt' etc.).
Yes, but that really defeats the intent of the standard, which was that integer represent the most efficient word length on the machine.
By the way, the AMD64 ABI (the calling/sizing convention GCC is based on), which requires that int remain 32 bits, violates the C standard as well.
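To make the sizing concrete, this is what the LP64 model of the x86-64 System V ABI gives with gcc on a typical 64 bit Linux:

    #include <stdio.h>

    /* int stays 32 bits while long and pointers go to 64 bits. */
    int main (void)
    {
        printf ("int:     %zu bits\n", 8 * sizeof (int));    /* 32 */
        printf ("long:    %zu bits\n", 8 * sizeof (long));   /* 64 */
        printf ("pointer: %zu bits\n", 8 * sizeof (void *)); /* 64 */
        return 0;
    }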
What does it demand? (I'm not really an expert on C standards. ;-)
Frank
Oh, well, I was afraid you would call me on that. I taught a class on the AMD64 architecture, and it came up in my research. However, it's probably a valid point for you, so I will go find it.