The above code should compile; I translated it from the BP code. The only difference is in the type declarations: mByteOrder must always be 32 bits, and mByte must always be an array of four 8-bit elements, hence the use of Cardinal(x). The way it works is simple, and the code does not depend on any compiler-specific syntax (except for the type declarations), so it should keep compiling correctly in the long run. AFAIK, constants are stored in the native byte order of the target architecture. By comparing the long value byte by byte, the byte order, and hence the endianness, can be determined. I have tested this code on a Macintosh (the only big-endian machine I currently have access to) and, of course, on a PC. The results displayed are correct on both architectures.
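For reference, here is a minimal sketch of the technique described above, assuming a variant record that overlays mByteOrder (a Cardinal) with mByte (an array of four bytes); the constant and the messages are only illustrative, not taken from the original program:

  program EndianCheck;

  type
    TByteOrder = record
      case Boolean of
        True:  (mByteOrder: Cardinal);          { the 32-bit value }
        False: (mByte: array[0..3] of Byte);    { the same 4 bytes, viewed one by one }
    end;

  var
    bo: TByteOrder;
  begin
    bo.mByteOrder := $01020304;
    { On a little-endian machine the least significant byte is stored first }
    if bo.mByte[0] = $04 then
      WriteLn('little endian')
    else
      WriteLn('big endian');
  end.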
This is very nice, but it requires a runtime test (and a check of the big-endian flag *each* time).
Maybe it is smarter to let the above program write its result to an include file.
The makefile would then always compile and run the above program before the actual compilation.
That way you get the flexibility AND a compile-time decision between little and big endian.
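As a rough sketch of that idea (the file name endian.inc and the define names are hypothetical), the test program could emit a conditional define instead of printing a message; the makefile would compile and run it before building the main sources, which then simply use {$I endian.inc} and {$IFDEF ENDIAN_BIG}:

  program GenEndianInc;

  { Writes an include file with a conditional define that the main
    sources can test at compile time. }

  var
    v: Cardinal;
    b: array[0..3] of Byte absolute v;   { overlay the bytes of v (BP/FPC 'absolute') }
    f: Text;
  begin
    v := $01020304;
    Assign(f, 'endian.inc');
    Rewrite(f);
    if b[0] = $04 then
      WriteLn(f, '{$DEFINE ENDIAN_LITTLE}')
    else
      WriteLn(f, '{$DEFINE ENDIAN_BIG}');
    Close(f);
  end.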
Marco van de Voort (MarcoV@Stack.nl) http://www.stack.nl/~marcov/xtdlib.htm