Dave Bryan wrote:
GPC's sized integers appear to be intended to address this very problem. An "Integer (16)" always stays 16 bits, regardless of what GPC uses for "Integer" this year. If a particular interface demands certain-sized parameters, e.g., a 32-bit Boolean, then it appears to me that "Boolean (32)" will always produce the correct type, whereas "WordBool" will produce the correct type now but may not in the future. That is surely what the GPC manual cautions about in the section on sized integers.
(At least that's what I know from other systems, and it is what makes porting to 64 bits generally rather easy: just recompile with a compiler for the new target -- as long as you didn't make any assumptions....)
But presuming that "int" will be the same across platforms is just such an assumption. (And indeed, as I recall, MS had a lot of initial trouble moving from 16-bit to 32-bit interface calls, because under 16-bit Windows a number of 32-bit parameters were declared as "long"s -- which were 32 bits under the 16-bit compiler but became 64 bits under the 32-bit compiler. So a lot of programs failed to work when recompiled, because those parameters remained 32 bits under the new interface.)
The problem is that C doesn't have, as far as I know, a method of declaring an integer of a guaranteed size, which makes interfacing across compilers problematic. You can pick some types that happen to work today, but they aren't guaranteed to work tomorrow. ISO Pascal, of course, has the same problem. GPC appears to solve it by providing sized integers. (Ada also recognizes this problem and solves it by providing optional "representation attributes" to guarantee a particular size where needed.)
"WordBool" and "Boolean (32)" solve different problems: the former when interfacing to a GNU C routine that uses "unsigned int" and where the parameter size is intended to track changes in the size of GNU C's "int", and the latter when interfacing to third-party routines that require specific parameter sizes.
I do not think that "explicitly sized" types really solve the problem. Namely, the types are chosen to have the needed properties: for example, a type must be big enough to allow indexing any array, and must be efficiently passed to procedures. The proper sizes depend on the target processor and also on the compiler used. They also reflect some design choices of OS writers. The Unix OS interface assumed that "long" is big enough to hold file offsets (but this is no longer true). Many 64-bit compilers still make "int" 32 bits, and only "long" and pointers are 64 bits. On the Wine list I saw a post stating that the 64-bit Microsoft interface uses 32-bit ints and longs and 64-bit pointers (which violates one of the basic assumptions in C). So for an OS interface one really has to tune the mapping for a given flavor -- however, such a mapping may be hidden in some small unit like:

  unit mstypes;
  interface
  type
    MSInt = Integer;
    ...
    MSBool = Integer;
    ...
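(The same idea in C is a small per-platform mapping header. A hypothetical sketch -- the type names are illustrative, not from any real Microsoft header; the point is that only this one file changes per flavor. It also shows the Wine situation above: the Windows interface keeps its "long" at 32 bits even on 64-bit targets, while an LP64 Unix compiler makes "long" 64 bits.)

```c
/* mstypes.h -- hypothetical mapping header, analogous to the
   Pascal "mstypes" unit above.  Client code uses only these names;
   the per-platform tuning is confined to this one small file. */
#ifndef MSTYPES_H
#define MSTYPES_H

#if defined(__LP64__)   /* 64-bit Unix: native "long" is 64 bits,   */
typedef int  MSLong;    /* but the interface's long stays 32 bits,  */
#else                   /* so map it to "int" here.                 */
typedef long MSLong;    /* Windows / 32-bit Unix: long is 32 bits.  */
#endif

typedef int   MSInt;    /* "int" is 32 bits on all of these targets */
typedef MSInt MSBool;   /* the interface's Boolean is a 32-bit int  */

#endif /* MSTYPES_H */
```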
Of course, I assume that there is a way to specify the needed type, but mapping the usual Pascal types is likely to require fewer changes than using explicit sizes. So I would limit explicit sizes to the cases where one really means a given size, like a graphics program optimized for a fixed bit depth, or reading file formats with a prescribed size (though for files, due to endianness problems, reading bytes and reconstructing values seems preferable).