Waldek Hebisch wrote:
Ernst-Ludwig Bohne wrote:
Simplifying your program I observe the same problem:
PROGRAM testsubrange;

var
  i: ShortInt;                     {signed 16 bit integer}
  j: SizeType;                     {unsigned 32 bit word}
  point: record x, y: real end;    {takes 16 bytes}

begin
  i := 3658;
  j := 16;
  writeln ('result1 = ', i*j);              {58528, OK}
  writeln ('result2 = ', i*SizeOf(point));  {-7008, wrong}
end.
My suspicion is that for calculating the second expression the compiler uses the wrong operand size (16 bits?) instead of 32 (the function SizeOf returns a value of SizeType). BTW: -7008 + 2^16 = 58528.
Basically speaking, gpc performs operations at the precision of the more precise argument. When computing `i*j', gpc notes that `j' is more precise (usually `SizeType' has 32-bit or better precision) and uses its precision to perform the multiplication. In the second case (`i*SizeOf(point)'), gpc notes that `SizeOf(point)' is a constant, that this constant fits into 16 bits, and uses only 16-bit precision for the multiplication.
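To illustrate the rule (only a sketch, assuming 16-bit `ShortInt' and 32-bit `SizeType' as in the program above; the program name and the variable `n' are just for illustration): the same multiplication comes out right as soon as the size is first stored in a `SizeType' variable, because then the second operand itself is 32-bit.

program precdemo;

var
  i: ShortInt;                   {assumed 16-bit signed here}
  n: SizeType;                   {assumed 32-bit unsigned here}
  point: record x, y: real end;  {SizeOf(point) = 16}

begin
  i := 3658;
  {the constant `SizeOf(point)' fits into 16 bits, so the product is
   computed in 16 bits and wraps: 58528 - 65536 = -7008}
  writeln (i * SizeOf(point));
  {the variable `n' is 32-bit, so the product is computed in 32 bits: 58528}
  n := SizeOf(point);
  writeln (i * n)
end.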
Also, ATM gpc has no runtime overflow checking, so one silently gets an incorrect result.
You may ask if gpc is "correct". That is a somewhat tricky question. Namely, gpc applies rules, and AFAICS it applies exactly the rules which were intended by the programmers coding them (I think Frank changed the rules to the current ones). So the real question is whether the rules are good ("correct"). Now, it would be nice to have rules which always give the mathematically correct result (in other words, use precision big enough to avoid any possibility of overflow). But this is impossible in current gpc: we have a maximal precision and we cannot go beyond that. Another possibility is to give correct results if possible and use maximal precision otherwise. However, the maximal precision is really a "double precision": gpc can perform arithmetic at twice the normal machine precision. Which is nice, but expensive: such operations may be significantly slower than normal operations. ATM gpc is rather dumb when predicting the needed precision, so even if the arguments are small gpc may think that maximal precision is needed. Also, even maximal precision may still give you wrong results.
So, there is a rather nasty compromise between correctness and speed. I usually go for correctness. However, here the speed penalty may be very significant (about 3 times when done right, but may be as high as 20 if maximal precision is slow). So, having optional overflow checking looks more attractive: one can test with overflow checking on and then release with checking off.
Yes, that's my reasoning too. I've always had overflow checking in mind (even though I also cannot work on it in the near future). And, of course, there should always be a way to force evaluation in higher precision if needed. In GPC/BP modes, type-casting one operand is a common way (also common in BP code). In ISO mode, we don't have such a way, AFAICS. OTOH, ISO doesn't support types larger than `Integer' anyway.
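For instance (a sketch of the casting idiom; it assumes BP-style value type-casts, which the GPC/BP dialects accept):

  {cast the small operand so the multiplication is carried out at
   LongInt precision instead of 16 bits}
  writeln ('result2 = ', LongInt(i) * SizeOf(point));  {58528}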
One could also try to invent some compromise rules, for example trying to use the "most precise fast arithmetic", but doing it right is tricky.
And it makes testing for several targets tricky, because they can differ about which types are fast -- not only relating to size, but also to Pascal types, e.g. MedInt may be fast or not.
However, I do see a GPC problem here. According to ISO Pascal, operations should not be done on smaller than Integer types. GPC followed this rule for `+' and `-', but not for other operations. The attached patch fixes this. It doesn't cause any regressions in the test suite on IA32 (but I haven't tested on other systems yet, and type size changes are always somewhat critical -- that's why I included 3 versions of these test programs, even if rather similar). Whether SizeOf gives a result of type SizeType or another type is quite irrelevant as GPC mostly ignores types of integer constants.
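Concretely, the behaviour the patch aims for looks like this (only a sketch of the idea, not one of the attached test programs; it assumes 16-bit `ShortInt' and at least 32-bit `Integer'):

program intpromo;

var
  a, b: ShortInt;  {smaller than `Integer' on this target}

begin
  a := 3658;
  b := 16;
  {according to ISO Pascal, `*' must not be evaluated in a type smaller
   than `Integer', so the result should be 58528, not a wrapped -7008}
  writeln (a * b)
end.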
Frank