Waldek Hebisch wrote:
Adriaan van Os wrote:
Yes, that is what the current implementation does, but the following declaration shows the absurdity of it:
int9 = -1..0 attribute( size = 9);
This declares a type with a range -256..255. Yes, syntactically and formally it is correct, but semantically it is nonsense to write -1..0 where you mean -256..255.
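To make the current behaviour concrete, here is a minimal Python model of it (an illustrative sketch only, assuming two's-complement representation; with the current rule the declared bounds are effectively replaced by the full range implied by the size attribute):

```python
def signed_range(bits):
    """Full range of a two's-complement signed type with the given bit size."""
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

# Under the current behaviour, the declared bounds -1..0 are ignored and
# the range is derived from the size alone:
print(signed_range(9))   # (-256, 255)
```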
So, I argue again that, semantically, in
int16 = integer attribute( size = 16);
"integer" should not be regarded as a type identifier but as a placeholder for the-family-of-signed-integral-numbers.

I do not understand what your point is here. Are you arguing in the abstract, or do you have a concrete proposal for what to change in GPC?
Concerning proposals: we could make `int9 = -1..0 attribute( size = 9);' illegal (or maybe ignore the size attribute here, as we do in other places).
(1) I agree with Adriaan that the current behaviour for `-1 .. 0 attribute (Size = 9)' seems strange, so we should either forbid it, or make it a type with range -1 .. 0 and size 9. Though I'm not sure ATM how hard the latter would be to implement and whether it actually has any useful purpose. And there are also problems with it, as it would make a difference between "default" ranges and explicit subranges (which must also be taken into account internally in GPC, i.e. one more flag probably, which I generally dislike). E.g.:
type
  t1 = Low (Integer) .. High (Integer);
  t2 = t1 attribute (Size = 64);
  t3 = Integer attribute (Size = 64);
t1 is basically equivalent to Integer. But according to the new rule, t2 would have the same range as t1 (say, 32 bits), while according to "established practice", t3 would have the full range of a 64 bit type.
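The difference between the two rules can be modelled in a few lines of Python (a sketch only; it assumes a 32-bit `Integer', which is typical but not universal for GPC platforms):

```python
def signed_range(bits):
    """Full two's-complement range for the given bit size."""
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

INTEGER_BITS = 32  # assumed platform size of `Integer'

# t1 = Low (Integer) .. High (Integer) -- equivalent to Integer
t1 = signed_range(INTEGER_BITS)

# Proposed rule (1): an explicit subrange keeps its declared range and only
# its storage size changes, so t2 has the same (32-bit) range as t1:
t2 = t1

# "Established practice": Integer attribute (Size = 64) gets the full
# 64-bit range:
t3 = signed_range(64)

print(t1, t2, t3)
```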
(2a) We could also change that "established practice", so that `Integer attribute (Size = 64)' has only the range of `Integer', but size of 64 bits. Thus, to get a real 64 bit integer type, one would have to specify a type of sufficient range such as
t4 = -$8000000000000000 .. $7fffffffffffffff attribute (Size = 64);
but it would require changes to existing source code. One could argue it's a bit overspecified (size and range), and thus unclear whether the range as given or the range derived from the size is actually used (currently the latter, perhaps unintuitively), but as long as both match, it could work, even if we decide to use the former in the future.
(2b) We could also define the rule that a size attribute can only reduce the range (more precisely, it would take the intersection of the original range and the maximum range for the given size -- possibly resulting in an invalid range and thus an error in strange cases). Then `-1 .. 0 attribute (Size = 9)' would work as Adriaan (and I) expect, and `Integer attribute (Size = 8)' would also work as it does now. Larger types would still have to be changed, either like t4 above, or somewhat simpler:
t5 = LongestInt attribute (Size = 64);
In fact, Longest{Int,Card} would always do then, for any size that GPC can represent at all, so it would be a safe choice (and unambiguous WRT signedness). But it would still require changes to existing source code -- not as many as (2a), since "small" types would still work with `Integer', but it's not generally clear how small is small enough (`Integer' is 16 bit on a few exotic platforms, so `Integer attribute (Size = 32)' wouldn't be fully portable), so in practice one would probably have to change almost all occurrences to Longest{Int,Card}.
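Rule (2b) is simple to state precisely as an intersection; a Python sketch of it (illustration only, assuming two's-complement signed ranges):

```python
def signed_range(bits):
    """Full two's-complement range for the given bit size."""
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

def apply_size_attribute(low, high, bits):
    """Rule (2b): the size attribute can only reduce the range -- take the
    intersection of the declared range and the full range for the size.
    An empty intersection would be a compile-time error."""
    slow, shigh = signed_range(bits)
    nlow, nhigh = max(low, slow), min(high, shigh)
    if nlow > nhigh:
        raise ValueError("empty range after applying size attribute")
    return nlow, nhigh

print(apply_size_attribute(-1, 0, 9))              # -1 .. 0 attribute (Size = 9) keeps -1..0
print(apply_size_attribute(-2**31, 2**31 - 1, 8))  # Integer attribute (Size = 8) works as now
```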
At least such changes (2a and 2b) would be compatible with existing GPC versions.
(3) OTOH, if we want to forbid that case altogether, we need explicit rules about what to allow. As Waldek said:
OTOH, writing "should not be regarded ..." is _not_ a proposal. The compiler has to follow formal rules, and a change means that we have new rules. I do not see how to convert what you wrote into reasonable rules (the example I gave shows _some_ of the difficulties).
(3a) One possible rule would be to allow only built-in types to have a size attribute. But it seems a bit arbitrary, and makes those types more "magic" than otherwise necessary, and what about aliases, etc.?
(3b) Another possible rule would be to allow only types that have the full range WRT their size (i.e. `Integer' has e.g. 32 bits and the full range -2^31 .. 2^31 - 1, but -1 .. 0 doesn't ... well, unless it has a size of 1 bit). That also seems a strange rule, in particular as the original size and range are exactly what is not used of the type.
(4) What we actually use is the information whether it's integer or Boolean, and in the former case whether signed or unsigned. So perhaps a completely different alternative would be to get rid of the size attribute and to predefine 3 pseudo-schema types (e.g., SignedIntegerOfSize (n)). They wouldn't be real schema types, of course, as they couldn't be defined normally, the "discriminant" would have to be a compile-time constant in a certain range, etc. Also, it would require changes to existing source code, so I'm skeptical of this idea as well.
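One can picture the pseudo-schema idea as a compile-time-checked type constructor; a hypothetical Python sketch (the name SignedIntegerOfSize and the 1..64 size limit are from the proposal above and an assumption respectively, not an existing GPC feature):

```python
def SignedIntegerOfSize(n):
    """Hypothetical pseudo-schema type: a signed integer type of exactly
    n bits; n would have to be a compile-time constant in a supported range."""
    if not (1 <= n <= 64):          # assumed limit of representable sizes
        raise ValueError("unsupported size")
    return {"signed": True, "bits": n,
            "low": -(2 ** (n - 1)), "high": 2 ** (n - 1) - 1}

int16 = SignedIntegerOfSize(16)
print(int16["low"], int16["high"])  # -32768 32767
```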
(0) BTW, another problem of the current implementation is that `0 .. 1 attribute (Size = 8)' doesn't even say whether it's signed or unsigned, as the subrange type could be either (without a semantic difference). Of course, we could define that it's always unsigned if both bounds are nonnegative. Then what about variable subranges, `a .. b attribute (Size = 8)'? This would be unsigned if both a and b are of unsigned types (rather than if both values are nonnegative, as that wouldn't be known until run time).
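The two signedness rules just described can be stated side by side in a short Python sketch (an illustration of the rules only, not GPC's actual implementation):

```python
def constant_subrange_signedness(low, high):
    """Rule for constant bounds: unsigned iff both bounds are nonnegative."""
    return "unsigned" if low >= 0 and high >= 0 else "signed"

def variable_subrange_signedness(a_type_signed, b_type_signed):
    """Rule for variable bounds `a .. b': only the *types* of a and b are
    known at compile time, so the subrange is unsigned iff both bound
    expressions have unsigned types."""
    return "unsigned" if not (a_type_signed or b_type_signed) else "signed"

print(constant_subrange_signedness(0, 1))        # 0 .. 1 would be unsigned
print(variable_subrange_signedness(True, False)) # one signed bound type => signed
```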
I'm afraid, so far all the alternatives I can see (1-4), including the current implementation (0), don't look very good.
Peter N Lewis wrote:
At 15:22 +0100 10/3/06, Waldek Hebisch wrote:
Adriaan van Os wrote:
int9 = -1..0 attribute( size = 9);
I do not understand what your point is here. Are you arguing in the abstract, or do you have a concrete proposal for what to change in GPC?
I believe Adriaan's issue is that he wants to be able to define the type "Integer" to be a specific size, different from GPC's default, effectively:
type Integer = Integer attribute( size = 16 );
His argument is that the second "integer" is not a use of the pre-defined type "integer" as such; it is a "placeholder for the-family-of-signed-integral-numbers".
The issue stems from the compatibility problem that otherwise arises when porting from Pascal dialects that traditionally have a 16 bit integer.
How about the way the `System' unit does it if __BP_TYPE_SIZES__ is set? Sure, it only works in a module, but you normally would want to do such low-level stuff in a low-level module anyway, I suppose.
Alternatively, how about the way I proposed under (2) above? This would work if we decide to do (1) or (2) (or leave (0)), not with (3) or (4).
type t = -$8000 .. $7fff attribute (size = 16);
If you can suggest a way to define the "Integer" type to be 16 bits, as has previously been done as:
Int16 = Integer attribute( size = 16);
Integer = Int16; {error: identifier `Integer' redeclared in a scope where an outer value was used}
then that would probably be sufficient. Otherwise, I believe he is suggesting that the use of Integer (or Cardinal) with an attached size attribute not be considered a use of the Integer (or Cardinal) type as far as the "redeclared in a scope where an outer value was used" warning is concerned.
As Waldek said, this wouldn't be unproblematic (and that's apart from any implementation difficulties of "unmarking" a type as not used). What you have in mind is something like letting `Integer' stand for my "pseudo-schema" above. The problem is that `Integer' is not a reserved word, and after (or while) redeclaring it, things get hairy.
Frank