On Tue, 10 Jun 2003, Frank Heckenbach wrote:
Also here, the assembler output looks correct:
prime-fail.p:
const max=600000000;
var n:array[2..max] of integer;
prime-fail.s:
.common N,2399999996,4
(Not incidentally, the negative size reported in the message is just 2399999996 - 2^32.)
So I still think the problem must be with the assembler. Either it's actually not 64-bit-capable, or it might need a special option for this. I'm afraid I have no experience in that area.
OK, I contacted the binutils guys; I will forward their solution. More tests are also in progress.
BTW, do those other tools actually use such large data structures? There's a difference (from the assembler's point of view) between a program that just uses 64 bit instructions and one that contains such large data structures. In the latter case, the assembler itself must be able to work with 64 bit numbers, while in the former case it only needs the 64 bit instruction codes.
We compiled 64 bit versions of gawk and perl. They were tested with large data sets (12 Gig). I know they allocate memory dynamically. That could make the difference, I guess.
miklos