Cserzo Miklos wrote:

> On Sat, 31 May 2003, Frank Heckenbach wrote:
>
> > > /var/tmp//ccrvtgAm.s: Assembler messages:
> > > /var/tmp//ccrvtgAm.s:7: Error: .COMMon length (-1894967300.) <0! Ignored.
> > >
> > > line 7: .common N,1999999996,4
> >
> > This size is correct, so I suppose the problem is with the assembler. You might need to get a 64-bit one (if it's a Solaris assembler), or try to compile binutils for a 64-bit target (if it's using the GNU assembler).
>
> Sorry, I did not understand you last time. The source and assembler file I sent are the version that compiles. Here is the version that fails. The only difference is the size of the vector. The assembler is GNU as, and it is 64-bit capable. Other tools have been successfully compiled with it on this system.
Also here, the assembler output looks correct:

prime-fail.p:

  const max=600000000;
  var n:array[2..max] of integer;

prime-fail.s:

  .common N,2399999996,4
(The array has 600000000 - 2 + 1 = 599999999 elements of 4 bytes each, i.e. 2399999996 bytes. Not coincidentally, the negative size reported in the error message is just 2399999996 - 2^32 = -1894967300.)
So I still think the problem must be with the assembler. Either it's actually not 64-bit-capable, or it might need a special option for this. I'm afraid I have no experience in that area.
If you have no other recourse, you might want to try assembling prime-fail.s with as directly. If this fails as expected, compile a similar C file, take its assembler output and assemble that directly. If the C version works, you should be able to spot some significant difference between the Pascal and C assembler outputs. If it also fails for C, then those other tools were apparently built differently, and you need to find out how.
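
Such a C test file might look like this (just a sketch; the file name is made up, and the array size is chosen to match the Pascal example above):

  /* bigarray.c -- a tentative definition of an uninitialized global
     array of 599999999 ints (2399999996 bytes), the same size as the
     Pascal array n. gcc should normally emit this as a .common/.comm
     directive, just as GPC does. */

  int n[599999999];

  int main (void)
  {
    n[0] = 1;        /* reference the array so it isn't discarded */
    return n[0];
  }

Then "gcc -S bigarray.c" should produce a bigarray.s containing a similarly large size, which you can feed to as directly. (Note that a compiler targeting a 32-bit platform may reject an array this big outright, which would itself be a hint about how your setup is configured.)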
BTW, do those other tools actually use such large data structures? From the assembler's point of view, there's a difference between a program that merely uses 64-bit instructions and one that contains such large data structures. In the latter case, the assembler itself must be able to compute with 64-bit numbers, while in the former case it only needs the 64-bit instruction encodings.
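
To illustrate the distinction (again only a sketch with a made-up name): a file like the following exercises 64-bit arithmetic but contains no large objects, so assembling it only requires the 64-bit instruction encodings; every number the assembler has to evaluate itself fits comfortably in 32 bits:

  /* small64.c -- 64-bit arithmetic, but only 8 bytes of data.
     The assembler emits 64-bit instructions here, yet all sizes
     and offsets it computes are tiny. */

  long long x = 1234567890123456789LL;

  int main (void)
  {
    x = x * 2 + 1;            /* compiled to 64-bit instructions */
    return (int) (x & 0xff);
  }

If the other tools are all of this kind, the fact that they built successfully says nothing about whether as can handle a 2399999996-byte .common.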
Frank