Silvio a Beccara wrote:
This version doesn't do any range checking. This means the code may have worked incorrectly, unnoticed.
Too bad... anyway now I'm using the latest one.
Are you using a 64-bit platform? This array would have a size of 6.4 GB, which is not possible on a 32-bit one.
From your question I understand that the maximum limit coincides with the physical memory (which is limited to 4 GB on a 32-bit machine). I'm using a 32-bit machine, so I adjusted the dimensioning of that array (and the size of the calculation) to fit my memory.
It depends on several things:
- Virtual memory. (Having that much physical memory is usually not necessary; e.g. on my system with 512 MB of physical RAM and a lot of swap, I can allocate 2 GB, though actually using it may cause heavy thrashing if one is not careful.)
- Addressable space. On 32-bit machines, that's 4 GB. Even newer IA-32 systems which support up to 64 GB of physical RAM (via PAE) can only address 4 GB at a time, and a single array must fit in addressable space, so on 32-bit platforms that's the absolute limit. (On 64-bit platforms, there's virtually no limit in this regard.)
- System restrictions on addresses. E.g., Linux and/or GNU ld, by default AFAIK, place the heap at 1 GB in the virtual address space, and the kernel-reserved space starts at 3 GB, so 2 GB is the total space available for all allocations within a process. (It may be possible to change this with special options, but I'm not currently familiar with them.)
- Explicit limits (ulimit/limit on Unix).
- Backend version. gcc-2.x imposed stricter limits (4 Gbit, i.e. 512 MB on 32-bit platforms), but gcc-3.x should pose no such problems.
Frank