In article <Pine.HPX.4.02.10011281537040.14830-100000@dirac.desy.de>, Ernst-Ludwig Bohnen <bohnen@mail.desy.de> writes:
> A particular sequence of new() and dispose() calls can produce memory fragmentation. The example below does not take into account the additional bytes needed for pointers, byte counts, etc., and ignores alignment (modulo 4) effects:
>
> Get 1000 records, each 1000 bytes long, numbered from 0 to 999. We now use 1,000,000 bytes taken from the heap.
>
> Now free all the odd-numbered records and then get another 1000 records, each 1001 bytes long. We now use 500*1000 + 1000*1001 = 1,501,000 bytes, but have actually taken 1000*1000 + 1000*1001 = 2,001,000 bytes from the heap, because the old 1000-byte holes left by the odd records cannot be reused for 1001-byte records.
>
> A complex get/dispose application may break the heap into unusably small fragments this way, so that every call to new() has to eat up fresh memory of sufficient size.
Thanks. I now understand what you are saying.
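To make sure I have it right, here is the pattern in code as I understand it. This is only a minimal Pascal sketch: the type and variable names are mine, and the outcome assumes a simple allocator that cannot coalesce the freed holes or split them usefully.

  program FragDemo;

  type
    SmallPtr = ^SmallRec;
    SmallRec = array [1..1000] of Char;   { a 1000-byte record }
    BigPtr   = ^BigRec;
    BigRec   = array [1..1001] of Char;   { a 1001-byte record }

  var
    Small: array [0..999] of SmallPtr;
    Big:   array [0..999] of BigPtr;
    i: Integer;

  begin
    { Take 1000 * 1000 bytes from the heap. }
    for i := 0 to 999 do
      New(Small[i]);

    { Free the odd-numbered records: 500 holes of 1000 bytes each,
      separated by the surviving even-numbered records. }
    for i := 0 to 999 do
      if Odd(i) then
        Dispose(Small[i]);

    { None of the 1000-byte holes can hold a 1001-byte record, so a
      simple allocator takes another 1000 * 1001 bytes of fresh heap:
      2,001,000 bytes total for 1,501,000 bytes of live data. }
    for i := 0 to 999 do
      New(Big[i]);
  end.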
> But the numbers you mention above do not indicate to me that this story causes the problem. Continuous memory leakage, growing proportionally with time, is more probably caused by missing dispose calls, for instance forgetting to dispose of daughter records before disposing of their mother record, or ...
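For concreteness, the kind of mistake meant here looks something like the following sketch. The record layout is invented purely for illustration:

  program LeakDemo;

  type
    DaughterPtr = ^DaughterRec;
    DaughterRec = record
      Payload: array [1..256] of Char;
    end;

    MotherPtr = ^MotherRec;
    MotherRec = record
      Daughter: DaughterPtr;
    end;

  var
    M: MotherPtr;
    i: Integer;

  begin
    for i := 1 to 1000 do
    begin
      New(M);
      New(M^.Daughter);

      { Wrong: only the mother record is released. Each daughter
        record becomes unreachable but stays allocated, so the heap
        grows steadily with time. The correct order would be:
          Dispose(M^.Daughter);
          Dispose(M); }
      Dispose(M);
    end;
  end.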
We have looked for such sequences and haven't found any. That doesn't mean our code is correct: this is a large application with a lot of legacy code, some of it dating back more than 20 years, and it has been ported between several operating systems, compilers and processors. Now that our most urgent deadline has passed, we will have a little more time to look at the debugging data we have gathered and to follow Frank's suggestion to upgrade to the latest snapshot of the compiler. Thank you for your interest in our problem.