On Mon, 27 Nov 2000, Martin Liddle wrote:
In article <Pine.HPX.4.02.10011271431440.19419-100000@dirac.desy.de>, Ernst-Ludwig Bohnen <bohnen@mail.desy.de> writes:
Another shot in the dark: so far we cannot exclude a garbage collection problem inside the memory management behind new and dispose. Under certain conditions, pieces of memory freed with dispose may not be reusable for (even slightly) bigger ones. Certain new and dispose sequences may therefore produce lots of unusable memory blocks, which would eventually eat up all available memory.
Can you be any more specific about what conditions might apply? The worst affected program starts off using ~5 MB of RAM and after 24 hours can be using more than 100 MB. Fortunately the server has 768 MB of RAM, so the problem is manageable, although not pleasant.
A particular sequence of new() and dispose() calls may produce memory fragmentation. The example below ignores the additional bytes needed for pointers, byte counts, etc., as well as modulo-4 alignment effects:
1) Get 1000 records, each 1000 bytes long, numbered from 0 to 999. This takes 1,000,000 bytes from the heap.
2) Now free all odd-numbered records and then get another 1000 records, each 1001 bytes long. We now use 500*1000 + 1000*1001 = 1,501,000 bytes, but have really taken 1000*1000 + 1000*1001 = 2,001,000 bytes from the heap, because the old 1000-byte holes left by the odd records cannot be reused for 1001-byte records.
A complex get/put application may break the heap into small unusable fragments in this way, so that each call to new() has to eat fresh memory of sufficient size; a small sketch of the pattern follows below.
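For illustration, here is a minimal Pascal sketch of that allocation pattern (the program and variable names are mine; I use GetMem/FreeMem with explicit byte counts instead of new/dispose on two record types, which keeps the example short but exercises the heap manager the same way):

program FragDemo;
const
  N = 1000;
var
  small: array[0..N-1] of Pointer;
  big:   array[0..N-1] of Pointer;
  i: Integer;
begin
  { 1) 1000 blocks of 1000 bytes each: 1,000,000 bytes from the heap }
  for i := 0 to N-1 do
    GetMem(small[i], 1000);

  { 2) free every odd block, leaving 500 holes of 1000 bytes }
  for i := 0 to N-1 do
    if Odd(i) then
      FreeMem(small[i], 1000);

  { 3) 1000 blocks of 1001 bytes: none of the 1000-byte holes fits,
       so fresh heap memory is consumed instead }
  for i := 0 to N-1 do
    GetMem(big[i], 1001);

  WriteLn('Live data : 1501000 bytes');
  WriteLn('Heap taken: 2001000 bytes');
end.

Whether the holes really stay unused depends on the heap manager of your particular compiler and runtime, so treat the printed totals as the worst case, not a guarantee.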
But the numbers you mention above do not suggest to me that this story causes your problem. Continuous memory leakage proportional to time is more probably caused by missing dispose calls, for instance forgetting to dispose daughter records before disposing their mother record, or ...
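To make the mother/daughter case concrete, here is a hypothetical layout (the type and procedure names are mine, not taken from your code); forgetting the inner loop and calling dispose(m) alone would leak every daughter record on every cycle:

type
  PDaughter = ^TDaughter;
  TDaughter = record
    payload: array[0..255] of Byte;
    next: PDaughter;
  end;
  PMother = ^TMother;
  TMother = record
    daughters: PDaughter;   { head of a singly linked list }
  end;

procedure DisposeMother(m: PMother);
var
  d, nextd: PDaughter;
begin
  { dispose the daughters first; dispose(m) alone would orphan them }
  d := m^.daughters;
  while d <> nil do
  begin
    nextd := d^.next;
    dispose(d);
    d := nextd;
  end;
  dispose(m);
end;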
Ernst-Ludwig
-- Martin Liddle, Tynemouth Computer Services, 27 Garforth Close, Cramlington, Northumberland, England, NE23 6EW. Phone: 01670-712624. Fax: 01670-717324.