Maurice Lombardi wrote:
However, TP objects don't apply here, since their destructors are not called automatically (only explicitly with "Dispose (pointer, destructor (...))"), which I consider a major drawback of the TP object model, BTW.
Which model do you have in mind that allows automatic destructors?
I think you're confusing two things: The automatic removal of objects (which you talk about) and the automatic calling of a destructor method when the object is removed, whether the removal itself is automatic or manual (which I meant).
I have heard of three:
- The old TP objects declared on the stack (and thus automatically freed at the exit of the procedure) with "var o: tObject" ... The drawback is that this does not allow inheritance and polymorphism (only encapsulation), because those need size changes. This makes them nearly useless.
FWIW, I disagree here. It's not old -- languages such as C++ fully support this approach. The drawback you mention only applies when you need polymorphism at creation time, e.g., when you need to create objects of different (related) classes, depending on some actual parameters or runtime data. There are such situations, sure, but in many other situations the class is known at creation time; such objects will still be polymorphic when you call their virtual methods or pass them as reference parameters.
So I prefer a model (such as that of TP and C++) that allows both "static" objects (in global memory or on the stack) and dynamic ones. I always shudder when I see, e.g., Java code that does "TFoo Foo = new TFoo ();" compared to simply "TFoo Foo;" in C++ or "var Foo: TFoo;" with TP objects just to create a simple local object.
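To illustrate this with a small made-up TP-style sketch (the names TShape, TCircle and Render are invented for the example): the object is a plain local variable, yet the call through a "var" parameter still dispatches to the virtual method of the actual class:

  type
    TShape = object
      constructor Init;
      procedure Draw; virtual;
    end;

    TCircle = object (TShape)
      procedure Draw; virtual;
    end;

  constructor TShape.Init;
  begin
    { empty; in TP a constructor call is needed so the VMT pointer gets set up }
  end;

  procedure TShape.Draw;
  begin
    WriteLn ('shape')
  end;

  procedure TCircle.Draw;
  begin
    WriteLn ('circle')
  end;

  { Takes any TShape descendant by reference; the call is polymorphic. }
  procedure Render (var s: TShape);
  begin
    s.Draw
  end;

  var
    c: TCircle;  { "static" object, no New/Dispose needed }
  begin
    c.Init;
    Render (c)  { prints 'circle', although c was never allocated dynamically }
  end.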
- Garbage collector "a la Lisp". It has no sensible way to decide when
to call it. Usually it was called "when memory is exhausted", a situation difficult to decide with nowadays shared memories systems. The program using it hanged for very long times when it started. A very bad compromise between lazziness and efficiency.
- Reference counting, a better compromise AFAIU. The count is incremented each time the object is assigned to a new pointer (at creation or on a proper assignment), decremented automatically at the exit of the procedure where the pointer is declared, and the object is freed when the count reaches zero. The space overhead is one word to store the count; the time overhead is the increment/decrement of the count at each assignment or procedure exit, plus the check for zero.
Which one of these is "better" seems to be a religious issue (FWIW, I also prefer reference counting most of the time, though I know that some progress has been made in the implementation of garbage collectors in recent years). GPC does not support either of them ATM, though you can plug in a garbage collector (such as the conservative Boehm GC) that simply overrides the low-level memory management routines.
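Just to make the bookkeeping concrete -- since GPC doesn't do any of this automatically, you'd have to roll it by hand. A minimal sketch (the names TCounted, AddRef and Release are made up for the example):

  type
    PCounted = ^TCounted;
    TCounted = record
      RefCount: Integer;
      { payload fields would go here }
    end;

  { Call whenever another pointer starts referring to the object. }
  procedure AddRef (p: PCounted);
  begin
    Inc (p^.RefCount)
  end;

  { Call when a pointer stops referring to the object,
    e.g. at the exit of the procedure where it is declared. }
  procedure Release (var p: PCounted);
  begin
    if p <> nil then
    begin
      Dec (p^.RefCount);
      if p^.RefCount = 0 then
        Dispose (p);
      p := nil
    end
  end;

The first pointer gets "New (p); p^.RefCount := 1;", every further copy an "AddRef", and every pointer a "Release" at the exit of its procedure -- which is exactly the one word of space and the increment/decrement plus zero check of time overhead mentioned above.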
But again, that wasn't my point. If you have a (say, TP-style) object that declares a destructor "Done", it will not be called automatically in any of the following cases:
- If the object is on the stack and leaves scope, and you don't call the destructor manually ("Foo.Done").
- If the object is dynamic and you dispose of it simply with "Dispose (Foo)" rather than "Dispose (Foo, Done)".
- If the object is dynamic and its memory is reclaimed, e.g., by the Boehm GC (which doesn't know about objects at all, just memory blocks).
In contrast, in C++ the destructor is called automatically in the first two cases -- but not in the last one, which makes the Boehm GC somewhat less useful for C++. The reason is that the GC has to treat the memory as untyped, which goes back to the original question: as there, if you only have an untyped pointer, the compiler cannot automatically call the destructor (if it would call it at all, which it wouldn't for TP objects anyway).
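Here is a small made-up TFoo sketch of the first two cases in TP syntax -- the destructor only runs where it is named explicitly:

  type
    PFoo = ^TFoo;
    TFoo = object
      constructor Init;
      destructor Done; virtual;
    end;

  constructor TFoo.Init;
  begin
  end;

  destructor TFoo.Done;
  begin
    WriteLn ('Done called')
  end;

  procedure Local;
  var
    f: TFoo;
  begin
    f.Init
    { f leaves scope here -- Done is NOT called unless you write "f.Done" }
  end;

  var
    Foo: PFoo;
  begin
    Local;

    New (Foo, Init);
    Dispose (Foo);        { memory freed, but Done is NOT called }

    New (Foo, Init);
    Dispose (Foo, Done)   { Done is called, then the memory is freed }
  end.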
Frank