Scott Moore wrote:
Frank Heckenbach wrote:
Good example indeed. But it is a case where default parameters may be even better suited -- if there are n such parameters, with overloads you'll need 2^n routines.
In theory, yes. In practice, the number never rises above 8 (three optional parameters). However, I don't have any fundamental objection to default parameters beyond the degradation in source transparency, which I agree overloading also affects. By source transparency I mean making it obvious to the reader of a program exactly what routine will be executed for any particular call, and how that call will appear.
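To make the 2^n point concrete, here is a sketch in a dialect that allows overloading by parameter list (the routine and parameter names are invented for illustration). With just two optional parameters of distinct types you already need four declarations, and each further optional parameter doubles the count:

  { four overloads to cover every present/absent combination of two options }
  procedure draw(x, y: real);                                { no options       }
  procedure draw(x, y: real; width: integer);                { width only       }
  procedure draw(x, y: real; colour: char);                  { colour only      }
  procedure draw(x, y: real; width: integer; colour: char);  { width and colour }

With three optional parameters the set grows to eight declarations, which is where the figure of 8 above comes from; a single declaration with defaults would cover all of them.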
You may easily have 8 default parameters. Of course, it is quite likely that some (most) of the parameters have the same type, so you need some way to signal which parameters you want to pass explicitly, like:
procedure foo(i1: integer value 0; i2: integer value 1; ... ; i8: integer value -1);
...
foo(5 => i2, 9 => i7);   { pass 5 for i2 and 9 for i7; the rest take their defaults }
Note that with equal types you cannot use overloads to tell the parameters apart.
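To spell out why: overload resolution selects on parameter types, and two candidates that differ only in the parameter's name present the same signature, so a call cannot choose between them. A tiny (hypothetical) illustration:

  procedure foo(i2: integer);
  procedure foo(i3: integer);   { same parameter profile as the line above;
                                  foo(5) would be ambiguous, so this overload
                                  set cannot be allowed }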
But in your scheme, each invocation would create a new copy of the routine, which is not really nice WRT compile time and binary size. I think I prefer schema parameters for this case, which are also type-safe.
Schemas require run-time templates, which means that you trade code space (template style) for runtime efficiency. Schemas are largely the reason I decided not to implement the ISO Extended Pascal standard. They largely move type checking from compile time to runtime. I had a conversation with John Reagan (one of the authors and chief proponents of the standard), where he related to me that the makers of the standard believed that compile-time-for-runtime trade-offs were acceptable in a day and age where "computers are more powerful than what is required".
Without (hopefully) getting into a tirade on the subject, I fundamentally oppose that.
I very much oppose the view that "computers are more powerful than what is required". Looking at how processor price increases with speed clearly shows that this is nonsense.
However, I think that this opinion about schemas is simply mistaken. The Pascal tradition was to use statically sized data -- if you can live with that, you will get the best speed (and you can forget about schemas). But having faster computers, we want to solve more complicated problems, and for such problems dynamically sized data is a must. So the question is how to implement such data -- efficiently and in a type-safe way. IMHO Pascal schemas are one of the best solutions:
-- they allow omitting the extra info if an array is statically allocated and _not_ used as an argument to a function/procedure
-- they allow many checks to be done at compile time
-- runtime and size overhead is quite small
-- they are quite flexible
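For concreteness, here is roughly what an ISO 10206 schema looks like (the type and variable names here are invented). The discriminant n is part of the type, travels with the value, and is readable like a field, so static cases stay fully static while run-time bookkeeping is confined to the dynamic ones:

  type
    vector(n: integer) = array [1..n] of real;   { schema with one discriminant }

  var
    fixed: vector(100);   { discriminant fixed at compile time }
    p: ^vector;           { undiscriminated: size chosen at run time }

  ...
  new(p, 1000);           { allocate a vector with n = 1000 }
  p^[500] := 0.0;         { indexing checkable against p^.n }
  writeln(p^.n)           { the discriminant reads like a field }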
Note that your "general arrays" are just as efficient/expensive as a simple array schema with one discriminant. If I read your description correctly, the only thing that "general arrays" can do but a schema cannot is cooperate with fixed-length arrays. So using schemas you lose a single word per fixed array. But if you need an n by n matrix, your workaround loses 2n words and a lot of time on pointer manipulation -- in that case a schema is much more efficient.
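To put the n-by-n comparison in code (again with invented names, and with n, i, j assumed declared elsewhere): a two-discriminant schema keeps the whole matrix in one contiguous block with only the two bound words as overhead, whereas building it from one-dimensional dynamic arrays needs a vector of row pointers, one allocation per row, and an extra indirection on every access:

  type
    matrix(rows, cols: integer) = array [1..rows, 1..cols] of real;
  var
    m: ^matrix;
  ...
  new(m, n, n);       { one allocation; overhead is the two discriminant words }
  m^[i, j] := 0.0;    { plain two-dimensional indexing, no pointer chasing }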
The method used in IP Pascal requires a fraction of the work of schemas, both from the point of view of the compiler writer and of runtime checking, but accomplishes all of the aims of schemas with respect to dynamic arrays. So I submit that schemas were not a required price to pay to enhance standards-based Pascal.
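As a rough sketch of that method (the exact IP Pascal syntax may differ in detail, so treat this as an assumption rather than a definitive example), the declaration leaves the array bounds open and the length is supplied when the array is created:

  var
    a: array of integer;   { bounds left open in the declaration }
  ...
  new(a, 100);             { length supplied at allocation time }
  a[50] := 1;
  dispose(a)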
The only advantage your method has is simpler implementation. You are penny-wise on one-dimensional arrays (how many fixed-length one-dimensional arrays do you want to have?), and you lose big on multi-dimensional arrays.