On 23 Nov 2002 at 17:17, Frank Heckenbach wrote:
[...]
I don't think that Borland ever claimed that their Pascal was "the" Pascal.
As I said in the part you deleted, "I'm not sure if Borland actually made this claim".
Yes, my mistake.
[...]
Is the following code C or not? All versions of Bourne compatible shells can run it and produce the expected output. If they do, then they are C compilers (or maybe C interpreters). ;-)
#include <stdio.h>
#define echo int main () { printf (
#define exit "\n"); return 0; }
echo "Hello, world!"
exit
That just betrays the C origins of the Bourne and similar shells. The logical conclusion is that "sh" can act as a C interpreter, and if it can run that code, then it is not very different from a C compiler, or a subset thereof.
[...]
Therefore I conclude that BP and Delphi are "Pascal", albeit incomplete implementations with extensions.
I might agree to this. However, it's a very wide description. For sufficiently large values of incomplete and sufficiently many extensions, every C compiler is an incomplete implementation of Pascal with extensions. ;-)
When a C compiler can correctly compile the simple Pascal program that I used as an example, then I will consider it an incomplete implementation of Pascal.
[...]
There is a need if you do not want to restrict your C++ users to precompiled Pascal libraries. What if they want to change something in the VCL (or 3rd party) sources, to fix a bug, or for whatever other reason? This would be impossible if they couldn't then recompile the sources.
Then Borland should simply ship the Pascal compiler together with the C++ compiler.
That is, "sort of", what happens. There are binaries that the C++ compiler interacts with which are little more than emasculated versions of some Delphi binaries.
(That might be a question of licensing, but if the current C++ compiler can compile Pascal, every user of that compiler effectively has a Pascal compiler now, so it wouldn't make much of a difference.)
I haven't tried to produce .exes with the emasculated versions of Delphi binaries, but I suspect that it wouldn't work, and that you can only compile units with them. So they are not of much use other than to produce .DCUs for the C++ compiler to use.
I still hold that it's not necessary to mix the languages for any such reason. (And again, that's quite exactly what GNU does, so I think I have some experience in this area ...)
BTW, interesting that you bring up this point in this context. What if the users want to fix a bug in the compiler? Does Borland supply the source code of the compiler then?
Nope. Why would they? They supply the sources to the VCL and RTL if you have the "Professional" or "Enterprise" versions. But if you want bugs in the compiler fixed, you just have to report them and then wait until service packs are released (and hope they got round to fixing "your" bug).
Yes, it is a bit silly. I guess the closest thing to it that I can think of is varargs in C. This is their way of supporting variable numbers (and types) of parameters. It is controversial whether that should ever be supported. But once you decide to support it, you are already extending the language - and I guess it's then up to you how you do it.
Anything they do in their compiler is up to them, but we may still discuss whether or not it's done well.
Of course.
[...]
I am not sure that there is a consensus on that point (or that there can be). Delphi code seems very clear and eminently readable to me (and no doubt to many others). The extensions do not make the code any less clear or readable to me.
Honestly, I'm quite tired of hearing these general and unspecific claims.
There is nothing general or unspecific about what I said! I merely articulated my experience when examining source code.
Why don't you present some actual code which is clearer or better understandable due to these controversial features like overloading, `()' etc.?
I never said that it was clearer or better understandable. I said "The extensions do not make the code any less clear or readable to me".
"i := foo();" is just as clear and readable to me as "i := foo;"
Some might say that it is more readable (I express no opinion on that point - but Waldek indirectly alluded to this) because, in the former, it is clear that "foo()" is a function/method - but in the latter, "foo" might be a variable/field, or a function/method.
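For concreteness, here is a minimal sketch of the two call styles side by side. It is my own illustration, not code from the thread; I am assuming Free Pascal in Delphi mode (Delphi itself accepts the same calls without the directive), and "Foo" is just a made-up name:

program CallSyntaxDemo;
{$mode delphi}  { assumption: Free Pascal in Delphi mode }

function Foo: Integer;
begin
  Result := 42;
end;

var
  i: Integer;
begin
  i := Foo;    { standard Pascal call of a parameterless function }
  i := Foo();  { Borland/Delphi-style call with the explicit "()" }
  WriteLn(i);
end.

Both assignments call Foo and store 42 in i; the only difference is whether the call is syntactically marked as a call.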
(I mean real understandability -- it might be easier at first sight to read some code where "everything" is called, say, `Read' -- one will think, oh yeah, that's reading something, very clear. But if you really want to understand what's going on, you'll have to figure out which routine/method is called in which case, and I don't see how overloading will help there.)
I can. If you have:
Procedure Read (x: string); overload;
Procedure Read (x, y: integer); overload;
Procedure Read (x: string; y: integer; z: Double); overload;
It would be quite obvious which of these is being called once the parameters being passed are examined.
"Read (14, 32);" cannot be calling anything other than the second "Read".
This is arguably less clear than:
Procedure Read1 (x: string);
Procedure Read2 (x, y: integer);
Procedure Read3 (x: string; y: integer; z: Double);
"Read2 (14, 32);"
But I am not confused or mystified by either model; both are equally accessible and understandable to me. When I am debugging, I cannot see that one model makes it any more or less clear to me than the other which routine is actually being called. Whether or not one is "better" than the other is a matter for the individual programmer's taste and philosophy. We can debate these things until kingdom come, but anything we say for or against either model will only be rehashing well-worn arguments in the programming world.
Best regards, The Chief
--------
Prof. Abimbola A. Olowofoyeku (The African Chief)
web: http://www.bigfoot.com/~african_chief/