On Thu, 15 Nov 2001, Frank Heckenbach wrote:
Maybe. But (Visual) Basic, C++ and FORTRAN also evolve, don't they? For example, consider Visual Basic's "collections", which are essentially sets of members that can have various types (also mentioned in gpc info's TODO list as ???).
Err, where?
info -f gpc::Welcome::Note To Do::Planned features::Planned features: Other types -> item 9 in the list.
[ ... ]
Anyway, range checking can be life-saving. For example, remember the Ariane launch failure, when a 16-bit counter that had been there for years rolled over into negative values after something was sped up. (At design time they said it could never exceed 16 bits.) So the unit failed, after which the redundant unit was consulted, only to produce the same result (of course, it was running the same software with the same 16-bit counter). The rocket's computer then concluded it had started to fall and activated the self-destruct sequence, even though it was climbing.
This story makes me agree that extensive range checking, especially with arrays, sets and (last but not least) implicit precision conversions, is very important. That's also a weakness of the C language which C++ hasn't eliminated (but maybe I'm slightly off-topic here?) ...
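For illustration, here is a minimal sketch of that failure mode in Pascal. The names and values are made up, the 16-bit counter is modelled as a subrange type, and a 32-bit Integer (as in GPC) is assumed:

  program ArianeSketch;

  type
    { a 16-bit signed counter, modelled as a subrange }
    Counter16 = -32768 .. 32767;

  var
    Counter: Counter16;
    HorizontalVelocity: Integer;

  begin
    HorizontalVelocity := 40000;   { "can never go over 16 bits" ... until it does }
    Counter := HorizontalVelocity  { with range checking: a runtime error here;    }
                                   { without it, the value silently wraps around   }
  end.

With the check compiled in, the error is at least detected at the assignment instead of propagating as a bogus negative value.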
Not really. Range checking will at best generate runtime errors, not correct the problems automatically. If Ariane's software had generated an error (even if it was an exception that could be handled), it's quite unlikely that it could have found the cause of the problem and corrected it (in real time, therefore fully automatically!).
Here I'd always leave the programmer the choice whether to compile in range checking or not. Still, in the Ariane example, getting a runtime error doesn't have to end execution; some errors may be recoverable. For example, life-support computers and similar applications can't just say "panic: freeing free inode: syncing file systems ..." (a common message on SunOS 4.1.x :-) ... ). But allowing an error to go undetected is not a good solution either.
Range checking is mostly a debugging aid that will tell the programmer earlier about possible problems and ease debugging. For an end-user application, it might even have negative impacts -- think of an editor or something which suddenly aborts with a runtime error (while without the check it might behave wrongly, but may just keep working long enough for the user to save his changes). Of course, the problem can be alleviated by catching runtime errors (like I do, e.g., in PENG), which GPC allows but which is certainly not standard.
On the other hand, by leaving range checking out, we could run very expensive computations in vain and never be sure of the results. I got a taste of that feeling while trying to design some of the recent tests, for example -- sometimes there's just nothing firm and proven to hold on to.
For example, in 1994 the Pentium had a floating-point division bug. Then they tried to cover it up and say it doesn't occur too often (incredible). Yet, was anybody able to run tests dividing every combination of two 80-bit extended-precision numbers to see if the result is correct to the last bit?
This would require 2^80 * 2^80 ~ 1.461 * 10^48 tests. At 1 Gflops (10^9 tests per second), that's about 1.46 * 10^39 seconds; since a year is about 3.16 * 10^7 seconds, that comes to roughly 4.6 * 10^31 years -- much, much longer than the age of the Universe.
So what I want to say is just that I don't see range-checks as the panacea that some of you seem to. They're a debugging aid, sure, but I've found other debugging aids (e.g., GPC's warnings about using uninitialized variables) at least as useful.
Yes, but UCSD Pascal already had it back in 1987. If it can be switched on and off with a compiler directive, I can't see why not.
For example, who wants to exclude floating-point checks to make the code run faster, only to get a matrix full of NaNs (not-a-number values) as a result?
With the editor example you have a point -- but the writer of the editor could disable runtime checks in the production binary. On the other hand, in a complex calculation with no range checking, the results become useless with the slightest error ...
To range checking I would especially add pointer checking. I think lots of wasted hours could be avoided if we tested whether a pointer points into a properly allocated part of memory or into a freed block. And this could be implemented very easily. Of course, it shouldn't run in the production binary, since checking every pointer for validity before dereferencing would be tremendously slow.
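A minimal sketch of what such a debugging aid might look like, done by hand in BP-style Pascal -- all the names (TrackedNew, TrackedDispose, CheckPtr) are mine, and a real implementation would of course live in the compiler or runtime instead:

  program PtrCheckDemo;

  const
    MaxTracked = 1000;

  type
    PInteger = ^Integer;

  var
    Tracked: array [1 .. MaxTracked] of Pointer;
    TrackedCount: Integer;
    P: PInteger;

  { remember each allocation }
  procedure TrackedNew (var P: PInteger);
  begin
    New (P);
    TrackedCount := TrackedCount + 1;
    Tracked[TrackedCount] := P
  end;

  { forget the allocation and free it; complain about double frees }
  procedure TrackedDispose (var P: PInteger);
  var
    I: Integer;
  begin
    for I := 1 to TrackedCount do
      if Tracked[I] = P then
        begin
          Tracked[I] := Tracked[TrackedCount];
          TrackedCount := TrackedCount - 1;
          Dispose (P);
          P := nil;
          Exit
        end;
    WriteLn ('TrackedDispose: pointer not allocated or already freed');
    Halt (1)
  end;

  { call before every dereference (in debug builds only) }
  procedure CheckPtr (P: PInteger);
  var
    I: Integer;
  begin
    for I := 1 to TrackedCount do
      if Tracked[I] = P then
        Exit;
    WriteLn ('CheckPtr: dereferencing an invalid or freed pointer');
    Halt (1)
  end;

  begin
    TrackedCount := 0;
    TrackedNew (P);
    CheckPtr (P);      { fine }
    P^ := 42;
    TrackedDispose (P);
    CheckPtr (P)       { aborts: P was already freed }
  end.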
But, if you don't like it - no problem, you are the boss :-)
mirsad
-- This message has been made up using recycled ideas and language constructs. No plant or animal has been injured in the process of making this message.
Mirsad Todorovac wrote:
Not really. Range checking will at best generate runtime errors, not correct the problems automatically. If Ariane's software had generated an error (even if it was an exception that could be handled), it's quite unlikely that it could have found the cause of the problem and corrected it (in real time, therefore fully automatically!).
Here I'd always leave the programmer the choice whether to compile in range checking or not. Still, in the Ariane example, getting a runtime error doesn't have to end execution; some errors may be recoverable.
Yes, but you have to code it yourself. And then it doesn't make that big a difference whether you write something like:
  procedure ErrorHandler;
  begin
    CorrectValues ...
  end;

  [...]

  InstallErrorHandler ...
  Foo := Bar;
or:
  if (Bar >= Minimum) and (Bar <= Maximum) then
    Foo := Bar
  else
    CorrectValues
In fact, the second one seems easier here, and does not require any non-standard features for error handling.
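To make the second variant concrete, here is a small self-contained sketch in standard Pascal -- Minimum, Maximum and the clamping behaviour of CorrectValues are placeholders of my own choosing:

  program RangeDemo;

  const
    Minimum = 0;
    Maximum = 100;

  var
    Foo, Bar: Integer;

  { recover from an out-of-range value by clamping it }
  procedure CorrectValues;
  begin
    if Bar < Minimum then
      Foo := Minimum
    else
      Foo := Maximum
  end;

  begin
    Bar := 200;   { out of range on purpose }
    if (Bar >= Minimum) and (Bar <= Maximum) then
      Foo := Bar
    else
      CorrectValues;
    WriteLn ('Foo = ', Foo)   { prints "Foo = 100" }
  end.

The point is that the recovery policy is ordinary code here, with no non-standard error-handling machinery involved.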
For example, life-support computers and similar applications can't just say "panic: freeing free inode: syncing file systems ..." (a common message on SunOS 4.1.x :-) ... ). But allowing an error to go undetected is not a good solution either.
Again, you'd need something more complex. Detecting the error (which could be automated) is the easier bit. Carrying on with reasonable behaviour in every possible case of error (possibly even multiple errors) is much harder.
Yes, but UCSD Pascal already had it back in 1987. If it can be switched on and off with a compiler directive, I can't see why not.
I didn't say it wasn't a useful feature.
For example, who wants to exclude floating-point checks to make the code run faster, only to get a matrix full of NaNs (not-a-number values) as a result?
Just as useful as a runtime-error abort in the middle. If you assume that the error occurs in the middle (statistically), and the check makes the program run twice as slowly, it takes the same total time either way (I know, a very rough estimate ;-).
With the editor example you have a point -- but the writer of the editor could disable runtime checks in the production binary.
As I said, a debugging tool ...
Frank
Frank Heckenbach wrote:
Mirsad Todorovac wrote:
... snip ...
With the editor example you have a point -- but the writer of the editor could disable runtime checks in the production binary.
As I said, a debugging tool ...
IMNSHO a bad attitude. I'm not sure whether it was Dijkstra, Hoare or Knuth who had some pithy comments on the practice of removing runtime checks -- comparing it to discarding life preservers when going to sea. At any rate, the decision is up to the developer, and thus it should be possible to be very specific as to the areas where the checks are removed.
CBFalconer wrote:
As I said, a debugging tool ...
IMNSHO a bad attitude. I'm not sure whether it was Dijkstra, Hoare or Knuth who had some pithy comments on the practice of removing runtime checks -- comparing it to discarding life preservers when going to sea.
I didn't say that one should remove the runtime checks. I was merely pointing out that they don't make things so much better, since for the normal user a runtime error message is not much better than some obscure misbehaviour, without a developer sitting next to him to debug the problem.
At any rate, the decision is up to the developer, and thus it should be possible to be very specific as to the areas where the checks are removed.
This is planned, and it will be the easiest part. Adding a compiler directive (of course {$R+/-}, for compatibility with BP and other compilers, since there are no official standards for it) is rather easy.
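For example, the checks could then be confined to selected spots in the usual BP style. A sketch -- that the directive can be toggled this locally is an assumption on my part:

  {$R+}   { range checking on by default }
  program CheckToggle;

  procedure Critical (Index: Integer);
  var
    A: array [1 .. 10] of Integer;
  begin
    A[Index] := 0   { checked }
  end;

  {$R-}   { ... but off for a hot inner loop }
  procedure HotLoop;
  var
    I: Integer;
    A: array [1 .. 10] of Integer;
  begin
    for I := 1 to 10 do
      A[I] := I   { not checked }
  end;
  {$R+}

  begin
    Critical (5);
    HotLoop
  end.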
Nice, but wrong (e.g.: i: -4 .. 4)!
Pointing out that the check has to be performed at index time.
That's what I meant.
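(For the record, a sketch of the kind of case presumably meant here -- a variable whose own subrange doesn't match the array's index range, so checking the variable's type alone is not enough:)

  program IndexCheck;

  var
    i: -4 .. 4;
    a: array [0 .. 4] of Integer;

  begin
    i := -2;    { fine for i's own subrange ... }
    a[i] := 0   { ... but invalid as an index into a, so the  }
                { check must be performed here, at index time }
  end.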
Again, this is the wrong perspective. The default should be to check things.
I think we're misunderstanding each other. I'm not arguing that range checking is a bad thing. It's just a matter of priorities. We can't do it all at once, and every bug and every missing feature is very urgent to someone. Of course, any contributions (code or money) might help to shift priorities ... :-)
Frank