On Thu, 15 Nov 2001, Frank Heckenbach wrote:
> > Maybe. But (Visual) Basic, C++ and FORTRAN also evolve, don't they?
> > For example, consider Visual Basic's "collections", which are basically
> > sets of members which can have various types (also mentioned in gpc
> > info's TODO list as ???).
>
> Err, where?
info -f gpc::Welcome::Note To Do::Planned features::Planned features:
Other types->item 9 in list.
[ ... ]
> > Anyway, range checking can be life saving. For example, remember the
> > Ariane launch failure, when a 16-bit counter that had been there for
> > years rolled over into negative numbers after something was sped up.
> > (At design time they said it could never exceed 16 bits.) So the unit
> > failed, after which the redundant unit was consulted, only to produce
> > the same result (of course, using the same software and the same
> > 16-bit counter). The rocket's computer then thought it had started to
> > fall and activated the self-destruct sequence, even though it was
> > climbing.
> >
> > This story makes me agree that extensive range checking, especially
> > with arrays, sets and (last but not least) implicit precision
> > conversions, is very important. That's also a weakness of the C
> > language which C++ hasn't eliminated (but maybe I'm slightly
> > off-topic?) ...
>
> Not really. Range checking will at best generate runtime errors, not
> correct the problems automatically. If Ariane's software had
> generated an error (even if it was an exception that could be
> handled), it's quite unlikely that it could've found the cause of
> the problem and corrected it (in real time, therefore fully
> automatically!).
Here I'd always leave it up to the programmer whether to compile in
range checking or not. Still, in the Ariane example, even if you get a
runtime error, it doesn't have to end execution; some errors may be
recoverable. For example, life-support computers and similar
applications can't just say "panic: freeing free inode: syncing file
systems ..." (a common message on SunOS 4.1.x :-) ... ). But allowing
the error to go undetected isn't a good solution either.
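The recoverable-error idea can be sketched like this (in Python for
brevity rather than Pascal; all names here are illustrative, not from
any real flight software):

```python
def to_int16_checked(raw: int) -> int:
    """Convert a raw reading to a 16-bit signed counter, with a range check.

    Raises OverflowError instead of silently wrapping around, so the
    caller gets to decide whether the error is recoverable.
    """
    if not -32768 <= raw <= 32767:
        raise OverflowError(f"value {raw} does not fit in 16 bits")
    return raw

def read_with_fallback(raw: int, last_good: int) -> int:
    # A life-support-style caller: recover by keeping the last good
    # value instead of aborting (or, worse, self-destructing).
    try:
        return to_int16_checked(raw)
    except OverflowError:
        return last_good

print(read_with_fallback(30000, 0))   # in range: prints 30000
print(read_with_fallback(40000, 0))   # out of range: falls back, prints 0
```

The point is that the check turns silent wrap-around into an explicit,
catchable event; whether to abort, log, or fall back is then a policy
decision of the caller.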
> Range checking is mostly a debugging aid that will tell the
> programmer about possible problems earlier and ease debugging.
> For an end-user application, it might even have negative impacts --
> think of an editor or something which suddenly aborts with a runtime
> error (while without a check it might behave wrongly but may just
> keep working long enough so the user can save his changes). Of
> course, the problem can be alleviated by catching runtime errors
> (like I do, e.g., in PENG), which GPC allows but is certainly not
> standard.
On the other hand, by leaving range checking out we could run very
expensive computations in vain and never be sure of the results. I got
a feel for that while trying to design some of the recent tests, for
example - sometimes there's just nothing firm and proven to hold on
to.
For example, in 1995 the Pentium had a floating-point division bug.
Then they tried to cover it up and said it doesn't occur too often
(incredible). Yet, was anybody able to run tests dividing every
combination of two 80-bit floating-point numbers to see whether the
result is correct to the last bit?
This would require 2^80 * 2^80 = 2^160 ~ 1.461*10^48 tests. At 1
Gflop/s, this would take approximately 4.6*10^31 years ... much, much
longer than the age of the Universe.
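The back-of-the-envelope figure is easy to check:

```python
# Rough cost of exhaustively testing division of all pairs of 80-bit
# operands (2^80 values each side), at one test per flop on a
# 1 Gflop/s machine.
tests = 2**80 * 2**80                    # = 2^160, about 1.461e48
seconds = tests / 1e9                    # 1 Gflop/s
years = seconds / (365.25 * 24 * 3600)   # Julian year in seconds

print(f"{tests:.3e} tests, about {years:.1e} years")
```

Which confirms the order of magnitude: roughly 4.6*10^31 years.
Exhaustive testing is hopeless here; checks built into the computation
are the only practical safeguard.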
> So what I want to say is just that I don't see range-checks as the
> panacea that some of you seem to. They're a debugging aid, sure, but
> I've found other debugging aids (e.g., GPC's warnings about using
> uninitialized variables) at least as useful.
Yes, but UCSD Pascal already had it back in 1987. If it can be
switched on and off with a compiler directive, I can't see why not.
For example, who wants to exclude floating-point checks to make the
code run faster, only to get a matrix full of NaNs (not-a-number) as a
result?
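The NaN scenario is easy to demonstrate (a small Python sketch; the
same silent propagation happens in any IEEE-style float arithmetic):

```python
import math

# One invalid operation produces a NaN, and without a check it
# silently propagates through everything it touches.
nan = float("inf") - float("inf")     # invalid operation -> NaN
matrix = [[1.0, 2.0], [3.0, 4.0]]
poisoned = [[cell + nan for cell in row] for row in matrix]

print(all(math.isnan(cell) for row in poisoned for cell in row))  # True

# With a check at the point of the invalid operation, the problem is
# caught immediately instead of surfacing much later as a whole
# matrix of NaNs:
def checked(x: float) -> float:
    if math.isnan(x):
        raise ValueError("NaN produced in computation")
    return x
```

By the time the NaNs are visible in the output, the information about
*where* the invalid operation happened is long gone - exactly what a
runtime check would have preserved.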
With the editor example you have a point - but the writer of the
editor could disable runtime checks in the production binary. On the
other hand, in a complex calculation with no range checking, the
results become useless at the slightest error ...
To range checking I would especially add pointer checking. I think
lots of wasted hours could be avoided if we tested whether a pointer
points at a regularly allocated part of memory or at a freed block.
And this could be implemented very easily. Of course, it shouldn't run
in a production binary, since checking every pointer for validity
before dereferencing would be tremendously slow.
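One way such a debugging aid could work, sketched in Python (the class
and method names are made up for illustration; a real version would
hook the runtime's New/Dispose and pointer dereferences):

```python
# Sketch of checked pointers: the allocator keeps a table of live
# blocks, and every dereference is first looked up in that table, so
# use of a dangling or never-allocated pointer fails loudly.
class CheckedHeap:
    def __init__(self):
        self._live = {}          # handle -> stored value
        self._next = 1           # next handle to hand out

    def new(self, value):
        handle = self._next
        self._next += 1
        self._live[handle] = value
        return handle

    def dispose(self, handle):
        if handle not in self._live:
            raise RuntimeError(f"dispose of invalid pointer {handle}")
        del self._live[handle]

    def deref(self, handle):
        if handle not in self._live:
            raise RuntimeError(f"dereference of freed or invalid pointer {handle}")
        return self._live[handle]

heap = CheckedHeap()
p = heap.new(42)
print(heap.deref(p))             # prints 42
heap.dispose(p)
try:
    heap.deref(p)                # dangling pointer: caught at once
except RuntimeError as e:
    print(e)
```

The table lookup on every dereference is exactly the "tremendously
slow" part - which is why it belongs behind a compiler switch, on in
debug builds and off in production.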
But, if you don't like it - no problem, you are the boss :-)
mirsad
--
This message has been made up using recycled ideas and language constructs.
No plant or animal has been injured in the process of making this message.