(Resending this because the first time I replied to a wrong list address given in the mail I'm replying to (gpc@pnu.de). Sorry if anyone gets this twice.)
Maybe. But (Visual) Basic, C++ and FORTRAN also evolve, don't they? For example, consider Visual Basic's "collections", which are basically sets of members that can have various types (also mentioned in the gpc info's TODO list as ???).
Err, where?
Besides, sparse sets wouldn't put too hard a burden on the compiler itself, since 95% of the stuff required to make them work belongs in the RTS library anyway, doesn't it?
Besides, it can be done without changing the compiler or the language interface, just by making the RTS functions more sophisticated.
Almost. In the best case, the compiler would only have to call different library routines, depending on the kind of sets involved.
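For illustration, here's a rough sketch of what such an RTS-side "sparse" set could look like. The names (TSparseSet, SparseInclude, SparseMember) are made up, and a real implementation would rather store sorted ranges, grow dynamically and support union/intersection etc., but it shows that most of the work can indeed live in library routines the compiler merely dispatches to:

  program SparseSetSketch;

  { Hypothetical sketch only: the type and routine names are made up.
    The idea is to store only the members actually used, instead of
    one bit per possible value. }

  const
    MaxElems = 1024;  { capacity of this toy representation }

  type
    TSparseSet = record
      Count: Integer;
      Elems: array [1 .. MaxElems] of LongInt
    end;

  procedure SparseInit (var S: TSparseSet);
  begin
    S.Count := 0
  end;

  function SparseMember (const S: TSparseSet; X: LongInt): Boolean;
  var
    I: Integer;
  begin
    SparseMember := False;
    for I := 1 to S.Count do
      if S.Elems[I] = X then
        SparseMember := True
  end;

  procedure SparseInclude (var S: TSparseSet; X: LongInt);
  begin
    if not SparseMember (S, X) and (S.Count < MaxElems) then
      begin
        Inc (S.Count);
        S.Elems[S.Count] := X
      end
  end;

  var
    S: TSparseSet;

  begin
    SparseInit (S);
    SparseInclude (S, 40000);  { far beyond 256 elements, no 40000-bit bitmap needed }
    SparseInclude (S, 7);
    if SparseMember (S, 40000) then
      WriteLn ('40000 is in the set')
  end.

The compiler's part would then mostly be to pick such routines instead of the usual bitset ones when it sees a set with a huge or unknown base range.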
Yup, but for a Chinese programmer a character set like this ['0'..'9'] example may run well beyond your 256 elements per set (it takes about 3000 glyphs to read a daily newspaper). With i18n, any design based on the assumption that people will never want to use anything outside US-ASCII will eventually have to be rewritten IMHO, and that may be the very near future.
The same arguments apply to many applications of numeric sets as well. Having to recompile (with a different `--set-limit' each time the number changes) isn't the optimal way of working (and it's only possible if the maximum set bound is known at compile time at all).
Anyway, range checking can be life-saving. For example, remember the Ariane launch failure, when a 16-bit counter that had worked for years rolled over into negative territory when things were sped up. (At design time they said it could never exceed 16 bits.) So the unit failed, after which the redundant unit was consulted, only to give the same result (of course, it was running the same software with the same 16-bit counter). Then the rocket's computer thought it had started to fall and activated the self-destruct sequence, even though it was climbing.
This story makes me agree that extensive range checking, especially for arrays, sets and (last but not least) implicit precision conversions, is very important. That's also a weakness of the C language which C++ hasn't eliminated (but maybe I'm drifting slightly off-topic?) ...
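To make the rollover concrete, here's roughly the kind of situation a range check would catch. The numbers are made up, and it assumes a compiler that actually emits the check (which is exactly what's being discussed here):

  program RolloverSketch;

  { Sketch only: with range checking enabled, assigning a value
    outside the 16-bit subrange fails loudly instead of silently
    wrapping into a negative number. }

  type
    Counter16 = -32768 .. 32767;

  var
    Counter: Counter16;
    Measured: Integer;

  begin
    Measured := 30000;
    Counter := Measured;  { fine, within the subrange }
    WriteLn (Counter);

    Measured := 40000;    { "can never happen" -- until the rocket gets faster }
    Counter := Measured;  { with checks: runtime error here;
                            without: silent wrap-around to a negative value }
    WriteLn (Counter)
  end.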
Not really. Range-checking will at best generate runtime errors, not correct the problems automatically. If Ariane's software had generated an error (even if it was an exception that could be handled), it's quite unlikely that it could've found the cause of the problem and corrected it (in real-time, therefore fully automatically!).
Range-checking is mostly a debugging aid that tells the programmer about possible problems earlier and eases debugging. For an end-user application, it might even have negative impacts -- think of an editor or something that suddenly aborts with a runtime error (while without a check it might behave wrongly but may just keep working long enough so the user can save his changes). Of course, the problem can be alleviated by catching runtime errors (like I do, e.g., in PENG), which GPC allows but which is certainly not standard.
So what I want to say is just that I don't see range-checks as the panacea that some of you seem to. They're a debugging aid, sure, but I've found other debugging aids (e.g., GPC's warnings about using uninitialized variables) at least as useful.
Frank