Adriaan van Os wrote:
Frank Heckenbach wrote:
ISO Pascal wants an error (though, in principle, I think we could use NaN and document the error as "not detected" -- we might do this sometime, perhaps optionally). For now, I'm making it runtime errors. (emil27*.pas)
Special cases for the pow function in IEEE 754 (see e.g. Apple Numerics Manual, second edition, page 64, or the PowerPC Numerics manual available from <http://developer.apple.com/documentation/mac/PPCNumerics/PPCNumerics-2.html>).
Operation Result Exceptions raised
[ List suppressed ]
It would be a good thing if gpc provided full support of the IEEE 754 Standard.
Unfortunately (for numerical applications, at least), many compilers either provide no support at all, or provide only partial support. The latter is often even worse than the former.
(Apple has indeed been a loyal supporter of the standard from the very beginning.)
The need for some features in the IEEE 754 standard is less obvious to the uninitiated. Good articles in this respect are
Kahan, W. Lecture Notes on the Status of IEEE Standard 754 for Binary Floating-Point Arithmetic. May 1996. http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF
and
Kahan, W. Why do we need a floating-point arithmetic standard? Unpublished note, February 1981. http://www.cs.berkeley.edu/~dbindel/class/cs279/why-ieee.pdf
William Kahan is the intellectual father of the standard.
Another important reference is (the title says it all):
Goldberg, D. ``What every computer scientist should know about floating-point arithmetic'', Appendix D in Numerical Computation Guide. Sun Microsystems, Revision A, May 2000; originally published in ACM Computing Surveys (March 1991). http://docs.sun.com/source/806-3568/ncg_goldberg.html
The following (short) book is a good introduction to the use of IEEE 754:
Overton, M. L. Numerical Computing with IEEE Floating Point Arithmetic. SIAM, 2001.
For those with less time, I recommend an article I recently wrote:
Gyula Horvath, Tom Verhoeff. ``Numerical Difficulties in Pre-University Informatics Education and Competitions'', Informatics in Education, Vol. 2, Number 1, pp.21-38. http://www.win.tue.nl/~wstomv/publications/INFE012.pdf
Here is one simple example contained in that article, to explain the need for avoiding unnecessary exception raising.
Suppose you want to write a program fragment to calculate the effective resistance of two (and later more than two) parallel resistors, R_1 and R_2, both non-negative real numbers (in Ohms). The basic formula is
                 1
    R_Eff = -----------      (i.e. the "harmonic" mean of R_1 and R_2)
             1     1
            --- + ---
            R_1   R_2
Of course, it fails (under normal math rules) when R_1 or R_2 (or both) is zero. However, when either (or both) is zero, the effective resistance is still well-defined, viz. zero. A slightly better formula may seem to be
             R_1 * R_2
    R_Eff = -----------
             R_1 + R_2
because it also "works" when either of R_1 and R_2 is zero. However, there are two problems with that formula: (1) when both are zero, it gives 0/0, and (2) it does not generalize easily to more than two resistors.
Now, under the IEEE 754 Standard, the first formula works perfectly well. Why? Because the standard includes signed infinities and NaN (not-a-number) that are well-behaved:
    1 / +0   = +inf    (zero is also signed!)
    a + inf  = inf
    1 / +inf = +0
    0 / 0    = NaN
    0 * +inf = NaN
    NaN <any-operator> a = NaN
With these rules, the first formula works correctly under all circumstances, without requiring any case analysis in the program. If division by zero raised an exception instead, you would need to catch it in the program, and you would end up with a clumsy program.
The second formula fails under these rules for R_1 = R_2 = 0, because it involves 0 / 0.
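To make this concrete, here is a minimal Pascal sketch of the first formula. It assumes that the target's arithmetic follows the IEEE 754 rules quoted above (1 / +0 = +inf, a + inf = inf, 1 / +inf = +0) and that division by zero does not trap; under plain ISO Pascal semantics the division by a zero resistance would instead be an error.

    program ParallelResistors;

    { Minimal sketch, assuming IEEE 754 arithmetic on the target
      and non-trapping division by zero. }

    function EffectiveResistance (R1, R2: Real): Real;
    begin
      { No case analysis needed: a zero resistance gives an infinite
        conductance, which dominates the sum, and its reciprocal is 0. }
      EffectiveResistance := 1.0 / (1.0 / R1 + 1.0 / R2)
    end;

    begin
      writeln (EffectiveResistance (100.0, 100.0) : 1 : 3);  { 50.000 }
      writeln (EffectiveResistance (100.0, 0.0) : 1 : 3)     { 0.000 under IEEE }
    end.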
In (numerical) practice, there are even "worse" examples (such as rational functions with multiple "false" poles; see Kahan's articles), where a case analysis would be horrible compared to the straightforward approach via IEEE 754.
All in all, the committee that developed the IEEE 754 Standard thought hard about the features, and they came up with a carefully balanced set. Partial implementation is not recommended, because it destroys some important properties.
Tom
Tom Verhoeff wrote:
It would be a good thing if gpc provided full support of the IEEE 754 Standard.
Unfortunately (for numerical applications, at least), many compilers either provide no support at all, or provide only partial support. The latter is often even worse than the former.
<long IEEE advertisement snipped>
I have read some texts praising IEEE, but frankly, I am unimpressed. I find some properties of IEEE (denormals) harmful -- if denormal support is active, I really cannot rely on hardware detection of overflow/underflow (I can lose accuracy without any indication).
AFAIK, from the i387 on, Intel FPUs comply with the letter (and probably also the spirit) of the standard -- but if one wants to use the IEEE features, brain damage shows up quickly. Namely, instead of providing separate instructions for the different modes of operation, the processor has mode bits (I think that is what the standard encourages), and if one wants to mix modes, one has to change the processor status word like crazy (costly even on the newest processors, and it really kills performance on the Pentium).
The resistor example is cute, but when a number comes from a calculation we really cannot assume that it will be non-negative -- if it can be zero, then typically it can be of either sign. Yes, Kahan gives examples where IEEE features guarantee the sign, but IMHO checking that the result cannot be negative is more complicated than a full error analysis plus adding a positive bias which compensates for the error. Note also that IEEE division is significantly more expensive than non-IEEE division (for non-IEEE division one can use a few steps of Newton approximation, but IEEE requires an extra checking/correction step or a slow serial algorithm).
Trying to fully support IEEE in the compiler is really damaging to performance:

    if x = x then
      writeln ('Normal number')
    else
      writeln ('IEEE creature');
cannot be simplified -- and such code does appear during optimization of real programs; programmers then cannot understand why the compiler is not making the "obvious" simplification. And there are more examples like this.
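For illustration, here is a small sketch of why the test cannot be folded away (assuming the platform quietly produces a NaN for 0.0 / 0.0 instead of raising a runtime error; GPC itself makes no such promise): a NaN compares unequal even to itself, so the else branch is genuinely reachable.

    program NaNTest;
    var
      x: Real;
    begin
      x := 0.0;
      x := x / x;   { NaN under IEEE; an error under ISO Pascal rules }
      if x = x then
        writeln ('Normal number')
      else
        writeln ('IEEE creature')   { taken when x is NaN }
    end.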
A lot of folks doing number crunching say that they do not care about IEEE, but they do care about speed.
For me, the traditional Pascal formulation -- "the real type is a finite approximation to the true real numbers" (with the understanding that I will get _some_ result within specified error bounds) -- is quite reasonable. IEEE makes floating-point computation exact, but according to really complicated rules. I think that trying to predict results using the IEEE rules is too complicated for most folks, so they really make no use of the exactness. But exactness costs a lot.
Also, the Pascal standard leaves real computation "implementation defined", so relying on IEEE is non-portable. So I would limit the effort towards IEEE -- I think it is better to produce reasonable results quickly than to try to follow IEEE in all details.
Waldek Hebisch wrote:
Tom Verhoeff wrote:
It would be a good thing if gpc provided full support of the IEEE 754 Standard.
Unfortunately (for numerical applications, at least), many compilers either provide no support at all, or provide only partial support. The latter is often even worse than the former.
For the record, GPC does not claim (and never has AFAIK) to support IEEE 754, wholly or partially. It aims to support standard Pascal arithmetic, sometimes a few Borland oddities, and to a large extent just uses the arithmetic present on the target machine. These things might happen to overlap with IEEE, more or less.
In particular, if you really aim for an IEEE compatible GPC, you must decide what to do about platforms with non-IEEE arithmetics. (I don't currently know which platforms are or aren't IEEE compatible.) Do you want to ignore them, or provide an IEEE emulation?
Another thing to do would be to compare IEEE and standard Pascal arithmetics for areas of conflict. Some issues, such as the original topic here, will be errors which the standard allows to ignore, so with some bending of the standard, it would be ok to return NaN etc. (optionally, of course). But there may be more serious conflicts, and before starting any work in this area, it would be useful to know whether they exist, and if so, where they are.
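As a concrete (hypothetical) illustration of such a case, consider sqrt of a negative argument: ISO Pascal makes this an error (which an implementation may document as not detected), whereas under IEEE 754 it quietly returns NaN and merely raises the "invalid" flag. Any optional NaN-returning mode would bend the standard exactly as described above; the option sketched in the comment is an assumption, not an existing GPC switch.

    program ConflictExample;
    var
      x: Real;
    begin
      x := -1.0;
      { ISO Pascal: an error, normally reported at runtime.
        IEEE 754: sqrt (-1.0) returns a quiet NaN and sets the
        "invalid" flag.  A hypothetical option could make GPC
        return NaN here instead of stopping with a runtime error. }
      writeln (sqrt (x))
    end.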
(You may now raise your hand to volunteer. :-)
<long IEEE advertisement snipped>
I have read some texts praising IEEE, but frankly, I am unimpressed. I find some properties of IEEE (denormals) harmful -- if denormal support is active, I really cannot rely on hardware detection of overflow/underflow (I can lose accuracy without any indication).
I share some of your doubts (perhaps an issue of mathematicians vs. number crunchers ;-). OTOH, I've briefly looked at Kahan's paper, and I see that the state of things before the standard (ok, the paper is a bit old) was not exactly satisfying ...
I haven't read all your references in detail; I may do so when time permits ...
AFAIK, from the i387 on, Intel FPUs comply with the letter (and probably also the spirit) of the standard -- but if one wants to use the IEEE features, brain damage shows up quickly. Namely, instead of providing separate instructions for the different modes of operation, the processor has mode bits (I think that is what the standard encourages), and if one wants to mix modes, one has to change the processor status word like crazy (costly even on the newest processors, and it really kills performance on the Pentium).
I also don't like this, but that's not a decisive argument for me. What's more important for me is whether we can access the modes in a portable way. (This is mostly a backend question. I haven't looked at it yet.)
The resistor example is cute, but when a number comes from a calculation we really cannot assume that it will be non-negative -- if it can be zero, then typically it can be of either sign.
I was also wondering how practically relevant this example is. Zero resistance doesn't (in practice) usually arise from nontrivial computations, and the resistance of a superconductor in parallel with a normal resistor also seems like a rather uncommon thing to compute AFAIK.
In summary, I'm not against IEEE support, but the main questions for me are:
- How many conflicts with other supported features are there? (IOW, how many extra options may we need?)
- Will it be done in a "complete" way (rather than "works for me, fails to even compile for everyone else" ;-)?
And mainly:
- Who will do the work?
Frank
Tom Verhoeff wrote:
... snip ...
For those with less time, I recommend an article I recently wrote:
Gyula Horvath, Tom Verhoeff. ``Numerical Difficulties in Pre-University Informatics Education and Competitions'', Informatics in Education, Vol. 2, Number 1, pp. 21-38. http://www.win.tue.nl/~wstomv/publications/INFE012.pdf
The response to that is "403 - forbidden".