Mirsad Todorovac wrote:
To be realistic, Frank has made it very clear so far that excess precision doesn't make much sense if, e.g., Sin(x) is calculated in (only) double precision, and depends on the quality of the implementation in the system's C library -- otherwise we're depending on the GNU GMP library or something like that.
I think if you want excess precision, you either have to implement it all yourself (including Sin and all), or switch to GMP completely (which, as I mentioned, is not exactly memory-saving). Converting to GMP just for an operation seems clumsy and inefficient. But if you use GMP types for high precision, self-made types (memory-saving) for very low precision, plus the standard types, then it really becomes a nightmare, I guess ...
Or, we'd have to instantiate a compiled math library for every given (exponent, mantissa) combination and requested range/precision.
So, having all these limitations in mind, customized Reals could serve the same purpose as customized Integers -
And while we're at it, I'm not even sure these are worth the effort. They can be useful for backward compatibility with some fixed-layout bit formats, but I don't use them much otherwise (with the exception of things like size = 8, 16 or 32, which do basically the same as SELECTED_REAL_KIND, or things like uint32_t in C, i.e. just choose one of the available types).
yet we'd better estimate cost/benefit in this case. If it's a month of a compiler writer's work, then it's expensive; and Frank is right when he says that as much as possible should be done on the user level.
If you really want to do it all in the compiler, then a month might be short actually.
In a Pascal unit implementation (probably using schemata), we just might not have to limit anything, OTOH. But the price is lost efficiency. (Just like with sets: to test one bit (a IN setA), we have to construct an entire function call.)
The solution I prefer is to store routine bodies in GPI files so they can be inlined. That's non-trivial currently, but it's a more general solution, so in the end it will be better than a special hand-crafted set implementation in the compiler(*), a special hand-crafted customized real implementation in the compiler, etc.
(*) which GPC once had, but Juki who wrote it left the compiler team, and nobody else understood the code or was able to fix the bugs, so we switched back to a Pascal based solution.
But while we're at it, we might as well think about whether, how, and how far something can/should be implemented at all before we put it on the list, so the wish list remains (or perhaps gets ;-) a little more realistic.
IMHO strivings should be 100x of what we can achieve, and what is planned only 10x that much ;-)
I wasn't so much talking about what we can achieve (the list might already contain more than that), but what we should actually try to put in the compiler. I don't want to repeat myself too often, but I really prefer powerful and general mechanisms in the compiler (such as GPI inlines, user-defined conversions) to a lot of complex special-case features which will inevitably increase the complexity and decrease the maintainability of the code considerably.
So generally (this also applies to other features that may be discussed in the future), I'd like to raise the question of how much of it *really* has to be in the compiler, and how much can be done on the Pascal side.
In fact, in an ideal world, all the hassle of standard arithmetic with modifications of standard types would have been taken care of by the backend.
Now this might be an argument for putting this in the compiler, if the backend was able to do better optimizations on it. (The same applies, BTW, to strings and sets; we currently have a few problems since we convert them to backend-representable expressions, often RTS calls, rather early, so it's hard to optimize them later.)
If that's your goal, well, you'll have to talk to the backend-people, anyway, and convince them. If you succeed, we can, of course, add the necessary interfaces in the frontend, but frankly I doubt it.
I don't know what the experiences are with the implementation of Cardinal, Word and Integer bitfields;
I don't know very much either, since most of it is done in the backend. Only packed arrays are handled in the frontend (since the backend doesn't support them yet), and that's evil enough. We have 64 lines for reading them (build_pascal_packed_array_ref), 100 lines for writing them (within build_modify_expr), various special cases for `BitSizeOf', I/O and other things, and two fundamental problems (initializers don't work and the current implementation fails badly on some platforms), both of which can perhaps be solved with quite a bit of extra code. Bottom line: Doing this in the frontend is a bad idea. I really hope the backend will be able to handle this in the future (currently it supports bitfields with constant offsets as in packed records, and generalizing it to variable offsets should be easier than the mess we have to do in the frontend).
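For reference, the kind of code that exercises those frontend paths is as simple as this (a minimal sketch; `BitSizeOf' is the GPC extension mentioned above):

```pascal
program PackedDemo;

type
  { 16 one-bit elements; the backend can't represent this yet,
    so the frontend has to synthesize the masking/shifting code }
  Flags = packed array [0 .. 15] of Boolean;

var
  f: Flags;
  i: Integer;

begin
  for i := 0 to 15 do
    f[i] := Odd (i);          { writing: the code in build_modify_expr }
  if f[3] then                { reading: build_pascal_packed_array_ref }
    WriteLn ('bit 3 is set');
  WriteLn (BitSizeOf (f[3]))  { one of the BitSizeOf special cases;
                                should report 1 bit per element }
end.
```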
Of course, it would be very clumsy to instantiate all the arithmetic separately for each new mantissa or exponent size, no matter how standard the calculation routines may be. And worse, the resulting arithmetic could be slower than that generated by promotion to the nearest higher-precision standard IEEE float.
I can't tell. I'd have to know more details about the hand-written arithmetic on the one side, and the (also hand-written) conversions on the other. Since the latter are two operations (plus the IEEE operation), I suppose it may be faster only for more complex operations like trigonometric functions, if they use specially optimized hardware ...
I'd be very encouraged if it's not rejected that easily, and for good. Perhaps in 5 years it would take 50 lines in the frontend to implement it, at little programming and debugging cost.
If the backend does all the real work (see above), we might be able to get by with 50 lines. But already the basic declarations in the compiler, plus syntax and such "administrative" stuff might exceed that.
So in fact I suggest you try to write a Pascal unit for such a type that you see a need for, with all those operators etc., and find out what can't be done currently.
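Just to make the suggestion concrete, a unit for a made-up low-precision type might start like this (a sketch only: I've used a trivial fixed-point representation instead of a real (exponent, mantissa) layout, plus GPC's operator definitions, and all names are invented):

```pascal
unit FixedReal;

interface

type
  { hypothetical "customized real": the value is Raw / Scale }
  Fixed = record
    Raw: Integer
  end;

const
  Scale = 10000;  { 4 decimal digits after the point }

function RealToFixed (x: Real): Fixed;
function FixedToReal (f: Fixed): Real;

operator + (a, b: Fixed) c: Fixed;
operator * (a, b: Fixed) c: Fixed;

implementation

function RealToFixed (x: Real): Fixed;
var
  r: Fixed;
begin
  r.Raw := Round (x * Scale);
  RealToFixed := r
end;

function FixedToReal (f: Fixed): Real;
begin
  FixedToReal := f.Raw / Scale
end;

operator + (a, b: Fixed) c: Fixed;
begin
  c.Raw := a.Raw + b.Raw
end;

operator * (a, b: Fixed) c: Fixed;
begin
  { beware: the intermediate product can overflow for large values }
  c.Raw := (a.Raw * b.Raw) div Scale
end;

end.
```

With such a unit, `a + b` works for two Fixed operands, but anything like `Sin (a)`, or mixing Fixed with Real, still needs explicit FixedToReal/RealToFixed calls -- which leads straight to the automatic-promotion issue.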
OK; then I'll add it to my TODO list. It appears at first glance that automatic promotions are the first thing that's missing -- the compiler might never know that a newly defined type is even numeric, let alone real ...
Yes, the automatic conversions seem to be the main problem. That's true whether full customized reals will be implemented in the compiler, or only user-defined conversions. (There are many places that check, e.g. if some value is numeric, and they'd all have to be changed to take possible user-defined conversions into account, possibly several ones in a row. Not to mention possible ambiguities ...)
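To make the problem concrete (a contrived example; MyReal is an invented user-defined type):

```pascal
program ConvDemo;

type
  MyReal = record
    Raw: Integer  { some custom representation }
  end;

var
  x: MyReal;
  y: Real;

begin
  x.Raw := 42;
  { Both of the following are rejected: the compiler's "is this value
    numeric?" checks know nothing about MyReal, and there is no
    user-defined conversion MyReal -> Real it could insert here. }
  { y := Sin (x); }
  { y := x + 1.5; }
  y := 0.0;  { placeholder so the program compiles }
  WriteLn (y)
end.
```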
Frank