Hi,
Some of my beginners found that Delay works properly only for timings of 12 ms and more. At 11 ms and below there is no delay at all, but the C delay() does work.
I can't find it in the bug list.
Olivier Pécheux wrote:
Hi,
Some of my beginners found that Delay works properly only for timings of 12 ms and more. At 11 ms and below there is no delay at all, but the C delay() does work.
I can't find it in the bug list.
On PCs, time is usually advanced by a hardware timer that ticks every 1/60th of a second. So this does not seem unreasonable. Are you sure that the C routine you mention really measures microseconds, or does it just report a number in microseconds which is in fact derived from the timer ticks? Which C function do you actually use to measure the delay? Or is the delay perhaps obtained with some programming loop?
Maurice
Maurice Lombardi wrote:
Olivier Pécheux wrote:
Hi,
Some of my beginners found that Delay works properly only for timings of 12 ms and more. At 11 ms and below there is no delay at all, but the C delay() does work.
I can't find it in the bug list.
On PCs, time is usually advanced by a hardware timer that ticks every 1/60th of a second.
Almost. It's about 1/18.2 of a second, and that's only under Dos. Linux, e.g., uses 1/100 s on PCs.
To get an integer value for CLOCKS_PER_SEC, DJGPP internally multiplies the ticks by 5, so the clock unit is 1/91 s. Therefore, every delay shorter than that is considered equal to 0 by usleep() (which is used internally by Delay).
So this does not seem unreasonable. Are you sure that the C routine you mention really measures microseconds, or does it just report a number in microseconds which is in fact derived from the timer ticks? Which C function do you actually use to measure the delay?
I don't know how Olivier measured it, but if I call Delay(11) several times in a loop, I can immediately see that it doesn't wait.
Well, on Dos, with such a poor time resolution, it's difficult to get satisfactory short delays. Maybe the only way to do it (besides messing with the hardware, which will surely conflict with a lot of other programs etc.) is busy waiting, as Borland did in BP. That approach has some serious problems, like eating up CPU time unnecessarily, and getting the delay wrong for the whole program run if the system happens to be busy during calibration (and I'm not even talking about the famous bugs). The first problem could be alleviated by doing busy waiting only for short delays, where the waste doesn't matter so much, and using usleep() (which yields the processor while waiting) for longer delays.
I'm not sure if it's worth the extra effort -- otherwise it might just boil down to: If you want a good delay, use an OS that supports it without any tricks...
In any case, I can't write such code myself because I have studied Borland's Delay code far too well in the past, so there might be a slight copyright problem if I wrote something similar for GPC. Maybe someone else would like to write the code -- not based on Borland's code, though the idea may be the same; if you want, I can describe the idea in words, and you can translate it into code (a clean re-implementation).
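One possible shape for such a clean re-implementation, sketched here purely to illustrate the idea (it leans on DJGPP's uclock()/UCLOCKS_PER_SEC for the short busy-wait, which has caveats of its own under Windows, and is not derived from Borland's code; hybrid_delay is just an illustrative name):

#include <time.h>
#include <unistd.h>

/* Sketch only: yield via usleep() for long delays, busy-wait on the
   high-resolution uclock() just for the short ones. */
static void hybrid_delay (unsigned msec)
{
  if (msec >= 55)                       /* longer than one timer tick */
    usleep (msec * 1000UL);
  else if (msec > 0)
    {
      uclock_t start = uclock ();
      uclock_t ticks = (uclock_t) msec * UCLOCKS_PER_SEC / 1000;
      while (uclock () - start < ticks)
        ;                               /* burn CPU -- the known drawback */
    }
}

This avoids any startup calibration, which sidesteps the "busy system during calibration" problem, at the price of relying on uclock() being usable on the target system.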
Frank
Maurice Lombardi wrote:
Olivier Pécheux wrote:
Hi,
Some of my beginners found that Delay works properly only for timings of 12 ms and more. At 11 ms and below there is no delay at all, but the C delay() does work.
I can't find it in the bug list.
On PCs, time is usually advanced by a hardware timer that ticks every 1/60th of a second. So this does not seem unreasonable. Are you sure that the C routine you mention really measures microseconds, or does it just report a number in microseconds which is in fact derived from the timer ticks? Which C function do you actually use to measure the delay? Or is the delay perhaps obtained with some programming loop?
Maurice
Delay is a loop, I think. It's well known in BP because of the famous runtime error 200 on 200 MHz PCs.
I don't mind whether Delay is really one ms or not, but I want some delay, and 1/18 s is too coarse.
I forgot to say that I work with Win95 and gpc2952b.zip, found on agnes.dida.physik.uni-essen.de/home/maurice/
The test program is:
Program test_delay;

Uses Crt;

Procedure DelayC (Temps: Integer); AsmName 'delay';

Var
  N, I: Integer;

Begin
  For N := 15 Downto 1 Do
    For I := 1 To 20 Do
      Begin
        Write (N: 4);
        Delay (N);
        Delay (N)
      End
End.
If I use Delay, the program finishes quickly. If I replace it with DelayC (from the C library), it works.
Opie
Opie Pecheux wrote:
Delay is a loop, I think. It's well known in BP because of the famous runtime error 200 on 200 MHz PCs.
That's called busy waiting (as I mentioned in my other mail) and is generally a Bad Thing (tm). Even Dos programs often run in multitasking environments today (MS-Windows, Linux DosEmu, etc.), and burning CPU cycles when waiting is quite inefficient. Also, as I mentioned, calibrating a delay loop exactly is a difficult thing. Borland actually did quite a clever thing there, but the more sophisticated processors have become, the more inaccurate this method has become (I'm not talking about the bugs, but about the general issue of doing exact timing by a programming loop).
I don't mind whether Delay is really one ms or not, but I want some delay, and 1/18 s is too coarse.
I forgot to say that I work with Win95 and gpc2952b.zip, found on agnes.dida.physik.uni-essen.de/home/maurice/
The test program is:
Program test_delay;

Uses Crt;

Procedure DelayC (Temps: Integer); AsmName 'delay';

Var
  N, I: Integer;

Begin
  For N := 15 Downto 1 Do
    For I := 1 To 20 Do
      Begin
        Write (N: 4);
        Delay (N);
        Delay (N)
      End
End.
If I use Delay, the program finishes quickly. If I replace it with DelayC (from the C library), it works.
Interesting. :-)
This should make it possible to fix/work-around the problem in CRT...
What I'm wondering is, if DJGPP's libc contains a delay() function which does handle short delays well (and even seems to use microseconds internally AFAICS), why doesn't its usleep() function do the same thing?
Are there any issues with the interrupt call used in delay() (that I probably should be aware of when using it in CRT), or what is the reason? Anyone knows?
Frank
Frank Heckenbach wrote:
This should make it possible to fix/work-around the problem in CRT...
What I'm wondering is, if DJGPP's libc contains a delay() function which does handle short delays well (and even seems to use microseconds internally AFAICS), why doesn't its usleep() function do the same thing?
Are there any issues with the interrupt call used in delay() (that I probably should be aware of when using it in CRT), or what is the reason? Anyone knows?
From the libc sources (in djlsr203.zip), usleep() uses the 55 ms timer ticks, while delay() uses the BIOS interrupt Int 15h/AH=86h, described as:
Category: BIOS

INT 15 - BIOS - WAIT (AT,PS)
        AH = 86h
        CX:DX = interval in microseconds
Return: CF clear if successful (wait interval elapsed)
        CF set on error or AH=83h wait already in progress
            AH = status (see #00496)
Note:   the resolution of the wait period is 977 microseconds on many
        systems because many BIOSes use the 1/1024 second fast interrupt
        from the AT real-time clock chip which is available on INT 70;
        because newer BIOSes may have much more precise timers available,
        it is not possible to use this function accurately for very short
        delays unless the precise behavior of the BIOS is known (or found
        through testing)
SeeAlso: AH=41h,AH=83h,INT 1A/AX=FF01h,INT 70
Dunno why the difference. I'm sending a copy of this message to c.o.m.djgpp: somebody there should know.
For reference: The code in usleep.c is
#include <unistd.h>
#include <time.h>
#include <dpmi.h>

unsigned int
usleep(unsigned int _useconds)
{
  clock_t cl_time;
  clock_t start_time = clock();

  /* 977 * 1024 is about 1e6.  The funny logic keeps the math
     from overflowing for large _useconds */
  _useconds >>= 10;
  cl_time = _useconds * CLOCKS_PER_SEC / 977;

  while (1)
  {
    clock_t elapsed = clock() - start_time;
    if (elapsed >= cl_time)
      break;
    __dpmi_yield();
  }
  return 0;
}
The code in delay.c is
#include <dos.h>
#include <dpmi.h>

void
delay(unsigned msec)
{
  __dpmi_regs r;
  while (msec)
  {
    unsigned usec;
    unsigned msec_this = msec;
    if (msec_this > 4000)
      msec_this = 4000;
    usec = msec_this * 1000;
    r.h.ah = 0x86;
    r.x.cx = usec >> 16;
    r.x.dx = usec & 0xffff;
    __dpmi_int(0x15, &r);
    msec -= msec_this;
  }
}
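Incidentally, the integer arithmetic in usleep() above is exactly where the 11/12 ms threshold comes from. A small standalone check (taking CLOCKS_PER_SEC to be 91, the DJGPP value mentioned earlier in this thread) prints 0, 0, 1 and 1 clock ticks for 10, 11, 12 and 13 ms:

#include <stdio.h>

int main (void)
{
  /* Reproduce usleep()'s internal rounding with DJGPP's
     CLOCKS_PER_SEC of 91 (not portable to other systems). */
  const unsigned clocks_per_sec = 91;
  unsigned usec;
  for (usec = 10000; usec <= 13000; usec += 1000)
    printf ("usleep(%u) waits %u clock tick(s)\n",
            usec, (usec >> 10) * clocks_per_sec / 977);
  return 0;
}

So anything below 11264 microseconds rounds down to a zero-length pause, which matches what Olivier observed with Delay(11) versus Delay(12).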
Hope this helps
Maurice
Maurice Lombardi wrote:
Frank Heckenbach wrote:

The code in usleep.c is

  cl_time = _useconds * CLOCKS_PER_SEC / 977;
I should have included the explanations from usleep.txh:
@node usleep, process
@subheading Syntax
@example
#include <unistd.h>

unsigned usleep(unsigned usec);
@end example
@subheading Description
This function pauses the program for @var{usec} microseconds. Note that, since @code{usleep} calls @code{clock} internally, and the latter has a 55-msec granularity, any argument less than 55@dmn{msec} will result in a pause of random length between 0 and 55 msec. Any argument less than 11@dmn{msec} (more precisely, less than 11264 microseconds), will always result in zero-length pause (because @code{clock} multiplies the timer count by 5). @xref{clock}.
@subheading Return Value
The number of unslept microseconds (i.e. zero).
@subheading Portability
@portability !ansi, !posix
@subheading Example
@example
usleep(500000);
@end example
Hope this helps
Maurice
On Thu, 29 Mar 2001, Maurice Lombardi wrote:
What I'm wondering is, if DJGPP's libc contains a delay() function which does handle short delays well (and even seems to use microseconds internally AFAICS), why doesn't its usleep() function do the same thing?
If both functions used the same service, you couldn't have chosen one or the other, depending on your needs. Having both is better than only one.
Why is it a problem that they use different methods? The docs clearly say which service is used, so the information is available, isn't it?
Eli Zaretskii wrote:
On Thu, 29 Mar 2001, Maurice Lombardi wrote:
What I'm wondering is, if DJGPP's libc contains a delay() function which does handle short delays well (and even seems to use microseconds internally AFAICS), why doesn't its usleep() function do the same thing?
If both functions used the same service, you couldn't have chosen one or the other, depending on your needs. Having both is better than only one.
Why is it a problem that they use different methods? The docs clearly say which service is used, so the information is available, isn't it?
The question was raised by a user of GNU Pascal (gpc), who found a difference between delay() in C and Delay in gpc. gpc, being multi-platform and not limited to djgpp, has implemented its Delay by using usleep(), which, while being neither ANSI nor POSIX, is available on many platforms: its prototype is in unistd.h, which exists e.g. on linux. The djgpp C delay() function is a dosish nicety (prototype in dos.h) which exists nowhere else. It is unfortunate that djgpp implemented the more standard usleep() function with the less accurate delay method. But probably the gpc maintainers will cope with this ...

Maurice
On Thu, 29 Mar 2001, Maurice Lombardi wrote:
djgpp C delay() function is a dosish nicety (prototype in dos.h) which exists nowhere else.
That's not true: I find `delay' on some Unix boxes.
It is unfortunate that djgpp implemented the more standard usleep() function with the less accurate delay method.
`delay' is a compatibility function in DJGPP, and the reference for that compatibility is Borland. AFAIK, Borland's library uses Int 15h for its `delay' implementation.
Eli Zaretskii wrote:
On Thu, 29 Mar 2001, Maurice Lombardi wrote:
What I'm wondering is, if DJGPP's libc contains a delay() function which does handle short delays well (and even seems to use microseconds internally AFAICS), why doesn't its usleep() function do the same thing?
If both functions used the same service, you couldn't have chosen one or the other, depending on your needs. Having both is better than only one.
Why is it a problem that they use different methods? The docs clearly say which service is used, so the information is available, isn't it?
Well, usleep() is a common function of many systems (though not POSIX, I admit), so I think it might just be a good idea if it produced good results for short delays (which is one of its main uses).
In the case discussed here, the PDCurses library implements its napms() function by a call to usleep() (because it is multi-platform), and that's what the Delay procedure in GPC's CRT unit calls, so calling Delay with a small argument doesn't do anything, like Opie Pecheux reported.
To fix it, we'll have to make a conditional change for DJGPP to call delay() rather than usleep() in either PDCurses or CRT as well as in any other code that uses usleep(). Having DJGPP's usleep() do the right thing for short delays, in contrast, would only be one change in a single place...
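The first variant would mean something like the following at every affected call site (NAPMS is just a hypothetical wrapper name to show the shape of the change, not an existing macro):

#ifdef __DJGPP__
#include <dos.h>
#define NAPMS(ms) delay (ms)                /* fine-grained on DJGPP */
#else
#include <unistd.h>
#define NAPMS(ms) usleep ((ms) * 1000UL)    /* portable fallback */
#endif

repeated (or shared via a header) in PDCurses, CRT and anything else that sleeps via usleep() -- which is why fixing usleep() itself would be the smaller change.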
Frank
Eli Zaretskii wrote:
On Thu, 29 Mar 2001, Maurice Lombardi wrote:
What I'm wondering is, if DJGPP's libc contains a delay() function which does handle short delays well (and even seems to use microseconds internally AFAICS), why doesn't its usleep() function do the same thing?
If both functions used the same service, you couldn't have chosen one or the other, depending on your needs. Having both is better than only one.
So what's the advantage of the method used by usleep()? Is it more accurate for longer delays, or doesn't the delay() method work on all kinds of Dos / DPMI servers / whatever? Being no DJGPP expert, I'm wondering what we should call now in the GPC units.
From what I see in the docs, a problem with delay() is that "some operating systems that emulate DOS, such as OS/2 and Windows/NT, hang the DOS session when the @key{Pause} key is pressed during the call to @code{delay}."
Does this mean we shouldn't call usleep() and just accept that short delays don't work, or should we just call delay() because MS-Windows users are used to random system crashes and hangs, anyway? ;-)
Frank
Date: Sat, 31 Mar 2001 03:29:22 +0200
From: Frank Heckenbach <frank@g-n-u.de>
So what's the advantage of the method used by usleep()?
It has both advantages and disadvantages.
Is it more accurate for longer delays, or doesn't the delay() method work on all kinds of Dos / DPMI servers / whatever?
All of the above ;-)
The main advantage is that it uses the main system clock, not the one in CMOS which runs independently of the system. I also suspect that delay() might not work reliably on Windows.
Being no DJGPP expert, I'm wondering what we should call now in the GPC units.
I cannot advise here, since my Pascal knowledge is virtually non-existent. Do Pascal users really need sub-54ms resolution when they call these functions?
From what I see in the docs, a problem with delay() is that "some operating systems that emulate DOS, such as OS/2 and Windows/NT, hang the DOS session when the @key{Pause} key is pressed during the call to @code{delay}."
Yes, that's one of the more notorious problems. But it isn't the only one.
I actually like the Int 15h service very much; it does a wonderful job on DOS. Unfortunately, it is not very reliable on Windows.
Does this mean we shouldn't call usleep() and just accept that short delays don't work, or should we just call delay() because MS-Windows users are used to random system crashes and hangs, anyway? ;-)
You could use delay() on plain DOS and usleep() on Windows and OS/2.
Eli Zaretskii wrote:
Being no DJGPP expert, I'm wondering what we should call now in the GPC units.
I cannot advise here, since my Pascal knowledge is virtually non-existent. Do Pascal users really need sub-54ms resolution when they call these functions?
It doesn't have much to do with Pascal. The Delay procedure (= napms() in curses) simply has a declared millisecond resolution, so users will at least expect to get a delay > 0 when passing a value > 0 (though, coming from another Dos compiler, they can't really expect very precise delays)...
Does this mean we shouldn't call usleep() and just accept that short delays don't work, or should we just call delay() because MS-Windows users are used to random system crashes and hangs, anyway? ;-)
You could use delay() on plain DOS and usleep() on Windows and OS/2.
So, how to distinguish them? -- I suppose that's in the FAQ or something, so perhaps some other DJGPP user can tell me how to do it or send me some code.
Frank
Date: Mon, 23 Apr 2001 17:27:16 +0200
From: Frank Heckenbach <frank@g-n-u.de>
You could use delay() on plain DOS and usleep() on Windows and OS/2.
So, how to distinguish them? -- I suppose that's in the FAQ or something, so perhaps some other DJGPP user can tell me how to do it or send me some code.
Function 1600h of Int 2Fh returns info that can be used to see if you are running on Windows (and on what version of Windows). Ralf Brown's Interrupt List has all the details. Here's a working code fragment (from the Emacs distribution's dosfns.c file) that is used in the DJGPP port of Emacs to set the Lisp variable dos-windows-version:
  /* If we are running from DOS box on MS-Windows, get Windows version.  */
  dpmiregs.x.ax = 0x1600;       /* enhanced mode installation check */
  dpmiregs.x.ss = dpmiregs.x.sp = dpmiregs.x.flags = 0;
  _go32_dpmi_simulate_int (0x2f, &dpmiregs);
  /* We only support Windows-specific features when we run
     on Windows 9X or on Windows 3.X/enhanced mode.

     Int 2Fh/AX=1600h returns:

     AL = 00:  no Windows at all;
     AL = 01:  Windows/386 2.x;
     AL = 80h: Windows 3.x in mode other than enhanced;
     AL = FFh: Windows/386 2.x

     We also check AH > 0 (Windows 3.1 or later), in case AL
     tricks us.  */
  if (dpmiregs.h.al > 2 && dpmiregs.h.al != 0x80 && dpmiregs.h.al != 0xff
      && (dpmiregs.h.al > 3 || dpmiregs.h.ah > 0))
    {
      dos_windows_version = dpmiregs.x.ax;
      Vdos_windows_version =
        Fcons (make_number (dpmiregs.h.al), make_number (dpmiregs.h.ah));
Eli Zaretskii wrote:
Date: Mon, 23 Apr 2001 17:27:16 +0200
From: Frank Heckenbach <frank@g-n-u.de>
You could use delay() on plain DOS and usleep() on Windows and OS/2.
So, how to distinguish them? -- I suppose that's in the FAQ or something, so perhaps some other DJGPP user can tell me how to do it or send me some code.
Function 1600h of Int 2Fh returns info that can be used to see if you are running on Windows (and on what version of Windows). Ralf Brown's Interrupt List has all the details. Here's a working code fragment (from the Emacs distribution's dosfns.c file) that is used in the DJGPP port of Emacs to set the Lisp variable dos-windows-version:
  /* If we are running from DOS box on MS-Windows, get Windows version.  */
  dpmiregs.x.ax = 0x1600;       /* enhanced mode installation check */
  dpmiregs.x.ss = dpmiregs.x.sp = dpmiregs.x.flags = 0;
  _go32_dpmi_simulate_int (0x2f, &dpmiregs);
  /* We only support Windows-specific features when we run
     on Windows 9X or on Windows 3.X/enhanced mode.

     Int 2Fh/AX=1600h returns:

     AL = 00:  no Windows at all;
     AL = 01:  Windows/386 2.x;
     AL = 80h: Windows 3.x in mode other than enhanced;
     AL = FFh: Windows/386 2.x

     We also check AH > 0 (Windows 3.1 or later), in case AL
     tricks us.  */
  if (dpmiregs.h.al > 2 && dpmiregs.h.al != 0x80 && dpmiregs.h.al != 0xff
      && (dpmiregs.h.al > 3 || dpmiregs.h.ah > 0))
    {
      dos_windows_version = dpmiregs.x.ax;
      Vdos_windows_version =
        Fcons (make_number (dpmiregs.h.al), make_number (dpmiregs.h.ah));
Maurice (or someone else?), is your C good enough that you can make a usleep replacement that calls usleep or delay based on this distinction, and test it on plain Dos and under Windoze? (In the case of usleep(), if the value is > 0 and less than the minimum possible (12 ms or something), you might want to round it up to the minimum.)
If you can send me such a routine, I can just drop it in. Otherwise, testing is a little difficult for me.
Frank
Frank Heckenbach wrote:
Maurice (or someone else?), is your C good enough that you can make a usleep replacement that calls usleep or delay based on this distinction, and test it on plain Dos and under Windoze? (In the case of usleep(), if the value is > 0 and less than the minimum possible (12 ms or something), you might want to round it up to the minimum.)
The following works as expected on a W98 DOS box and on plain DOS:

#include <dpmi.h>
#include <unistd.h>
#include <dos.h>

unsigned usleep2 (unsigned musec)
{
  _go32_dpmi_registers dpmiregs;

  /* If we are running from DOS box on MS-Windows, get Windows version. */
  dpmiregs.x.ax = 0x1600;
  dpmiregs.x.ss = dpmiregs.x.sp = dpmiregs.x.flags = 0;
  _go32_dpmi_simulate_int (0x2f, &dpmiregs);
  /* Int 2Fh/AX=1600h returns:

     AL = 00:  no Windows at all;
     AL = 01:  Windows/386 2.x;
     AL = 80h: Windows 3.x in mode other than enhanced;
     AL = FFh: Windows/386 2.x

     We also check AH > 0 (Windows 3.1 or later), in case AL tricks us. */
  if (dpmiregs.h.al > 2 && dpmiregs.h.al != 0x80 && dpmiregs.h.al != 0xff
      && (dpmiregs.h.al > 3 || dpmiregs.h.ah > 0))
    {
      if (musec > 11263)
        usleep (musec);
      else
        usleep (11264);   /* round short delays up to the minimum */
    }
  else
    delay (musec / 1000);
  return 0;   /* usleep() returns the number of unslept microseconds */
}
Hope this helps
Maurice
Maurice Lombardi wrote:
Frank Heckenbach wrote:
Maurice (or someone else?), is your C good enough that you can make a usleep replacement that calls usleep or delay based on this distinction, and test it on plain Dos and under Windoze? (In the case of usleep(), if the value is > 0 and less than the minimum possible (12 ms or something), you might want to round it up to the minimum.)
The following works as expected on a W98 DOS box and on plain DOS:
[...]
OK, I'm putting it in, slightly changed so the Windoze detection is run only once.
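Something along these lines, that is (only a sketch of the caching; the actual code may differ in detail):

#include <dpmi.h>
#include <unistd.h>
#include <dos.h>

static int on_windows = -1;   /* -1 = not checked yet */

unsigned usleep2 (unsigned musec)
{
  if (on_windows < 0)
    {
      /* Do the Int 2Fh/AX=1600h check only on the first call. */
      _go32_dpmi_registers dpmiregs;
      dpmiregs.x.ax = 0x1600;
      dpmiregs.x.ss = dpmiregs.x.sp = dpmiregs.x.flags = 0;
      _go32_dpmi_simulate_int (0x2f, &dpmiregs);
      on_windows = (dpmiregs.h.al > 2 && dpmiregs.h.al != 0x80
                    && dpmiregs.h.al != 0xff
                    && (dpmiregs.h.al > 3 || dpmiregs.h.ah > 0));
    }
  if (on_windows)
    usleep (musec > 11263 ? musec : 11264);
  else
    delay (musec / 1000);
  return 0;
}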
It will be uploaded soon (20010429). When you get it, please check if it still works. (Also Olivier and others who are interested, of course.)
Frank
Frank Heckenbach wrote:
Eli Zaretskii wrote:
Does this mean we shouldn't call usleep() and just accept that short delays don't work, or should we just call delay() because MS-Windows users are used to random system crashes and hangs, anyway? ;-)
You could use delay() on plain DOS and usleep() on Windows and OS/2.
So, how to distinguish them? -- I suppose that's in the FAQ or something, so perhaps some other DJGPP user can tell me how to do it or send me some code.
Not simple: a DOS box is made to let djgpp programs think they are on DOS. The only dirty trick I can imagine is to check for the LFN API: it is present in a Windows DOS box and not in bare dos. There is a function _use_lfn in the djgpp libc to check that. I have checked that the following program:
program test;

function UseLFN (path: CString): Byte; AsmName '_use_lfn';

begin
  Writeln (UseLFN ('C:'));
end.
gives 1 on a W98 dos box and 0 when booting into bare dos (on the same machine). It checks for the LFN API on this drive (in fact assuming that all drives have it simultaneously) and does not depend on the setting of LFN=y or LFN=n in the environment. There will be problems for NT, however.

Hope this helps
Maurice
Maurice Lombardi wrote:
program test;

function UseLFN (path: CString): Byte; AsmName '_use_lfn';

begin
  Writeln (UseLFN ('C:'));
end.
This is a very, very bad way of detecting Windoze because it assumes LFN = Windoze. Ever thought that someone could implement a DOS LFN API on a non-Windoze platform? A DOS device driver (there is one in progress) or a new DOS could provide this, as could many DOS emulators of already-LFN-capable operating systems, such as Linux. The DOS 7.x LFN API does not restrict its use to VFAT drives -- even case-sensitive file systems (what a bloody stupid idea they were/are) are catered for in the DOS 7.x LFN API.
There will be problems for NT, however.
Not just NT (NT/2000 does not provide the DOS LFN API), my friend.
Hope this helps
No. :-)
Jay
Jason Burgon - Author of "Graphic Vision" GUI for DOS/DPMI
=== Free LFN capable Dos/WinDos replacement and ===
=== New Graphic Vision version 2.12 available from: ===
http://www.jayman.demon.co.uk
On Mon, 23 Apr 2001, Jason Burgon wrote:
program test;

function UseLFN (path: CString): Byte; AsmName '_use_lfn';

begin
  Writeln (UseLFN ('C:'));
end.
This is a very, very bad way of detecting Windoze because it assumes LFN = Windoze.
I agree that this is not a good idea. I sent a different suggestion, which I think is better, based on Int 2Fh function 1600h. Is that method okay? If not, why not?
There will be problems for NT, however.
Not just NT (NT/2000 does not provide the DOS LFN API), my friend.
That's not true: W2K _does_ include the LFN API support in its DOS box.