WilliamSolberg

asked on

Sleep 100 or 10 microseconds in Visual C++

Hi there,

    I am looking for a routine to have my program sleep for 10 or 100 microseconds without using full processor time. I saw some good topics already posted, but they use full processor time. The routine could be a timer or a sleep. It does not have to be accurate to the microsecond, but it should be very close.

Thanks,

Bill
AndyAinscow

You could use this in a loop (with PeekMessage to remove and process any pending messages):

INFO: Timer Resolution in Windows NT and Windows 2000

Q115232


--------------------------------------------------------------------------------
The information in this article applies to:

Microsoft Win32 Application Programming Interface (API), used with:
the operating system: Microsoft Windows NT, versions 3.1, 3.5, 4.0
the operating system: Microsoft Windows 2000

--------------------------------------------------------------------------------


SUMMARY
In Win32-based applications, the GetTickCount() timer resolution on Windows NT and Windows 2000 can be obtained by calling GetSystemTimeAdjustment() and checking the value returned in the lpTimeIncrement parameter. This value is expressed in 100-nanosecond units and should be divided by 10,000 to get a value in milliseconds.

NOTE: The measurements in milliseconds indicate the period of the interrupt, not the units of the returned value.

The GetTickCount() API should not be used for resolution-critical algorithms. Instead, QueryPerformanceCounter() and QueryPerformanceFrequency() should be used.

The Win32 API QueryPerformanceCounter() reads a high-resolution performance counter if the hardware provides one. On x86, the resolution is about 0.8 microseconds (0.0008 ms). You need to call QueryPerformanceFrequency() to get the frequency of the high-resolution performance counter.
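A minimal sketch of that approach, assuming a plain Win32 build (the function name WaitMicroseconds is mine): it spins on QueryPerformanceCounter() while pumping pending messages with PeekMessage(). Note that the CPU is not released, so Task Manager will show full load while waiting.

#include <windows.h>

// Busy-wait for roughly the requested number of microseconds, pumping
// any pending messages so the application stays responsive.
void WaitMicroseconds(LONGLONG microseconds)
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);   // counter ticks per second
    QueryPerformanceCounter(&start);

    // Convert the requested wait into performance-counter ticks.
    const LONGLONG ticksToWait = microseconds * freq.QuadPart / 1000000;

    MSG msg;
    do
    {
        // Remove and process any pending messages, as suggested above.
        while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        QueryPerformanceCounter(&now);
    } while (now.QuadPart - start.QuadPart < ticksToWait);
}

Be aware that dispatching a single message can itself take far longer than 10 microseconds, which ties in with the latency point below.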
Member_2_1001466

The problem you are facing here is OS latency. A "sleep" that does not use full processor time gives CPU slices to other processes, including the OS itself. The latter normally has a reaction time of several milliseconds (tens of milliseconds), so a 10-microsecond sleep that gives up control of the processor will only be accurate to about that OS latency. I doubt the usefulness of sleeping for an intended 10 us with an error of +/- 10 ms.

If you don't give away control of the processor you can do better. But the Task Manager will then report 100% load, because other processes cannot get CPU slices.
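You can see that latency directly. Here is a small measurement sketch (a console program, assuming a high-resolution counter is available) that times what Sleep(1) actually does:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    for (int i = 0; i < 5; ++i)
    {
        QueryPerformanceCounter(&t0);
        Sleep(1);                       // ask for a 1 ms sleep
        QueryPerformanceCounter(&t1);

        double ms = 1000.0 * (double)(t1.QuadPart - t0.QuadPart)
                           / (double)freq.QuadPart;
        printf("Sleep(1) actually took %.3f ms\n", ms);  // often ~10-15 ms
    }
    return 0;
}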
WilliamSolberg

ASKER

An example of using PeekMessage in a loop for this purpose would be greatly appreciated. I was unable to find article Q115232.

Thanks,

Bill
ASKER CERTIFIED SOLUTION
AndyAinscow

This solution is only available to Experts Exchange members.
> I am looking for a routine to have my program sleep for 10 or 100 microseconds without using full processor time.

The WinAPI's Sleep() function doesn't use full processor time. So what is the purpose of your question? Do you want to write your own function, or do you want to make one thread sleep while another continues working?
jaime,

the WinAPI's Sleep function waits in steps of about 10 milliseconds, and that can be lowered to roughly 1 millisecond, but that is still too long for my application.

The purpose of the short wait is that I want to read the state of a parallel port about every 10 us (microseconds), and for accuracy the interval cannot exceed 100 us.
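For what it's worth, the roughly 1 ms floor mentioned above comes from the system timer resolution. A sketch using timeBeginPeriod() from winmm (link winmm.lib) shows how Sleep() can be brought down to about 1 ms, and also why it still cannot reach 10-100 us:

#include <windows.h>
#pragma comment(lib, "winmm.lib")

void SleepAboutOneMillisecond(void)
{
    timeBeginPeriod(1);   // raise the system timer resolution to ~1 ms
    Sleep(1);             // now blocks close to 1 ms instead of ~10 ms
    timeEndPeriod(1);     // always pair with timeBeginPeriod
}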

a microsecond is 1/1000 of a millisecond
WilliamSolberg,

but in that case my comment applies. You won't be able to guarantee such short timing because of OS latencies: on NT-type systems these are about 10 ms, on '9x around 40 ms. So either you keep control of the CPU, not handing it to any other thread (which AndyAinscow's approach does whenever it processes messages), and hope the system does not demand it back: once your timeslice expires, another thread, including the OS, can get the next one, and you may not see CPU time again for several milliseconds.

If you need to react in the microsecond range, a real-time (RT) OS is mandatory!

Or do the processing in an embedded device that has a large buffer on the side facing the PC.

If it has to be intervals of at most 100 microseconds, you will have to control the processor. My suggestion is a way to do constant checks without controlling the processor, but then you are at the mercy of the operating system assigning time slices to your app.

Is it logging or controlling?
Logging - consider something like a device driver that runs at kernel level and puts the values of the port into a file/pipe for your app.
Controlling - a real-time priority level, OR dig out those old DOS disks and write a DOS-based app.
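For the controlling case, a minimal sketch of the priority route with the standard Win32 calls; use it with care, since a spinning real-time thread can starve the rest of the system:

#include <windows.h>

// Raise the process and the current (polling) thread as high as Win32 allows.
void GoRealTimePriority(void)
{
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
}

Even at real-time priority this only reduces, not eliminates, the scheduling latency discussed above.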