  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1124
  • Last Modified:

clock() overflow

Does anyone have code for, or know how to use, a timer function with millisecond resolution? The clock() function works but will eventually overflow.  Thanks.
0
Asked by: groundhogday
1 Solution
 
PlanetCppCommented:
Use GetTickCount(). It returns a DWORD holding the number of milliseconds since the system was started. Unless your process runs for more than 49.7 days, it won't overflow.
0
 
Fallen_KnightCommented:
You can just do an overflow check: if the last clock() call returned 1,000,000,000 and the next call returns 1000, you know an overflow has occurred, at which point you take note of that fact.

But if you want a more dependable solution you'll have to use timers. For Windows, look at SetTimer(); it's a good way to get a millisecond timer with no overflow, but you'll need to create a window (even an invisible one).

Any questions about how to use SetTimer(), or how to do this for Unix/Linux, just ask.
0
 
PlanetCppCommented:
Let me just make sure I read your question right: you're using clock() to get a start time, doing some action, then getting an end time to compute the duration?
I was going to suggest a Windows timer, but I think what you're trying to do is find out how long something took. Like I said before, unless it runs over 49.7 days you'll be fine with GetTickCount().
If you're looking to wait some milliseconds and then perform an operation, do what Fallen_Knight said and use SetTimer().
0
 
SalteCommented:
You can also use the system timer under Windows. It is a 64-bit counter that ticks every 100 nanoseconds, so it won't overflow for the next 29,000 years or so.

Under Linux you can use a similar counter; the PC has a 100-nanosecond hardware timer that is used by both Windows and Linux.

Under other modern Unixes you also typically have a nanosecond (or at least sub-microsecond) timer that is 64-bit and won't overflow for a couple of thousand years.

In standard Unix you can use the microsecond-resolution timer, which can be combined with a second counter; the second counter won't overflow until 2038 or so if it is still 32-bit by that time. Chances are your time_t variable will be 64-bit by then, and then it won't overflow for a couple of million years.

I wouldn't worry about overflow if I were you. You just have to program with overflow in mind; if you do, such problems are easy to overcome.

For the Windows clock_t clock the answer is simple: don't use it. Even GetTickCount() is probably a bad idea; use the Win32 64-bit system timer.

Even the BIOS timer is deprecated. It is only used by Windows at startup to set the 64-bit timer, and even that is better done over the network than from the BIOS timer.

Alf
0
 
groundhogdayAuthor Commented:
I forgot to mention that I need the timer for a 16-bit DOS app.  It would be nice if I could reset the clock back to 0 whenever I want.  Does any of what's mentioned above still apply?  Thanks.
0
 
 
PlanetCppCommented:
I'm still not sure if you want this to clock a process or an event. If you have some function(s) you want to time, there's no need to set any clock back to 0; you use GetTickCount() the same way you used clock():
You have three DWORD variables: start, end, and duration.
Right before the actions to be timed, set start = GetTickCount().
Afterwards, call GetTickCount() again and store the result in end.
Then duration = end - start;
That's the number of milliseconds it took to complete the task.
0
 
Fallen_KnightCommented:
No, as far as I know only clock() will work in pure DOS.

The best way is to check for overflow, like below:

nexttime = clock() + DELAY;

if ((time = clock()) >= nexttime) {
    // do whatever
    if ((time + DELAY) > MAX_CLOCK_RETURN_VAL) {
        nexttime = time + DELAY - MAX_CLOCK_RETURN_VAL;
    } else {
        nexttime = time + DELAY;
    }
}

(Note that after you compute a wrapped-around nexttime, you also have to remember that a wrap is pending and wait for clock() itself to wrap before the >= comparison is meaningful again.)

Does that help? It should work fine on all systems.
0
 
SalteCommented:
If you run on a modern Pentium with the 64-bit hardware clock you can still use it, also in a 16-bit app.

You'll probably have some trouble handling the 64-bit integer in a 16-bit app, but it can work if you handle it as an array of two 32-bit longs, one holding the high 32 bits and the other the low 32 bits.

A simple function to subtract two 64-bit timer values can easily be made using the x86 SBB instruction (subtract with borrow).

// compute the time difference between later and earlier
// store the difference in dst.
// all values are 64 bit int stored as an array of 4
// unsigned short with arr[0] being the least significant
// 16 bits and arr[3] being the most significant.
// if you have 64 bit timer you also have fs and gs
void time_diff(unsigned short dst[4], unsigned short later[4], unsigned short earlier[4])
{
   __asm {
      pusha
      push es
      push fs
      push gs

      les di,dst // assume far pointers es:di == dst
      lfs si,later // assume far pointer fs:si == later
      lgs dx,earlier // assume far pointer gs:dx == earlier
      mov bx,si    // bx == & later[0]
      mov ax,fs:[bx] // ax == later[0]
      mov bx,dx    // bx == & earlier[0]
      sub ax,gs:[bx] // ax == later[0] - earlier[0]
      pushf  // push flags
      mov bx,di      // bx == & dst[0]
      mov es:[bx],ax  // dst[0] == later[0] - earlier[0]
      mov bx,si      // bx == & later[0]
      mov ax,fs:[bx+2]  // ax == later[1]
      mov bx,dx  // bx == & earlier[1]
      popf   // pop flags
      sbb ax,gs:[bx+2] // ax == later[1] - earlier[1]
      pushf   // push flags
      mov bx,di  // bx == & dst[1]
      mov es:[bx+2],ax // dst[1] = later[1] - earlier[1]
      mov bx,si
      mov ax,fs:[bx+4]  // ax == later[2]
      mov bx,dx
      popf
      sbb ax,gs:[bx+4]
      pushf
      mov bx,di
      mov es:[bx+4],ax // dst[2] = later[2] - earlier[2]
      mov bx,si
      mov ax,fs:[bx+6] // ax == later[3]
      mov bx,dx
      popf
      sbb ax,gs:[bx+6]
      mov bx,di
      mov es:[bx+6],ax // dst[3] = later[3] - earlier[3]
      pop gs
      pop fs
      pop es
      popa
   }
}

This function works in 16-bit mode to subtract two 64-bit values stored in unsigned short[4] arrays. The destination can be the same as either or both of the sources.

For time differences you will probably never see any bit set in the high 32 bits of the difference, since the low 16 bits alone span 6,553,600 ns == 6,553.6 microseconds == 6.5536 milliseconds.

The low 32 bits span:

429,496,729,600 ns == 429,496,729.600 us == 429,496.7296 ms
 == 429.4967296 seconds == 7 minutes and almost 9.5 seconds

So if your time difference is shorter than about 7 minutes you won't have to deal with anything more than 32-bit time differences.

Just for your reference, the full 64 bit counter is signed and so the maximum value is:

 922 337 203 685 477 580 800 ns ==
 922 337 203 685.4775808 seconds ==
  15 372 286 728 minutes 5.4775808 seconds ==
  256 204 778 hours 48 minutes 5.4775808 seconds ==
   10 675 199 days 2 hours 48 mins 5.4775808 secs ==
   more than 29000 years.

If you run on old 386s and similar hardware that doesn't have the 64-bit counter, you have to settle for GetTickCount() and watch out for the overflow, which happens approximately once every 49.7 days. If you check the counter at least once a day it is easy to detect that the tick count has wrapped around and adjust accordingly. Use a separate second counter (available as time_t and time(0)) to keep track of the longer spans, and use GetTickCount() for the milliseconds.

Sorry but no nanoseconds on old hardware.

Alf
0
 
groundhogdayAuthor Commented:
Fallen_Knight, is it safe to try to handle the overflow like that?

PlanetCpp, I need a timer event that can track at the millisecond level.  Each event only lasts several minutes, but it's a recurring event that can run 24x7.  The only function I've found that tracks milliseconds is clock().  If there were a stopwatch-type function, that would be what I need.

Salte, I only have a 32-bit system.  I will be migrating to WinCE, but I have to deal with DOS for now.  Too bad your 64-bit solution won't work on 32-bit systems.
0
 
SalteCommented:
Actually, the 64-bit clock works on a Pentium or 486; it doesn't care whether the app is 16-bit or 32-bit. It does not work on a 286 or 386, though, so on that hardware you can't use it.

However, if you plan to migrate to WinCE then you probably won't have that clock available, since the hardware that WinCE runs on probably doesn't have a 64-bit clock.

In that case I think you should seriously limit yourself to the API available on WinCE and not use anything beyond what that platform gives you.

Alf
0
 
groundhogdayAuthor Commented:
BTW, where can I find GetTickCount()?  I'm using Borland C++ 5.0 and I only found it in win16/windows.h and win32/winbase.h.
0
 
SalteCommented:
Borland C++ also has winbase.h available. GetTickCount() is a Win32 system function and should be available on any Win32 system. Just call the function, and #include <windows.h> first. Win32 sort of assumes you #include <windows.h> even if you don't write a windowed program; as long as you want to access Win32/Windows-specific functions you should include that file, also for so-called console applications.

The only programs that don't have to #include <windows.h> are programs that use standard ANSI C++ iostreams, stdio, or the like, and are strictly standard programs that would compile on Unix and any other platform (possibly with other #include files).

Alf
0
 
Fallen_KnightCommented:
It should be safe if you do your calculations right.

When your timer is at FFFF, +1 = 0000 and +2 = 0001; so let's say your delay is 10 ms and you're at FFFF: FFFF + A (hex for 10) would be 0009.

So if you test for a possible overflow between now and when your next action is due, and if there is one, compensate by subtracting MAX_CLOCK_RETURN_VAL; you'll get the proper delay time.

The only problem that can arise is if you're timing something that could take longer than the maximum range, or needs to. But in both cases there are ways to make it work.

It should be perfectly safe, and there's no harm in trying it either. In theory it will work perfectly; whether it does in practice is something different, but I believe it will work just fine.

Also, if this is for something embedded, look into using an embedded Linux. It would give you a lot more options and a better (more stable) environment than DOS (and definitely much better and more stable than WinCE). I would never trust WinCE to run 24/7/365, and not even 24/7 really. There's also a good chance you'll save a bundle on licensing fees from Microsoft.
0
 
groundhogdayAuthor Commented:
There really wasn't a good solution for DOS, but I'm going ahead and closing this out.  Thanks everyone for the help.
0
