rmillerSST

asked on

Getting the current time to the millisecond

I'm using Visual C++ 6 to build an ActiveX control.  I need to be able to provide the current system time on request to the millisecond, but I can't seem to find a function that will provide that kind of accuracy.  Both VB and FoxPro provide ways to do so, so I can't believe that C++ wouldn't.  Any help is welcome, but I'm fairly new to C++ so please be specific.  Thanks.
peterchen092700


VOID GetSystemTime(
  LPSYSTEMTIME lpSystemTime   // system time
);


typedef struct _SYSTEMTIME {
    WORD wYear;
    WORD wMonth;
    WORD wDayOfWeek;
    WORD wDay;
    WORD wHour;
    WORD wMinute;
    WORD wSecond;
    WORD wMilliseconds;
} SYSTEMTIME, *PSYSTEMTIME;

Windows NT/2000: Requires Windows NT 3.1 or later.
Windows 95/98: Requires Windows 95 or later.
Header: Declared in Winbase.h; include Windows.h.

There is no generic C/C++ function to acquire the time at this accuracy; it is a system-dependent feature (standard C/C++ also has to run on systems that have never seen FoxPro or Basic).

Peter
Search for the word milliseconds on this page.

http://www.gamedev.net/reference/articles/article609.asp

A technique is described there.
GetSystemTime returns a value EXPRESSED in milliseconds, but it is not ACCURATE to the millisecond.  i.e. the clock's granularity is much coarser than that, so the value returned "jumps" by multiple milliseconds.
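For illustration, a minimal sketch (not from the article) that makes the granularity visible: call GetSystemTime() in a tight loop and print wMilliseconds only when it changes.  On a machine with a ~55 ms tick the printed values step by dozens of milliseconds rather than by one.

#include <windows.h>
#include <stdio.h>

int main()
{
    SYSTEMTIME st;
    WORD last = 0xFFFF;                     // sentinel; wMilliseconds is always < 1000
    for (int i = 0; i < 200000; i++) {
        GetSystemTime(&st);
        if (st.wMilliseconds != last) {     // print only when the reported value changes
            printf("%03d\n", st.wMilliseconds);
            last = st.wMilliseconds;
        }
    }
    return 0;
}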
rmillerSST

ASKER

Ok...I'm getting there.
I used peterchen's approach with a
SYSTEMTIME st;
GetSystemTime(&st);

When I look at it in the debugger I can expand the structure and see the individual fields, but how do I break them out and store them to variables?  Preferably numeric.
Thanks!
Did you understand what I said?  It doesn't give you an accuracy of milliseconds; in fact the accuracy is about 1/18th of a second on Windows 9x.

It's like measuring your height at 5ft 8 inches and then adding two more decimals, like 5ft 8.000000000 inches, and saying that is more accurate.  The system time is measured to an accuracy coarser than 1/100th of a second, but expressed as if it were measured to the accuracy of a millisecond.
WORD msec = st.wMilliseconds;
etc.
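A minimal sketch of pulling the fields out into ordinary numeric variables (the variable names are just for illustration):

#include <windows.h>
#include <stdio.h>

int main()
{
    SYSTEMTIME st;
    GetSystemTime(&st);

    // Each SYSTEMTIME field is a WORD (16-bit unsigned); copy out the ones you need.
    int year  = st.wYear;
    int month = st.wMonth;
    int day   = st.wDay;
    int hour  = st.wHour;
    int min   = st.wMinute;
    int sec   = st.wSecond;
    int msec  = st.wMilliseconds;

    printf("%04d-%02d-%02d %02d:%02d:%02d.%03d\n",
           year, month, day, hour, min, sec, msec);
    return 0;
}

(GetSystemTime() returns UTC; GetLocalTime() fills the same structure with local time, if that is what the control should report.)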
nietod is right, accuracy is about 1/18th of a second on Win9x systems (at least, on most of them...).

There's no "one step" solution to get the current time at this resolution. You can get it with a combination of timeGetTime() and GetSystemTime(), but this is complicated.

So if you explain what you need it for (measure time difference), we could better help you.

Peter


(btw, at least VB doesn't give you ms resolution either, so don't blame C++)
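For what it's worth, here is a rough sketch of the timeGetTime()/GetSystemTime() combination (illustration only; it needs mmsystem.h and winmm.lib): anchor the wall-clock time once, then add the elapsed timeGetTime() milliseconds to it on each request.

#include <windows.h>
#include <mmsystem.h>   // timeGetTime, timeBeginPeriod (link with winmm.lib)
#include <stdio.h>

static SYSTEMTIME g_anchorTime;   // wall-clock reading taken once at startup
static DWORD      g_anchorTick;   // timeGetTime() reading taken at the same moment

void InitClock()
{
    timeBeginPeriod(1);           // request 1 ms timer resolution (pair with timeEndPeriod at shutdown)
    GetSystemTime(&g_anchorTime);
    g_anchorTick = timeGetTime();
}

// Current time of day in milliseconds since midnight (UTC), built from the anchor
// plus the elapsed tick count.
DWORD NowMs()
{
    DWORD base = g_anchorTime.wHour   * 3600000UL +
                 g_anchorTime.wMinute * 60000UL +
                 g_anchorTime.wSecond * 1000UL +
                 g_anchorTime.wMilliseconds;
    return (base + (timeGetTime() - g_anchorTick)) % 86400000UL;
}

The complication is that the anchor and the tick counter drift apart over time, so a long-running control would have to re-anchor periodically. The QueryPerformanceCounter code just below takes the same approach with the high-resolution counter instead of timeGetTime().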
Maybe this is what you want.

Paul

#include <windows.h>
#include <stdio.h>

typedef unsigned long ULONG;   // windows.h also defines ULONG, so this typedef is redundant but harmless

class hrt {
private:
  LARGE_INTEGER frequency;

  LARGE_INTEGER startCount;
  SYSTEMTIME    startTime;
  LARGE_INTEGER startMs;

  LARGE_INTEGER currentCount;
  SYSTEMTIME    currentTime;
  LARGE_INTEGER currentMs;

public:
  hrt(void);
  ~hrt(void){};

  SYSTEMTIME *getTime();
};

hrt::hrt()
{
  QueryPerformanceFrequency(&frequency);
  QueryPerformanceCounter(&startCount);
  GetSystemTime(&startTime);
  // milliseconds since midnight at the moment the object was constructed
  startMs.QuadPart = startTime.wHour*3600000+startTime.wMinute*60000+startTime.wSecond*1000+startTime.wMilliseconds;
}

SYSTEMTIME *
hrt::getTime()
{
  ULONG ms;
  ULONG hour,min,sec;

  QueryPerformanceCounter(&currentCount);
  // elapsed milliseconds since construction, measured with the high-resolution counter
  currentMs.QuadPart = ((currentCount.QuadPart-startCount.QuadPart)*1000)/frequency.QuadPart;
  ms = static_cast<ULONG>(startMs.QuadPart+currentMs.QuadPart);

  hour = ms/3600000;
  ms %= 3600000;
  min = ms/60000;
  ms %= 60000;
  sec = ms/1000;
  ms %= 1000;

  currentTime = startTime;
  currentTime.wHour = static_cast<USHORT>(hour);
  currentTime.wMinute = static_cast<USHORT>(min);
  currentTime.wSecond = static_cast<USHORT>(sec);
  currentTime.wMilliseconds = static_cast<USHORT>(ms);
  return &currentTime;
}

int
main(int argc, char *argv[])
{
  hrt         t;
  SYSTEMTIME  *now;

  for ( int i=0 ; i<100 ; i++ ) {
    now = t.getTime();
    printf("%02d:%02d:%02d.%03d\n",now->wHour,now->wMinute,now->wSecond,now->wMilliseconds);
  }
  return 0;
}
DanRollins
Most often one doesn't really need to know the actual millisecond.  Usually it is enough to know whether one particular time is before or after another.

This simple fn returns a different value each time it is called.  The actual wMilliseconds is only an approximation (worst case off by about 54 ms).

#include <windows.h>
#include <stdio.h>

SYSTEMTIME* MyGetTime()
{
    static SYSTEMTIME rPrev= {0};
    static SYSTEMTIME rNow;
    static WORD wDelta= 0;

    GetSystemTime( &rNow );
    // if new time is same as prev, bump return value
    if ( memcmp(&rPrev,&rNow,sizeof(SYSTEMTIME)) == 0 ) {
        wDelta++;
        rNow.wMilliseconds += wDelta;
    } else { // new tick; remember it and reset the delta
        rPrev= rNow;
        wDelta= 0;
    }
    return( &rNow );
}

int main()
{
    for ( int j=0 ; j<50 ; j++ ) {
        SYSTEMTIME* prST= MyGetTime();
        Sleep( 5 );
        printf("%02d:%02d:%02d.%03d\n",
           prST->wHour,prST->wMinute,prST->wSecond,
           prST->wMilliseconds
        );
    }
    return 0;
}
-- Dan
Dan: what happens if I call your function 100 times in 50ms??
Now you are talking about calling more than once per millisecond.  So even a 1 ms-resolution timer will fail the uniqueness test.

You still get unique values, but you can end up with >1000 milliseconds.  So if you then "normalize" the ms by carrying to the seconds, you could have a non-unique timestamp.  However, barring such normalization, you would need to call the fn > 65000 times in one ms to end up with a roll-over that could potentially lose uniqueness.

Actually, I don't really love that function... just throwing it out for fun.  zebada's QueryPerformanceFrequency is probably best -- when running on systems that have a high-performance counter.

-- Dan.
>> Now you are talking about calling more
>> than once per millisecond.  So even a
>> 1 ms-resolution timer will
>> fail the uniqueness test.
There are many times when you might need a time accurate to a 1 ms value and "making up" a time is not going to help
>>There are many times when you might need a time accurate to a 1 ms value and "making up" a time is not going to help

I agree! Knowing exactly how much earlier an event transpires can be critical.  For instance, I understand that on eBay if your bid is a full 17 milliseconds earlier than another bid, you win, but if it is only 15 milliseconds earlier, monkeys fly up my butt.  A real dilemma, I assure you.

-=-=-=-=-=
I thought _ftime was a possible solution, since it provides a millisecond component -- but it seems to use GetSystemTimeAsFileTime.  That fn's docs claim 100-nanosecond increments, but I couldn't get _ftime to produce a delta unless at least 10ms had elapsed.

Anyway, on the two machines I tested, the old 55-ms interval doesn't seem to hold.  It certainly *was* true 20 years ago and even 10 years ago, but I'm starting to think that newer computers use a faster timer interrupt -- but still not fast enough to track 1-ms increments.

-- Dan
Most Intel Pentium-based PCs (I'm not sure about others) have a timer chip on the motherboard with a 1.19318166667 MHz counter.

The counter counts down from N (by default N=65535) to 1 at the rate of 1.19318166667 MHz. The system timer interrupt is generated when the counter rolls over from 1 to N (zero never occurs). Note the system time is updated by this interrupt.

If N=65535 then the system timer interrupt is generated (1.19318166667/65535*1000000)=18.2 times per second. This equates to every 54.9 milliseconds.

This granularity is inherent in most PCs in use today.

By the way, in my sample code the QueryPerformanceCounter() function actually queries this counter value together with higher-order counts. It is pretty much the most accurate timer you can use on a PC.
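A minimal sketch of timing a short interval with it (illustration only, separate from the package mentioned below):

#include <windows.h>
#include <stdio.h>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    if (!QueryPerformanceFrequency(&freq)) {      // no high-resolution counter available
        printf("QueryPerformanceCounter not supported on this machine\n");
        return 1;
    }
    QueryPerformanceCounter(&t0);
    Sleep(5);                                     // the interval being measured
    QueryPerformanceCounter(&t1);
    double us = (t1.QuadPart - t0.QuadPart) * 1000000.0 / freq.QuadPart;
    printf("elapsed: %.1f microseconds\n", us);
    return 0;
}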

I needed to know this stuff for a software package that reads VPW (Variable Pulse Width) wave forms from an electronic engine management system and converts it into binary data. Some of the pulses were as short as 40 microseconds.

You can download the source code (written in turbo C and assembler for Dos) at http://www.blacky.co.nz/efilive/vpw.zip
I didn't write the assembler stuff, but it's pretty neat anyway.

Paul
Here is an interesting bit of code:  

#include <windows.h>
#include <stdio.h>
int main()
{
    DWORD nTick, nTick2, nTick3;
    nTick= GetTickCount();
    do { //------------------------ wait for tick rollover
        nTick2= GetTickCount();
    } while( nTick2 == nTick );
    do { //------------------------ time one complete tick
        nTick3= GetTickCount();
    } while( nTick3 == nTick2 );
    printf("Tick: %u\n", nTick2 );
    printf("Tick: %u\n", nTick3 );
    printf("Delta= %ums\n", nTick3 - nTick2 );
    printf("\n");
    return 0;
}

From the previous discussion one would predict that the output would be something like

Tick: 6287152
Tick: 6287207
Delta= 55ms

However, the output consistently shows a
    Delta= 5ms
(*not* 55ms) on my PC. Any explanation?

Now here is a snazzy paradox!  Add this line...

    do { //------------------------ time one complete tick
        Sleep(1); // <<----------- add this
        nTick3= GetTickCount();
    } while( nTick3 == nTick2 );

and the output varies, but it often shows a 1ms to 3ms delta.  Whoa Nellie!  Also, the do loop executes exactly once (without the Sleep(), it loops about 8500 times on my PC).

-- Dan
The PC's programmable interval chip (8253/8254 PIT) that you are referring to has 3 timers: 2 are dedicated to housekeeping functions, and one is used to generate the timer interrupt.  On DOS and Windows 9x it generates the timer interrupt at about 18.2 times a second.  But it can be clocked faster (that is actually its slowest speed), so on Windows NT it is often faster.  But it is still not fast enough to do millisecond resolution.

>> Delta= 5ms
>> (*not* 55ms) on my PC. Any explanation?  
Was this NT?  NT uses a faster PIT timer.  98 might too, although I doubt it; DOS and 95 don't, though.
In the MSDN Library documentation for the timeGetTime function (part of multimedia support), it states:

Windows 95: The default precision of the timeGetTime function is 1 millisecond. In other words, the timeGetTime function can return successive values that differ by just 1 millisecond. This is true no matter what calls have been made to the timeBeginPeriod and timeEndPeriod functions.

(Achieving 1 msec precision for WinNT may require the timeBeginPeriod and timeEndPeriod function calls.)
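A quick sketch to check the precision on a given machine (illustration only; needs mmsystem.h and winmm.lib): watch the smallest non-zero step between successive timeGetTime() values.

#include <windows.h>
#include <mmsystem.h>   // timeGetTime, timeBeginPeriod/timeEndPeriod (winmm.lib)
#include <stdio.h>

int main()
{
    timeBeginPeriod(1);                       // request 1 ms resolution
    DWORD prev = timeGetTime();
    DWORD minStep = 0xFFFFFFFF;               // stays at this value if no step is observed
    for (int i = 0; i < 100000; i++) {
        DWORD now = timeGetTime();
        if (now != prev) {
            if (now - prev < minStep) minStep = now - prev;
            prev = now;
        }
    }
    printf("smallest observed step: %lu ms\n", minStep);
    timeEndPeriod(1);
    return 0;
}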

David
ASKER CERTIFIED SOLUTION
DanRollins
hi rmillerSST,
Do you have any additional questions?  Do any comments need clarification?

-- Dan
Comment from DanRollins accepted as answer.

Thank you
Computer101
Community Support Moderator