
Timer. Loop every 5ms

I want to set up a timer to send raw packets across the network every 5 ms (or quicker if possible). I am using WinPcap to send the packets, as this bypasses the protocol stack and has less overhead. I was wondering what the best way is to set up a loop to get this kind of accuracy.
paddycAsked:

AxterCommented:
What OS are you using?

In Windows, you can try using the GetTickCount API function.
AxterCommented:
The following function is taken from a book called Windows Graphics Programming.

inline unsigned __int64 GetCycleCount(void)
{
     _asm _emit 0x0F;    // emit the RDTSC instruction (opcode 0F 31)
     _asm _emit 0x31;    // the result is left in EDX:EAX, which is the __int64 return value
}

This function executes RDTSC (Read Time Stamp Counter), which returns the number of clock cycles that have passed since the CPU was booted.
On a Pentium 200-MHz machine, this gives about 5-nanosecond precision.

This function is not OS dependent, but it is machine dependent.
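
For reference, a minimal calibration sketch (an illustration, not code from this thread): estimate how many cycles correspond to one millisecond by spinning for a known GetTickCount interval. The helper name TicksPerMs is made up here, and it assumes the GetCycleCount() wrapper above.

#include <windows.h>

/* Rough calibration: spin for ~500 ms as measured by GetTickCount and count
   how many RDTSC cycles elapse, giving an estimate of cycles per millisecond. */
unsigned __int64 TicksPerMs(void)
{
     DWORD start = GetTickCount();
     unsigned __int64 c0 = GetCycleCount();

     while (GetTickCount() - start < 500)
          ;                                   /* busy-wait for ~500 ms */
     return (GetCycleCount() - c0) / 500;     /* cycles per millisecond */
}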
paddycAuthor Commented:
Windows XP.
It's a Win32 console based application.

Can you show me how to implement GetTickCount in C?

AxterCommented:
DWORD StartTime;

StartTime = GetTickCount();
while(GetTickCount() - StartTime < 5);   //busy-wait ~5ms (wrap-safe subtraction)
paddycAuthor Commented:
How can GetCycleCount allow me to call a function every 5ms?

Is there a way of starting a timer and then calling my function every 5ms, i.e. after 5ms, 10ms, 15ms, and so on?
paddycAuthor Commented:
What type of resolution does GetTickCount have?
AlexFMCommented:
AFAIK, GetTickCount resolution is about 20 ms. You can use a high-precision multimedia timer, for example the timeGetTime function.
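
As a hedged sketch of that suggestion (not code from the thread): raise the multimedia timer resolution with timeBeginPeriod, then pace the loop on timeGetTime. SendPacket() is a made-up placeholder for the WinPcap send call; link with winmm.lib.

#include <windows.h>
#include <mmsystem.h>                        /* link with winmm.lib */

static void SendPacket(void)                 /* placeholder for the WinPcap send */
{
}

int main(void)
{
     DWORD prev;

     timeBeginPeriod(1);                     /* request 1 ms system timer resolution */
     prev = timeGetTime();
     for (;;)
     {
          while (timeGetTime() - prev < 5)   /* wrap-safe wait for the next 5 ms slot */
               ;
          prev += 5;                         /* schedule relative to the previous slot */
          SendPacket();
     }
     /* timeEndPeriod(1) would pair with timeBeginPeriod, but this loop never exits */
     return 0;
}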
paddycAuthor Commented:
timeGetTime has a resolution of 2 ms, which is too large for my application.

QueryPerformanceCounter() has a better resolution, but I don't know how I would implement it.
AlexFMCommented:
In the timeSetEvent function you can set the required resolution, and it works with a callback function.
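
For illustration (a sketch under the assumption that a 5 ms period is wanted, not the thread's verified answer), a periodic multimedia timer with a callback; the callback body is a placeholder for the packet send, and winmm.lib must be linked.

#include <windows.h>
#include <mmsystem.h>                             /* link with winmm.lib */

/* Called by the multimedia timer every 5 ms; keep the work here short. */
static void CALLBACK TimerProc(UINT uTimerID, UINT uMsg,
                               DWORD_PTR dwUser, DWORD_PTR dw1, DWORD_PTR dw2)
{
     /* send the packet here */
}

int main(void)
{
     MMRESULT timerId;

     timeBeginPeriod(1);                          /* ask for 1 ms timer resolution */
     timerId = timeSetEvent(5,                    /* period in ms */
                            1,                    /* resolution in ms */
                            TimerProc,
                            0,                    /* user data passed to the callback */
                            TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
     if (timerId == 0)
          return 1;                               /* timer could not be created */

     Sleep(60000);                                /* let it run for a minute */

     timeKillEvent(timerId);
     timeEndPeriod(1);
     return 0;
}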
AlexFMCommented:
This is a code fragment which shows how to measure elapsed time using QueryPerformanceCounter:

    LARGE_INTEGER nFreq;
    LARGE_INTEGER nBeginTime;
    LARGE_INTEGER nEndTime;
    __int64 nCalcTime;   // ms

    QueryPerformanceFrequency(&nFreq);
    QueryPerformanceCounter(&nBeginTime);

    // do something ...

    QueryPerformanceCounter(&nEndTime);
    nCalcTime = (nEndTime.QuadPart - m_nBeginTime.QuadPart) * 1000/m_nFreq.QuadPart;
AlexFMCommented:
nCalcTime = (nEndTime.QuadPart - nBeginTime.QuadPart) * 1000/nFreq.QuadPart;   // correction
paddycAuthor Commented:
The above code checks how long something takes to complete.

I want to be able to do something after a certain time has elapsed.
AxterCommented:
>>Is there a way of starting a timer and then calling my function every 5ms, i.e. after 5ms, 10ms, 15ms, and so on?

Try the following:

DWORD StartTime;

for(;;)
{
     StartTime = GetTickCount();
     while(GetTickCount() - StartTime < 5);   //busy-wait ~5ms (wrap-safe subtraction)
     YourFunctionCall();
}

The above should call your function every 5 ms.

FYI: From MSDN documents:
The GetTickCount function retrieves the number of milliseconds that have elapsed since the system was started. It is limited to the resolution of the system timer. To obtain the system timer resolution, use the GetSystemTimeAdjustment function.
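
As a small illustrative fragment (not from the thread), the system timer increment mentioned above can be printed like this; GetSystemTimeAdjustment reports it in 100-nanosecond units.

#include <stdio.h>
#include <windows.h>

int main(void)
{
     DWORD adjustment, increment;
     BOOL  disabled;

     /* increment is the length of one system timer tick in 100-ns units,
        i.e. the best resolution GetTickCount can offer */
     if (GetSystemTimeAdjustment(&adjustment, &increment, &disabled))
          printf("system timer increment: %lu x 100 ns (%.1f ms)\n",
                 increment, increment / 10000.0);
     return 0;
}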
AxterCommented:
No matter what method you use, you will also need to give your process a higher priority to have your timer be accurate.
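
A minimal sketch of raising the priority (an assumption about what is wanted, not the thread's verified answer); REALTIME_PRIORITY_CLASS also exists but can starve the rest of the system, so HIGH_PRIORITY_CLASS is the safer starting point.

#include <windows.h>

/* Run once at startup (e.g. at the top of main): boost the process class
   and the sending thread. */
SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);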
paddycAuthor Commented:
Would I be able to call my function every 0.5ms with this function or would the accuracy be compromised?

How would I give my process a higher priority?
AxterCommented:
>>Would I be able to call my function every 0.5ms with this function or would the accuracy be compromised?

Not 0.5 ms.
That is less than a millisecond.

How are you starting your process?
What type of project do you have?  Is it a console application or a Win32 windows application?
grg99Commented:
Ah, Windows was not designed as a real-time system.  It's barely usable as a human-time system.  The OS and device drivers take big chunks out of your CPU time, from microseconds to good fractions of a second.

The situation is a lot better on a multi-CPU system: if you make a few SetProcessAffinityMask()/SetThreadAffinityMask() calls you can more or less coerce Windows into hogging just one CPU, leaving you a lot of time on the other one. No guarantees, though.
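
A small fragment for illustration (an assumption, not the thread's code) that pins the sender to the first CPU:

#include <windows.h>

/* At startup: pin the process and the sending thread to CPU 0 so the timing
   loop is not migrated between processors. */
SetProcessAffinityMask(GetCurrentProcess(), 1);   /* bit 0 = first CPU */
SetThreadAffinityMask(GetCurrentThread(), 1);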

paddycAuthor Commented:
It's a Win32 console application that's run from the command line.

grg99Commented:
Well, time it yourself and see.   Call that RDTSC function above with the 0Fh 31h opcodes.  Compute the time since the last call.  Keep a histogram of the latency between calls.  Something like:

#define TicksPerUsec 200                 // 200MHz clock

unsigned __int64 Prev, Cur;
int i, Elap;
int histo[50000];                        // histogram of latency, in microseconds

for( i = 0; i < 50000; i++ ) histo[i] = 0;

Prev = GetCycleCount();                  // the RDTSC wrapper posted above
for( i = 1; i < 100000; i++ ) {
   Cur  = GetCycleCount();
   Elap = (int)( ( Cur - Prev ) / TicksPerUsec );
   if( Elap > 0 && Elap < 50000 )  histo[ Elap ]++;
   Prev = Cur;                           // measure latency since the last call
}


 
paddycAuthor Commented:
The GetTickCount() function is not accurate enough for my application.
The QueryPerformanceCounter() function is more accurate.

How would I create a 5ms loop (or smaller) using this counter?
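
One possible approach, sketched here as an assumption rather than a verified answer from the thread: pace the loop on QueryPerformanceCounter counts and advance the deadline by a fixed number of counts each iteration so the schedule does not drift. SendPacket() is a made-up placeholder for the WinPcap call.

#include <windows.h>

static void SendPacket(void)                       /* placeholder for the WinPcap send */
{
}

int main(void)
{
     LARGE_INTEGER freq, now, next;
     LONGLONG period;

     QueryPerformanceFrequency(&freq);
     period = freq.QuadPart * 5 / 1000;            /* counts in 5 ms */

     QueryPerformanceCounter(&next);
     next.QuadPart += period;
     for (;;)
     {
          do {
               QueryPerformanceCounter(&now);
          } while (now.QuadPart < next.QuadPart);  /* busy-wait until the 5 ms deadline */
          SendPacket();
          next.QuadPart += period;                 /* next deadline; no cumulative drift */
     }
     return 0;
}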
vashukumarCommented:
You can set a timer of 5 msec in the following manner. This function is written for HP-UX.
#include <sys/times.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>


long timeint()
{
clock_t start_time, end_time, query_time;
struct tms times_l;                    /*structure for times() function*/

      start_time = times(&times_l);
      end_time = times(&times_l);
      query_time = (end_time-start_time);
    return( (query_time*1000)/sysconf(_SC_CLK_TCK));

}
void main()
{
int dur = 0;
for(int i=0; i<=50000;i++)
{
     dur += timeint();
     if (dur >= 10)
     {
          dur = 0;
          printf("reached timer interval\n");
          //call your function here to send packet
     }
}
}
paddycAuthor Commented:
When I run the code, the timeint() function returns 0 every time, so my function would never be run.
vashukumarCommented:
This is because your program is taking less than 5 ms to complete.
You can change the behaviour of the function by changing the for loop to an infinite loop, so it will be called every 5 ms endlessly.


The changes to be made are:

Previous code:
for(int i=0; i<=50000;i++)

new code:
for(;;)
paddycAuthor Commented:
>>     dur +=timeint();
>>      if (dur>=10)
>>     {
>>       dur=0;
>>        printf("reached timer interval\n");
>>      //call your function here to send packet

dur never gets >=10 because timeint() always returns 0 so my function is always skipped.
vashukumarCommented:
You are missing the infinite for loop, so your whole program is taking less time. I have included a dummy function which it calls after 2 seconds (to give you a clear view). You can adjust that time accordingly.

#include <sys/times.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>


long timeint()
{
clock_t start_time, end_time, query_time;
struct tms times_l;                    //structure for times() function

      start_time = times(&times_l);
      end_time = times(&times_l);
      query_time = (end_time-start_time);
    return( (query_time*1000)/sysconf(_SC_CLK_TCK));

}

void dummy()
{
      printf("hi i am sending packet\n");
}

void main()
{
int dur = 0;
for(;;)
{
     dur += timeint();
     if (dur >= 2000)
     {
          dur = 0;
          printf("reached timer interval\n");

          //call your function here to send packet
          dummy();
     }
}
}

regards
-varun
grg99Commented:
All that example code will be of no use if the OS takes away more time than you can tolerate.  Please run my latency test program to see what kind of accuracy your particular OS and hardware can deliver.

vashukumarCommented:
The author wants the time resolution in msec, and CPU clocks are nowadays in the megahertz range, so the author's purpose is met by all means.