Constant frame rate using threads and win32

Hi, my goal is to have a separate rendering thread render at a constant frame rate of, say, 30 fps.
Info: single CPU, Win32 app, OpenGL, C++.
I have one rendering thread whose sole purpose in life is: to be created, initialized, wait to start, render at a constant frame rate, wait for a signal to stop rendering, then exit.  Let's also assume the code is thread safe :)

So, every 33.3 ms the thread will execute some rendering functions, then "go to sleep" and do nothing until the next 33.3 ms has elapsed.

Q1: What are good coding techniques in general to accomplish something like this?

Q2:  Please comment on the following 2 methods:
Method A:
do
{
      switch (myWin->threadSync->waitEvent(T))
      {
      case WAIT_TIMEOUT:
            //Sampling Interval, so render to screen
            render();
            //check the accuracy
            samplingTimes.push_back(renderingTimer->GetTime()*1000);
            renderingTimer->Reset();
            break;
      case WAIT_OBJECT_0:
            //signal from main to exit rendering loop
            threadMain.setThreadExit(TRUE);
            break;
      default:
            break;
      }
}while(!threadMain.getThreadExit());

Method B:
while(!threadMain.getThreadExit())
{
      if (renderingTimer->GetTime()*1000 >= T)
      {
            //check the accuracy
            samplingTimes.push_back(renderingTimer->GetTime()*1000);
            renderingTimer->Reset();
            //Sampling Interval, so render graphics
            render();
      }      
}

The renderingTimer uses QueryPerformanceCounter, so it should be pretty accurate (I think?).  In Method B, another thread is needed to set threadMain.threadExit to TRUE to stop the loop, whereas in Method A, another thread must call SetEvent() to stop the thread.

Q3:
In Method A, the sampling time is based on when the WaitForSingleObject() call times out.  How accurate is that timeout?

Q4:
I'd like to know more about thread priorities and process priority classes.  I did a small test to see the effects of thread priorities using the above two methods: a console app with the main thread and one rendering thread, all at default settings.
The main() has the thread render for 5 seconds, and I checked how accurate the sampling times were.  Below are the results:
THREAD_PRIORITY_TIME_CRITICAL:
Method A error: 6.0687
Method B error: 0.001158

THREAD_PRIORITY_HIGHEST:
Method A error: 6.0205
Method B error: 0.00086314

THREAD_PRIORITY_ABOVE_NORMAL:
Method A error: 5.9933
Method B error: 0.0010459

THREAD_PRIORITY_NORMAL:
Method A error: 5.7194
Method B error: 131200.0168



The error is the sum of the squared errors of each sampling time.  The key change is from THREAD_PRIORITY_ABOVE_NORMAL to THREAD_PRIORITY_NORMAL.  Can you please comment on the results?  Note I did not change the default process priority.

Q5:
At the end of the day, I need a constant frame rate that is as accurate as QueryPerformanceCounter allows.  What is the solution?

Thanks for your help




minstrelz asked:
itsmeandnobodyelse commented:
Q1:

As Windows isn't a real-time system, it isn't guaranteed that your thread gets scheduled at a steady 30 fps.  That means it depends heavily on your CPU (e.g. hyperthreading or not) and the load on your machine.  I would guess that any major file access by any other application will spoil your frame rate.

The next issue is the time your rendering itself takes.  Use the QueryPerformanceCounter and QueryPerformanceFrequency calls to find that out.  I would say that if this time isn't less than 10 ms, there is no chance to achieve 30 fps.

Q2:
In both methods, GetTime()*1000 isn't accurate (I assume GetTime() returns seconds).  You have to use QueryPerformanceCounter to get the accuracy needed.  Method B has no wait/sleep call, so it will grab all the CPU time it can get; you have to add a Sleep(1) to the loop.  However, Sleep's timing isn't accurate (granularity of about 10 ms), so I would prefer Method A (I assume you are using a waitable timer), though I don't know whether it's more accurate than Sleep.

Q3: see Q2

Q4:
>>> the error is the sum of the squared errors of each sampling time.

I'm sorry, I don't know what you are measuring here.  I would call GetSystemTime(), the ftime() function, or QueryPerformanceCounter to measure your real frequency (you have to calculate averages over about 1000 runs).

Q5:
Hope I could help you find it.

Regards, Alex


 
jhshukla commented:
I would suggest using a double buffer.  I haven't done any OpenGL programming yet, so I could be partially wrong.  Here is the pseudocode:

use double buffer
while ( ! signal to die ){
  render in the "ghost" buffer;
  wait for 33.3 ms signal;
  swap buffers;
}

An expert with more OpenGL experience should be able to provide more help.

jaydutt
 
grg99 commented:
First, I'd see how much CPU time the rendering thread uses per frame.  It had better be considerably less than your 33 ms frame period!

The exact timing is mostly irrelevant.  Your screen is hardware-refreshed at whatever rate you've set in the Display control panel (60 to 80 Hz are typical rates).  Most display libraries sync themselves to this rate whenever you ask for a full-screen update; if they didn't, you'd see all kinds of tearing and jiggling.  So "30 fps" ends up being whatever sub-multiple of the screen refresh rate comes closest without exceeding it.  For example, if your refresh rate is 60 Hz, the screen will get a fresh image every OTHER scan, effectively 30 fps.  If you shoot for 30 fps and miss a scan by a little bit, you effectively drop to 60/3, or 20 fps.






 
minstrelz (author) commented:
response to itsmeandnobodyelse:
I don't know what hyperthreading is, so can you please explain its effect on the question?  You mention that it could depend a lot on the traffic on the machine.  So, what about setting the process priority to THREAD_PRIORITY_TIME_CRITICAL?  When would be a good/bad time to do something like this?
Also, from your response: yes, the renderingTimer does use QueryPerformanceCounter to get the sampling times.  Lastly, does anyone know whether Sleep() is more/less accurate than WaitForSingleObject() with some timeout?
thanks
 
minstrelz (author) commented:
correction to above: I mean setting the process priority to REALTIME_PRIORITY_CLASS
 
itsmeandnobodyelse commented:
Hyperthreading is the capability of some Intel CPUs (newer Pentium 4, Itanium) to behave similarly to a dual-processor unit.  So the threads of an application may run quasi-parallel and get scheduled more often than on a plain single CPU.

REALTIME_PRIORITY_CLASS

No priority setting turns Windows into a real-time system.  However, if you can guarantee that only one thread has this kind of priority, you have a chance that the average scheduling intervals come close to your goal.

Regards, Alex
 
minstrelz (author) commented:
Does anyone know whether Sleep() is more/less/as accurate than WaitForSingleObject() with some timeout?  Why or why not?
Question has a verified solution.
