I am seeing bizarre scheduling behaviour with OpenGL on different types of graphics hardware. There is apparently a big difference between NVIDIA and ATI OpenGL drivers
as far as blocking is concerned. On ATI cards, the SwapBuffers command appears to block, which caused the original problem: OpenGL applications
using less than 5% CPU on a machine with an NVIDIA card were using 100% on a similar machine with an ATI card.
I introduced an active wait in my render loop, measuring the render time and trying to sleep the process until just before the vertical retrace, or, if vsync is disabled, until the requested
frame time. This works sometimes, but there are some quirks. For starters, the Windows thread-switching granularity and the precision of timers matter a great deal,
but one can work around that, and so I did. The result was that the same reference app now used about 20 to 25% CPU on systems with ATI cards.
But switching from the reference app to something more intensive demonstrated yet another problem. It appears that many OpenGL commands can block the CPU on
an ATI-equipped machine. On NVIDIA the commands return immediately, whereas on ATI the CPU blocks inside unpredictable OpenGL commands. NVIDIA has the NV_fence
extension, which allows me to do fine-grained synchronisation, so I have no problems there .. but there's no equivalent on ATI, so I'm all out of ideas as to how to get
reasonable CPU usage on ATI-equipped systems.
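For reference, the NV_fence pattern I rely on looks roughly like this: insert a fence after submitting the frame's commands, then poll it with short sleeps instead of letting some GL call block for the whole GPU frame. This is only a sketch, not compilable stand-alone: it assumes a valid GL context, the GL_NV_fence entry points resolved via wglGetProcAddress, and a `hdc` device context.

```c
/* Sketch only: requires a GL context and GL_NV_fence (NVIDIA). */
GLuint fence;
glGenFencesNV(1, &fence);

/* ... submit this frame's GL commands ... */
glSetFenceNV(fence, GL_ALL_COMPLETED_NV);
SwapBuffers(hdc);               /* queued; returns immediately on NVIDIA */

/* Poll instead of blocking, yielding the CPU while the GPU works. */
while (!glTestFenceNV(fence)) {
    Sleep(1);                   /* 1 ms, assuming timeBeginPeriod(1) */
}
glDeleteFencesNV(1, &fence);
```

On ATI there is no fence object to poll, which is exactly the problem: the driver decides where to block, and it does so inside ordinary GL commands.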
I was also wondering how triple buffering fits into this picture.