Decode Rate vs. Playback Rate Using DirectShow
Posted on 2007-03-22
This is the graph that I built using DirectShow:
Source Filter (file) -> Transform Filters -> Decoder -> FPS Counter (trans-in-place) -> Renderer
The FPS Counter's function is called each time a sample has been decoded.
The function simply outputs the current system time using timeGetTime().
I placed the FPS Counter between the decoder and the renderer in order to measure the "decode rate", because I need to test the performance of the decoder.
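For reference, here is a rough sketch of what my FPS Counter does, assuming a trans-in-place filter built on the DirectShow BaseClasses; the class name CFpsCounter and the members m_lastTime / m_haveLast are just my own illustrative names:

```cpp
// Sketch only -- names are mine, not from any SDK sample.
#include <streams.h>   // DirectShow base classes (CTransInPlaceFilter, DbgLog)
#include <mmsystem.h>  // timeGetTime()
#pragma comment(lib, "winmm.lib")

class CFpsCounter : public CTransInPlaceFilter
{
public:
    CFpsCounter(LPUNKNOWN pUnk, HRESULT *phr)
        : CTransInPlaceFilter(NAME("FPS Counter"), pUnk, CLSID_NULL, phr),
          m_lastTime(0), m_haveLast(false) {}

    // Accept whatever the decoder outputs; the sample data is never touched.
    HRESULT CheckInputType(const CMediaType *) { return S_OK; }

    // Called once per decoded sample, before it reaches the renderer.
    HRESULT Transform(IMediaSample *pSample)
    {
        const DWORD now = timeGetTime();
        if (m_haveLast) {
            // Log the current system time and the delta since the last sample.
            DbgLog((LOG_TRACE, 0, TEXT("%lu %lu milliseconds"),
                    now, now - m_lastTime));
        }
        m_lastTime = now;
        m_haveLast = true;
        return S_OK;  // pass the sample through unmodified
    }

private:
    DWORD m_lastTime;
    bool  m_haveLast;
};
```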
I tested this graph on a video clip that, as many of you might already know, has a playback rate of 1 fps and a total of 12 frames.
But the results I got are as follows (in milliseconds):
> 16356332 180 milliseconds
> 16356445 113 milliseconds
> 16357354 909 milliseconds
> 16358340 986 milliseconds
> 16359342 1002 milliseconds
> 16360349 1007 milliseconds
> 16361346 997 milliseconds
> 16362342 996 milliseconds
> 16363341 999 milliseconds
> 16364344 1003 milliseconds
> 16365341 997 milliseconds
The first column is the system time returned on each call to the FPS Counter, and the second column is the time it took to decode that frame, i.e., simply the system time of the current frame minus that of the previous frame.
The first three frames were decoded in less than one second each, while the subsequent frames were each decoded in roughly one second, which equals the playback rate.
My question is, does this data look reasonable?
I understand that the decode rate does not necessarily equal the playback rate, since the renderer needs to do time synchronization.
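If my understanding is right, one way to observe the pure decode rate would be to detach the reference clock so the renderer stops pacing samples against their timestamps; a rough sketch of what I mean, assuming the graph is already built in pGraph (the function name RunAtDecodeSpeed is just a placeholder of mine):

```cpp
// Sketch only -- pGraph is assumed to be an already-built IGraphBuilder*.
#include <dshow.h>

HRESULT RunAtDecodeSpeed(IGraphBuilder *pGraph)
{
    IMediaFilter *pMediaFilter = NULL;
    HRESULT hr = pGraph->QueryInterface(IID_IMediaFilter,
                                        (void **)&pMediaFilter);
    if (FAILED(hr)) return hr;

    // No reference clock: the renderer presents samples as soon as they
    // arrive instead of waiting for their presentation times.
    hr = pMediaFilter->SetSyncSource(NULL);
    pMediaFilter->Release();
    if (FAILED(hr)) return hr;

    IMediaControl *pControl = NULL;
    hr = pGraph->QueryInterface(IID_IMediaControl, (void **)&pControl);
    if (FAILED(hr)) return hr;

    hr = pControl->Run();  // graph now runs as fast as the decoder can go
    pControl->Release();
    return hr;
}
```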
What I am concerned about, though, is the accuracy of this data, since there is a large variation between the first few frames and the subsequent ones.
Are there any useful tools that can measure fps as a decode rate rather than a playback rate?