SysEx

asked on

Fading a 24 bit bitmap using win32

Is there a quick way to fade a 24-bit BMP from black to the original bitmap and back? My solution so far is to get the RGB values of the bitmap and scale them from 0 to x and back. Unfortunately this solution is quite slow for big (say 640x480) bitmaps because of the time it takes to calculate and blit the image (I use SetDIBitsToDevice). Is there a way to use the palette to do this (as I understand it, there is no real palette in 16- or 24-bit color modes)? Or to adjust the bitmap in video memory directly, in both 16- and 24-bit video modes? Could I use some ROP code and PatBlt to 'blend' colors? What is the fastest way? I'm using C and the win32 API.

Thanks,

Pieter.
ASKER CERTIFIED SOLUTION
dtowell
SysEx

ASKER

First of all, thanks for responding to my question. As you said, Get/SetDeviceGammaRamp() is of no practical use here. Also, the gamma ramp changes the colors of the entire screen, while I'm interested in fading a bitmap in a window.

The fastest method you describe is, as I said in my question, exactly what I'm using now (a code sketch of this loop follows below):
* Create a 'FadeTable' once (BYTE FadeTable[256][100]) in which every value linearly descends to zero.
* Use GetDIBits() to fill a BITMAPINFO structure.
* malloc two buffers, one for the source RGB values and one for the faded RGB values to use SetDIBitsToDevice() on.
* Use GetDIBits() to fill the source buffer.
Then, for every step 0-99:
* Copy every byte from the source buffer to the fade buffer using the FadeTable.
* Use SetDIBitsToDevice() to copy the image from the fade buffer to the screen.
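
Here is a minimal sketch of that loop, assuming a 24bpp bottom-up DIB where step 0 is black and step 99 is the original image. The function names are mine, and I have transposed the table to [step][value] so each step reads one contiguous 256-byte row:

#include <windows.h>

static BYTE FadeTable[100][256];   /* [step][value] */

void BuildFadeTable(void)
{
    int step, v;
    for (step = 0; step < 100; step++)
        for (v = 0; v < 256; v++)
            FadeTable[step][v] = (BYTE)(v * step / 99);
}

/* Remap every byte of the source image through the table for one step.
   cb is the size of the pixel data in bytes (rows padded to a DWORD
   boundary, 3 bytes per pixel at 24bpp). */
void ApplyFadeStep(const BYTE *src, BYTE *dst, DWORD cb, int step)
{
    const BYTE *tab = FadeTable[step];
    DWORD i;
    for (i = 0; i < cb; i++)
        dst[i] = tab[src[i]];
}

/* One step of the fade: remap, then push the faded buffer to the window. */
void BlitFadeStep(HDC hdc, const BITMAPINFO *bmi, const BYTE *src,
                  BYTE *dst, DWORD cb, int step)
{
    ApplyFadeStep(src, dst, cb, step);
    SetDIBitsToDevice(hdc, 0, 0,
                      bmi->bmiHeader.biWidth, bmi->bmiHeader.biHeight,
                      0, 0, 0, bmi->bmiHeader.biHeight,
                      dst, bmi, DIB_RGB_COLORS);
}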

For a bitmap the size of an average window (about 1MB of RGB data) this method is far too slow: both the fading and the SetDIBitsToDevice() take too long to get a smooth fade on my P133 with its S3 Trio64V+ based video card.

So my question is: is there a faster way that works on the average Win95 computer, in a 16-bit or better color mode?

Regards,

Pieter. (p.g.m.vandermeulen@student.utwente.nl)
There is still one thing you can do: use CreateDIBSection() to create/allocate the buffer.  Trust me, this is not just a convenient all-in-one-package function.  It creates the buffer "sitting on the fence" between user-mode and kernel-mode code, which removes a memory copy (from user to kernel mode) that occurs when using "roll your own" DIBs.  This will still not be real fast, because you still have to touch every pixel and then blit it, but it is the fastest non-DirectDraw method available in Win32.
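
As a sketch of what that looks like (24bpp bottom-up DIB again, error handling omitted, and the helper names are mine rather than a standard API):

#include <windows.h>

/* Allocate the fade buffer as a DIB section so GDI can blit it without
   the extra user-to-kernel memory copy. */
HBITMAP CreateFadeSurface(HDC hdc, int width, int height, VOID **ppBits)
{
    BITMAPINFO bmi;
    ZeroMemory(&bmi, sizeof(bmi));
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = height;   /* positive = bottom-up */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 24;
    bmi.bmiHeader.biCompression = BI_RGB;
    return CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, ppBits, NULL, 0);
}

/* One fade step: write the faded pixels straight into the section's bits,
   then BitBlt through a memory DC that has the section selected into it
   (reuses ApplyFadeStep from the earlier sketch). */
void FadeStepWithSection(HDC hdcWin, HDC hdcMem, BYTE *sectionBits,
                         const BYTE *srcBits, DWORD cbImage,
                         int width, int height, int step)
{
    GdiFlush();   /* let pending GDI operations finish before writing */
    ApplyFadeStep(srcBits, sectionBits, cbImage, step);
    BitBlt(hdcWin, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY);
}

Create the surface and SelectObject() it into the memory DC once, outside the fade loop, so the per-step cost is just the remap and the BitBlt.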

Another thing to consider is the maximum potential speed.  Say your bus is running at 60MHz.  Each faded pixel takes 9 bus cycles (read pixel value, read faded value, write pixel, for each of the R, G and B values) and each pixel transferred to the display takes 2 cycles (read 4 bytes, write 4 bytes).  Assuming a 640x480 image, it takes 3.4M cycles to do one fade step; that's a little over 17 frames per second.  Of course, that assumes no overhead, no cache misses and fully pipelined code.  None of these are good assumptions.
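
Checking that arithmetic under the same assumptions (the cycle counts are the ones stated above, not measurements):

#include <stdio.h>

/* Back-of-the-envelope check: 60MHz bus, 9 cycles to fade each pixel
   plus 2 cycles to move it to the display, 640x480 image. */
int main(void)
{
    long   pixels = 640L * 480L;        /* 307,200 pixels        */
    long   cycles = pixels * (9 + 2);   /* ~3.4M cycles per step */
    double fps    = 60e6 / cycles;      /* ~17.8 frames/second   */
    printf("%ld cycles per fade step, %.1f fps max\n", cycles, fps);
    return 0;
}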
SysEx

ASKER

Using CreateDIBSection() did speed up the blitting a bit. The limiting factor is the huge amount of data that has to be moved. It would be nice if there were a function that could translate RGB values while blitting them from a DIB to the screen.