
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 615

Changing pixel depth in bitmaps

The problem is that when I'm in 16-bit+ color modes and I want to create a bitmap with a color format of 8 bits per pixel, the bitmap is drawn with colors that I do not want.  The way I create an 8-bits-per-pixel bitmap is to first create a bitmap in the current color mode, then use GetDIBits to convert it to 8 bits per pixel.  However, the documentation for GetDIBits says that during this conversion it synthesizes its own color table from a general mix of 256 colors, including the 20 static colors defined in the default logical palette.  This is clearly NOT the color table I want.  I want the 8-bits-per-pixel bitmap to use the same colors the original bitmap was drawn with.  (The original bitmap is definitely not drawn with more than 256 colors - it uses only about 20 different colors.)  The bitmap is a picture of the view within my application.
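For reference, a minimal sketch of the kind of GetDIBits conversion described above (the function name and parameters are illustrative, not from my actual code):

#include <windows.h>

// Sketch of the GetDIBits conversion described above; all names are
// illustrative.  pBits must point to a buffer large enough for the
// 8 bpp pixel data (scan lines padded to 4-byte boundaries).
BOOL ConvertTo8Bpp(HDC hdc, HBITMAP hbmView, int width, int height, BYTE* pBits)
{
    // BITMAPINFO with room for a full 256-entry color table
    BYTE infoBuf[sizeof(BITMAPINFOHEADER) + 256 * sizeof(RGBQUAD)] = {0};
    BITMAPINFO* pbmi = (BITMAPINFO*)infoBuf;
    pbmi->bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    pbmi->bmiHeader.biWidth       = width;
    pbmi->bmiHeader.biHeight      = height;
    pbmi->bmiHeader.biPlanes      = 1;
    pbmi->bmiHeader.biBitCount    = 8;       // request an 8 bpp DIB
    pbmi->bmiHeader.biCompression = BI_RGB;

    // GetDIBits converts the pixels, but it also fills pbmi->bmiColors
    // with its own synthesized color table; this is the behavior I
    // don't want.
    return GetDIBits(hdc, hbmView, 0, height, pBits, pbmi, DIB_RGB_COLORS) != 0;
}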

I've also thought of BitBlt-ing from a 16+ bit bitmap to an 8-bit bitmap, but that means I have to create an 8-bit bitmap and select it into my destination device context.  The documentation says that when I do a SelectObject on a bitmap, the bitmap's color format must be the same as the device context's color format.  And I am not sure how to create a device context with an 8-bit color format when my system is in a 16-bit+ color mode.  How do I do this?
 
In short, when I am in 16-bit+ color resolution modes, I want to create an 8-bits-per-pixel bitmap (of my current view) that uses a 256-color palette that I specify.  How do I do this?  Thanks for your response(s).
   
Sulaco Asked:
1 Solution
 
byang Commented:
Set up a BITMAPINFO structure, make it 8-bit, and put your palette's RGB values in the bmiColors field. Then call CreateDIBSection(NULL, pbmi, DIB_RGB_COLORS, &pvBits, NULL, 0) to create the bitmap; it returns an HBITMAP handle. Then load your bitmap data into the buffer pointed to by pvBits, and use SetDIBitsToDevice() to draw it.
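A minimal sketch of that setup (the function name and the caller-supplied 256-entry palette array are illustrative assumptions):

#include <windows.h>
#include <string.h>

// Create an 8 bpp DIB section whose color table is the caller's
// 256-entry palette.  *ppvBits receives a pointer to the pixel buffer.
HBITMAP Create8BppDib(int width, int height, const RGBQUAD palette[256], void** ppvBits)
{
    // BITMAPINFO only declares one RGBQUAD, so allocate room for 256.
    BYTE infoBuf[sizeof(BITMAPINFOHEADER) + 256 * sizeof(RGBQUAD)] = {0};
    BITMAPINFO* pbmi = (BITMAPINFO*)infoBuf;

    pbmi->bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    pbmi->bmiHeader.biWidth       = width;
    pbmi->bmiHeader.biHeight      = height;   // positive = bottom-up DIB
    pbmi->bmiHeader.biPlanes      = 1;
    pbmi->bmiHeader.biBitCount    = 8;        // 8 bits per pixel
    pbmi->bmiHeader.biCompression = BI_RGB;
    pbmi->bmiHeader.biClrUsed     = 256;      // full 256-entry color table

    memcpy(pbmi->bmiColors, palette, 256 * sizeof(RGBQUAD));

    // The HDC may be NULL when the usage is DIB_RGB_COLORS.
    return CreateDIBSection(NULL, pbmi, DIB_RGB_COLORS, ppvBits, NULL, 0);
}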

Sulaco (Author) Commented:
Well, first, I don't want to draw the 8 BPP bitmap; I want to save it out to a file.  (So I don't need to use SetDIBitsToDevice, right?)  Second, I still don't understand how loading my 16+ BPP bitmap data into the buffer pointed to by pvBits will create an 8 BPP bitmap for me.  The original bitmap I have created is in 16+ BPP.  If I simply loaded this data into the buffer pointed to by pvBits, it wouldn't change the color format from 16+ BPP to 8 BPP, would it?

JamieR Commented:
First off, DIBs of more than 256 colours do not have palettes; they simply store the actual RGB data for each pixel rather than an index.  To reduce to 8 bits, create an 8-bit DIB, then step through each pixel in the 16-bit image, read its RGB value, and step through each entry in your 8-bit palette until you find the nearest colour.  Store that index in the corresponding pixel of your 8-bit image.
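A minimal sketch of that loop (assuming the source pixels have been read out as 24-bit BGR triples and 'palette' is your 256-entry table; all names are illustrative):

#include <windows.h>
#include <limits.h>

// Find the palette index whose colour is nearest (by squared RGB
// distance) to the given pixel.
BYTE NearestIndex(BYTE r, BYTE g, BYTE b, const RGBQUAD palette[256])
{
    int best = 0;
    long bestDist = LONG_MAX;
    for (int i = 0; i < 256; ++i) {
        long dr = (long)r - palette[i].rgbRed;
        long dg = (long)g - palette[i].rgbGreen;
        long db = (long)b - palette[i].rgbBlue;
        long dist = dr*dr + dg*dg + db*db;
        if (dist < bestDist) { bestDist = dist; best = i; }
    }
    return (BYTE)best;
}

// Map a buffer of BGR triples to 8-bit palette indices.  For simplicity
// this ignores the 4-byte scan-line padding a real DIB buffer has; a
// production version would walk rows using the stride.
void MapTo8Bpp(const BYTE* srcBGR, BYTE* dstIdx, int nPixels, const RGBQUAD palette[256])
{
    for (int i = 0; i < nPixels; ++i) {
        const BYTE* p = srcBGR + 3 * i;   // DIB pixel order is B, G, R
        dstIdx[i] = NearestIndex(p[2], p[1], p[0], palette);
    }
}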

Windows makes a (very poor) attempt at doing this automatically if you try to display a 16-bit image under an 8-bit driver, which is what you are seeing at the moment.

Jamie

