• Status: Solved
• Priority: Medium
• Security: Public
• Views: 459

# Required: a bit-shifting algorithm to convert a 16-bit color to a 32-bit one.

I need a C algorithm that can convert a 16-bit color in R,G,B = 5,6,5 format to a 32-bit COLORREF of the format 0x00bbggrr.

Dawkins
1 Solution

Commented:
// generates 32-bit COLORREF
long rgb_to_colorref(int r, int g, int b) {
return (r + g<<8 + b<<16);
}

Commented:
// plus some brackets
long rgb_to_colorref(int r, int g, int b) {
return (r + (g<<8) + (b<<16));
}

Author Commented:
Hi Burtday, I have a 16-bit value in 5,6,5 format rather than 3 nice RGB integers.  It's converting the asymmetrical 5,6,5 part that I'm having problems with.  I can think of a long-winded way of doing it involving a 32-entry lookup array which matches to the nearest colour, but I'm sure there must be a more elegant way.

So the function would be of the form:

uint32 16BitToCOLORREF(uint16 16BitColor)
{
...
}

Commented:
I don't understand 5,6,5 as an integer.

Commented:
Maybe the function should be of the form

unsigned long 16bitto32(unsigned short)
{
/* ... cast the result to an unsigned long and return it */
}

then print the returned value with
printf("%x", ...); /* displays the unsigned value in hex format */


Commented:
dawkins
Firstly, RGB(x,y,z) is a 24-bit color format because the 3 parameters x, y, z each have a minimum size of 1 byte (in which case they are chars); they could be other integral datatypes. So a 16-bit color format means it cannot be RGB; it's got to be any two of the basic colors.

And secondly, do you want the exact algorithm that will combine the values you pass to a function like RGB(x,y,z) to generate the 2^24 color combinations, or just any algorithm that does the conversion of the format (typecasting) between 16 and 32 bits?

Author Commented:

Let me explain the value that I have to start with and what I need to turn it into:  I have a 16 bit value, with 5 bits assigned to red, 6 bits to Green, and 5 bits to blue.  I need to convert this into a 32 bit COLORREF, with 8 spare bits, 8 for blue, 8 for green, 8 for red.

Commented:
(in^0x001F) + ((in^0x07E0)<<8) + ((in^0xF800)<<16)

Commented:
That masks the uint16 in into the 3 sections, shifting bits 11-15 (the high bits) left 16 bits, bits 5-10 left 8, and leaving bits 0-4 where they were... _that's not right, though_ it will still map each input to a distinct color based on the correct r/g/b components (assuming the channel order is the same in the 2 structures?)

How important is speed? To spread it so black is black and white is white, you need to map the 5/6-bit numbers to 8 bits each: (out = in*8/5) or (out = (int)round((double)in*8/5)), depending.

Author Commented:
No the order is inverted between the two structures.  The order is R,G,B in the 5,6,5 input.  In the COLORREF output it needs to be B,G,R.

Speed is important.  It has to do this per pixel :D

Commented:
For speed, these algorithms are biased towards black.
I'm not sure whether promotion to a 32-bit int has to be forced here.
/* masks high 5 bits, shifts them right, then multiplies to fill 8 bits */
/* masks middle 6 bits, shifts them right, multiplies to fill 8 bits, then shifts left 8 bits */
/* masks low 5 bits, multiplies to fill 8 bits, shifts left 16 bits */
/* and adds those 3 values */
uint32 out = ((in^0xF800)>>11)*8/5 + (((in^0x07E0)>>5)*8/6<<8) + ((in^0x001F)*8/5<<16);

/* it might be more straightforward to use structs */

struct {
int b : 5; /* first allocated = low bits */
int g : 6;
int r : 5;
} color_16;

uint32 out = (color_16.r*8/5) + (color_16.g*8/6 << 8) + (color_16.b*8/5 << 16);

Commented:
If in is a 16 bit integer, I think suffixing the hex masks with an L will cause promotion of the expression to long.

Author Commented:
Don't think that works - I used that formula with the value:

uint16 in = 0;

and it gives me a return value of 3232817!


Commented:
#include <stdio.h>
unsigned long _16to32(unsigned short);
int main(void)
{
unsigned short i = 12;
unsigned long k;
k = _16to32(i);
return 0;
}

unsigned long _16to32(unsigned short i)
{
unsigned long r, g, b, final;
r = g = b = final = 0;
r = i & 0x001f;
g = i & 0x07e0;
b = i & 0xf800;
final = final | r;
final = final << 8;
final = final | g;
final = final << 8;
final = final | b;
final = final << 8;
return final;
}

Commented:
I hope the above code is of some help to you.
I have extracted the first 5 bits (red) by bitwise AND, then the next 6 bits (green), and then the last 5 bits (blue) into r, g, b,
and then pushed these values into a 32-bit unsigned integer
in the order high-order 8 bits (red), then green, then blue, with zeroes last, and returned the 32-bit value.

Commented:
I'm a complete idiot. All the "^"s should be "&"s. And the silly "8/5"s should really just be shifts "<<(8-5)".

/* ok, it works this time... The 1st 3 terms move the bits to the high part of their new 32 bit home, the second 3 copy the high bits of each color to the unfilled bits (red 12345 -> 12345123). If speed's an issue, the last 3 terms could conceivably be left out for a 3.1% accuracy loss */
out = ((in&0xF800)>>8) + (((in&0x07E0)<<5)) + ((in&0x001F)<<19)
+ ((in&0xE000)>>13) + (((in&0x0600)>>1)) + ((in&0x001C)<<14);

Commented:
change the main() in the above code to this:
#include <stdlib.h>
int main(void)
{
unsigned short i;
unsigned long k;
if (scanf("%hu", &i) != 1)
exit(1);
if (i > 64000)
exit(1);
k = _16to32(i);
printf("%lu\n", k);
return 0;
}

Commented:
My "+"s could be changed to "|"s, too... might lead to a slight performance gain (though I don't think so).

Commented:
The following should do it, and it should be clear what it's doing.

uint32 16BitToCOLORREF(uint16 16BitColor)
{
uint32 bb;
uint32 gg;
uint32 rr;

/* Get the 5-bit blue, 6-bit green, and 5-bit red values
* from the 16-bit color */
bb = 16BitColor & 0x1f;
gg = (16BitColor >> 5) & 0x3f;
rr = (16BitColor >> 11);

return( (bb << 16) | (gg << 8) | (rr) );
}


Commented:
If you're using C++ or C99, you can make it an inline function to improve performance further (otherwise, maybe a macro's in order)

inline uint32 16BitToCOLORREF(uint16 in)
{
return ((in & 0xF800) >> 8) | ((in & 0x07E0) << 5) | ((in & 0x001F) << 19)
| ((in & 0xE000) >> 13) | ((in & 0x0600) >> 1) | ((in & 0x001C) << 14);
}
/* or */
inline uint32 16BitToCOLORREF_cheap(uint16 in)
{
return ((in & 0xF800) >> 8) | ((in & 0x07E0) << 5) | ((in & 0x001F) << 19);
}

Commented:
I would prepare a table of long, with a size of 2^16,
meaning a 64K*4=256K bytes.

Initialization would be something like this:
#include <stdlib.h> /* for malloc */

unsigned long *PrepareTable()
{
int i;
unsigned long *table;
table = malloc( (1<<16) * sizeof(unsigned long) );

for(  i = 0 ; i < (1<<16) ; i++ )
{
table[i] =
((i>>11)&0x1f) | ((i<<3)&0x3f00) | ((i<<16)&0x1f0000);
}

return table;
}

Conversion:
unsigned long Convert(unsigned short col)
{
return table[col];
}

Commented:
Hi Dawkins,
The RGB is stored in 2 bytes. Do you know how an int is stored in memory?

Author Commented:
Thanks guys,

I haven't had a chance to try these solutions out.  I'm off on holiday for a week, but when I'm back I will try them out and award the points to someone - thanks for the help!

Commented:
I'd like to plug amir_0895's idea of a table, but I'd use my more complex algorithm (inline uint32 16BitToCOLORREF(uint16 in)) above to initialise each element - the extra cost isn't significant if it's once-off like that, and the added precision is a good thing.
