
I have a .NET (C#) application that requires a color value in what appears to be a base-10 format, but which must be preceded by a minus sign. For example, -16776961 represents a standard blue color. My questions are: what color standard is this? And why would this format, whatever it is, be used instead of defining the color using the hex value?

Explanation:

Firstly, an RGB colour is made up of three values: one each for red, green, and blue. Typically you would use a byte (8 bits) for each value. So to store 3 bytes' worth of data you need a 32-bit int (a 16-bit int could only hold 2 bytes). But a 32-bit int has 4 bytes, and for RGB you only need 3, so the first byte is left over (in practice it usually carries the alpha channel). The second byte represents red, the third represents green, and the fourth represents blue.

If you take a look at what -16776961 converts to in binary (i.e. into 32 bits) you'll see how this fits in. See http://www.binaryconvert.com/result_signed_int.html?decimal=045049054055055054057054049
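The byte layout described above can be sketched in code. This is a Java example rather than C# (Java's int is also a signed 32-bit two's-complement value, so the arithmetic is identical); the class name is just for illustration:

```java
public class ArgbDemo {
    public static void main(String[] args) {
        int argb = -16776961; // the colour value from the question

        // Shift each byte down to the low end and mask off the rest.
        int alpha = (argb >> 24) & 0xFF; // first byte
        int red   = (argb >> 16) & 0xFF; // second byte
        int green = (argb >> 8)  & 0xFF; // third byte
        int blue  = argb         & 0xFF; // fourth byte

        System.out.printf("A=%d R=%d G=%d B=%d%n", alpha, red, green, blue);
        // prints A=255 R=0 G=0 B=255, i.e. fully opaque blue
    }
}
```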

Now, for the record, a hex colour value is the same as an RGB value. Hex means base 16, so a single hex digit only needs 4 bits (2^4 = 16). But, for example, the blue component of a colour is 1 byte (8 bits), so you use two hex digits to represent that byte. How would you represent the blue colour? You would use #0000FF, where the first two hex digits are the 8 bits for red, the next two are the 8 bits for green, and the last two are the 8 bits for blue.
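To see that the hex notation and the integer are the same value, you can parse the six digits of #0000FF and pull the channels back out. A small Java sketch (Java ints behave like C# ints here; the class name is illustrative):

```java
public class HexRgbDemo {
    public static void main(String[] args) {
        // Parse the six hex digits of #0000FF into an int.
        int rgb = Integer.parseInt("0000FF", 16); // 255 in decimal

        // Two hex digits per channel: red, then green, then blue.
        int red   = (rgb >> 16) & 0xFF;
        int green = (rgb >> 8)  & 0xFF;
        int blue  = rgb         & 0xFF;

        System.out.printf("R=%d G=%d B=%d%n", red, green, blue);
        // prints R=0 G=0 B=255
    }
}
```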

Hope that helps.

In hex the color would be AARRGGBB, where AA is usually FF for non-transparent, so a full green color becomes FF00FF00, which reads as a negative value when stored in a signed 32-bit integer.
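You can see this sign flip directly: any colour with an FF alpha byte has its top bit set, so the signed 32-bit value is negative. A Java sketch (Java's int is signed 32-bit, just like C#'s):

```java
public class NegativeArgbDemo {
    public static void main(String[] args) {
        // 0xFF00FF00 sets the top (alpha) byte to FF, so the sign bit is 1
        // and the value is negative when read as a signed 32-bit int.
        int green = 0xFF00FF00;
        System.out.println(green); // prints -16711936

        int blue = 0xFF0000FF;
        System.out.println(blue);  // prints -16776961, the value in the question
    }
}
```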


You really need to think in bits and bytes. A 32-bit integer contains exactly 32 bits. It might look something like this

00000000000000000000000000000000

or this

11111111111111111111111111111111

or this

00101001011110111011100101101001

These are what a 32-bit int might look like in binary. In decimal, the range of a 32-bit signed integer is -2147483648 to 2147483647.

Let's take the colour value -16776961 as an example. It has a binary value of

11111111000000000000000011111111

Now, we know that since there are 8 bits in a byte, there are 4 bytes in a 32-bit integer. I'll separate the bytes with an underscore _

11111111_00000000_00000000_11111111
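You don't have to work the binary out by hand. A Java one-liner can print all 32 bits of the negative value (Java ints are signed 32-bit, like C#'s; the class name is illustrative):

```java
public class BinaryDemo {
    public static void main(String[] args) {
        // Integer.toBinaryString shows all 32 bits of a negative int.
        String bits = Integer.toBinaryString(-16776961);
        System.out.println(bits);
        // prints 11111111000000000000000011111111
    }
}
```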

As bkokx explained, if we take a 32-bit integer to represent ARGB, then we know that

Alpha = 11111111

Red = 00000000

Green = 00000000

Blue = 11111111

Now, hex is base 16. That means there are 16 different digits available: 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F. Each hex digit represents exactly 4 bits.

So, let's separate the bits using underscores so we can work out the hex values

Alpha = 1111_1111

Red = 0000_0000

Green = 0000_0000

Blue = 1111_1111

Let's swap each 4-bit piece for its equivalent hex value

Alpha = F_F

Red = 0_0

Green = 0_0

Blue = F_F

If we join them all back together without the underscores we get

FF0000FF

The first two digits represent the alpha. If we're only worried about RGB and not ARGB, then we just ignore them. So the RGB value is

0000FF.
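The whole walkthrough above can be compressed into two operations: print the eight hex digits, then mask off the alpha byte to keep only the RGB part. A Java sketch (same signed 32-bit int semantics as C#; the class name is illustrative):

```java
public class HexMaskDemo {
    public static void main(String[] args) {
        int argb = -16776961;

        // All eight hex digits, alpha first.
        System.out.println(Integer.toHexString(argb)); // prints ff0000ff

        // Mask off the alpha byte to keep only the RGB part.
        int rgb = argb & 0x00FFFFFF;
        System.out.printf("%06x%n", rgb); // prints 0000ff
    }
}
```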

If that doesn't make sense, then here are some references to learn the relationship between binary, hex, decimals and ints.

http://www.swarthmore.edu/NatSci/echeeve1/Ref/BinaryMath/NumSys.html

http://en.wikipedia.org/wiki/Binary_numeral_system

http://msdn.microsoft.com/en-us/library/system.int32.aspx