I have a .Net (C#) application that requires a color value in what appears to be a base 10 format but which must be preceded by a minus sign. For example: -16776961 represents a standard blue color. My questions are: what color standard is this? Why would this format - whatever it is - be used instead of defining the color using the hex value?
This value you provided does actually convert to an RGB value.
Explanation:
Firstly, an RGB colour is made up of 3 values - one each for red, green and blue. Typically you would use a byte (8 bits) for each value. So to store 3 bytes' worth of data you would need a 32 bit int (a 16 bit int could only hold 2 bytes). But a 32 bit int has 4 bytes, and for RGB you only need 3 bytes, so the first byte goes unused for plain RGB (in practice it usually holds the alpha/transparency value). The second byte represents red, the third represents green and the fourth represents blue.
Now, for the record, a hex colour value is the same as an RGB value. Hex means base 16, so a single hex digit only needs 4 bits (2^4 = 16). But, for example, the blue channel is 1 byte (8 bits), so you would use 2 hex digits to represent that byte. How would you represent the colour blue? You would use #0000FF, where the first two hex digits are the 8 bits for red, the next two are the 8 bits for green, and the last two are the 8 bits for blue.
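The packing described above can be sketched with a few bit shifts. A minimal illustration (shown in Python for brevity - the shifts behave the same on a 32 bit int in C#, and the function name is made up for this example):

```python
def pack_rgb(r, g, b):
    """Pack three 8-bit channel values into one integer as 0xRRGGBB."""
    return (r << 16) | (g << 8) | b

blue = pack_rgb(0x00, 0x00, 0xFF)
print(hex(blue))  # 0xff -> the same value that #0000FF describes
```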
Hope that helps.
Kias - this is a very helpful explanation - thank you. I am a little confused with the conversion from hex to decimal. For example if I have a hex value = 00FF00 (which I think should be green), then how do I get to the decimal value with the minus sign? The utility does not seem to produce the correct result. Thanks again.
In hex the colour would be AARRGGBB, where AA is usually FF for non-transparent (fully opaque), so a full green colour would become FF00FF00, which results in a negative value when stored in a signed 32 bit integer.
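You can check that claim with a little arithmetic. A sketch (Python here for illustration; the helper name is made up, and in C# a 32 bit int holds this bit pattern natively, so no conversion step is needed):

```python
def to_signed32(u):
    """Interpret an unsigned 32-bit bit pattern as a signed (two's-complement) value."""
    return u - 0x100000000 if u >= 0x80000000 else u

full_green = 0xFF00FF00          # AA=FF, RR=00, GG=FF, BB=00
print(to_signed32(full_green))   # -16711936, a negative 32-bit value
```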
You really need to think in bits and bytes. A 32 bit integer contains exactly 32 bits; it might look something like this
00000000000000000000000000000000
or this
11111111111111111111111111111111
or this
00101001011110111011100101001011
Those are what a 32 bit int might look like in binary. In decimal, a signed 32 bit integer ranges from -2147483648 to 2147483647.
Let's take the colour value -16776961 for example. It has a binary value of
11111111000000000000000011111111
Now, we know that since there are 8 bits in a byte, there are 4 bytes in a 32 bit integer. I'll separate the bytes with an underscore _
11111111_00000000_00000000_11111111
As bkokx explained, if we take the 32 bit integer to represent ARGB, then we know that
Alpha = 11111111
Red = 00000000
Green = 00000000
Blue = 11111111
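That channel splitting can be done with shifts and masks. A sketch (Python for illustration; the `& 0xFFFFFFFF` step recovers the raw 32 bit pattern, which a C# int already holds directly):

```python
value = -16776961
bits = value & 0xFFFFFFFF            # raw 32-bit two's-complement pattern
print(format(bits, "032b"))          # 11111111000000000000000011111111

alpha = (bits >> 24) & 0xFF          # 255
red   = (bits >> 16) & 0xFF          # 0
green = (bits >> 8)  & 0xFF          # 0
blue  = bits         & 0xFF          # 255
print(alpha, red, green, blue)
```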
Now, hex is base 16. That means there are 16 different digits available: 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F. Storing any one of those digits takes 4 bits. Since there are 8 bits in a byte, we represent each byte using two hex digits.
So, let's separate the bits using underscores so we can work out the hex values
Alpha = 1111_1111
Red = 0000_0000
Green = 0000_0000
Blue = 1111_1111
Let's swap each 4 bit piece for its equivalent hex digit
Alpha = F_F
Red = 0_0
Green = 0_0
Blue = F_F
If we join them all back together without the underscores we get
FF0000FF
The first two digits represent the alpha. If we're only worried about RGB and not ARGB, then we just ignore them. So the RGB value is
0000FF.
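Putting all of the steps above together also answers the earlier question about 00FF00 (green): prepend the opaque FF alpha byte, then read the 32 bit pattern as a signed integer. A sketch (Python; the function name is made up for this example):

```python
def rgb_hex_to_signed_argb(rgb_hex):
    """'RRGGBB' hex string -> the signed 32-bit value of FFRRGGBB."""
    argb = 0xFF000000 | int(rgb_hex, 16)
    return argb - 0x100000000 if argb >= 0x80000000 else argb

print(rgb_hex_to_signed_argb("00FF00"))  # -16711936 (green)
print(rgb_hex_to_signed_argb("0000FF"))  # -16776961 (the blue from the question)
```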
If that doesn't make sense, here are some references for learning the relationships between binary, hex, decimal and integers.
What is the usage of the colors in the application?