dev775
asked:
Definition of Color Value
I have a .NET (C#) application that requires a color value in what appears to be a base-10 format but which must be preceded by a minus sign. For example: -16776961 represents a standard blue color. My questions are: what color standard is this? Why would this format - whatever it is - be used instead of defining the color using the hex value?
ASKER
This is a database application. The color values set the color of a row in a gridcontrol. Thanks for your help!
The value you provided does actually convert to an RGB value.
Explanation:
Firstly, an RGB colour is made up of three values - one each for red, green, and blue. Typically you would use a byte (8 bits) for each value. So to store 3 bytes' worth of data you need a 32-bit int (a 16-bit int could only hold 2 bytes). But a 32-bit int has 4 bytes, and for plain RGB you only need 3, so the first byte would appear to be unused. The second byte represents red, the third green, and the fourth blue.
If you take a look at what -16776961 converts to in binary (i.e. as a 32-bit value), you'll see how this fits together. See http://www.binaryconvert.com/result_signed_int.html?decimal=045049054055055054057054049
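For a concrete picture, here is a minimal C# sketch (my own illustration, not part of the original answer) that pulls the four bytes back out of -16776961:

using System;

class ColorBytes
{
    static void Main()
    {
        int value = -16776961; // the value from the question

        // View the signed int as a raw 32-bit pattern, then shift and
        // mask to read off each of the four bytes.
        uint bits = unchecked((uint)value);        // 0xFF0000FF
        byte extra = (byte)((bits >> 24) & 0xFF);  // 0xFF - the "spare" leading byte
        byte red   = (byte)((bits >> 16) & 0xFF);  // 0x00
        byte green = (byte)((bits >> 8) & 0xFF);   // 0x00
        byte blue  = (byte)(bits & 0xFF);          // 0xFF

        // Prints: 0xFF0000FF -> extra=255 R=0 G=0 B=255, i.e. blue
        Console.WriteLine($"0x{bits:X8} -> extra={extra} R={red} G={green} B={blue}");
    }
}

(As the later answers explain, that leading byte is not actually spare - it is the alpha channel.)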
Now, for the record, a hex colour value is the same as an RGB value. Hex means base 16, so a single hex digit only needs 4 bits (2^4 = 16). But, for example, the colour blue is represented by 1 byte (8 bits), so you use 2 hex digits to represent that byte. How would you represent blue? You would use #0000FF, where the first two hex digits are the 8 bits for red, the next two are the 8 bits for green, and the last two are the 8 bits for blue.
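To tie the hex spelling to decimal, here is a small sketch of my own showing that the plain RGB part of #0000FF is just the positive number 255; the minus sign only appears once the fourth byte comes into play, as discussed further down:

using System;

class HexToDecimal
{
    static void Main()
    {
        // Each pair of hex digits is one 8-bit channel:
        // "0000FF" -> red = 0x00, green = 0x00, blue = 0xFF.
        int rgb = Convert.ToInt32("0000FF", 16);
        Console.WriteLine(rgb);               // 255 - small and positive

        // Packing the three channels by hand gives the same number.
        int red = 0x00, green = 0x00, blue = 0xFF;
        int packed = (red << 16) | (green << 8) | blue;
        Console.WriteLine(packed == rgb);     // True
    }
}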
Hope that helps.
ASKER
Kias - this is a very helpful explanation - thank you. I am a little confused by the conversion from hex to decimal. For example, if I have a hex value of 00FF00 (which I think should be green), how do I get to the decimal value with the minus sign? The utility does not seem to produce the correct result. Thanks again.
A complete color code is not only the RGB value but ARGB: it is preceded by alpha, or transparency if you like. See http://msdn.microsoft.com/en-us/library/system.drawing.color.aspx.
In hex the color would be AARRGGBB, where AA is usually FF for non-transparent, so a full green color would become FF00FF00, which results in a negative value when stored as a signed 32-bit integer.
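Here is a short C# check of that claim (my own sketch; it assumes a reference to System.Drawing, which is the System.Drawing.Common package on modern .NET):

using System;
using System.Drawing;

class ArgbDemo
{
    static void Main()
    {
        // FF00FF00 = alpha FF (opaque), red 00, green FF, blue 00.
        // The leading FF sets the sign bit, so the signed 32-bit
        // integer comes out negative.
        int fullGreen = unchecked((int)0xFF00FF00);
        Console.WriteLine(fullGreen);                 // -16711936

        // System.Drawing.Color uses exactly this packed ARGB layout.
        Console.WriteLine(Color.FromArgb(fullGreen)); // Color [A=255, R=0, G=255, B=0]
        Console.WriteLine(Color.Blue.ToArgb());       // -16776961, the value from the question
    }
}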
ASKER
This was an excellent and very thorough response!
What are the colors used for in the application?