How do you change the fore color of an image?

Posted on 2014-10-02
Last Modified: 2014-10-22
I've got an image "mask" (e.g. the outline of an image, all in white, with appropriate transparency). I need to be able to draw this to an image, but change the color (say, to a shade of blue) without messing up the transparency.

The only methods I've found require you to iterate through each pixel in the image. That isn't an option. Is there another, cleaner method for simply changing the shade of color of an image?
Question by: Javin007
LVL 36

Expert Comment

ID: 40358492
Is there another, cleaner method for simply changing the shade of color of an image?
Assuming that you are using BufferedImage objects, then yes, you can use implementations of BufferedImageOp to do this kind of processing. One such implementation is RescaleOp, which should do exactly what you want. Check the following...

// Get your image somehow. Assuming your image has 4 channels (RGBA), all RGB
// channels are at full value (pure white) and only the Alpha channel varies
// for your transparency.
BufferedImage image = ....

RescaleOp rescale = new RescaleOp(
        new float[] { 0f, 0f, 1f, 1f },    // scale factors for R, G, B, A
        new float[] { 0f, 0f, 0f, 0f },    // offsets added after scaling
        null);

// Now there are a number of ways to "apply" the rescale op:

// 1. Change the image in place, i.e. overwrite the original in-memory image
rescale.filter(image, image);

// 2. Apply the rescale as you draw the image to the output/another image, i.e.
// keep the original in-memory image intact so that you could draw it again with
// a different RescaleOp that produces a different colour, etc.
Graphics2D g2d = .....    // Your output, either a visible component or another image, etc.
g2d.drawImage(image, rescale, x, y);

// 3. You could also wrap the op in a BufferedImageFilter if that's what you are using...


The values passed to the RescaleOp constructor determine the colour that will be produced. In this example, the first array of floats holds the "scaling factors" for each channel of the source image. I am zeroing out the Red and Green channels and leaving the Blue channel as is, which changes the white mask to a blue one; the Alpha channel is also left as is so that your transparency is maintained. The second array of floats holds "offsets" that get added to the scaled source channels, but we don't need that here, hence they're all zero.

So you would just have to fiddle with the first 3 floats in the first array to change the resultant colour.
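Putting that together, a small self-contained sketch of the technique (the `MaskTint` class name and `tint` helper are illustrative, not from the thread) — it derives the three scale factors from an 8-bit target colour by dividing each component by 255:

```java
import java.awt.image.BufferedImage;
import java.awt.image.RescaleOp;

public class MaskTint {
    // Tint a white RGBA mask to the given 8-bit colour, leaving alpha alone.
    // Assumes the source pixels are pure white (255,255,255), so scaling each
    // channel by c/255 yields exactly the target colour.
    static BufferedImage tint(BufferedImage mask, int r, int g, int b) {
        float[] scales  = { r / 255f, g / 255f, b / 255f, 1f }; // R, G, B, A factors
        float[] offsets = { 0f, 0f, 0f, 0f };                   // nothing added
        RescaleOp op = new RescaleOp(scales, offsets, null);
        return op.filter(mask, null); // null destination -> a new image is created
    }

    public static void main(String[] args) {
        BufferedImage mask = new BufferedImage(2, 2, BufferedImage.TYPE_INT_ARGB);
        mask.setRGB(0, 0, 0xFFFFFFFF); // opaque white pixel
        mask.setRGB(1, 1, 0x00FFFFFF); // fully transparent white pixel
        BufferedImage blue = tint(mask, 0, 0, 255);
        System.out.printf("%08X%n", blue.getRGB(0, 0)); // prints FF0000FF
        System.out.printf("%08X%n", blue.getRGB(1, 1)); // alpha stays 0
    }
}
```
Note this works per-band because TYPE_INT_ARGB is non-premultiplied; with four constants, RescaleOp scales the alpha band too, which is why the fourth factor must be 1f.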

Author Comment

ID: 40368989
Ack!  So sorry, McCarl.  This is the second time you've helped me and I've given no response.  :/  I don't think I'm getting notifications that people have responded anymore.  Let me play with what you've got here, and I'll get back to you.

Author Comment

ID: 40369004
Okay, I've read over what you said but haven't messed with it in code yet. Looks simple enough.

Now my question is, what level of accuracy do you get here?  I'm hoping to use this as a method of making it simple to "pick" items with a mouse in a 2D game.  So every object on-screen will also have its "mask" drawn to an off-screen buffer where their RGB value is their unique identifier.  When a user clicks the screen, I will simply check the color of the corresponding pixel on the off-screen buffer to see which object was clicked.  (Unless you have a sexier method, that's all I've got.)  

So if this is using floats, how accurate can I rely on the value being?  Since a "1" on the RGB scale would have a difference of 0.00390625, and I know that floating point numbers can sometimes get wonky, is there a risk of a specific color not actually being the exact color I'm looking for after it's been drawn to a buffer?

And you wouldn't happen to know the "getPixelColor" equivalent off the top of your head?

Thanks for all your help!

LVL 36

Accepted Solution

mccarl earned 500 total points
ID: 40369719
Firstly, I don't think there will be any problem with the precision of the float data type with what you are trying to do.

Secondly, the way you are implementing this MAY work but there would be a number of considerations to make.

Understand that every pixel in your square mask image will have that colour. Say you have a mask image of a circle: the image itself is square, and ALL the pixels have colour, say, (123, 234, 135); it is just that the pixels inside the circle have an alpha (transparency) value of 1.0f, fully opaque, the pixels outside the circle have an alpha of 0.0f, fully transparent, and some pixels on the edge of the circle are somewhere in between to give the antialiased effect. If the user clicked somewhere just outside the circle (but inside the square bounding box of the image) you would still get the colour (or "identifier") of this object. You could cater for this by only registering a "hit" if the alpha component retrieved is greater than a threshold, say 0.1f or whatever works for you.
Do any of these objects overlap, either in terms of their square image bounding boxes or of the actual shapes represented by the masks? If so, then the Composite rule used when you "draw" each mask to your off-screen image will determine exactly what the resultant off-screen image looks like; where objects overlap, the resultant pixel colours will be blended in some way, and so their use as "identifiers" would probably be lost.

With the limited amount of info about exactly what you want to do, my "sexier" ;) way to do it would probably be something like....

Have a data structure (say a List) that stores "objects" and they are stored in their "z-order", ie the objects that are logically "on top" of other objects are stored first in the list. The data structure of each object would be something like this... a) the unique identifier of the object (or other object properties), b) the x and y co-ordinates of where the mask image bounding box is located on the screen and c) the actual mask image itself.

Now, when you get a click at a specific point x, y on the screen, you start iterating through your list of "objects" in normal order (so you will see the "top" most object first). For each object, you check if the clicked point (x, y) is greater than the x, y of where the object is located, but less than x + width, y + height of the object, i.e. you are just checking if the clicked point lies somewhere within the object's image bounding box. If not, you can skip this object and move on to the next. But if the clicked point is within the bounding box, then you can subtract the x, y of the object's bounding box from the clicked point's x, y (which gives you the co-ordinates within the image of where it was clicked) and then get the alpha component of your mask at that location. If the alpha > some threshold, then this is the object that was selected; otherwise go on to the next one in the list.
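That loop could be sketched like this (the `GameObject` record, the `pick` helper, and the 0.1f-equivalent threshold of 25 are all illustrative names and values, not from the thread):

```java
import java.awt.image.BufferedImage;
import java.util.List;

public class Picker {
    // Illustrative per-object record: identifier, screen position, mask image.
    static class GameObject {
        final int id, x, y;
        final BufferedImage mask;
        GameObject(int id, int x, int y, BufferedImage mask) {
            this.id = id; this.x = x; this.y = y; this.mask = mask;
        }
    }

    static final int ALPHA_THRESHOLD = 25; // roughly 0.1f on a 0-255 scale

    // Objects are stored in z-order, top-most first; returns null on a miss.
    static GameObject pick(List<GameObject> objects, int clickX, int clickY) {
        for (GameObject o : objects) {
            // Cheap bounding-box rejection first.
            if (clickX < o.x || clickY < o.y
                    || clickX >= o.x + o.mask.getWidth()
                    || clickY >= o.y + o.mask.getHeight()) {
                continue;
            }
            // Translate to image co-ordinates and test the alpha channel.
            int alpha = o.mask.getRGB(clickX - o.x, clickY - o.y) >>> 24;
            if (alpha > ALPHA_THRESHOLD) {
                return o; // first (top-most) sufficiently-opaque hit wins
            }
        }
        return null; // clicked empty space
    }
}
```
The `>>> 24` shift extracts the alpha byte from the packed ARGB int that `getRGB` returns, so no per-pixel iteration of the whole image is needed — only one pixel read per candidate object.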

No, it might not seem as sexy as what you are proposing, but I think it would have fewer issues: it doesn't require mapping a unique identifier to a colour, and therefore it doesn't have any issues if colours get blended when drawing your mapping image. It also might seem like it would add a lot to the processing required, but I think you might find the performance hit is negligible. Most objects in the list don't get considered because they fail the first test, which is not much computation, and then only a few objects need to have their masks "tested" to see if the click is really on the object or not.

And you wouldn't happen to know the "getPixelColor" equivalent off the top of your head?
I'm not 100% sure what you are asking, but is this the answer...,%20int)

Hope all that helps, if there is anything unclear about the above, just let us know.

Author Comment

ID: 40397243
Basically, imagine a scene with multiple components, but those components may not be squares, circles, or predictable shapes.  (This is specifically for a graphical representation of data, but let's use a video game analogy.)  

Suppose you have the ground rendered first.  You set this to color "0" then render it in the "PixelColorPicker" background.  Then you draw a "sprite" on top of that ground, and you render that as color "1".  That sprite has a sword, so say you render that separately as color "2".  And so on and so forth.  

So if a player were to click the sprite, you could tell if they were aiming for the sprite's sword (say, to knock it out of their hand) or the body itself.  So long as the sprite's alphas are only 0 and 1, then there shouldn't be any problem with blending throwing off your color.  (I did something like this for a 3D game once, rendering to a "picker" buffer with full emissive, and without anti-aliasing, and it worked well.)  The assigning of the color isn't difficult, as the color of the object is simply the object's ID.  With RGB at 256, this gives me a possible 16,777,216 object IDs (and if I hit even a tenth of that number, I'm doing something wrong.)  Thus, when I pull the pixel color, and convert it to the "long" color, I immediately have the ID of the object clicked, and vice versa (the object's ID determines its color).  

Your idea of first checking by bounding box, then going "deeper" might actually be the "right" answer.  (My method requires doubling your texture memory - due to the need for the mask - as well as having an additional screen buffer to render to.  When the user clicks the screen, you have to first render everything to the color buffer, then pick a pixel.  Your method may, in fact, be much faster.)

Unfortunately, I've been slammed, and haven't had the opportunity to see which works best.  Hopefully I can find a moment today to give it a test.  

Thanks for all your help!

Author Closing Comment

ID: 40397279
The more I thought about it, the more this seems to be the right answer.  My method would always end up using more RAM, as well as complicating things by needing a completely separate render method.  For 2D purposes, I believe this is the superior method, as it also prevents false "hits".  Thanks!
LVL 36

Expert Comment

ID: 40398569
Glad to help! :)
