codevomit asked:
UIImage Drawing anomaly (Possibly something to do with CGContext)

Hi,

I'm having an anomaly when altering the pixels of a UIImage. I've drawn a box and then coloured every pixel of the image light blue, so the box should be gone forever, right? Wrong!

What you'll see is the screen fully blue, then the box showing its unwelcome face again a few seconds later, then the screen fully blue again, back and forth.

I'm thinking this may be something to do with the CGContext. Does it store the paths I previously used to fill a poly?

I've supplied a cut-down version of my project that demonstrates the problem. It's triggered on touches finished, so you will have to tap the screen to see the show.

Thanks in advance.
pgnatyuk:

I think you are drawing in the wrong place in your application.
If you want to draw something on the screen, do it in the view's drawRect: method.

Here is a very simple tutorial:
How to Draw Shapes with Core Graphics
http://howtomakeiphoneapps.com/2009/08/how-to-draw-shapes-with-core-graphics/

Another one:
An iPhone Graphics Drawing Tutorial using Quartz 2D
http://www.techotopia.com/index.php/An_iPhone_Graphics_Drawing_Tutorial_using_Quartz_2D
codevomit (Asker):

Thanks for your reply, but my poly is drawn in the correct place; the intention is to draw on it further, pixel by pixel, and use it as a mask.

My problem is that writing to every pixel of that image doesn't destroy my poly and send it into the void, never to return. It's the poly that will not die: it resurrects within five seconds of being buried under light blue pixels.

Does the context store previously used paths and then redraw them? Because that's definitely something I don't want to happen. How do I stop it?
Each time, you redraw everything you want to see on the screen. If you want to save your drawing, save it in an image object; that object can be reused, and next time you simply draw the image.
But in the simplest, most common case, draw everything in drawRect:. Everything, not only the change compared to the previous state.
Would the image object be of the UIImage class? Because I am doing that already.
The white polygon you see is not meant to be displayed; it is to act as a mask for a texture. The mask's shape is to be edited pixel by pixel, depending on where the user touches the screen, hence it is not drawn in the drawRect: function but stored for later.

I'm only drawing it to the screen now because I want to verify that the pixels of the mask have indeed been edited.
I do not see any white polygon yet.
My apologies: I attached my project to the topic at the start, but it looks like it didn't work; files with the .m extension are not on the accepted list.
-(void)drawRect:(CGRect)rect
{
	CGContextRef context = UIGraphicsGetCurrentContext();
	CGContextDrawImage(context, CGRectMake(0, 0, 320, 480), self.boxMaskDeleted.CGImage);
}


I think you have two different questions. You simply started to talk about the same thing in both threads.
If this question gets solved, then the other doesn't need to be.
What's the problem? Why do you want to close one of them?
:)
You may close both and open a new one if you wish.
Good night. Let's continue tomorrow.
If you wish, choose one of these threads and continue in it. Don't touch the second one; you may close it later.
 
Cool, good night :)
From my point of view, here you will see everything you need:
http://www.trembl.org/codec/tag/uiimage/
Check the paragraph "UIImage → pixelData → UIImage Roundtrip" in this article.

Sorry, but your code is wrong. It's even difficult to understand what exactly you want to do. Probably it's a temporary state and you will finish it later.

What exactly do you need? To load an image, draw it on the screen, draw a rectangle by changing the pixels (without Core Graphics line-drawing functions), and save the picture to a file?
Basically I need to draw a poly which is to be used as a mask, and then parts of that mask are to be destroyed, pixel by pixel, depending on where the user has put their finger.

There's a fundamental principle here which I've outlined multiple times, but I feel it's not getting through, so I'm going to attach a screenshot to demonstrate the problem. I cannot possibly be the first person in the history of computing who wants to set a pixel of an image to a different colour and have it stay that way until I say otherwise.

Now, what the image demonstrates: I have my poly image, which I have preprocessed, and I'm using the drawRect: function to display it on the screen. Second, I touch down, and every pixel of that stored image is overwritten with a light shade of blue; this is what we wanted to achieve, all is right, following so far? Lastly, five seconds later, the poly returns. THIS IS MY MAIN PROBLEM (sorry if it looks like I'm shouting; I'm only using caps to highlight the problem, shouting is not my intention). I have written over every byte of that stored image, so the white cube should no longer exist. Where is it coming from?

Now, you keep telling me that all of my draw logic should be contained inside the drawRect: function and that function alone. As someone who has written a few games in different languages, I respectfully disagree: you never contain all of the draw logic in the main paint function, as that would make preprocessing images a pointless exercise, and I doubt it's any different for Objective-C.

A good example: if I had a bunch of paths that described a sprite, rather than drawing them constantly in the main paint function, I would draw the paths once, before the start of the level, in some sort of initialise function, and store that image to be copied later to something like a back buffer, since copying pixels is much faster than redrawing paths every frame.

Now, the white cube is to be composited later, in the drawRect: function, with a texture (that part I have not supplied code for), and that composite is the part I want displayed. I need to be able to modify the white cube texture's pixels to make it look like the user is scratching away the texture with their finger. I wouldn't normally display the white cube straight to the screen, but I'm doing it in order to test whether my writes to its pixels have succeeded.

It's a bit of an essay, but I wanted to make sure you have the entire context of what's happening and what I'm trying to achieve.

Remember: the problem is that overwritten pixels are being resurrected from the dead.
Screens.png
I can see why you might think the code is difficult to understand: due to the restrictions of Experts Exchange I'm forced to put everything into one file to be able to post it here. It's actually very simple and easy to understand if you have the time to pull the code apart and put the chunks back into their respective files.
Here's a link to the project archive http://www.4shared.com/file/OB6unwTD/Archive.html
>>I can see why you might think the code is difficult to understand,
You didn't post everything. Right now there is nothing to understand, sorry. In its current state, 70% of the code does not work.
I attached a quick-and-dirty sample that draws a polygon and saves it to a PNG file. The file can then be loaded and drawn on the screen.
If you want to make a mask from this image, add the function from this link:
http://iphonedevelopertips.com/cocoa/how-to-mask-an-image.html
or from this:
http://stackoverflow.com/questions/633051/creating-mask-with-cgimagemaskcreate-is-all-black-iphone

#import <UIKit/UIKit.h>

#pragma mark -
#pragma mark View

@interface AView: UIView 
{
    UIImage*    loadedImage;
    UIImage*    savedImage;
    BOOL        drawMode;
}

@property (nonatomic, retain) UIImage* loadedImage;
@property (nonatomic, retain) UIImage* savedImage;
@property BOOL drawMode;

@end

@implementation AView

@synthesize loadedImage;
@synthesize savedImage;
@synthesize drawMode;

- (void)drawRect: (CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    
    BOOL drawn = NO;
    
    UIImage* currentImage = [self loadedImage];
    if (currentImage != nil)
    {
        drawn = YES;
        CGPoint point = CGPointMake(0, 0);        
        [currentImage drawAtPoint: point];
    }
    
    if (drawMode)
    {
        drawn = YES;

        CGContextSetLineWidth(context, 2.0);
        CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
        
        CGContextMoveToPoint(context, 100, 100);
        CGContextAddLineToPoint(context, 100, 200);
        CGContextAddLineToPoint(context, 200, 200);
        CGContextAddLineToPoint(context, 200, 100);
        CGContextAddLineToPoint(context, 150, 50);
        CGContextAddLineToPoint(context, 100, 100);
        
        CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
        CGContextFillPath(context);
    }
    
    // Save the drawn image
    if (drawn)
    {
        UIImage* newImage = [UIImage imageWithCGImage: CGBitmapContextCreateImage(context)];
        [self setSavedImage: newImage];
    }
}

- (void)dealloc
{
    [self setLoadedImage: nil];
    [self setSavedImage: nil];
    [super dealloc];
}

@end

#pragma mark -
#pragma mark Application Delegate

@interface ADelegate : NSObject <UIApplicationDelegate> 
{
    UIWindow *window;
    AView *view;
}

@property (nonatomic, retain) IBOutlet UIWindow *window;
@property (nonatomic, retain) IBOutlet AView *view;

-(UIImage*)loadImage;
-(void)saveImage: (UIImage*)image;
-(void)btnDraw: (id)sender;
-(void)btnClear: (id)sender;
-(void)btnSave: (id)sender;
-(void)btnLoad: (id)sender;

@end

@implementation ADelegate

@synthesize window;
@synthesize view;

- (void)saveImage: (UIImage*)image
{
    if (image != nil)
    {
        NSString* path = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/test1.png"];
        NSData* data = UIImagePNGRepresentation(image);
        [data writeToFile:path atomically:YES];
    }
}

- (UIImage*)loadImage
{
    NSString* path = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/test1.png"];
    UIImage* image = [UIImage imageWithContentsOfFile:path];
    return image;
}

-(void)btnDraw: (id)sender
{
    [view setDrawMode: YES];
    [view setNeedsDisplay];
}

-(void)btnClear: (id)sender
{
    [view setDrawMode: NO];
    [view setLoadedImage: nil];
    [view setSavedImage: nil];
    [view setNeedsDisplay];
}

-(void)btnLoad: (id)sender
{
    UIImage* image = [self loadImage];
    [view setDrawMode: NO];
    [view setLoadedImage: image];
    [view setNeedsDisplay];
}

-(void)btnSave: (id)sender
{
    UIImage* image = [view savedImage];
    [self saveImage: image];
}

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions 
{   
    CGRect frame = [[UIScreen mainScreen] bounds];
    window = [[UIWindow alloc] initWithFrame: frame];
    [window setBackgroundColor: [UIColor whiteColor]];
    
    CGRect viewFrame = [[UIScreen mainScreen] applicationFrame];
    viewFrame.size.height -= 50;
    view = [[AView alloc] initWithFrame: viewFrame];
    [view setBackgroundColor: [UIColor grayColor]];
    [window addSubview: view];    
    
    CGRect btnFrame;
    btnFrame.origin.x = viewFrame.origin.x + 2;
    btnFrame.origin.y = frame.origin.y + frame.size.height - 48;
    btnFrame.size.height = 40;
    btnFrame.size.width = 60;
    UIButton *btn = [[UIButton alloc] initWithFrame: btnFrame];
    [btn setTitle: @"Draw" forState: UIControlStateNormal];
    [btn setBackgroundColor: [UIColor grayColor]];
    [btn setTitleColor: [UIColor blueColor] forState: UIControlStateNormal];
    [btn addTarget: self action: @selector(btnDraw:) forControlEvents: UIControlEventTouchDown]; 
    [window addSubview: btn];   
    [btn release];
    
    btnFrame.origin.x += btnFrame.size.width + 2;
    btn = [[UIButton alloc] initWithFrame: btnFrame];
    [btn setTitle: @"Clear" forState: UIControlStateNormal];
    [btn setBackgroundColor: [UIColor grayColor]];
    [btn setTitleColor: [UIColor blueColor] forState: UIControlStateNormal];
    [btn addTarget: self action: @selector(btnClear:) forControlEvents: UIControlEventTouchDown]; 
    [window addSubview: btn];   
    [btn release];
    
    btnFrame.origin.x += btnFrame.size.width + 2;
    btn = [[UIButton alloc] initWithFrame: btnFrame];
    [btn setTitle: @"Save" forState: UIControlStateNormal];
    [btn setBackgroundColor: [UIColor grayColor]];
    [btn setTitleColor: [UIColor blueColor] forState: UIControlStateNormal];
    [btn addTarget: self action: @selector(btnSave:) forControlEvents: UIControlEventTouchDown]; 
    [window addSubview: btn];   
    [btn release];
    
    btnFrame.origin.x += btnFrame.size.width + 2;
    btn = [[UIButton alloc] initWithFrame: btnFrame];
    [btn setTitle: @"Load" forState: UIControlStateNormal];
    [btn setBackgroundColor: [UIColor grayColor]];
    [btn setTitleColor: [UIColor blueColor] forState: UIControlStateNormal];
    [btn addTarget: self action: @selector(btnLoad:) forControlEvents: UIControlEventTouchDown]; 
    [window addSubview: btn];   
    [btn release];

    [window makeKeyAndVisible];	
	return YES;
}

- (void)dealloc 
{
    [self setView: nil];
    [self setWindow: nil];
    [super dealloc];
}

@end

#pragma mark -
#pragma mark main function

int main(int argc, char *argv[]) {
    
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
    int retVal = UIApplicationMain(argc, argv, nil, @"ADelegate");
    [pool release];
    return retVal;
}


Screen-shot-2010-09-29-at-10.58..png
You can make the same kind of project as here:
If You Don't Need Interface Builder
https://www.experts-exchange.com/A_3714.html
Cut the code from the previous comment and paste it into the main.m file. MainWindow.xib and all the other .h and .m files are deleted from the project; I don't need them for this example.
I've downloaded and compiled your project.
You launched a timer that sends a setNeedsDisplay message to the view. This message causes the screen to be redrawn.
When I launched the app, there was a white polygon on a black background; then I touched the screen and got a blue background. I switched to the desktop and back and didn't see the polygon. It was redrawn a few seconds later, when the timer event came.

iPhone-Simulator-1.jpg
iPhone-Simulator.jpg
Thanks for the mask link, that's definitely going to come in handy later :)

However, my main problem has still not been addressed. Drawing the polygon is not the issue; I want to be able to edit the resulting image pixel by pixel afterwards. My method is exactly the same as in the links you posted: I get a pointer to the bytes of the image, modify the bytes directly, and then create a new UIImage from them.

Why isn't the white cube being sent to the light blue abyss? Surely writing to all of its bytes would have destroyed it? If you've got a dirty hack for what I'm actually trying to do, that would be fantastic.
You need to take an image, get a context, get the data, and then you can edit it. Then you draw the image in the view's drawRect:, if you need to. That's how I see the solution.
That's what I'm already doing.
So everything's fine already?
No, everything is not OK; that's the problem.

I get the image
Get the context
Get the data
Edit the data so that everything is blue
Create a new UIImage from the data

Whatever the image was previously should be no more; it should be covered in blue. This is very basic computer science, and I don't see why this fundamental concept is so hard to understand.

If I had an integer set to something like 83 and then set it to 21, that integer should remain 21 unless I write to it again elsewhere; it shouldn't read as 83 the next time I look at it. That is what's happening here. I've set every pixel to blue, written over every byte of that image, and, provided no other process writes to that image, it should simply be a picture comprised entirely of the same light blue pixels.

This is a massive anomaly from the expected outcome. The only explanation I can think of for why this polygon keeps coming back, from my limited experience with Apple's graphics libraries, is that maybe the context stores line paths and redraws them? I don't know if this is the case.
If you can show me how to draw a poly with line paths, and then change a good bunch of random pixels of the resulting image to a different colour, I can see where I'm going wrong.
I've drawn two rectangles. You can find a polygon in one of the previous comments:
        CGContextSetLineWidth(context, 2.0);
        CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
       
        CGContextMoveToPoint(context, 100, 100);
        CGContextAddLineToPoint(context, 100, 200);
        CGContextAddLineToPoint(context, 200, 200);
        CGContextAddLineToPoint(context, 200, 100);
        CGContextAddLineToPoint(context, 150, 50);
        CGContextAddLineToPoint(context, 100, 100);
       
        CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
        CGContextFillPath(context);

I wish you good luck. I have to leave now.

#import <UIKit/UIKit.h>

#pragma mark -
#pragma mark Application Delegate

@interface myDelegate : NSObject<UIApplicationDelegate>
{
    UIImage *image;
}

-(UIImage*)makeImage: (CGRect)rect;

@end

@implementation myDelegate

- (void)applicationDidFinishLaunching: (UIApplication*)application
{
    CGRect rect = [[UIScreen mainScreen] bounds];
    UIWindow* window = [[UIWindow alloc] initWithFrame: rect];
    [window setBackgroundColor: [UIColor whiteColor]];
    
    rect = [[UIScreen mainScreen] applicationFrame];    
    image = [self makeImage: rect];
    
	UIImageView* contentView = [[UIImageView alloc] initWithFrame: rect];
	[contentView setImage:image];

	[window addSubview: contentView];
    [contentView release]; 
    
    [window makeKeyAndVisible];
}

- (UIImage*)makeImage: (CGRect)rect
{
    CGFloat width = CGRectGetWidth(rect);
    CGFloat height = CGRectGetHeight(rect);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    
    size_t bitsPerComponent = 8;
    size_t bytesPerPixel    = 4;
    size_t bytesPerRow      = (width * bitsPerComponent * bytesPerPixel + 7) / 8;
    size_t dataSize         = bytesPerRow * height;
    
    unsigned char *data = malloc(dataSize);
    memset(data, 0, dataSize);
    
    CGContextRef context = CGBitmapContextCreate(data, width, height, 
                                                 bitsPerComponent, bytesPerRow, colorSpace, 
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

    
    // a drawing for a test.
    CGContextSetRGBFillColor (context, 1, 0, 0, 1);
    CGContextFillRect (context, CGRectMake (0, 0, 200, 100 ));
    CGContextSetRGBFillColor (context, 0, 0, 1, .5);
    CGContextFillRect (context, CGRectMake (0, 0, 100, 200));
    
    CGColorSpaceRelease(colorSpace);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *result = [[UIImage imageWithCGImage:imageRef] retain];
    CGImageRelease(imageRef);
    CGContextRelease(context);
    free(data);    
    return result;
}

- (void)dealloc
{
    [image release];
    [super dealloc];
}

@end


int main(int argc, char *argv[]) 
{
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
    int retVal = UIApplicationMain(argc, argv, nil, @"myDelegate");
    [pool release];
    return retVal;
}


iPhone-Simulator-2.jpg
Pay attention:
    for (int y = 20; y < 100; ++y)
    {
        for (int x = 20; x < 100; ++x)
        {
            int byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
            data[byteIndex + 0] = 0;
            data[byteIndex + 1] = 127;
            data[byteIndex + 2] = 127;
            data[byteIndex + 3] = 127;
        }
    }

- (UIImage*)makeImage: (CGRect)rect
{
    CGFloat width = CGRectGetWidth(rect);
    CGFloat height = CGRectGetHeight(rect);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    
    size_t bitsPerComponent = 8;
    size_t bytesPerPixel    = 4;
    size_t bytesPerRow      = (width * bitsPerComponent * bytesPerPixel + 7) / 8;
    size_t dataSize         = bytesPerRow * height;
    
    unsigned char *data = malloc(dataSize);
    memset(data, 0, dataSize);

    for (int y = 20; y < 100; ++y)
    {
        for (int x = 20; x < 100; ++x)
        {
            int byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
            data[byteIndex + 0] = 0;
            data[byteIndex + 1] = 127;
            data[byteIndex + 2] = 127;
            data[byteIndex + 3] = 127;
        }
    }
    
    CGContextRef context = CGBitmapContextCreate(data, width, height, 
                                                 bitsPerComponent, bytesPerRow, colorSpace, 
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

    
    // a drawing for a test.
    CGContextSetRGBFillColor (context, 1, 0, 0, 1);
    CGContextFillRect (context, CGRectMake (0, 0, 200, 100 ));
    CGContextSetRGBFillColor (context, 0, 0, 1, .5);
    CGContextFillRect (context, CGRectMake (0, 0, 100, 200));
    
    CGColorSpaceRelease(colorSpace);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *result = [[UIImage imageWithCGImage:imageRef] retain];
    CGImageRelease(imageRef);
    CGContextRelease(context);
    free(data);    
    return result;
}


iPhone-Simulator-3.jpg
What you've done there is modify the pixels before filling the rectangles, which is trivial, I think you'll agree. It is also the opposite of what I want to do.

Modifying the pixels of an existing image is the objective of this post.

See if you can draw the rectangles first and then modify the pixels; the light blue box you draw by pixel manipulation must overlap the coloured rectangles.

If you can show me that, I will give you those well-deserved points in a heartbeat :)


pixlemanipulation.JPG
Is this an exam for me?
Why do you think something's impossible?

I have my own tasks, which I think are more interesting.
I'd suggest you try making this change: pass a new UIImage sourceImage parameter to the makeImage function, then draw that image into the bitmap graphics context created in the function.

For now, I have simply moved the code drawing this strange rectangle to after the bitmap context is created. In any case it has to happen before I make the resulting UIImage object, because I need to see the final drawing.

- (UIImage*)makeImage: (CGRect)rect
{
    CGFloat width = CGRectGetWidth(rect);
    CGFloat height = CGRectGetHeight(rect);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    
    size_t bitsPerComponent = 8;
    size_t bytesPerPixel    = 4;
    size_t bytesPerRow      = (width * bitsPerComponent * bytesPerPixel + 7) / 8;
    size_t dataSize         = bytesPerRow * height;
    
    unsigned char *data = malloc(dataSize);
    memset(data, 0, dataSize);

   
    CGContextRef context = CGBitmapContextCreate(data, width, height, 
                                                 bitsPerComponent, bytesPerRow, colorSpace, 
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

    
    // a drawing for a test.
    CGContextSetRGBFillColor (context, 1, 0, 0, 1);
    CGContextFillRect (context, CGRectMake (0, 0, 200, 100 ));
    CGContextSetRGBFillColor (context, 0, 0, 1, .5);
    CGContextFillRect (context, CGRectMake (0, 0, 100, 200));

    // draw a rectangle in such a strange way:
    for (int y = 120; y < 200; ++y)
    {
        for (int x = 20; x < 100; ++x)
        {
            int byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
            data[byteIndex + 0] = 0;
            data[byteIndex + 1] = 200;
            data[byteIndex + 2] = 200;
            data[byteIndex + 3] = 200;
        }
    }
    
    CGColorSpaceRelease(colorSpace);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *result = [[UIImage imageWithCGImage:imageRef] retain];
    CGImageRelease(imageRef);
    CGContextRelease(context);
    free(data);    
    return result;
}


iPhone-Simulator-4.jpg
It's not a test; I'm simply trying to get my stated problem answered.

Drawing those pixels in an unoccupied space is not the problem. My problem arises when altering already-occupied pixels.

Like I said, and demonstrated with an image, the light blue pixel box MUST overlap the coloured rectangles.

If you can show me code that can do this, then my problem is solved!
Just for clarification.
pixlemanipulation.JPG
ASKER CERTIFIED SOLUTION
pgnatyuk
Cool, I'll check that out later when I get access to my Mac.
This function may look a bit better if you don't need low-level access to the pixels.
Everything is taken from the "Quartz 2D Programming Guide". You can find "Bitmap Images and Image Masks" there:
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_images/dq_images.html%23//apple_ref/doc/uid/TP30001066-CH212-TPXREF101

-(UIImage *)oneMore:(UIImage*)source
{
    CGSize size = [source size];
    UIGraphicsBeginImageContext(size);
    
    CGRect rect = CGRectMake(0.0f ,0.0f, size.width, size.height);
    [source drawInRect:rect];

    // a drawing for a test.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor (context, 1, 0, 0, 1);
    CGContextFillRect (context, CGRectMake (0, 0, 200, 100 ));
    CGContextSetRGBFillColor (context, 0, 0, 1, .5);
    CGContextFillRect (context, CGRectMake (0, 0, 100, 200));
    
    //get image
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    
    return (result);
}


iPhone-Simulator-6.jpg
It was very difficult to get my point across, but it was worth it in the end. I've tested the solution and it fits my needs.

Thanks
Well, I applied the changes and got the resurrecting pixels again. After some experimentation, placing the calls to the pixel-drawing functions in different touch events, I concluded that this was a threading issue.

Synchronising parts of certain functions sorted it out.

I've made a BufferedImage class that gives roughly the same functionality as the Java one. The setRGBA function doesn't use alpha blending, though, as I needed it to paint exactly what I told it to, but I think it should be easy enough to modify it to do so.

I should have been claiming the points, as this answer fully addresses the problem, but unfortunately I have already awarded them.
//
//  BufferedImage.h
//  ATest
//
//  Created by Yves Wheeler on 01/10/2010.
//  Copyright 2010 None. All rights reserved.
//


@interface BufferedImage : NSObject {
	CGContextRef context;
	CGColorSpaceRef colorSpace; 
	unsigned char *data;
	size_t width;
	size_t height;
	size_t bitsPerComponent;
	size_t bytesPerPixel;
	size_t bytesPerRow;
	size_t dataSize;
}

@property CGContextRef context;
@property CGColorSpaceRef colorSpace;
@property unsigned char *data;
@property size_t width;
@property size_t height;
@property size_t bitsPerComponent;
@property size_t bytesPerPixel;
@property size_t bytesPerRow;
@property size_t dataSize;

-(UIImage*)getUIImage;
-(void)setRGBAatX:(int) x
				y:(int) y
			  red:(Byte) red	
			green:(Byte) green
			 blue:(Byte) blue
			alpha:(Byte) alpha;
-(void)setRGBAatX:(int) x
				y:(int) y
		  integer:(unsigned int) integer;
-(CGContextRef)getCGContext;
-(unsigned int)getRGBAatX:(int) x
			   y:(int) y;

@end


//
//  BufferedImage.m
//  ATest
//
//  Created by Yves Wheeler on 01/10/2010.
//  Copyright 2010 None. All rights reserved.
//

#import "BufferedImage.h"


@implementation BufferedImage

@synthesize context;
@synthesize colorSpace;
@synthesize data;
@synthesize width;
@synthesize height;
@synthesize bitsPerComponent;
@synthesize bytesPerPixel;
@synthesize bytesPerRow;
@synthesize dataSize;

/* Note: -finalize is only invoked under garbage collection, which iOS
   does not use; under manual reference counting this cleanup belongs
   in -dealloc. */
- (void)dealloc
{
	CGColorSpaceRelease(self.colorSpace);
	CGContextRelease(self.context);
	free(self.data);
	[super dealloc];
}

-(CGContextRef)getCGContext{
	@synchronized(self) {
		return self.context;
	}
}

-(UIImage*)getUIImage{
	@synchronized(self) {
		CGImageRef imageRef = CGBitmapContextCreateImage(self.context);
		UIImage *result = [UIImage imageWithCGImage:imageRef];
		CGImageRelease(imageRef);
		
		return result;	
	}
}

-(BufferedImage*)init{
	size_t w = 320;
	size_t h = 480;
	
	CGColorSpaceRef colorS = CGColorSpaceCreateDeviceRGB();
	
	size_t bitsPerC = 8;
	size_t bytesPerP    = 4;
	size_t bytesPR      = (w * bitsPerC * bytesPerP + 7) / 8;
	size_t dataS        = bytesPR * h;
	
	unsigned char *d = malloc(dataS);
	memset(d, 0, dataS);
	
	CGContextRef con = CGBitmapContextCreate(d, w, h, 
											 bitsPerC, bytesPR, colorS, 
											 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);	
	
	CGImageRef imageRef = CGBitmapContextCreateImage(con);
	UIImage *result = [UIImage imageWithCGImage:imageRef];
	CGImageRelease(imageRef);
	
	CGColorSpaceRelease(colorS);
	CGContextRelease(con);
	free(d);
	
	return [self initWithImage:result];
}

-(BufferedImage*)initWithImage:(UIImage*)source
{
	if(self = [super init]){
		CGImageRef sourceRef = source.CGImage;
		self.width = CGImageGetWidth(sourceRef);
		self.height = CGImageGetHeight(sourceRef);
		
		self.colorSpace = CGColorSpaceCreateDeviceRGB();
		
		self.bitsPerComponent = 8;
		self.bytesPerPixel    = 4;
		self.bytesPerRow      = (self.width * self.bitsPerComponent * self.bytesPerPixel + 7) / 8;
		self.dataSize         = self.bytesPerRow * self.height;
		
		self.data = malloc(self.dataSize);
		memset(data, 0, self.dataSize);
		
		self.context = CGBitmapContextCreate(data, self.width, self.height, 
											 self.bitsPerComponent, self.bytesPerRow, self.colorSpace, 
											 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
		
		CGContextDrawImage(self.context, CGRectMake(0, 0, self.width, self.height), sourceRef);
	}
	return self;
}

-(void)setRGBAatX:(int) x
				y:(int) y
			  red:(Byte) red	
			green:(Byte) green
			 blue:(Byte) blue
			alpha:(Byte) alpha{
	@synchronized(self) {
		int byteIndex = ((self.bytesPerRow * ((self.height-1)-y)) + x * self.bytesPerPixel);
		data[byteIndex + 0] = red;
		data[byteIndex + 1] = green;
		data[byteIndex + 2] = blue;
		data[byteIndex + 3] = alpha;
	}
}

-(void)setRGBAatX:(int) x
				y:(int) y
		  integer:(unsigned int) integer{
	
		Byte red = (integer >> 24) & 0xFF;
		Byte green = (integer >> 16) & 0xFF;
		Byte blue = (integer >> 8) & 0xFF;
		Byte alpha = integer & 0xFF;
		
		[self setRGBAatX:x y:y red:red green:green blue:blue alpha:alpha];
}

-(unsigned int)getRGBAatX:(int) x
						y:(int) y {
	@synchronized(self) {
		int byteIndex = ((self.bytesPerRow * ((self.height-1)-y)) + x * self.bytesPerPixel);
		unsigned int result = 0;
		
		result |= data[byteIndex + 0];
		result <<= 8;
		result |= data[byteIndex + 1];
		result <<= 8;
		result |= data[byteIndex + 2];
		result <<= 8;
		result |= data[byteIndex + 3];
		
		return result;
	}
}

@end
