I'm experimenting with image processing on the iPad, and I'm trying to store every pixel's color data in a single array so that reading each pixel's color is fast.
Right now I have a timer that calls my drawRect as often as possible, and in my drawRect function I have this:
-(void)drawRect:(CGRect)rect
{
    UIGraphicsBeginImageContext(self.frame.size);
    [currentImage.image drawInRect:CGRectMake(0, 0, 768, 1004)];
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 0.3);

    // Advance to the next pixel, one pixel per draw pass
    r_x = r_x + 1;
    if (r_x == 768) {
        r_x = 1;
        r_y = r_y + 1;
    }
    if (r_y == 1004) {
        NSLog(@"color = %@", mijnArray_kleur);
    }

    // Read the color of the current pixel and store it
    CGPoint point2_1 = CGPointMake(r_x, r_y);
    GetColor *mycolor = [[GetColor alloc] init];
    UIColor *st = [mycolor getPixelColorAtLocation:point2_1];
    [mijnArray_kleur addObject:st];
    [mycolor release];

    // Re-draw that single pixel with the color just read
    CGContextSetFillColorWithColor(UIGraphicsGetCurrentContext(), [st CGColor]);
    CGContextFillRect(UIGraphicsGetCurrentContext(), CGRectMake(r_x, r_y, 1, 1));
}
getPixelColorAtLocation: is a method on a custom class that returns a pixel's color values in UIDeviceRGBColorSpace.
With this it takes about 4 hours (yes, hours :p) to process one image. Is there anything faster, or any improvement I can make?
Thanks!
Thys
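For reference, the usual fast path is to draw the image once into a bitmap context you own and then index into the raw byte buffer, instead of re-drawing the image and querying one pixel per frame. Below is a minimal sketch of that idea; ReadAllPixels is an illustrative name (not from the original code), and it assumes a fully opaque RGBA image:

    #import <UIKit/UIKit.h>

    static NSArray *ReadAllPixels(UIImage *image)
    {
        CGImageRef cgImage = image.CGImage;
        size_t width  = CGImageGetWidth(cgImage);
        size_t height = CGImageGetHeight(cgImage);

        // Render the image once into an RGBA buffer we own, so every
        // pixel is available as 4 bytes in a plain C array.
        unsigned char *data = calloc(width * height * 4, 1);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8,
                                                 width * 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

        NSMutableArray *colors = [NSMutableArray arrayWithCapacity:width * height];
        for (size_t y = 0; y < height; y++) {
            for (size_t x = 0; x < width; x++) {
                size_t i = (y * width + x) * 4;
                UIColor *c = [UIColor colorWithRed:data[i]     / 255.0
                                             green:data[i + 1] / 255.0
                                              blue:data[i + 2] / 255.0
                                             alpha:data[i + 3] / 255.0];
                [colors addObject:c];
            }
        }

        CGContextRelease(ctx);
        CGColorSpaceRelease(colorSpace);
        free(data);
        return colors;
    }

This reads the whole image in a single pass. It could be faster still to keep the raw byte buffer and convert to UIColor lazily, since creating roughly 770,000 UIColor objects has a real cost of its own.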
[Copied from comment for clarity] Not that I know Objective-C at all, but it seems to me like your function iterates over 768 * 1004 values and thus draws that many rectangles. Guessing about 60 frames/second, this would take 3h 40min. Am I wrong here?
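Checking that estimate: 768 * 1004 = 771,072 pixels, and at one pixel per frame at 60 frames/second that is 771,072 / 60 ≈ 12,851 seconds, or roughly 3.6 hours, which is right in line with the reported 4 hours.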