I asked before about pixel-pushing, and have now managed to get far enough to get noise to show up on the screen. Here’s how I init:
    CGDataProviderRef provider;
    bitmap = malloc(320 * 480 * 4);
    provider = CGDataProviderCreateWithData(NULL, bitmap, 320 * 480 * 4, NULL);
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    ir = CGImageCreate(320, 480,        // width, height
                       8, 32,           // bits per component, bits per pixel
                       4 * 320,         // bytes per row
                       colorSpaceRef,
                       kCGImageAlphaNoneSkipLast,
                       provider,
                       NULL,            // no decode array
                       NO,              // don't interpolate
                       kCGRenderingIntentDefault);
Here’s how I render each frame:
    for (int i = 0; i < 320 * 480 * 4; i++) {
        bitmap[i] = rand() % 256;
    }
    CGRect rect = CGRectMake(0, 0, 320, 480);
    CGContextDrawImage(context, rect, ir);
Problem is, this is awfully slow: around 5 fps. I suspect my path for publishing the buffer to the screen is wrong. Is it even possible to do full-screen pixel-based graphics that I could update at 30 fps, without using the 3D chip?
The slowness is almost certainly in the noise generation: that loop calls rand() once per byte, which is 614,400 calls per frame at 320x480x4. If you run this in Instruments you'll probably see that the bulk of the time is spent sitting in that loop.
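If the noise itself is what you want, one easy win is to generate 32 bits per call instead of 8, cutting the call count to a quarter. A minimal sketch of what I mean, assuming your malloc'd 320x480 RGBX buffer; the xorshift32 generator here is my stand-in, not anything from your code:

    #include <stdint.h>
    #include <stddef.h>

    // Cheap PRNG state; any nonzero seed works.
    static uint32_t rng_state = 2463534242u;

    // xorshift32: a few shifts and xors per 32-bit result,
    // much cheaper than a libc rand() call per byte.
    static inline uint32_t xorshift32(void) {
        rng_state ^= rng_state << 13;
        rng_state ^= rng_state >> 17;
        rng_state ^= rng_state << 5;
        return rng_state;
    }

    // Fill the framebuffer one 32-bit word at a time: each call
    // writes R, G, B and the skipped-alpha pad byte together.
    void fill_noise(uint32_t *pixels, size_t count) {
        for (size_t i = 0; i < count; i++) {
            pixels[i] = xorshift32();
        }
    }

You'd call it as `fill_noise((uint32_t *)bitmap, 320 * 480);` each frame. The noise quality is lower than rand()'s, but for visual static that hardly matters.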
Another, smaller issue is your colorspace. If you use the screen's colorspace, you'll avoid a per-frame colorspace conversion, which is potentially expensive.
If you can use CoreGraphics routines for your drawing, you’d be better served by creating a CGLayer for the drawing context instead of creating a new object each time.
The bytesPerRow value is also important for performance. It should be a multiple of 32 bytes, IIRC. There's sample code floating around that shows how to compute it.
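The computation itself is a one-liner; here's a sketch of what I mean (the multiple-of-32 figure is from memory, so treat the alignment constant as an assumption to verify):

    #include <stddef.h>

    // Round a row's natural byte width up to the next multiple of
    // `alignment`, which must be a power of two. For 320 RGBX pixels
    // and 32-byte alignment this is a no-op: 320 * 4 = 1280 already
    // divides evenly, so the questioner's stride is fine as-is.
    static size_t aligned_bytes_per_row(size_t width, size_t bytes_per_pixel,
                                        size_t alignment) {
        size_t raw = width * bytes_per_pixel;
        return (raw + alignment - 1) & ~(alignment - 1);
    }

Pass the result as the bytesPerRow argument to CGImageCreate, and remember to malloc height * alignedBytesPerRow rather than height * width * 4 if they differ.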
And yeah, for raw performance, OpenGL.