CIImages, they’re the new hotness. They go well with a smattering of CIFilters (also high on the hotness scale). However, the thing about CIImages (and filters) is that they wait until you actually use them for something (like rendering) before they do the render. This is pretty cool if all you want to do is see what is going on (i.e. your final destination is a view of some kind, which I imagine it is for most applications). However, if you are like me, and want to get at the raw bits, then it can be a bit harder.
Well.. actually it isn’t hard to GET to the bits ‘n’ bytes, but it is slooooow (or so it seems from my various experiments).
I have tried all the ways I can think of to freeze-dry a CIImage into a usable byte buffer:
I started simple and just made an NSBitmapImageRep with - (id)initWithCIImage:(CIImage *)ciImage.
I generated a CGBitmapContext and then drew the CIImage into it.
I generated an NSGraphicsContext with a bitmap, and then drew into its CIContext.
I tried using an offscreen version of the NSViews that I was rendering the CIImages into (the ones that render so fast when you can SEE them) and then using - (void)cacheDisplayInRect:(NSRect)rect toBitmapImageRep:(NSBitmapImageRep *)bitmapImageRep to get the bits out of them.
And I tried all sorts of crazy-ass combinations of all of the above.
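For reference, here is roughly what the CGBitmapContext route looks like. This is just a sketch (the function name and RGBA assumptions are mine, not from any of my actual code): wrap a malloc’d buffer in a CGBitmapContext, put a CIContext over it, and draw the CIImage in.

```objc
#import <QuartzCore/QuartzCore.h>

// Sketch: render a CIImage into a CPU-backed CGBitmapContext so the
// bytes end up in a buffer we own. Assumes 8-bit RGBA; names are mine.
void *copyBytesFromCIImage(CIImage *image)
{
    CGRect extent = [image extent];
    size_t width  = (size_t)extent.size.width;
    size_t height = (size_t)extent.size.height;
    size_t bytesPerRow = width * 4;

    void *buffer = malloc(bytesPerRow * height);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef cg = CGBitmapContextCreate(buffer, width, height, 8,
                                            bytesPerRow, cs,
                                            kCGImageAlphaPremultipliedLast);

    // This draw is where the deferred filter chain actually executes,
    // and (presumably) where the GPU -> main memory trip happens.
    CIContext *ci = [CIContext contextWithCGContext:cg options:nil];
    [ci drawImage:image atPoint:CGPointZero fromRect:extent];

    CGContextRelease(cg);
    CGColorSpaceRelease(cs);
    return buffer; // caller must free()
}
```

All the variants above end up doing something morally equivalent to this, which would explain why they all clock in at about the same speed.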
Sadly, every single one of them takes at least 15-30x as long as rendering into a visible view. (In fact, they all take such a similar amount of time that I am pretty sure they are all doing the same thing under the hood.) I am by no means an expert on the new CIImage/CIFilter stuff, but I presume this all has to do with where the image processing takes place: in the case of a visible view, all those bits are out on the graphics processor, and the minute I try to get my grubby paws on them, they have to be moved all the way back to the main processor, hence the terrible soul-crushing overhead.
(As a reference: on my 2.33 GHz MacBook Pro with a shiteload of RAM, the CIImages, if left to their own devices in a poorly programmed NSView subclass, will render out to the screen in about 400µs. Once I try to make that data available to the application, it takes more like 25000µs. Which is too slow.)
There are still a few more options, mostly involving rendering the CIImage into an OpenGL texture and trying to get at the bits that way (which I may try this weekend).
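If I do try the OpenGL route, it would presumably look something like this sketch: a CIContext built over an (already set up and current) offscreen CGL context, draw, then glReadPixels. The setup and names here are mine and untested; and note that the readback is the same GPU-to-CPU trip, so it may land in the same 25000µs neighborhood anyway.

```objc
#import <QuartzCore/QuartzCore.h>
#import <OpenGL/gl.h>

// Sketch: render through a GL-backed CIContext, then pull the pixels
// back with glReadPixels. Assumes an offscreen CGL context is current
// and sized to the image; outBuffer must hold width*height*4 bytes.
void readPixelsViaGL(CIImage *image, CGLContextObj cgl,
                     CGLPixelFormatObj pf, void *outBuffer)
{
    CGRect extent = [image extent];
    CIContext *ci = [CIContext contextWithCGLContext:cgl
                                         pixelFormat:pf
                                             options:nil];
    // Draw on the GPU...
    [ci drawImage:image atPoint:CGPointZero fromRect:extent];
    glFlush();

    // ...then drag the framebuffer back to main memory.
    glReadPixels(0, 0, (GLsizei)extent.size.width,
                 (GLsizei)extent.size.height,
                 GL_RGBA, GL_UNSIGNED_BYTE, outBuffer);
}
```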
And the last method, which would be the holy grail of methods, would be to somehow distill the blob detection algorithm into a form that could be compiled into a CIFilter kernel, and then find some way to spit out the blob tracking info… but that is probably impossible.
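For flavor, this is roughly what a custom CIFilter kernel looks like in the Core Image Kernel Language (a trivial brightness threshold, NOT a blob detector; the kernel name and cutoff parameter are made up). The catch is visible right in the shape of the code: a kernel computes exactly one output pixel per invocation, which is why emitting a variable-length list of blob coordinates doesn’t fit the model.

```objc
// Sketch: a trivial threshold kernel in the CI Kernel Language,
// stored as a string the usual way. Illustrative only.
static NSString *kThresholdKernel = @""
    "kernel vec4 threshold(sampler src, float cutoff)\n"
    "{\n"
    "    vec4 p = sample(src, samplerCoord(src));\n"
    "    float lum = dot(p.rgb, vec3(0.299, 0.587, 0.114));\n"
    "    // compare(x, y, z) returns y if x < 0, else z\n"
    "    return vec4(vec3(compare(lum - cutoff, 0.0, 1.0)), 1.0);\n"
    "}";

// Loaded inside a CIFilter subclass with something like:
// CIKernel *k = [[CIKernel kernelsWithString:kThresholdKernel]
//                    objectAtIndex:0];
```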
In any case, I have put down the idea of switching to a fully CIImage-backed algorithm for the BBTouch stuff (at least for the time being); I just can’t get it to go fast enough. So! If anyone knows of any good ways of getting the byte buffer out of a CIImage in a speedy manner, I would love to know it.