CIImage and the bitmap problem

Sooo…

CIImages are the new hotness. They go well with a smattering of CIFilters (also high on the hotness scale). However, the thing about CIImages (and filters) is that they wait until you actually use them for something, like rendering, before they do any work. This lazy evaluation is pretty cool if all you want to do is see what is going on (i.e. your final destination is a view of some kind, which I imagine it is for most applications). But if you are like me and want to get at the raw bits, it can be a bit harder.

Well.. actually it isn't hard to GET to the bits 'n' bytes, but it is slooooow (or so it seems from my various experiments).

I have tried all the ways I can think of to freeze-dry a CIImage into a usable byte buffer:

1. I started simple and just made an NSBitmapImageRep with - (id)initWithCIImage:(CIImage *)ciImage.
2. I created a CGBitmapContext and then drew the CIImage into it (sketched below).
3. I created an NSGraphicsContext backed by a bitmap, and then drew into its CIContext.
4. I tried using offscreen versions of the NSViews I was rendering the CIImages into (the ones that render so fast when you can SEE them), and then used - (void)cacheDisplayInRect:(NSRect)rect toBitmapImageRep:(NSBitmapImageRep *)bitmapImageRep to get the bits out of them.
5. And I tried all sorts of crazy-ass combinations of all of the above.
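
For reference, here is roughly what number 2 looks like (a sketch from memory, not the exact BBTouch code; error handling omitted):

    #import <Cocoa/Cocoa.h>
    #import <QuartzCore/QuartzCore.h>

    unsigned char *CopyCIImageBytes(CIImage *image)
    {
        CGRect extent   = [image extent];
        size_t width    = (size_t)extent.size.width;
        size_t height   = (size_t)extent.size.height;
        size_t rowBytes = width * 4;

        unsigned char *buffer = malloc(rowBytes * height);
        CGColorSpaceRef rgb = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
        CGContextRef cg = CGBitmapContextCreate(buffer, width, height, 8, rowBytes,
                                                rgb, kCGImageAlphaPremultipliedFirst);

        // this is the moment the lazy CIImage is forced to actually render
        // (and, I suspect, the moment the bits crawl back from the GPU)
        CIContext *ci = [CIContext contextWithCGContext:cg options:nil];
        [ci drawImage:image atPoint:CGPointZero fromRect:extent];

        CGColorSpaceRelease(rgb);
        CGContextRelease(cg);
        return buffer; // caller is responsible for free()ing this
    }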

Sadly, every single one of them takes at least 15-30x as long as rendering into a visible view. (In fact, they all take such a similar amount of time that I am pretty sure they are all doing the same thing under the hood.) I am by no means an expert on the new CIImage/CIFilter stuff, but I am presuming this all has to do with where the image processing takes place. I am also presuming that in the case of a visible view, all those bits live out on the graphics processor, and the minute I try to get my grubby paws on them, they have to be moved all the way back to the main processor, hence the terrible soul-crushing overhead.

(As a reference: on my 2.33 GHz MacBook Pro with a shiteload of RAM, the CIImages, left to their own devices in a poorly programmed NSView subclass, will render out to the screen in about 400µs. Once I try to make that data available to the application, it takes more like 25,000µs. Which is too slow.)

There are still a few more options, mostly involving rendering the CIImage into an OpenGL texture and trying to get at the bits that way (which I may try this weekend).
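
If I do, the plan would look something like this (very much a sketch: it assumes a current, correctly sized CGL context is already set up, and pixelFormat, width, height, and buffer are placeholders):

    #import <QuartzCore/QuartzCore.h>
    #import <OpenGL/OpenGL.h>
    #import <OpenGL/gl.h>

    // render the CIImage through a GL-backed CIContext...
    CIContext *ci = [CIContext contextWithCGLContext:CGLGetCurrentContext()
                                         pixelFormat:pixelFormat
                                             options:nil];
    [ci drawImage:image atPoint:CGPointZero fromRect:[image extent]];

    // ...then read the pixels back out of the framebuffer
    glFinish(); // make sure the render has completed before reading
    glReadPixels(0, 0, width, height,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, buffer);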

And the last method, which would be the holy grail of methods, would be to somehow distill the blob detection algorithm into a form that could be compiled into a CIFilter kernel, and then find some way to spit out the blob tracking info… but that is probably impossible.
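
The purely per-pixel pieces would translate easily enough; a threshold pass, for instance, might look like this (a sketch: the kernel would still need to be wrapped in a CIFilter subclass, and it is the connected-component labeling, not the thresholding, that refuses to fit the kernel model):

    // Core Image kernel source: greyscale threshold, one building block
    // of blob detection
    static NSString * const kThresholdKernelSource = @""
        "kernel vec4 threshold(sampler src, float cutoff)\n"
        "{\n"
        "    vec4 p = sample(src, samplerCoord(src));\n"
        "    float luma = dot(p.rgb, vec3(0.2126, 0.7152, 0.0722));\n"
        "    return vec4(vec3(step(cutoff, luma)), 1.0);\n"
        "}";

    CIKernel *kernel = [[CIKernel kernelsWithString:kThresholdKernelSource]
                           objectAtIndex:0];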

sigh..

In any case, I have put down the idea of switching to a fully CIImage-backed algo for the BBTouch stuff (at least for the time being); I just can't get it to go fast enough. So! If anyone knows of a good way of getting the byte buffer out of a CIImage in a speedy manner, I would love to hear it.


3 Responses to CIImage and the bitmap problem

  1. beeduul says:

    hi ben,

i didn’t get as far as you did, but i was googling about the internets a couple of weeks ago trying to find some answers to the same question. sounds like your experiments bore out pretty much what folks were saying. :(

i’m hoping OpenCL in Snow Leopard might offer some more flexible programming directly on the GPU, akin to what you’re suggesting w/ the CIFilter kernel. have you looked into CUDA (if you have an Nvidia GPU… maybe ATI has something similar?)

    thanks for all the hard work, i’ve been playing around w/ BBTouch recently. good stuff!

    cheers,
    jeremy

  2. richard says:

    hi ben,

    great blog btw, really enjoying watching the project progress.

so this CIImage thing. i’ve come across the same problem, and unfortunately i too haven’t found a better solution yet. i did some experiments and found that the ‘bitmapData’ method of NSBitmapImageRep was the quickest way to get to the data, over ‘colorAtX:y:’ and ‘getPixel:atX:y:’.
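
    something like this (a rough sketch, untested):

        NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithCIImage:ciImage];
        unsigned char *bits = [rep bitmapData];
        NSInteger rowBytes  = [rep bytesPerRow];
        NSInteger samples   = [rep samplesPerPixel];

        // walk the raw buffer directly instead of asking for pixels one at
        // a time through colorAtX:y: / getPixel:atX:y:
        for (NSInteger y = 0; y < [rep pixelsHigh]; y++) {
            unsigned char *row = bits + y * rowBytes;
            for (NSInteger x = 0; x < [rep pixelsWide]; x++) {
                unsigned char value = row[x * samples]; // first channel
                // ... hand value off to the detector ...
            }
        }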

This did cause a problem with video data from DV cameras, whereas colorAtX:y: and getPixel:atX:y: were fine. i think this is just a Y’CbCr-to-RGB issue though, and can be fixed with some color space conversion.

    anyways, i will let you know if i find a good solution.

    keep up the great work!
    cheers,

    rich

  3. Ben says:

    Hey guys,

yeah, I don’t think it can be done (quickly, I mean). There are subtle hints in the docs that kind of suggest that is the case, but I was trying hard to ignore them :-) I really really wanted it to work.. oh well. It is actually possible to use a custom filter or QCPatch to do the blob detection, which might speed things up dramatically as well, but that solution is a bit more time-intensive than I can manage at the moment.

Also, I guess it isn’t necessary in the strictest sense (with BBTouch, I mean): it is already doing background subtraction, and now inversion, and I could very easily do blurring as well (by downsampling the image), so I guess I really don’t need CIFilters.. but I had so wanted to use them (them being the new hotness and all :-)

    i may have to write my own barrel distortion correction filters tho. :-)

edit: Richard, I meant to mention (just FYI, in case you were curious :-) that BBTouch handles the color space issue by doing a color conversion straight off the sequence grabber, i.e. when I build the CGBitmapContext that will eventually become the NSBitmapImageRep backing for all the detection and whatnot, I convert right to greyscale. That is a relatively fast conversion from any colorspace, and conveniently exactly the kind of thing you want for blob detection :-)
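
    Roughly like this (a sketch; grayBuffer, width, height, and capturedFrame are made-up names):

        // drawing the captured frame into an 8-bit gray CGBitmapContext does
        // the colorspace conversion in one step, whatever the source space was
        CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
        CGContextRef ctx = CGBitmapContextCreate(grayBuffer, width, height,
                                                 8, width, gray, kCGImageAlphaNone);
        CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), capturedFrame);
        CGColorSpaceRelease(gray);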

    cheers!
    -b
