I'm writing code to decompress a raw Annex B H.264 stream: I parse the stream, create a CMVideoFormatDescription from the SPS/PPS NALUs, and wrap the remaining NALUs I extract from the stream in CMSampleBuffers.
I've hit a mental block on how to manage the CMBlockBuffer and CMSampleBuffer memory for the decoder. I suspect the real issue is a gap in my understanding of how Core Foundation handles memory, so my question is really about that, but I'm hoping the context is helpful.
If I create a CMBlockBuffer like this:
CMBlockBufferRef blockBuffer;
OSStatus status = CMBlockBufferCreateWithMemoryBlock(NULL,               // structureAllocator
                                                     memoryBlock,        // memoryBlock
                                                     blockBufferLength,  // blockLength
                                                     kCFAllocatorNull,   // blockAllocator: don't free memoryBlock for me
                                                     NULL,               // customBlockSource
                                                     0,                  // offsetToData
                                                     blockBufferLength,  // dataLength
                                                     kCMBlockBufferAlwaysCopyDataFlag | kCMBlockBufferAssureMemoryNowFlag,
                                                     &blockBuffer);
and add it to a CMSampleBuffer like this:
CMSampleBufferRef sampleBuffer;
status = CMSampleBufferCreate(kCFAllocatorDefault,
                              blockBuffer,        // dataBuffer
                              true,               // dataReady
                              NULL,               // makeDataReadyCallback
                              NULL,               // makeDataReadyRefcon
                              formatDescription,  // formatDescription
                              1,                  // numSamples
                              0,                  // numSampleTimingEntries
                              NULL,               // sampleTimingArray
                              1,                  // numSampleSizeEntries
                              &sampleSize,        // sampleSizeArray
                              &sampleBuffer);
How should I handle the block buffer after this? Does the CMSampleBuffer retain the block buffer, or do I need to do something to make sure its memory is not deallocated?
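To make that concrete, here is the ownership pattern I *think* the CF Create rule implies (wrapNALU is a hypothetical helper I made up for illustration; I'm assuming, not asserting, that CMSampleBufferCreate retains the block buffer):

```c
#include <CoreMedia/CoreMedia.h>

// Hypothetical helper sketching my current ownership handling. Both Create
// calls follow the CF Create rule, so I own one reference to each object;
// releasing blockBuffer after CMSampleBufferCreate should be safe *if* the
// sample buffer retained it -- which is exactly what I'd like confirmed.
static CMSampleBufferRef wrapNALU(void *memoryBlock,
                                  size_t blockBufferLength,
                                  CMFormatDescriptionRef formatDescription) {
    CMBlockBufferRef blockBuffer = NULL;
    OSStatus status = CMBlockBufferCreateWithMemoryBlock(
        NULL, memoryBlock, blockBufferLength, kCFAllocatorNull, NULL,
        0, blockBufferLength,
        kCMBlockBufferAlwaysCopyDataFlag | kCMBlockBufferAssureMemoryNowFlag,
        &blockBuffer);
    if (status != kCMBlockBufferNoErr) return NULL;

    size_t sampleSize = blockBufferLength;
    CMSampleBufferRef sampleBuffer = NULL;
    status = CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, true,
                                  NULL, NULL, formatDescription,
                                  1, 0, NULL, 1, &sampleSize, &sampleBuffer);

    // I own blockBuffer from the Create call; if the sample buffer retained
    // it, this balances my reference and the data stays alive via sampleBuffer.
    CFRelease(blockBuffer);
    return (status == noErr) ? sampleBuffer : NULL;
}
```

If that release is premature, it would explain the results I'm seeing, which is why I want to pin down the retain semantics.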
Also, regarding the asynchronous decode process: is there a sensible way to know when the decoder is done with a CMSampleBuffer so I can release it?
My intuition tells me the CMSampleBuffer would retain the CMBlockBuffer, and the VTDecompressionSession would retain the CMSampleBuffer until it's done decoding, but this feels like undocumented territory, so I'm looking for some direction. The results I'm getting suggest my intuition might be wrong, so I need to rule out memory management as the issue to keep my sanity...
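For reference, the decode path I'm describing looks roughly like this (didDecompress is registered via the VTDecompressionOutputCallbackRecord at session creation; the release placement reflects my assumption, not documented behavior):

```c
#include <VideoToolbox/VideoToolbox.h>

// Output callback VideoToolbox invokes once per decoded frame. My working
// assumption is that by the time this fires, the session no longer needs
// the CMSampleBuffer I passed to VTDecompressionSessionDecodeFrame.
static void didDecompress(void *decompressionOutputRefCon,
                          void *sourceFrameRefCon,
                          OSStatus status,
                          VTDecodeInfoFlags infoFlags,
                          CVImageBufferRef imageBuffer,
                          CMTime presentationTimeStamp,
                          CMTime presentationDuration) {
    // imageBuffer holds the decoded pixels; sourceFrameRefCon is whatever
    // per-frame context was attached in the decode call below.
}

static void decodeFrame(VTDecompressionSessionRef session,
                        CMSampleBufferRef sampleBuffer) {
    VTDecodeInfoFlags infoFlags = 0;
    VTDecompressionSessionDecodeFrame(session,
                                      sampleBuffer,
                                      kVTDecodeFrame_EnableAsynchronousDecompression,
                                      NULL,        // sourceFrameRefCon
                                      &infoFlags);
    // My intuition: the session retains sampleBuffer for as long as it needs
    // it, so dropping my reference immediately should be safe -- but this is
    // precisely the part I can't find documented.
    CFRelease(sampleBuffer);
}
```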