

I'm new to iOS development and Stack Overflow, so please bear with me if my code doesn't look great.

I have set up a test app using ARC, and I use AVAssetWriter to create a video from images in my app bundle. Everything works as expected and the video is created correctly, but when I profile my app with Instruments I get memory leaks that I really don't know how to fix, because I can't see anything related to my code in the Detail view (all the leaked objects are Mallocs, and the responsible library is VideoToolbox).

This is the method in my view controller class that I call to start writing the video:

- (void)writeVideo
{
    // Set the frameDuration ivar (50/600 = 1 sec / 12 number of frames)
    frameDuration = CMTimeMake(50, 600);
    nextPresentationTimeStamp = kCMTimeZero;

    [self deleteTempVideo];

    NSError *error = nil;
    AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:self.videoPath] fileType:AVFileTypeQuickTimeMovie error:&error];
    if (!error) {
        // Define video settings to be passed to the AVAssetWriterInput instance
        NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                       AVVideoCodecH264, AVVideoCodecKey, 
                                       [NSNumber numberWithInt:640],AVVideoWidthKey, 
                                       [NSNumber numberWithInt:480], AVVideoHeightKey, nil];
        // Instanciate the AVAssetWriterInput
        AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
        // Instanciate the AVAssetWriterInputPixelBufferAdaptor to be connected to the writer input
        AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:nil];
        // Add the writer input to the writer and begin writing
        [writer addInput:writerInput];
        [writer startWriting];
        [writer startSessionAtSourceTime:nextPresentationTimeStamp];
        //
        dispatch_queue_t mediaDataRequestQueue = dispatch_queue_create("Media data request queue", NULL);
        [writerInput requestMediaDataWhenReadyOnQueue:mediaDataRequestQueue usingBlock:^{
            while (writerInput.isReadyForMoreMediaData) {
                CVPixelBufferRef nextBuffer = [self fetchNextPixelBuffer];
                if (nextBuffer) {
                    [pixelBufferAdaptor appendPixelBuffer:nextBuffer withPresentationTime:nextPresentationTimeStamp];
                    nextPresentationTimeStamp = CMTimeAdd(nextPresentationTimeStamp, frameDuration);
                    CVPixelBufferRelease(nextBuffer);                    
                    dispatch_async(dispatch_get_main_queue(), ^{
                        NSUInteger totalFrames = [self.imagesNames count]; 
                        float progress = 1.0 * (totalFrames - [self.imageNamesCopy count]) / totalFrames;
                        [self.progressBar setProgress:progress animated:YES];
                    });
                } else {
                    [writerInput markAsFinished];
                    [writer finishWriting];
                    [self loadVideo];
                    dispatch_release(mediaDataRequestQueue);
                    break;
                }
            }
        }];
    }
}


This is the method I use to fetch the pixel buffers that get appended to the pixel buffer adaptor instantiated in the previous method:

// Consume the imageNamesCopy mutable array and return a CVPixelBufferRef relative to the last object of the array
- (CVPixelBufferRef)fetchNextPixelBuffer
{
    NSString *imageName = [self.imageNamesCopy lastObject];
    if (imageName) [self.imageNamesCopy removeLastObject];
    // Create an UIImage instance
    UIImage *image = [UIImage imageNamed:imageName];
    CGImageRef imageRef = image.CGImage;    

    CVPixelBufferRef buffer = NULL;
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    // Pixel buffer options
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey, 
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
    // Create the pixel buffer
    CVReturn result = CVPixelBufferCreate(NULL, width, height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options, &buffer);
    if (result == kCVReturnSuccess && buffer) {
        CVPixelBufferLockBaseAddress(buffer, 0);
        void *bufferPointer = CVPixelBufferGetBaseAddress(buffer);
        // Define the color space
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        // Create the bitmap context to draw the image
        CGContextRef context = CGBitmapContextCreate(bufferPointer, width, height, 8, 4 * width, colorSpace, kCGImageAlphaNoneSkipFirst);
        CGColorSpaceRelease(colorSpace);
        if (context) {
            CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
            CGContextRelease(context);
        }
        CVPixelBufferUnlockBaseAddress(buffer, 0);
    }
    return buffer;
}

2 Answers


I found out that the memory leak is not related to the video writing code. The leaking code appears to be the [self deleteTempVideo] method I call from inside - (void)writeVideo.

I still have to figure out what is wrong with it, but I guess my question is out of scope at this point.
Here is the code of - (void)deleteTempVideo:

- (void)deleteTempVideo
{
    NSFileManager *fileManager = [NSFileManager defaultManager];
    if ([fileManager isReadableFileAtPath:self.videoPath]) [fileManager removeItemAtPath:self.videoPath error:nil];
}

And this is the getter I use to access the self.videoPath @property:

- (NSString *)videoPath
{
    if (!_videoPath) {
        NSString *fileName = @"test.mov";
        NSString *directoryPath = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) lastObject];
        _videoPath = [directoryPath stringByAppendingPathComponent:fileName];
    }
    return _videoPath;
}
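
One thing that may help narrow it down: capture the NSError from removeItemAtPath:error: instead of passing nil, so a failed deletion is at least visible in the log. A minimal sketch of the same method with error reporting added (same self.videoPath property as above):

- (void)deleteTempVideo
{
    NSFileManager *fileManager = [NSFileManager defaultManager];
    // Only try to remove the file if it is actually there
    if ([fileManager isReadableFileAtPath:self.videoPath]) {
        NSError *error = nil;
        BOOL removed = [fileManager removeItemAtPath:self.videoPath error:&error];
        if (!removed) {
            // Surface the failure instead of silently ignoring it
            NSLog(@"Could not delete temp video at %@: %@", self.videoPath, error);
        }
    }
}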
Answered 2012-01-13T11:19:55.047

In my experience, when you use ARC and blocks together, you should not access member variables (ivars) directly inside the block. Instead, you can go through a property on a weak reference to self.

Say you have a class named ClassA and a member variable named _memberA. Normally I would define a property for memberA, and then I can write code like the following.

__weak ClassA *weakSelf = self;
someObject.someBlock = ^{
    weakSelf.memberA ...   // use the property here, not _memberA
};

PS: If the block is passed to a class method, like [UIView animateWithDuration:animations:completion:], then using member variables directly seems to be fine.

I have fixed many memory leaks this way.
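
Applied to the requestMediaDataWhenReadyOnQueue: block in the question, the pattern would look roughly like this. It is only a sketch of the weak-reference idea, with the loop body abbreviated; writer, writerInput and mediaDataRequestQueue are the local variables from writeVideo:

__weak typeof(self) weakSelf = self;
[writerInput requestMediaDataWhenReadyOnQueue:mediaDataRequestQueue usingBlock:^{
    while (writerInput.isReadyForMoreMediaData) {
        // Go through weakSelf instead of capturing self (and its ivars) strongly
        CVPixelBufferRef nextBuffer = [weakSelf fetchNextPixelBuffer];
        if (nextBuffer) {
            // ... append the buffer and update the progress bar via weakSelf ...
            CVPixelBufferRelease(nextBuffer);
        } else {
            [writerInput markAsFinished];
            [writer finishWriting];
            [weakSelf loadVideo];
            break;
        }
    }
}];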

Answered 2013-09-26T09:24:16.610