
I have successfully built ffmpeg and iFrameExtractor against iOS 5.1, but when I play the video there is no sound.

// Register all formats and codecs
avcodec_register_all();
av_register_all();
avformat_network_init();


if(avformat_open_input(&pFormatCtx, [@"http://somesite.com/test.mp4" cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL) != 0) {
    av_log(NULL, AV_LOG_ERROR, "Couldn't open file\n");
    goto initError;
}

The log output is:

[swscaler @ 0xdd3000] No accelerated colorspace conversion found from yuv420p to rgb24.
2012-10-22 20:42:47.344 iFrameExtractor[356:707] video duration: 5102.840000
2012-10-22 20:42:47.412 iFrameExtractor[356:707] video size: 720 x 576
2012-10-22 20:42:47.454 iFrameExtractor[356:707] Application windows are expected to have a root view

Here is my configure script for ffmpeg 0.11.1:

#!/bin/tcsh -f

rm -rf compiled/*

./configure \
--cc=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc \
--as='/usr/local/bin/gas-preprocessor.pl /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc' \
--sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS5.1.sdk \
--target-os=darwin \
--arch=arm \
--cpu=cortex-a8 \
--extra-cflags='-arch armv7' \
--extra-ldflags='-arch armv7 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS5.1.sdk' \
--prefix=compiled/armv7 \
--enable-cross-compile \
--enable-nonfree \
--disable-armv5te \
--disable-swscale-alpha \
--disable-doc \
--disable-ffmpeg \
--disable-ffplay \
--disable-ffprobe \
--disable-ffserver \
--enable-decoder=h264 \
--enable-decoder=svq3 \
--disable-asm \
--disable-bzlib \
--disable-gpl \
--disable-shared \
--enable-static \
--disable-mmx \
--disable-neon \
--disable-decoders \
--disable-muxers \
--disable-demuxers \
--disable-devices \
--disable-parsers \
--disable-encoders \
--enable-protocols \
--disable-filters \
--disable-bsfs \
--disable-postproc \
--disable-debug 

2 Answers


So, assuming you do have a block of code like the one below, the question is how you handle the audio: you have to process it with one of the audio APIs, and if you are mostly dealing with known formats, Audio Queues are probably the simplest.

First, in your initialization, get the audio information from the stream:

// Retrieve stream information
if (av_find_stream_info(pFormatCtx) < 0)
    return; // Couldn't find stream information

// Find the first video and audio streams
videoStream = -1;
audioStream = -1;
for (int i = 0; i < pFormatCtx->nb_streams; i++) {
    if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
        videoStream = i;
    }
    if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO) {
        audioStream = i;
        NSLog(@"found audio stream");
    }
}
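
Finding the stream index alone is not enough; the audio codec also has to be opened before any packets can be decoded. A minimal sketch of that step, using the ffmpeg 0.11-era API and the variable names from the snippet above (`audioCodecCtx` is a name introduced here for illustration):

// Open a decoder for the audio stream found above (sketch only).
AVCodecContext *audioCodecCtx = NULL;
if (audioStream >= 0) {
    audioCodecCtx = pFormatCtx->streams[audioStream]->codec;

    // This only succeeds if the matching decoder (e.g. AAC) was enabled in your ffmpeg build.
    AVCodec *audioCodec = avcodec_find_decoder(audioCodecCtx->codec_id);
    if (audioCodec == NULL) {
        NSLog(@"Unsupported audio codec");
        return;
    }

    // Open the codec; after this, packets from audioStream can be decoded.
    if (avcodec_open2(audioCodecCtx, audioCodec, NULL) < 0) {
        NSLog(@"Could not open audio codec");
        return;
    }
}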


Then later in your processing loop do something like this.

while (!frameFinished && av_read_frame(pFormatCtx, &packet) >= 0) {
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStream) {
        // Decode video frame
        // do something with the video.
    }

    if (packet.stream_index == audioStream) {
        // NSLog(@"audio stream");

        // Do something with the audio packet; here we simply add it to a
        // processing queue to be handled by another thread.
        [audioPacketQueueLock lock];
        audioPacketQueueSize += packet.size;
        [audioPacketQueue addObject:[NSMutableData dataWithBytes:&packet length:sizeof(packet)]];
        [audioPacketQueueLock unlock];
    }
}
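
The consumer thread then pops packets off that queue and decodes them into raw samples. A rough sketch of that side, assuming the `audioCodecCtx` opened earlier and ffmpeg 0.11's `avcodec_decode_audio4` API (the queue names match the producer side above):

// Consumer side: pop one packet from the queue and decode it into raw samples.
- (void)decodeNextAudioPacket
{
    NSMutableData *packetData = nil;

    [audioPacketQueueLock lock];
    if ([audioPacketQueue count] > 0) {
        packetData = [audioPacketQueue objectAtIndex:0];
        [audioPacketQueue removeObjectAtIndex:0];
    }
    [audioPacketQueueLock unlock];

    if (packetData == nil)
        return;

    AVPacket packet = *(AVPacket *)[packetData bytes];
    audioPacketQueueSize -= packet.size;

    // Decode the packet into an AVFrame of raw samples.
    AVFrame *decodedFrame = avcodec_alloc_frame();
    int gotFrame = 0;
    int len = avcodec_decode_audio4(audioCodecCtx, decodedFrame, &gotFrame, &packet);
    if (len >= 0 && gotFrame) {
        // decodedFrame->data / decodedFrame->nb_samples now hold the PCM data
        // that gets copied into an Audio Queue buffer (see the sketch below).
    }

    av_free(decodedFrame);
    av_free_packet(&packet);
}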

To play the audio, you can have a look at this example:

https://github.com/mooncatventures-group/FFPlayer-beta1/blob/master/FFAVFrames-test/AudioController.m
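
For reference, the core of an Audio Queue based player like the one in that example boils down to roughly the following. This is only an illustrative sketch, not the linked file; the stream description and buffer sizes are assumptions and have to match whatever PCM your decoder actually produces.

#import <AudioToolbox/AudioToolbox.h>

// Called by the audio queue whenever a buffer has been played and can be refilled.
static void audioQueueOutputCallback(void *inUserData, AudioQueueRef inAQ,
                                     AudioQueueBufferRef inBuffer) {
    // Copy decoded PCM from the packet queue into inBuffer->mAudioData,
    // set inBuffer->mAudioDataByteSize, then hand the buffer back to the queue.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

- (void)startAudioQueueWithCodecContext:(AVCodecContext *)audioCodecCtx
{
    // Describe the PCM produced by the decoder (16-bit interleaved is an assumption).
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = audioCodecCtx->sample_rate;
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    asbd.mChannelsPerFrame = audioCodecCtx->channels;
    asbd.mBitsPerChannel   = 16;
    asbd.mBytesPerFrame    = asbd.mChannelsPerFrame * asbd.mBitsPerChannel / 8;
    asbd.mFramesPerPacket  = 1;
    asbd.mBytesPerPacket   = asbd.mBytesPerFrame;

    AudioQueueRef queue;
    AudioQueueNewOutput(&asbd, audioQueueOutputCallback, (__bridge void *)self,
                        NULL, NULL, 0, &queue);

    // Prime a few buffers so the queue has data before it starts playing.
    for (int i = 0; i < 3; i++) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(queue, 32 * 1024, &buffer);
        audioQueueOutputCallback((__bridge void *)self, queue, buffer);
    }

    AudioQueueStart(queue, NULL);
}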

Answered 2012-10-22T13:23:54.090

There isn't enough information here.

For example, what URL are you opening?

Where in the log are the messages? I know that with 0.11 you get a few warnings, e.g. about not including network_init, but that will not prevent it from working. Some things changed from previous versions; for instance, you used to be able to append ?tcp to specify that ffmpeg should use TCP, but that now has to be done through a dictionary.

If possible, please post the system log and the build log.

Here is an example from one of our apps:

avcodec_register_all();
avdevice_register_all();
av_register_all();
avformat_network_init();

const char *filename = [url UTF8String];
NSLog(@"filename = %@", url);
// err = av_open_input_file(&avfContext, filename, NULL, 0, NULL);
AVDictionary *opts = 0;

if (usesTcp) {
    av_dict_set(&opts, "rtsp_transport", "tcp", 0);
}

err = avformat_open_input(&avfContext, filename, NULL, &opts);
av_dict_free(&opts);
if (err) {
    NSLog(@"Error: Could not open stream: %d", err);
    return nil;
}
else {
    NSLog(@"Opened stream");
}
Answered 2012-10-21T15:19:44.523