
I'm doing something similar: streaming audio from the iPod library, sending the data over the network or Bluetooth, and playing it back with Audio Queues.

Thanks for this question and the code. It helped me a lot.

I have two questions.

  1. What should I send from one device to the other? CMSampleBufferRef? AudioBuffer? mData? AudioQueueBuffer? Packets? I have no idea.

  2. The app crashes when it finishes playing, and I get error (-12733). I just want to know how to handle the error instead of letting it crash. (Check the OSStatus? Stop when the error occurs?)

    Error: could not read sample data (-12733)


1 Answer


I will answer your second question first - don't wait for the app to crash; you can stop pulling audio from the track by checking the number of samples available in the CMSampleBufferRef you are reading; for example (this code will also be included in the second half of my answer):

CMSampleBufferRef sample;
sample = [readerOutput copyNextSampleBuffer];

// copyNextSampleBuffer returns NULL at the end of the track, so check
// for NULL before asking the buffer for its sample count
CMItemCount numSamples = sample ? CMSampleBufferGetNumSamples(sample) : 0;

if (!sample || (numSamples == 0)) {
  // handle end of audio track here
  return;
}

As for your first question, that depends on the kind of audio you are grabbing - it could be PCM (uncompressed) or VBR (compressed) format. I'm not even going to bother addressing the PCM part, because it's simply not smart to send uncompressed audio data from one phone to another over the network - it's unnecessarily expensive and will clog your networking bandwidth. So we're left with VBR data. For that you'll have to send the contents of the AudioBuffer and the AudioStreamPacketDescription you pulled from the sample. But then again, it's probably best to explain what I'm saying by code:

-(void)broadcastSample
{
    [broadcastLock lock];

    CMSampleBufferRef sample;
    sample = [readerOutput copyNextSampleBuffer];

    // guard against NULL before calling CMSampleBufferGetNumSamples
    CMItemCount numSamples = sample ? CMSampleBufferGetNumSamples(sample) : 0;

    if (!sample || (numSamples == 0)) {
        Packet *packet = [Packet packetWithType:PacketTypeEndOfSong];
        packet.sendReliably = NO;
        [self sendPacketToAllClients:packet];
        [sampleBroadcastTimer invalidate];
        [broadcastLock unlock];   // don't leave the lock held on early return
        return;
    }


        NSLog(@"SERVER: going through sample loop");

        // CMBuffer is filled in (retained) by the call below, so there is
        // no need to fetch it with CMSampleBufferGetDataBuffer first
        CMBlockBufferRef CMBuffer = NULL;
        AudioBufferList audioBufferList;

        CheckError(CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
                                                                           sample,
                                                                           NULL,
                                                                           &audioBufferList,
                                                                           sizeof(audioBufferList),
                                                                           NULL,
                                                                           NULL,
                                                                           kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                                           &CMBuffer
                                                                           ),
                   "could not read sample data");

        const AudioStreamPacketDescription   * inPacketDescriptions;

        size_t                               packetDescriptionsSizeOut;
        size_t inNumberPackets;

        CheckError(CMSampleBufferGetAudioStreamPacketDescriptionsPtr(sample, 
                                                                     &inPacketDescriptions,
                                                                     &packetDescriptionsSizeOut),
                   "could not read sample packet descriptions");

        inNumberPackets = packetDescriptionsSizeOut/sizeof(AudioStreamPacketDescription);

        AudioBuffer audioBuffer = audioBufferList.mBuffers[0];



        for (int i = 0; i < inNumberPackets; ++i)
        {

            NSLog(@"going through packets loop");
            SInt64 dataOffset = inPacketDescriptions[i].mStartOffset;
            UInt32 dataSize   = inPacketDescriptions[i].mDataByteSize;            

            size_t packetSpaceRemaining = MAX_PACKET_SIZE - packetBytesFilled - packetDescriptionsBytesFilled;
            size_t packetDescrSpaceRemaining = MAX_PACKET_DESCRIPTIONS_SIZE - packetDescriptionsBytesFilled;        

            if ((packetSpaceRemaining < (dataSize + AUDIO_STREAM_PACK_DESC_SIZE)) || 
                (packetDescrSpaceRemaining < AUDIO_STREAM_PACK_DESC_SIZE))
            {
                if (![self encapsulateAndShipPacket:packet packetDescriptions:packetDescriptions packetID:assetOnAirID])
                    break;
            }

            memcpy((char *)packet + packetBytesFilled,
                   (const char *)audioBuffer.mData + dataOffset, dataSize);

            char *packetDescr = [self encapsulatePacketDescription:inPacketDescriptions[i]
                                                      mStartOffset:packetBytesFilled];
            memcpy((char *)packetDescriptions + packetDescriptionsBytesFilled,
                   packetDescr, AUDIO_STREAM_PACK_DESC_SIZE);
            free(packetDescr);   // encapsulatePacketDescription mallocs its result


            packetBytesFilled += dataSize;
            packetDescriptionsBytesFilled += AUDIO_STREAM_PACK_DESC_SIZE; 

            // if this is the last packet, then ship it
            if (i == (inNumberPackets - 1)) {          
                NSLog(@"woooah! this is the last packet (%d).. so we will ship it!", i);
                if (![self encapsulateAndShipPacket:packet packetDescriptions:packetDescriptions packetID:assetOnAirID])
                    break;

            }

        }

    // balance the copy/retain from copyNextSampleBuffer and
    // CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer
    CFRelease(CMBuffer);
    CFRelease(sample);

    [broadcastLock unlock];
}

Some of the methods I used in the code above are methods you don't need to worry about, such as adding headers to each packet (I was creating my own protocol; you can create your own). For more information, see the tutorial.

- (BOOL)encapsulateAndShipPacket:(void *)source
              packetDescriptions:(void *)packetDescriptions
                        packetID:(NSString *)packetID
{

    // package Packet
    char * headerPacket = (char *)malloc(MAX_PACKET_SIZE + AUDIO_BUFFER_PACKET_HEADER_SIZE + packetDescriptionsBytesFilled);

    appendInt32(headerPacket, 'SNAP', 0);    
    appendInt32(headerPacket,packetNumber, 4);    
    appendInt16(headerPacket,PacketTypeAudioBuffer, 8);   
    // we use this so that we can add int32s later
    UInt16 filler = 0x00;
    appendInt16(headerPacket,filler, 10);    
    appendInt32(headerPacket, packetBytesFilled, 12);
    appendInt32(headerPacket, packetDescriptionsBytesFilled, 16);    
    appendUTF8String(headerPacket, [packetID UTF8String], 20);


    int offset = AUDIO_BUFFER_PACKET_HEADER_SIZE;        
    memcpy((char *)(headerPacket + offset), (char *)source, packetBytesFilled);

    offset += packetBytesFilled;

    memcpy((char *)(headerPacket + offset), (char *)packetDescriptions, packetDescriptionsBytesFilled);

    NSData *completePacket = [NSData dataWithBytes:headerPacket length: AUDIO_BUFFER_PACKET_HEADER_SIZE + packetBytesFilled + packetDescriptionsBytesFilled];        



    NSLog(@"sending packet number %lu to all peers", packetNumber);
    NSError *error;    
    if (![_session sendDataToAllPeers:completePacket withDataMode:GKSendDataReliable error:&error])   {
        NSLog(@"Error sending data to clients: %@", error);
    }   

    // reset packet
    packetBytesFilled = 0;
    packetDescriptionsBytesFilled = 0;

    packetNumber++;
    free(headerPacket);
    return YES;

}

- (char *)encapsulatePacketDescription:(AudioStreamPacketDescription)inPacketDescription
                          mStartOffset:(SInt64)mStartOffset
{
    // note: mStartOffset is shipped as a 32-bit integer rather than 64 bits,
    // which saves 4 bytes per description
    char * packetDescription = (char *)malloc(AUDIO_STREAM_PACK_DESC_SIZE); // caller must free

    appendInt32(packetDescription, (UInt32)mStartOffset, 0);
    appendInt32(packetDescription, inPacketDescription.mVariableFramesInPacket, 4);
    appendInt32(packetDescription, inPacketDescription.mDataByteSize,8);    

    return packetDescription;
}

Receiving the data:

- (void)receiveData:(NSData *)data fromPeer:(NSString *)peerID inSession:(GKSession *)session context:(void *)context
{

    Packet *packet = [Packet packetWithData:data];
    if (packet == nil)
    {
        NSLog(@"Invalid packet: %@", data);
        return;
    }

    Player *player = [self playerWithPeerID:peerID];

    if (player != nil)
    {
        player.receivedResponse = YES;  // this is the new bit
    } else {
        // assign to the outer variable rather than shadowing it with a new
        // declaration, otherwise serverReceivedPacket: below gets nil
        player = [[Player alloc] init];
        player.peerID = peerID;
        [_players setObject:player forKey:player.peerID];
    }

    if (self.isServer)
    {
        [Logger Log:@"SERVER: we just received packet"];   
        [self serverReceivedPacket:packet fromPlayer:player];

    }
    else
        [self clientReceivedPacket:packet];
}

Notes:

  1. There are a lot of networking details that I didn't cover here (i.e., in the receiving-data part; I used a lot of custom-made objects without expanding on their definitions). I didn't, because explaining all of that is beyond the scope of one answer on SO. However, you can follow Ray Wenderlich's excellent tutorial. He takes his time explaining networking principles, and the architecture I use above is taken almost verbatim from him. There is a catch, though (see the next point).

  2. Depending on your project, GKSession may not be suitable (especially if your project is real-time, or if you need more than 2-3 devices connected simultaneously); it has many limitations. You will have to dig deeper and use Bonjour directly. iPhone Cool Projects has a nice quick chapter with a good example of using Bonjour services. It's not as scary as it sounds (and the Apple documentation is kind of overbearing on that subject).

  3. I noticed you use GCD for your multithreading. Again, if you are dealing with real-time audio, then you don't want to use high-level frameworks that do the heavy lifting for you (GCD is one of them). For more on this subject, read this excellent article, and also the long discussion between me and Justin in the comments of this answer.

  4. You may want to check out MTAudioProcessingTap, introduced in iOS 6. It could potentially save you some hassle when dealing with AVAssets. I haven't tested this stuff, though; it came out after I had done all my work.

  5. Last but not least, you may want to check out the Learning Core Audio book. It's the widely acknowledged reference on this subject. I remember being as stuck as you are back when you asked this question. Core Audio is heavy-duty and it takes time to sink in, so SO will only give you pointers. You will have to take your time to absorb the material yourself, and then you will figure out how things work. Good luck!

Answered 2013-02-04T17:18:03.117