
I'm new to the audio frameworks. Can anyone help me write code to save the audio that is being captured through the microphone?

Below is the code that plays the microphone input through the iPhone speaker; now I want to save the audio on the iPhone for future use.

I found the code for recording audio from the microphone here: http://www.stefanpopp.de/2011/capture-iphone-microphone/

/**
 Code starts here for playing back the recorded voice.
 */

static OSStatus playbackCallback(void *inRefCon, 
                                 AudioUnitRenderActionFlags *ioActionFlags, 
                                 const AudioTimeStamp *inTimeStamp, 
                                 UInt32 inBusNumber, 
                                 UInt32 inNumberFrames, 
                                 AudioBufferList *ioData) {    

    /**
     This is the reference to the object who owns the callback.
     */
    AudioProcessor *audioProcessor = (AudioProcessor*) inRefCon;

    // iterate over incoming stream and copy to output stream
    for (int i=0; i < ioData->mNumberBuffers; i++) { 
        AudioBuffer buffer = ioData->mBuffers[i];

        // find minimum size
        UInt32 size = min(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);

        // copy buffer to audio buffer which gets played after function return
        memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);

        // set data size
        buffer.mDataByteSize = size; 

        // get a copy of the recorder struct variable
        Recorder recInfo = audioProcessor.audioRecorder;
        // write the bytes
        OSStatus audioErr = noErr;
        if (recInfo.running) {
            audioErr = AudioFileWriteBytes(recInfo.recordFile,
                                           false,
                                           recInfo.inStartingByte,
                                           &size,
                                           buffer.mData); // mData is already a void *, no & needed
            assert(audioErr == noErr);
            // increment our byte count
            recInfo.inStartingByte += (SInt64)size; // size is the number of bytes written
            audioProcessor.audioRecorder = recInfo;
        }
    }

    return noErr;
}

-(void)prepareAudioFileToRecord {

    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;

    NSTimeInterval time = [[NSDate date] timeIntervalSince1970]; // returned as a double
    long digits = (long)time; // this is the first 10 digits
    int decimalDigits = (int)(fmod(time, 1) * 1000); // this will get the 3 missing digits
    //    long timestamp = (digits * 1000) + decimalDigits;
    NSString *timeStampValue = [NSString stringWithFormat:@"%ld", digits];
    //    NSString *timeStampValue = [NSString stringWithFormat:@"%ld.%d", digits, decimalDigits];

    NSString *fileName = [NSString stringWithFormat:@"test%@.caf", timeStampValue];
    NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
    NSURL *fileURL = [NSURL fileURLWithPath:filePath];

    // modify the ASBD (see EDIT: towards the end of this post!)
    audioFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;

    // set up the file (bridge cast will differ if using ARC)
    OSStatus audioErr = noErr;
    audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
                                      kAudioFileCAFType,
                                      &audioFormat,
                                      kAudioFileFlags_EraseFile,
                                      &audioRecorder.recordFile);
    assert(audioErr == noErr); // simple error checking
    audioRecorder.inStartingByte = 0;
    audioRecorder.running = true;
    self.audioRecorder = audioRecorder;
}

Thanks in advance, Bala.


2 Answers


To write the bytes from an AudioBuffer to a local file, we need help from the Audio File Services functions, which are included in the AudioToolbox framework.

Conceptually we will do the following: set up an audio file and keep a reference to it (we need this reference to be accessible from the render callback you included in your post). We also need to keep track of the number of bytes written each time the callback is called. Finally, a flag lets us know when to stop writing and close the file.

Because the code at the link you provided declares an AudioStreamBasicDescription that is LPCM, and therefore constant bitrate, we can use the AudioFileWriteBytes function (writing compressed audio is more involved and uses the AudioFileWritePackets function instead).

Let's first declare a custom struct (which holds all the extra data we need) and add an instance variable of this struct, plus a property that points to it. We'll add this to the AudioProcessor custom class, since you can already access that object from the callback, where you typecast it in this line:

AudioProcessor *audioProcessor = (AudioProcessor*) inRefCon;

Add this to AudioProcessor.h (above the @interface):

typedef struct Recorder {
    AudioFileID recordFile;
    SInt64      inStartingByte;
    Boolean     running;
} Recorder;

Now let's add an instance variable, make a pointer property for it, and assign the pointer to the instance variable (so that we can access it from the callback function). In the @interface, add an instance variable named audioRecorder, and also make the ASBD available to the class:

Recorder audioRecorder;
AudioStreamBasicDescription recordFormat; // assign this ivar where the ASBD is created in the class

In the method -(void)initializeAudio, comment out or remove this line, since we have made recordFormat an ivar:

//AudioStreamBasicDescription recordFormat;

Now add the kAudioFormatFlagIsBigEndian format flag where the ASBD is set up:

// also modify the ASBD in the AudioProcessor classes -(void)initializeAudio method (see EDIT: towards the end of this post!)
    recordFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;

Finally, add a property that is a pointer to the audioRecorder instance variable, and don't forget to synthesize it in AudioProcessor.m. We'll name the pointer property audioRecorderPointer:

@property Recorder *audioRecorderPointer;

// in .m synthesise the property
@synthesize audioRecorderPointer;

Now let's assign the pointer to the ivar (this could go in the -(void)initializeAudio method of the AudioProcessor class):

// ASSIGN POINTER PROPERTY TO IVAR
self.audioRecorderPointer = &audioRecorder;

Now add a method in AudioProcessor.m that sets up the file and opens it so that we can write to it. This should be called before you start running the AUGraph:

-(void)prepareAudioFileToRecord {
    // let's set up a test file in the documents directory
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
    NSString *fileName = @"test_recording.aif";
    NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
    NSURL *fileURL = [NSURL fileURLWithPath:filePath];

    // set up the file (bridge cast will differ if using ARC)
    OSStatus audioErr = noErr;
    audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
                                      kAudioFileAIFFType,
                                      &recordFormat, // pass a pointer to the ASBD
                                      kAudioFileFlags_EraseFile,
                                      &audioRecorder.recordFile);
    assert(audioErr == noErr); // simple error checking
    audioRecorder.inStartingByte = 0;
    audioRecorder.running = true;
}

OK, we are nearly there. We now have a file to write to, and an AudioFileID that can be accessed from the render callback. So inside the callback function you posted, add the following right before you return noErr at the end of the method:

// get a pointer to the recorder struct instance variable
Recorder *recInfo = audioProcessor.audioRecorderPointer;
// write the bytes
OSStatus audioErr = noErr;
if (recInfo->running) {
    audioErr = AudioFileWriteBytes(recInfo->recordFile,
                                   false,
                                   recInfo->inStartingByte,
                                   &size,
                                   buffer.mData);
    assert(audioErr == noErr);
    // increment our byte count
    recInfo->inStartingByte += (SInt64)size; // size is the number of bytes written
}

When we want to stop recording (probably invoked by some user action), simply set the running boolean to false and close the file somewhere in the AudioProcessor class, like this:

audioRecorder.running = false;
OSStatus audioErr = AudioFileClose(audioRecorder.recordFile);
assert (audioErr == noErr);

EDIT: the endianness of the samples needs to be big-endian for the file, so add the kAudioFormatFlagIsBigEndian bitmask flag to the ASBD in the source code found at the link in question.

For more information on this topic, the Apple documentation is a great resource, and I also recommend reading "Learning Core Audio" by Chris Adamson and Kevin Avila (I own a copy).

answered Jan 5, 2014 at 6:25

Use Audio Queue Services.

There is an example in the Apple documentation that does exactly what you are asking:

Audio Queue Services Programming Guide - Recording Audio

answered Dec 30, 2013 at 7:06