
I have been researching how to play a beep on the iPhone at a frequency and decibel level that I supply.

Links I have referred to:

http://developer.apple.com/library/ios/#samplecode/MusicCube/Introduction/Intro.html#//apple_ref/doc/uid/DTS40008978

http://www.politepix.com/2010/06/18/decibel-metering-from-an-iphone-audio-unit/

http://atastypixel.com/blog/using-remoteio-audio-unit/

as well as the questions on how to play a sound of a particular frequency, and on the "framework not found AudioUnit" error.

I am also using Flite for text-to-speech in my application.

What I would like to know is whether I can use Flite to play beeps at a given frequency and decibel level on the iPhone.

I understand that Flite creates an audio file from its input (which only concerns pitch, variance, speed, and the given string) and plays it through an audio player once it has been created.

But it provides no method to customize the frequency and decibel level.

So can anyone suggest a good way to do this on the iPhone?

Any help with this problem is appreciated.

Thanks


1 Answer


This class lets you play a beep at a given frequency and with a given amplitude. It uses AudioQueues from AudioToolbox.framework. It is only a sketch, and many things should be improved, but the mechanism for creating the signal works.

As you can see from the @interface:

#import <AudioToolbox/AudioToolbox.h>
#define TONE_SAMPLERATE 44100.

@interface Tone : NSObject {
    AudioQueueRef queue;
    AudioQueueBufferRef buffer;
    BOOL rebuildBuffer;
}
@property (nonatomic, assign) NSUInteger frequency;
@property (nonatomic, assign) CGFloat dB;

- (void)play;
- (void)pause;
@end


@implementation Tone
@synthesize dB=_dB,frequency=_frequency;

void handleBuffer(void *inUserData,
                  AudioQueueRef inAQ,
                  AudioQueueBufferRef inBuffer);

#pragma mark - Initialization and deallocation -

- (id)init
{
    if ((self=[super init])) {

        _dB=0.;
        _frequency=440;
        rebuildBuffer=YES;

        // TO DO: handle AudioQueueXYZ's failures!!

        // create a descriptor containing a LPCM, mono, float format
        AudioStreamBasicDescription desc = {0}; // zero-initialized, so mReserved is 0

        desc.mSampleRate=TONE_SAMPLERATE;
        desc.mFormatID=kAudioFormatLinearPCM;
        desc.mFormatFlags=kLinearPCMFormatFlagIsFloat;
        desc.mBytesPerPacket=sizeof(float);
        desc.mFramesPerPacket=1;
        desc.mBytesPerFrame=sizeof(float);
        desc.mChannelsPerFrame=1;
        desc.mBitsPerChannel=8*sizeof(float);

        // create a new queue
        AudioQueueNewOutput(&desc,
                            &handleBuffer,
                            self,
                            CFRunLoopGetCurrent(),
                            kCFRunLoopCommonModes,
                            0,
                            &queue);

        // and its buffer, ready to hold 1 second of data
        AudioQueueAllocateBuffer(queue,
                                 sizeof(float)*TONE_SAMPLERATE,
                                 &buffer);

        // create the buffer and enqueue it
        handleBuffer(self, queue, buffer);

    }
    return self;
}

- (void)dealloc
{
    AudioQueueStop(queue, YES);
    AudioQueueFreeBuffer(queue, buffer);
    AudioQueueDispose(queue, YES);

    [super dealloc];
}

#pragma mark - Main function -

void handleBuffer(void *inUserData,
                AudioQueueRef inAQ,
                AudioQueueBufferRef inBuffer) {

    // this function takes care of building the buffer and enqueuing it.

    // cast inUserData type to Tone
    Tone *tone=(Tone *)inUserData;

    // check if the buffer must be rebuilt
    if (tone->rebuildBuffer) {

        // precompute some useful qtys
        float *data=inBuffer->mAudioData;
        NSUInteger max=inBuffer->mAudioDataBytesCapacity/sizeof(float);

        // multiplying the argument by 2pi changes the period of the cosine
        //  function to 1s (instead of 2pi). then we must divide by the sample
        //  rate to get TONE_SAMPLERATE samples in one period.
        CGFloat unit=2.*M_PI/TONE_SAMPLERATE;
        // this is the amplitude converted from dB to a linear scale
        CGFloat amplitude=pow(10., tone.dB*.05);

        // loop and simply set data[i] to the value of cos(...)
        for (NSUInteger i=0; i<max; ++i)
            data[i]=(float)(amplitude*cos(unit*(CGFloat)(tone.frequency*i)));

        // inform the queue that we have filled the buffer
        inBuffer->mAudioDataByteSize=sizeof(float)*max;

        // and set flag
        tone->rebuildBuffer=NO;
    }

    // reenqueue the buffer
    AudioQueueEnqueueBuffer(inAQ,
                            inBuffer,
                            0,
                            NULL);

    /* TO DO: the transition between two adjacent buffers (the same one actually)
              generates a "tick", even if the adjacent buffers represent a continuous signal.
              maybe using two buffers instead of one would fix it.
     */
}

#pragma mark - Properties and methods -

- (void)play
{
    // generate an AudioTimeStamp with "0" simply!
    //  (copied from FillOutAudioTimeStampWithSampleTime)

    AudioTimeStamp time;

    time.mSampleTime=0.;
    time.mRateScalar=0.;
    time.mWordClockTime=0.;
    memset(&time.mSMPTETime, 0, sizeof(SMPTETime));
    time.mFlags = kAudioTimeStampSampleTimeValid;

    // TO DO: maybe it could be useful to check AudioQueueStart's return value
    AudioQueueStart(queue, &time);
}

- (void)pause
{
    // TO DO: maybe it could be useful to check AudioQueuePause's return value
    AudioQueuePause(queue);
}

- (void)setFrequency:(NSUInteger)frequency
{
    if (_frequency!=frequency) {
        _frequency=frequency;

        // we need to update the buffer (as soon as it stops playing)
        rebuildBuffer=YES;
    }
}

- (void)setDB:(CGFloat)dB
{
    if (dB!=_dB) {
        _dB=dB;

        // we need to update the buffer (as soon as it stops playing)
        rebuildBuffer=YES;
    }
}

@end
  • The class generates a cosine waveform oscillating at the given integer frequency (amplitude*cos(2*pi*frequency*t)); the whole work is done in void handleBuffer(...), using an AudioQueue with a linear PCM, mono, float @44.1kHz format. To change the signal shape you can change that line. For example, the following code produces a square wave:

    float x = fmodf(unit*(CGFloat)(tone.frequency*i), 2 * M_PI);
    data[i] = amplitude * (x > M_PI ? -1.0 : 1.0);
    
  • For floating-point frequencies, you should consider that there is not necessarily an integer number of oscillations in one second of audio data, so the signal as represented is discontinuous at the junction between two buffers and produces a strange "tick". For example, you could use fewer samples so that the junction falls at the end of a period of the signal.

  • As Paul R pointed out, you should first calibrate the hardware to get a reliable conversion between the value you set in your implementation and the sound produced by the device. Actually, the floating-point samples generated in this code range from -1 to 1, so I simply converted the amplitude value to dB (20*log_10(amplitude)).
  • See the comments in the code for other details and "known limitations" of the implementation (all those "TO DO"s). Apple documents the functions used in detail in its references.
Answered 2012-11-11T02:13:52.457