
I'm doing some custom audio post-processing with Audio Units. I have two files that I'm mixing together (links below), but I'm getting some strange noise in the output. What am I doing wrong?

I've verified that the two files (workTrack1 and workTrack2) are in a good state before this step and sound fine. No errors come up during the process either.

Buffer processing code

- (BOOL)mixBuffersWithBuffer1:(const int16_t *)buffer1 buffer2:(const int16_t *)buffer2 outBuffer:(int16_t *)mixbuffer outBufferNumSamples:(int)mixbufferNumSamples {
    BOOL clipping = NO;

    for (int i = 0 ; i < mixbufferNumSamples; i++) {
        int32_t s1 = buffer1[i];
        int32_t s2 = buffer2[i];
        int32_t mixed = s1 + s2;

        if ((mixed < -32768) || (mixed > 32767)) {
            clipping = YES; // don't break here because we don't want to lose data, only warn the user
        }

        mixbuffer[i] = (int16_t) mixed;
    }
    return clipping;
}

Mixing code

////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////      PHASE 4      ////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// In phase 4, open workTrack1 and workTrack2 for reading,
// mix together, and write out to outfile.

// open the outfile for writing -- this will erase the infile if they are the same, but it's ok because we are done with it
err = [self openExtAudioFileForWriting:outPath audioFileRefPtr:&outputAudioFileRef numChannels:numChannels];
if (err) { [self cleanupInBuffer1:inBuffer1 inBuffer2:inBuffer2 outBuffer:outBuffer err:err]; return NO; }

// setup vars
framesRead = 0;
totalFrames = [self totalFrames:mixAudioFile1Ref]; // the long one.
NSLog(@"Mix-down phase, %d frames (%0.2f secs)", totalFrames, totalFrames / RECORD_SAMPLES_PER_SECOND);

moreToProcess = YES;
while (moreToProcess) {

    conversionBuffer1.mBuffers[0].mDataByteSize = LOOPER_BUFFER_SIZE;
    conversionBuffer2.mBuffers[0].mDataByteSize = LOOPER_BUFFER_SIZE;

    UInt32 frameCount1 = framesInBuffer;
    UInt32 frameCount2 = framesInBuffer;

    // Read a buffer of input samples up to AND INCLUDING totalFrames
    int numFramesRemaining = totalFrames - framesRead; // Todo see if we are off by 1 here.  Might have to add 1
    if (numFramesRemaining == 0) {
        moreToProcess = NO; // If no frames are to be read, then this phase is finished

    } else {
        if (numFramesRemaining < frameCount1) { // see if we are near the end
            frameCount1 = numFramesRemaining;
            frameCount2 = numFramesRemaining;
            conversionBuffer1.mBuffers[0].mDataByteSize = (frameCount1 * bytesPerFrame);
            conversionBuffer2.mBuffers[0].mDataByteSize = (frameCount2 * bytesPerFrame);
        }

        NSbugLog(@"Attempting to read %d frames from mixAudioFile1Ref", (int)frameCount1);
        err = ExtAudioFileRead(mixAudioFile1Ref, &frameCount1, &conversionBuffer1);
        if (err) { [self cleanupInBuffer1:inBuffer1 inBuffer2:inBuffer2 outBuffer:outBuffer err:err]; return NO; }

        NSLog(@"Attempting to read %d frames from mixAudioFile2Ref", (int)frameCount2);
        err = ExtAudioFileRead(mixAudioFile2Ref, &frameCount2, &conversionBuffer2);
        if (err) { [self cleanupInBuffer1:inBuffer1 inBuffer2:inBuffer2 outBuffer:outBuffer err:err]; return NO; }

        NSLog(@"Read %d frames from mixAudioFile1Ref in mix-down phase", (int)frameCount1);
        NSLog(@"Read %d frames from mixAudioFile2Ref in mix-down phase", (int)frameCount2);

        // If no frames were returned, phase is finished
        if (frameCount1 == 0) {
            moreToProcess = NO;

        } else { // Process pcm data

            // if buffer2 was not filled, fill with zeros
            if (frameCount2 < frameCount1) {
                bzero(inBuffer2 + frameCount2, (frameCount1 - frameCount2));
                frameCount2 = frameCount1;
            }

            const int numSamples = (frameCount1 * bytesPerFrame) / sizeof(int16_t);

            if ([self mixBuffersWithBuffer1:(const int16_t *)inBuffer1
                                    buffer2:(const int16_t *)inBuffer2
                                  outBuffer:(int16_t *)outBuffer
                        outBufferNumSamples:numSamples]) {
                NSLog(@"Clipping");
            }
            // Write pcm data to the main output file
            conversionOutBuffer.mBuffers[0].mDataByteSize = (frameCount1 * bytesPerFrame);
            err = ExtAudioFileWrite(outputAudioFileRef, frameCount1, &conversionOutBuffer);

            framesRead += frameCount1;
        } // frame count
    } // else

    if (err) {
        moreToProcess = NO;
    }
} // while moreToProcess

// Check for errors
TTDASSERT(framesRead == totalFrames);
if (err) {
    if (error) *error = [NSError errorWithDomain:kUAAudioSelfCrossFaderErrorDomain
                                            code:UAAudioSelfCrossFaderErrorTypeMixDown
                                        userInfo:[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:err],@"Underlying Error Code",[self commonExtAudioResultCode:err],@"Underlying Error Name",nil]];
    [self cleanupInBuffer1:inBuffer1 inBuffer2:inBuffer2 outBuffer:outBuffer err:err];
    return NO;
}
NSLog(@"Done with mix-down phase");


Assumptions

  • mixAudioFile1Ref is always longer than mixAudioFile2Ref
  • Once mixAudioFile2Ref runs out of bytes, outputAudioFileRef should sound the same as mixAudioFile1Ref alone

The expected result should mix the fade-in with the fade-out at the start, producing a self cross-fade as the track loops. Please listen to the output, look over the code, and let me know where I'm going wrong.

Source sound: http://cl.ly/2g2F2A3k1r3S36210V23
Resulting sound: http://cl.ly/3q2w3S3Y0x0M3i2a1W3v


2 Answers


It turns out there were two problems here.

Buffer processing code

int32_t mixed = s1 + s2; causes clipping. A better approach is to divide by the number of channels being mixed, i.e. int32_t mixed = (s1 + s2) / 2;, and then normalize in a separate pass later.
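
As a rough sketch (not the poster's final code), the averaged version of the mix loop from the question might look like this; the later normalization pass is not shown:

- (BOOL)mixBuffersWithBuffer1:(const int16_t *)buffer1 buffer2:(const int16_t *)buffer2 outBuffer:(int16_t *)mixbuffer outBufferNumSamples:(int)mixbufferNumSamples {
    BOOL clipping = NO;

    for (int i = 0; i < mixbufferNumSamples; i++) {
        int32_t s1 = buffer1[i];
        int32_t s2 = buffer2[i];
        // Averaging two 16-bit samples keeps the result inside the int16_t range,
        // so the cast below can no longer wrap around.
        int32_t mixed = (s1 + s2) / 2;

        if ((mixed < -32768) || (mixed > 32767)) {
            clipping = YES; // cannot happen with the average; kept only as a safety check
        }

        mixbuffer[i] = (int16_t)mixed;
    }
    return clipping;
}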

Frames != bytes. When zeroing out the second track's buffer after its sound ran out, I was mistakenly passing the offset and length in frames instead of bytes. That put garbage into the buffer, which is the periodic noise you hear. Easy to fix:

if (frameCount2 < frameCount1) {
    bzero(inBuffer2 + (frameCount2 * bytesPerFrame), (frameCount1 - frameCount2) * bytesPerFrame);
    frameCount2 = frameCount1;
}
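
For reference, that frame-to-byte scaling only holds if bytesPerFrame matches the client data format. A minimal sketch of a 16-bit interleaved PCM client format, assuming the numChannels and RECORD_SAMPLES_PER_SECOND values from the question (this setup is not shown in the original post):

AudioStreamBasicDescription clientFormat = {0};
clientFormat.mSampleRate       = RECORD_SAMPLES_PER_SECOND;
clientFormat.mFormatID         = kAudioFormatLinearPCM;
clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
clientFormat.mBitsPerChannel   = 16;
clientFormat.mChannelsPerFrame = numChannels;
clientFormat.mBytesPerFrame    = sizeof(int16_t) * numChannels; // 2 bytes per frame mono, 4 stereo
clientFormat.mFramesPerPacket  = 1;
clientFormat.mBytesPerPacket   = clientFormat.mBytesPerFrame;
// With this format, frameCount * bytesPerFrame is the correct byte count for the bzero above.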

Now the sample sounds great: http://cl.ly/1E2q1L441s2b3e2X2z0J

answered 2010-11-11T23:59:07.880

The answer you posted looks good; I can only see one small problem with it. Your clipping solution of dividing by 2 helps, but it is equivalent to applying a 50% gain reduction, which is not the same as normalization. Normalization is the process of looking at the entire audio file, finding the highest peak, and applying a single gain change so that this peak hits a given level (usually 0.0 dB). The consequence of the divide-by-2 approach is that in the normal (i.e., non-clipping) case the output signal will be quite low and will need to be boosted again.
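
A minimal sketch of what that separate normalization pass could look like over an in-memory block of mixed 16-bit samples (this helper is hypothetical, not part of the posted code):

#include <math.h>
#include <stdlib.h>

// Pass 1: find the peak magnitude. Pass 2: apply one uniform gain so that peak reaches full scale.
static void normalizeSamples(int16_t *samples, int numSamples) {
    int32_t peak = 0;
    for (int i = 0; i < numSamples; i++) {
        int32_t magnitude = abs((int32_t)samples[i]);
        if (magnitude > peak) peak = magnitude;
    }
    if (peak == 0) return; // all silence, nothing to scale

    float gain = 32767.0f / (float)peak; // boost (or cut) so the loudest sample hits 0 dBFS
    for (int i = 0; i < numSamples; i++) {
        samples[i] = (int16_t)lrintf((float)samples[i] * gain);
    }
}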

During your mix-down you are undoubtedly running into overflow, which causes the distortion, because the value wraps around and makes the signal jump. What you want to do instead is apply a technique known as a brick-wall limiter, which basically puts a hard ceiling on any sample that would clip. The simplest way to do this is:

int32_t mixed = s1 + s2;
if(mixed >= 32767) {
  mixed = 32767;
}
else if(mixed <= -32767) {
  mixed = -32767;
}

The result of this technique is that you will hear a bit of distortion around the samples that get clipped, but the sound won't be completely destroyed as it is with integer overflow. The distortion is there, but it doesn't ruin the listening experience.
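
Folding that limiter into the mix method from the question might look roughly like this (a sketch under the same method signature; the clipping flag is kept so the caller can still log it):

- (BOOL)mixBuffersWithBuffer1:(const int16_t *)buffer1 buffer2:(const int16_t *)buffer2 outBuffer:(int16_t *)mixbuffer outBufferNumSamples:(int)mixbufferNumSamples {
    BOOL clipping = NO;

    for (int i = 0; i < mixbufferNumSamples; i++) {
        int32_t mixed = (int32_t)buffer1[i] + (int32_t)buffer2[i];

        // Brick-wall limit: clamp the sum instead of letting the int16_t cast wrap around.
        if (mixed >= 32767) {
            mixed = 32767;
            clipping = YES;
        } else if (mixed <= -32767) {
            mixed = -32767;
            clipping = YES;
        }

        mixbuffer[i] = (int16_t)mixed;
    }
    return clipping;
}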

answered 2010-11-12T08:44:18.040