I'm using Audio Units to do some custom audio post-processing. I have two files that I'm mixing together (links below), but I'm getting some strange noise in the output. What am I doing wrong?

I've verified that, before this step, the two files (workTrack1 and workTrack2) are in a good state and sound fine. No errors come up at any point in the process, either.

Buffer processing code:
- (BOOL)mixBuffersWithBuffer1:(const int16_t *)buffer1 buffer2:(const int16_t *)buffer2 outBuffer:(int16_t *)mixbuffer outBufferNumSamples:(int)mixbufferNumSamples {
    BOOL clipping = NO;
    for (int i = 0; i < mixbufferNumSamples; i++) {
        int32_t s1 = buffer1[i];
        int32_t s2 = buffer2[i];
        int32_t mixed = s1 + s2;
        if ((mixed < -32768) || (mixed > 32767)) {
            clipping = YES; // don't break here because we don't want to lose data, only to warn the user
        }
        mixbuffer[i] = (int16_t)mixed;
    }
    return clipping;
}
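For comparison, the usual way to keep an out-of-range sum from wrapping when it is cast back to int16_t is to clamp it first. Below is only a sketch of that clamping variant (the method above deliberately just flags clipping and writes the raw cast), and the method name is made up; I'm not claiming this is the cause of the noise:

// Sketch: same mix loop as above, but hard-clamps the 32-bit sum to the int16_t
// range before the cast, so an overflowing sum saturates instead of wrapping.
// Intended to live in the same class as -mixBuffersWithBuffer1:...; name is hypothetical.
- (BOOL)mixWithClampingBuffer1:(const int16_t *)buffer1
                       buffer2:(const int16_t *)buffer2
                     outBuffer:(int16_t *)mixbuffer
           outBufferNumSamples:(int)mixbufferNumSamples {
    BOOL clipping = NO;
    for (int i = 0; i < mixbufferNumSamples; i++) {
        int32_t mixed = (int32_t)buffer1[i] + (int32_t)buffer2[i];
        if (mixed > INT16_MAX)      { mixed = INT16_MAX; clipping = YES; }
        else if (mixed < INT16_MIN) { mixed = INT16_MIN; clipping = YES; }
        mixbuffer[i] = (int16_t)mixed;
    }
    return clipping;
}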
Mixing code:
////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////// PHASE 4 ////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// In phase 4, open workTrack1 and workTrack2 for reading,
// mix together, and write out to outfile.

// Open the outfile for writing -- this will erase the infile if they are the same,
// but that's OK because we are done with it.
err = [self openExtAudioFileForWriting:outPath audioFileRefPtr:&outputAudioFileRef numChannels:numChannels];
if (err) { [self cleanupInBuffer1:inBuffer1 inBuffer2:inBuffer2 outBuffer:outBuffer err:err]; return NO; }

// Set up vars
framesRead = 0;
totalFrames = [self totalFrames:mixAudioFile1Ref]; // the long one
NSLog(@"Mix-down phase, %d frames (%0.2f secs)", totalFrames, totalFrames / RECORD_SAMPLES_PER_SECOND);

moreToProcess = YES;
while (moreToProcess) {

    conversionBuffer1.mBuffers[0].mDataByteSize = LOOPER_BUFFER_SIZE;
    conversionBuffer2.mBuffers[0].mDataByteSize = LOOPER_BUFFER_SIZE;

    UInt32 frameCount1 = framesInBuffer;
    UInt32 frameCount2 = framesInBuffer;

    // Read a buffer of input samples up to AND INCLUDING totalFrames
    int numFramesRemaining = totalFrames - framesRead; // TODO: see if we are off by one here; might have to add 1

    if (numFramesRemaining == 0) {
        moreToProcess = NO; // If no frames are to be read, then this phase is finished
    } else {
        if (numFramesRemaining < frameCount1) { // see if we are near the end
            frameCount1 = numFramesRemaining;
            frameCount2 = numFramesRemaining;
            conversionBuffer1.mBuffers[0].mDataByteSize = (frameCount1 * bytesPerFrame);
            conversionBuffer2.mBuffers[0].mDataByteSize = (frameCount2 * bytesPerFrame);
        }

        NSLog(@"Attempting to read %d frames from mixAudioFile1Ref", (int)frameCount1);
        err = ExtAudioFileRead(mixAudioFile1Ref, &frameCount1, &conversionBuffer1);
        if (err) { [self cleanupInBuffer1:inBuffer1 inBuffer2:inBuffer2 outBuffer:outBuffer err:err]; return NO; }

        NSLog(@"Attempting to read %d frames from mixAudioFile2Ref", (int)frameCount2);
        err = ExtAudioFileRead(mixAudioFile2Ref, &frameCount2, &conversionBuffer2);
        if (err) { [self cleanupInBuffer1:inBuffer1 inBuffer2:inBuffer2 outBuffer:outBuffer err:err]; return NO; }

        NSLog(@"Read %d frames from mixAudioFile1Ref in mix-down phase", (int)frameCount1);
        NSLog(@"Read %d frames from mixAudioFile2Ref in mix-down phase", (int)frameCount2);

        // If no frames were returned, phase is finished
        if (frameCount1 == 0) {
            moreToProcess = NO;
        } else { // Process pcm data

            // If buffer2 was not filled, fill with zeros
            if (frameCount2 < frameCount1) {
                bzero(inBuffer2 + frameCount2, (frameCount1 - frameCount2));
                frameCount2 = frameCount1;
            }

            const int numSamples = (frameCount1 * bytesPerFrame) / sizeof(int16_t);

            if ([self mixBuffersWithBuffer1:(const int16_t *)inBuffer1
                                    buffer2:(const int16_t *)inBuffer2
                                  outBuffer:(int16_t *)outBuffer
                        outBufferNumSamples:numSamples]) {
                NSLog(@"Clipping");
            }

            // Write pcm data to the main output file
            conversionOutBuffer.mBuffers[0].mDataByteSize = (frameCount1 * bytesPerFrame);
            err = ExtAudioFileWrite(outputAudioFileRef, frameCount1, &conversionOutBuffer);
            framesRead += frameCount1;

        } // frame count
    } // else

    if (err) {
        moreToProcess = NO;
    }

} // while moreToProcess

// Check for errors
TTDASSERT(framesRead == totalFrames);
if (err) {
    if (error) *error = [NSError errorWithDomain:kUAAudioSelfCrossFaderErrorDomain
                                            code:UAAudioSelfCrossFaderErrorTypeMixDown
                                        userInfo:[NSDictionary dictionaryWithObjectsAndKeys:
                                                  [NSNumber numberWithInt:err], @"Underlying Error Code",
                                                  [self commonExtAudioResultCode:err], @"Underlying Error Name", nil]];
    [self cleanupInBuffer1:inBuffer1 inBuffer2:inBuffer2 outBuffer:outBuffer err:err];
    return NO;
}

NSLog(@"Done with mix-down phase");
Assume that mixAudioFile1Ref is always longer than mixAudioFile2Ref. Once mixAudioFile2Ref runs out of bytes, outputAudioFileRef should sound just like mixAudioFile1Ref. The expected sound is a fade-in mixed with a fade-out at the start, producing a self cross-fade when the track loops (see the sketch below). Please listen to the output, take a look at the code, and let me know where I'm going wrong.
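Put differently, once the shorter file is exhausted and its buffer is zero-padded, each remaining mixed sample should reduce to the corresponding sample from the longer file. A minimal standalone check of that expectation (plain C with made-up sample values, not part of the project code):

#include <assert.h>
#include <stdint.h>

int main(void) {
    // After mixAudioFile2Ref runs out, its buffer is (supposed to be) all zeros,
    // so the mixed sample should equal the mixAudioFile1Ref sample unchanged.
    int16_t fromFile1 = -12345;   // arbitrary sample from the longer file
    int16_t fromFile2 = 0;        // zero padding where the shorter file has ended
    int32_t mixed = (int32_t)fromFile1 + (int32_t)fromFile2;
    assert((int16_t)mixed == fromFile1);
    return 0;
}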
Source audio: http://cl.ly/2g2F2A3k1r3S36210V23
Resulting audio: http://cl.ly/3q2w3S3Y0x0M3i2a1W3v