I am implementing speech-to-text in my app with OpenEars. I am also using the Rejecto plugin for better recognition and RapidEars for faster results. The goal is to detect both phrases and single words, for example:
lmGenerator = [[LanguageModelGenerator alloc] init];
NSArray *words = [NSArray arrayWithObjects:@"REBETANDEAL",@"NEWBET",@"REEEBET", nil];
NSString *name = @"NameIWantForMyLanguageModelFiles";
NSError *err = [lmGenerator generateRejectingLanguageModelFromArray:words
                             withFilesNamed:name
                             withOptionalExclusions:nil
                             usingVowelsOnly:FALSE
                             withWeight:nil
                             forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" to create a Spanish Rejecto model.
// Change "AcousticModelEnglish" to "AcousticModelSpanish" to create a Spanish language model instead of an English one.
NSDictionary *languageGeneratorResults = nil;
NSString *lmPath = nil;
NSString *dicPath = nil;
if ([err code] == noErr) {
    // On success, the generator returns an NSError with code noErr whose
    // userInfo dictionary contains the paths to the generated files.
    languageGeneratorResults = [err userInfo];
    lmPath = [languageGeneratorResults objectForKey:@"LMPath"];
    dicPath = [languageGeneratorResults objectForKey:@"DictionaryPath"];
} else {
    NSLog(@"Error: %@", [err localizedDescription]);
}
[self.pocketsphinxController setRapidEarsToVerbose:FALSE]; // This defaults to FALSE but will give a lot of debug readout if set TRUE
[self.pocketsphinxController setRapidEarsAccuracy:10]; // This defaults to 20, maximum accuracy, but can be set as low as 1 to save CPU
[self.pocketsphinxController setFinalizeHypothesis:TRUE]; // This defaults to TRUE and will return a final hypothesis, but can be turned off to save a little CPU and will then return no final hypothesis; only partial "live" hypotheses.
[self.pocketsphinxController setFasterPartials:TRUE]; // This will give faster rapid recognition with less accuracy. This is what you want in most cases since more accuracy for partial hypotheses will have a delay.
[self.pocketsphinxController setFasterFinals:FALSE]; // This will give an accurate final recognition. You can have earlier final recognitions with less accuracy as well by setting this to TRUE.
[self.pocketsphinxController startRealtimeListeningWithLanguageModelAtPath:lmPath dictionaryAtPath:dicPath acousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]]; // Starts the rapid recognition loop. Change "AcousticModelEnglish" to "AcousticModelSpanish" in order to perform Spanish language recognition.
[self.openEarsEventsObserver setDelegate:self];
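For reference, I receive the results through the OpenEarsEventsObserver delegate. Below is a stripped-down sketch of my callbacks; the method names are my assumption of the standard OpenEars/RapidEars delegate callbacks, and the bodies just log the hypothesis:

// Regular OpenEars callback for a final hypothesis.
- (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID {
    NSLog(@"Final hypothesis: %@ (score %@)", hypothesis, recognitionScore);
}

// RapidEars callback for partial ("live") hypotheses while speech is still in progress.
- (void) rapidEarsDidReceiveLiveSpeechHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore {
    NSLog(@"Live hypothesis: %@", hypothesis);
}

// RapidEars callback for the hypothesis delivered once speech has finished.
- (void) rapidEarsDidReceiveFinishedSpeechHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore {
    NSLog(@"Finished hypothesis: %@", hypothesis);
}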
Most of the time the results are good, but sometimes the hypothesis mixes words from separate string objects. For example, when I pass the words array @[@"ME AND YOU", @"YOU", @"ME"], the output can be "YOU ME ME ME AND". I don't want it to recognize only part of a phrase. Any ideas?
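In other words, I only want to react when the hypothesis exactly matches one of the strings I passed in. The only workaround I can think of is post-filtering the hypotheses myself in the delegate callbacks, roughly like the hypothetical helper below (handleHypothesis: is not an OpenEars method, just something I would call with each hypothesis), but I was hoping the language model could be set up so this isn't necessary:

// Hypothetical post-filter: accept a hypothesis only if it exactly matches
// one of the phrases/words the language model was generated from.
- (void) handleHypothesis:(NSString *)hypothesis {
    NSArray *allowedPhrases = @[@"ME AND YOU", @"YOU", @"ME"]; // same array passed to the generator
    NSString *trimmed = [hypothesis stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
    if ([allowedPhrases containsObject:trimmed]) {
        NSLog(@"Accepted command: %@", trimmed);
    } else {
        NSLog(@"Ignored mixed/partial hypothesis: %@", trimmed); // e.g. "YOU ME ME ME AND"
    }
}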