I'm using OpenEars in my app to recognize a handful of words and sentences. I followed the basic offline speech recognition tutorial and ported it to Swift. This is the setup:
// Observe recognition events
self.openEarsEventsObserver = OEEventsObserver()
self.openEarsEventsObserver.delegate = self

// Build the language model and phonetic dictionary from my word list
let lmGenerator: OELanguageModelGenerator = OELanguageModelGenerator()
addWords() // populates the `words` array with the vocabulary to recognize
let name = "LanguageModelFileStarSaver"
lmGenerator.generateLanguageModelFromArray(words, withFilesNamed: name, forAcousticModelAtPath: OEAcousticModel.pathToModel("AcousticModelEnglish"))

// Keep the generated paths so they can be passed to the listener later
lmPath = lmGenerator.pathToSuccessfullyGeneratedLanguageModelWithRequestedName(name)
dicPath = lmGenerator.pathToSuccessfullyGeneratedDictionaryWithRequestedName(name)
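For context, after the setup above I start listening roughly like this (a minimal sketch based on the tutorial I ported from; the exact Swift-bridged method names may differ slightly depending on the OpenEars version):

// Activate Pocketsphinx and start the listening loop with the generated files
do {
    try OEPocketsphinxController.sharedInstance().setActive(true)
} catch {
    print("Could not activate OEPocketsphinxController: \(error)")
}
OEPocketsphinxController.sharedInstance().startListeningWithLanguageModelAtPath(lmPath,
    dictionaryAtPath: dicPath,
    acousticModelAtPath: OEAcousticModel.pathToModel("AcousticModelEnglish"),
    languageModelIsJSGF: false)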
Recognition works well in a quiet room, for both single words and whole sentences (I'd estimate a 90% hit rate). However, when I tried it in a quiet bar with only light background noise, the app struggled badly to recognize words. Is there any way to improve speech recognition when there is background noise?
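The only noise-related knob I have found so far is the vadThreshold property on OEPocketsphinxController; as far as I understand the docs, raising it makes the voice activity detection less sensitive to background sound. I tried something along these lines, but I'm not sure whether this is the right approach or what value to pick:

// Raise the voice-activity-detection threshold so quiet background
// noise is less likely to be treated as speech (the value is a guess)
OEPocketsphinxController.sharedInstance().vadThreshold = 3.5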