I'm having a hard time figuring out how to solve this problem, and I'm not sure whether I've set up my threads incorrectly or whether it's even possible to solve it this way.
This is an Android app that reads certain strings aloud as TTS (using the native Android TTS) at specific times. While the TTS is reading, the user should be able to intervene with commands such as "stop" or "pause". This recognition is done using the iSpeech API.
Our current solution runs the TTS as a thread so it outputs the right strings at the right times. Once the user presses the button to start speech recognition (using an Intent), the app performs the recognition and handles it perfectly, but the TTS no longer outputs anything. Logcat shows the following error:
11-28 02:18:57.072: W/TextToSpeech(16383): speak failed: not bound to TTS engine
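(For reference, Android logs this warning when speak() is called while the TextToSpeech instance is not connected to an engine, e.g. before onInit has completed or after shutdown() has been called. A minimal sketch of a guard, using a hypothetical ttsReady flag, would look like this:

    // Sketch only, not the app's code: track binding state set by onInit.
    private volatile boolean ttsReady = false;

    public void onInit(int status) {
        ttsReady = (status == TextToSpeech.SUCCESS); // engine is bound on SUCCESS
    }
    // ...then check ttsReady before each tts.speak(...) call.

This doesn't explain why the binding is lost, but it makes the failure visible at the call site.)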
I've considered making the speech recognition its own thread that pauses the TTS, but the problem is that the timer controlling the TTS would then fall out of sync with where it should be.
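(If recognition did run on its own thread, one way to avoid the drift might be a shared pause flag that the TTS thread blocks on between actions; since the delays below are relative sleeps rather than wall-clock deadlines, time spent paused would shift every later action uniformly instead of desynchronizing them. A rough sketch, assuming hypothetical paused/pauseLock fields:

    // Sketch only: the recognition callback toggles 'paused' and calls
    // pauseLock.notifyAll(); the TTS thread calls this between actions.
    private final Object pauseLock = new Object();
    private boolean paused = false;

    private void waitWhilePaused() throws InterruptedException {
        synchronized (pauseLock) {
            while (paused) {
                pauseLock.wait();
            }
        }
    }

I haven't tried this yet, which is part of what I'm asking about.)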
Any suggestions or help would be greatly appreciated.
The relevant code for the thread and the intent is below:
Thread
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Prevent device from sleeping mid build.
    getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
    setContentView(R.layout.activity_build_order);

    mPlayer = MediaPlayer.create(BuildOrderActivity.this, R.raw.bing);
    // params is a HashMap<String, String> field declared elsewhere; the
    // utterance ID is what makes onUtteranceCompleted fire after speak().
    params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, "stringId");

    tts = new TextToSpeech(BuildOrderActivity.this, new TextToSpeech.OnInitListener() {
        @SuppressWarnings("deprecation")
        public void onInit(int status) {
            if (status != TextToSpeech.ERROR) {
                tts.setLanguage(Locale.US);
                tts.setOnUtteranceCompletedListener(new OnUtteranceCompletedListener() {
                    public void onUtteranceCompleted(String utteranceId) {
                        // Play a chime after each spoken string.
                        mPlayer.start();
                    }
                });
            }
        }
    });

    buttonStart = (Button) findViewById(R.id.buttonStartBuild);
    buttonStart.setOnClickListener(new View.OnClickListener() {
        public void onClick(View v) {
            // Run the timed build readout off the UI thread.
            startBuild = new StartBuildRunnable();
            Thread t = new Thread(startBuild);
            t.start();
        }
    });
    ... // code continues onCreate setup for the view
}
public class StartBuildRunnable implements Runnable {
    public void run() {
        double delay;
        buildActions = parseBuildXMLAction();
        buildTimes = parseBuildXMLTime();

        say("Build has started");
        delayForNextAction((getSeconds(buildTimes.get(0)) * 1000));
        say(buildActions.get(0));

        // Speak each remaining action after the delay between its
        // timestamp and the previous one.
        for (int i = 1; i < buildActions.size(); i++) {
            delay = calcDelayUntilNextAction(buildTimes.get(i - 1), buildTimes.get(i));
            delayForNextAction((long) (delay * 1000));
            say(buildActions.get(i));
            //listViewBuildItems.setSelection(i);
        }
        say("Build has completed");
    }
}
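say() and delayForNextAction() are not shown above. For context, they are assumed to be thin wrappers roughly like the following (the utterance ID placed in params in onCreate is what makes onUtteranceCompleted fire):

    // Assumed helpers, not the app's actual code:
    private void say(String text) {
        // QUEUE_ADD keeps strings in order; params carries the utterance ID.
        tts.speak(text, TextToSpeech.QUEUE_ADD, params);
    }

    private void delayForNextAction(long millis) {
        try {
            Thread.sleep(millis); // safe here: run() executes off the UI thread
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }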
Intent
/**
 * Fire an intent to start the speech recognition activity.
 * @throws InvalidApiKeyException
 */
private void startRecognition() {
    setupFreeFormDictation();
    try {
        recognizer.startRecord(new SpeechRecognizerEvent() {
            @Override
            public void onRecordingComplete() {
                updateInfoMessage("Recording completed.");
            }

            @Override
            public void onRecognitionComplete(SpeechResult result) {
                Log.v(TAG, "Recognition complete");
                // TODO: Once something is recognized, tie it to an action and continue recognizing.
                // Currently recognizes something in the grammar and then stops listening until
                // the next button press.
                if (result != null) {
                    Log.d(TAG, "Text Result:" + result.getText());
                    Log.d(TAG, "Text Conf:" + result.getConfidence());
                    updateInfoMessage("Result: " + result.getText()
                            + "\n\nconfidence: " + result.getConfidence());
                } else {
                    Log.d(TAG, "Result is null...");
                }
            }

            @Override
            public void onRecordingCancelled() {
                updateInfoMessage("Recording cancelled.");
            }

            @Override
            public void onError(Exception exception) {
                updateInfoMessage("ERROR: " + exception.getMessage());
                exception.printStackTrace();
            }
        });
    } catch (BusyException e) {
        e.printStackTrace();
    } catch (NoNetworkException e) {
        e.printStackTrace();
    }
}
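updateInfoMessage() is also not shown; since the iSpeech callbacks are not guaranteed to arrive on the main thread, it is assumed to marshal the text onto the UI thread, along these lines:

    // Assumed helper; infoTextView is a hypothetical TextView field.
    private void updateInfoMessage(final String message) {
        runOnUiThread(new Runnable() {
            public void run() {
                infoTextView.setText(message);
            }
        });
    }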