
I'm using react-native-voice to convert speech to text in a React Native app, but I can't concatenate the previously recognized text with the latest recognized text.

Every time a new sentence is recognized, it replaces the previously recognized text. What I want is for the previous sentence to be kept and the latest recognized text to be appended to it.

Here is the code.

react-native-voice code

async _startDefectDescriptionRecognition(e) {
    this.setState({
        recognized: '',
        started: '',
    });

    try {
        await Voice.start('en-GB'); // note: the standard locale tag is 'en-GB', not 'en-UK'
    } catch (e) {
        console.error(e);
    }
}
onSpeechStart(e) {
    this.setState({
        started: '√',
    });
}
onSpeechRecognized(e) {
    this.setState({
        recognized: '√',
    });
}

onSpeechResults(e) {
    this.setState({
        defectDescriptionSpeechResult: e.value,
    });
}
updateDefectDescription(defectSpeech) {
    this.state.conditionDefectDescription = defectSpeech
}

Render code

{this.state.defectDescriptionSpeechResult.map((result, index) => this.updateDefectDescription(result))}
<View style={{ width: '45%', }}>
    <TextInput
        placeholder="Defect Description"
        ref={input => { this.defectDescriptionClear = input }}
        multiline={true}
        onChangeText={(conditionDefectDescription) => this.setState({ conditionDefectDescription: conditionDefectDescription })}
        style={[styles.TextInputStyle, { height: 90, width: '100%', textAlign: 'center', fontSize: 15 }]}>
        {this.state.conditionDefectDescription}
    </TextInput>   
</View>
<TouchableOpacity style={{ paddingLeft: '0.5%', paddingRight: '1.5%' }}
    onPress={this._startDefectDescriptionRecognition.bind(this)}>    
    <Icons name='microphone-outline' style={this.state.demo1 == true ? { fontSize: 50, color: '#f12711' } : { fontSize: 50, color: '#23C3F0' }} />
</TouchableOpacity>

Output: the newly recognized text replaces the old text.

Expected output: the newly recognized text should be concatenated with the old text.

The full documentation for react-native-voice can be found here.

A working example of react-native-voice can be found here.


1 Answer


It's just a matter of rebinding the listeners at runtime, like this:

export class _VoiceRecorder extends React.PureComponent<IInner> {
  private startRecording = async () => {
    const { setIsRecording, t } = this.props;
      try {
        Voice.onSpeechResults = this.onSpeechResultsHandler;
        Voice.onSpeechError = this.onSpeechErrorHandler;
        Voice.onSpeechEnd = this.onSpeechEndHandler;

        // @TODO: Needs to be i18ned
        await Voice.start('de-DE');
        setIsRecording(true);
      } catch (e) {
        log.warn('Something went wrong voice recording', e);
      }
  };

  private stopRecording = async (e?: React.SyntheticEvent) => {
    const { setIsRecording } = this.props;
    e?.stopPropagation();

    try {
      await Voice.stop();
      await Voice.cancel();
    } catch (e) {
      // ignore: stopping when no recognition is in progress can throw on some platforms
    } finally {
      setIsRecording(false);
    }
  };

  private onSpeechResultsHandler = ({ value }: { value: string[] }) => {
    if (value[0]) {
      this.props.setTempResult(value[0]);
      if (Platform.OS === 'android') {
        this.onSpeechEndHandler();
      }
    }
  };

  private onSpeechEndHandler = () => {
    this.props.setIsRecording(false);
    this.props.onChange(this.props.tempResult);
    this.props.setTempResult('');
  };

  private onSpeechErrorHandler = (e: any) => {
    this.props.setIsRecording(false);
    this.props.setTempResult('');
  };

  public render() {
    // component UI code comes here
    return null;
  }
}
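To get the concatenation behaviour the question asks for, the text handed to `onChange` (or `setState`) has to be appended to the existing value instead of overwriting it. A minimal sketch of such a helper; the name `appendRecognizedText` is ours, not part of the react-native-voice API:

```javascript
// Hypothetical helper: append the latest recognition result to the
// text accumulated so far, joined by a single space.
function appendRecognizedText(previous, latest) {
  if (!latest) return previous;   // ignore empty results
  if (!previous) return latest;   // first sentence: nothing to join
  return previous + ' ' + latest;
}

// In a speech-end handler the component could then do something like:
//   this.setState(prev => ({
//     conditionDefectDescription:
//       appendRecognizedText(prev.conditionDefectDescription, latestResult),
//   }));
```

Using the functional form of `setState` matters here: it reads the previous value at update time, so two quick recognition results cannot clobber each other.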
answered 2020-04-29T05:30:38.110