
I am trying to integrate the Google Cloud Speech API into my demo app. The result I get looks like this:

    {
      results {
        alternatives {
          transcript: "hello"
        }
        stability: 0.01
      }
    }

The code that receives the response:

[[SpeechRecognitionService sharedInstance] streamAudioData:self.audioData
                                            withCompletion:^(StreamingRecognizeResponse *response, NSError *error) {
    if (error) {
        NSLog(@"ERROR: %@", error);
        _textView.text = [error localizedDescription];
        [self stopAudio:nil];
    } else if (response) {
        BOOL finished = NO;
        //NSLog(@"RESPONSE: %@", response.resultsArray);
        for (StreamingRecognitionResult *result in response.resultsArray) {
            NSLog(@"result : %@", result);
            //_textView.text = result.alternatives.transcript;
            if (result.isFinal) {
                finished = YES;
            }
        }
        if (finished) {
            [self stopAudio:nil];
        }
    }
}];

My problem is that the response I get is not valid JSON, so how do I get the value of the transcript key? Any help would be appreciated. Thanks.


2 Answers


For anyone looking for a solution to this problem:

for (StreamingRecognitionResult *result in response.resultsArray) {
    // alternativesArray holds SpeechRecognitionAlternative objects,
    // so transcript can be read as a typed property instead of via valueForKey:
    for (SpeechRecognitionAlternative *alternative in result.alternativesArray) {
        _textView.text = alternative.transcript;
    }
    if (result.isFinal) {
        finished = YES;
    }
}

This is what I did to continuously get the value of transcript.

Answered 2017-02-07T09:35:32.380

Here is code that solves your problem on Swift 4 / iOS 11.2.5, enjoy!

SpeechRecognitionService.sharedInstance.streamAudioData(audioData, completion: { [weak self] (response, error) in
    guard let strongSelf = self else { return }
    if let error = error {
        print("*** Streaming ASR ERROR: " + error.localizedDescription)
    } else if let response = response {
        for result in response.resultsArray {
            // resultsArray is an untyped NSMutableArray, so cast each element
            guard let recognitionResult = result as? StreamingRecognitionResult else {
                print("ERROR: unexpected element in resultsArray")
                continue
            }
            for case let alternative as SpeechRecognitionAlternative in recognitionResult.alternativesArray {
                if recognitionResult.isFinal {
                    print("*** FINAL ASR result: " + alternative.transcript)
                    strongSelf.stopGoogleStreamingASR(strongSelf)
                } else {
                    print("*** PARTIAL ASR result: " + alternative.transcript)
                }
            }
        }
    }
})
Answered 2018-01-30T09:15:47.453