
I'm trying to use the websocket implementation of IBM's Speech to Text service. Currently I can't figure out how to send a .wav file over the connection. I know I need to turn it into a blob, but I don't know how. Right now I'm getting the following error:

You must pass a Node Buffer object to WebSocketConnec

-or-

Could not read a WAV header from a stream of 0 bytes

...depending on what I try to pass to the service. It should be noted that I am correctly sending the start message and getting the service into the listening state.


1 Answer

As of v1.0 (still in beta), the watson-developer-cloud npm module supports websockets:

npm install watson-developer-cloud@1.0.0-beta.2

To recognize a wav file:

var watson = require('watson-developer-cloud');
var fs = require('fs');

var speech_to_text = watson.speech_to_text({
  username: 'INSERT YOUR USERNAME FOR THE SERVICE HERE',
  password: 'INSERT YOUR PASSWORD FOR THE SERVICE HERE',
  version: 'v1',
});


// create the stream
var recognizeStream = speech_to_text.createRecognizeStream({ content_type: 'audio/wav' });

// pipe in some audio
fs.createReadStream('audio-to-recognize.wav').pipe(recognizeStream);

// and pipe out the transcription
recognizeStream.pipe(fs.createWriteStream('transcription.txt'));


// listen for 'data' events for just the final text
// listen for 'results' events to get the raw JSON with interim results, timings, etc.

recognizeStream.setEncoding('utf8'); // to get strings instead of Buffers from `data` events

['data', 'results', 'error', 'connection-close'].forEach(function(eventName) {
  recognizeStream.on(eventName, console.log.bind(console, eventName + ' event: '));
});

See more examples here.

Answered 2015-11-05T04:39:59.820