I am using the IBM Watson Natural Language Understanding service for text analysis via the Node SDK. I have an array of roughly 20 to 30 sentences. When I loop over the array and call the NLU analyze API for each sentence, I get Error: Too Many Requests.
There is no API available for batch text analysis, and I cannot find any rate limit mentioned in the NLU service documentation. I am on the Standard plan.
Is there any way to get rid of this error? I want to analyze an array of sentences.
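The only workaround I can think of so far is to stop firing all the requests at once and instead send them one at a time with a short pause in between. Below is a minimal sketch of that idea (not tested); the analyzeSegment / analyzeAll helpers are my own, and the 500 ms delay is just a guess, since no limit is documented:

// Sketch of a serialized approach (not tested): one analyze call
// at a time, with a pause between calls. The 500 ms delay is a guess.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const analyzeSegment = (segment) =>
    new Promise((resolve, reject) => {
        const params = {
            text: segment,
            features: {
                keywords: { sentiment: false },
                sentiment: { document: true }
            }
        };
        NLUService.analyze(params, (err, data) => {
            if (err) return reject(err);
            resolve(data);
        });
    });

async function analyzeAll(segmentList) {
    const results = [];
    for (const segment of segmentList) {
        results.push(await analyzeSegment(segment)); // wait for one call to finish
        await delay(500);                            // then pause before the next one
    }
    return results;
}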
Error log:
Error: Too Many Requests
at Request._callback (/home/vcap/app/node_modules/watson-developer-cloud/lib/requestwrapper.js:99:21)
at Request.self.callback (/home/vcap/app/node_modules/request/request.js:186:22)
at emitTwo (events.js:126:13)
at Request.emit (events.js:214:7)
at Request.<anonymous> (/home/vcap/app/node_modules/request/request.js:1163:10)
at emitOne (events.js:116:13)
at Request.emit (events.js:211:7)
at IncomingMessage.<anonymous> (/home/vcap/app/node_modules/request/request.js:1085:12)
at Object.onceWrapper (events.js:313:30)
at emitNone (events.js:111:20)
at IncomingMessage.emit (events.js:208:7)
at endReadableNT (_stream_readable.js:1056:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
Update:
Adding the source code:
segmentList.forEach((segment) => {
    var params = {
        text: segment,
        features: {
            keywords: {
                sentiment: false
            },
            sentiment: {
                document: true
            }
        }
    };
    logger.info("Analyse Segment : " + segment);
    return new Promise((resolve) => {
        NLUService.analyze(params, (err, data) => {
            if (err != null) {
                logger.error(err.stack);
            }
            ctr += 1;
            // Inserting data.sentiment.document.score to Database
            .
            .
            .
            if (ctr == callSegments.length)
                resolve();
        });
    }).then(() => resolve());
});
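As far as I can tell, the forEach above kicks off all 20 to 30 analyze calls at once, because analyze returns immediately and only its callback waits, so I suspect that burst is what triggers the Too Many Requests response. If throttling alone turns out not to be enough, I was also thinking about retrying on the rate-limit error with a growing delay. A rough sketch of that idea is below (the check on err.code / err.message is an assumption; I have not confirmed how the SDK reports a 429):

// Rough retry-with-backoff sketch (not tested).
// Assumption: a rate-limit error shows up as err.code === 429 or a
// "Too Many Requests" message -- the actual error shape is unconfirmed.
const analyzeWithRetry = (params, attempt = 1) =>
    new Promise((resolve, reject) => {
        NLUService.analyze(params, (err, data) => {
            if (!err) return resolve(data);
            const rateLimited =
                err.code === 429 || /Too Many Requests/i.test(err.message || '');
            if (rateLimited && attempt < 5) {
                const wait = 500 * Math.pow(2, attempt); // exponential backoff
                return setTimeout(() => {
                    analyzeWithRetry(params, attempt + 1).then(resolve, reject);
                }, wait);
            }
            reject(err);
        });
    });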