
Apologies for the cross-post; this question was also posted to the Elasticsearch Google group.

I am trying to index some documents into Elasticsearch 1.2.0 on my VM, using the Python client over a Thrift connection. After indexing roughly 1,200–1,800 documents, I get a TSocket timeout. Here is the traceback:

Traceback (most recent call last):
  File "new_insert_bulk.py", line 400, in <module>
    actions = define_products(namespace_id, store_ids, int(sys.argv[3]), category_ids, category_names, actions, cluster_id)
  File "new_insert_bulk.py", line 371, in define_products
    print helpers.bulk(es, actions)
  File "/home/ubuntu/.virtualenvs/elasticsearch/local/lib/python2.7/site-packages/elasticsearch/helpers.py", line 148, in bulk
    for ok, item in streaming_bulk(client, actions, **kwargs):
  File "/home/ubuntu/.virtualenvs/elasticsearch/local/lib/python2.7/site-packages/elasticsearch/helpers.py", line 107, in streaming_bulk
    resp = client.bulk(bulk_actions, **kwargs)
  File "/home/ubuntu/.virtualenvs/elasticsearch/local/lib/python2.7/site-packages/elasticsearch/client/utils.py", line 70, in _wrapped
    return func(*args, params=params, **kwargs)
  File "/home/ubuntu/.virtualenvs/elasticsearch/local/lib/python2.7/site-packages/elasticsearch/client/__init__.py", line 568, in bulk
    params=params, body=self._bulk_body(body))
  File "/home/ubuntu/.virtualenvs/elasticsearch/local/lib/python2.7/site-packages/elasticsearch/transport.py", line 274, in perform_request
    status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore)
  File "/home/ubuntu/.virtualenvs/elasticsearch/local/lib/python2.7/site-packages/elasticsearch/connection/thrift.py", line 62, in perform_request
    response = tclient.execute(request)
  File "/home/ubuntu/.virtualenvs/elasticsearch/local/lib/python2.7/site-packages/elasticsearch/connection/esthrift/Rest.py", line 42, in execute
    return self.recv_execute()
  File "/home/ubuntu/.virtualenvs/elasticsearch/local/lib/python2.7/site-packages/elasticsearch/connection/esthrift/Rest.py", line 53, in recv_execute
    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
  File "build/bdist.linux-x86_64/egg/thrift/protocol/TBinaryProtocol.py", line 126, in readMessageBegin
  File "build/bdist.linux-x86_64/egg/thrift/protocol/TBinaryProtocol.py", line 206, in readI32
  File "build/bdist.linux-x86_64/egg/thrift/transport/TTransport.py", line 58, in readAll
  File "build/bdist.linux-x86_64/egg/thrift/transport/TTransport.py", line 159, in read
  File "build/bdist.linux-x86_64/egg/thrift/transport/TSocket.py", line 105, in read
socket.timeout: timed out

I have attached the server log as log2.txt, along with a Bigdesk screenshot that may help. Previously, when the documents were less complex, bulk requests of about 1,300 documents each were indexed without any problem. Currently I am trying about 200 documents per request, and it still times out. The logging level is set to TRACE.
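For reference, the batching I am doing can be sketched roughly like this (`chunk_actions`, the chunk size of 100, and the commented `es`/`actions` usage are my own illustration, not the exact code from my script):

```python
def chunk_actions(actions, chunk_size=100):
    """Split a list of bulk actions into smaller batches so that each
    request stays comfortably under the transport timeout."""
    for start in range(0, len(actions), chunk_size):
        yield actions[start:start + chunk_size]

# Hypothetical usage with the elasticsearch-py helpers (es and actions
# as in my script; the chunk size here is a guess):
#   for batch in chunk_actions(actions, chunk_size=100):
#       helpers.bulk(es, batch)
```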

This block in the log:

[2014-07-07 22:24:56,980][TRACE][lucene.iw                ] [Blaquesmith][in_0][0] elasticsearch[Blaquesmith][scheduler][T#1] DW: anyChanges? numDocsInRam=0 deletes=false hasTickets:false pendingChangesInFullFlush: false
[2014-07-07 22:24:57,277][TRACE][lucene.iw                ] [Blaquesmith][in_0][1] elasticsearch[Blaquesmith][scheduler][T#1] DW: anyChanges? numDocsInRam=0 deletes=false hasTickets:false pendingChangesInFullFlush: false
[2014-07-07 22:24:57,277][TRACE][lucene.iw                ] [Blaquesmith][in_0][1] elasticsearch[Blaquesmith][scheduler][T#1] IW: nrtIsCurrent: infoVersion matches: true; DW changes: false; BD changes: false
[2014-07-07 22:24:57,277][TRACE][lucene.iw                ] [Blaquesmith][in_0][1] elasticsearch[Blaquesmith][scheduler][T#1] DW: anyChanges? numDocsInRam=0 deletes=false hasTickets:false pendingChangesInFullFlush: false
[2014-07-07 22:24:57,496][TRACE][lucene.iw                ] [Blaquesmith][in_0][2] elasticsearch[Blaquesmith][scheduler][T#1] DW: anyChanges? numDocsInRam=0 deletes=false hasTickets:false pendingChangesInFullFlush: false
[2014-07-07 22:24:57,496][TRACE][lucene.iw                ] [Blaquesmith][in_0][2] elasticsearch[Blaquesmith][scheduler][T#1] IW: nrtIsCurrent: infoVersion matches: true; DW changes: false; BD changes: false
[2014-07-07 22:24:57,496][TRACE][lucene.iw                ] [Blaquesmith][in_0][2] elasticsearch[Blaquesmith][scheduler][T#1] DW: anyChanges? numDocsInRam=0 deletes=false hasTickets:false pendingChangesInFullFlush: false
[2014-07-07 22:24:57,980][TRACE][lucene.iw                ] [Blaquesmith][in_0][0] elasticsearch[Blaquesmith][scheduler][T#1] DW: anyChanges? numDocsInRam=0 deletes=false hasTickets:false pendingChangesInFullFlush: false
[2014-07-07 22:24:57,981][TRACE][lucene.iw                ] [Blaquesmith][in_0][0] elasticsearch[Blaquesmith][scheduler][T#1] IW: nrtIsCurrent: infoVersion matches: true; DW changes: false; BD changes: false
[2014-07-07 22:24:57,981][TRACE][lucene.iw                ] [Blaquesmith][in_0][0] elasticsearch[Blaquesmith][scheduler][T#1] DW: anyChanges? numDocsInRam=0 deletes=false hasTickets:false pendingChangesInFullFlush: false

keeps repeating long after the timeout. The bulk request that timed out was at roughly 22:21:00. I don't think the document can be invalid (against the mapping or otherwise), because I successfully index the same document after changing only 2 of its fields.

How can I avoid this timeout?

What does the repeating block in the log mean?

How can I make sure this won't happen intermittently on a production system? Any help would be appreciated.
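One mitigation I am considering for production is wrapping each bulk call in a retry with exponential backoff. This is only a sketch: `bulk_with_retry` and its parameters are hypothetical, `send_batch` would wrap `helpers.bulk` in the real script, and catching `Exception` stands in for the actual `socket.timeout`:

```python
import time

def bulk_with_retry(send_batch, batch, retries=3, delay=1.0, backoff=2.0):
    """Call send_batch(batch), retrying on failure with exponential
    backoff. Re-raises the last error if all attempts fail."""
    for attempt in range(retries):
        try:
            return send_batch(batch)
        except Exception:  # socket.timeout in the real client
            if attempt == retries - 1:
                raise
            time.sleep(delay)
            delay *= backoff
```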

Log and Bigdesk screenshots: https://drive.google.com/folderview?id=0B7tAo_9BLHg4cjNyOTlQaDF3QjA&usp=sharing
