I've had a really bad week with elasticsearch (which I picked together with graylog2). I'm trying to run queries against the data in ES from Python.
I have tried the following clients:
- ESClient: very strange results, and I don't think it is maintained; query_body has no effect and it just returns everything.
- Pyes: unreadable and undocumented. I've gone through the source and still can't figure out how to run a simple query, or maybe I'm just not smart enough. I would even be happy to run basic queries in JSON format and then simply crunch the results with Python objects/iterators, but Pyes doesn't make even that easy.
- Elasticutils: more documentation, but no complete examples. I get the error below with the following code. I don't even see how it connects to the right host through this S().
es = get_es(hosts=HOST, default_indexes=[INDEX])
basic_s = S().indexes(INDEX).doctypes(DOCTYPE).values_dict()
Result:
print basic_s.query(message__text="login/delete")
File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 223, in __repr__
data = list(self)[:REPR_OUTPUT_SIZE + 1]
File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 623, in __iter__
return iter(self._do_search())
File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 573, in _do_search
hits = self.raw()
File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 615, in raw
hits = es.search(qs, self.get_indexes(), self.get_doctypes())
File "/usr/lib/python2.7/site-packages/pyes/es.py", line 841, in search
return self._query_call("_search", body, indexes, doc_types, **query_params)
File "/usr/lib/python2.7/site-packages/pyes/es.py", line 251, in _query_call
response = self._send_request('GET', path, body, querystring_args)
File "/usr/lib/python2.7/site-packages/pyes/es.py", line 208, in _send_request
response = self.connection.execute(request)
File "/usr/lib/python2.7/site-packages/pyes/connection_http.py", line 167, in _client_call
return getattr(conn.client, attr)(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/pyes/connection_http.py", line 59, in execute
response = self.client.urlopen(Method._VALUES_TO_NAMES[request.method], uri, body=request.body, headers=request.headers)
File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again
File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again
File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again
File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again
File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 255, in urlopen
raise MaxRetryError("Max retries exceeded for url: %s" % url)
pyes.urllib3.connectionpool.MaxRetryError: Max retries exceeded for url: /graylog2/message/_search
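The MaxRetryError at the bottom of that traceback suggests pyes never got a response from the server at all, so before blaming the query it's worth confirming the node is even reachable. A minimal stdlib check, assuming the default ES port 9200 (the host/port here are placeholders for your own setup):

```python
import socket

def es_reachable(host="127.0.0.1", port=9200, timeout=2.0):
    """Return True if something is accepting TCP connections on the ES port."""
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return True
    except (socket.error, OSError):
        return False

print(es_reachable())  # False here means no client library can work either
```

If this prints False, the problem is the HOST value handed to get_es (or a firewall), not the query syntax.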
I wish the developers of these otherwise good projects would provide some complete examples. Even reading the source, I am completely lost.
Is there any solution or help for using elasticsearch from Python, or should I drop all of this, pay for a decent splunk account, and end the pain?
For now I'm falling back to curl: downloading the entire JSON result and json-loading it. Hopefully that works, although pulling a million messages out of elasticsearch with curl is probably never going to happen.
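That curl-and-json.loads fallback can be done from the stdlib directly, which also shows the raw query DSL the clients above were presumably building. A minimal sketch, assuming a node on localhost:9200 and the graylog2 index/doctype from the traceback; the `text` query on the message field is my guess at what `message__text` maps to (newer ES versions call this a match query):

```python
import json
try:
    from urllib2 import urlopen, Request          # Python 2
except ImportError:
    from urllib.request import urlopen, Request   # Python 3

def search(host, index, doctype, query_body):
    """POST a query DSL body to the _search endpoint and decode the JSON reply."""
    url = "http://%s:9200/%s/%s/_search" % (host, index, doctype)
    req = Request(url, json.dumps(query_body).encode("utf-8"),
                  {"Content-Type": "application/json"})
    return json.loads(urlopen(req).read().decode("utf-8"))

def iter_messages(result):
    """Walk the hits of a _search response and yield each document's _source."""
    for hit in result.get("hits", {}).get("hits", []):
        yield hit["_source"]

# The query the clients were trying to express, as plain query DSL:
query = {"query": {"text": {"message": "login/delete"}}}

# A canned response in the shape ES returns, so the parsing half can be shown
# without a live cluster; with a real node you would use search(...) instead:
sample = {"hits": {"total": 1,
                   "hits": [{"_id": "1",
                             "_source": {"message": "login/delete failed"}}]}}
print([doc["message"] for doc in iter_messages(sample)])  # → ['login/delete failed']
```

For a million messages this would still need the scan/scroll API rather than one giant response, but for sanity-checking queries it removes every client library from the equation.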