
I discussed the same connectivity problem in the question "Cygnus can not persist data on Cosmos global instance", but after reading it I did not find a solution.

I have now deployed two virtual machines in FILAB (both VMs contain Orion ContextBroker 0.26.1 and Cygnus 0.11.0).

When I try to persist data on Cosmos through Cygnus, I get the following error message (the same in both VMs):

2015-12-17 19:03:00,221 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - com.telefonica.iot.cygnus.sinks.OrionSink.process(OrionSink.java:305)] Persistence error (The /user/rmartinezcarreras/def_serv/def_serv_path/room1_room directory could not be created in HDFS. Server response: 503 Service unavailable)
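
As far as I understand, the directory creation that fails here corresponds to a WebHDFS MKDIRS request against the HttpFS endpoint on port 14000. For reference, this is a minimal sketch of that call (assuming Node 18+ for the built-in fetch; the token is a placeholder and the path is taken from the log above):

// Hand-made reproduction of the directory creation Cygnus attempts via
// WebHDFS (op=MKDIRS) on the HttpFS port. Token and user are placeholders.
const endpoint = "http://cosmos.lab.fiware.org:14000";
const hdfsPath = "/user/rmartinezcarreras/def_serv/def_serv_path/room1_room";
const token = "XXXXXXX"; // X-Auth-Token for the Cosmos global instance

async function mkdirs(): Promise<void> {
  const url =
    `${endpoint}/webhdfs/v1${hdfsPath}` +
    `?op=MKDIRS&user.name=rmartinezcarreras`;
  const res = await fetch(url, {
    method: "PUT",
    headers: { "X-Auth-Token": token },
  });
  // Cygnus logs the persistence error above when this does not return 200.
  console.log(res.status, await res.text());
}

mkdirs().catch(console.error);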

On the other hand, when I issue the request from the command line of either VM, I get the following response:

[root@orionlarge centos]# curl -v -X GET "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/rmartinezcarreras/?op=liststatus&user.name=rmartinezcarreras" -H "X-Auth-Token: XXXXXXX"
* About to connect() to cosmos.lab.fiware.org port 14000 (#0)
*   Trying 130.206.80.46... connected
* Connected to cosmos.lab.fiware.org (130.206.80.46) port 14000 (#0)
> GET /webhdfs/v1/user/rmartinezcarreras/?op=liststatus&user.name=rmartinezcarreras HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: cosmos.lab.fiware.org:14000
> Accept: */*
> X-Auth-Token: XXXXX
>
* Closing connection #0
* Failure when receiving data from the peer
curl: (56) Failure when receiving data from the peer

Nevertheless, from a VM outside FILAB:

[root@dsieBroker orion]# curl -v -X GET "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/rmartinezcarreras/?op=liststatus&user.name=rmartinezcarreras" -H "X-Auth-Token: XXXXX"
* About to connect() to cosmos.lab.fiware.org port 14000 (#0)
*   Trying 130.206.80.46... connected
* Connected to cosmos.lab.fiware.org (130.206.80.46) port 14000 (#0)
> GET /webhdfs/v1/user/rmartinezcarreras/?op=liststatus&user.name=rmartinezcarreras HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: cosmos.lab.fiware.org:14000
> Accept: */*
> X-Auth-Token: XXXXXX
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: HEAD, POST, GET, OPTIONS, DELETE
< Access-Control-Allow-Headers: origin, content-type, X-Auth-Token, Tenant-ID, Authorization
< server: Apache-Coyote/1.1
< set-cookie: hadoop.auth="u=rmartinezcarreras&p=rmartinezcarreras&t=simple&e=XXXXXX&s=XXXXhD8="; Version=1; Path=/
< Content-Type: application/json; charset=utf-8
< transfer-encoding: chunked
< date: Thu, 17 Dec 2015 18:52:46 GMT
< connection: close
< Content-Length: 243
< ETag: W/"f3-NL9+bYJLweyFpoJfNgjQrg"
<
{"FileStatuses":{"FileStatus":[{"pathSuffix":"def_serv","type":"DIRECTORY","length":0,"owner":"rmartinezcarreras","group":"rmartinezcarreras","permission":"740","accessTime":0,"modificationTime":1450349251833,"blockSize":0,"replication":0}]}}
* Closing connection #0

I also get good results from my Cosmos account.

How can I fix this? It seems to be a connectivity problem. Could you help me?

Thanks in advance.


1 Answer


In the end, this was a problem with the OAuth2 proxy we use for authentication and authorization. The underlying Express module it is built on was adding a transfer-encoding: chunked header when another content-length header was already present. As investigated in another question, that combination is not RFC-compliant and was causing certain fully compliant client implementations to reset the connection.
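
For illustration only (this is not the actual proxy code), the sketch below shows one way an Express-based proxy could avoid that bad combination, by dropping Content-Length whenever a response is about to go out with Transfer-Encoding: chunked; the middleware shape and all names are assumptions:

// Illustrative sketch only, not the FIWARE OAuth2 proxy code. It strips a
// stale Content-Length header whenever a response is about to be sent with
// Transfer-Encoding: chunked, since emitting both headers together is the
// non-RFC-compliant combination described above.
import express, { NextFunction, Request, Response } from "express";

const app = express();

app.use((_req: Request, res: Response, next: NextFunction) => {
  const originalWriteHead = res.writeHead.bind(res);
  // writeHead is the last point at which response headers can still be edited.
  (res as any).writeHead = (...args: any[]) => {
    const te = res.getHeader("Transfer-Encoding");
    if (typeof te === "string" && te.toLowerCase().includes("chunked")) {
      // Dropping Content-Length keeps the response RFC-compliant and stops
      // strict clients from resetting the connection.
      res.removeHeader("Content-Length");
    }
    return (originalWriteHead as any)(...args);
  };
  next();
});

app.listen(14000, () => {
  console.log("sketch proxy listening on port 14000");
});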

Answered 2016-01-25T16:53:16.100