
Kibana single sign-on with OpenID and Keycloak. I have configured the settings by following the opendistro documentation: https://opendistro.github.io/for-elasticsearch-docs/docs/security-configuration/openid-connect/

docker-compose.yml

version: '3'
services:
  elasticsearch:
    image: amazon/opendistro-for-elasticsearch:0.7.0
    container_name: odfe-elasticsearch
    environment:
      -  discovery.type=single-node     
      -  bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      -  "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - odfe-data1:/usr/share/elasticsearch/data
      - ./elastisearch-opendistro-sec/config.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - odfe-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:0.7.0
    container_name: odfe-kibana
    ports:
      - 5601:5601
    volumes:
      - ./kibana-opendistro-sec/kibana.yml:/usr/share/kibana/config/kibana.yml
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: https://odfe-elasticsearch:9200
      ELASTICSEARCH_HOSTS: https://odfe-elasticsearch:9200
    networks:
      - odfe-net
volumes:
  odfe-data1:
networks:
  odfe-net:

keycloak-compose.yml

version: '3'
services:
  mysql:
      image: mysql:5.7
      volumes:
        - mysql_data:/var/lib/mysql
      environment:
        MYSQL_ROOT_PASSWORD: root
        MYSQL_DATABASE: keycloak
        MYSQL_USER: keycloak
        MYSQL_PASSWORD: password
      networks:
        - odfe-net
  keycloak:
      image: jboss/keycloak
      environment:
        DB_VENDOR: MYSQL
        DB_ADDR: mysql
        DB_DATABASE: keycloak
        DB_USER: keycloak
        DB_PASSWORD: password
        KEYCLOAK_USER: admin
        KEYCLOAK_PASSWORD: admin
      networks:
        - odfe-net
      ports:
        - 8080:8080
      depends_on:
        - mysql
volumes:
  mysql_data:
networks:
  odfe-net:

config.yml

opendistro_security:
  dynamic:
    authc:
      basic_internal_auth_domain:
        enabled: true
        order: 0
        http_authenticator:
          type: basic
          challenge: false
        authentication_backend:
          type: internal
      openid_auth_domain:
        enabled: true
        order: 1
        http_authenticator:
          type: openid
          challenge: false
        config:
          subject_key: preferred_username
          roles_key: roles
          openid_connect_url: http://172.29.0.3:8080/auth/realms/master/.well-known/openid-configuration
        authentication_backend:
          type: noop  

kibana.yml

opendistro_security.auth.type: "openid"
opendistro_security.openid.connect_url: "http://172.29.0.3:8080/auth/realms/master/.well-known/openid-configuration"
opendistro_security.openid.client_id: "kibana-sso"
opendistro_security.openid.client_secret: "841d796a-bc3a-4cc8-9fb9-bed6221f66b4"


elasticsearch.url: "https://odfe-elasticsearch:9200"
elasticsearch.username: "kibanaserver"
elasticsearch.password: "kibanaserver"
elasticsearch.ssl.verificationMode: none
elasticsearch.requestHeadersWhitelist: ["Authorization", "security_tenant"]

The OpenID Connect endpoint has to be specified in both the kibana.yml and config.yml files.

When I use localhost in the OpenID Connect endpoint URL, http://localhost:8080/auth/realms/master/.well-known/openid-configuration, I get the following error:

"Client request error: connect ECONNREFUSED 127.0.0.1:8080"}
odfe-kibana      | /usr/share/kibana/plugins/opendistro_security/lib/auth/types/openid/OpenId.js:151
odfe-kibana      |                 throw new Error('Failed when trying to obtain the endpoints from your IdP');

The error was resolved after following the solution given in this link: ECONNREFUSED nodeJS with express inside docker container
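
For context, the 172.29.0.3 address in config.yml above is the Keycloak container's IP on the Docker network. As an alternative sketch (not something the linked answer prescribes), the odfe-net network could be created ahead of time with docker network create odfe-net and marked as external in both compose files, so the Kibana container can reach Keycloak by its service name instead of a hard-coded IP:

# Sketch only: with a pre-created external network, both compose projects
# join the same network and Kibana can resolve Keycloak as http://keycloak:8080.
# This only covers the server-side discovery call; the browser still needs a
# host-resolvable address, which the answer below addresses.
networks:
  odfe-net:
    external: true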

Kibana is running at localhost:5601, but when I try to load the page in the browser I get ERR_EMPTY_RESPONSE.

Here are the logs:

odfe-kibana      | {"type":"log","@timestamp":"2019-06-27T23:00:54Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: https://odfe-elasticsearch:9200/"}
odfe-kibana      | {"type":"log","@timestamp":"2019-06-27T23:00:54Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
odfe-elasticsearch | [2019-06-27T23:00:54,613][INFO ][c.a.o.e.p.h.c.PerformanceAnalyzerConfigAction] [8EPY7C_] PerformanceAnalyzer Enabled: true
odfe-elasticsearch | Registering Handler
odfe-elasticsearch | [2019-06-27T23:00:54,687][INFO ][o.e.n.Node               ] [8EPY7C_] initialized
odfe-elasticsearch | [2019-06-27T23:00:54,687][INFO ][o.e.n.Node               ] [8EPY7C_] starting ...
odfe-elasticsearch | [2019-06-27T23:00:54,918][INFO ][o.e.t.TransportService   ] [8EPY7C_] publish_address {172.29.0.5:9300}, bound_addresses {0.0.0.0:9300}
odfe-elasticsearch | [2019-06-27T23:00:54,967][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [8EPY7C_] Check if .opendistro_security index exists ...
odfe-elasticsearch | [2019-06-27T23:00:55,064][INFO ][c.a.o.s.h.OpenDistroSecurityHttpServerTransport] [8EPY7C_] publish_address {172.29.0.5:9200}, bound_addresses {0.0.0.0:9200}
odfe-elasticsearch | [2019-06-27T23:00:55,067][INFO ][o.e.n.Node               ] [8EPY7C_] started
odfe-elasticsearch | [2019-06-27T23:00:55,070][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [8EPY7C_] 4 Open Distro Security modules loaded so far: [Module [type=AUDITLOG, implementing class=com.amazon.opendistroforelasticsearch.security.auditlog.impl.AuditLogImpl], Module [type=MULTITENANCY, implementing class=com.amazon.opendistroforelasticsearch.security.configuration.PrivilegesInterceptorImpl], Module [type=DLSFLS, implementing class=com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityFlsDlsIndexSearcherWrapper], Module [type=REST_MANAGEMENT_API, implementing class=com.amazon.opendistroforelasticsearch.security.dlic.rest.api.OpenDistroSecurityRestApiActions]]
odfe-elasticsearch | [2019-06-27T23:00:55,558][INFO ][o.e.g.GatewayService     ] [8EPY7C_] recovered [2] indices into cluster_state
odfe-elasticsearch | [2019-06-27T23:00:56,394][INFO ][o.e.c.r.a.AllocationService] [8EPY7C_] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.opendistro_security][0]] ...]).
odfe-elasticsearch | [2019-06-27T23:00:56,684][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [8EPY7C_] Node '8EPY7C_' initialized
odfe-kibana      | {"type":"log","@timestamp":"2019-06-27T23:00:57Z","tags":["status","plugin:elasticsearch@6.5.4","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at https://odfe-elasticsearch:9200/."}
odfe-kibana      | {"type":"log","@timestamp":"2019-06-27T23:00:57Z","tags":["listening","info"],"pid":1,"message":"Server running at http://localhost:5601"}

1 Answer


I would not use "localhost" with Keycloak and Docker, especially if you are running Docker for Mac.

The error you are seeing (connect ECONNREFUSED 127.0.0.1:8080) means that Kibana is trying to connect to itself (its own Docker container) on port 8080. That may seem confusing, but "localhost" has a very specific meaning on each machine, and every Docker container is its own machine. What you want instead is for it to connect from the Docker network to port 8080 on your host.

For that, I suggest using "127.0.0.1.xip.io" (look up xip.io to see what it is) as your hostname. You may also need to configure this address with "extra_hosts" in your docker-compose file.
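
A minimal sketch of what that could look like in docker-compose.yml, assuming Keycloak is published on the host's port 8080 (as in keycloak-compose.yml above) and that 172.17.0.1 is the Docker bridge gateway IP on your machine (it may well differ):

# Sketch only: map the xip.io name to an address reachable from inside the
# Kibana container; the browser resolves 127.0.0.1.xip.io to 127.0.0.1 on the
# host, where Keycloak's port 8080 is published.
  kibana:
    extra_hosts:
      - "127.0.0.1.xip.io:172.17.0.1"   # assumed Docker bridge gateway IP

The connect_url in kibana.yml and the openid_connect_url in config.yml would then both point at http://127.0.0.1.xip.io:8080/auth/realms/master/.well-known/openid-configuration.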

Answered 2019-07-02T22:21:40.253