
I am setting up Vora 2.1 on an AWS KOPS cluster.

    ./install.sh --accept-license --deployment-type=cloud --enable-rbac=no  --namespace=vora --docker-registry=<localrepository>:5000 --vora-admin-username=voraadmin --vora-admin-password=<secret> --cert-domain=<customerdomain> --interactive-security-configuration=no --vsystem-storage-class=aws-efs --vsystem-load-nfs-modules

Here is my error:

Wait until pod vora-deployment-operator-cc84bff65-hgtt4 is running...
Wait until containers in the pod vora-deployment-operator-cc84bff65-hgtt4 are ready...
Wait until voracluster CRD is created...
No resources found.
Deploying vora-cluster with: helm install --namespace vora -f values.yaml -f /install/SAPVora-2.1.60-DistributedRuntime/stateful-replica-conf.yaml   --set docker.registry=172.20.41.35:5000   --set rbac.enabled=false   --set imagePullSecret=   --set docker.imagePullSecret=   --set version.package=2.1.60 --set docker.image=vora/dqp --set docker.imageTag=2.1.32.25-vora-2.1 --set components.globalParameters.security.docker.image=vora/init-security --set components.globalParameters.security.docker.imageTag=0.0.9 --set components.globalParameters.security.enable=true --set components.globalParameters.security.context=consumer --set components.globalParameters.security.contextRoot=/etc/vora-security --set version.component=2.1.32.25-vora-2.1 --set name=vora --set dontUseExternalStorage=false --set useHostPath=false --set components.disk.useHostPath=false --set components.dlog.useHostPath=false  .
NAME:   quaffing-cow
LAST DEPLOYED: Thu Mar 29 09:53:24 2018
NAMESPACE: vora
STATUS: DEPLOYED

RESOURCES:
==> v1/VoraCluster
NAME  KIND
vora  VoraCluster.v1.sap.com


Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):
        Get http://127.0.0.1:8879/charts/index.yaml: dial tcp 127.0.0.1:8879: getsockopt: connection refused
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading consul from repo https://kubernetes-charts.storage.googleapis.com/
Deleting outdated charts
vora-vsystem is already installed, skipping...
Deploying vora-thriftserver with: helm install --namespace vora -f values.yaml -f /install/SAPVora-2.1.60-DistributedRuntime/stateful-replica-conf.yaml   --set docker.registry=172.20.41.35:5000   --set rbac.enabled=false   --set imagePullSecret=   --set docker.imagePullSecret=   --set version.package=2.1.60 --set thriftserver.docker.image=vora/thriftserver --set thriftserver.docker.imageTag=2.1.14.25-vora-2.1 --set auth.enable=true --set secop.ctxRoot=/etc/vora-security --set secop.ctxName=consumer --set secop.docker.image=vora/init-security --set secop.docker.imageTag=0.0.9 --set version.component=2.1.14.25-vora-2.1 .
NAME:   knotted-macaw
LAST DEPLOYED: Thu Mar 29 09:53:29 2018
NAMESPACE: vora
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME               CLUSTER-IP     EXTERNAL-IP  PORT(S)    AGE
vora-thriftserver  100.69.133.27  <none>       10001/TCP  1s

==> v1beta1/Deployment
NAME               DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
vora-thriftserver  1        1        1           0          1s


Authentication is enabled!
Running validation...
Wait until vora cluster is ready...
Wait until vora cluster is ready...
...........
Wait until vora cluster is ready...
Wait until vora cluster is ready...
Timeout while waiting for vora cluster! See below for more details:
Name:         vora
Namespace:    vora
Labels:       <none>
Annotations:  <none>
API Version:  sap.com/v1
Kind:         VoraCluster
Metadata:
  Cluster Name:
  Creation Timestamp:  2018-03-29T09:53:24Z
  Generation:          0
  Resource Version:    497995
  Self Link:           /apis/sap.com/v1/namespaces/vora/voraclusters/vora
  UID:                 055fc3ab-3337-11e8-8c30-0aa4c3a975fc
Spec:
  Components:
    Catalog:
      Replicas:     1
      Trace Level:  info
    Disk:
      Db Space Size:            10000
      Initial Delay Seconds:    180
      Large Memory Limit:       3000
      Main Cache Memory Limit:  3000
      Network Drivers List:     none
      Pv:
        Volume Claim Annotations:        <nil>
      Replicas:                          1
      Storage Size:                      50Gi
      Temporary Cache Memory Limit:      3000
      Termination Grace Period Seconds:  300
      Trace Level:                       info
    Dlog:
      Buffer Size:            4g
      Initial Delay Seconds:  15
      Pv:
        Volume Claim Annotations:        <nil>
      Replication Factor:                2
      Standby Factor:                    1
      Storage Size:                      50Gi
      Termination Grace Period Seconds:  60
      Trace Level:                       info
    Doc Store:
      Replicas:     1
      Trace Level:  info
    Global Parameters:
      Health Check:
        Deregister Timeout:                2m
        Initial Delay Seconds:             15
        Period Seconds:                    5
        Termination Grace Period Seconds:  60
      Security:
        Context:       consumer
        Context Root:  /etc/vora-security
        Image:         172.20.41.35:5000/vora/init-security:0.0.9
      Trace Level:     info
    Graph:
      Replicas:     1
      Trace Level:  info
    Landscape:
      Bootstrapping:       True
      Replicas:            1
      Replication Factor:  1
      Trace Level:         info
    Relational:
      Replicas:     1
      Trace Level:  info
    Time Series:
      Replicas:     1
      Trace Level:  info
    Tx Broker:
      Replicas:     1
      Trace Level:  info
    Tx Coordinator:
      Node Port:     0
      Replicas:      1
      Service Type:  NodePort
      Trace Level:   info
    Tx Lock Manager:
      Replicas:     1
      Trace Level:  info
  Docker:
    Image:              172.20.41.35:5000/vora/dqp:2.1.32.25-vora-2.1
    Image Pull Secret:
  Version:
    Component:  2.1.32.25-vora-2.1
    Package:    2.1.60
Status:
  Message:  Less available workers than Distributed Log requirements
  State:    Failed
Events:
  Type  Reason               Age   From                      Message
  ----  ------               ----  ----                      -------
            Update Vora Cluster  10m   vora-deployment-operator  Processing failed: less available workers than Distributeed Log requirements
            New Vora Cluster     10m   vora-deployment-operator  Started processing
    Timeout waiting for vora cluster! Please check the status of the cluster from above logs and kubernetes dashboard...

A few more checks:

       kubectl get pods --namespace=vora -w
    NAME                                                   READY     STATUS      RESTARTS   AGE
    vora-consul-0                                          1/1       Running     0          40m
    vora-consul-1                                          1/1       Running     0          39m
    vora-consul-2                                          1/1       Running     0          39m
    vora-deployment-operator-cc84bff65-hgtt4               1/1       Running     0          38m
    vora-elasticsearch-logging-v1-6cd4d466dc-gml9d         1/1       Running     0          38m
    vora-elasticsearch-logging-v1-6cd4d466dc-k882r         1/1       Running     0          38m
    vora-elasticsearch-retention-policy-5876dc64d4-6rb2l   1/1       Running     0          38m
    vora-fluentd-kubernetes-v1.21-95xt2                    1/1       Running     0          38m
    vora-fluentd-kubernetes-v1.21-f856k                    1/1       Running     0          38m
    vora-grafana-7b5454487b-xgbjt                          1/1       Running     0          38m
    vora-grafana-set-datasource-nwkt4                      0/1       Completed   1          38m
    vora-kibana-logging-c9565b88f-wm87j                    1/1       Running     0          38m
    vora-kibana-logging-set-settings-h2vs2                 0/1       Completed   1          38m
    vora-prometheus-kube-state-metrics-57bb8bdb76-xlx4l    1/1       Running     0          38m
    vora-prometheus-node-exporter-m7znt                    1/1       Running     0          38m
    vora-prometheus-node-exporter-mp5ls                    1/1       Running     0          38m
    vora-prometheus-pushgateway-85dcf9f96f-j74j2           1/1       Running     0          38m
    vora-prometheus-pushgateway-cleaner-7ddf5657f-nwzrc    1/1       Running     0          38m
    vora-prometheus-server-797df6d8fb-5s7zd                2/2       Running     0          38m
    vora-security-operator-77f7fb9f5-zfs2z                 1/1       Running     0          40m
    vora-thriftserver-845646d95-5cz45                      2/2       Running     0          38m
    ^Cadmin@ip-172-20-41-35:/install/SAPVora-2.1.60-DistributedRuntime$   helm test kindred-clam
    Error: release: "kindred-clam" not found
    admin@ip-172-20-41-35:/install/SAPVora-2.1.60-DistributedRuntime$ kubectl exec vora-consul-0 consul members --namespace=vora | grep server
    vora-consul-0  100.96.1.9:8301   alive   server  0.9.0  2         dc1
    vora-consul-1  100.96.0.18:8301  alive   server  0.9.0  2         dc1
    vora-consul-2  100.96.1.10:8301  alive   server  0.9.0  2         dc1

It seems the installer did not create the cluster at all:

    kubectl get vc CRD -n vora
    Error from server (NotFound): voraclusters.sap.com "CRD" not found
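
As a side note, `kubectl get vc CRD -n vora` asks for a VoraCluster instance literally named `CRD`, which is why it returns NotFound. A sketch of checks that separate "CRD missing" from "no instances" (resource and instance names assumed from the logs above):

```shell
# Check whether the VoraCluster CRD itself is registered with the API server
kubectl get crd voraclusters.sap.com

# List all VoraCluster instances in the vora namespace
kubectl get voraclusters.sap.com -n vora

# Describe the instance the installer created, to see its Status and Events
kubectl describe voracluster vora -n vora
```

Per the `Status` section in the installer output above, the instance `vora` does exist; it is in state `Failed`, so the cluster object was created but could not be processed.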

Is there a way to create the cluster manually? Or is this even my actual problem, or is it something else?


3 Answers


The problem above is the error "Processing failed: less available workers than Distributed Log requirements".

With Vora 2.1 you need 1 master and 3 workers by default. The minimum size is 1 master and 2 workers. To run Vora 2.1 with only 2 workers, you need to change the DLOG replicationFactor in deployment/helm/vora-cluster/values.yaml.

Original (requires 3 workers; one per DLOG):

  dlog:
    replicationFactor: 2
    standbyFactor: 1

Minimum (2 workers; replicationFactor changed):

  dlog:
    replicationFactor: 1
    standbyFactor: 1
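
As a rough sanity check (my inference from the error message and the two configurations above, not from official documentation), the DLOG component appears to need at least `replicationFactor + standbyFactor` worker nodes:

```shell
# Inferred rule: workers required >= replicationFactor + standbyFactor
rf=2; sf=1
echo "original config: needs >= $((rf + sf)) workers"

rf=1; sf=1
echo "minimum config:  needs >= $((rf + sf)) workers"
```

With the default `replicationFactor: 2` and `standbyFactor: 1` that gives 3 workers, which matches the "less available workers" failure on a 2-worker cluster.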
answered 2018-03-30T13:26:39.230

Thanks Frank!

By lowering the replication factor, I was able to complete the installation.

Now I can continue with the setup on the Hadoop cluster.

answered 2018-03-30T14:08:35.737

How many nodes do you have? The recommended size is 1 master and 2 worker nodes. Normally no Vora pods are scheduled on the master, because it is an unschedulable node, so all pods land on the worker nodes, and the dlog service needs at least 2 nodes. If you only have 2 nodes including the master, make the master schedulable. I hope that solves your problem.
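
One way to make the master schedulable is to remove its NoSchedule taint (the node name is a placeholder, and the exact taint key varies between Kubernetes versions, so check what your cluster actually uses first):

```shell
# Show the taints currently set on each node
kubectl describe nodes | grep -A1 Taints

# Remove the master NoSchedule taint; the trailing "-" deletes the taint
kubectl taint nodes <master-node-name> node-role.kubernetes.io/master:NoSchedule-
```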

answered 2018-03-30T08:10:59.047