When creating the cluster, I opened the GCP console and entered this command:
gcloud dataproc clusters create clusterName \
  --bucket bucketName \
  --region europe-west3 \
  --zone europe-west3-a \
  --master-machine-type n1-standard-16 \
  --master-boot-disk-type pd-ssd \
  --master-boot-disk-size 200 \
  --num-workers 2 \
  --worker-machine-type n1-highmem-16 \
  --worker-boot-disk-size 200 \
  --image-version 2.0-debian10 \
  --max-idle 3600s \
  --optional-components JUPYTER \
  --initialization-actions 'gs://goog-dataproc-initialization-actions-europe-west3/python/pip-install.sh','gs://goog-dataproc-initialization-actions-europe-west3/connectors/connectors.sh' \
  --metadata 'PIP_PACKAGES=pyspark==3.1.2 tensorflow keras elephas==3.0.0',spark-bigquery-connector-version=0.21.0,bigquery-connector-version=1.2.0 \
  --project projectName \
  --enable-component-gateway
The `--initialization-actions` part of the script is what worked for me:
--initialization-actions 'gs://goog-dataproc-initialization-actions-europe-west3/python/pip-install.sh','gs://goog-dataproc-initialization-actions-europe-west3/connectors/connectors.sh' \
  --metadata 'PIP_PACKAGES=pyspark==3.1.2 tensorflow keras elephas==3.0.0',spark-bigquery-connector-version=0.21.0,bigquery-connector-version=1.2.0
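Note that the `PIP_PACKAGES` metadata value is one space-separated string, so all four packages must sit inside the single quoted value. A minimal local sketch of how the `pip-install.sh` action is expected to consume it (assumption: it iterates over the space-separated entries and runs `pip install` on each; `echo` stands in for the actual install here):

```shell
# Simulate the PIP_PACKAGES metadata value passed via --metadata above.
PIP_PACKAGES='pyspark==3.1.2 tensorflow keras elephas==3.0.0'

# Word-splitting on spaces yields one pip install per package entry;
# pinned versions (pkg==x.y.z) pass through to pip unchanged.
for pkg in $PIP_PACKAGES; do
  echo "pip install $pkg"
done
```

This is why the packages are space-separated rather than comma-separated: commas would be swallowed by the `--metadata` key=value parsing, while spaces inside the quoted value survive intact for the init action to split on.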