I have an AWS Glue job whose task is very simple: split large gzipped CSV files into 1 GB files.
In my test I uploaded 4 files to the bucket, each roughly 5 GB. However, the job always assigns all of the files to a single worker instead of distributing them across all workers.
Log from the active worker:
[Executor task launch worker for task 3] s3n.S3NativeFileSystem (S3NativeFileSystem.java:open(1323)): Opening 's3://input/IN-4.gz' for reading
[Executor task launch worker for task 0] s3n.S3NativeFileSystem (S3NativeFileSystem.java:open(1323)): Opening 's3://input/IN-1.gz' for reading
[Executor task launch worker for task 2] s3n.S3NativeFileSystem (S3NativeFileSystem.java:open(1323)): Opening 's3://input/IN-3.gz' for reading
[Executor task launch worker for task 1] s3n.S3NativeFileSystem (S3NativeFileSystem.java:open(1323)): Opening 's3://input/IN-2.gz' for reading
[Executor task launch worker for task 0] zlib.ZlibFactory (ZlibFactory.java:<clinit>(49)): Successfully loaded & initialized native-zlib library
Log from one of the idle workers:
storage.BlockManager (Logging.scala:logInfo(54)): Initialized BlockManager: BlockManagerId(3, 172.31.0.109, 35849, None)
The remaining workers are stuck at this step, waiting indefinitely, while all 20 GB of files are assigned to the single active worker.
The job script is as follows:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
# Read the gzipped input from the Glue Data Catalog, grouping files within a partition
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "crawled-database", table_name = "input", transformation_ctx = "datasource0", additional_options = {"groupFiles": "inPartition", "compressionType": "gzip"})
# Rename the two columns that are kept
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [ ("tagids", "string", "internal_tagids", "string"), ("channel", "long", "internal_channel", "long")], transformation_ctx = "applymapping1")
# Write gzipped CSV back to S3, targeting ~1 GB (1073741824-byte) groups
datasink2 = glueContext.write_dynamic_frame.from_options(frame = applymapping1, connection_type = "s3", connection_options = {"path": "s3://glue-report-staging", "groupFiles": "inPartition", "groupSize": "1073741824", "compression": "gzip"}, format = "csv", transformation_ctx = "datasink2")
job.commit()
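
For reference, here is a minimal variation of the write step I have been considering. It assumes that each gzip file is read as a single non-splittable partition and that an explicit repartition() on the underlying DataFrame would spread the rows across executors before writing; the partition count of 20 is just an illustrative guess, not something from my current job.

from awsglue.dynamicframe import DynamicFrame

# Convert to a Spark DataFrame, repartition to spread the rows across executors,
# then convert back to a DynamicFrame before writing (20 is an arbitrary value for illustration)
repartitioned_df = applymapping1.toDF().repartition(20)
repartitioned = DynamicFrame.fromDF(repartitioned_df, glueContext, "repartitioned")
datasink2 = glueContext.write_dynamic_frame.from_options(frame = repartitioned, connection_type = "s3", connection_options = {"path": "s3://glue-report-staging", "compression": "gzip"}, format = "csv", transformation_ctx = "datasink2")

I have not confirmed whether this actually changes how the work is distributed, which is part of what I am asking.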