
I have data in this format: student_id, course_id, grade, other_information. It covers a very large number of students, on the order of billions of rows. I have written a Perl script that processes one student's data at a time, so I thought of using the Hadoop framework to speed this up by streaming each student's data through the Perl script.

This is what I did:

student_data = LOAD 'source' using PigStorage('\t') As (stud_id:string,...)
grp_student = group student_data by stud_id;
final_data = foreach grp_student {
    flat_data = flatten(grp_student)
    each_stud_data = generate flat_data;
    result = STREAM each_stud_data THROUGH 'some perl script';
}

store final_data into '/some_location';

Problem: I get the error Syntax error, unexpected symbol at or near 'flatten'. I tried googling it, but in vain. Can someone help?


1 Answer


A couple of hints: FLATTEN is not allowed inside a nested FOREACH, and GENERATE must be the last statement in the nested block.

On the STREAM operator, the Pig docs say:

About Data Guarantees
Data guarantees are determined based on the position of the streaming operator in the Pig script.

[...]
Grouped data – The data for the same grouped key is guaranteed to be provided to the streaming application contiguously
[...]

So if you adapt your script to cope with the fact that it receives all the data for a group key contiguously, something like this should work:

student_data = LOAD 'source' using PigStorage('\t') As (stud_id:string,...);
grp_student = GROUP student_data BY stud_id;
flat_data = FOREACH grp_student GENERATE FLATTEN(student_data);
result = STREAM flat_data THROUGH 'some perl script';
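The streaming script itself then just has to detect where one student's run of rows ends and the next begins on stdin. A minimal sketch of that per-group logic (in Python rather than Perl, purely for illustration; the field layout and the per-student summary it emits are assumptions, not the asker's actual processing):

```python
import sys

def group_by_student(lines):
    """Yield (stud_id, rows) for each contiguous run of lines sharing the
    same first tab-separated field. This relies on Pig's guarantee that
    grouped data reaches the streaming application contiguously."""
    current_id, rows = None, []
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        if fields[0] != current_id:
            if current_id is not None:
                yield current_id, rows
            current_id, rows = fields[0], []
        rows.append(fields)
    if current_id is not None:
        yield current_id, rows

if __name__ == "__main__":
    for stud_id, rows in group_by_student(sys.stdin):
        # Per-student processing goes here (whatever the Perl script does);
        # as a placeholder, emit the student id and their row count.
        print(f"{stud_id}\t{len(rows)}")
```

The key point is that no sorting or buffering of the whole input is needed: one pass with a "current key" variable is enough, because rows for a given stud_id never interleave with rows for another.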
answered 2013-11-05T12:20:30.113