Please look at the code below:
wcmapper.php (mapper for the Hadoop streaming job)
#!/usr/bin/php
<?php
// sample mapper for a Hadoop streaming job
$word2count = array();
// input comes from STDIN (standard input)
while (($line = fgets(STDIN)) !== false) {
    // remove leading/trailing whitespace and lowercase
    $line = strtolower(trim($line));
    // split the line into words, dropping any empty strings
    $words = preg_split('/\W/', $line, 0, PREG_SPLIT_NO_EMPTY);
    // increase counters (initialize on first sight to avoid an undefined-index notice)
    foreach ($words as $word) {
        if (!isset($word2count[$word])) {
            $word2count[$word] = 0;
        }
        $word2count[$word] += 1;
    }
}
// write the results to STDOUT (standard output), tab-delimited
foreach ($word2count as $word => $count) {
    echo "$word\t$count\n";
}
?>
wcreducer.php (reducer script for the sample Hadoop job)
#!/usr/bin/php
<?php
// reducer script for the sample Hadoop job
$word2count = array();
// input comes from STDIN
while (($line = fgets(STDIN)) !== false) {
    // remove leading and trailing whitespace
    $line = trim($line);
    // parse the tab-delimited input we got from wcmapper.php
    list($word, $count) = explode("\t", $line);
    // convert count (currently a string) to int
    $count = intval($count);
    // sum counts (initialize on first sight to avoid an undefined-index notice)
    if ($count > 0) {
        if (!isset($word2count[$word])) {
            $word2count[$word] = 0;
        }
        $word2count[$word] += $count;
    }
}
ksort($word2count); // sort the words alphabetically
// write the results to STDOUT (standard output)
foreach ($word2count as $word => $count) {
    echo "$word\t$count\n";
}
?>
This code is for a word-count streaming job that uses PHP on the CommonCrawl dataset.
As written, it reads the entire input. That is not what I need: I want to read only the first 100 lines and write them to a text file. I am a beginner with Hadoop, CommonCrawl, and PHP. How can I do this?
Any help is appreciated.
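One possible approach, as a minimal sketch: limit the read loop to 100 lines and write each line to a file instead of counting words. The file name `first100.txt` and the helper name `copyFirstLines` are placeholders I made up for illustration, not anything from the original job.

```php
#!/usr/bin/php
<?php
// Copy at most $max lines from one stream to another.
// Returns the number of lines actually copied.
function copyFirstLines($in, $out, $max)
{
    $copied = 0;
    while ($copied < $max && ($line = fgets($in)) !== false) {
        fwrite($out, $line);
        $copied++;
    }
    return $copied;
}

// Read the first 100 lines from STDIN and write them to a text file.
$out = fopen('first100.txt', 'w');
copyFirstLines(STDIN, $out, 100);
fclose($out);
?>
```

Note that writing a local file like this only makes sense when you run the script standalone (e.g. `cat input.txt | ./first100.php`). Inside an actual Hadoop streaming job, a mapper is expected to `echo` its output to STDOUT so the framework collects it into the job's `-output` directory; files written on a task node are not gathered for you.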