I am trying to create a SetWritable in Hadoop. This is my implementation. I have only just started with MapReduce and I am not sure how this should be done. I wrote the code below, but it does not work.
Custom Writable (needs to hold a set):
public class TextPair implements Writable {

    private Text first;
    public HashSet<String> valueSet = new HashSet<String>();

    public TextPair() {
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(valueSet.size());
        Iterator<String> it = valueSet.iterator();
        while (it.hasNext()) {
            this.first = new Text(it.next());
            first.write(out);
        }
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        Iterator<String> it = valueSet.iterator();
        while (it.hasNext()) {
            this.first = new Text(it.next());
            first.readFields(in);
        }
    }
}
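My understanding is that readFields should mirror write: first read back the count that write put on the stream, then read that many Text values into an empty set. Below is the kind of symmetric sketch I have in mind (the StringSetWritable name is just mine for illustration), but I am not sure whether this is the right way to do it:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

// Sketch only: a set-backed Writable where readFields mirrors write.
public class StringSetWritable implements Writable {

    private final Set<String> valueSet = new HashSet<String>();

    public Set<String> get() {
        return valueSet;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // Write the element count first so readFields knows how many values follow.
        out.writeInt(valueSet.size());
        for (String s : valueSet) {
            new Text(s).write(out);
        }
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // Clear old contents because Hadoop reuses Writable instances between records.
        valueSet.clear();
        int size = in.readInt();
        for (int i = 0; i < size; i++) {
            Text t = new Text();
            t.readFields(in);
            valueSet.add(t.toString());
        }
    }
}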
Mapper code:
public class TokenizerMapper extends Mapper<Object, Text, Text, TextPair> {

    ArrayList<String> al = new ArrayList<String>();
    TextPair tp = new TextPair();

    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        String[] val = value.toString().substring(2, value.toString().length()).split(" ");
        for (String v : val) {
            tp.valueSet.add(v);
        }
        String[] vals = value.toString().split(" ");
        for (int i = 0; i < vals.length - 1; i++) {
            setKey(vals[0], vals[i + 1]);
            System.out.println(getKey());
            context.write(new Text(getKey()), tp);
        }
    }

    public void setKey(String first, String second) {
        al.clear();
        al.add(first);
        al.add(second);
        java.util.Collections.sort(al);
    }

    public String getKey() {
        String tp = al.get(0) + al.get(1);
        return tp;
    }
}
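For completeness, my driver looks roughly like the sketch below. I am assuming the custom Writable has to be registered with job.setMapOutputValueClass(); the PairDriver class name and the input/output paths are just placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Sketch only: minimal driver that registers TextPair as the map output value class.
public class PairDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "text pair");
        job.setJarByClass(PairDriver.class);
        job.setMapperClass(TokenizerMapper.class);

        // The mapper emits (Text, TextPair), so both classes are declared here.
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(TextPair.class);

        // Map-only job for now; a reducer would have to accept TextPair values.
        job.setNumReduceTasks(0);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(TextPair.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}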
I am basically trying to emit the SetWritable as the value from the Mapper. Please suggest what changes I need to make. Thanks!