I need some help with Flink Streaming. I have produced a simple Hello-World type of program below. It streams Avro messages from RabbitMQ and persists them to HDFS. I hope someone can review the code, and maybe it can help others.

Most examples I've found for Flink streaming send their results to stdout. I actually want to save the data to Hadoop. I read that, in theory, you can stream with Flink to wherever you like. In practice I haven't found any example that saves data to HDFS. But, based on the examples I did find, and some trial and error, I have come up with the code below.

The source here is RabbitMQ. I use a client application to send "MyAvroObjects" to RabbitMQ. MyAvroObject.java - not included - is generated from an Avro IDL... it could be any Avro message.

The code below consumes the RabbitMQ messages and saves them to HDFS as Avro files... well, that's what I'm hoping, anyway.

package com.johanw.flink.stackoverflow;

import java.io.IOException;

import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.mapred.AvroKey;
import org.apache.avro.mapred.AvroOutputFormat;
import org.apache.avro.mapred.AvroWrapper;
import org.apache.avro.mapreduce.AvroJob;
import org.apache.avro.specific.SpecificDatumReader;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.hadoop.mapred.HadoopOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.typeutils.TypeExtractor;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.FileSinkFunctionByMillis;
import org.apache.flink.streaming.connectors.rabbitmq.RMQSource;
import org.apache.flink.streaming.util.serialization.DeserializationSchema;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RMQToHadoop {
    // Static nested class: a non-static inner class would pull the enclosing
    // RMQToHadoop instance (including its execution environment) into serialization.
    public static class MyDeserializationSchema implements DeserializationSchema<MyAvroObject> {
        private static final long serialVersionUID = 1L;

        @Override
        public TypeInformation<MyAvroObject> getProducedType() {
             return TypeExtractor.getForClass(MyAvroObject.class);
        }

        @Override
        public MyAvroObject deserialize(byte[] array) throws IOException {
            // Note: creating a reader per message is simple but not optimal;
            // the reader could be reused across calls.
            SpecificDatumReader<MyAvroObject> reader = new SpecificDatumReader<MyAvroObject>(MyAvroObject.getClassSchema());
            Decoder decoder = DecoderFactory.get().binaryDecoder(array, null);
            return reader.read(null, decoder);
        }

        @Override
        public boolean isEndOfStream(MyAvroObject arg0) {
            return false;
        }
    }

    private String hostName;
    private String queueName;

    public final static String path = "/hdfsroot";

    private static Logger logger = LoggerFactory.getLogger(RMQToHadoop.class);

    public RMQToHadoop(String hostName, String queueName) {
        super();
        this.hostName = hostName;
        this.queueName = queueName;
    }

    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    public void run() {
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
        logger.info("Running " + RMQToHadoop.class.getName());
        DataStream<MyAvroObject> socketStockStream = env.addSource(new RMQSource<>(hostName, queueName, new MyDeserializationSchema()));
        try {
            // One Hadoop Job supplies the configuration that the Avro output format reads.
            Job job = Job.getInstance();
            AvroJob.setInputKeySchema(job, MyAvroObject.getClassSchema());

            JobConf jobConf = new JobConf(job.getConfiguration());
            jobConf.set("avro.output.schema", MyAvroObject.getClassSchema().toString());
            AvroOutputFormat<MyAvroObject> avroOutputFormat = new AvroOutputFormat<MyAvroObject>();
            HadoopOutputFormat<AvroWrapper<MyAvroObject>, NullWritable> hadoopOutputFormat = new HadoopOutputFormat<AvroWrapper<MyAvroObject>, NullWritable>(avroOutputFormat, jobConf);
            // Buffer elements and flush them to the output format every 10 seconds.
            FileSinkFunctionByMillis<Tuple2<AvroWrapper<MyAvroObject>, NullWritable>> fileSinkFunctionByMillis = new FileSinkFunctionByMillis<Tuple2<AvroWrapper<MyAvroObject>, NullWritable>>(hadoopOutputFormat, 10000L);
            org.apache.hadoop.mapred.FileOutputFormat.setOutputPath(jobConf, new Path(path));

            socketStockStream.map(new MapFunction<MyAvroObject, Tuple2<AvroWrapper<MyAvroObject>, NullWritable>>() {
                private static final long serialVersionUID = 1L;
                @Override
                public Tuple2<AvroWrapper<MyAvroObject>, NullWritable> map(MyAvroObject envelope) throws Exception {
                    logger.info("map");
                    AvroKey<MyAvroObject> key = new AvroKey<MyAvroObject>(envelope);
                    return new Tuple2<AvroWrapper<MyAvroObject>, NullWritable>(key, NullWritable.get());
                }
            }).addSink(fileSinkFunctionByMillis);
            try {
                env.execute();
            } catch (Exception e) {
                logger.error("Error while running " + RMQToHadoop.class + ".", e);
            }
        } catch (IOException e) {
            logger.error("Error while running " + RMQToHadoop.class + ".", e);
        }
    }

    public static void main(String[] args) throws IOException {
        RMQToHadoop toHadoop = new RMQToHadoop("localhost", "rabbitTestQueue");
        toHadoop.run();
    }
}

If you prefer a source other than RabbitMQ, using something else works fine too. For example, using a Kafka consumer:

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082;

...

DataStreamSource<MyAvroObject> socketStockStream = env.addSource(new FlinkKafkaConsumer082<MyAvroObject>(topic, new MyDeserializationSchema(), sourceProperties));
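
For reference, here is a minimal sketch of how the sourceProperties for the 0.8 Kafka consumer might be populated. The broker and ZooKeeper addresses, topic name, and group id are placeholders, not values from my setup:

Properties sourceProperties = new Properties();
sourceProperties.setProperty("bootstrap.servers", "localhost:9092");  // placeholder broker address
sourceProperties.setProperty("zookeeper.connect", "localhost:2181");  // the 0.8 consumer tracks offsets via ZooKeeper
sourceProperties.setProperty("group.id", "flink-avro-consumer");      // placeholder consumer group

String topic = "myAvroTopic";  // placeholder topic name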

Questions:

  1. Please review. Is this good practice for saving data to HDFS?

  2. What if the streaming process runs into a problem, say during serialization? It throws an exception and the code just exits. Spark Streaming relies on Yarn to automatically restart the application. Is that also good practice when using Flink?

  3. I'm using FileSinkFunctionByMillis. I would actually have liked to use something like an HdfsSinkFunction, but that doesn't exist. So FileSinkFunctionByMillis was the closest to that which made sense to me. Again, the documentation I found lacks any explanation of what to do, so I'm only guessing.

  4. When I run this locally, I end up with a directory structure like "C:\hdfsroot_temporary\0_temporary\attempt__0000_r_000001_0", which is... bizarre. Any ideas here?

By the way, when you want to save the data to Kafka instead, I was able to do so using...

Properties destProperties = new Properties();
destProperties.setProperty("bootstrap.servers", bootstrapServers);
FlinkKafkaProducer<MyAvroObject> kafkaProducer = new FlinkKafkaProducer<MyAvroObject>("MyKafkaTopic", new MySerializationSchema(), destProperties);
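
Hooking the producer up is then just another sink; a sketch, assuming stream is a DataStream<MyAvroObject> and MySerializationSchema is the Avro counterpart of the deserializer above:

stream.addSink(kafkaProducer);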

Thanks in advance!

1 Answer

I think FileSinkFunctionByMillis can be used, but this means that your streaming program is not fault-tolerant. It means that if your sources or machines or writes fail, then your program will crash without being able to recover.

I suggest you look at using a RollingSink (https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/streaming_guide.html#hadoop-filesystem). This can be used to create Flume-like pipelines to ingest data into HDFS (or other file systems). The rolling sink is a recoverable sink, which means that your program will be fault-tolerant, since the Kafka consumer is also fault-tolerant. You can also specify a custom Writer to write the data in any format you want, for example Avro.
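
A minimal sketch of the wiring, assuming Flink 0.10 with the flink-connector-filesystem dependency on the classpath; the base path, bucket format, and batch size below are placeholders:

import org.apache.flink.streaming.connectors.fs.DateTimeBucketer;
import org.apache.flink.streaming.connectors.fs.RollingSink;
import org.apache.flink.streaming.connectors.fs.StringWriter;

RollingSink<String> sink = new RollingSink<String>("/hdfsroot/rolling");  // placeholder base path
sink.setBucketer(new DateTimeBucketer("yyyy-MM-dd--HHmm"));               // one bucket directory per time window
sink.setWriter(new StringWriter<String>());                               // default writer; implement a custom Writer for Avro
sink.setBatchSize(1024 * 1024 * 64);                                      // roll to a new part file after ~64 MB

stream.addSink(sink);  // here stream is assumed to be a DataStream<String>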

Answered 2016-01-06T09:19:16.050