
I am using Flink to enrich an input stream

case class Input( key: String, message: String )

with precomputed scores

case class Score( key: String, score: Int )

to produce an output

case class Output( key: String, message: String, score: Int )

Both the input stream and the score stream are read from Kafka topics, and the resulting output stream is published back to Kafka:

val processed = inputStream.keyBy( _.key )
                           .connect( scoreStream.keyBy( _.key ) )
                           .flatMap( new ScoreEnrichmentFunction )
                           .addSink( producer )
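
For completeness, the Kafka sources and sink are wired up roughly as follows (a simplified sketch only: the topic names, broker address, group id and CSV (de)serialization shown here are placeholders rather than the real configuration):

import java.util.Properties

import org.apache.flink.api.common.serialization.{ SerializationSchema, SimpleStringSchema }
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.{ FlinkKafkaConsumer011, FlinkKafkaProducer011 }

val kafkaProps = new Properties()
kafkaProps.setProperty( "bootstrap.servers", "localhost:9092" )
kafkaProps.setProperty( "group.id", "score-enrichment" )

// read both topics as plain strings and parse them into the case classes,
// assuming a simple "key,value" CSV encoding
val inputStream = env
    .addSource( new FlinkKafkaConsumer011[String]( "input", new SimpleStringSchema(), kafkaProps ) )
    .map { s => val Array( k, m ) = s.split( ",", 2 ); Input( k, m ) }

val scoreStream = env
    .addSource( new FlinkKafkaConsumer011[String]( "scores", new SimpleStringSchema(), kafkaProps ) )
    .map { s => val Array( k, v ) = s.split( ",", 2 ); Score( k, v.trim.toInt ) }

// write the enriched records back to Kafka as CSV strings
val producer = new FlinkKafkaProducer011[Output](
    "output",
    new SerializationSchema[Output] {
        override def serialize( o: Output ): Array[Byte] =
            s"${o.key},${o.message},${o.score}".getBytes( "UTF-8" )
    },
    kafkaProps )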

The ScoreEnrichmentFunction is defined as follows:

import org.apache.flink.api.common.state.{ ValueState, ValueStateDescriptor }
import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction
import org.apache.flink.util.Collector

class ScoreEnrichmentFunction extends RichCoFlatMapFunction[Input, Score, Output]
{
    val scoreStateDescriptor = new ValueStateDescriptor[Score]( "saved scores", classOf[Score] )
    lazy val scoreState: ValueState[Score] = getRuntimeContext.getState( scoreStateDescriptor )

    // enrich each input with the latest score seen for its key, or -1 if none yet
    override def flatMap1( input: Input, out: Collector[Output] ): Unit =
    {
        Option( scoreState.value ) match {
            case None => out.collect( Output( input.key, input.message, -1 ) )
            case Some( score ) => out.collect( Output( input.key, input.message, score.score ) )
        }
    }

    // remember the most recent score per key
    override def flatMap2( score: Score, out: Collector[Output] ): Unit =
    {
        scoreState.update( score )
    }
}

This works fine. However, when I take a savepoint and cancel the Flink job, the scores stored in the ValueState are lost once I resume the job from that savepoint.

As far as I understand, it seems that ScoreEnrichmentFunction needs to be extended with CheckpointedFunction:

class ScoreEnrichmentFunction extends RichCoFlatMapFunction[Input, Score, Output] with CheckpointedFunction

but I am having a hard time understanding how to implement the snapshotState and initializeState methods so that they work with keyed state:

override def snapshotState( context: FunctionSnapshotContext ): Unit = ???


override def initializeState( context: FunctionInitializationContext ): Unit = ???
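
The best I have been able to piece together is something like the sketch below, and I am not sure it is correct: it fetches the keyed ValueState from the keyed state store of the FunctionInitializationContext and leaves snapshotState empty, on the assumption that keyed state registered this way is snapshotted automatically.

import org.apache.flink.api.common.state.{ ValueState, ValueStateDescriptor }
import org.apache.flink.runtime.state.{ FunctionInitializationContext, FunctionSnapshotContext }
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction
import org.apache.flink.util.Collector

class ScoreEnrichmentFunction extends RichCoFlatMapFunction[Input, Score, Output] with CheckpointedFunction
{
    val scoreStateDescriptor = new ValueStateDescriptor[Score]( "saved scores", classOf[Score] )
    var scoreState: ValueState[Score] = _

    override def initializeState( context: FunctionInitializationContext ): Unit =
    {
        // the keyed state store hands out the same per-key ValueState
        // as getRuntimeContext.getState does in the rich function
        scoreState = context.getKeyedStateStore.getState( scoreStateDescriptor )
    }

    override def snapshotState( context: FunctionSnapshotContext ): Unit =
    {
        // keyed state registered through the state store is included in
        // checkpoints and savepoints automatically, so nothing to copy here
    }

    override def flatMap1( input: Input, out: Collector[Output] ): Unit =
        Option( scoreState.value ) match {
            case None => out.collect( Output( input.key, input.message, -1 ) )
            case Some( score ) => out.collect( Output( input.key, input.message, score.score ) )
        }

    override def flatMap2( score: Score, out: Collector[Output] ): Unit =
        scoreState.update( score )
}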

Note that I am using the following environment:

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism( 2 )
env.setBufferTimeout( 1 )
env.enableCheckpointing( 1000 )
env.getCheckpointConfig.enableExternalizedCheckpoints( ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION )
env.getCheckpointConfig.setCheckpointingMode( CheckpointingMode.EXACTLY_ONCE )
env.getCheckpointConfig.setMinPauseBetweenCheckpoints( 500 )
env.getCheckpointConfig.setCheckpointTimeout( 60000 )
env.getCheckpointConfig.setFailOnCheckpointingErrors( false )
env.getCheckpointConfig.setMaxConcurrentCheckpoints( 1 )

1 Answer


I think I found the problem. I was trying to use separate directories for checkpoints and savepoints, which meant that the savepoint directory and the FsStateBackend directory were different.

Using the same directory for both,

val backend = new FsStateBackend( "file:/data", true )
env.setStateBackend( backend )

and passing that same directory when taking the savepoint,

bin/flink cancel d75f4712346cadb4df90ec06ef257636 -s file:/data

solves the problem.
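
The job can then be resumed from that savepoint with the usual -s option of flink run; for example (the savepoint directory name and the job jar below are placeholders, not the actual paths):

bin/flink run -s file:/data/savepoint-d75f47-xxxxxxxxxxxx my-enrichment-job.jar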

answered 2018-09-29 06:39