I am working with Kafka Streams and I am facing the following problem.
Details of what I have done so far:
First I created the following topics:
./kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic bptcus
./kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic address-elasticsearch-sink
Then I created a table and a stream on top of the topics above:
CREATE TABLE CUSTOMER_SRC (customerId VARCHAR, name VARCHAR, age VARCHAR, address VARCHAR) WITH (KAFKA_TOPIC='bptcus', VALUE_FORMAT='JSON', KEY='customerId');
CREATE STREAM ADDRESS_SRC (addressId VARCHAR, city VARCHAR, state VARCHAR) WITH (KAFKA_TOPIC='address-elasticsearch-sink', VALUE_FORMAT='JSON');
I can see the data, as shown below:
select * from customer_src;
1528743137610 | Parent-1528743137047 | Ron | 31 | [{"addressId":"1","city":"Fremont","state":"CA"},{"addressId":"2","city":"Dallas","state":"TX"}]
select * from address_src;
1528743413826 | Parent-1528743137047 | 1 | Detroit | MI
Then I created another stream by joining the table and the stream created above:
CREATE STREAM CUST_ADDR_SRC AS SELECT c.name, c.age, c.address, a.rowkey, a.addressId, a.city, a.state FROM ADDRESS_SRC a LEFT JOIN CUSTOMER_SRC c ON c.rowkey = a.rowkey;
I can see the data in the CUST_ADDR_SRC stream, as shown below:
select * from cust_addr_src;
1528743413826 | Parent-1528743137047 | Ron | 31 | [{"addressId":"1","city":"Fremont","state":"CA"},{"addressId":"2","city":"Dallas","state":"TX"}] | Parent-1528743137047 | 1 | Detroit | MI
My questions:
- Now I want to replace addressId 1 (Fremont) with addressId 1 (Detroit). How can I do that? (See the sketch at the end of this post for what I have in mind.)
- As mentioned in the ticket, I also tried printing the input stream to the console.
Here is my code:
// Streams configuration (older Kafka Streams API that still uses KStreamBuilder and a ZooKeeper setting).
Properties config = new Properties();
config.put(StreamsConfig.APPLICATION_ID_CONFIG, "cusadd-application");
config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "10.1.61.125:9092");
config.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "10.1.61.125:2181");
config.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
config.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

// Consume the cust_addr_src topic and print every record that arrives.
KStreamBuilder builder = new KStreamBuilder();
KStream<String, String> source = builder.stream("cust_addr_src");
source.foreach(new ForeachAction<String, String>() {
    public void apply(String key, String value) {
        System.out.println("Stream key values are: " + key + ": " + value);
    }
});
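The topology is then started in the usual way. I left that part out of the snippet above, but it is essentially the standard pattern below (reconstructed here rather than copied verbatim from my project):

// Build and run the topology defined above.
KafkaStreams streams = new KafkaStreams(builder, config);
streams.start();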
I do not see that output.
All I can see is the following output:
12:04:42.145 [StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Resetting offset for partition cust_addr_src-0 to latest offset.
12:04:42.145 [StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 0 at hsharma-mbp15.local:9092.
12:04:42.145 [StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-0.bytes-sent
12:04:42.145 [StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-0.bytes-received
12:04:42.145 [StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-0.latency
12:04:42.145 [StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 0
12:04:42.145 [StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.
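To make the first question more concrete: starting from the source stream in the snippet above, the kind of rewrite I have in mind looks roughly like the sketch below. This is only an illustration of the intent, not working code. It assumes the joined value is plain JSON with column names like C_ADDRESS, A_ADDRESSID, A_CITY and A_STATE (a guess on my part), uses Jackson for the JSON handling, and writes to a made-up output topic.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
import java.io.IOException;

// Sketch only: rewrite the embedded address array so that the element whose
// addressId matches the joined address row picks up the new city and state.
// The field names C_ADDRESS, A_ADDRESSID, A_CITY, A_STATE are my guess at how
// the joined columns appear in the JSON value; adjust to the actual names.
ObjectMapper mapper = new ObjectMapper();
KStream<String, String> updated = source.mapValues(value -> {
    try {
        ObjectNode joined = (ObjectNode) mapper.readTree(value);
        ArrayNode addresses = (ArrayNode) mapper.readTree(joined.get("C_ADDRESS").asText());
        String joinedId = joined.get("A_ADDRESSID").asText();
        for (JsonNode address : addresses) {
            if (address.get("addressId").asText().equals(joinedId)) {
                ((ObjectNode) address).put("city", joined.get("A_CITY").asText());
                ((ObjectNode) address).put("state", joined.get("A_STATE").asText());
            }
        }
        joined.put("C_ADDRESS", mapper.writeValueAsString(addresses));
        return mapper.writeValueAsString(joined);
    } catch (IOException e) {
        return value; // leave the record unchanged if it cannot be parsed
    }
});
updated.to("cust_addr_updated"); // hypothetical output topic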
Thanks in advance.