This article discusses how to use Alink's Kafka connector components (Kafka011SourceStreamOp and Kafka011SinkStreamOp) to read data from and write data to Kafka. If you need a local Kafka instance to experiment with, you can refer to another article of mine, which describes in detail how to set up Kafka and create a topic.
First, let us demonstrate how to write streaming data to Kafka.
Suppose a Kafka data source is already available (for example, a local Kafka broker on port 9092) and it already contains a topic named iris. The Kafka sink component Kafka011SinkStreamOp can then be configured as follows:
Kafka011SinkStreamOp sink = new Kafka011SinkStreamOp()
    .setBootstrapServers("localhost:9092")
    .setDataFormat("json")
    .setTopic("iris");
Note: the data written to Kafka can only be strings, so we must specify how each record is converted to a string; here we use the JSON format.
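As a side note, the dataFormat parameter also appears to accept a plain CSV serialization. The snippet below is only a hedged sketch of that alternative (the "csv" value is my reading of the parameter's options, not something covered in this article); the rest of this article sticks with JSON.

Kafka011SinkStreamOp csvSink = new Kafka011SinkStreamOp()
    .setBootstrapServers("localhost:9092")
    .setDataFormat("csv")   // assumed alternative: serialize each record as a comma-separated string
    .setTopic("iris");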
We also need a way to obtain streaming data. The simplest option is the CsvSourceStreamOp component, which reads the CSV data (https://alink-release.oss-cn-beijing.aliyuncs.com/data-files/iris.csv) as a stream. We then link it to the Kafka sink component and run the streaming job. The complete code is as follows:
private static void writeKafka() throws Exception {
    String URL = "https://alink-release.oss-cn-beijing.aliyuncs.com/data-files/iris.csv";
    String SCHEMA_STR = "sepal_length double, sepal_width double, petal_length double, petal_width double, category string";

    // Read the iris CSV file as a stream.
    CsvSourceStreamOp data = new CsvSourceStreamOp()
        .setFilePath(URL)
        .setSchemaStr(SCHEMA_STR);

    // Write each record to the "iris" topic as a JSON string.
    Kafka011SinkStreamOp sink = new Kafka011SinkStreamOp()
        .setBootstrapServers("localhost:9092")
        .setDataFormat("json")
        .setTopic("iris");

    data.link(sink);
    StreamOperator.execute();
}
Since the CSV file contains only a finite number of records, the streaming job ends once the last record has been read.
Next, we can read the data with Alink's Kafka011SourceStreamOp component, setting its consumer group ID and using the startup mode that reads from the earliest offset. The code is as follows:
private static void readKafka() throws Exception {
    Kafka011SourceStreamOp source = new Kafka011SourceStreamOp()
        .setBootstrapServers("localhost:9092")
        .setTopic("iris")
        .setStartupMode("EARLIEST")   // consume the topic from the beginning
        .setGroupId("alink_group");

    source.print();
    StreamOperator.execute();
}
The printed output is as follows (most of the rows in the middle are omitted):
message_key|message|topic|topic_partition|partition_offset
-----------|-------|-----|---------------|----------------
null|{"sepal_width":3.4,"petal_width":0.2,"sepal_length":4.8,"category":"Iris-setosa","petal_length":1.6}|iris|0|0
null|{"sepal_width":4.1,"petal_width":0.1,"sepal_length":5.2,"category":"Iris-setosa","petal_length":1.5}|iris|0|1
null|{"sepal_width":2.8,"petal_width":1.5,"sepal_length":6.5,"category":"Iris-versicolor","petal_length":4.6}|iris|0|2
null|{"sepal_width":3.0,"petal_width":1.8,"sepal_length":6.1,"category":"Iris-virginica","petal_length":4.9}|iris|0|3
null|{"sepal_width":2.9,"petal_width":1.8,"sepal_length":7.3,"category":"Iris-virginica","petal_length":6.3}|iris|0|4
......
null|{"sepal_width":2.2,"petal_width":1.0,"sepal_length":6.0,"category":"Iris-versicolor","petal_length":4.0}|iris|0|145
null|{"sepal_width":2.4,"petal_width":1.0,"sepal_length":5.5,"category":"Iris-versicolor","petal_length":3.7}|iris|0|146
null|{"sepal_width":3.1,"petal_width":0.2,"sepal_length":4.6,"category":"Iris-setosa","petal_length":1.5}|iris|0|147
null|{"sepal_width":3.4,"petal_width":0.2,"sepal_length":4.8,"category":"Iris-setosa","petal_length":1.9}|iris|0|148
null|{"sepal_width":2.9,"petal_width":1.4,"sepal_length":6.1,"category":"Iris-versicolor","petal_length":4.7}|iris|0|149
As we can see, every record obtained directly from Kafka is a JSON-formatted string.
Next, we need to extract the fields from these strings. The recommended tool is JsonValueStreamOp: by setting the JsonPath of each value to extract, it pulls out the individual columns. The detailed code is as follows:
Kafka011SourceStreamOp source = new Kafka011SourceStreamOp()
    .setBootstrapServers("localhost:9092")
    .setTopic("iris")
    .setStartupMode("EARLIEST")
    .setGroupId("alink_group");

// Extract each field from the JSON string in the "message" column via JsonPath.
StreamOperator data = source
    .link(
        new JsonValueStreamOp()
            .setSelectedCol("message")
            .setReservedCols(new String[] {})
            .setOutputCols(
                new String[] {"sepal_length", "sepal_width", "petal_length",
                    "petal_width", "category"})
            .setJsonPath(new String[] {"$.sepal_length", "$.sepal_width",
                "$.petal_length", "$.petal_width", "$.category"})
    );

System.out.print(data.getSchema());
data.print();
StreamOperator.execute();
The schema of the resulting data prints as:
root
 |-- sepal_length: STRING
 |-- sepal_width: STRING
 |-- petal_length: STRING
 |-- petal_width: STRING
 |-- category: STRING
As you can see, all columns extracted by JsonValueStreamOp are of type string. The data prints as follows (most of the rows in the middle are omitted):
sepal_length|sepal_width|petal_length|petal_width|category
------------|-----------|------------|-----------|--------
4.8|3.4|1.6|0.2|Iris-setosa
5.2|4.1|1.5|0.1|Iris-setosa
6.5|2.8|4.6|1.5|Iris-versicolor
6.1|3.0|4.9|1.8|Iris-virginica
7.3|2.9|6.3|1.8|Iris-virginica
......
5.2|2.7|3.9|1.4|Iris-versicolor
6.4|2.7|5.3|1.9|Iris-virginica
6.8|3.0|5.5|2.1|Iris-virginica
5.7|2.5|5.0|2.0|Iris-virginica
6.1|2.8|4.0|1.3|Iris-versicolor
At this point we can already get the data; the only problem is the column types, which need to be converted. We can use Flink SQL's CAST function: in the code, we simply call the select method (whose argument is a SQL expression) right after linking JsonValueStreamOp. The code is as follows:
StreamOperator data = source
    .link(
        new JsonValueStreamOp()
            .setSelectedCol("message")
            .setReservedCols(new String[] {})
            .setOutputCols(
                new String[] {"sepal_length", "sepal_width", "petal_length",
                    "petal_width", "category"})
            .setJsonPath(new String[] {"$.sepal_length", "$.sepal_width",
                "$.petal_length", "$.petal_width", "$.category"})
    )
    // Cast the extracted string columns to DOUBLE using Flink SQL's CAST.
    .select("CAST(sepal_length AS DOUBLE) AS sepal_length, "
        + "CAST(sepal_width AS DOUBLE) AS sepal_width, "
        + "CAST(petal_length AS DOUBLE) AS petal_length, "
        + "CAST(petal_width AS DOUBLE) AS petal_width, category"
    );
Running the new code, the schema of the resulting data now prints as:
root
 |-- sepal_length: DOUBLE
 |-- sepal_width: DOUBLE
 |-- petal_length: DOUBLE
 |-- petal_width: DOUBLE
 |-- category: STRING
Each column has been converted to the corresponding type. The data prints as follows (most of the rows in the middle are omitted):
sepal_length|sepal_width|petal_length|petal_width|category
------------|-----------|------------|-----------|--------
4.8000|3.4000|1.6000|0.2000|Iris-setosa
5.2000|4.1000|1.5000|0.1000|Iris-setosa
6.5000|2.8000|4.6000|1.5000|Iris-versicolor
6.1000|3.0000|4.9000|1.8000|Iris-virginica
7.3000|2.9000|6.3000|1.8000|Iris-virginica
......
5.2000|2.7000|3.9000|1.4000|Iris-versicolor
6.4000|2.7000|5.3000|1.9000|Iris-virginica
6.8000|3.0000|5.5000|2.1000|Iris-virginica
5.7000|2.5000|5.0000|2.0000|Iris-virginica
6.1000|2.8000|4.0000|1.3000|Iris-versicolor
As shown above, by combining Alink's components we can read data from and write data to Kafka end to end. From there, the data can be fed into Alink's various algorithm components for further computation.
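To give a concrete idea of what such a follow-up might look like, here is a minimal sketch (not part of the original walkthrough) that trains a KMeans pipeline on the batch CSV source and applies the fitted model to the typed Kafka stream built above. It assumes the variable data is the stream produced by the CAST step, reuses the URL and SCHEMA_STR values from writeKafka, and borrows the VectorAssembler/KMeans settings from Alink's iris quick-start; values such as k=3 and the column names "features" and "cluster_id" are illustrative choices.

// Train on the batch CSV source, then score the typed Kafka stream with the fitted model.
BatchOperator trainData = new CsvSourceBatchOp()
    .setFilePath(URL)
    .setSchemaStr(SCHEMA_STR);

Pipeline pipeline = new Pipeline()
    .add(new VectorAssembler()
        .setSelectedCols(new String[] {"sepal_length", "sepal_width", "petal_length", "petal_width"})
        .setOutputCol("features"))
    .add(new KMeans()
        .setVectorCol("features")
        .setK(3)
        .setPredictionCol("cluster_id"));

// fit() runs on the batch data; transform() applies the resulting PipelineModel to the stream.
pipeline.fit(trainData).transform(data).print();
StreamOperator.execute();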