This article shares an example of how to consume Kafka from Spark Streaming on Spark 2.x. It is quite practical, so it is shared here for reference; let's walk through it together.
Software versions:
Spark: Apache Spark 2.2.0
Kafka: Kafka 0.10.0
The example integrates Spark Streaming with Kafka using the Direct Approach rather than the Receiver-Based approach: there is no long-running receiver, and Spark itself tracks the offsets it has consumed. A follow-up article will analyze the differences between the two integration styles.
1. Preparing the Kafka data:
Command to create the Kafka topic:
/usr/hdp/2.6.3.0-235/kafka/bin/kafka-topics.sh --zookeeper salver158.hadoop.unicom:2181,salver31.hadoop.unicom:2181,salver32.hadoop.unicom:2181 --topic kafkawordcount --replication-factor 2 --partitions 2 --create
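To confirm the topic was created with the expected partitions and replicas, you can describe it (this verification step is an addition, using the same kafka-topics.sh tool):
/usr/hdp/2.6.3.0-235/kafka/bin/kafka-topics.sh --zookeeper salver158.hadoop.unicom:2181,salver31.hadoop.unicom:2181,salver32.hadoop.unicom:2181 --topic kafkawordcount --describe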
Command to send data (note that kafka-console-producer.sh connects to the brokers via --broker-list, not to ZooKeeper; the broker addresses below are the same ones used in the code):
/usr/hdp/2.6.3.0-235/kafka/bin/kafka-console-producer.sh --broker-list 10.124.165.31:6667,10.124.165.32:6667 --topic kafkawordcount
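Once the producer prompt appears, type a few space-separated lines for the word count to process; the lines below are just illustrative input:
hello spark hello kafka
spark streaming reads from kafka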
2. Code example:
package com.unicom.ljs.spark220.study.streaming;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;
import scala.Tuple2;
import java.util.*;
/**
* @author: Created By lujisen
* @company ChinaUnicom Software JiNan
* @date: 2020-01-31 20:30
* @version: v1.0
* @description: com.unicom.ljs.spark220.study.streaming
*/
public class KafkaStreamingWordCount {
    public static void main(String[] args) throws InterruptedException {
        SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("KafkaStreamingWordCount");
        // Micro-batch interval of 5 seconds
        JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, Durations.seconds(5));
        String topic = "kafkawordcount";
        Collection<String> topics = new HashSet<>();
        topics.add(topic);
        // Kafka consumer parameters; see the Kafka documentation for the full list
        String brokerList = "10.124.165.31:6667,10.124.165.32:6667";
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", brokerList);
        props.put("group.id", "groupLjs1");
        props.put("auto.offset.reset", "earliest");
        // A consumer needs deserializers, not serializers
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        /* Starting offset for each partition of the topic; 0L means read the partition from the beginning */
        Map<TopicPartition, Long> offsets = new HashMap<>();
        offsets.put(new TopicPartition(topic, 0), 0L);
        offsets.put(new TopicPartition(topic, 1), 0L);
        // KafkaUtils.createDirectStream defines the Kafka source. It takes three arguments:
        // 1. the streaming context; 2. a location strategy (PreferConsistent distributes
        // partitions evenly across the available executors); 3. a consumer strategy
        // (the topic subscription, the Kafka parameters, and the starting offsets)
        JavaInputDStream<ConsumerRecord<String, String>> lines = KafkaUtils.createDirectStream(
                ssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.<String, String>Subscribe(topics, props, offsets)
        );
        // Split each record's value into words, pair each word with 1, and sum the counts
        JavaPairDStream<String, Integer> counts = lines
                .flatMap(x -> Arrays.asList(x.value().split(" ")).iterator())
                .mapToPair(x -> new Tuple2<>(x, 1))
                .reduceByKey((x, y) -> x + y);
        /* Print the first few results of each batch */
        counts.print();
        /* Start the computation and block until it terminates */
        ssc.start();
        ssc.awaitTermination();
        /* Release resources once the job has stopped */
        ssc.close();
    }
}
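To run the example outside the IDE, package the class into a jar and submit it with spark-submit. A minimal sketch, assuming a hypothetical jar name spark220-study-1.0.jar and pulling the Kafka integration dependency via --packages:
spark-submit --class com.unicom.ljs.spark220.study.streaming.KafkaStreamingWordCount --master local[*] --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.2.0 spark220-study-1.0.jar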
3. Word-count results:
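The original post showed a console screenshot here. For reference, counts.print() emits a block like the following for each 5-second batch (the timestamp and counts below are illustrative, assuming the sample producer input above):
-------------------------------------------
Time: 1580473500000 ms
-------------------------------------------
(hello,2)
(spark,1)
(kafka,1)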
Thanks for reading! That concludes this look at how Spark Streaming consumes Kafka on Spark 2.x. Hopefully the content above is helpful and teaches you something new; if you found the article useful, feel free to share it so more people can see it.