In this article I'd like to share what I know about connecting to Hadoop from Java. The walkthrough is detailed and the logic is straightforward; since many people are not yet familiar with this area, I'm sharing it as a reference, and I hope you get something out of it. Let's get started.
Hadoop version: 3.3.2
JDK version: 1.8
Hadoop host OS: Ubuntu 18.04
Programming environment: IDEA
Programming host: Windows
Create a Maven project and add the following dependencies:
<dependency>
    <groupId>org.testng</groupId>
    <artifactId>testng</artifactId>
    <version>RELEASE</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>3.3.2</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>3.3.2</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.2.1</version>
</dependency>
Configure /etc/hosts on the virtual machine.
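The key entry maps the VM's IP address to the hostname used throughout the configs below. A sketch, assuming a private IP of 10.0.12.11 (hypothetical; substitute your VM's actual address):

10.0.12.11    VM-12-11-ubuntu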
hdfs-site.xml configuration:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/root/rDesk/hadoop-3.3.2/tmp/dfs/name</value>
    </property>
    <property>
        <!-- expose the datanode on the hostname so an external client can reach it -->
        <name>dfs.datanode.http.address</name>
        <value>VM-12-11-ubuntu:50010</value>
    </property>
    <property>
        <!-- hand datanode hostnames (not internal IPs) back to clients -->
        <name>dfs.client.use.datanode.hostname</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/root/rDesk/hadoop-3.3.2/tmp/dfs/data</value>
    </property>
</configuration>
core-site.xml configuration (the fs.defaultFS value here is the URI the Java client will connect to):
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/root/rDesk/hadoop-3.3.2/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://VM-12-11-ubuntu:9000</value>
    </property>
</configuration>
Start Hadoop:
sbin/start-dfs.sh
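If the daemons come up cleanly, jps on the VM should list at least NameNode, DataNode, and SecondaryNameNode, and hdfs dfsadmin -report should show one live datanode (these are standard Hadoop commands; the expected process list assumes the single-node setup above):

jps
bin/hdfs dfsadmin -report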
Configure the hosts file on the host machine (C:\Windows\System32\drivers\etc), adding the same IP-to-hostname mapping as on the VM.
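With both hosts files in place, a quick sanity check from the Windows side confirms the hostname resolves and the NameNode port is reachable (telnet is an optional Windows feature and may need to be enabled first):

ping VM-12-11-ubuntu
telnet VM-12-11-ubuntu 9000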
Now try connecting to the VM's Hadoop and reading a file's contents; here I read the HDFS file /root/iinput.
Java code:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class TestConnectHadoop {
    public static void main(String[] args) throws Exception {
        String hostname = "VM-12-11-ubuntu";
        String HDFS_PATH = "hdfs://" + hostname + ":9000";
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", HDFS_PATH);
        conf.set("fs.hdfs.impl", DistributedFileSystem.class.getName());
        // Ask for datanode hostnames instead of internal IPs, so the
        // Windows host can resolve them through its hosts file
        conf.set("dfs.client.use.datanode.hostname", "true");
        FileSystem fs = FileSystem.get(conf);

        // List everything under the HDFS root directory
        FileStatus[] fileStatuses = fs.listStatus(new Path("/"));
        for (FileStatus fileStatus : fileStatuses) {
            System.out.println(fileStatus.toString());
        }

        FileStatus fileStatus = fs.getFileStatus(new Path("/root/iinput"));
        System.out.println(fileStatus.getOwner());
        System.out.println(fileStatus.getGroup());
        System.out.println(fileStatus.getPath());

        // Read the whole file into a string, 1 KB at a time
        FSDataInputStream open = fs.open(fileStatus.getPath());
        byte[] buf = new byte[1024];
        int n = -1;
        StringBuilder sb = new StringBuilder();
        while ((n = open.read(buf)) > 0) {
            sb.append(new String(buf, 0, n));
        }
        System.out.println(sb);
    }
}
Run result: the program lists the entries under the HDFS root, then prints the owner, group, path, and contents of /root/iinput.
Next, write a class "MyFSDataInputStream" that extends "org.apache.hadoop.fs.FSDataInputStream", with the following requirements: ① implement a method "readLine()" that reads the specified HDFS file line by line, returning null when the end of the file is reached and otherwise returning one line of text.
Approach: mine is simple and only covers this specific requirement, so treat it as a reference. Read all of the data up front, split it on newline characters, and store the resulting array of lines for read_line() to hand out one at a time. (java.io.DataInputStream declares readLine() as final, so it cannot be overridden; the method below is therefore named read_line(). Reading everything up front also means this only suits files that fit in memory.)
Java code:
import org.apache.hadoop.fs.FSDataInputStream;

import java.io.IOException;
import java.io.InputStream;

public class MyFSDataInputStream extends FSDataInputStream {
    private String data = null;
    private String[] lines = null;
    private int count = 0;
    private FSDataInputStream in;

    public MyFSDataInputStream(InputStream in) throws IOException {
        super(in);
        this.in = (FSDataInputStream) in;
        init();
    }

    // Read the whole file once and split it into lines
    private void init() throws IOException {
        byte[] buf = new byte[1024];
        int n = -1;
        StringBuilder sb = new StringBuilder();
        while ((n = this.in.read(buf)) > 0) {
            sb.append(new String(buf, 0, n));
        }
        data = sb.toString();
        lines = data.split("\n");
    }

    /**
     * Reads the specified HDFS file line by line: returns null at the
     * end of the file, otherwise one line of text.
     */
    public String read_line() {
        return count < lines.length ? lines[count++] : null;
    }
}
Test class:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class TestConnectHadoop {
    public static void main(String[] args) throws Exception {
        String hostname = "VM-12-11-ubuntu";
        String HDFS_PATH = "hdfs://" + hostname + ":9000";
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", HDFS_PATH);
        conf.set("fs.hdfs.impl", DistributedFileSystem.class.getName());
        conf.set("dfs.client.use.datanode.hostname", "true");
        FileSystem fs = FileSystem.get(conf);
        FileStatus fileStatus = fs.getFileStatus(new Path("/root/iinput"));
        System.out.println(fileStatus.getOwner());
        System.out.println(fileStatus.getGroup());
        System.out.println(fileStatus.getPath());
        FSDataInputStream open = fs.open(fileStatus.getPath());
        MyFSDataInputStream myFSDataInputStream = new MyFSDataInputStream(open);
        String line = null;
        int count = 0;
        while ((line = myFSDataInputStream.read_line()) != null) {
            System.out.printf("line %d is: %s\n", count++, line);
        }
        System.out.println("end");
    }
}
Run result: each line of the file is printed with its index, followed by "end".
② Implement caching: when "MyFSDataInputStream" is used to read some bytes, first check the cache; if the required data is in the cache, serve it directly from there, otherwise fetch it from HDFS. The version below does this by wrapping the underlying stream in a java.io.BufferedInputStream, whose internal byte buffer acts as the cache and is refilled from HDFS only when it runs dry.
import org.apache.hadoop.fs.FSDataInputStream;

import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;

public class MyFSDataInputStream extends FSDataInputStream {
    private BufferedInputStream buffer;
    private String[] lines = null;
    private int count = 0;
    private FSDataInputStream in;

    public MyFSDataInputStream(InputStream in) throws IOException {
        super(in);
        this.in = (FSDataInputStream) in;
        init();
    }

    private void init() throws IOException {
        byte[] buf = new byte[1024];
        int n = -1;
        StringBuilder sb = new StringBuilder();
        while ((n = this.in.read(buf)) > 0) {
            sb.append(new String(buf, 0, n));
        }
        // The loop above has consumed the stream, so seek back to the start
        // before wrapping it; otherwise buffered reads would hit EOF immediately
        this.in.seek(0);
        // Cached reads: BufferedInputStream fills an internal buffer from HDFS
        // and serves subsequent reads from that buffer while it lasts
        buffer = new BufferedInputStream(this.in);
        lines = sb.toString().split("\n");
    }

    /**
     * Reads the specified HDFS file line by line: returns null at the
     * end of the file, otherwise one line of text.
     */
    public String read_line() {
        return count < lines.length ? lines[count++] : null;
    }

    @Override
    public int read() throws IOException {
        return this.buffer.read();
    }

    public int readWithBuf(byte[] buf, int offset, int len) throws IOException {
        return this.buffer.read(buf, offset, len);
    }

    public int readWithBuf(byte[] buf) throws IOException {
        return this.buffer.read(buf);
    }
}
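A minimal way to exercise the cached read path, reusing the connection setup from the earlier test class (a sketch; the 16-byte read size and the TestBufferedRead class name are arbitrary choices of mine, and the file is assumed to be non-empty):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class TestBufferedRead {
    public static void main(String[] args) throws Exception {
        String hostname = "VM-12-11-ubuntu";
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://" + hostname + ":9000");
        conf.set("fs.hdfs.impl", DistributedFileSystem.class.getName());
        conf.set("dfs.client.use.datanode.hostname", "true");
        FileSystem fs = FileSystem.get(conf);

        FSDataInputStream open = fs.open(new Path("/root/iinput"));
        MyFSDataInputStream in = new MyFSDataInputStream(open);

        // The first read fills the BufferedInputStream's internal cache;
        // the following single-byte read is then served from that cache
        byte[] buf = new byte[16];
        int n = in.readWithBuf(buf);
        System.out.println(n + " bytes: " + new String(buf, 0, n));
        System.out.println("next byte: " + in.read());
        in.close();
    }
}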
That is everything in "How to use Java to connect to Hadoop for programming"; thanks for reading, and I hope you found it rewarding.