HBase is an open-source, non-relational, distributed database (NoSQL) modeled after Google's BigTable and implemented in Java. It is part of the Apache Software Foundation's Hadoop project and runs on top of HDFS, providing BigTable-scale capabilities for Hadoop: fault-tolerant storage of massive amounts of sparse data.
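Conceptually, an HBase table is a sparse, multi-dimensional map from (row key, column family:qualifier, timestamp) to value. The toy Python sketch below is purely illustrative (all names are made up, this is not real HBase code) and shows how versioned cells behave:

```python
# Illustrative sketch of HBase's data model: a sparse map keyed by
# (row key, column, timestamp), with multiple versions per cell.
table = {}

def put(row, column, value, ts):
    # a cell keeps every written version under its timestamp
    table.setdefault(row, {}).setdefault(column, {})[ts] = value

def get_latest(row, column):
    # like HBase's default read: return the newest version, or None if absent
    versions = table.get(row, {}).get(column, {})
    return versions[max(versions)] if versions else None

put('1001', 'mate_data:name', 'zhangsan', ts=1)
put('1001', 'mate_data:name', 'zhangsan-updated', ts=2)
print(get_latest('1001', 'mate_data:name'))  # zhangsan-updated
```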
HBase Installation
Installation environment
Three virtual machines: master, slave1, and slave2,
with Hadoop and ZooKeeper already installed.
Download the HBase package; pick the release that matches your needs:
wget http://archive.apache.org/dist/hbase/0.98.24/hbase-0.98.24-hadoop2-bin.tar.gz
You can also download it directly from the mirror site: http://archive.apache.org/dist/
After downloading, extract the archive:
tar -zxvf hbase-0.98.24-hadoop2-bin.tar.gz
Add the HBase environment variables
// open the ~/.bashrc file
vim ~/.bashrc
// then append these two lines
export HBASE_HOME=/usr/local/src/hbase-0.98.24-hadoop2
export PATH=$PATH:$HBASE_HOME/bin
// save and exit, then source the file
source ~/.bashrc
Configure HBase
Open conf/hbase-env.sh under the HBase directory (create it if it does not exist):
vim conf/hbase-env.sh
// add the following two settings
export JAVA_HOME=/usr/local/src/jdk1.8.0_171  # Java home
export HBASE_MANAGES_ZK=false  # whether HBase manages its bundled ZooKeeper; false means use your own installation
Configure the hbase-site.xml file:
vim conf/hbase-site.xml
// add the following configuration
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
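One consistency check worth making (this concerns your Hadoop setup, which this guide assumes is already in place): the host and port in hbase.rootdir must match fs.defaultFS in Hadoop's core-site.xml, e.g.:

```xml
<!-- core-site.xml: must agree with hbase.rootdir's hdfs://master:9000 -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>
```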
Modify the regionservers file:
vim conf/regionservers
// add the nodes that should run a RegionServer
slave1
slave2
At this point the basic HBase environment is set up.
Hadoop and ZooKeeper must be started before HBase.
On the master node:
// go to the sbin directory under the Hadoop installation
./start-all.sh
Check whether Hadoop started successfully:
on the master node, run jps; you should see the Hadoop master processes (NameNode, SecondaryNameNode, ResourceManager).
On the slave nodes, run jps as well; DataNode and NodeManager should be present.
On both master and slave nodes, go to the bin directory under the ZooKeeper installation and run
zkServer.sh start
Then check the processes with jps; seeing QuorumPeerMain means ZooKeeper started successfully.
#### Starting HBase
Once Hadoop and ZooKeeper are both running, HBase can be started. Go to the bin directory under the HBase installation:
./start-hbase.sh
Check with jps: seeing the HMaster process on master and the HRegionServer process on the slave nodes means HBase started successfully.
You can also check through the web UI at http://master:60010/master-status
To enter the shell, run the following in the bin directory:
./hbase shell
hbase(main):001:0>
hbase(main):003:0> list
TABLE
0 row(s) in 0.1510 seconds
=> []
hbase(main):006:0> create 'test_table', 'mate_data', 'action'
0 row(s) in 2.4390 seconds
=> Hbase::Table - test_table
hbase(main):009:0> desc 'test_table'
Table test_table is ENABLED
test_table
COLUMN FAMILIES DESCRIPTION
{NAME => 'action', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_EN
CODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536',
REPLICATION_SCOPE => '0'}
{NAME => 'mate_data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK
_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536
', REPLICATION_SCOPE => '0'}
2 row(s) in 0.0520 seconds
hbase(main):010:0> alter 'test_table', {NAME => 'new', VERSIONS => '2', IN_MEMORY => 'true'}
Updating all regions with the new schema...
0/1 regions updated.
1/1 regions updated.
Done.
0 row(s) in 2.2790 seconds
hbase(main):011:0> desc 'test_table'
Table test_table is ENABLED
test_table
COLUMN FAMILIES DESCRIPTION
{NAME => 'action', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_EN
CODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536',
REPLICATION_SCOPE => '0'}
{NAME => 'mate_data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK
_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536
', REPLICATION_SCOPE => '0'}
{NAME => 'new', BLOOMFILTER => 'ROW', VERSIONS => '2', IN_MEMORY => 'true', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODI
NG => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPL
ICATION_SCOPE => '0'}
3 row(s) in 0.0570 seconds
hbase(main):013:0> alter 'test_table', {NAME => 'new', METHOD => 'delete'}
Updating all regions with the new schema...
0/1 regions updated.
1/1 regions updated.
Done.
0 row(s) in 2.2390 seconds
hbase(main):014:0> desc 'test_table'
Table test_table is ENABLED
test_table
COLUMN FAMILIES DESCRIPTION
{NAME => 'action', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_EN
CODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536',
REPLICATION_SCOPE => '0'}
{NAME => 'mate_data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK
_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536
', REPLICATION_SCOPE => '0'}
2 row(s) in 0.0430 seconds
// first disable the table
hbase(main):016:0> disable 'test_table'
0 row(s) in 1.2980 seconds
// then drop it
hbase(main):017:0> drop 'test_table'
0 row(s) in 0.2020 seconds
// verify it was deleted
hbase(main):018:0> list
TABLE
0 row(s) in 0.0070 seconds
=> []
// recreate the table first (create 'test_table', 'mate_data', 'action') before inserting data
hbase(main):021:0> put 'test_table', '1001', 'mate_data:name', 'zhangsan'
0 row(s) in 0.1400 seconds
hbase(main):022:0> put 'test_table', '1002', 'mate_data:name', 'lisi'
0 row(s) in 0.0110 seconds
hbase(main):023:0> put 'test_table', '1001', 'mate_data:gender', 'woman'
0 row(s) in 0.0170 seconds
hbase(main):024:0> put 'test_table', '1002', 'mate_data:age', '25'
0 row(s) in 0.0140 seconds
hbase(main):025:0> scan 'test_table'
ROW COLUMN+CELL
1001 column=mate_data:gender, timestamp=1540034584363, value=woman
1001 column=mate_data:name, timestamp=1540034497293, value=zhangsan
1002 column=mate_data:age, timestamp=1540034603800, value=25
1002 column=mate_data:name, timestamp=1540034519659, value=lisi
2 row(s) in 0.0410 seconds
hbase(main):026:0> get 'test_table', '1001'
COLUMN CELL
mate_data:gender timestamp=1540034584363, value=woman
mate_data:name timestamp=1540034497293, value=zhangsan
2 row(s) in 0.0340 seconds
hbase(main):027:0> get 'test_table', '1001', 'mate_data:name'
COLUMN CELL
mate_data:name timestamp=1540034497293, value=zhangsan
1 row(s) in 0.0320 seconds
hbase(main):028:0> count 'test_table'
2 row(s) in 0.0390 seconds
=> 2
hbase(main):029:0> truncate 'test_table'
Truncating 'test_table' table (it may take a while):
- Disabling table...
- Truncating table...
0 row(s) in 1.5220 seconds
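Besides truncate, the shell can remove individual cells or whole rows; hypothetical commands following the session above (output omitted since it depends on your data):

```
delete 'test_table', '1001', 'mate_data:gender'
deleteall 'test_table', '1002'
```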
Python scripts cannot operate HBase directly; the Thrift service must sit in between as a middle layer. This requires two Python modules, thrift and hbase, plus a Thrift installation to generate them.
#### Installing Thrift and obtaining the thrift module
wget http://archive.apache.org/dist/thrift/0.11.0/thrift-0.11.0.tar.gz
tar -zxvf thrift-0.11.0.tar.gz
cd thrift-0.11.0/
./configure
make
make install
cd lib/py/build/lib.linux-x86_64-2.7
The thrift module can then be found in this directory. Next, download the HBase source package to generate the hbase module:
wget http://archive.apache.org/dist/hbase/0.98.24/hbase-0.98.24-src.tar.gz
tar -zxvf hbase-0.98.24-src.tar.gz
// enter this directory
cd /usr/local/src/hbase-0.98.24/hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift
// run the following command to produce the gen-py directory
thrift --gen py Hbase.thrift
// the generated hbase module is in this directory
cd gen-py
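One more prerequisite before the Python examples below can connect on port 9090: the HBase Thrift server must be running on master. It can be started from the HBase bin directory (9090 is the default port):

```
./hbase-daemon.sh start thrift
```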
from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol
from hbase import Hbase
from hbase.ttypes import *

transport = TSocket.TSocket('master', 9090)
transport = TTransport.TBufferedTransport(transport)
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = Hbase.Client(protocol)
transport.open()

# create a table with two column families
base_info_contents = ColumnDescriptor(name='columnName1', maxVersions=1)
other_info_contents = ColumnDescriptor(name='columnName2', maxVersions=1)
client.createTable('tableName', [base_info_contents, other_info_contents])
from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol
from hbase import Hbase
from hbase.ttypes import *

transport = TSocket.TSocket('master', 9090)
transport = TTransport.TBufferedTransport(transport)
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = Hbase.Client(protocol)
transport.open()

# insert or update a row; each Mutation targets one column
table_name = 'tableName'
rowKey = 'rowKeyName'
mutations = [Mutation(column="columnName:columnPro", value="valueName")]
client.mutateRow(table_name, rowKey, mutations, None)
from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol
from hbase import Hbase
from hbase.ttypes import *

transport = TSocket.TSocket('master', 9090)
transport = TTransport.TBufferedTransport(transport)
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = Hbase.Client(protocol)
transport.open()

# read a single row
table_name = 'tableName'
rowKey = 'rowKeyName'
result = client.getRow(table_name, rowKey, None)
for r in result:
    print "the row is " + r.row
    for k, v in r.columns.items():
        print '\t'.join([k, v.value])
from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol
from hbase import Hbase
from hbase.ttypes import *

transport = TSocket.TSocket('master', 9090)
transport = TTransport.TBufferedTransport(transport)
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = Hbase.Client(protocol)
transport.open()

# scan the table, fetching up to 10 rows per batch
table_name = 'tableName'
scan = TScan()
scanner_id = client.scannerOpenWithScan(table_name, scan, None)
result = client.scannerGetList(scanner_id, 10)
for r in result:
    print "========="
    print "the row is " + r.row
    for k, v in r.columns.items():
        print '\t'.join([k, v.value])
# release the server-side scanner when finished
client.scannerClose(scanner_id)