This article walks through how to set up a ClickHouse cluster. Many people are not very familiar with the topic, so the following notes are meant to make it clearer; hopefully you can take something away from them.
We have already installed the ClickHouse distributed cluster and tested it by creating local and distributed tables. But what happens if one node is stopped?
Kill the `clickhouse-server` process on `node03`:

```shell
[root@node03 ~]# ps -ef | grep clickhouse
clickho+ 2233 1 73 13:07 ? 00:00:02 clickhouse-server --daemon --pid-file=/var/run/clickhouse-server/clickhouse-server.pid --config-file=/etc/clickhouse-server/config.xml
root 2306 1751 0 13:07 pts/0 00:00:00 grep --color=auto clickhouse
[root@node03 ~]# service clickhouse-server stop
Stop clickhouse-server service: DONE
[root@node03 ~]# ps -ef | grep clickhouse
root 2337 1751 0 13:07 pts/0 00:00:00 grep --color=auto clickhouse
```
Query the distributed table on `node01`:

```shell
node01 :) select * from cluster3s1r_all;   # before node03 is killed

SELECT * FROM cluster3s1r_all

┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│ 2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘
┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│ 1 │ https://niocoder.com/ │ java干貨 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘
┌─id─┬─website──────────────┬─wechat─┬─FlightDate─┬─Year─┐
│ 3 │ http://www.xxxxx.cn/ │ xxxxx │ 2020-11-28 │ 2020 │
└────┴──────────────────────┴────────┴────────────┴──────┘

3 rows in set. Elapsed: 0.037 sec.

node01 :) select * from cluster3s1r_all;   # after node03 has been killed

SELECT * FROM cluster3s1r_all

┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│ 2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘
↘ Progress: 1.00 rows, 59.00 B (8.87 rows/s., 523.62 B/s.)  0%
Received exception from server (version 20.8.3):
Code: 279. DB::Exception: Received from localhost:9000. DB::Exception: All connection tries failed. Log:
Code: 32, e.displayText() = DB::Exception: Attempt to read after eof (version 20.8.3.18)
Code: 210, e.displayText() = DB::NetException: Connection refused (node03:9000) (version 20.8.3.18)
Code: 210, e.displayText() = DB::NetException: Connection refused (node03:9000) (version 20.8.3.18)
: While executing Remote.

1 rows in set. Elapsed: 0.114 sec.
```
Only the row stored on `node01` is returned; the two rows on `node03` are missing.
In ClickHouse, however, a replica is attached to a shard, so to use multiple replicas you must first define a shard.
The simplest case: one shard with multiple replicas.
On `node01`, edit the cluster configuration file `/etc/clickhouse-server/metrika.xml`:
```xml
<yandex>
    <!-- Cluster configuration -->
    <clickhouse_remote_servers>
        <!-- 1 shard, 2 replicas -->
        <cluster_1shards_2replicas>
            <!-- Data shard 1 -->
            <shard>
                <!-- false: write to all replicas in one go; true: write to only one replica and rely on ZooKeeper-based replication -->
                <internal_replication>false</internal_replication>
                <replica>
                    <host>node01</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>node02</host>
                    <port>9000</port>
                </replica>
            </shard>
        </cluster_1shards_2replicas>
    </clickhouse_remote_servers>
</yandex>
```
Distribute the modified configuration to `node02`:
```shell
[root@node01 clickhouse-server]# scp /etc/clickhouse-server/metrika.xml node02:$PWD
metrika.xml                          100%  674   618.9KB/s   00:00
```
If the configuration file is correct there is no need to restart `clickhouse-server`; the configuration is reloaded automatically. Check the cluster information on `node01`:
```shell
[root@node01 clickhouse-server]# clickhouse-client -m
ClickHouse client version 20.8.3.18.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.8.3 revision 54438.

node01 :) select * from system.clusters;

SELECT * FROM system.clusters

┌─cluster───────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─┬─host_address───┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ cluster_1shards_2replicas │ 1 │ 1 │ 1 │ node01 │ 192.168.10.100 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ cluster_1shards_2replicas │ 1 │ 1 │ 2 │ node02 │ 192.168.10.110 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 2 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_shard_localhost │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_shard_localhost_secure │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9440 │ 0 │ default │ │ 0 │ 0 │
│ test_unavailable_shard │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_unavailable_shard │ 2 │ 1 │ 1 │ localhost │ ::1 │ 1 │ 0 │ default │ │ 0 │ 0 │
└───────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────┴────────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘

10 rows in set. Elapsed: 0.018 sec.
```
Create the local table `cluster1s2r_local` on both `node01` and `node02`:
```sql
CREATE TABLE default.cluster1s2r_local
(
    `id` Int32,
    `website` String,
    `wechat` String,
    `FlightDate` Date,
    Year UInt16
)
ENGINE = MergeTree(FlightDate, (Year, FlightDate), 8192);
```
Create the distributed table on `node01`; note the cluster name:
```sql
CREATE TABLE default.cluster1s2r_all AS cluster1s2r_local
ENGINE = Distributed(cluster_1shards_2replicas, default, cluster1s2r_local, rand());
```
Insert data through the distributed table `cluster1s2r_all`; every row written to `cluster1s2r_all` is inserted into `cluster1s2r_local` on both `node01` and `node02`. Insert the data:
```sql
INSERT INTO default.cluster1s2r_all (id,website,wechat,FlightDate,Year) VALUES (1,'https://niocoder.com/','java干貨','2020-11-28',2020);
INSERT INTO default.cluster1s2r_all (id,website,wechat,FlightDate,Year) VALUES (2,'http://www.merryyou.cn/','javaganhuo','2020-11-28',2020);
INSERT INTO default.cluster1s2r_all (id,website,wechat,FlightDate,Year) VALUES (3,'http://www.xxxxx.cn/','xxxxx','2020-11-28',2020);
```
Query the distributed table and the local tables:
```shell
node01 :) select * from cluster1s2r_all;   # query the distributed table

SELECT * FROM cluster1s2r_all

┌─id─┬─website──────────────┬─wechat─┬─FlightDate─┬─Year─┐
│ 3 │ http://www.xxxxx.cn/ │ xxxxx │ 2020-11-28 │ 2020 │
└────┴──────────────────────┴────────┴────────────┴──────┘
┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│ 2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘
┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│ 1 │ https://niocoder.com/ │ java干貨 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘

3 rows in set. Elapsed: 0.018 sec.

node01 :) select * from cluster1s2r_local;   # query the local table on node01

SELECT * FROM cluster1s2r_local

┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│ 2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘
┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│ 1 │ https://niocoder.com/ │ java干貨 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘
┌─id─┬─website──────────────┬─wechat─┬─FlightDate─┬─Year─┐
│ 3 │ http://www.xxxxx.cn/ │ xxxxx │ 2020-11-28 │ 2020 │
└────┴──────────────────────┴────────┴────────────┴──────┘

3 rows in set. Elapsed: 0.015 sec.

node02 :) select * from cluster1s2r_local;   # query the local table on node02

SELECT * FROM cluster1s2r_local

┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│ 2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘
┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│ 1 │ https://niocoder.com/ │ java干貨 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘
┌─id─┬─website──────────────┬─wechat─┬─FlightDate─┬─Year─┐
│ 3 │ http://www.xxxxx.cn/ │ xxxxx │ 2020-11-28 │ 2020 │
└────┴──────────────────────┴────────┴────────────┴──────┘

3 rows in set. Elapsed: 0.007 sec.
```
Querying the local table `cluster1s2r_local` on `node01` and `node02` shows the full data set on each node; even if one of the nodes is stopped, no data is lost. The data replica is in effect.
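As a quick sanity check, here is a minimal sketch using the hosts and table above (this exact step is not shown in the original):

```shell
# On node01: stop the server
service clickhouse-server stop

# On node02: the local replica still holds all three rows
clickhouse-client --query "SELECT count() FROM default.cluster1s2r_local"

# Start node01 again before continuing:
# service clickhouse-server start
```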
With multiple replicas comes a consistency question: what happens if one machine goes down while data is being written? Let's simulate a node being down while writing to the distributed table.
Stop the service on `node02`:

```shell
service clickhouse-server stop
```
On `node01`, insert data into the distributed table `cluster1s2r_all`:
```sql
INSERT INTO default.cluster1s2r_all (id,website,wechat,FlightDate,Year) VALUES (4,'http://www.yyyyyy.cn/','yyyyy','2020-11-29',2020);
```
Start the service on `node02` again.
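Presumably via the same init script used earlier:

```shell
service clickhouse-server start
```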
Query to verify synchronization: check `cluster1s2r_local` and `cluster1s2r_all` on both `node01` and `node02`. The total row count has increased by one everywhere, which shows that in this situation the cluster nodes synchronize automatically.
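A minimal verification sketch (the expected counts follow from the four rows inserted so far; these queries are not captured output from the original):

```sql
-- Run on both node01 and node02
SELECT count() FROM default.cluster1s2r_local;  -- expected: 4
SELECT count() FROM default.cluster1s2r_all;    -- expected: 4
```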
The data above was written through the distributed table `cluster1s2r_all`. If data is written through the local table `cluster1s2r_local` instead, does it still synchronize? Insert one row into `cluster1s2r_local` on `node01`, then query `node02`: the row in `cluster1s2r_local` is not synchronized.
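A sketch of that test; the row values here are made up for illustration:

```sql
-- On node01: write straight into the local table
INSERT INTO default.cluster1s2r_local (id, website, wechat, FlightDate, Year)
VALUES (5, 'http://www.zzzzz.cn/', 'zzzzz', '2020-11-29', 2020);

-- On node02: the new row does not show up, because local-table writes are not replicated in this setup
SELECT count() FROM default.cluster1s2r_local;
```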
To sum up: data written through the distributed table is synchronized automatically, while data written through a local table is not. In most cases that is not a big problem. Production is always messier than theory, though, and the configuration above can still lead to inconsistent data.
The official documentation describes it as follows:
Each shard can have the internal_replication parameter defined in the config file. If this parameter is set to true, the write operation selects the first healthy replica and writes data to it. Use this alternative if the Distributed table “looks at” replicated tables. In other words, if the table where data will be written is going to replicate them itself. If it is set to false (the default), data is written to all replicas. In essence, this means that the Distributed table replicates data itself. This is worse than using replicated tables, because the consistency of replicas is not checked, and over time they will contain slightly different data.
Put simply:
When this parameter is true, the write picks one healthy replica, and the data is then synchronized to the other replicas in the background through ZooKeeper-based replication. When it is false, the write goes to every replica at once, and this duplicated writing is what keeps the nodes in sync.
Automatic data replication is a property of the table: tables whose engine is `Replicated*` support automatic synchronization. The `Replicated` prefix applies only to the `MergeTree` family (`MergeTree` being the most commonly used engine).
Important: automatic synchronization of `Replicated` tables is different from the cluster-level synchronization described above. It is the table's own behavior and has nothing to do with the `<clickhouse_remote_servers>` configuration in `metrika.xml`; a `zookeeper` configuration is all that is needed.
Modify the `metrika.xml` configuration on `node01`:
```xml
<yandex>
    <zookeeper-servers>
        <node index="1">
            <host>node01</host>
            <port>2181</port>
        </node>
        <node index="2">
            <host>node02</host>
            <port>2181</port>
        </node>
        <node index="3">
            <host>node03</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>
</yandex>
```
Distribute the modified configuration to `node02`:
```shell
[root@node01 clickhouse-server]# scp /etc/clickhouse-server/metrika.xml node02:$PWD
metrika.xml
```

Restart `clickhouse-server`. Because the tables created earlier still exist, startup fails. The `error` log:

```shell
[root@node01 clickhouse-server]# tail -f /var/log/clickhouse-server/clickhouse-server.err.log
7. DB::StorageDistributed::startup() @ 0x10f1bd40 in /usr/bin/clickhouse
8. ? @ 0x1151d922 in /usr/bin/clickhouse
9. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa43d6ad in /usr/bin/clickhouse
10. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa43dd93 in /usr/bin/clickhouse
11. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa43cc4d in /usr/bin/clickhouse
12. ? @ 0xa43b3ff in /usr/bin/clickhouse
13. start_thread @ 0x7ea5 in /usr/lib64/libpthread-2.17.so
14. clone @ 0xfe8dd in /usr/lib64/libc-2.17.so
 (version 20.8.3.18)
2020.11.29 14:43:01.163530 [ 3643 ] {} <Error> Application: DB::Exception: Requested cluster 'cluster_1shards_2replicas' not found: while loading database `default` from path /var/lib/clickhouse/metadata/default
```
Delete the old table definitions:

```shell
[root@node01 default]# rm -rf /var/lib/clickhouse/metadata/default/*.sql
```
Start `clickhouse-server`.
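Presumably with the usual init script:

```shell
service clickhouse-server restart
```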
Create the tables on `node01` and `node02`:
```sql
-- node01
CREATE TABLE `cluster_zk`
(
    `id` Int32,
    `website` String,
    `wechat` String,
    `FlightDate` Date,
    Year UInt16
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/cluster_zk', 'replica01', FlightDate, (Year, FlightDate), 8192);

-- node02
CREATE TABLE `cluster_zk`
(
    `id` Int32,
    `website` String,
    `wechat` String,
    `FlightDate` Date,
    Year UInt16
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/cluster_zk', 'replica02', FlightDate, (Year, FlightDate), 8192);
```
Insert data on `node01`:
```sql
INSERT INTO default.cluster_zk (id,website,wechat,FlightDate,Year) VALUES (1,'https://niocoder.com/','java干貨','2020-11-28',2020);
```
Query the data on `node01` and `node02`:
```shell
node01 :) select * from cluster_zk;   # node01

SELECT * FROM cluster_zk

┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│ 1 │ https://niocoder.com/ │ java干貨 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘

1 rows in set. Elapsed: 0.004 sec.

node02 :) select * from cluster_zk;   # node02

SELECT * FROM cluster_zk

┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│ 1 │ https://niocoder.com/ │ java干貨 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘

1 rows in set. Elapsed: 0.004 sec.
```
Check the ZooKeeper metadata:
```shell
[zk: localhost:2181(CONNECTED) 2] ls /clickhouse/tables/cluster_zk/replicas
[replica02, replica01]
[zk: localhost:2181(CONNECTED) 3]
```
Modify the `metrika.xml` configuration on `node01`; note that `internal_replication` is now `true`:
```xml
<yandex>
    <clickhouse_remote_servers>
        <perftest_1shards_2replicas>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>node01</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>node02</host>
                    <port>9000</port>
                </replica>
            </shard>
        </perftest_1shards_2replicas>
    </clickhouse_remote_servers>
    <zookeeper-servers>
        <node index="1">
            <host>node01</host>
            <port>2181</port>
        </node>
        <node index="2">
            <host>node02</host>
            <port>2181</port>
        </node>
        <node index="3">
            <host>node03</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>
</yandex>
```
Distribute the modified configuration to `node02`:
```shell
[root@node01 clickhouse-server]# scp /etc/clickhouse-server/metrika.xml node02:$PWD
metrika.xml
```
Query the cluster information:
```shell
node01 :) select * from system.clusters;

SELECT * FROM system.clusters

┌─cluster───────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─┬─host_address───┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ perftest_1shards_2replicas │ 1 │ 1 │ 1 │ node01 │ 192.168.10.100 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ perftest_1shards_2replicas │ 1 │ 1 │ 2 │ node02 │ 192.168.10.110 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 2 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_shard_localhost │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_shard_localhost_secure │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9440 │ 0 │ default │ │ 0 │ 0 │
│ test_unavailable_shard │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_unavailable_shard │ 2 │ 1 │ 1 │ localhost │ ::1 │ 1 │ 0 │ default │ │ 0 │ 0 │
└───────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────┴────────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘

10 rows in set. Elapsed: 0.018 sec.
```
Create the distributed table:
```sql
CREATE TABLE default.clusterzk_all AS cluster_zk
ENGINE = Distributed(perftest_1shards_2replicas, default, cluster_zk, rand());
```
Query data through the distributed table:
```shell
node01 :) select * from clusterzk_all;

SELECT * FROM clusterzk_all

┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│ 1 │ https://niocoder.com/ │ java干貨 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘

1 rows in set. Elapsed: 0.020 sec.
```
Writing through the distributed table
As mentioned above, with `internal_replication` set to `true`, a write through the distributed table automatically picks the "healthiest" replica, writes to it, and the other replicas then synchronize the data through the table's own replication, so the replicas eventually converge.
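A minimal sketch of that write path against the tables defined above (the inserted row is illustrative, not from the original):

```sql
-- On node01: write through the Distributed table
INSERT INTO default.clusterzk_all (id, website, wechat, FlightDate, Year)
VALUES (2, 'http://www.merryyou.cn/', 'javaganhuo', '2020-11-29', 2020);

-- On node01 and node02: the ReplicatedMergeTree replicas converge to the same row count
SELECT count() FROM default.cluster_zk;
```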
| ip | host | clickhouse port | shard/replica |
|---|---|---|---|
| 192.168.10.100 | node01 | 9000 | 01/01 |
| 192.168.10.100 | node01 | 9001 | 03/02 |
| 192.168.10.110 | node02 | 9000 | 02/01 |
| 192.168.10.110 | node02 | 9001 | 01/02 |
| 192.168.10.120 | node03 | 9000 | 03/01 |
| 192.168.10.120 | node03 | 9001 | 02/02 |
Three shards, two replicas each.
Start a second `clickhouse-server` instance on port `9001` on each of `node01`, `node02`, and `node03`.
That is, the two replicas of `shard1` live on `node01:9000` and `node02:9001`, the two replicas of `shard2` on `node02:9000` and `node03:9001`, and the two replicas of `shard3` on `node03:9000` and `node01:9001`.
On `node01`, create and modify `config1.xml`:
```shell
[root@node01 clickhouse-server]# cp /etc/clickhouse-server/config.xml /etc/clickhouse-server/config1.xml
[root@node01 clickhouse-server]# vim /etc/clickhouse-server/config1.xml
```
Change the following items:
```xml
<?xml version="1.0"?>
<yandex>
    <!-- other settings omitted -->
    <http_port>8124</http_port>
    <tcp_port>9001</tcp_port>
    <mysql_port>9005</mysql_port>
    <interserver_http_port>9010</interserver_http_port>
    <log>/var/log/clickhouse-server/clickhouse-server-1.log</log>
    <errorlog>/var/log/clickhouse-server/clickhouse-server.err-1.log</errorlog>
    <!-- Path to data directory, with trailing slash. -->
    <path>/var/lib/clickhouse1/</path>
    <!-- Path to temporary data for processing hard queries. -->
    <tmp_path>/var/lib/clickhouse1/tmp/</tmp_path>
    <user_files_path>/var/lib/clickhouse1/user_files/</user_files_path>
    <format_schema_path>/var/lib/clickhouse1/format_schemas/</format_schema_path>
    <include_from>/etc/clickhouse-server/metrika1.xml</include_from>
    <!-- other settings omitted -->
</yandex>
```
On `node01`, create and modify `metrika.xml`:
```xml
<yandex>
    <!-- Cluster nodes -->
    <clickhouse_remote_servers>
        <!-- Cluster name -->
        <perftest_3shards_2replicas>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>node01</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>node02</host>
                    <port>9001</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>node02</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>node03</host>
                    <port>9001</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>node03</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>node01</host>
                    <port>9001</port>
                </replica>
            </shard>
        </perftest_3shards_2replicas>
    </clickhouse_remote_servers>
    <!-- ZooKeeper configuration -->
    <zookeeper-servers>
        <node index="1">
            <host>node01</host>
            <port>2181</port>
        </node>
        <node index="2">
            <host>node02</host>
            <port>2181</port>
        </node>
        <node index="3">
            <host>node03</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>
    <macros>
        <shard>01</shard> <!-- shard ID; replicas within the same shard use the same shard ID -->
        <replica>node01</replica> <!-- hostname of the current node -->
    </macros>
    <networks>
        <ip>::/0</ip>
    </networks>
    <!-- Compression settings -->
    <clickhouse_compression>
        <case>
            <min_part_size>10000000000</min_part_size>
            <min_part_size_ratio>0.01</min_part_size_ratio>
            <method>lz4</method> <!-- lz4 compresses faster than zstd but uses more disk -->
        </case>
    </clickhouse_compression>
</yandex>
```
Copy the `metrika.xml` file to `metrika1.xml` and adjust the `macros` section in each file.

`node01` `metrika.xml` `macros`:

```xml
<macros>
    <shard>01</shard> <!-- shard ID; replicas within the same shard use the same shard ID -->
    <replica>node01</replica> <!-- hostname of the current node -->
</macros>
```

`node01` `metrika1.xml` `macros`:

```xml
<macros>
    <shard>03</shard>
    <replica>node01</replica>
</macros>
```

`node02` `metrika.xml` `macros`:

```xml
<macros>
    <shard>02</shard>
    <replica>node02</replica>
</macros>
```

`node02` `metrika1.xml` `macros`:

```xml
<macros>
    <shard>01</shard>
    <replica>node02</replica>
</macros>
```

`node03` `metrika.xml` `macros`:

```xml
<macros>
    <shard>03</shard>
    <replica>node03</replica>
</macros>
```

`node03` `metrika1.xml` `macros`:

```xml
<macros>
    <shard>02</shard>
    <replica>node03</replica>
</macros>
```
Create the `clickhouse-server-1` init script:

```shell
[root@node01 clickhouse-server]# cp /etc/rc.d/init.d/clickhouse-server /etc/rc.d/init.d/clickhouse-server-1
You have new mail in /var/spool/mail/root
[root@node01 clickhouse-server]# vim /etc/rc.d/init.d/clickhouse-server-1
```
Change the following:
```shell
CLICKHOUSE_CONFIG=$CLICKHOUSE_CONFDIR/config1.xml
CLICKHOUSE_PIDFILE="$CLICKHOUSE_PIDDIR/$PROGRAM-1.pid"
```
Distribute the files to `node02` and `node03`:

```shell
[root@node01 clickhouse-server]# scp /etc/rc.d/init.d/clickhouse-server-1 node02:/etc/rc.d/init.d/
clickhouse-server-1                  100%   11KB   4.0MB/s   00:00
You have new mail in /var/spool/mail/root
[root@node01 clickhouse-server]# scp /etc/rc.d/init.d/clickhouse-server-1 node03:/etc/rc.d/init.d/
clickhouse-server-1                  100%   11KB   4.0MB/s   00:00
[root@node01 clickhouse-server]# scp /etc/clickhouse-server/config1.xml node02:$PWD
config1.xml                          100%   33KB  10.2MB/s   00:00
[root@node01 clickhouse-server]# scp /etc/clickhouse-server/config1.xml node03:$PWD
config1.xml                          100%   33KB   9.7MB/s   00:00
[root@node01 clickhouse-server]# scp /etc/clickhouse-server/metrika.xml node02:$PWD
metrika.xml                          100% 2008    1.0MB/s   00:00
You have new mail in /var/spool/mail/root
[root@node01 clickhouse-server]# scp /etc/clickhouse-server/metrika.xml node03:$PWD
metrika.xml                          100% 2008    1.1MB/s   00:00
[root@node01 clickhouse-server]#
[root@node01 clickhouse-server]# scp /etc/clickhouse-server/metrika1.xml node02:$PWD
metrika1.xml                         100% 2008    1.0MB/s   00:00
[root@node01 clickhouse-server]# scp /etc/clickhouse-server/metrika1.xml node03:$PWD
metrika1.xml
```
Modify the `macros` configuration on `node02` and `node03`.
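A sketch of those edits; the shard/replica values are the ones listed above:

```shell
# On node02:
#   metrika.xml  -> <shard>02</shard> <replica>node02</replica>
#   metrika1.xml -> <shard>01</shard> <replica>node02</replica>
vim /etc/clickhouse-server/metrika.xml /etc/clickhouse-server/metrika1.xml

# On node03:
#   metrika.xml  -> <shard>03</shard> <replica>node03</replica>
#   metrika1.xml -> <shard>02</shard> <replica>node03</replica>
vim /etc/clickhouse-server/metrika.xml /etc/clickhouse-server/metrika1.xml
```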
Start the ClickHouse instances:

- the `clickhouse-server-1` instance on `node01`
- the `clickhouse-server-1` instance on `node02`
- the `clickhouse-server` instance on `node03`
- the `clickhouse-server-1` instance on `node03`
```shell
service clickhouse-server restart
service clickhouse-server-1 restart
```
```shell
node01 :) select * from system.clusters;

SELECT * FROM system.clusters

┌─cluster───────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─┬─host_address───┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ perftest_3shards_2replicas │ 1 │ 1 │ 1 │ node01 │ 192.168.10.100 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ perftest_3shards_2replicas │ 1 │ 1 │ 2 │ node02 │ 192.168.10.110 │ 9001 │ 0 │ default │ │ 0 │ 0 │
│ perftest_3shards_2replicas │ 2 │ 1 │ 1 │ node02 │ 192.168.10.110 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ perftest_3shards_2replicas │ 2 │ 1 │ 2 │ node03 │ 192.168.10.120 │ 9001 │ 0 │ default │ │ 0 │ 0 │
│ perftest_3shards_2replicas │ 3 │ 1 │ 1 │ node03 │ 192.168.10.120 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ perftest_3shards_2replicas │ 3 │ 1 │ 2 │ node01 │ 192.168.10.100 │ 9001 │ 0 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 2 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_shard_localhost │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_shard_localhost_secure │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9440 │ 0 │ default │ │ 0 │ 0 │
│ test_unavailable_shard │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_unavailable_shard │ 2 │ 1 │ 1 │ localhost │ ::1 │ 1 │ 0 │ default │ │ 0 │ 0 │
└───────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────┴────────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘

14 rows in set. Elapsed: 0.019 sec.
```
Create the replicated table. Executing this on `node01` is enough; the `ON CLUSTER` clause creates it on the other nodes automatically.
```sql
CREATE TABLE `cluster32r_local` ON CLUSTER perftest_3shards_2replicas
(
    `id` Int32,
    `website` String,
    `wechat` String,
    `FlightDate` Date,
    Year UInt16
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/ontime', '{replica}', FlightDate, (Year, FlightDate), 8192);
```
```shell
node01 :) CREATE TABLE `cluster32r_local` ON cluster perftest_3shards_2replicas
:-] (
:-]     `id` Int32,
:-]     `website` String,
:-]     `wechat` String,
:-]     `FlightDate` Date,
:-]     Year UInt16
:-] )
:-] ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/ontime','{replica}', FlightDate, (Year, FlightDate), 8192);

CREATE TABLE cluster32r_local ON CLUSTER perftest_3shards_2replicas
(
    `id` Int32,
    `website` String,
    `wechat` String,
    `FlightDate` Date,
    `Year` UInt16
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/ontime', '{replica}', FlightDate, (Year, FlightDate), 8192)

┌─host───┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ node03 │ 9001 │ 0 │ │ 5 │ 0 │
│ node03 │ 9000 │ 0 │ │ 4 │ 0 │
│ node01 │ 9001 │ 0 │ │ 3 │ 0 │
│ node01 │ 9000 │ 0 │ │ 2 │ 0 │
│ node02 │ 9000 │ 0 │ │ 1 │ 0 │
└────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
┌─host───┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ node02 │ 9001 │ 0 │ │ 0 │ 0 │
└────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘

6 rows in set. Elapsed: 46.994 sec.
```
Create the distributed table:
```sql
CREATE TABLE cluster32r_all AS cluster32r_local
ENGINE = Distributed(perftest_3shards_2replicas, default, cluster32r_local, rand());
```
Insert data into one replica of the first shard (`node01:9000`); the data can then be seen in the second replica (`node02:9001`):
```sql
INSERT INTO default.cluster32r_local (id,website,wechat,FlightDate,Year) VALUES (1,'https://niocoder.com/','java干貨','2020-11-28',2020);
INSERT INTO default.cluster32r_local (id,website,wechat,FlightDate,Year) VALUES (2,'http://www.merryyou.cn/','javaganhuo','2020-11-28',2020);
```
Connect a client to the `node02:9001` instance to check:
```shell
[root@node02 ~]# clickhouse-client --port 9001 -m
ClickHouse client version 20.8.3.18.
Connecting to localhost:9001 as user default.
Connected to ClickHouse server version 20.8.3 revision 54438.

node02 :) show tables;

SHOW TABLES

┌─name─────────────┐
│ cluster32r_local │
└──────────────────┘

1 rows in set. Elapsed: 0.010 sec.

node02 :) select * from cluster32r_local;

SELECT * FROM cluster32r_local

┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│ 2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘
┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│ 1 │ https://niocoder.com/ │ java干貨 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘

2 rows in set. Elapsed: 0.018 sec.
```
Query through the distributed table:
```shell
node01 :) select * from cluster32r_all;

SELECT * FROM cluster32r_all

┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│ 2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘
┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│ 1 │ https://niocoder.com/ │ java干貨 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘

2 rows in set. Elapsed: 0.030 sec.
```
All replica nodes can read and write data through both the local tables and the distributed table.
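One way to verify that from any node is sketched below; the `ON CLUSTER` re-creation of the Distributed table and the inserted row are assumptions for illustration, since the walkthrough above only created `cluster32r_all` on `node01`:

```sql
-- Create the Distributed table on every instance in one step
CREATE TABLE IF NOT EXISTS default.cluster32r_all ON CLUSTER perftest_3shards_2replicas
AS default.cluster32r_local
ENGINE = Distributed(perftest_3shards_2replicas, default, cluster32r_local, rand());

-- For example, from node03:9001, write through the Distributed table...
INSERT INTO default.cluster32r_all (id, website, wechat, FlightDate, Year)
VALUES (3, 'http://www.xxxxx.cn/', 'xxxxx', '2020-11-29', 2020);

-- ...and read the total back from any other instance
SELECT count() FROM default.cluster32r_all;
```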
Having read the above, do you now have a better understanding of how to set up a ClickHouse cluster?