This article walks through using Consul for distributed service registration and discovery and for unified configuration management, first with a single development agent, then from a Java client, and finally as a three-node cluster.
1.Download the appropriate release, unpack it, and copy the executable to /usr/local/consul
2.Create a service definition file
silence$ sudo mkdir /etc/consul.d
silence$ echo '{"service":{"name": "web", "tags": ["rails"], "port": 80}}' | sudo tee /etc/consul.d/web.json
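As a sanity check, the same service definition can also be produced programmatically. A minimal sketch in Java (plain string formatting only, no Consul API involved; the field names mirror web.json above):

```java
public class WebServiceDef {

    // Build the same service definition JSON that was piped into web.json above.
    public static String serviceJson(String name, String tag, int port) {
        return String.format(
                "{\"service\":{\"name\": \"%s\", \"tags\": [\"%s\"], \"port\": %d}}",
                name, tag, port);
    }

    public static void main(String[] args) {
        System.out.println(serviceJson("web", "rails", 80));
        // → {"service":{"name": "web", "tags": ["rails"], "port": 80}}
    }
}
```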
3.Start the agent
silence$ /usr/local/consul/consul agent -dev -node consul_01 -config-dir=/etc/consul.d/ -ui
-dev starts the agent in local development mode; -node sets this node's name; -config-dir points at the directory of service definition files, i.e. the folder created above; -ui enables the built-in web UI management page
4.Listing cluster members
silence-pro:~ silence$ /usr/local/consul/consul members
5.Querying data over the HTTP API
silence-pro:~ silence$ curl http://127.0.0.1:8500/v1/catalog/service/web
[
    {
        "ID": "ab1e3577-1b24-d254-f55e-9e8437956009",
        "Node": "consul_01",
        "Address": "127.0.0.1",
        "Datacenter": "dc1",
        "TaggedAddresses": {
            "lan": "127.0.0.1",
            "wan": "127.0.0.1"
        },
        "NodeMeta": {
            "consul-network-segment": ""
        },
        "ServiceID": "web",
        "ServiceName": "web",
        "ServiceTags": [
            "rails"
        ],
        "ServiceAddress": "",
        "ServicePort": 80,
        "ServiceEnableTagOverride": false,
        "CreateIndex": 6,
        "ModifyIndex": 6
    }
]
silence-pro:~ silence$
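A client consuming this response usually only needs a couple of fields such as ServiceName and ServicePort. The sketch below pulls a single field out of the response text; it uses a regex purely for illustration (a real client should use a JSON library or the consul-client shown later):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CatalogResponse {

    // Extract one top-level field value from a catalog JSON snippet.
    // Good enough for a sketch; not a general JSON parser.
    public static String field(String json, String name) {
        Matcher m = Pattern
                .compile("\"" + name + "\":\\s*\"?([^\\s\",}]*)")
                .matcher(json);
        return m.find() ? m.group(1) : "";
    }

    public static void main(String[] args) {
        String sample = "{ \"Node\": \"consul_01\", \"ServiceName\": \"web\", \"ServicePort\": 80 }";
        System.out.println(field(sample, "ServiceName")); // web
        System.out.println(field(sample, "ServicePort")); // 80
    }
}
```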
6.The Consul Web UI
Consul's web UI can be used to view service status, inspect cluster nodes, manage access control lists, and edit the KV store. Compared with Eureka and etcd, Consul's web UI is much easier to use. (Eureka and etcd will be briefly introduced in the next article.)
7.Importing and exporting KV data
silence-pro:consul silence$ ./consul kv import @temp.json
silence-pro:consul silence$ ./consul kv export redis/
The format of temp.json is shown below. Typically you configure the values in the management UI first, export them to a file for safekeeping, and import that file again later when needed.
[
    {
        "key": "redis/config/password",
        "flags": 0,
        "value": "MTIzNDU2"
    },
    {
        "key": "redis/config/username",
        "flags": 0,
        "value": "U2lsZW5jZQ=="
    },
    {
        "key": "redis/zk/",
        "flags": 0,
        "value": ""
    },
    {
        "key": "redis/zk/password",
        "flags": 0,
        "value": "NDU0NjU="
    },
    {
        "key": "redis/zk/username",
        "flags": 0,
        "value": "ZGZhZHNm"
    }
]
Consul's KV store is a zk-like tree of nodes that holds key/value pairs. We can use it to build the configuration center mentioned above: keep the shared configuration in the KV store so that every instance fetches and uses the same settings. When the configuration changes, each service can pull the latest values automatically without a restart.
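Note that the "value" fields in the export file are base64-encoded (e.g. "MTIzNDU2" above is the redis/config/password entry). A minimal decode sketch using only the JDK:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class KvExportValue {

    // `consul kv export` base64-encodes every value; decode before use.
    public static String decode(String b64) {
        return new String(Base64.getDecoder().decode(b64), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // "MTIzNDU2" is the redis/config/password value from temp.json above.
        System.out.println(decode("MTIzNDU2")); // 123456
    }
}
```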
1.Add the Maven dependencies; the versions can be changed as needed
<dependency>
    <groupId>com.orbitz.consul</groupId>
    <artifactId>consul-client</artifactId>
    <version>0.12.3</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
2.A basic Consul utility class; extend it as needed
package com.coocaa.consul.consul.demo;

import com.google.common.base.Optional;
import com.google.common.net.HostAndPort;
import com.orbitz.consul.*;
import com.orbitz.consul.model.agent.ImmutableRegCheck;
import com.orbitz.consul.model.agent.ImmutableRegistration;
import com.orbitz.consul.model.health.ServiceHealth;

import java.net.MalformedURLException;
import java.net.URI;
import java.util.List;

public class ConsulUtil {

    private static Consul consul = Consul.builder()
            .withHostAndPort(HostAndPort.fromString("127.0.0.1:8500")).build();

    /**
     * Service registration.
     */
    public static void serviceRegister() {
        AgentClient agent = consul.agentClient();
        try {
            /*
             * Note: this registration call needs a health-check URL for the
             * service, and an interval (here 3s) at which Consul polls it.
             */
            agent.register(8080, URI.create("http://localhost:8080/health").toURL(), 3, "tomcat", "tomcatID", "dev");
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }
    }

    /**
     * Look up healthy instances of a service.
     *
     * @param serviceName
     */
    public static void findHealthyService(String serviceName) {
        HealthClient healthClient = consul.healthClient();
        List<ServiceHealth> serviceHealthList = healthClient.getHealthyServiceInstances(serviceName).getResponse();
        serviceHealthList.forEach((response) -> {
            System.out.println(response);
        });
    }

    /**
     * Store a key/value pair.
     */
    public static void storeKV(String key, String value) {
        KeyValueClient kvClient = consul.keyValueClient();
        kvClient.putValue(key, value);
    }

    /**
     * Get the value for a key.
     */
    public static String getKV(String key) {
        KeyValueClient kvClient = consul.keyValueClient();
        Optional<String> value = kvClient.getValueAsString(key);
        if (value.isPresent()) {
            return value.get();
        }
        return "";
    }

    /**
     * Find the Raft peers (should be all Server-mode nodes in the same DC).
     */
    public static List<String> findRaftPeers() {
        StatusClient statusClient = consul.statusClient();
        return statusClient.getPeers();
    }

    /**
     * Get the current leader.
     */
    public static String findRaftLeader() {
        StatusClient statusClient = consul.statusClient();
        return statusClient.getLeader();
    }

    public static void main(String[] args) {
        AgentClient agentClient = consul.agentClient();
        agentClient.deregister("tomcatID");
    }
}
3.With the utility class above you can register services and store and retrieve KV data
1.Download and install Consul on three hosts. I have no physical machines here, so I use three virtual machines with the IPs 192.168.231.145, 192.168.231.146 and 192.168.231.147
2.145 and 146 are started in Server mode, and 147 in Client mode. Server and Client here refer only to the node's role within the Consul cluster and have nothing to do with your services!
3.Start 145 in Server mode, with node name n1; all nodes use the datacenter dc1
[root@centos145 consul]# ./consul agent -server -bootstrap-expect 2 -data-dir /tmp/consul -node=n1 -bind=192.168.231.145 -datacenter=dc1
bootstrap_expect = 2: A cluster with 2 servers will provide no failure tolerance. See https://www.consul.io/docs/internals/consensus.html#deployment-table
bootstrap_expect > 0: expecting 2 servers
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.0.1'
           Node ID: '6cc74ff7-7026-cbaa-5451-61f02114cd25'
         Node name: 'n1'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: false)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 192.168.231.145 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2017/12/06 23:26:21 [INFO] raft: Initial configuration (index=0): []
    2017/12/06 23:26:21 [INFO] serf: EventMemberJoin: n1.dc1 192.168.231.145
    2017/12/06 23:26:21 [INFO] serf: EventMemberJoin: n1 192.168.231.145
    2017/12/06 23:26:21 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2017/12/06 23:26:21 [INFO] raft: Node at 192.168.231.145:8300 [Follower] entering Follower state (Leader: "")
    2017/12/06 23:26:21 [INFO] consul: Adding LAN server n1 (Addr: tcp/192.168.231.145:8300) (DC: dc1)
    2017/12/06 23:26:21 [INFO] consul: Handled member-join event for server "n1.dc1" in area "wan"
    2017/12/06 23:26:21 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2017/12/06 23:26:21 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
    2017/12/06 23:26:21 [INFO] agent: started state syncer
    2017/12/06 23:26:28 [ERR] agent: failed to sync remote state: No cluster leader
    2017/12/06 23:26:30 [WARN] raft: no known peers, aborting election
    2017/12/06 23:26:49 [ERR] agent: Coordinate update error: No cluster leader
    2017/12/06 23:26:54 [ERR] agent: failed to sync remote state: No cluster leader
    2017/12/06 23:27:24 [ERR] agent: Coordinate update error: No cluster leader
    2017/12/06 23:27:27 [ERR] agent: failed to sync remote state: No cluster leader
    2017/12/06 23:27:56 [ERR] agent: Coordinate update error: No cluster leader
    2017/12/06 23:28:02 [ERR] agent: failed to sync remote state: No cluster leader
    2017/12/06 23:28:27 [ERR] agent: failed to sync remote state: No cluster leader
    2017/12/06 23:28:33 [ERR] agent: Coordinate update error: No cluster leader
Only 145 has been started so far, so there is no cluster yet
4.Start 146 in Server mode, with node name n2; the web UI management page is enabled on n2
[root@centos146 consul]# ./consul agent -server -bootstrap-expect 2 -data-dir /tmp/consul -node=n2 -bind=192.168.231.146 -datacenter=dc1 -ui
bootstrap_expect = 2: A cluster with 2 servers will provide no failure tolerance. See https://www.consul.io/docs/internals/consensus.html#deployment-table
bootstrap_expect > 0: expecting 2 servers
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.0.1'
           Node ID: 'eb083280-c403-668f-e193-60805c7c856a'
         Node name: 'n2'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: false)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 192.168.231.146 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2017/12/06 23:28:30 [INFO] raft: Initial configuration (index=0): []
    2017/12/06 23:28:30 [INFO] serf: EventMemberJoin: n2.dc1 192.168.231.146
    2017/12/06 23:28:31 [INFO] serf: EventMemberJoin: n2 192.168.231.146
    2017/12/06 23:28:31 [INFO] raft: Node at 192.168.231.146:8300 [Follower] entering Follower state (Leader: "")
    2017/12/06 23:28:31 [INFO] consul: Adding LAN server n2 (Addr: tcp/192.168.231.146:8300) (DC: dc1)
    2017/12/06 23:28:31 [INFO] consul: Handled member-join event for server "n2.dc1" in area "wan"
    2017/12/06 23:28:31 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2017/12/06 23:28:31 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2017/12/06 23:28:31 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
    2017/12/06 23:28:31 [INFO] agent: started state syncer
    2017/12/06 23:28:38 [ERR] agent: failed to sync remote state: No cluster leader
    2017/12/06 23:28:39 [WARN] raft: no known peers, aborting election
    2017/12/06 23:28:57 [ERR] agent: Coordinate update error: No cluster leader
    2017/12/06 23:29:11 [ERR] agent: failed to sync remote state: No cluster leader
    2017/12/06 23:29:30 [ERR] agent: Coordinate update error: No cluster leader
    2017/12/06 23:29:38 [ERR] agent: failed to sync remote state: No cluster leader
    2017/12/06 23:29:57 [ERR] agent: Coordinate update error: No cluster leader
Again no cluster is discovered. Both n1 and n2 are now running, but neither knows the other exists!
5.Join n1 to the cluster through n2
[silence@centos145 consul]$ ./consul join 192.168.231.146
Both n1 and n2 now log that the cluster has been discovered
6.At this point n1 and n2 are Server-mode nodes of the same cluster
7.Start 147 in Client mode
[root@centos147 consul]# ./consul agent -data-dir /tmp/consul -node=n3 -bind=192.168.231.147 -datacenter=dc1
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.0.1'
           Node ID: 'be7132c3-643e-e5a2-9c34-cad99063a30e'
         Node name: 'n3'
        Datacenter: 'dc1' (Segment: '')
            Server: false (Bootstrap: false)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 192.168.231.147 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2017/12/06 23:36:46 [INFO] serf: EventMemberJoin: n3 192.168.231.147
    2017/12/06 23:36:46 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2017/12/06 23:36:46 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2017/12/06 23:36:46 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
    2017/12/06 23:36:46 [INFO] agent: started state syncer
    2017/12/06 23:36:46 [WARN] manager: No servers available
    2017/12/06 23:36:46 [ERR] agent: failed to sync remote state: No known Consul servers
    2017/12/06 23:37:08 [WARN] manager: No servers available
    2017/12/06 23:37:08 [ERR] agent: failed to sync remote state: No known Consul servers
    2017/12/06 23:37:36 [WARN] manager: No servers available
    2017/12/06 23:37:36 [ERR] agent: failed to sync remote state: No known Consul servers
    2017/12/06 23:38:02 [WARN] manager: No servers available
    2017/12/06 23:38:02 [ERR] agent: failed to sync remote state: No known Consul servers
    2017/12/06 23:38:22 [WARN] manager: No servers available
    2017/12/06 23:38:22 [ERR] agent: failed to sync remote state: No known Consul servers
8.On n3, join the node to the cluster
[silence@centos147 consul]$ ./consul join 192.168.231.145
9.Check the cluster member information again
10.The three-node Consul cluster is now up: n1 and n2 were started in Server mode, and n3 in Client mode.
11.The main difference between Server mode and Client mode is this: the -bootstrap-expect startup parameter tells a Consul cluster how many Server nodes to expect. Server-mode nodes maintain the cluster state, and if a Server node leaves the cluster, a new Leader election is triggered among the remaining Server-mode nodes; Client-mode nodes, by contrast, can join and leave freely.
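This is also why the startup logs above warned that "2 servers will provide no failure tolerance": leader election needs a majority (quorum) of Server nodes. The arithmetic is standard Raft math, sketched below (not a Consul API, just the formula):

```java
public class RaftMath {

    // Raft needs a majority of servers to elect a leader and commit writes.
    public static int quorum(int servers) {
        return servers / 2 + 1;
    }

    // How many servers can fail while a quorum still survives.
    public static int failureTolerance(int servers) {
        return servers - quorum(servers);
    }

    public static void main(String[] args) {
        System.out.println(failureTolerance(2)); // 0: hence the startup warning
        System.out.println(failureTolerance(3)); // 1: the usual minimum for production
        System.out.println(failureTolerance(5)); // 2
    }
}
```

With -bootstrap-expect 2, losing either Server node costs the cluster its quorum; three Server nodes is the smallest deployment that tolerates a failure.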
12.Open the web UI on n2