The GP Master first checks the status of each Primary; if a Primary is unreachable, it then checks the corresponding Mirror. Together, a Primary/Mirror pair can be in one of 4 states:
1. Primary up, Mirror up
2. Primary up, Mirror down
3. Primary down, Mirror up (the Mirror has taken over as acting primary)
4. Primary down, Mirror down
States 2-4 above require running gprecoverseg to recover the failed segment instances.
Failed segment instances are simply skipped and ignored at startup.
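The current acting role and up/down status of every segment can also be read directly from the gp_segment_configuration catalog on the master. A minimal sketch (template1 is just an example database; any database works):

[gpadmin@mdw ~]$ psql -d template1 -c "SELECT content, role, preferred_role, mode, status, hostname, port FROM gp_segment_configuration ORDER BY content, role;"

Here status = 'd' marks an instance as down, and comparing role with preferred_role shows whether a mirror is currently acting as primary.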
[gpadmin@mdw ~]$ gpstart
==》gpstart:mdw:gpadmin-[INFO]:-Starting gpstart with args:
==》gpstart:mdw:gpadmin-[INFO]:-Gathering information and validating the environment...
==》gpstart:mdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 5.0.0 build 1'
==》......
==》gpstart:mdw:gpadmin-[INFO]:-Master Started...
==》gpstart:mdw:gpadmin-[INFO]:-Shutting down master
==》gpstart:mdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw2 directory /data/gpdata/gpdatam/gpseg0 <<<<<
==》gpstart:mdw:gpadmin-[INFO]:---------------------------
==》gpstart:mdw:gpadmin-[INFO]:-Master instance parameters
==》gpstart:mdw:gpadmin-[INFO]:---------------------------
==》gpstart:mdw:gpadmin-[INFO]:-Database = template1
==》gpstart:mdw:gpadmin-[INFO]:-Master Port = 1921
==》gpstart:mdw:gpadmin-[INFO]:-Master directory = /data/gpdata/pgmaster/gpseg-1
==》gpstart:mdw:gpadmin-[INFO]:-Timeout = 600 seconds
==》gpstart:mdw:gpadmin-[INFO]:-Master standby = Off
==》gpstart:mdw:gpadmin-[INFO]:---------------------------------------
==》gpstart:mdw:gpadmin-[INFO]:-Segment instances that will be started
==》gpstart:mdw:gpadmin-[INFO]:---------------------------------------
==》gpstart:mdw:gpadmin-[INFO]:- Host Datadir Port Role
==》gpstart:mdw:gpadmin-[INFO]:- sdw1 /data/gpdata/gpdatap/gpseg0 40000 Primary
==》gpstart:mdw:gpadmin-[INFO]:- sdw2 /data/gpdata/gpdatap/gpseg1 40000 Primary
==》gpstart:mdw:gpadmin-[INFO]:- sdw1 /data/gpdata/gpdatam/gpseg1 50000 Mirror
Continue with Greenplum instance startup Yy|Nn (default=N):
> y
==》gpstart:mdw:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait...
==》
==》gpstart:mdw:gpadmin-[INFO]:-Process results...
==》gpstart:mdw:gpadmin-[INFO]:-----------------------------------------------------
==》gpstart:mdw:gpadmin-[INFO]:- Successful segment starts = 3
==》gpstart:mdw:gpadmin-[INFO]:- Failed segment starts = 0
==》gpstart:mdw:gpadmin-[WARNING]:-Skipped segment starts (segments are marked down in configuration) = 1 <<<<<<<<
==》gpstart:mdw:gpadmin-[INFO]:-----------------------------------------------------
==》gpstart:mdw:gpadmin-[INFO]:-
==》gpstart:mdw:gpadmin-[INFO]:-Successfully started 3 of 3 segment instances, skipped 1 other segments
==》gpstart:mdw:gpadmin-[INFO]:-----------------------------------------------------
==》gpstart:mdw:gpadmin-[WARNING]:-****************************************************************************
==》gpstart:mdw:gpadmin-[WARNING]:-There are 1 segment(s) marked down in the database
==》gpstart:mdw:gpadmin-[WARNING]:-To recover from this current state, review usage of the gprecoverseg
==》gpstart:mdw:gpadmin-[WARNING]:-management utility which will recover failed segment instance databases.
==》gpstart:mdw:gpadmin-[WARNING]:-****************************************************************************
==》gpstart:mdw:gpadmin-[INFO]:-Starting Master instance mdw directory /data/gpdata/pgmaster/gpseg-1
==》gpstart:mdw:gpadmin-[INFO]:-Command pg_ctl reports Master mdw instance active
==》gpstart:mdw:gpadmin-[INFO]:-No standby master configured. skipping...
==》gpstart:mdw:gpadmin-[WARNING]:-Number of segments not attempted to start: 1
==》gpstart:mdw:gpadmin-[INFO]:-Check status of database with gpstate utility
[gpadmin@mdw ~]$ gpstate -m
==》gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -m
==》gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.0.0 build 1'
==》gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.0.0 build 1)
==》gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
==》gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
==》gpstate:mdw:gpadmin-[INFO]:--Current GPDB mirror list and status
==》gpstate:mdw:gpadmin-[INFO]:--Type = Spread
==》gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
==》gpstate:mdw:gpadmin-[INFO]:- Mirror Datadir Port Status Data Status
==》gpstate:mdw:gpadmin-[WARNING]:-sdw2 /data/gpdata/gpdatam/gpseg0 50000 Failed <<<<<<<<
==》gpstate:mdw:gpadmin-[INFO]:- sdw1 /data/gpdata/gpdatam/gpseg1 50000 Passive Synchronized
==》gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
==》gpstate:mdw:gpadmin-[WARNING]:-1 segment(s) configured as mirror(s) have failed
The "[WARNING]:-sdw2 /data/gpdata/gpdatam/gpseg0 50000 Failed" line makes it immediately obvious which mirror has failed.
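On a larger cluster, gpstate -e is a convenient alternative: it reports only the segments with error conditions (failed instances, pairs still resynchronizing, or pairs whose acting roles differ from their preferred roles), so healthy segments do not clutter the output:

[gpadmin@mdw ~]$ gpstate -e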
---- A failed primary segment is, of course, recovered the same way ----
[gpadmin@mdw ~]$ gprecoverseg -o ./recov
==》gprecoverseg:mdw:gpadmin-[INFO]:-Starting gprecoverseg with args: -o ./recov
==》gprecoverseg:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.0.0 build 1'
==》gprecoverseg:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.0.0 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on ...
==》gprecoverseg:mdw:gpadmin-[INFO]:-Checking if segments are ready
==》gprecoverseg:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
==》gprecoverseg:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
==》gprecoverseg:mdw:gpadmin-[INFO]:-Configuration file output to ./recov successfully.
The generated file tells you which segments need to be recovered:
[gpadmin@mdw ~]$ cat recov
filespaceOrder=fastdisk
sdw2:50000:/data/gpdata/gpdatam/gpseg0
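The line after filespaceOrder identifies the failed instance as address:port:data_directory; with nothing else on the line, recovery happens in place. To relocate the instance to a different host instead, the gprecoverseg reference describes appending a space-separated target specification on the same line, roughly as sketched below (sdw3 is a purely hypothetical spare host; verify the exact field layout with gprecoverseg --help for your release before using it):

sdw2:50000:/data/gpdata/gpdatam/gpseg0 sdw3:50000:51000:/data/gpdata/gpdatam/gpseg0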
[gpadmin@mdw ~]$ gprecoverseg -i ./recov
==》gprecoverseg:mdw:gpadmin-[INFO]:-Starting gprecoverseg with args: -i ./recov
==》gprecoverseg:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.0.0 build 1'
==》gprecoverseg:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.0.0 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on ...
==》gprecoverseg:mdw:gpadmin-[INFO]:-Checking if segments are ready
==》gprecoverseg:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
==》gprecoverseg:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
==》gprecoverseg:mdw:gpadmin-[INFO]:-Greenplum instance recovery parameters
==》gprecoverseg:mdw:gpadmin-[INFO]:----------------------------------------------------------
==》gprecoverseg:mdw:gpadmin-[INFO]:-Recovery from configuration -i option supplied
==》gprecoverseg:mdw:gpadmin-[INFO]:----------------------------------------------------------
==》gprecoverseg:mdw:gpadmin-[INFO]:-Recovery 1 of 1
==》gprecoverseg:mdw:gpadmin-[INFO]:----------------------------------------------------------
==》gprecoverseg:mdw:gpadmin-[INFO]:- Synchronization mode = Incremental
==》gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance host = sdw2
==》gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance address = sdw2
==》gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance directory = /data/gpdata/gpdatam/gpseg0
==》gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance port = 50000
==》gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance replication port = 51000
==》gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance fastdisk directory = /data/gpdata/seg1/pg_mir_cdr/gpseg0
==》gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance host = sdw1
==》gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance address = sdw1
==》gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance directory = /data/gpdata/gpdatap/gpseg0
==》gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance port = 40000
==》gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance replication port = 41000
==》gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance fastdisk directory = /data/gpdata/seg1/pg_pri_cdr/gpseg0
==》gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Target = in-place
==》gprecoverseg:mdw:gpadmin-[INFO]:-Process results...
==》gprecoverseg:mdw:gpadmin-[INFO]:-Done updating primaries
==》gprecoverseg:mdw:gpadmin-[INFO]:-******************************************************************
==》gprecoverseg:mdw:gpadmin-[INFO]:-Updating segments for resynchronization is completed.
==》gprecoverseg:mdw:gpadmin-[INFO]:-For segments updated successfully, resynchronization will continue in the background.
==》gprecoverseg:mdw:gpadmin-[INFO]:-
==》gprecoverseg:mdw:gpadmin-[INFO]:-Use gpstate -s to check the resynchronization progress.
==》gprecoverseg:mdw:gpadmin-[INFO]:-******************************************************************
[gpadmin@mdw ~]$ gpstate -m
==》gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -m
==》gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.0.0 build 1'
==》......
==》gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
==》gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
==》gpstate:mdw:gpadmin-[INFO]:--Current GPDB mirror list and status
==》gpstate:mdw:gpadmin-[INFO]:--Type = Spread
==》gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
==》gpstate:mdw:gpadmin-[INFO]:- Mirror Datadir Port Status Data Status
==》gpstate:mdw:gpadmin-[INFO]:- sdw2 /data/gpdata/gpdatam/gpseg0 50000 Passive Resynchronizing
==》gpstate:mdw:gpadmin-[INFO]:- sdw1 /data/gpdata/gpdatam/gpseg1 50000 Passive Synchronized
==》gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
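Resynchronization continues in the background; a minimal shell sketch for waiting until it finishes (the 30-second poll interval is arbitrary):

[gpadmin@mdw ~]$ while gpstate -m | grep -qi 'Resynchronizing'; do sleep 30; done; gpstate -m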
At this point the primary/mirror pair is recovered: once resynchronization completes, the mirror shows Synchronized again. One optional step remains: deciding whether to swap the primary and mirror roles back, because right now the acting roles are the opposite of the preferred roles. To swap them back, run the command below; note that it stops the database while it performs the switch.
gprecoverseg -r
-i : the main option; it specifies a configuration file describing which segments to recover and the target locations to recover them to.
-F : optional; when given, gprecoverseg deletes the instances specified with -i (or marked down, 'd') and copies a complete fresh replica from the live peer to the target location.
-r : when FTS detects a failed Primary and fails over to its Mirror, the Mirror acting as Primary does not switch back automatically after gprecoverseg finishes the repair; some hosts then run more active segments than intended, which can become a performance bottleneck. The segments therefore need to be returned to their original (preferred) roles, which is called re-balancing. Example invocations of both -F and -r follow below.
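For reference, two common invocations built from these options (sketches; gprecoverseg also accepts -a to skip the interactive confirmation):

# Full recovery of every segment currently marked down, copying a complete replica from the live peer
[gpadmin@mdw ~]$ gprecoverseg -F -a

# Re-balance: switch all segments back to their preferred roles (run only once every pair shows Synchronized)
[gpadmin@mdw ~]$ gprecoverseg -r -a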