Oracle 12c Flex Cluster Series: Node Role Conversion
沃趣科技 周天鵬
In my previous translated article, when introducing the Leaf Node, I mentioned:
**Although a leaf node is not required to access shared storage directly, it is still best to connect it to shared storage, because you never know when that leaf node will need to be converted into a hub node.**
That statement is actually not precise enough. In 12cR1 a leaf node cannot run a read-only database instance, so leaving it disconnected from shared storage does not affect its use at all. In 12cR2, however, a leaf node can run a read-only database instance, and once a database runs on the leaf node (strictly speaking, the leaf node should then be called a reader node), it must be connected to shared storage.
This article shows how to convert a node's role between hub node and leaf node. Since a leaf node already exists in my test environment, I will start with the leaf-to-hub conversion.
Initial state:
```
[root@rac1 ~]# crsctl get cluster mode status
Cluster is running in "flex" mode
[root@rac1 ~]# srvctl status srvpool -detail
Server pool name: Free
Active servers count: 0
Active server names:
Server pool name: Generic
Active servers count: 0
Active server names:
Server pool name: RF1POOL
Active servers count: 1
Active server names: rac3
NAME=rac3 STATE=ONLINE
Server pool name: ztp_pool
Active servers count: 2
Active server names: rac1,rac2
NAME=rac1 STATE=ONLINE
NAME=rac2 STATE=ONLINE
[root@rac1 ~]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'leaf'
[root@rac1 ~]# crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
Node 'rac3' active role is 'leaf'
```
# Leaf to hub
A database named orcl runs on this cluster. Before converting the role, take a look at its status (an excerpt of the cluster resource status output):
```
ora.orcl.db
1 ONLINE ONLINE rac3 Open,Readonly,HOME=/
u01/app/oracle/produ
ct/12.2.0/dbhome_1,S
TABLE
2 ONLINE ONLINE rac2 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
3 ONLINE ONLINE rac1 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
```
Clearly, because rac3 is currently a leaf node, the database instance on rac3 can only be opened read-only.
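To confirm this from inside the instance itself, a minimal check (assuming the oracle user's environment, ORACLE_SID and ORACLE_HOME, is set for the local orcl instance on rac3) might look like this:
```
[oracle@rac3 ~]$ sqlplus -S / as sysdba <<'EOF'
-- while rac3 is a leaf (reader) node, the instance should be open READ ONLY
select instance_name, status from v$instance;
select open_mode from v$database;
EOF
```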
The following command converts rac3 from a leaf node to a hub node:
**crsctl set node role {hub | leaf}**
```
[root@rac3 ~]# crsctl set node role hub
CRS-4408: Node 'rac3' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.
```
Check the role information of each node:
```
[root@rac1 ~]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'hub', but active role is 'leaf'.
Restart Oracle High Availability Services for the new role to take effect.
[root@rac1 ~]# crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
Node 'rac3' active role is 'leaf', but configured role is 'hub'.
Restart Oracle High Availability Services for the new role to take effect.
```
As the command output indicates, CRS on that node must be restarted before the new configuration takes effect; in other words, **the role conversion cannot be performed online.**
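Before walking through the restart output in detail, the whole leaf-to-hub procedure can be summarized as follows (a sketch using only the commands shown in this article, run as root on rac3):
```
crsctl set node role hub        # change the configured role to hub
crsctl get node role config     # verify the configured role of the local node
crsctl stop crs                 # stop the CRS stack on this node
crsctl start crs -wait          # restart; the new role becomes active
crsctl get node role status     # verify the active role of the local node
```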
Stop the CRS stack on rac3:
```
[root@rac3 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac3'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'rac3'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'rac3'
CRS-2677: Stop of 'ora.orcl.db' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac3'
CRS-2673: Attempting to stop 'ora.LISTENER_LEAF.lsnr' on 'rac3'
CRS-2677: Stop of 'ora.LISTENER_LEAF.lsnr' on 'rac3' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.rac3.vip' on 'rac3'
CRS-2677: Stop of 'ora.rac3.vip' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.rac3.vip' on 'rac2'
CRS-2676: Start of 'ora.rac3.vip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac3'
CRS-2677: Stop of 'ora.net1.network' on 'rac3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac3' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac3'
CRS-2673: Attempting to stop 'ora.crf' on 'rac3'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac3' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac3'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac3'
CRS-2677: Stop of 'ora.mdnsd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac3'
CRS-2677: Stop of 'ora.cssd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'rac3'
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac3'
CRS-2677: Stop of 'ora.driver.afd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
```
With rac3's CRS stopped, check the node role information again:
```
[root@rac1 ~]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
[root@rac1 ~]# crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
```
Start the CRS stack on rac3:
```
[root@rac3 ~]# crsctl start crs -wait
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'rac3'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac3'
CRS-2676: Start of 'ora.mdnsd' on 'rac3' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac3'
CRS-2676: Start of 'ora.gpnpd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac3'
CRS-2676: Start of 'ora.gipcd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac3'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac3'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac3'
CRS-2676: Start of 'ora.diskmon' on 'rac3' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac3'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac3'
CRS-2676: Start of 'ora.ctssd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac3'
CRS-2676: Start of 'ora.crf' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac3'
CRS-2676: Start of 'ora.crsd' on 'rac3' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac3'
CRS-2676: Start of 'ora.drivers.acfs' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac3'
CRS-2676: Start of 'ora.asm' on 'rac3' succeeded
CRS-6017: Processing resource auto-start for servers: rac3
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac3'
CRS-2673: Attempting to stop 'ora.rac3.vip' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rac2'
CRS-2672: Attempting to start 'ora.ons' on 'rac3'
CRS-2677: Stop of 'ora.rac3.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.rac3.vip' on 'rac3'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'rac2'
CRS-2677: Stop of 'ora.scan2.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'rac3'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac3' succeeded
CRS-2676: Start of 'ora.rac3.vip' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'rac3'
CRS-2676: Start of 'ora.ons' on 'rac3' succeeded
CRS-2676: Start of 'ora.scan2.vip' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'rac3'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'rac3' succeeded
CRS-2679: Attempting to clean 'ora.asm' on 'rac3'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'rac3' succeeded
CRS-2681: Clean of 'ora.asm' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac3'
CRS-2676: Start of 'ora.asm' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac3'
CRS-2672: Attempting to start 'ora.FLEXDG.dg' on 'rac3'
CRS-2676: Start of 'ora.FLEXDG.dg' on 'rac3' succeeded
CRS-2676: Start of 'ora.DATA.dg' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.orcl.db' on 'rac3'
CRS-2672: Attempting to start 'ora.prod1.db' on 'rac3'
CRS-2676: Start of 'ora.orcl.db' on 'rac3' succeeded
CRS-2676: Start of 'ora.prod1.db' on 'rac3' succeeded
CRS-6016: Resource auto-start has completed for server rac3
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
```
After the startup completes, check each node's role information again:
```
[root@rac1 ~]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'hub'
[root@rac1 ~]# crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
Node 'rac3' active role is 'hub'
```
Now look at the status of the whole cluster:
```
[root@rac1 ~]# crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.DATA.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.FLEXDG.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.OCR.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.net1.network
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.ons
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.proxy_advm
OFFLINE OFFLINE rac1 STABLE
OFFLINE OFFLINE rac2 STABLE
OFFLINE OFFLINE rac3 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac3 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac2 STABLE
ora.MGMTLSNR
1 OFFLINE OFFLINE STABLE
ora.asm
1 ONLINE ONLINE rac1 Started,STABLE
2 ONLINE ONLINE rac2 Started,STABLE
3 ONLINE ONLINE rac3 Started,STABLE
ora.cvu
1 ONLINE ONLINE rac2 STABLE
ora.gns
1 ONLINE ONLINE rac1 STABLE
ora.gns.vip
1 ONLINE ONLINE rac1 STABLE
ora.orcl.db
1 ONLINE ONLINE rac3 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
2 ONLINE ONLINE rac2 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
3 ONLINE ONLINE rac1 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
ora.prod1.db
1 ONLINE ONLINE rac1 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
2 ONLINE ONLINE rac2 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
3 ONLINE ONLINE rac3 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
ora.qosmserver
1 OFFLINE OFFLINE STABLE
ora.rac1.vip
1 ONLINE ONLINE rac1 STABLE
ora.rac2.vip
1 ONLINE ONLINE rac2 STABLE
ora.rac3.vip
1 ONLINE ONLINE rac3 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac1 STABLE
ora.scan2.vip
1 ONLINE ONLINE rac3 STABLE
ora.scan3.vip
1 ONLINE ONLINE rac2 STABLE
--------------------------------------------------------------------------------
```
The orcl instance on rac3 is now in the Open state, instead of the previous Open,Readonly.
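A quick way to double-check this from the database side (again assuming the oracle user's environment is set for the local orcl instance on rac3):
```
[oracle@rac3 ~]$ srvctl status database -db orcl
[oracle@rac3 ~]$ sqlplus -S / as sysdba <<'EOF'
-- expected: READ WRITE now that rac3 is a hub node
select open_mode from v$database;
EOF
```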
# Hub to leaf
**In 12cR2, if you want to set a node's role to leaf node, the cluster's SCAN must be resolved through GNS.**
The cluster status output above shows that my test environment does have GNS configured. If it is not configured, the crsctl set node role leaf command will fail with an error.
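If you want to verify this prerequisite before attempting the conversion, the standard configuration commands can be used (a quick check; the output depends on your environment):
```
[root@rac1 ~]# srvctl config gns     # reports the GNS VIP and sub-domain, or that GNS is not configured
[root@rac1 ~]# srvctl config scan    # with GNS, the SCAN name falls under the GNS sub-domain
```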
```
[root@rac3 ~]# crsctl set node role leaf
CRS-4408: Node 'rac3' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.
```
As before, CRS on rac3 must be restarted for the configuration to take effect; the detailed restart output is omitted here.
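For reference, the restart is the same pair of commands used in the leaf-to-hub conversion:
```
[root@rac3 ~]# crsctl stop crs
[root@rac3 ~]# crsctl start crs -wait
```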
After the restart, the node role information of each node is as follows:
```
[root@rac1 ~]# crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
Node 'rac3' active role is 'leaf'
[root@rac1 ~]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'leaf'
```
The status of the whole cluster is now:
```
[root@rac1 ~]# crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.DATA.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.FLEXDG.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.LISTENER_LEAF.lsnr
OFFLINE OFFLINE rac3 STABLE
ora.OCR.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.net1.network
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.ons
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.proxy_advm
OFFLINE OFFLINE rac1 STABLE
OFFLINE OFFLINE rac2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac2 STABLE
ora.MGMTLSNR
1 OFFLINE OFFLINE STABLE
ora.asm
1 ONLINE ONLINE rac1 Started,STABLE
2 ONLINE ONLINE rac2 Started,STABLE
3 ONLINE OFFLINE Instance Shutdown,ST
ABLE
ora.cvu
1 ONLINE ONLINE rac2 STABLE
ora.gns
1 ONLINE ONLINE rac1 STABLE
ora.gns.vip
1 ONLINE ONLINE rac1 STABLE
ora.orcl.db
1 ONLINE ONLINE rac3 Open,Readonly,HOME=/
u01/app/oracle/produ
ct/12.2.0/dbhome_1,S
TABLE
2 ONLINE ONLINE rac2 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
3 ONLINE ONLINE rac1 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
ora.prod1.db
1 ONLINE ONLINE rac1 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
2 ONLINE ONLINE rac2 Open,HOME=/u01/app/o
racle/product/12.2.0
/dbhome_1,STABLE
3 ONLINE OFFLINE Instance Shutdown,ST
ABLE
ora.qosmserver
1 OFFLINE OFFLINE STABLE
ora.rac1.vip
1 ONLINE ONLINE rac1 STABLE
ora.rac2.vip
1 ONLINE ONLINE rac2 STABLE
ora.rac3.vip
1 ONLINE ONLINE rac3 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac1 STABLE
ora.scan2.vip
1 ONLINE ONLINE rac1 STABLE
ora.scan3.vip
1 ONLINE ONLINE rac2 STABLE
--------------------------------------------------------------------------------
```
Notice that after rac3 is switched to a leaf node, a new resource named ora.LISTENER_LEAF.lsnr appears, the ASM instance on rac3 no longer starts, and the database instance is once again opened read-only.
One point to note: the read-only database instance on a leaf node registers its services with the LISTENER_LEAF listener rather than with LISTENER, so the output of lsnrctl status never shows any registered services.
```
[root@rac3 ~]# srvctl start listener -listener LISTENER_LEAF
[grid@rac3 ~]$ lsnrctl status
LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 27-JUL-2017 16:46:01
Copyright (c) 1991, 2016, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 27-JUL-2017 16:24:27
Uptime 0 days 0 hr. 21 min. 34 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/12.2.0/grid/network/admin/listener.ora
Listener Log File /u01/app/grid/diag/tnslsnr/rac3/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.70.103)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.70.186)(PORT=1521)))
The listener supports no services
The command completed successfully
[grid@rac3 ~]$ lsnrctl status listener_leaf
LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 27-JUL-2017 16:46:02
Copyright (c) 1991, 2016, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_LEAF)))
STATUS of the LISTENER
------------------------
Alias LISTENER_LEAF
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 27-JUL-2017 16:44:31
Uptime 0 days 0 hr. 1 min. 31 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/12.2.0/grid/network/admin/listener.ora
Listener Log File /u01/app/grid/diag/tnslsnr/rac3/listener_leaf/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_LEAF)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.70.103)(PORT=1525)))
Services Summary...
Service "5491bed1838610f0e05366460a0a5736" has 1 instance(s).
Instance "orcl_1", status READY, has 1 handler(s) for this service...
Service "5507ca8c0abd4747e05365460a0a8d01" has 1 instance(s).
Instance "orcl_1", status READY, has 1 handler(s) for this service...
Service "orcl" has 1 instance(s).
Instance "orcl_1", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
Instance "orcl_1", status READY, has 1 handler(s) for this service...
Service "orclpdb" has 1 instance(s).
Instance "orcl_1", status READY, has 1 handler(s) for this service...
Service "ztp" has 1 instance(s).
Instance "orcl_1", status READY, has 1 handler(s) for this service...
The command completed successfully
```
Finally, note that the default listener port on a leaf node is 1525.
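In practice this means a client connecting directly to the reader instance must target the LISTENER_LEAF endpoint on port 1525 rather than the default 1521. A hypothetical EZConnect example (host name, user, and service name are placeholders to adapt to your environment):
```
# connect through LISTENER_LEAF on the leaf node (port 1525)
sqlplus system@//rac3:1525/orcl
```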
# Conclusions
* Converting a node's role requires restarting CRS on that node.
* In 12cR2, converting a node to a leaf node requires that GNS be configured.
* The ASM instance does not start on a leaf node, and database instances there can only open read-only.
* In 12cR1 the inventory also had to be updated manually; 12cR2 no longer requires this, which greatly simplifies the role-change procedure.