Thursday, June 4, 2009

Oracle11g RAC add node steps

I have already walked through this in a test environment. After going back over Oracle's official documentation, I am recording the steps below to check for any remaining blind spots:
Add a node


1 About Preparing Access to the New Node

To prepare the new node prior to installing the Oracle software, see Chapter 2, "Preparing Your Cluster".

It is critical that you follow the configuration steps in order for the following procedures to work. These steps include, but are not limited to the following:
Required, but not limited to, the following operations:

* Adding the public and private node names for the new node to the /etc/hosts file on the existing nodes, docrac1 and docrac2
Add the public and private names to /etc/hosts
* Verifying the new node can be accessed (using the ping command) from the existing nodes
Verify the new node with ping from the existing nodes
* Running the following command on either docrac1 or docrac2 to verify the new node has been properly configured:
Verify the new node from an existing node
cluvfy stage -pre crsinst -n docrac3
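The checklist above can be sketched as commands run from an existing node (docrac1 or docrac2). The IP addresses below are hypothetical placeholders for illustration, not values from the original guide:

```shell
# Hypothetical /etc/hosts entries for the new node (adjust to your network):
#   192.168.0.23   docrac3        # public
#   10.0.0.23      docrac3-priv   # private interconnect
#   192.168.0.33   docrac3-vip    # virtual IP
# Append them to /etc/hosts on every existing node, then verify reachability:
ping -c 3 docrac3
ping -c 3 docrac3-priv

# Pre-installation check with the Cluster Verification Utility, from docrac1 or docrac2:
cluvfy stage -pre crsinst -n docrac3
```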

2 Extending the Oracle Clusterware Home Directory
Installing Oracle Clusterware

Now that the new node has been configured to support Oracle Clusterware, you use Oracle Universal Installer (OUI) to add a CRS home to the node being added to your Oracle RAC cluster. This section assumes that you are adding a node named docrac3 and that you have already successfully installed Oracle Clusterware on docrac1 in a nonshared home, where CRS_home represents the successfully installed Oracle Clusterware home. Adding a new node to an Oracle RAC cluster is sometimes referred to as cloning.


To extend the Oracle Clusterware installation to include the new node:

1. Verify the ORACLE_HOME environment variable on docrac1 directs you to the successfully installed CRS home on that node.
Verify the ORACLE_HOME environment variable on the node running the installer
2. Go to CRS_home/oui/bin and run the addNode.sh script.
Run the addNode.sh script on the node running the installer

cd /crs/oui/bin
./addNode.sh

OUI starts and first displays the Welcome window.
3. Click Next.

The Specify Cluster Nodes to Add to Installation window appears.
4. Select the node or nodes that you want to add, for example, docrac3. Make sure the public, private and VIP names are configured correctly for the node you are adding. Click Next.

5. Verify the entries that OUI displays on the Summary window and click Next.
The Cluster Node Addition Progress window appears. During the installation process, you will be prompted to run scripts to complete the configuration.
6. Run the rootaddNode.sh script from the CRS_home/install/ directory on docrac1 as the root user when prompted to do so. For example:

[docrac1:oracle]$ su root
[docrac1:root]# cd /crs/install
[docrac1:root]# ./rootaddNode.sh
Run rootaddNode.sh on the node running the installer
This script adds the node applications of the new node to the Oracle Cluster Registry (OCR) configuration.
7. Run the orainstRoot.sh script on the node docrac3 if OUI prompts you to do so. When finished, click OK in the OUI window to continue with the installation.
Run the orainstRoot.sh script on the new node

Another window appears, prompting you to run the root.sh script.
Run the root.sh script on the new node

8. Run the CRS_home/root.sh script as the root user on the node docrac3 to start Oracle Clusterware on the new node.

[docrac3:oracle]$ su root
[docrac3:root]# cd /crs
[docrac3:root]# ./root.sh

9. Return to the OUI window after the script runs successfully, then click OK.

OUI displays the End of Installation window.
10. Exit the installer.
11. Obtain the Oracle Notification Services (ONS) port identifier used by the new node, which you need for the next step, by examining the ons.config file in the CRS_home/opmn/conf directory on the docrac1 node, as shown in the following example:

[docrac1:oracle]$ cd /crs/opmn/conf
[docrac1:oracle]$ cat ons.config
-- Reference --

[oracle@croracle01 conf]$ cat ons.config
localport=6150
useocr=on
allowgroup=true
usesharedinstall=true

--------
After you locate the ONS port identifier for the new node, you must make sure that the ONS on docrac1 can communicate with the ONS on the new node, docrac3.

12. Add the new node's ONS configuration information to the shared OCR. From the CRS_home/bin directory on the node docrac1, run the ONS configuration utility as shown in the following example, where remote_port is the port identifier from Step 11, and docrac3 is the name of the node that you are adding:
Add the new node's ONS information to the shared OCR

[docrac1:oracle]$ ./racgons add_config docrac3:remote_port
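For example, if Step 11 showed a remote port of 6200 (an assumed value; substitute what your ons.config actually reports), the call would look like:

```shell
# Run from CRS_home/bin on docrac1 (CRS home /crs, as in the earlier examples).
cd /crs/bin
./racgons add_config docrac3:6200
```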
You should now have Oracle Clusterware running on the new node. To verify the installation of Oracle Clusterware on the new node, you can run the following command as the root user on the newly configured node, docrac3:

[docrac1:oracle]$ /opt/oracle/crs/bin/cluvfy stage -post crsinst -n docrac3 -verbose
-- Reference --

[oracle@croracle01 ~]$ cluvfy stage -post crsinst -n croracle04 -verbose

Performing post-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "croracle01"
Destination Node Reachable?
------------------------------------ ------------------------
croracle04 yes
Result: Node reachability check passed from node "croracle01".


Checking user equivalence...

Check: User equivalence for user "oracle"
Node Name Comment
------------------------------------ ------------------------
croracle04 passed
Result: User equivalence check passed for user "oracle".

Checking Cluster manager integrity...


Checking CSS daemon...
Node Name Status
------------------------------------ ------------------------
croracle04 running
Result: Daemon status check passed for "CSS daemon".

Cluster manager integrity check passed.

Checking cluster integrity...

Node Name
------------------------------------
croracle01
croracle02
croracle03
croracle04

Cluster integrity check passed


Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.

Uniqueness check for OCR device passed.

Checking the version of OCR...
OCR of correct Version "2" exists.

Checking data integrity of OCR...
Data integrity check for OCR passed.

OCR integrity check passed.

Checking CRS integrity...

Checking daemon liveness...

Check: Liveness for "CRS daemon"
Node Name Running
------------------------------------ ------------------------
croracle04 yes
Result: Liveness check passed for "CRS daemon".

Checking daemon liveness...

Check: Liveness for "CSS daemon"
Node Name Running
------------------------------------ ------------------------
croracle04 yes
Result: Liveness check passed for "CSS daemon".

Checking daemon liveness...

Check: Liveness for "EVM daemon"
Node Name Running
------------------------------------ ------------------------
croracle04 yes
Result: Liveness check passed for "EVM daemon".

Liveness of all the daemons
Node Name CRS daemon CSS daemon EVM daemon
------------ ------------------------ ------------------------ ----------
croracle04 yes yes yes

Checking CRS health...

Check: Health of CRS
Node Name CRS OK?
------------------------------------ ------------------------
croracle04 yes
Result: CRS health check passed.

CRS integrity check passed.

Checking node application existence...

Checking existence of VIP node application
Node Name Required Status Comment
------------ ------------------------ ------------------------ ----------
croracle04 yes exists passed
Result: Check passed.

Checking existence of ONS node application
Node Name Required Status Comment
------------ ------------------------ ------------------------ ----------
croracle04 no exists passed
Result: Check passed.

Checking existence of GSD node application
Node Name Required Status Comment
------------ ------------------------ ------------------------ ----------
croracle04 no exists passed
Result: Check passed.


Post-check for cluster services setup was successful.
[oracle@croracle01 ~]$

---------


3 Extending the Automatic Storage Management Home Directory
Extending ASM

To extend an existing Oracle RAC database to a new node, you must configure the shared storage for the new database instances that will be created on new node. You must configure access to the same shared storage that is already used by the existing database instances in the cluster. For example, the sales cluster database in this guide uses Automatic Storage Management (ASM) for the database shared storage, so you must configure ASM on the node being added to the cluster.

Because you installed ASM in its own home directory, you must configure an ASM home on the new node using OUI. The procedure for adding an ASM home to the new node is very similar to the procedure you just completed for extending Oracle Clusterware to the new node.
Using OUI (graphical mode)
Note:
If the ASM home directory is the same as the Oracle home directory in your installation, then you do not need to complete the steps in this section.
Steps:
To extend the ASM installation to include the new node:

1. Ensure that you have successfully installed the ASM software on at least one node in your cluster environment. In the following steps, ASM_home refers to the location of the successfully installed ASM software.
The ASM_home directory on the node running the installer
2. Go to the ASM_home/oui/bin directory on docrac1 and run the addNode.sh script.
3. When OUI displays the Node Selection window, select the node to be added (docrac3), and then click Next.
4. Verify the entries that OUI displays on the Summary window, and then click Next.
5. Run the root.sh script on the new node, docrac3, from the ASM home directory on that node when OUI prompts you to do so.
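The ASM steps above mirror the Clusterware extension. A minimal sketch, assuming an ASM home of /opt/oracle/asm (a hypothetical path; use your actual ASM_home):

```shell
# On docrac1, as the oracle user:
cd /opt/oracle/asm/oui/bin
./addNode.sh
# In OUI: select docrac3, verify the summary, and wait for the root-script prompt.
# Then, on docrac3 as root, when OUI prompts you:
#   cd /opt/oracle/asm && ./root.sh
```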

You now have a copy of the ASM software on the new node.


4 Extending the Oracle RAC Home Directory
This can be understood as installing the Oracle Database software

Now that you have extended the CRS home and ASM home to the new node, you must extend the Oracle home on docrac1 to docrac3. The following steps assume that you have already completed the previous tasks described in this section, and that docrac3 is already a member node of the cluster to which docrac1 belongs.

The procedure for adding an Oracle home to the new node is very similar to the procedure you just completed for extending ASM to the new node.

To extend the Oracle RAC installation to include the new node:

1. Ensure that you have successfully installed the Oracle RAC software on at least one node in your cluster environment. To use these procedures as shown, replace Oracle_home with the location of your installed Oracle home directory.
2. Go to the Oracle_home/oui/bin directory on docrac1 and run the addNode.sh script.
Run addNode.sh from the node running the installer
3. When OUI displays the Specify Cluster Nodes to Add to Installation window, select the node to be added (docrac3), and then click Next.
4. Verify the entries that OUI displays in the Cluster Node Addition Summary window, and then click Next.
The Cluster Node Addition Progress window appears.
5. When prompted to do so, run the root.sh script as the root user on the new node, docrac3, from the Oracle home directory on that node.
6. Return to the OUI window and click OK. The End of Installation window appears.
7. Exit the installer.
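As with the previous sections, these steps reduce to a short command sequence; /opt/oracle/db below is a hypothetical Oracle home path, not a value from the original guide:

```shell
# On docrac1, as the oracle user:
cd /opt/oracle/db/oui/bin
./addNode.sh
# In OUI: select docrac3 in the Specify Cluster Nodes window, confirm the summary.
# Then, on docrac3 as root, when prompted:
#   cd /opt/oracle/db && ./root.sh
```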

After completing these steps, you should have an installed Oracle home on the new node.


5 Adding an Instance to the Cluster Database
Here I used DBCA, which differs from the reference material.
You can use Enterprise Manager to add an instance to your cluster database. You must first configure the new node to be part of the cluster and install the software on the new node.
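Since I used DBCA rather than Enterprise Manager, the equivalent silent-mode invocation would be roughly as follows. The database name sales, instance name sales3, and password are illustrative assumptions; adjust them for your environment:

```shell
# Run from the Oracle home on an existing node (e.g. docrac1), as the oracle user.
# All names and the password here are hypothetical.
dbca -silent -addInstance \
     -nodeList docrac3 \
     -gdbName sales \
     -instanceName sales3 \
     -sysDBAUserName sys \
     -sysDBAPassword oracle
```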

Main reference: the Oracle 11g online documentation.
Oracle® Database 2 Day + Real Application Clusters Guide
11g Release 1 (11.1)
9 Adding and Deleting Nodes and Instances
