
Adding a New Node to an Oracle 10.2.0.3 RAC Environment (Part 2)

 

This series briefly describes the process of adding a node to a two-node Oracle 10.2.0.3 RAC environment on Solaris SPARC. The shared storage has already been configured on the third node, so the focus here is on the operating-system and Oracle-side configuration.

This part covers the installation of the Clusterware (CRS) software.

 

 

On the new node racnode3, grant the oracle user ownership of the raw devices on the shared storage:

bash-3.00# chown oracle:oinstall /dev/rdsk/c1t500601603022E66Ad*
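
To confirm that the ownership change took effect on the underlying character devices, a quick check such as the following can be run on racnode3 (ls -L follows the /dev/rdsk symbolic links to the actual device nodes):

bash-3.00# ls -lL /dev/rdsk/c1t500601603022E66Ad*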

Create the same /dev/rac/vot and /dev/rac/ocr links as on node 1 and node 2, pointing to the same raw device files:

bash-3.00# mkdir /dev/rac

bash-3.00# ln -s /dev/rdsk/c1t500601603022E66Ad2s1 /dev/rac/ocr

bash-3.00# ln -s /dev/rdsk/c1t500601603022E66Ad2s3 /dev/rac/vot
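
A quick listing can then verify that both links exist and point to the same slices used on racnode1 and racnode2:

bash-3.00# ls -l /dev/rac/ocr /dev/rac/vot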

On the original installation node, run $ORA_CRS_HOME/oui/bin/addNode.sh to bring up the graphical interface for adding the new node:

bash-2.03$ ./addNode.sh

Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be 5.8, 5.9 or 5.10.    Actual 5.8    Passed

Checking Temp space: must be greater than 150 MB.   Actual 8715 MB    Passed

Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed

All installer requirements met.

Oracle Universal Installer, Version 10.2.0.1.0 Production

Copyright (C) 1999, 2005, Oracle. All rights reserved.
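
Note that addNode.sh brings up the OUI graphical interface, so before it is launched the shell needs DISPLAY pointed at a reachable X server. A minimal sketch (adminws is a placeholder for the X server host):

bash-2.03$ DISPLAY=adminws:0.0
bash-2.03$ export DISPLAY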

In the graphical interface, enter the PUBLIC, PRIVATE, and VIP names for the new node.
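
These names must already resolve identically on all three nodes before the installer is run. A sketch of the corresponding /etc/hosts (/etc/inet/hosts on Solaris) entries is shown below; the public and private addresses are placeholders, only the VIP address (172.25.198.227) is given later in this article:

172.25.198.224    racnode3         # public, placeholder address
192.168.1.3       racnode3-priv    # private interconnect, placeholder address
172.25.198.227    racnode3-vip     # virtual IP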

During the installation Oracle skipped some errors that it hit during remote execution, and these need to be corrected manually. Run the following on racnode3:

bash-3.00$ /data/oracle/product/10.2/crs/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/data/oracle/product/10.2/crs ORACLE_HOME_NAME=OraCrs10g_home1 CLUSTER_NODES=racnode1,racnode2,racnode3 CRS=true "INVENTORY_LOCATION=/data/oracle/oraInventory" LOCAL_NODE=racnode3

Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be 5.8, 5.9 or 5.10.    Actual 5.10    Passed

Checking Temp space: must be greater than 250 MB.   Actual 10167 MB    Passed

Checking swap space: must be greater than 500 MB.   Actual 7415 MB    Passed

All installer requirements met.

'AttachHome' was successful.
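
After the 'AttachHome' step, the CRS home and the full node list should be visible in the central inventory on racnode3. Assuming the standard ContentsXML layout of the OUI central inventory, this can be spot-checked with something like:

bash-3.00$ grep -i racnode /data/oracle/oraInventory/ContentsXML/inventory.xml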

Run the following on node 2:

bash-2.03$ /data/oracle/product/10.2/crs/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/data/oracle/product/10.2/crs CLUSTER_NODES=racnode1,racnode2,racnode3 CRS=true "INVENTORY_LOCATION=/data/oracle/oraInventory" LOCAL_NODE=racnode2

Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be 5.8, 5.9 or 5.10.    Actual 5.8    Passed

Checking Temp space: must be greater than 150 MB.   Actual 8233 MB    Passed

Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed

All installer requirements met.

'UpdateNodeList' was successful.

Run a similar command on node 3:

bash-3.00$ /data/oracle/product/10.2/crs/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/data/oracle/product/10.2/crs CLUSTER_NODES=racnode1,racnode2,racnode3 CRS=true "INVENTORY_LOCATION=/data/oracle/oraInventory" LOCAL_NODE=racnode3

Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be 5.8, 5.9 or 5.10.    Actual 5.10    Passed

Checking Temp space: must be greater than 250 MB.   Actual 10167 MB    Passed

Checking swap space: must be greater than 500 MB.   Actual 7415 MB    Passed

Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed

All installer requirements met.

'UpdateNodeList' was successful.

The following scripts then need to be run in order:

1. /data/oracle/oraInventory/orainstRoot.sh on node 3:

bash-3.00# ./data/oracle/oraInventory/orainstRoot.sh

Creating the Oracle inventory pointer file (/var/opt/oracle/oraInst.loc)

Changing permissions of /data/oracle/oraInventory to 770.

Changing groupname of /data/oracle/oraInventory to oinstall.

The execution of the script is complete
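
The inventory pointer file that this script creates can be verified on racnode3; on Solaris it lives under /var/opt/oracle, as the output above shows:

bash-3.00# cat /var/opt/oracle/oraInst.loc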

2. /data/oracle/product/10.2/crs/install/rootaddnode.sh on node 1:

# ./data/oracle/product/10.2/crs/install/rootaddnode.sh

clscfg: EXISTING configuration version 3 detected.

clscfg: version 3 is 10G Release 2.

Attempting to add 1 new nodes to the configuration

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 3: racnode3 racnode3-priv racnode3

Creating OCR keys for user 'root', privgrp 'other'..

Operation successful.

/data/oracle/product/10.2/crs/bin/srvctl add nodeapps -n racnode3 -A racnode3-vip/255.255.255.0/ce0 -o /data/oracle/product/10.2/crs
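
After rootaddnode.sh completes, racnode3 should appear in the cluster node list held in the OCR; a quick check from node 1, for example:

# /data/oracle/product/10.2/crs/bin/olsnodes -n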

3. /data/oracle/product/10.2/crs/root.sh on node 3:

bash-3.00# ./data/oracle/product/10.2/crs/root.sh

WARNING: directory '/data/oracle/product/10.2' is not owned by root

WARNING: directory '/data/oracle/product' is not owned by root

WARNING: directory '/data/oracle' is not owned by root

WARNING: directory '/data' is not owned by root

Checking to see if Oracle CRS stack is already configured

OCR LOCATIONS =  /dev/rac/ocr

OCR backup directory '/data/oracle/product/10.2/crs/cdata/crs' does not exist. Creating now

Setting the permissions on OCR backup directory

Setting up NS directories

Oracle Cluster Registry configuration upgraded successfully

WARNING: directory '/data/oracle/product/10.2' is not owned by root

WARNING: directory '/data/oracle/product' is not owned by root

WARNING: directory '/data/oracle' is not owned by root

WARNING: directory '/data' is not owned by root

clscfg: EXISTING configuration version 3 detected.

clscfg: version 3 is 10G Release 2.

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 1: racnode1 racnode1-priv racnode1

node 2: racnode2 racnode2-priv racnode2

clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.

-force is destructive and will destroy any previous cluster

configuration.

Oracle Cluster Registry for cluster has already been initialized

Startup will be queued to init within 30 seconds.

Adding daemons to inittab

Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.

        racnode1

        racnode2

        racnode3

CSS is active on all nodes.

Waiting for the Oracle CRSD and EVMD to start

Oracle CRS stack installed and running under init(1M)

Running vipca(silent) for configuring nodeapps

IP address "racnode1-vip" has already been used. Enter an unused IP address.

Because the VIP configuration failed, vipca has to be launched manually. From a graphical session, run /data/oracle/product/10.2/crs/bin/vipca as root, enter the VIP name and IP address for racnode3 (racnode3-vip, 172.25.198.227), and click OK.

Once the VIP is configured successfully, return to the addNode graphical tool and click Continue.

At this point, the new cluster node has been added successfully.
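
To confirm that the new node's Clusterware stack and node applications are healthy, checks along these lines can be run on racnode3 (crsctl as root, the others as the oracle user):

bash-3.00# /data/oracle/product/10.2/crs/bin/crsctl check crs
bash-3.00$ /data/oracle/product/10.2/crs/bin/srvctl status nodeapps -n racnode3
bash-3.00$ /data/oracle/product/10.2/crs/bin/crs_stat -t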

 
