Using runcluvfy to Validate the Oracle RAC Installation Environment

  --*****************************************

  -- Using runcluvfy to validate the Oracle RAC installation environment

  --*****************************************

  As the saying goes, to do a good job one must first sharpen one's tools. Installing Oracle RAC is a sizable undertaking, and without proper up-front planning and configuration the installation becomes far more complex than you might expect. Fortunately, the runcluvfy tool greatly simplifies this work. The demonstration below is based on an Oracle 10g RAC installation on Linux.

  1. Run runcluvfy from the installation media to perform the pre-installation checks

  [oracle@node1 cluvfy]$ pwd

  /u01/Clusterware/clusterware/cluvfy

  [oracle@node1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

  Performing pre-checks for cluster services setup

  Checking node reachability...

  Check: Node reachability from node "node1"

  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  node1                                 yes
  node2                                 yes

  Result: Node reachability check passed from node "node1".

  Checking user equivalence...

  Check: User equivalence for user "oracle"

  Node Name                             Comment
  ------------------------------------  ------------------------
  node2                                 passed
  node1                                 passed

  Result: User equivalence check passed for user "oracle".

  Checking administrative privileges...

  Check: Existence of user "oracle"

  Node Name     User Exists               Comment
  ------------  ------------------------  ------------------------
  node2         yes                       passed
  node1         yes                       passed

  Result: User existence check passed for "oracle".

  Check: Existence of group "oinstall"

  Node Name     Status                    Group ID
  ------------  ------------------------  ------------------------
  node2         exists                    500
  node1         exists                    500

  Result: Group existence check passed for "oinstall".

  Check: Membership of user "oracle" in group "oinstall" [as Primary]

  Node Name         User Exists   Group Exists  User in Group  Primary       Comment
  ----------------  ------------  ------------  -------------  ------------  ------------
  node2             yes           yes           yes            yes           passed
  node1             yes           yes           yes            yes           passed

  Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.

  Administrative privileges check passed.

  Checking node connectivity...

  Interface information for node "node2"

  Interface Name                  IP Address                      Subnet
  ------------------------------  ------------------------------  ----------------
  eth0                            192.168.0.12                    192.168.0.0
  eth1                            10.101.0.12                     10.101.0.0

  Interface information for node "node1"

  Interface Name                  IP Address                      Subnet
  ------------------------------  ------------------------------  ----------------
  eth0                            192.168.0.11                    192.168.0.0
  eth1                            10.101.0.11                     10.101.0.0

  Check: Node connectivity of subnet "192.168.0.0"

  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node2:eth0                      node1:eth0                      yes

  Result: Node connectivity check passed for subnet "192.168.0.0" with node(s) node2,node1.

  Check: Node connectivity of subnet "10.101.0.0"

  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node2:eth1                      node1:eth1                      yes

  Result: Node connectivity check passed for subnet "10.101.0.0" with node(s) node2,node1.

  Suitable interfaces for the private interconnect on subnet "192.168.0.0":

  node2 eth0:192.168.0.12

  node1 eth0:192.168.0.11

  Suitable interfaces for the private interconnect on subnet "10.101.0.0":

  node2 eth1:10.101.0.12

  node1 eth1:10.101.0.11

  ERROR:

  Could not find a suitable set of interfaces for VIPs.

  Result: Node connectivity check failed.

  Checking system requirements for 'crs'...

  Check: Total memory

  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node2         689.38MB (705924KB)       512MB (524288KB)          passed
  node1         689.38MB (705924KB)       512MB (524288KB)          passed

  Result: Total memory check passed.

  Check: Free disk space in "/tmp" dir

  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node2         4.22GB (4428784KB)        400MB (409600KB)          passed
  node1         4.22GB (4426320KB)        400MB (409600KB)          passed

  Result: Free disk space check passed.

  Check: Swap space

  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node2         2GB (2096472KB)           1GB (1048576KB)           passed
  node1         2GB (2096472KB)           1GB (1048576KB)           passed

  Result: Swap space check passed.

  Check: System architecture

  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node2         i686                      i686                      passed
  node1         i686                      i686                      passed

  Result: System architecture check passed.

  Check: Kernel version

  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node2         2.6.18-194.el5            2.4.21-15EL               passed
  node1         2.6.18-194.el5            2.4.21-15EL               passed

  Result: Kernel version check passed.

  Check: Package existence for "make-3.79"

  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node2                           make-3.81-3.el5                 passed
  node1                           make-3.81-3.el5                 passed

  Result: Package existence check passed for "make-3.79".

  Check: Package existence for "binutils-2.14"

  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node2                           binutils-2.17.50.0.6-14.el5     passed
  node1                           binutils-2.17.50.0.6-14.el5     passed

  Result: Package existence check passed for "binutils-2.14".

  Check: Package existence for "gcc-3.2"

  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node2                           gcc-4.1.2-48.el5                passed
  node1                           gcc-4.1.2-48.el5                passed

  Result: Package existence check passed for "gcc-3.2".

  Check: Package existence for "glibc-2.3.2-95.27"

  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node2                           glibc-2.5-49                    passed
  node1                           glibc-2.5-49                    passed

  Result: Package existence check passed for "glibc-2.3.2-95.27".

  Check: Package existence for "compat-db-4.0.14-5"

  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node2                           compat-db-4.2.52-5.1            passed
  node1                           compat-db-4.2.52-5.1            passed

  Result: Package existence check passed for "compat-db-4.0.14-5".

  Check: Package existence for "compat-gcc-7.3-2.96.128"

  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node2                           missing                         failed
  node1                           missing                         failed

  Result: Package existence check failed for "compat-gcc-7.3-2.96.128".

  Check: Package existence for "compat-gcc-c++-7.3-2.96.128"

  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node2                           missing                         failed
  node1                           missing                         failed

  Result: Package existence check failed for "compat-gcc-c++-7.3-2.96.128".

  Check: Package existence for "compat-libstdc++-7.3-2.96.128"

  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node2                           missing                         failed
  node1                           missing                         failed

  Result: Package existence check failed for "compat-libstdc++-7.3-2.96.128".

  Check: Package existence for "compat-libstdc++-devel-7.3-2.96.128"

  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node2                           missing                         failed
  node1                           missing                         failed

  Result: Package existence check failed for "compat-libstdc++-devel-7.3-2.96.128".

  Check: Package existence for "openmotif-2.2.3"

  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node2                           openmotif-2.3.1-2.el5_4.1       passed
  node1                           openmotif-2.3.1-2.el5_4.1       passed

  Result: Package existence check passed for "openmotif-2.2.3".

  Check: Package existence for "setarch-1.3-1"

  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node2                           setarch-2.0-1.1                 passed
  node1                           setarch-2.0-1.1                 passed

  Result: Package existence check passed for "setarch-1.3-1".

  Check: Group existence for "dba"

  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node2         exists                    passed
  node1         exists                    passed

  Result: Group existence check passed for "dba".

  Check: Group existence for "oinstall"

  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node2         exists                    passed
  node1         exists                    passed

  Result: Group existence check passed for "oinstall".

  Check: User existence for "nobody"

  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node2         exists                    passed
  node1         exists                    passed

  Result: User existence check passed for "nobody".

  System requirement failed for 'crs'

  Pre-check for cluster services setup was unsuccessful on all the nodes.

  The error "Could not find a suitable set of interfaces for VIPs." reported above can safely be ignored: it is a known bug, documented in detail on Metalink as Doc ID 338924.1, which is reproduced at the end of this article.

  As for the packages reported as failed above, install them on the system wherever possible.
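  A quick way to confirm which of those compat packages are actually missing, and to install any that are, is sketched below. This is only a sketch: the package names are the ones cluvfy reports above, and yum will only succeed if the packages exist in a configured repository or on your installation media.

  # Check each compat package cluvfy flagged; install it if missing.
  for p in compat-gcc compat-gcc-c++ compat-libstdc++ compat-libstdc++-devel; do
      rpm -q "$p" || yum install -y "$p"
  done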

  2. Checks after installing Clusterware; note that the cluvfy executed this time is the one in the installed Clusterware home

  [oracle@node1 bin]$ pwd

  /u01/app/oracle/product/10.2.0/crs_1/bin

  [oracle@node1 bin]$ ./cluvfy stage -post crsinst -n node1,node2

  Performing post-checks for cluster services setup

  Checking node reachability...

  Node reachability check passed from node "node1".

  Checking user equivalence...

  User equivalence check passed for user "oracle".

  Checking Cluster manager integrity...

  Checking CSS daemon...

  Daemon status check passed for "CSS daemon".

  Cluster manager integrity check passed.

  Checking cluster integrity...

  Cluster integrity check passed

  Checking OCR integrity...

  Checking the absence of a non-clustered configuration...

  All nodes free of non-clustered, local-only configurations.

  Uniqueness check for OCR device passed.

  Checking the version of OCR...

  OCR of correct Version "2" exists.

  Checking data integrity of OCR...

  Data integrity check for OCR passed.

  OCR integrity check passed.

  Checking CRS integrity...

  Checking daemon liveness...

  Liveness check passed for "CRS daemon".

  Checking daemon liveness...

  Liveness check passed for "CSS daemon".

  Checking daemon liveness...

  Liveness check passed for "EVM daemon".

  Checking CRS health...

  CRS health check passed.

  CRS integrity check passed.

  Checking node application existence...

  Checking existence of VIP node application (required)

  Check passed.

  Checking existence of ONS node application (optional)

  Check passed.

  Checking existence of GSD node application (optional)

  Check passed.

  Post-check for cluster services setup was successful.

  As the checks above show, the Clusterware background daemons, the nodeapps resources, and the OCR are all in the passed state, which confirms that Clusterware was installed successfully.
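  As an additional sanity check at this stage, the 10g Clusterware command-line tools can be queried directly. A minimal example, assuming the CRS home's bin directory is on the PATH:

  # List the cluster nodes known to CRS, then show the registered
  # resources and their state in tabular form.
  olsnodes -n
  crs_stat -t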

  3. cluvfy usage

  [oracle@node1 ~]$ cluvfy -help # the -help option prints cluvfy's usage information

  USAGE:

  cluvfy [ -help ]

  cluvfy stage { -list | -help }

  cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

  cluvfy comp { -list | -help }

  cluvfy comp <component-name> <component-specific options> [-verbose]

  [oracle@node1 ~]$ cluvfy comp -list

  USAGE:

  cluvfy comp <component-name> <component-specific options> [-verbose]

  Valid components are:

  nodereach : checks reachability between nodes
  nodecon   : checks node connectivity
  cfs       : checks CFS integrity
  ssa       : checks shared storage accessibility
  space     : checks space availability
  sys       : checks minimum system requirements
  clu       : checks cluster integrity
  clumgr    : checks cluster manager integrity
  ocr       : checks OCR integrity
  crs       : checks CRS integrity
  nodeapp   : checks node applications existence
  admprv    : checks administrative privileges
  peer      : compares properties with peers
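  Individual components can also be checked on their own. Two illustrative examples, using the node names from this article (the options shown are standard cluvfy syntax, but verify against your version's -help output):

  # Re-check only node connectivity between the two nodes, verbosely.
  cluvfy comp nodecon -n node1,node2 -verbose

  # Check the administrative privileges required for a CRS installation.
  cluvfy comp admprv -n node1,node2 -o crs_inst -verbose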

  4. Metalink Note ID 338924.1

  CLUVFY Fails With Error: Could not find a suitable set of interfaces for VIPs [ID 338924.1]

  ________________________________________

  Modified: 29-JUL-2010    Type: PROBLEM    Status: PUBLISHED

  In this Document

  Symptoms

  Cause

  Solution

  References

  ________________________________________

  Applies to:

  Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.1.0.7 - Release: 10.2 to 11.1

  Information in this document applies to any platform.

  Symptoms

  When running cluvfy to check network connectivity at various stages of the RAC/CRS installation process, cluvfy fails

  with errors similar to the following:

  =========================

  Suitable interfaces for the private interconnect on subnet "10.0.0.0":

  node1 eth0:10.0.0.1

  node2 eth0:10.0.0.2

  Suitable interfaces for the private interconnect on subnet "192.168.1.0":

  node1_internal eth1:192.168.1.2

  node2_internal eth1:192.168.1.1

  ERROR:

  Could not find a suitable set of interfaces for VIPs.

  Result: Node connectivity check failed.

  ========================

  On Oracle 11g, you may still see a warning in some cases, such as:

  ========================

  WARNING:

  Could not find a suitable set of interfaces for VIPs.

  ========================

  Output seen will be comparable to that noted above, but IP addresses and node_names may be different - i.e. the node names

  of 'node1','node2','node1_internal','node2_internal' will be substituted with your actual Public and Private node names.

  A second problem that will be encountered in this situation is that at the end of the CRS installation for 10gR2, VIPCA

  will be run automatically in silent mode, as one of the 'optional' configuration assistants. In this scenario, the VIPCA

  will fail at the end of the CRS installation. The InstallActions log will show output such as:

  Oracle CRS stack installed and running under init(1M)
  Running vipca(silent) for configuring nodeapps
  The given interface(s), "eth0" is not public. Public interfaces should
  be used to configure virtual IPs.

  Cause

  This issue occurs due to incorrect assumptions made in cluvfy and vipca based on an Internet Best Practice document -

  "RFC1918 - Address Allocation for Private Internets". This Internet Best Practice RFC can be viewed here:

  http://www.faqs.org/rfcs/rfc1918.html

  From an Oracle perspective, this issue is tracked in BUG:4437727.

  Per BUG:4437727, cluvfy makes an incorrect assumption based on RFC 1918 that any IP address/subnet that begins with any

  of the following octets is private and hence may not be fit for use as a VIP:

  172.16.x.x through 172.31.x.x

  192.168.x.x

  10.x.x.x

  However, this assumption does not take into account that it is possible to use these IPs as Public IP's on an internal

  network (or intranet). Therefore, it is very common to use IP addresses in these ranges as Public IP's and as Virtual

  IP(s), and this is a supported configuration.

  Solution

  The solution to the error above that is given when running 'cluvfy' is to simply ignore it if you intend to use an IP in

  one of the above ranges for your VIP. The installation and configuration can continue with no corrective action necessary.

  One result of this, as noted in the problem section, is that the silent VIPCA will fail at the end of the 10gR2 CRS

  installation. This is because VIPCA is running in silent mode and is trying to notify that the IPs that were provided

  may not be fit to be used as VIP(s). To correct this, you can manually execute the VIPCA GUI after the CRS installation

  is complete. VIPCA needs to be executed from the CRS_HOME/bin directory as the 'root' user (on Unix/Linux) or as a

  Local Administrator (on Windows):

  $ cd $ORA_CRS_HOME/bin

  $ ./vipca

  Follow the prompts for VIPCA to select the appropriate interface for the public network, and assign the VIPs for each node

  when prompted. Manually running VIPCA in the GUI mode, using the same IP addresses, should complete successfully.

  Note that if you patch to 10.2.0.3 or above, VIPCA will run correctly in silent mode. The command to re-run vipca

  silently can be found in CRS_HOME/cfgtoollogs in the file 'configToolAllCommands' or

  'configToolFailedCommands'. Thus, in the case of a new install, the silent mode VIPCA command will fail after the

  10.2.0.1 base release install, but once the CRS Home is patched to 10.2.0.3 or above, vipca can be re-run silently,

  without the need to invoke the GUI tool.

  References

  NOTE:316583.1 - VIPCA FAILS COMPLAINING THAT INTERFACE IS NOT PUBLIC


  The note above is lengthy; the practical fix is summarized below.

  On the node where the error occurs, edit the vipca file:

  [root@node2 ~]# vi $ORA_CRS_HOME/bin/vipca

  Locate the following lines:

  #Remove this workaround when the bug 3937317 is fixed
  arch=`uname -m`
  if [ "$arch" = "i686" -o "$arch" = "ia64" ]
  then
      LD_ASSUME_KERNEL=2.4.19
      export LD_ASSUME_KERNEL
  fi
  #End workaround

  Add a new line immediately after the fi (the resulting block is shown below):

  unset LD_ASSUME_KERNEL
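  After the edit, the workaround block in vipca should read as follows (the unset line is the one being added):

  #Remove this workaround when the bug 3937317 is fixed
  arch=`uname -m`
  if [ "$arch" = "i686" -o "$arch" = "ia64" ]
  then
      LD_ASSUME_KERNEL=2.4.19
      export LD_ASSUME_KERNEL
  fi
  unset LD_ASSUME_KERNEL
  #End workaround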

  Make the same change in the srvctl file:

  [root@node2 ~]# vi $ORA_CRS_HOME/bin/srvctl

  Locate the following lines:

  LD_ASSUME_KERNEL=2.4.19

  export LD_ASSUME_KERNEL

  and likewise add a new line right after them:

  unset LD_ASSUME_KERNEL

  Save and exit, then re-run root.sh on the node where the failure occurred.
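  If you prefer to script the change, here is a minimal sketch, assuming GNU sed and that ORA_CRS_HOME points at the CRS home. Appending the unset immediately after the export line has the same net effect as adding it after the fi:

  # Sketch only: back up vipca and srvctl, then append the unset after
  # every "export LD_ASSUME_KERNEL" line in each file.
  for f in $ORA_CRS_HOME/bin/vipca $ORA_CRS_HOME/bin/srvctl; do
      cp "$f" "$f.bak"
      sed -i '/export LD_ASSUME_KERNEL/a\unset LD_ASSUME_KERNEL' "$f"
  done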