
Adding a Node to Oracle 11g RAC

In an Oracle RAC environment, when the cluster load grows and you need more processing capacity, you can add a node to scale out.

Assumptions

The existing RAC environment has db_name TESTRAC, the sys password is MANAGER, and the node names are rac1 and rac2.

The new node is rac3. For how the original RAC environment was installed and configured, see the Oracle 11g RAC installation notes.

Pre-installation configuration

The following steps are all performed on the new node. Only the step names are listed here; for the details of each step, refer to the Oracle 11g RAC installation notes.

1. Install the operating system

2. Install the required packages

3. Install pdksh

4. Configure the kernel parameters

5. Set the system resource limits

6. Add the required users and groups

7. Set up the hosts file (this must also be synchronized to the existing RAC nodes)

    [root@rac3 ~]# cat > /etc/hosts <<EOF
    127.0.0.1       localhost.localdomain   localhost
    # Public
    192.168.0.111   rac1.localdomain        rac1
    192.168.0.112   rac2.localdomain        rac2
    192.168.0.123   rac3.localdomain        rac3
    # Private
    172.16.1.111   rac1-priv.localdomain   rac1-priv
    172.16.1.112   rac2-priv.localdomain   rac2-priv
    172.16.1.123   rac3-priv.localdomain   rac3-priv
    # Virtual
    192.168.0.113   rac1-vip.localdomain    rac1-vip
    192.168.0.114   rac2-vip.localdomain    rac2-vip
    192.168.0.124   rac3-vip.localdomain    rac3-vip
    # SCAN
    192.168.0.115   scan.localdomain scan
    192.168.0.116   scan.localdomain scan
    192.168.0.117   scan.localdomain scan
    EOF

8. Set the default process limit for the oracle user

9. Disable SELinux

10. Disable iptables

11. Create the required directories and set their permissions

12. Add the environment variables

13. Create the Grid (cluster) environment variable file

14. Create the database environment variable file

15. Configure the UDEV rules

16. Refresh the partition tables

17. Reload the UDEV rules

18. Restart UDEV

19. Verify the result

    [root@rac3 ~]# ls -al /dev/asm*
    brw-rw---- 1 oracle dba 8, 17 Oct 12 14:39 /dev/asm-disk1
    brw-rw---- 1 oracle dba 8, 33 Oct 12 14:38 /dev/asm-disk2
    brw-rw---- 1 oracle dba 8, 49 Oct 12 14:39 /dev/asm-disk3
    brw-rw---- 1 oracle dba 8, 65 Oct 12 14:39 /dev/asm-disk4

20. Install the cvuqdisk package

21. Configure SSH user equivalence
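
If you prefer to set up the equivalence by hand rather than with the sshUserSetup.sh script shipped in the Grid media, a minimal sketch looks like this (assuming the oracle user on all three nodes already has, or now generates, an RSA key pair; adjust hostnames and paths to your environment):

    # On the new node, generate a key pair for the oracle user (if not present)
    [oracle@rac3 ~]$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

    # Collect the public keys of all three nodes into one authorized_keys file
    [oracle@rac3 ~]$ ssh oracle@rac1 cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    [oracle@rac3 ~]$ ssh oracle@rac2 cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    [oracle@rac3 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    [oracle@rac3 ~]$ chmod 600 ~/.ssh/authorized_keys

    # Push the combined file back to the existing nodes
    [oracle@rac3 ~]$ scp ~/.ssh/authorized_keys oracle@rac1:.ssh/
    [oracle@rac3 ~]$ scp ~/.ssh/authorized_keys oracle@rac2:.ssh/

    # Verify passwordless SSH (this also populates known_hosts); repeat from rac1 and rac2
    [oracle@rac3 ~]$ ssh rac1 date; ssh rac2 date; ssh rac3 date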

Installation

Run the following on node rac1 to verify that all prerequisites are met:

    cd /your/path/to/grid
    ./runcluvfy.sh stage -pre nodeadd -n rac3 -verbose

The following errors can be ignored (these packages are actually installed, even though the check reports them as missing):

    Result: TCP connectivity check failed for subnet "192.168.0.0"
    Result: Package existence check failed for "libaio-0.3.105 (i386)"
    Result: Package existence check failed for "compat-libstdc++-33-3.2.3 (i386)"
    Result: Package existence check failed for "libaio-devel-0.3.105 (i386)"
    Result: Package existence check failed for "libgcc-3.4.6 (i386)"
    Result: Package existence check failed for "libstdc++-3.4.6 (i386)"
    Result: Package existence check failed for "unixODBC-2.2.11 (i386)"
    Result: Package existence check failed for "unixODBC-devel-2.2.11 (i386)"

If any other errors show up, fix them before continuing.
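
As a side note, cluvfy in 11.2 can also generate fixup scripts for correctable OS settings; something along these lines should work (the -fixup flag is optional, and the generated runfixup.sh must be run as root on the nodes it names):

    ./runcluvfy.sh stage -pre nodeadd -n rac3 -fixup -verbose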

Run the following on node rac1 to install the Grid software onto the new node:


    [oracle@rac1 ~]$ grid_env
    [oracle@rac1 ~]$ cd /u01/app/11.2.0/grid/oui/bin
    [oracle@rac1 ~]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" \
         "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"

The following error appears:

    Saving inventory on nodes (Friday, July 12, 2013 9:32:00 AM CST)
    SEVERE:Remote 'UpdateNodeList' failed on nodes: 'rac2'. Refer to \
    '/u01/app/oraInventory/logs/addNodeActions2013-07-12_09-28-13AM.log' for details.
    You can manually re-run the following command on the failed nodes \
    after the installation: /u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList \
     -noClusterEnabled ORACLE_HOME=/u01/app/11.2.0/grid CLUSTER_NODES=rac1,rac2,rac3 \
      CRS=true  "INVENTORY_LOCATION=/u01/app/oraInventory" -invPtrLoc \
     "/u01/app/11.2.0/grid/oraInst.loc" LOCAL_NODE=<node on which command is to \
     be run>. 
     Please refer 'UpdateNodeList' logs under central inventory of remote nodes where \
     failure occurred for more details.

As instructed, run the following on rac2:

    [oracle@rac2 ~]$ grid_env
    [oracle@rac2 ~]$ /u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList \
          -noClusterEnabled ORACLE_HOME=/u01/app/11.2.0/grid \
          CLUSTER_NODES=rac1,rac2,rac3 CRS=true  \
           "INVENTORY_LOCATION=/u01/app/oraInventory" -invPtrLoc \
           "/u01/app/11.2.0/grid/oraInst.loc" LOCAL_NODE=rac2

    Starting Oracle Universal Installer...

    Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
    The inventory pointer is located at /u01/app/11.2.0/grid/oraInst.loc
    The inventory is located at /u01/app/oraInventory

On rac3, run the following as root:

    [root@rac3 ~]# /u01/app/oraInventory/orainstRoot.sh
    [root@rac3 ~]# /u01/app/11.2.0/grid/root.sh

If the scripts above run into problems, you can roll back with:

    [root@rac3 ~]# /u01/app/11.2.0/grid/crs/install/roothas.pl -delete \
        -force -verbose

Likewise, the root.sh run may hang; if it does, you can help it along from another session by opening the ohasd named pipe:

    [root@rac3 ~]# < /var/tmp/.oracle/npohasd

With that, the Grid environment is installed on rac3; next, install the database software.
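
Optionally, you can sanity-check the node addition at this point with cluvfy from an existing node (assuming the grid_env alias from the installation notes puts the Grid home's bin directory on the PATH):

    [oracle@rac1 ~]$ grid_env
    [oracle@rac1 ~]$ cluvfy stage -post nodeadd -n rac3 -verbose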

On rac1, run the following as the oracle user:

    [oracle@rac1 ~]$ db_env
    [oracle@rac1 ~]$ cd /u01/app/oracle/product/11.2.0/db_1/oui/bin/
    [oracle@rac1 ~]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"

The same kind of error appears again:

    Saving inventory on nodes (Friday, July 12, 2013 9:58:08 AM CST)
    SEVERE:Remote 'UpdateNodeList' failed on nodes: 'rac2'. Refer to \
    '/u01/app/oraInventory/logs/addNodeActions2013-07-12_09-50-56AM.log' for details.
    You can manually re-run the following command on the failed nodes \
    after the installation: /u01/app/oracle/product/11.2.0/db_1/oui/bin/runInstaller \
    -updateNodeList -noClusterEnabled \
    ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 CLUSTER_NODES=rac1,rac2,rac3 \
    CRS=false  "INVENTORY_LOCATION=/u01/app/oraInventory" -invPtrLoc \
    "/u01/app/oracle/product/11.2.0/db_1/oraInst.loc" LOCAL_NODE=<node on which \
    command is to be run>.
    Please refer 'UpdateNodeList' logs under central inventory of remote nodes where \
    failure occurred for more details.

As instructed, run the following on rac2:

    [oracle@rac2 ~]$ db_env 
    [oracle@rac2 ~]$ /u01/app/oracle/product/11.2.0/db_1/oui/bin/runInstaller \
    -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 \
     CLUSTER_NODES=rac1,rac2,rac3 \
    CRS=false  "INVENTORY_LOCATION=/u01/app/oraInventory" -invPtrLoc \
    "/u01/app/oracle/product/11.2.0/db_1/oraInst.loc" LOCAL_NODE=rac2
    Starting Oracle Universal Installer...

    Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
    The inventory pointer is located at /u01/app/oracle/product/11.2.0/db_1/oraInst.loc
    The inventory is located at /u01/app/oraInventory

Run root.sh on rac3:

    [root@rac3 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh

OK, the database software is now installed on rac3. Next, use dbca to add a database instance for rac3:

    [oracle@rac1 u01]$ which dbca
    /u01/app/oracle/product/11.2.0/db_1/bin/dbca
    [oracle@rac1 u01]$ dbca -silent -addInstance -nodeList "rac3" -gdbName  "testrac" \
        -instanceName "testrac3" -sysDBAUserName "sys" -sysDBAPassword MANAGER
    Adding instance
    1% complete
    2% complete
    6% complete
    13% complete
    20% complete
    26% complete
    33% complete
    40% complete
    46% complete
    53% complete
    66% complete
    Completing instance management.
    76% complete
    100% complete
    Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/testrac/testrac.log" \
    for further details.

With that, the new node has been added to the RAC cluster. Now let's verify:

    [oracle@rac1 ~]$ crsctl status resource ora.testrac.db -t
    --------------------------------------------------------------------------------
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
    --------------------------------------------------------------------------------
    Cluster Resources
    --------------------------------------------------------------------------------
    ora.testrac.db
          1        ONLINE  ONLINE       rac1                     Open                
          2        ONLINE  ONLINE       rac2                     Open                
          3        ONLINE  ONLINE       rac3                     Open        
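
As an extra check, srvctl should report all three instances, including testrac3 on rac3, as running:

    [oracle@rac1 ~]$ db_env
    [oracle@rac1 ~]$ srvctl status database -d testrac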

That's it!

