2010-12-21

Oracle Database 11gR2 RAC on CentOS 5.5 x86

Configuration

Hardware and software configuration

Network configuration
Purpose                            Hostname                 IP address       Subnet mask
DNS & NAS (host OS)                -                        192.168.18.1     255.255.255.0
SCAN VIP                           rac-scan.ablog.com       192.168.18.131   255.255.255.0
                                                            192.168.18.132
                                                            192.168.18.133
node1 VIP                          node01-vip.ablog.com     192.168.18.111   255.255.255.0
node2 VIP                          node02-vip.ablog.com     192.168.18.112   255.255.255.0
node1 public IP                    node01.ablog.com         192.168.18.121   255.255.255.0
node2 public IP                    node02.ablog.com         192.168.18.122   255.255.255.0
node1 private IP (interconnect)    node01-priv.ablog.com    192.168.93.101   255.255.255.0
node2 private IP (interconnect)    node02-priv.ablog.com    192.168.93.102   255.255.255.0
node1 private IP (NAS)             node01-st.ablog.com      192.168.81.101   255.255.255.0
node2 private IP (NAS)             node02-st.ablog.com      192.168.81.102   255.255.255.0

Virtual network settings


Creating the virtual machines

Create two virtual machines, node01 and node02, with the following settings.


Guest OS installation

Install the guest OS on both virtual machines, node01 and node02, with the following settings.

Default      Label     Device
(checked)    CentOS    /dev/sda1

Guest OS configuration

  • Set the default runlevel to 3.
[root@node01 ~]# vi /etc/inittab 
id:3:initdefault:
  • Set the system locale (LANG) to C.
[root@node01 ~]# vi /etc/sysconfig/i18n
LANG=C
  • Stop unnecessary services.
[root@node01 ~]# chkconfig acpid off
[root@node01 ~]# chkconfig apmd off
[root@node01 ~]# chkconfig atd off
[root@node01 ~]# chkconfig auditd off
[root@node01 ~]# chkconfig avahi-daemon off
[root@node01 ~]# chkconfig bluetooth off
[root@node01 ~]# chkconfig cpuspeed off
[root@node01 ~]# chkconfig cups off
[root@node01 ~]# chkconfig hidd off
[root@node01 ~]# chkconfig isdn off
[root@node01 ~]# chkconfig ip6tables off
[root@node01 ~]# chkconfig iptables off
[root@node01 ~]# chkconfig mcstrans off
[root@node01 ~]# chkconfig mdmonitor off
[root@node01 ~]# chkconfig messagebus off
[root@node01 ~]# chkconfig haldaemon off
[root@node01 ~]# chkconfig pcscd off
[root@node01 ~]# chkconfig restorecond off
[root@node01 ~]# chkconfig rpcgssd off
[root@node01 ~]# chkconfig rpcidmapd off
[root@node01 ~]# chkconfig sendmail off
[root@node01 ~]# chkconfig smartd off
[root@node01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
BOOTPROTO=static
HWADDR=...
ONBOOT=yes
#HOTPLUG=no
#DHCP_HOSTNAME=node01
BROADCAST=192.168.93.255
IPADDR=192.168.93.101 # 192.168.93.102 on node02
NETMASK=255.255.255.0
NETWORK=192.168.93.0
[root@node01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
BOOTPROTO=static
HWADDR=...
ONBOOT=yes
#HOTPLUG=no
#DHCP_HOSTNAME=node01
BROADCAST=192.168.81.255
IPADDR=192.168.81.101 # 192.168.81.102 on node02
NETMASK=255.255.255.0
NETWORK=192.168.81.0
[root@node01 ~]# service network restart
[root@node01 ~]# cat /etc/sysconfig/ntpd 
# Drop root to id 'ntp:ntp' by default.
#OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid"
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no

# Additional options for ntpdate
NTPDATE_OPTIONS=""
[root@node01 ~]# service ntpd restart
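  • The Grid Infrastructure installer expects ntpd to slew the clock (the -x option set above); a quick way to confirm that the running daemon actually picked up the option:
[root@node01 ~]# ps -ef | grep '[n]tpd'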

Checking the hardware requirements

[root@node01 ~]# uname -m
i686
  • Confirm that the system runlevel is 3 or 5.
    • Confirm that "N 3" or "N 5" is displayed.
    • A leading "N" means that the runlevel has not been changed since the OS booted.
[root@node01 ~]# runlevel
N 3
  • Confirm that there is at least 2.5 GB of physical RAM.
[root@node01 ~]# grep MemTotal /proc/meminfo
MemTotal:       900096 kB
  • Confirm that the swap size meets the following requirements.
Physical RAM size      Required swap size
1 GB to 2 GB           1.5 times the RAM size
2 GB to 16 GB          Equal to the RAM size
More than 16 GB        16 GB
[root@node01 ~]# grep SwapTotal /proc/meminfo
SwapTotal:     2048276 kB
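  • The swap rule above can also be checked mechanically; the following is a small sketch (any equivalent check is fine) that applies the rule to the values in /proc/meminfo:
#!/bin/bash
# Required swap per the table above:
#   RAM 1-2 GB   -> 1.5 x RAM
#   RAM 2-16 GB  -> same as RAM
#   RAM > 16 GB  -> 16 GB
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
if [ "$ram_kb" -le $((2 * 1024 * 1024)) ]; then
    need_kb=$((ram_kb * 3 / 2))
elif [ "$ram_kb" -le $((16 * 1024 * 1024)) ]; then
    need_kb=$ram_kb
else
    need_kb=$((16 * 1024 * 1024))
fi
echo "RAM=${ram_kb}kB swap=${swap_kb}kB required=${need_kb}kB"
[ "$swap_kb" -ge "$need_kb" ] && echo "swap OK" || echo "swap too small"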
  • Confirm that /tmp has at least 1 GB of free space.
[root@node01 ~]# df -h /tmp
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              37G  4.2G   31G  12% /
[root@node01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              37G  4.2G   31G  12% /
tmpfs                 440M     0  440M   0% /dev/shm
none                  440M  104K  440M   1% /var/lib/xenstored

Checking the network requirements

    • Public network
[root@node01 ~]# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:0C:29:E5:5C:25  
          inet addr:192.168.18.122  Bcast:192.168.18.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fee5:5c25/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:979 errors:0 dropped:0 overruns:0 frame:0
          TX packets:828 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:90190 (88.0 KiB)  TX bytes:101574 (99.1 KiB)
    • For the interconnect
[root@node02 ~]# ifconfig eth2
eth2      Link encap:Ethernet  HWaddr 00:0C:29:E5:5C:39  
          inet addr:192.168.93.102  Bcast:192.168.93.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fee5:5c39/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:19 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2812 (2.7 KiB)  TX bytes:720 (720.0 b)
          Interrupt:18 Base address:0x2400 
    • For NAS (storage)
[root@node01 ~]# ifconfig eth3
eth3      Link encap:Ethernet  HWaddr 00:0C:29:E5:5C:43  
          inet addr:192.168.81.102  Bcast:192.168.81.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fee5:5c43/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4598 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5728 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:345200 (337.1 KiB)  TX bytes:439960 (429.6 KiB)
          Interrupt:17 Base address:0x2480 

Checking the software requirements

[root@node01 ~]# cat /etc/issue
CentOS release 5.5 (Final)
Kernel \r on an \m

[root@node01 ~]# uname -r
2.6.18-194.el5xen
  • Confirm that the operating system package requirements are met.
    • Confirm that the following packages are installed at these versions or later, and install any that are missing.
      • binutils-2.17.50.0.6
      • compat-libstdc++-33-3.2.3
      • elfutils-libelf-0.125
      • elfutils-libelf-devel-0.125
      • elfutils-libelf-devel-static-0.125
      • gcc-4.1.2
      • gcc-c++-4.1.2
      • glibc-2.5-24
      • glibc-common-2.5
      • glibc-devel-2.5
      • glibc-headers-2.5
      • kernel-headers-2.6.18
      • ksh-20060214
      • libaio-0.3.106
      • libaio-devel-0.3.106
      • libgcc-4.1.2
      • libgomp-4.1.2
      • libstdc++-4.1.2
      • libstdc++-devel-4.1.2
      • make-3.81
      • sysstat-7.0.2
      • unixODBC-2.2.11
      • unixODBC-devel-2.2.11
      • pdksh-5.2.14
[root@node01 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"|\
egrep "binutils|compat-libstdc++|elfutils-libelf|gcc|glibc|kernel-headers|ksh|libaio|libgcc|libgomp|libstdc|make|sysstat|unixODBC|pdksh"|sort
binutils-2.17.50.0.6-14.el5 (i386)
elfutils-libelf-0.137-3.el5 (i386)
glibc-2.5-49 (i686)
glibc-common-2.5-49 (i386)
ksh-20100202-1.el5 (i386)
libaio-0.3.106-5 (i386)
libgcc-4.1.2-48.el5 (i386)
libstdc++-4.1.2-48.el5 (i386)
make-3.81-3.el5 (i386)
[root@node01 ~]# mount -r -t iso9660 /dev/cdrom /media 
[root@node01 ~]# cd /media/CentOS
[root@node01 CentOS]# rpm -ivh compat-libstdc++-33-3.2.3-61.i386.rpm 
[root@node01 CentOS]# rpm -ivh elfutils-libelf-devel-0.137-3.el5.i386.rpm elfutils-libelf-devel-static-0.137-3.el5.i386.rpm 
[root@node01 CentOS]# rpm -ivh libstdc++-devel-4.1.2-48.el5.i386.rpm 
[root@node01 CentOS]# rpm -ivh kernel-headers-2.6.18-194.el5.i386.rpm 
[root@node01 CentOS]# rpm -ivh glibc-headers-2.5-49.i386.rpm
[root@node01 CentOS]# rpm -ivh glibc-devel-2.5-49.i386.rpm 
[root@node01 CentOS]# rpm -ivh libgomp-4.4.0-6.el5.i386.rpm 
[root@node01 CentOS]# rpm -ivh gcc-4.1.2-48.el5.i386.rpm 
[root@node01 CentOS]# rpm -ivh gcc-c++-4.1.2-48.el5.i386.rpm 
[root@node01 CentOS]# rpm -ivh libaio-devel-0.3.106-5.i386.rpm 
[root@node01 CentOS]# rpm -ivh sysstat-7.0.2-3.el5.i386.rpm 
[root@node01 CentOS]# rpm -ivh unixODBC-2.2.11-7.1.i386.rpm 
[root@node01 CentOS]# rpm -ivh unixODBC-devel-2.2.11-7.1.i386.rpm 
[root@node01 CentOS]# rpm -ivh pdksh-5.2.14-36.el5.i386.rpm 
  • Set the kernel parameters required by Oracle in /etc/sysctl.conf.
[root@node01 ~]# vi /etc/sysctl.conf
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
[root@node01 ~]# sysctl -p 
[root@node01 ~]# sysctl -a|egrep "sem|shmmax|shmmni|shmall|file-max|ip_local|rmem|wmem|aio-max-nr"
net.ipv4.udp_wmem_min = 4096
net.ipv4.udp_rmem_min = 4096
net.ipv4.tcp_rmem = 4096        87380   4194304
net.ipv4.tcp_wmem = 4096        16384   4194304
net.ipv4.ip_local_port_range = 9000     65500
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
vm.lowmem_reserve_ratio = 256   256     32
kernel.sem = 250        32000   100     128
kernel.shmmni = 4096
kernel.shmall = 268435456
kernel.shmmax = 4294967295
fs.aio-max-nr = 1048576
fs.file-max = 6815744
  • Create the OS groups and users.
[root@node01 ~]# groupadd -g 501 oinstall
[root@node01 ~]# groupadd -g 502 dba
[root@node01 ~]# groupadd -g 503 asmadmin
[root@node01 ~]# groupadd -g 504 asmdba
[root@node01 ~]# useradd -u 501 -g oinstall -G asmadmin,asmdba grid
[root@node01 ~]# useradd -u 502 -g oinstall -G dba,asmdba oracle
[root@node01 ~]# passwd grid
[root@node01 ~]# passwd oracle
  • Enable SSH user equivalence for the oracle user.
[oracle@node01 ~]$ ssh-keygen -t dsa
[oracle@node01 ~]$ ssh node02 ssh-keygen -t dsa
[oracle@node01 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@node01 ~]$ ssh node02 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@node01 ~]$ chmod 600 ~/.ssh/authorized_keys
[oracle@node01 ~]$ scp ~/.ssh/authorized_keys node02:~/.ssh/
[oracle@node01 ~]$ ssh node01 date
[oracle@node01 ~]$ ssh node02 date
[oracle@node02 ~]$ ssh node02 date
[oracle@node02 ~]$ ssh node01 date
  • Create the directories for the Grid Infrastructure and Oracle installations.
[root@node01 ~]# mkdir -p  /u01/app/grid
[root@node01 ~]# chown -R grid:oinstall /u01/app/grid
[root@node01 ~]# chmod -R 775 /u01/app/grid
[root@node01 ~]# mkdir -p /u01/app/11.2.0/grid
[root@node01 ~]# chown -R grid:oinstall /u01/app/11.2.0 
[root@node01 ~]# mkdir -p /u01/app/oracle
[root@node01 ~]# chown -R oracle:oinstall /u01/app/oracle
[root@node02 ~]# mkdir /u01/app/oraInventory
[root@node02 ~]# chown grid:oinstall /u01/app/oraInventory 
[root@node02 ~]# chmod 770 /u01/app/oraInventory/
  • Set resource limits for the grid and oracle users.
[root@node01 ~]# vi /etc/security/limits.conf
grid                 soft    nproc   2047
grid                 hard    nproc   16384
grid                 soft    nofile  1024
grid                 hard    nofile  65536
oracle               soft    nproc   2047
oracle               hard    nproc   16384
oracle               soft    nofile  1024
oracle               hard    nofile  65536
  • Append the following to /etc/pam.d/login.
[root@node01 ~]# vi /etc/pam.d/login
session    required     pam_limits.so
  • Append the following to /etc/profile.
[root@node01 ~]# vi /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
              ulimit -p 16384
              ulimit -n 65536
        else
              ulimit -u 16384 -n 65536
        fi
       umask 022
fi
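  • As a sanity check, a new login shell for the oracle (or grid) user should now report nofile 65536 and nproc 16384 if the settings above are in effect:
[root@node01 ~]# su - oracle -c 'ulimit -n; ulimit -u'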
  • Configure the grid user's environment.
    • Set the default file mode creation mask (umask) to 022.
[root@node01 ~]# su - grid
[grid@node01 ~]$ vi .bash_profile
umask 022
[grid@node01 ~]$ source .bash_profile 
[grid@node01 ~]$ umask
0022
    • Set the ForwardX11 attribute to no in ~/.ssh/config.
[grid@node01 ~]$ mkdir .ssh
[grid@node01 ~]$ chmod 700 .ssh
[grid@node01 ~]$ vi ~/.ssh/config
Host *
      ForwardX11 no
    • To prevent errors raised by the stty command during installation, add the following to .bashrc.
[grid@node01 ~]$ vi .bashrc 
if [ -t 0 ]; then
   stty intr ^C
fi
  • Configure the oracle user's environment.
    • Set the default file mode creation mask (umask) to 022.
[root@node01 ~]# su - oracle
[oracle@node01 ~]$ vi .bash_profile
umask 022
[oracle@node01 ~]$ source .bash_profile 
[oracle@node01 ~]$ umask
0022
    • Set the ForwardX11 attribute to no in ~/.ssh/config.
[oracle@node01 ~]$ mkdir .ssh
[oracle@node01 ~]$ chmod 700 .ssh
[oracle@node01 ~]$ vi ~/.ssh/config
Host *
      ForwardX11 no
    • To prevent errors raised by the stty command during installation, add the following to .bashrc.
[oracle@node01 ~]$ vi .bashrc 
if [ -t 0 ]; then
   stty intr ^C
fi
  • Install kernel-xen-devel and VMware Tools.
[root@node01 ~]# mount -r -t iso9660 /dev/cdrom /media
[root@node01 ~]# cd /media/CentOS 
[root@node01 CentOS]# rpm -ivh kernel-xen-devel-2.6.18-194.el5.i686.rpm
[root@node01 CentOS]# cd; umount /media
[root@node01 ~]# mount -r -t iso9660 /dev/cdrom /media 
[root@node01 ~]# cd /tmp
[root@node01 tmp]# tar zxpf /media/VMwareTools-8.4.4-301548.tar.gz 
[root@node01 tmp]# cd vmware-tools-distrib
[root@node01 vmware-tools-distrib]# ./vmware-install.pl 
Would you like to continue (NOT RECOMMENDED)? [no] yes
[root@node02 ~]# vi /etc/resolv.conf
search ablog.com
nameserver 192.168.18.121
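  • The SCAN name must resolve to the three addresses from the table above (192.168.18.131-133). Assuming the DNS server already serves the ablog.com zone and bind-utils is installed, this can be checked with:
[root@node01 ~]# nslookup rac-scan.ablog.com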
  • Append the domain name to the host names.
[root@node01 ~]# vi /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               node01 localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.18.131  rac-scan.ablog.com      rac-scan
192.168.18.121  node01.ablog.com        node01
192.168.18.122  node02.ablog.com        node02
192.168.18.111  node01-vip.ablog.com    node01-vip
192.168.18.112  node02-vip.ablog.com    node02-vip
192.168.93.101  node01-priv.ablog.com   node01-priv
192.168.93.102  node02-priv.ablog.com   node02-priv
192.168.81.101  node01-st.ablog.com     node01-st
192.168.81.102  node02-st.ablog.com     node02-st
[root@node01 ~]# vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=node01.ablog.com
GATEWAY=192.168.18.2                                                                  
[root@node01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth2
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth2
BOOTPROTO=static
HWADDR=00:0C:29:1B:F9:6C
ONBOOT=yes
HOSTNAME=node01-priv.ablog.com
BROADCAST=192.168.93.255
IPADDR=192.168.93.101
NETMASK=255.255.255.0
NETWORK=192.168.93.0
[root@node01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth3
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth3
BOOTPROTO=static
HWADDR=00:0C:29:1B:F9:76
ONBOOT=yes
HOSTNAME=node01-st.ablog.com
BROADCAST=192.168.81.255
IPADDR=192.168.81.101
NETMASK=255.255.255.0
NETWORK=192.168.81.0
  • Configure NFS.
    • node01
[root@node01 ~]# mkdir -p /nfs/rac/grid/orcl/{ocr,voting}
[root@node01 ~]# chown -R grid:oinstall /nfs/rac/grid
[root@node01 ~]#  vi /etc/exports
/nfs *(rw,no_root_squash)
[root@node01 ~]# service portmap start
[root@node01 ~]# service nfs start
[root@node01 ~]# service nfslock start
[root@node01 ~]# exportfs
[root@node01 ~]# chkconfig portmap on
[root@node01 ~]# chkconfig nfs on
[root@node01 ~]# chkconfig nfslock on
[root@node01 ~]# vi /etc/fstab
node01-st.ablog.com:/nfs/rac /u02                  nfs     rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0
[root@node01 ~]# mount -a
    • node02
[root@node02 ~]# vi /etc/fstab
node01-st.ablog.com:/nfs/rac /u02                  nfs     rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0
[root@node02 ~]# mount -a
  • Configure Direct NFS.
    • node01
[root@node01 ~]# cat /etc/oranfstab
server: node01-st.ablog.com
local: node01-st.ablog.com
path: node01-st.ablog.com
export: /nfs/rac mount: /u02
    • node02
[root@node02 ~]# vi /etc/oranfstab
server: node01-st.ablog.com
local: node02-st.ablog.com
path: node01-st.ablog.com
export: /nfs/rac mount: /u02
  • Make nfs start before netfs.
[root@node01 ~]# find /etc/rc.d -name 'S25netfs'|perl -lane '$from=$_;s/S25/S61/;rename($from,$_)'
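  • To confirm the rename, list the runlevel directory; netfs should now appear as S61netfs, sorting after S60nfs, instead of S25netfs:
[root@node01 ~]# ls /etc/rc.d/rc3.d/ | egrep 'S..(nfs|netfs)$'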
[root@node01 ~]# mkdir -p /u02/app/oracle/oradata/orcl
[root@node01 ~]# mkdir -p /u02/app/oracle/admin/orcl/arch
[root@node01 ~]# chown -R oracle:oinstall /u02/app/oracle
[root@node01 ~]# mkdir -p /u02/grid/orcl/{voting,ocr}
[root@node01 ~]# chown -R grid:oinstall /u02/grid

Install Grid Infrastructure

[root@node01 ~]# mkdir /u01/software/
[root@node01 ~]# mv p10098816_112020_LINUX_* /u01/software/
[root@node01 neo]# chown -R grid:oinstall /u01/software 
[grid@node01 ~]$ cd /u01/software 
[grid@node01 software]$ ls *.zip|xargs -n1 unzip
[root@node01 ~]# telinit 5
  • Log in as the grid user.
  • [Applications]-[Accessories]-[Terminal]
[grid@node01 ~]$ cd /u01/software/grid
[grid@node01 ~]$ env|egrep "LANG|ORA"
LANG=C
[grid@node01 grid]$ ./runInstaller
  • ACFS does not recognize CentOS as a supported distribution, so edit osds_acfslib.pm in the Grid home so that centos-release is accepted.
[root@node01 ~]# vi /u01/app/11.2.0/grid/lib/osds_acfslib.pm
 # see - http://www.oracle.com/us/technologies/027626.pdf
  open (RPM_QF, "rpm -qf /etc/redhat-release 2>&1 |");
  $release = <RPM_QF>;
  close (RPM_QF);

  if (($release =~ /^redhat-release/) ||         # straight RH
      ($release =~ /^enterprise-release/) ||     # Oracle Enterprise Linux
      ($release =~ /^centos-release/))           # CentOS
  {
    if ($release =~ /release-4/)                 # RH/OEL 4
  • Run orainstRoot.sh and root.sh.
    • node01
[root@node01 ~]# /u01/app/oraInventory/orainstRoot.sh 
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@node01 ~]# /u01/app/11.2.0/grid/root.sh 
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE 
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-2672: Attempting to start 'ora.mdnsd' on 'node01'
CRS-2676: Start of 'ora.mdnsd' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'node01'
CRS-2676: Start of 'ora.gpnpd' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node01'
CRS-2672: Attempting to start 'ora.gipcd' on 'node01'
CRS-2676: Start of 'ora.gipcd' on 'node01' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node01'
CRS-2672: Attempting to start 'ora.diskmon' on 'node01'
CRS-2676: Start of 'ora.diskmon' on 'node01' succeeded
CRS-2676: Start of 'ora.cssd' on 'node01' succeeded
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting disk: /u02/grid/orcl/voting/voting01.
Now formatting voting disk: /u02/grid/orcl/voting/voting02.
Now formatting voting disk: /u02/grid/orcl/voting/voting03.
CRS-4603: Successful addition of voting disk /u02/grid/orcl/voting/voting01.
CRS-4603: Successful addition of voting disk /u02/grid/orcl/voting/voting02.
CRS-4603: Successful addition of voting disk /u02/grid/orcl/voting/voting03.
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   8438cdc1b80a4f16bfd450029ebef789 (/u02/grid/orcl/voting/voting01) []
 2. ONLINE   10086dbff1a44f78bf592068061b49db (/u02/grid/orcl/voting/voting02) []
 3. ONLINE   9e461c00ad9b4f24bf801fbe650491ba (/u02/grid/orcl/voting/voting03) []
Located 3 voting disk(s).

ACFS-9200: Supported
ACFS-9200: Supported
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
    • node02
[root@node02 ~]# /u01/app/oraInventory/orainstRoot.sh 
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@node02 ~]# /u01/app/11.2.0/grid/root.sh 
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE 
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node node01, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
  • Check the status of Clusterware.
[grid@node01 ~]$ /u01/app/11.2.0/grid/bin/crsctl check cluster -all
**************************************************************
node01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[grid@node01 ~]$ /u01/app/11.2.0/grid/bin/crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       node01                                       
               ONLINE  ONLINE       node02                                       
ora.asm
               OFFLINE OFFLINE      node01                   Instance Shutdown   
               OFFLINE OFFLINE      node02                   Instance Shutdown   
ora.gsd
               OFFLINE OFFLINE      node01                                       
               OFFLINE OFFLINE      node02                                       
ora.net1.network
               ONLINE  ONLINE       node01                                       
               ONLINE  ONLINE       node02                                       
ora.ons
               ONLINE  ONLINE       node01                                       
               ONLINE  ONLINE       node02                                       
ora.registry.acfs
               OFFLINE OFFLINE      node01                                       
               OFFLINE OFFLINE      node02                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node02                                       
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node01                                       
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node01                                       
ora.cvu
      1        ONLINE  ONLINE       node01                                       
ora.node01.vip
      1        ONLINE  ONLINE       node01                                       
ora.node02.vip
      1        ONLINE  ONLINE       node02                                       
ora.oc4j
      1        ONLINE  ONLINE       node01                                       
ora.scan1.vip
      1        ONLINE  ONLINE       node02                                       
ora.scan2.vip
      1        ONLINE  ONLINE       node01                                       
ora.scan3.vip
      1        ONLINE  ONLINE       node01 

Post-installation tasks for Grid Infrastructure

[root@node01 ~]# cd /u01/app/11.2.0/grid
[root@node01 grid]# cp -pi root.sh root.sh.20101226

Install Oracle RAC

[root@node01 ~]# telinit 5
  • Verify the database installation prerequisites with the Cluster Verification Utility (CVU).
[oracle@node01 ~]$ /u01/app/11.2.0/grid/bin/cluvfy stage -pre dbinst -fixup -n node01,node02 -osdba dba -verbose 2>&1|tee cluvy_`date '+%Y%m%d-%H%M%S'`.log
[oracle@node01 ~]$ cd /u01/software/database
[oracle@node01 database]$ ./runInstaller -debug 2>&1|tee install_db_`date '+%Y%m%d-%H%M%S'`.log
[root@node01 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

Create the database

[root@node01 ~]# telinit 5
  • Verify the DBCA requirements with the Cluster Verification Utility (CVU).
[oracle@node01 ~]$ /u01/app/11.2.0/grid/bin/cluvfy stage -pre dbcfg -fixup -n node01,node02 -d /u01/app/oracle/product/11.2.0/dbhome_1
[oracle@node01 ~]$ /u01/app/oracle/product/11.2.0/dbhome_1/dbca -debug 2>&1|tee dbca_`date '+%Y%m%d'`.log
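  • Once dbca has created the database with its files under /u02, Direct NFS usage can be confirmed from the v$dnfs_servers view. A minimal check, assuming the Oracle home above and an instance named orcl1 on node01 (adjust ORACLE_SID to whatever dbca actually created):
[oracle@node01 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
[oracle@node01 ~]$ export ORACLE_SID=orcl1   # assumed instance name
[oracle@node01 ~]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@node01 ~]$ sqlplus -s / as sysdba <<'EOF'
select svrname, dirname from v$dnfs_servers;
EOF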
