Note: All the posts are based on a practical approach, avoiding lengthy theory. Everything has been tested on development servers. Please don't try any post on production servers until you are sure.

Sunday, July 07, 2013

Installing 12c RAC on Linux

Pre-Req
Familiarity with Oracle VM VirtualBox
Understanding of Oracle RAC, e.g. 11g RAC. You can gain this background from the posts below.
Installing Oracle 11g RAC on Windows 2008
Installing 11gR2 RAC on Linux
Installing 11gR2 RAC on Solaris


Environment:
Oracle VM VirtualBox
Oracle Enterprise Linux 5 Update 7 x86_64
Oracle Database 12c Enterprise Edition Release 12.1.0.0.2 (Beta)
Assumptions:
Installation of Linux 5U7 on a virtual machine (CLOUDRAC1)
Installation of VBoxGuestAdditions on the VM
Two network interfaces (eth0 and eth1) added to the VM, using the IPs given in Step 3.

Linux Configuration for Oracle RAC 12c (on VM 1, e.g. CLOUDRAC1)
1- After installing the OS (Linux 5U7), install the following packages as the "root" user. You need the 64-bit versions of the packages.

# From Enterprise Linux 5 DVD (/media/cdrom/Server)
rpm -Uvh binutils-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh make-3.*
rpm -Uvh sysstat-7.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
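A quick spot-check that the key packages landed (a sketch; extend the list as needed):
#rpm -q binutils gcc glibc libaio libaio-devel make sysstat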
If you want to use ASMLib, also check for the packages below and install any that are missing. (I did not use ASMLib here.)

# rpm -qa | grep oracleasm*
oracleasm-2.6.18-238.el5debug-2.0.5-1.el5
oracleasm-2.6.18-238.el5-2.0.5-1.el5
oracleasm-support-2.1.4-1.el5
# rpm -qa | grep kernel-debug*
kernel-debug-2.6.18-238.el5

# From 12c RAC DVD (/media/cdrom/rpm)
rpm -Uvh cvuqdisk*
2- Make sure the shared memory filesystem is big enough for Automatic Memory Management (AMM) to work.
# umount tmpfs
# mount -t tmpfs shmfs -o size=1500m /dev/shm
Make the setting permanent by amending the "tmpfs" setting of the "/etc/fstab" file to look like this.
tmpfs     /dev/shm    tmpfs    size=1500m    0    0
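Verify the new size:
#df -h /dev/shm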
3- Make the following modifications in the hosts file, using IPs appropriate to your environment. (With hosts-file-based resolution the SCAN resolves to a single IP; the installer may warn about this, which is acceptable for a test setup.)
#gedit /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1         localhost.localdomain localhost
########Public ##############
132.35.21.177    cloudrac1.localdomain cloudrac1
132.35.21.178    cloudrac2.localdomain cloudrac2
########Private ##############
10.10.10.1    cloudrac1-priv.localdomain cloudrac1-priv
10.10.10.2    cloudrac2-priv.localdomain cloudrac2-priv
########Virtual ##############
132.35.21.187    cloudrac1-vip.localdomain cloudrac1-vip
132.35.21.188    cloudrac2-vip.localdomain cloudrac2-vip
########SCAN ##############
132.35.21.198    cloudracscan.localdomain cloudracscan

4- Add or amend the following lines to the "/etc/sysctl.conf" file for various parameter settings.

#gedit /etc/sysctl.conf

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1054504960
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

vm.swappiness=100
Run the following command to change the above kernel parameters.
#/sbin/sysctl -p  
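You can confirm individual values afterwards, for example:
#/sbin/sysctl net.core.wmem_max kernel.shmmax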

5- Add the following line to the "/etc/pam.d/login" file, if it is not already there.
[root@cloudrac1 ~]# gedit /etc/pam.d/login
session required pam_limits.so
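The standard Oracle prerequisites also set shell limits for the oracle user in the "/etc/security/limits.conf" file; the usual values from the install guides are:
[root@cloudrac1 ~]# gedit /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536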
6- Disable Secure Linux (SELinux) by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.
#gedit /etc/selinux/config
SELINUX=disabled
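The change takes effect at the next reboot; to stop enforcement immediately you can also run the following (switches SELinux to permissive mode until reboot).
#setenforce 0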
7- Deconfigure NTP (Oracle's Cluster Time Synchronization Service will handle time sync instead).
[root@cloudrac1 ~]# service ntpd stop
[root@cloudrac1 ~]# chkconfig ntpd off
[root@cloudrac1 ~]# mv /etc/ntp.conf /etc/ntp.conf.org
[root@cloudrac1 ~]# rm /var/run/ntpd.pid
8- Create required new groups and users.
#groupadd -g 1000 oinstall
#groupadd -g 1200 dba
#useradd -u 1100 -g oinstall -G dba oracle
#passwd oracle
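Verify the user and groups:
#id oracle
uid=1100(oracle) gid=1000(oinstall) groups=1000(oinstall),1200(dba)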
9- Create the required directories in which the Oracle software will be installed. Set appropriate ownership and permissions.
#mkdir -p /u01/app/12.1.0/grid
#mkdir -p /u01/app/oracle/product/12.1.0/db_1
#chown -R oracle:oinstall /u01
#chmod -R 775 /u01/
10- Modify the resolv.conf file. Since this setup does not use DNS, a dummy nameserver with zero timeout and attempts makes failed lookups return immediately, so names resolve quickly via /etc/hosts.
#gedit /etc/resolv.conf
search localdomain
nameserver 0.0.0.0
options timeout:0
options attempts:0

11- Set the NOZEROCONF parameter. This stops the network scripts from adding the 169.254.0.0 zeroconf route, which can interfere with the RAC interconnect.
#gedit /etc/sysconfig/network
NOZEROCONF=yes

12- Make the clone of CLOUDRAC1 VM
After making all of the above changes, clone the first virtual machine (CLOUDRAC1) to CLOUDRAC2. Please note that you may need to change the MAC addresses after cloning, and you may need to delete the .bak interfaces after starting the second VM.
After cloning and deleting the .bak interfaces, run ping tests from cloudrac1 (node1) to cloudrac2 (node2) and vice versa, using both the public IP/name and the private IP/name (see the example after the commands below).
You may need to start/stop the network service. The following commands can help if you face connectivity issues between the nodes.
#ifup eth0/eth1 (to bring the interface up)
#ifdown eth0/eth1 (to bring the interface down)
#ifconfig (to view the interfaces and associated IPs)
#ip addr (to view the interfaces and associated IPs)
#service network status/stop/start
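For example, from cloudrac1 (repeat in the opposite direction from cloudrac2):
#ping -c 3 cloudrac2
#ping -c 3 cloudrac2-priv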
13- Create Shared Disks
When all the ping tests from Step 12 are OK, go ahead and create the shared disks. You can create them from the VirtualBox GUI. In a nutshell, you have to do the following (a VBoxManage equivalent is sketched after this list).
- Create a SCSI controller on node1 and node2.
- Create a new VDI disk, clouddisk1, on cloudrac1; it should be fixed-size, not dynamically allocated, and attached under the SCSI controller.
- From File/Virtual Media Manager, make this disk shareable.
- Create a SCSI controller on node2 and attach the shared disk to cloudrac2.
- Start both nodes (CLOUDRAC1 and CLOUDRAC2).
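If you prefer the command line, a VBoxManage sequence along these lines does the same job (the controller name "SCSI" and the 5 GB size are assumptions; adjust them to your setup):
VBoxManage createhd --filename clouddisk1.vdi --size 5120 --format VDI --variant Fixed
VBoxManage storageattach CLOUDRAC1 --storagectl "SCSI" --port 0 --device 0 --type hdd --medium clouddisk1.vdi --mtype shareable
VBoxManage storageattach CLOUDRAC2 --storagectl "SCSI" --port 0 --device 0 --type hdd --medium clouddisk1.vdi --mtype shareable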

14- Create partition for the new disk
Use the fdisk utility to create the partition.
#cd /dev/
#ls sd*
My new disk (clouddisk1) has been added and recognized by the system as "sdb".

Now make the partition:


[root@cloudrac1 dev]# ls sd*
sda sda1 sda2 sdb sdc
[root@cloudrac1 dev]# fdisk sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.


The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
e extended
p primary partition (1-4)

p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305


Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
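If sdb1 is not yet visible on the second node, re-read the partition table there:
[root@cloudrac2 ~]# partprobe /dev/sdb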



-- check the ownership of the disks
[root@cloudrac2 dev]# ls -l sdb*
brw-r----- 1 root   disk 8, 16 Mar 16 12:09 sdb
brw-r----- 1 root   disk 8, 17 Mar 16 12:24 sdb1

Change the ownership: the partition should be owned by the "oracle" user, who will be installing the Grid Infrastructure. To change the ownership, add the rule below to the following file. If you don't, you may encounter issues/warnings during the prerequisite checks of the 12c GI installer.

#gedit /etc/udev/rules.d/50-udev.rules
KERNEL=="sdb1", OWNER="oracle", GROUP="oinstall", MODE="0660"

Perform the above change on both nodes, then restart them and check the permissions again to verify.
[root@cloudrac2 dev]# ls -l sdb*
brw-r----- 1 root   disk 8, 16 Mar 16 12:09 sdb
brw-r----- 1 oracle disk 8, 17 Mar 16 12:24 sdb1

If you add more shared disks, add matching rules to 50-udev.rules; for example, a second shared disk that appears as sdc would need:
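KERNEL=="sdc1", OWNER="oracle", GROUP="oinstall", MODE="0660"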

Install GI
15- Start the 12c GI installer as the oracle user.
[root@cloudrac1 ~]# umount /dev/cdrom
[root@cloudrac1 ~]# mount /dev/cdrom /media
mount: block device /dev/cdrom is write-protected, mounting read-only
[root@cloudrac1 ~]# export DISPLAY=:0.0
[root@cloudrac1 ~]# xhost +
[root@cloudrac1 ~]# su - oracle
[oracle@cloudrac1 ~]$ /media/Linuxgrid_12.1BETA2/grid/runInstaller
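The installer walks you through the usual GI screens (cluster and SCAN name, adding node cloudrac2, selecting the candidate disk /dev/sdb1 for the DATA disk group). When prompted, run the root scripts on each node in turn; assuming the inventory and Grid home locations from Step 9, they are:
[root@cloudrac1 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@cloudrac1 ~]# /u01/app/12.1.0/grid/root.sh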

16- Check the services
[root@cloudrac1 bin]# ./crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    cloudrac1  
ora....ER.lsnr ora....er.type ONLINE    ONLINE    cloudrac2  
ora....N1.lsnr ora....er.type ONLINE    ONLINE    cloudrac1  
ora.asm        ora.asm.type   ONLINE    ONLINE    cloudrac1  
ora....SM1.asm application    ONLINE    ONLINE    cloudrac1  
ora....C1.lsnr application    ONLINE    OFFLINE              
ora....ac1.gsd application    OFFLINE   OFFLINE              
ora....ac1.ons application    ONLINE    ONLINE    cloudrac1  
ora....ac1.vip ora....t1.type ONLINE    ONLINE    cloudrac1  
ora....SM2.asm application    ONLINE    ONLINE    cloudrac2  
ora....C2.lsnr application    ONLINE    ONLINE    cloudrac2  
ora....ac2.gsd application    OFFLINE   OFFLINE              
ora....ac2.ons application    ONLINE    ONLINE    cloudrac2  
ora....ac2.vip ora....t1.type ONLINE    ONLINE    cloudrac2  
ora.cvu        ora.cvu.type   ONLINE    ONLINE    cloudrac1  
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE              
ora....network ora....rk.type ONLINE    ONLINE    cloudrac1  
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    cloudrac1  
ora.ons        ora.ons.type   ONLINE    ONLINE    cloudrac1  
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    cloudrac1


Now shut down cloudrac2 and check the services on cloudrac1.

[root@cloudrac1 bin]# ./crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cloudrac1                STABLE
               ONLINE  ONLINE       cloudrac2                STOPPING
ora.LISTENER.lsnr
               ONLINE  ONLINE       cloudrac1                STABLE
               ONLINE  OFFLINE      cloudrac2                STABLE
ora.asm
               ONLINE  ONLINE       cloudrac1                Started,STABLE
               ONLINE  ONLINE       cloudrac2                Started,STABLE
ora.gsd
               OFFLINE OFFLINE      cloudrac1                STABLE
               OFFLINE OFFLINE      cloudrac2                STABLE
ora.net1.network
               ONLINE  ONLINE       cloudrac1                STABLE
               ONLINE  ONLINE       cloudrac2                STABLE
ora.ons
               ONLINE  ONLINE       cloudrac1                STABLE
               ONLINE  ONLINE       cloudrac2                STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       cloudrac1                STABLE
ora.cloudrac1.vip
      1        ONLINE  ONLINE       cloudrac1                STABLE
ora.cloudrac2.vip
      1        ONLINE  INTERMEDIATE cloudrac1                FAILED OVER,STABLE
ora.cvu
      1        ONLINE  ONLINE       cloudrac1                STABLE
ora.oc4j
      1        ONLINE  ONLINE       cloudrac1                STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       cloudrac1                STABLE


