Thursday, 19 November 2009

libdb.so.2: cannot open shared object file

====== GRID START ERROR

/u01/app/oracle/products/oms10g/Apache/Apache/bin/apachectl start: execing httpd
/u01/app/oracle/products/oms10g/Apache/Apache/bin/httpd: error while loading shared libraries: libdb.so.2: cannot open shared object file: No such file or directory


Fix:

ln -s /usr/lib/libgdbm.so.2.0.0 /usr/lib/libdb.so.2
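A quick way to confirm the fix (using the same paths as in the error above) is to re-run the dynamic linker lookup against the httpd binary; libdb.so.2 should now resolve instead of showing "not found":

ldd /u01/app/oracle/products/oms10g/Apache/Apache/bin/httpd | grep libdb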

ORA-27154: post/wait create failed / +ASM1 instance startup failed

+ASM1 instance startup failed :

SQL> startup nomount;
ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device
ORA-27302: failure occurred at: sskgpsemsper


Cause of the problem
The error message is misleading: it reports "No space left on device", but df -h shows plenty of free space on the filesystems. The real cause is that the OS semaphore limits are too low for the instance.


Solution:

Add the following line to /etc/sysctl.conf:
kernel.sem = 256 32768 100 228
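The new limits can be applied and verified without a reboot; a minimal check (values as above; the four fields are SEMMSL, SEMMNS, SEMOPM and SEMMNI):

cat /proc/sys/kernel/sem     # current semaphore limits
sysctl -p                    # reload /etc/sysctl.conf
ipcs -ls                     # limits as seen through the IPC interface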

starting opmn failed (OEM) Oracle enterprise manager

==================
Enterprise Manager installation: starting opmn failed

Create a symbolic link: "ln -s /usr/lib/libgdbm.so.2.0.0 /usr/lib/libdb.so.2"

Remember to change the permissions on the libdb.so.2 link and its target:

chmod 755 /usr/lib/libgdbm.so.2.0.0
chmod 755 /usr/lib/libdb.so.2

Re-execute the configuration assistant.
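Before re-running the assistant, it may help to confirm the link and permissions (a small check based on the commands above):

ls -l  /usr/lib/libdb.so.2     # should point to libgdbm.so.2.0.0
ls -lL /usr/lib/libdb.so.2     # follows the link; mode should be 755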

libnnz11.so: cannot restore segment prot after reloc: Permission denied

Issue :

Installation failed with library error: Oracle11g
libnnz11.so: cannot restore segment prot after reloc: Permission denied


Workaround:

Change SELinux from Enforcing to Permissive until Oracle fixes the bug. Once the bug is fixed (hopefully), you will be able to set it back to Enforcing.
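A minimal sketch of the switch (standard SELinux tooling; setenforce changes only the running system, the config file makes it persistent across reboots):

getenforce       # shows Enforcing / Permissive / Disabled
setenforce 0     # switch the running system to Permissive
# to persist, set SELINUX=permissive in /etc/selinux/config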

Setup ntpd for 11g RAC / Clusterware

Linux  :
------
Edit the following file: /etc/sysconfig/ntpd

Add -x to the current options as follows:

# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no

# Additional options for ntpdate
NTPDATE_OPTIONS=""
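After saving the file, restart the daemon and confirm that it picked up the slewing flag (a quick check, not part of the original note):

service ntpd restart
ps -ef | grep '[n]tpd'     # the command line should now include -x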

-------------------------
HP-UX :

Add export XNTPD_ARGS="-x" to /etc/rc.config.d/netdaemons


Restart the daemon:
/sbin/init.d/xntpd stop
/sbin/init.d/xntpd start


--------------------------

Sun Solaris :

Enable slewing for xntpd there as well (for example, "slewalways yes" in /etc/inet/ntp.conf), then restart xntpd.

Setup ISCSI Client (Linux) for 11g RAC

Follow these steps to set up the iscsi-initiator on a Linux server as a client of Openfiler.

[root@oracluster03 /]# cd /media/*/Server
[root@oracluster03 Server]# rpm -i iscsi-init*
warning: iscsi-initiator-utils-6.2.0.868-0.18.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
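A quick query to confirm the package is installed (not part of the original transcript):

rpm -q iscsi-initiator-utils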


[root@oracluster03 ~]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon: [ OK ]
[ OK ]
[root@oracluster03 ~]# chkconfig iscsid on
[root@oracluster03 ~]# chkconfig iscsi on
[root@oracluster03 ~]# service iscsid status
iscsid (pid 22651 22650) is running...



Set the storage IP and run a discovery to make sure you can see the expected targets.

# export STORAGE=192.168.1.200
# iscsiadm -m discovery -t sendtargets -p $STORAGE


To manually log in to an iSCSI target, use the following command:
iscsiadm -m node -T proper_target_name -p target_IP -l

Since you need to do this for each disk, you can use a simple awk command to generate the login command for every target:

iscsiadm -m discovery -t sendtargets -p $STORAGE|awk '{print "iscsiadm -m node -T "$2"  -p $STORAGE -l"}'
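If you prefer not to copy and paste, the generated lines can also be fed straight into a shell (a small convenience sketch; STORAGE must be exported so the child shell can expand it):

export STORAGE=192.168.1.200
iscsiadm -m discovery -t sendtargets -p $STORAGE | awk '{print "iscsiadm -m node -T "$2"  -p $STORAGE -l"}' | sh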

Now use the output to test your login to each disk.

iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.ocrdisk3  -p $STORAGE -l
iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.ocrdisk2  -p $STORAGE -l

iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.ocrdisk1  -p $STORAGE -l

Since we are setting this up for Oracle RAC, we need each node to log in to the disks automatically at startup. This is the default behaviour, but to be safe we set it explicitly with the "--op update -n node.startup -v automatic" option.

# iscsiadm -m discovery -t sendtargets -p $STORAGE|awk '{print "iscsiadm -m node -T "$2"  -p $STORAGE --op update -n node.startup -v automatic"}'

This prints one update command per target:

iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.ocrdisk3  -p $STORAGE --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.ocrdisk2  -p $STORAGE --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.ocrdisk1  -p $STORAGE --op update -n node.startup -v automatic

Now run each of the generated commands:

# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.ocrdisk3  -p $STORAGE --op update -n node.startup -v automatic
# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.ocrdisk2  -p $STORAGE --op update -n node.startup -v automatic
# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.ocrdisk1  -p $STORAGE --op update -n node.startup -v automatic
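To double-check that the records were updated, a node record can be printed without logging in; node.startup should now read automatic (small check, target name taken from the list above):

iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.ocrdisk1 -p $STORAGE | grep node.startup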


Now create udev rules so all the disks get uniform names on startup.

Create the open-iscsi udev rules file:

[root@oracluster03 ~]# vi /etc/udev/rules.d/55-openiscsi.rules
----------------------File start here ---------------------
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b",SYMLINK+="iscsi/%c/part%n"

-------------------------------------------------------------------------------------------
[root@oracluster03 ~]# mkdir -p /etc/udev/scripts

[root@oracluster03 ~]# vi /etc/udev/scripts/iscsidev.sh

-------------------------------file start here --------------------


#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh
# Called by udev with the device bus id (%b); the part before the first ':'
# is the SCSI host number. Prints the last component of the iSCSI target
# name, which udev uses to build the /dev/iscsi/<name>/part<n> symlink.

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"

target_name=$(cat ${file})

# This is not an open-iscsi drive
if [ -z "${target_name}" ]; then
   exit 1
fi

echo "${target_name##*.}"

---------file ends here -------------------------

[root@oracluster03 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh
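The helper can be tested by hand before restarting iSCSI; the argument mimics the bus id udev passes in (the host number below is only an example, check ls /sys/class/iscsi_host for the real one):

ls /sys/class/iscsi_host                  # e.g. host4
/etc/udev/scripts/iscsidev.sh 4:0:0:0     # should print the last part of the target name, e.g. ocrdisk1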

[root@oracluster03 ~]# service iscsi stop

Logging out of session [sid: 1, target: iqn.2006-01.com.openfiler:cluster03.gridcontrol01, portal: 192.168.2.5,3260]
Logout of [sid: 1, target: iqn.2006-01.com.openfiler:cluster03.gridcontrol01, portal: 192.168.2.5,3260]: successful
Stopping iSCSI daemon: [ OK ]

[root@oracluster03 ~]# service iscsi start
iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon: [ OK ]
[ OK ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:cluster03.gridcontrol01, portal: 192.168.2.5,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:cluster03.gridcontrol01, portal: 192.168.2.5,3260]: successful

[ OK ]

Verify that the disks are linked under /dev/iscsi with their local names:

[root@oracluster03 ~]# ls -lrt /dev/iscsi/*/


#### Only on the first node ####

Partition the shared disks once, from the first node only:

fdisk /dev/iscsi/disk01/part

fdisk /dev/iscsi/disk02/part


#### On all other nodes ####

Re-read the partition tables so the new partitions become visible:

partprobe

fdisk -l
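After partprobe, the new partitions should be visible on every node through the udev symlinks as well (a quick check; the names follow the rule created earlier):

ls -l /dev/iscsi/*/part*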