Thomas Vogt’s IT Blog

knowledge is power …

Linux – Repair Bootloader / Change Boot device path

If a Linux server no longer boots (e.g. an HP ProLiant server) and the following message appears on screen:

“boot from hard drive c: no operating system”

there is usually a problem with the boot disk device path, for example after changing the configuration on a RAID controller or after moving disks from one disk controller to another.

Solution

Boot into Rescue Mode

Just boot from a Linux Live CD/DVD (e.g. the RHEL 6 installation DVD) and choose “Rescue Mode”, which gives you a root terminal after boot.

Mount disk devices

Mount all devices (including the /dev device, otherwise you will run into trouble later!):

Rescue# fdisk -l
Rescue# mount /dev/vg00/lv_root /mnt
Rescue# mount /dev/cciss/c0d0p1 /mnt/boot
Rescue# mount -o bind /dev /mnt/dev
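If you also need /proc and /sys inside the chroot later (some tools expect them), they can be bind-mounted the same way; this is optional and not part of the original steps:

Rescue# mount -o bind /proc /mnt/proc
Rescue# mount -o bind /sys /mnt/sys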

Chroot into mounted Root environment

Rescue# chroot /mnt

Hint: If you get an error like “/bin/sh: exec format error”, you are using the wrong Live CD (e.g. x86 instead of x86_64).

Run Grub CLI

Rescue# grub

grub> root (hd0,0)  -> (hd0,0) is the first partition on the first disk (here /dev/cciss/c0d0p1, i.e. the /boot partition)

grub> setup (hd0)

grub> quit
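If you are not sure which (hdX,Y) corresponds to the /boot partition, GRUB can locate it for you before running setup; it prints the partition(s) containing the stage1 file, e.g. (hd0,0):

grub> find /grub/stage1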

Change entries in /boot/grub/menu.lst

The last step is to set the correct entries in the bootloader configuration file.

vim /boot/grub/menu.lst

splashimage=(hd0,0)/grub/splash.xpm.gz

title Red Hat Enterprise Linux (2.6.32-220.el6.x86_64)
root (hd0,0)
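For reference, a complete boot stanza also contains the kernel and initrd lines; the following is just a sketch based on the kernel version and logical volume used in this example, so adjust the paths and parameters to your system:

title Red Hat Enterprise Linux (2.6.32-220.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-220.el6.x86_64 ro root=/dev/vg00/lv_root
        initrd /initramfs-2.6.32-220.el6.x86_64.img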

Reboot Server

After the steps above you should be able to boot the server again.

Rescue# reboot

December 21, 2011 Posted by | Linux, RedHat | 1 Comment

Join RedHat Linux to Microsoft Active Directory

Overview

To log in as a Microsoft Active Directory (AD) user on a RedHat Linux system, the Linux server has to be joined to the AD domain. There are several ways to do that; one solution is to use Likewise Open, as described here.
Likewise Open is an open-source community project that enables core AD authentication for Linux.

Environment:
– RedHat Linux Enterprise (RHEL) 5.4
– Microsoft Active Directory 2003
– Likewise Open 6.0

Installation

The software is available for download after registration on the Likewise website http://www.likewise.com/download/.

# chmod +x LikewiseOpen-6.0.0.8234-linux-x86_64-rpm-installer
# ./LikewiseOpen-6.0.0.8234-linux-x86_64-rpm-installer

Join Linux system to AD Domain


# domainjoin-cli join mydomain.local Administrator

Joining to AD Domain: mydomain.local
With Computer DNS Name: myserver.mydomain.local
Administrator@MYDOMAIN.LOCAL’s password:
Enter Administrator@MYDOMAIN.LOCAL’s password:
SUCCESS

Login as Domain User

With PuTTY (single backslash)

login as: mydomain\domain_user
Using keyboard-interactive authentication.
Password:
/usr/bin/xauth: creating new authority file /home/local/MYDOMAIN/domain_user/.Xauthority
-sh-3.2$

On a Unix command line (double backslash)
$ ssh -l mydomain\\domain_user myserver.mydomain.local

-sh-3.2$ whoami
MYDOMAIN\domain_user

# domainjoin-cli query
Name = myserver
Domain = MYDOMAIN.LOCAL
Distinguished Name = CN=MYSERVER,CN=Computers,DC=mydomain,DC=local
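As a quick check that AD users are resolved on the Linux side without logging in, the standard id command can be used (the user name is just the example from above):

# id MYDOMAIN\\domain_user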

Check Linux server on AD console

The Linux server now shows up as a new computer account in the AD console.

Useful information

http://www.likewise.com/

July 28, 2010 Posted by | Linux, RedHat, Windows | 1 Comment

Xen Guest (DomU) Installation

OS: RedHat Enterprise Linux 5 (RHEL5), should also work on CentOS and Fedora.

There are a few methods to install a Xen guest (DomU). In my experience the easiest and smoothest way is to use the virt-install script, which is installed by default on RedHat Linux systems.

Xen Packages

First of all, we have to install the Xen virtualization packages (kernel-xen-devel is optional, but needed for example by the HP ProLiant Support Pack (PSP)).

# yum install rhn-virtualization-common rhn-virtualization-host kernel-xen-devel virt-manager

On CentOS and Fedora:

# yum groupinstall Virtualization

After that, a reboot is needed to load the Xen kernel.

Guest Installation with virt-install

In the following example the Xen guest installation is done with a kickstart configuration file. The kickstart file and the OS binaries are located on a provisioning server (Cobbler) and are reachable over HTTP.

# virt-install -x ks=http://lungo.pool/cblr/kickstarts/rhel51-x86_64_smbxen/ks.cfg

Would you like a fully virtualized guest (yes or no)?  This will allow you to run unmodified operating systems. no

 What is the name of your virtual machine? smbxen

 How much RAM should be allocated (in megabytes)? 1024

 What would you like to use as the disk (path)? /dev/vg_xen/lv_smbxen

 Would you like to enable graphics support? (yes or no) yes

 What is the install location? http://lungo.pool/cblr/links/rhel51-x86_64/   

Starting install...

Retrieving Server...                                            651 kB 00:00

Retrieving vmlinuz...     100% |=========================| 1.8 MB    00:00

Retrieving initrd.img...  100% |=========================| 5.2 MB    00:00

Creating domain...                                                 0 B 00:00

VNC Viewer Free Edition 4.1.2 for X - built Jan 15 2007 10:33:11

At this point a regular RedHat OS installation (graphical installer) starts.
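Instead of answering the prompts interactively, the same installation can also be started with command line options; the following is a sketch using the values from the example above (check virt-install --help for the exact option names on your version):

# virt-install --paravirt --name smbxen --ram 1024 \
    --file /dev/vg_xen/lv_smbxen --vnc \
    --location http://lungo.pool/cblr/links/rhel51-x86_64/ \
    -x ks=http://lungo.pool/cblr/kickstarts/rhel51-x86_64_smbxen/ks.cfg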

To automatically run the guest after a system (Dom0) reboot, we have to create the following link:

# ln -s /etc/xen/[guest_name] /etc/xen/auto/
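With the guest from this example the link would look like this:

# ln -s /etc/xen/smbxen /etc/xen/auto/smbxen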


We can manage the Xen Guest with xm commands, virsh commands or virt-manager.

Virt-Manager

# virt-manager



xm commands

List Domains (Xen Guests)

# xm list

Start a Guest

# xm create [guest-config]

Connect to a guest console (back: ESC-] on a US keyboard, Ctrl-5 on a German keyboard)

# xm console [guest_name]

Shutdown a guest

# xm shutdown [guest_name]

Destroy (Power off) a guest

# xm destroy [guest_name]

Monitor guests

# xm top


virsh commands

# virsh
virsh # help 

Commands:

    autostart       autostart a domain
    capabilities    capabilities
    connect         (re)connect to hypervisor
    console         connect to the guest console
    create          create a domain from an XML file
    start           start a (previously defined) inactive domain
    destroy         destroy a domain
    define          define (but don't start) a domain from an XML file
    domid           convert a domain name or UUID to domain id
    domuuid         convert a domain name or id to domain UUID
    dominfo         domain information
    domname         convert a domain id or UUID to domain name
    domstate        domain state
    dumpxml         domain information in XML
    help            print help
    list            list domains
    net-autostart   autostart a network
    net-create      create a network from an XML file
    net-define      define (but don't start) a network from an XML file
    net-destroy     destroy a network
    net-dumpxml     network information in XML
    net-list        list networks
    net-name        convert a network UUID to network name
    net-start       start a (previously defined) inactive network
    net-undefine    undefine an inactive network
    net-uuid        convert a network name to network UUID
    nodeinfo        node information
    quit            quit this interactive terminal
    reboot          reboot a domain
    restore         restore a domain from a saved state in a file
    resume          resume a domain
    save            save a domain state to a file
    schedinfo       show/set scheduler parameters
    dump            dump the core of a domain to a file for analysis
    shutdown        gracefully shutdown a domain
    setmem          change memory allocation
    setmaxmem       change maximum memory limit
    setvcpus        change number of virtual CPUs
    suspend         suspend a domain
    undefine        undefine an inactive domain
    vcpuinfo        domain vcpu information
    vcpupin         control domain vcpu affinity
    version         show version
    vncdisplay      vnc display
    attach-device   attach device from an XML file
    detach-device   detach device from an XML file
    attach-interface attach network interface
    detach-interface detach network interface
    attach-disk     attach disk device
    detach-disk     detach disk device

Example: Start a guest with virsh

# virsh start [guest_name]
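Similar to the /etc/xen/auto link above, a guest that is defined in libvirt can also be marked for automatic start with virsh (guest name as in this example):

# virsh autostart smbxen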

March 31, 2008 Posted by | Linux, RedHat, Virtualization, Xen | 11 Comments

Linux Network Bonding

Every system and network administrator is responsible for providing fast and uninterrupted network connectivity to their users. One step in that direction is network bonding (also called network trunking), an easy-to-configure and reliable solution for Linux systems.

The following configuration example is on a RedHat Enterprise Linux (RHEL) 5 System.

Network Scripts Configuration

mode=0: round-robin load balancing (default)
miimon=500: check the link state of the slave interfaces every 500 ms

# vim /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="mode=0 miimon=500"
BOOTPROTO=none
ONBOOT=yes
NETWORK=172.16.15.0
NETMASK=255.255.255.0
IPADDR=172.16.15.12
USERCTL=no


# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no


# vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

Load the Channel Bonding module on system boot


# vim /etc/modprobe.conf


alias bond0 bonding
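To activate the bond without rebooting, the module can be loaded and the network scripts restarted by hand (standard RHEL 5 commands; this briefly interrupts connectivity):

# modprobe bonding
# service network restart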

Show Network Configuration

The MAC address of the bond will be taken from its first slave device.

# ifconfig


bond0 Link encap:Ethernet HWaddr 00:0B:CD:E1:7C:B0
inet addr:172.16.15.12 Bcast:172.16.15.255 Mask:255.255.255.0
inet6 addr: fe80::20b:cdff:fee1:7cb0/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:1150 errors:0 dropped:0 overruns:0 frame:0
TX packets:127 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:168235 (164.2 KiB) TX bytes:22330 (21.8 KiB)


eth0 Link encap:Ethernet HWaddr 00:0B:CD:E1:7C:B0
inet6 addr: fe80::20b:cdff:fee1:7cb0/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:567 errors:0 dropped:0 overruns:0 frame:0
TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:84522 (82.5 KiB) TX bytes:4639 (4.5 KiB)
Interrupt:201


eth1 Link encap:Ethernet HWaddr 00:0B:CD:E1:7C:B0
inet6 addr: fe80::20b:cdff:fee1:7cb0/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:592 errors:0 dropped:0 overruns:0 frame:0
TX packets:113 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:84307 (82.3 KiB) TX bytes:19567 (19.1 KiB)
Interrupt:177 Base address:0x4000
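The bonding driver also reports detailed bond status (bonding mode, MII polling interval, state of every slave) under /proc:

# cat /proc/net/bonding/bond0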

Test the Bond

Unplug one of the network cables.

# tail -f /var/log/messages


kernel: bonding: bond0: link status definitely down for interface eth1, disabling it

Plug the network cable back in.

# tail -f /var/log/messages


kernel: bonding: bond0: link status definitely up for interface eth1.

The bonding module detects the failure of one of the two physical network interfaces configured in the bonding interface and disables the affected interface. After the failed network interface comes back online, the bonding module automatically re-integrates it into the bond. During the whole interface failure the logical network connection stayed online and remained usable for all applications without any interruption.

December 4, 2007 Posted by | Linux, Network, RedHat | 1 Comment

Linux SAN Multipathing

There are several SAN multipathing solutions for Linux at the moment; two of them are discussed in this post. The first one is device mapper multipathing, a failover and load balancing solution with a lot of configuration options. The second one (mdadm multipathing) is a pure failover solution that requires manual re-enabling of a failed path. The advantage of mdadm multipathing is that it is very easy to configure.

Before using a multipathing solution in a production environment on Linux it is also important to check whether the chosen solution is supported with the hardware in use. For example, HP doesn’t support the device mapper multipathing solution on their servers yet.

Device Mapper Multipathing

Procedure for configuring the system with DM-Multipath:

  1. Install device-mapper-multipath rpm
  2. Edit the multipath.conf configuration file:
    • comment out the default blacklist
    • change any of the existing defaults as needed
  3. Start the multipath daemons
  4. Create the multipath devices with the multipath command

Install Device Mapper Multipath

# rpm -ivh device-mapper-multipath-0.4.7-8.el5.i386.rpm
warning: device-mapper-multipath-0.4.7-8.el5.i386.rpm: Header V3 DSA signature:
Preparing...                ########################################### [100%]
1:device-mapper-multipath########################################### [100%]

Initial Configuration

Set user_friendly_names so that the devices will be created as /dev/mapper/mpath[n], and comment out the default blacklist (which blacklists all devices).

# vim /etc/multipath.conf

#blacklist {
#        devnode "*"
#}

defaults {
        user_friendly_names yes
        path_grouping_policy multibus
}

Load the needed module and enable the startup service.

# modprobe dm-multipath
# /etc/init.d/multipathd start
# chkconfig multipathd on

Print out the multipathed device.

# multipath -v2
or
# multipath -v3

Configuration

Configure device type in config file.

# cat /sys/block/sda/device/vendor
HP

# cat /sys/block/sda/device/model
HSV200

# vim /etc/multipath.conf
devices {
        device {
                vendor                  "HP"
                product                 "HSV200"
                path_grouping_policy    multibus
                no_path_retry           "5"
        }
}

Configure multipath device in config file.

# cat /var/lib/multipath/bindings

# Format:
# alias wwid
#
mpath0 3600508b400070aac0000900000080000

# vim /etc/multipath.conf

multipaths {
        multipath {
                wwid                    3600508b400070aac0000900000080000
                alias                   mpath0
                path_grouping_policy    multibus
                path_checker            readsector0
                path_selector           "round-robin 0"
                failback                "5"
                rr_weight               priorities
                no_path_retry           "5"
        }
}

Put devices that should not be multipathed on the blacklist (e.g. local RAID devices, volume groups).

# vim /etc/multipath.conf

devnode_blacklist {
        devnode "^cciss!c[0-9]d[0-9]*"
        devnode "^vg*"
}
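After editing /etc/multipath.conf, the daemon has to re-read the configuration; one way on RHEL 5 is to restart it and let multipath rebuild the maps:

# /etc/init.d/multipathd restart
# multipath -v2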

Show Configured Multipaths.

# dmsetup ls --target=multipath
mpath0  (253, 1)

# multipath -ll

mpath0 (3600508b400070aac0000900000080000) dm-1 HP,HSV200
[size=10G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=4][active]
\_ 0:0:0:1 sda 8:0   [active][ready]
\_ 0:0:1:1 sdb 8:16  [active][ready]
\_ 1:0:0:1 sdc 8:32  [active][ready]
\_ 1:0:1:1 sdd 8:48  [active][ready]

Format and mount Device

fdisk cannot be used directly on /dev/mapper/[dev_name] devices. Run fdisk on one of the underlying disks instead and then execute the following command so that device-mapper-multipath creates a partition device under /dev/mapper/ (here mpath0p1):

# fdisk /dev/sda

# kpartx -a /dev/mapper/mpath0

# ls /dev/mapper/*
mpath0  mpath0p1

# mkfs.ext3 /dev/mapper/mpath0p1

# mount /dev/mapper/mpath0p1 /mnt/san

After that /dev/mapper/mpath0p1 is the first partition on the multipathed device.
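To mount the partition automatically at boot, an entry can be added to /etc/fstab; a sketch with the mount point from this example (the mount options are just the defaults and may need adjusting):

/dev/mapper/mpath0p1   /mnt/san   ext3   defaults   1 2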

Multipathing with mdadm on Linux

The md multipathing solution is only a failover solution, which means that only one path is used at a time and no load balancing is performed.

Start the MD Multipathing Service

# chkconfig mdmpd on

# /etc/init.d/mdmpd start

On the first Node (if it is a shared device)
Make Label on Disk

# fdisk /dev/sdt
Disk /dev/sdt: 42.9 GB, 42949672960 bytes
64 heads, 32 sectors/track, 40960 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sdt1               1       40960    41943024   fd  Linux raid autodetect

# partprobe

Bind multiple paths together

# mdadm --create /dev/md4 --level=multipath --raid-devices=4 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1

Get UUID

# mdadm --detail /dev/md4
UUID : b13031b5:64c5868f:1e68b273:cb36724e

Set md configuration in config file

# vim /etc/mdadm.conf

# Multiple Paths to RAC SAN
DEVICE /dev/sd[qrst]1
ARRAY /dev/md4 uuid=b13031b5:64c5868f:1e68b273:cb36724e

# cat /proc/mdstat

On the second Node (Copy the /etc/mdadm.conf from the first node)

# mdadm -As

# cat /proc/mdstat

Restore a failed path (mark the path as failed, remove it from the md device, then re-add it)

# mdadm /dev/md4 -f /dev/sdt1 -r /dev/sdt1 -a /dev/sdt1

November 29, 2007 Posted by | Linux, RedHat, SAN | 26 Comments

ASM Disk not shown in Oracle Universal Installer (OUI) or DBCA

Terms:

Operating System: Enterprise Linux 4 U5 (RHEL4 U5)

Oracle: 10.2.0.1

Problem:

While installing the ASM instance with the Oracle Universal Installer, the ASM disk created with oracleasm createdisk is not shown.

Solution:

Define the scan order in the /etc/sysconfig/oracleasm config file. For example, if the multipathing device in use is /dev/md1, you have to force ASMLib to scan the /dev/md* paths before the /dev/sd* paths.

# vim /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="md sd"
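After changing the scan order, the ASM disks can be rescanned and listed to verify that ASMLib now picks up the multipath device:

# /etc/init.d/oracleasm scandisks
# /etc/init.d/oracleasm listdisks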

Also make sure that the packages needed for using ASM with ASMLib are installed:


  • oracleasmlib-2.0 – the ASM libraries
  • oracleasm-support-2.0 – utilities needed to administer ASMLib
  • oracleasm – a kernel module for the ASM library

More Info:

Metalink Note:394956.1

November 28, 2007 Posted by | Linux, Oracle, RedHat | 22 Comments