Understanding Dell PERC Controller

Part – 1

http://www.youtube.com/watch?v=nKiOECFu1HE


Debugging .htaccess

Debugging .htaccess can at times feel like shooting in the dark. I went through one such day where everything made sense yet nothing worked. Below are the steps I used to find the bug.

First, add some garbage inside the .htaccess file and see whether you get a Server Error. If you get a server error, the file is being read:

<IfModule mod_rewrite.c>

#Options +FollowSymlinks

This is Garbage and should result in failure

RewriteEngine On
RewriteBase /
RewriteRule ^(.*)$ /debug.php?$1 [QSA]
</IfModule>

If the server error persists even after you remove the garbage line, there is some problem in your .htaccess rules themselves. You can try this online .htaccess tool to debug them:

http://htaccess.madewithlove.be/
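When you do get a 500 error, the Apache error log names the exact directive that failed. On a CentOS-style layout (the log path is an assumption; check your ErrorLog directive):

tail -f /var/log/httpd/error_log
# expect something like:
# .htaccess: Invalid command 'This', perhaps misspelled or defined by
# a module not included in the server configuration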

If you do not get a server error at all, then the .htaccess file is not being read by Apache. In that case:

  1. Open the Apache configuration file located at /etc/httpd/conf/httpd.conf
  2. Change AllowOverride None to AllowOverride All inside the DocumentRoot Directory directive, normally <Directory "/var/www/html"> (see the sketch below).

This will allow the .htaccess directives to take effect.
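For reference, a minimal sketch of the changed block, assuming a stock Apache 2.2 layout on CentOS (adjust the path to match your DocumentRoot):

<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    # This is the line that matters for .htaccess:
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>

Restart Apache afterwards so the change is picked up:

service httpd restart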

Software RAID on XenServer 5.6 SP2 using mdadm

I finally got my virtualization servers in, along with all the components to get them humming with XenServer 5.6 SP2. The only problem I found was that the onboard RAID controller on the HP DL160 G6 (B110i) is really a HostRAID or FakeRAID solution. A fake RAID controller only does the work of writing; the parity bits are maintained by the driver at the cost of CPU.

My advice: stay miles away from FakeRAID and use mdadm (software RAID) instead. Even if you somehow make it work, you will realize my words are gold when you hit a failure: one OS upgrade and your RAID will fail because the driver no longer matches, leaving you high and dry. You would then need to upgrade the firmware and the OS simultaneously, provided you can even get the updated firmware.

XenServer does support software RAID, though, using Linux mdadm. I find Linux software RAID pretty stable, and since it's only RAID 1 I am unlikely to see a huge performance hit.

The only problem with the tutorial I found is that it takes a lot of terminal commands to get software RAID configured properly, and if I had to do this on more than one server that would be a pain. So I decided to copy all the commands into one handy-dandy shell script!

You can download that sucker here (XenServer RAID 1 script):

xenRAID.sh (please save the content in a text file)
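The script itself is not reproduced here, but for orientation it wraps the standard degraded-mirror recipe. A rough, hypothetical outline under the sda/sdb assumptions used below (not the actual script):

# Hypothetical outline only -- the real steps are in xenRAID.sh above.
sfdisk -d /dev/sda | sfdisk /dev/sdb       # clone sda's partition layout onto sdb
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1   # degraded mirror
# ...copy the freshly installed system onto the array, point the boot
# configuration at /dev/md0, then pull sda into the mirror:
mdadm --manage /dev/md0 --add /dev/sda1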

In order to run it, you must have two hard drives of the same size. Run the XenServer install as normal, and select only the sda drive as the installation point as well as for the VM containers. Make sure that you leave sdb unchecked when configuring your VM containers. After that, follow the prompts as normal to complete the install.

Once the install is done, you can ssh into your newly installed XenServer from another machine, then run the following:

chmod +x xenRAID.sh

sh xenRAID.sh

That's it, pal! Just wait a few minutes, answer a few questions with 'Y' for yes, and you will have a software RAID 1 implementation of XenServer 5.6 in no time! I have tested it three times in a row, and it works great! I have also taken out one drive at a time to verify that XenServer still boots, and it worked flawlessly!

 

PS: Once it reboots, run cat /proc/mdstat; there is a chance that one of the RAID devices has not been added.

To add a missing RAID device:

mdadm --manage /dev/md[0|1] --add /dev/sd[a|b][1|3]

Pick one value from each [ | ] group as per your configuration.
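For example, if md1 came up without its second member and that member should be sdb3 (device names here are purely illustrative), the call would be:

mdadm --manage /dev/md1 --add /dev/sdb3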

Managing Software RAID (mdadm) in Linux

1 Preliminary Note
In this example I have two hard drives, /dev/sda and /dev/sdb, with the partitions /dev/sda1 and /dev/sda2 as well as /dev/sdb1 and /dev/sdb2.

/dev/sda1 and /dev/sdb1 make up the RAID1 array /dev/md0.

/dev/sda2 and /dev/sdb2 make up the RAID1 array /dev/md1.

/dev/sda1 + /dev/sdb1 = /dev/md0

/dev/sda2 + /dev/sdb2 = /dev/md1

/dev/sdb has failed, and we want to replace it.

2 How Do I Tell If A Hard Disk Has Failed?
If a disk has failed, you will probably find a lot of error messages in the log files, e.g. /var/log/messages or /var/log/syslog.

You can also run

cat /proc/mdstat

and instead of the string [UU] you will see [U_] if you have a degraded RAID1 array.
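You can also ask mdadm directly; the detail view reports the overall array state (e.g. "clean, degraded") and flags each faulty member:

mdadm --detail /dev/md0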

3 Removing The Failed Disk
To remove /dev/sdb, we will mark /dev/sdb1 and /dev/sdb2 as failed and remove them from their respective RAID arrays (/dev/md0 and /dev/md1).

First we mark /dev/sdb1 as failed:

mdadm --manage /dev/md0 --fail /dev/sdb1

The output of

cat /proc/mdstat

should look like this:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[2](F)
24418688 blocks [2/1] [U_]

md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/2] [UU]

unused devices: <none>

Then we remove /dev/sdb1 from /dev/md0:

mdadm --manage /dev/md0 --remove /dev/sdb1

The output should be like this:

server1:~# mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1

And

cat /proc/mdstat

should show this:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0]
24418688 blocks [2/1] [U_]

md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/2] [UU]

unused devices: <none>

Now we do the same steps again for /dev/sdb2 (which is part of /dev/md1):

mdadm --manage /dev/md1 --fail /dev/sdb2

cat /proc/mdstat

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0]
24418688 blocks [2/1] [U_]

md1 : active raid1 sda2[0] sdb2[2](F)
24418688 blocks [2/1] [U_]

unused devices: <none>

mdadm --manage /dev/md1 --remove /dev/sdb2

server1:~# mdadm --manage /dev/md1 --remove /dev/sdb2
mdadm: hot removed /dev/sdb2

cat /proc/mdstat

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0]
24418688 blocks [2/1] [U_]

md1 : active raid1 sda2[0]
24418688 blocks [2/1] [U_]

unused devices: <none>
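As an aside, mdadm accepts both operations in a single call, so each fail-and-remove pair above can be collapsed into one command, e.g.:

mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2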

Then power down the system:

shutdown -h now

and replace the old /dev/sdb hard drive with a new one (it must be at least the same size as the old one; if it is even a few MB smaller, rebuilding the arrays will fail).

4 Adding The New Hard Disk
After you have changed the hard disk /dev/sdb, boot the system.

The first thing we must do now is to create the exact same partitioning as on /dev/sda. We can do this with one simple command:

sfdisk -d /dev/sda | sfdisk /dev/sdb
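Note that sfdisk works on the MBR partition tables used in this setup. Purely as a hypothetical aside, on GPT disks the equivalent clone could be done with sgdisk:

sgdisk --replicate=/dev/sdb /dev/sda   # copy sda's GPT onto sdb
sgdisk -G /dev/sdb                     # give the copy fresh random GUIDs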

You can run

fdisk -l

to check if both hard drives have the same partitioning now.

Next we add /dev/sdb1 to /dev/md0 and /dev/sdb2 to /dev/md1:

mdadm --manage /dev/md0 --add /dev/sdb1

server1:~# mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: re-added /dev/sdb1

mdadm --manage /dev/md1 --add /dev/sdb2

server1:~# mdadm --manage /dev/md1 --add /dev/sdb2
mdadm: re-added /dev/sdb2

Now both arrays (/dev/md0 and /dev/md1) will be synchronized. Run

cat /proc/mdstat

to see when it’s finished.
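Rather than re-running the command by hand, you can watch the rebuild progress refresh automatically:

watch -n 5 cat /proc/mdstat   # redraws the status every 5 seconds; Ctrl+C to exit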

During the synchronization the output will look like this:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
24418688 blocks [2/1] [U_]
[=>...................] recovery = 9.9% (2423168/24418688) finish=2.8min speed=127535K/sec

md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/1] [U_]
[=>...................] recovery = 6.4% (1572096/24418688) finish=1.9min speed=196512K/sec

unused devices: <none>

When the synchronization is finished, the output will look like this:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
24418688 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/2] [UU]

unused devices: <none>

That’s it, you have successfully replaced /dev/sdb!
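One caveat the recipe above does not cover: if the system boots from these disks, the replacement /dev/sdb has no boot loader on it yet, so the machine would not come up if /dev/sda failed later. With GRUB legacy (typical on distros of this vintage, and assuming /boot is on the first partition) you could install it on the new disk as well:

grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit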

Install PHP 5.3 on CentOS 5.x

PHP 5.3 is available from the Remi repo.

Enable the EPEL Repo

wget http://download.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm

rpm -Uvh epel-release-5-4.noarch.rpm

 

Enable the Remi Repo

wget http://rpms.famillecollet.com/el5.i386/remi-release-5-7.el5.remi.noarch.rpm

rpm -Uvh remi-release-5-7.el5.remi.noarch.rpm

 

Update or Install PHP

yum --enablerepo=remi install php-common
yum --enablerepo=remi install php
yum install gd gd-devel
yum --enablerepo=remi install php-mcrypt php-xml php-devel php-imap php-soap php-mbstring php-mysql
yum --enablerepo=remi install php-mhash php-simplexml php-dom php-gd php-pear php-pecl-imagick php-magickwand
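Once the packages are in, a quick sanity check confirms that the version really came from Remi and that the MySQL extension loaded:

php -v                  # should report PHP 5.3.x
php -m | grep -i mysql  # should list the MySQL modules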


Installing Wowza Media Server on Ubuntu

I needed to install Java first…

sudo apt-get install python-software-properties
sudo add-apt-repository ppa:sun-java-community-team/sun-java6
sudo apt-get update
sudo apt-get install sun-java6-jre sun-java6-plugin

Installing Wowza Media Server on CentOS

yum install java
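Either way, confirm a working Java runtime is on the PATH before running the Wowza installer:

java -version   # make sure a Java 6 (or compatible) runtime is reported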

Download the RAW Linux Version

sudo chmod +x WowzaMediaServer-2.2.0.tar.bin

sudo ./WowzaMediaServer-2.2.0.tar.bin

Then follow the on-screen instructions…

Install Location:
/usr/local/WowzaMediaServer

To enter serial number:
cd /usr/local/WowzaMediaServer/bin
./startup.sh