Maanas Royy

Technology Trends and Talk


Firing atq jobs from Apache on CentOS 6.x

On CentOS, SELinux is enabled by default and will not allow Apache to fire atq jobs (or any other jobs).

 

First we need to give Apache read/write access to the job’s working directory by setting the SELinux context:

chcon -R -t httpd_sys_rw_content_t /home/agms/residuals/

Then we need to allow the action in the SELinux policy.

Change to root’s home directory and fire this command:

grep atq /var/log/audit/audit.log | audit2allow -M agms_atq_pol

This will create a policy file agms_atq_pol.te (and a compiled agms_atq_pol.pp):

[root@dev02 ~]# cat agms_atq_pol.te 

module agms_atq_pol 1.0;

require {

type httpd_t;

class netlink_audit_socket create;

}

#============= httpd_t ==============

#!!!! This avc can be allowed using the boolean 'allow_httpd_mod_auth_pam'

allow httpd_t self:netlink_audit_socket create;

Now load the compiled module using semodule:

semodule -i agms_atq_pol.pp

Then reboot, as the policy will not load into the running kernel on its own (yes, the policy goes into the kernel). We wasted some time trying to make it work without a reboot.


Migrating GitLab to GitLab-Omnibus

Coming from a Manual Install (MySQL)

Some assumptions

  • You are coming from a manual GitLab installation that uses MySQL.
  • You know how to manually upgrade this existing install.
  • You’ve already installed the RPM or DEB version of GitLab on your new server.
  • Your local workstation has Python 2.7 installed.
  • You have upgraded the manual GitLab installation to the same version as the RPM or DEB version of GitLab.

Some notes

  • If you have any extra customizations (branding and such), these will not transfer.
  • If you are using LDAP, these settings will not transfer. You will have to add them again to the new install. (Make sure you are looking at the correct version of this doc. At the time of writing, I was working with 6.8.)

On your Existing Installation

First thing you are going to want to do is get your manual install completely up-to-date (or at least to one of the versions available from the omnibus archives) as you can’t restore across versions of GitLab. I’m going to assume you have done this already.

Next, we need to back up our existing installation.

## Stop Server
sudo service gitlab stop
## Backup GitLab
cd /home/git/gitlab
sudo -u git -H bundle exec rake gitlab:backup:create RAILS_ENV=production

Now, dump the existing database like so. In this example, we are calling the dumped content current_install.mysql which is being dumped from our production database gitlabhq_production.

## You will need your MySQL admin password for this

mysqldump --compatible=postgresql --default-character-set=utf8 -r current_install.mysql -u root gitlabhq_production -p  

Make sure the backup archive and database dump are in a place you can get to from your local workstation.

## Example: moving our GitLab backup and MySQL database dump to your user's home folder

cp /home/git/gitlab/tmp/1234567890_gitlab_backup.tar /home/your_user/

cp /path/to/current_install.mysql /home/your_user/  

On your Local Workstation

Pull down the backup archive and the database dump file.

scp your_user@existing.install.com:1234567890_gitlab_backup.tar .

scp your_user@existing.install.com:current_install.mysql .  

Extract the archive. You’ll want to do this in an empty directory. After it is extracted, you may delete the archive. (Be sure to write down the name of this file though, you will need it later)

mkdir ~/tar_me_from_inside && cd ~/tar_me_from_inside  
tar -xvf ~/1234567890_gitlab_backup.tar  
rm ~/1234567890_gitlab_backup.tar  

cd ~/
git clone -b gitlab https://github.com/gitlabhq/mysql-postgresql-converter.git

Convert the database dump file from the original install to the new install compatible psql file. You will need Python 2.7 for this.

cd ~/mysql-postgresql-converter  
python db_converter.py ~/current_install.mysql ~/database.psql
ed -s ~/database.psql < move_drop_indexes.ed

Remove the db/database.sql from your extracted backup and move the new one in its place. (Using the same name as the one you just deleted)

cd ~/tar_me_from_inside  
rm db/database.sql  
mv ~/database.psql db/database.sql  

Now, tar up this directory, naming it the same as the original backup archive file.

cd ~/tar_me_from_inside && tar -cvf ~/1234567890_gitlab_backup.tar . && cd ~/
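Note that the gitlab-rake restore expects a plain (uncompressed) tar archive, so avoid gzip's -z flag when repacking. A minimal sanity check of the repack step on scratch data (all paths below are made up for illustration):

```shell
# Build a fake extracted-backup directory and repack it as a plain tar
workdir=$(mktemp -d)
mkdir -p "$workdir/extracted/db"
echo "-- converted dump" > "$workdir/extracted/db/database.sql"
cd "$workdir/extracted"
tar -cvf "$workdir/backup.tar" .

# The archive should list the database dump at the expected path
tar -tf "$workdir/backup.tar" | grep 'db/database.sql'
```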

Finally, place the new backup archive on your new install server.

scp ~/1234567890_gitlab_backup.tar your_user@new.server.com:.  

On your New RPM/DEB Install

Move the backup to /var/opt/gitlab/backups

mv ~/1234567890_gitlab_backup.tar /var/opt/gitlab/backups  

Stop unicorn and sidekiq, then run the restore.

## Run this with sudo or as root

gitlab-ctl stop unicorn  
gitlab-ctl stop sidekiq  
gitlab-rake gitlab:backup:restore BACKUP=1234567890  

If all went well, you should be able to start GitLab back up, log in, and find all your projects just as you left them!!

gitlab-ctl start  

PHP can’t connect to MySQL with error 13 (but the command line can)

Could not connect: Can’t connect to MySQL server on ‘MYSQL.SERVER’ (13)

This problem is encountered on Scientific Linux machines, or any machine where security patches are applied automatically. SELinux is the reason such connections are not allowed.

setsebool -P httpd_can_network_connect=1

will also be a helpful CLI command for many people hitting this problem, as it allows mysql_connect() calls made from within HTTP (Apache) requests to reach a remote MySQL database server. SELinux blocks network connections from httpd by default (to prevent a compromised httpd from being used to attack other machines), and the httpd_can_network_connect boolean lifts that restriction. (The overall SELinux mode itself is configured in /etc/selinux/config.)

Squash several Git commits into a single commit

When working on any project in Git, a large number of commits accumulate over time. The best course of action is to always work in a branch, keep working at your own pace, and keep adding commits. At some point you will want to merge the changes into the master branch and squash the multiple commits into a single one with a detailed description.

The easiest way to turn multiple commits in a feature branch into a single commit is to reset the feature branch changes in the master and commit everything again.

# Switch to the master branch and make sure you are up to date.
git checkout master
git fetch  # may be necessary (depending on your git config) to receive updates on origin/master
git pull

# Merge the feature branch into the master branch.
git merge feature_branch

# Reset the master branch to origin's state.
# This is important: after the merge, this moves the master branch head back,
# and Git now considers all merged changes as unstaged changes.
git reset origin/master

# We can add these changes as one commit.
# Adding --all will also add untracked files.
git add --all
git commit

Note that this is not touching the feature branch at all. If you would merge the feature branch into the master again at a later stage all of its commits would reappear in the log.

You may also do it the other way round (merging master into the branch and resetting to the master state), but this will destroy your commits in the feature branch, meaning you cannot push it to origin.
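An alternative worth knowing is git merge --squash, which stages the feature branch’s combined changes on master without committing, so you can write one detailed commit message. A sketch in a throwaway repository (file names and messages are made up):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
trunk=$(git symbolic-ref --short HEAD)  # 'master' or 'main' depending on git version

# One commit on the trunk, two on a feature branch
echo a > f.txt; git add f.txt; git commit -qm "initial"
git checkout -qb feature_branch
echo b >> f.txt; git commit -qam "wip 1"
echo c >> f.txt; git commit -qam "wip 2"

# Squash-merge: stages the combined changes, but does not commit
git checkout -q "$trunk"
git merge --squash feature_branch
git commit -qm "feature_branch squashed into one commit"

git rev-list --count HEAD  # two commits: initial + squashed
```

As with the reset approach, merging feature_branch normally at a later stage would bring its individual commits back into the log.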

How to Compile and Run Java Code from a Command Line

Being spoiled by IDEs and automated build tools, I recently realized that I no longer know how to run Java code from a command line. After playing a guessing game for an hour, trying to compile a simple piece of code that took 5 minutes to write, I thought maybe it was time to do a little research.

Task

Let’s say we have a fairly standard Java project that consists of three top level folders:

/bin – empty folder that will contain compiled .class files

/lib – contains third party .jar files

/src – contains .java source files

Our task is to compile and launch the project from its root folder. We will use Windows as the example OS (on Unix systems the only difference is the path separator symbol – ":" instead of ";").

Compiling Java Code

The first step is compiling plain text .java sources into Java Virtual Machine byte code (.class files). This is done with javac utility that comes with JDK.

Assuming we are at the application root folder trying to compile Application.java file from com.example package that uses lib1.jar and lib2.jar libraries from lib folder to a destination bin folder, compilation command should have the following format:

javac -d bin -sourcepath src -cp lib/lib1.jar;lib/lib2.jar src/com/example/Application.java

As a result, the bin/com/example/Application.class file should be created. If Application.java uses other classes from the project, they should all be automatically compiled and put into the corresponding folders.

Running Java Code

To launch a .class file we just compiled, another JDK utility called java would be needed.

Assuming we are at the application root folder trying to launch the Application.class file from the com.example package that uses lib1.jar and lib2.jar libraries from the lib folder, the launch command should have the following format:

java -cp bin;lib/lib1.jar;lib/lib2.jar com.example.Application

Note that we don’t provide a filename here, only an actual class name that java would attempt to find based on provided classpath.

Some Notes About Classpath

Let’s say during Application.java compilation the compiler stumbles upon some com.example.Util class. How does it find it in the file system? According to Java file naming rules, the Util class has to be located in a Util.java file under a /com/example/ folder – but where should the search for this path start? This is where the classpath comes into play: it sets the starting folders for the class search. The classpath can be set in three different ways:

  • If no -classpath parameter is passed, the CLASSPATH environment variable is used
  • If the CLASSPATH environment variable is not found, the current folder (".") is used by default
  • If -classpath is explicitly set as a command line parameter, it overrides the other values

The fact that an explicitly set classpath overrides the default value (the current folder) can cause some unexpected results.

For example, if we don’t use any third party libraries, only our own com.example.Util class, and try to compile Application.java from the src folder:

javac com/example/Application.java

this would work, but if we then decide to add a third party library to the classpath:

javac -cp lib/lib1.jar com/example/Application.java

it would cause an error:

package com.example.Util does not exist

This happens because when we set -cp lib/lib1.jar we override the default classpath value – the current folder. The compiler will now look for all classes only inside that jar file. To fix this we need to explicitly add the current folder to the classpath:

javac -cp .;lib/lib1.jar com/example/Application.java

Installing Coldfusion 11 on CentOS/SL 6.x 64 bit

  1. Installing the required libstdc++.so.5 C++ Library
  2. Running the ColdFusion installer
  3. Starting ColdFusion for the first time
  4. Adding hostname pointed to localhost
  5. Finish the ColdFusion installation in your browser

1. Install the required libstdc++.so.5 C++ Library

There’s one thing you need to do before running the installer. ColdFusion requires the libstdc++.so.5 C++ library for a few features such as custom tags, Web Services and some cfimage functionality. Download and install the library using the package manager built into CentOS.

yum install libstdc++.so.5

There are several ways you can go about getting the installation file for ColdFusion 11. The easiest is probably by logging into your Adobe account and downloading either the 32-bit install file (coldfusion_11_WWE_linux.bin) or the 64-bit install file (coldfusion_11_WWE_linux64.bin). I highly recommend you go with a 64-bit installation as this will allow you to allocate much more RAM to the ColdFusion server than in a 32-bit environment. Of course, this requires that you have a 64-bit version of CentOS installed as well.

2. Make the install file executable and run the installer

Before you can run the installation file you need to make it executable. Navigate to the directory where you placed the file, run chmod to make it executable, then launch the installer.

cd /install

# For the 32-bit installation
chmod +x coldfusion_11_WWE_linux.bin
./coldfusion_11_WWE_linux.bin

# For the 64-bit installation
chmod +x coldfusion_11_WWE_linux64.bin
./coldfusion_11_WWE_linux64.bin

The install process may take a few minutes to get going depending on your server specs. You’ll be presented with a multi-page license agreement that you have to accept in order to continue the installation. After that, you are presented with installation questions.

After the installation is finished you will see a success screen that tells you to start ColdFusion and run the Configuration Wizard. The wizard isn’t something you have to run specifically, as the first time you launch the ColdFusion Administrator the wizard will run for you. You are given a URL for the ColdFusion Administrator:

http://[machinename]:8500/CFIDE/administrator/index.cfm

3. Starting ColdFusion for the first time

Change directory to the ColdFusion install location

cd /opt/coldfusion11/cfusion/bin

and issue

./coldfusion start

4. Adding hostname pointed to 127.0.0.1

You might run into an error 500.

Issue this as root

echo "127.0.0.1 `hostname`" >> /etc/hosts

Then restart ColdFusion. It worked in my case.
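The quoting around echo and the hostname backticks is easy to get wrong, so here is the same idea exercised against a scratch file instead of the real /etc/hosts:

```shell
# Append a "127.0.0.1 <hostname>" line to a scratch file
scratch=$(mktemp)
echo "127.0.0.1 $(hostname)" >> "$scratch"
cat "$scratch"
```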

5. Finish the ColdFusion installation in your browser

Open your favorite browser and copy/paste the following into the address bar. Change machinename to the IP address of your server. A local IP such as 10.x.x.x or 192.x.x.x will work if you are connected to a server in your office or you are connected to an external server via VPN. You might also be able to use the external IP of the server. Or, if the server is already hosting a domain name, you could change machinename to yourdomain.com.

After loading this URL you should see a ColdFusion-branded Configuration and Settings wizard screen with a password prompt. Enter the password for the ColdFusion Administrator you created during installation and press enter. ColdFusion will do a few things and then show a new screen with an okay button. Press the button to go straight to the main screen of the ColdFusion Administrator.

Enabling e1000 Gigabit device emulation in Citrix XenServer

The following howto describes modification to critical system software. If you choose to follow this guide, you do so at your own risk.

The problem

The commercial version of the Citrix XenServer does not allow you to choose the type of ethernet adapter to emulate within your VM. The standard device that is emulated is a Realtek 8139 (RTL8139), which is a 100Mbit/sec Fast Ethernet card.

Citrix themselves do not view this as a major issue, as they expect you to install paravirtualised drivers within your guest operating system. This is usually a very good idea and just fine if you’re using Windows, or a major supported OS such as Red Hat, CentOS or Ubuntu. Under these Linux operating systems, your entire kernel must be replaced by a Citrix supplied kernel. The paravirtualised drivers will outperform any emulated device.

However, if you’re running a system with a customised non-standard kernel that doesn’t support Citrix Xen paravirtualisation, you’ll be stuck with a 100Mbit/sec bottleneck in your network. Sure, you can go and rebuild your kernel with the right paravirtualised drivers, but that’s not always an option.

Those familiar with the open source version of Xen will know that the underlying QEMU device emulation that Xen uses can emulate an Intel 1Gbit/sec adapter, called “e1000”. Apart from the additional speed, this device also supports jumbo Ethernet frames. This emulation mode is available under Citrix XenServer, but is a hidden feature, due to hard-coding of the Realtek driver option.

Enabling e1000 emulation

You’ll need to ssh into your Citrix server and become root. Then do the following:

First rename /usr/lib/xen/bin/qemu-dm to /usr/lib/xen/bin/qemu-dm.orig

# mv /usr/lib/xen/bin/qemu-dm /usr/lib/xen/bin/qemu-dm.orig

Then make a replacement /usr/lib/xen/bin/qemu-dm file like this

#!/bin/bash
# Rewrite any "rtl8139" argument to "e1000", then hand off to the real qemu-dm
oldstring=$@
newstring=${oldstring//rtl8139/e1000}
exec /usr/lib/xen/bin/qemu-dm.orig $newstring

Then chmod (to make it executable) and chattr it (to stop it being overwritten):

# chmod 755 /usr/lib/xen/bin/qemu-dm
# chattr +i /usr/lib/xen/bin/qemu-dm

If you now shutdown and re-start your Citrix virtual machines, they will have an emulated e1000 device.
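You can sanity-check the substitution the wrapper performs without touching qemu-dm at all; the argument string below is made up for illustration:

```shell
# Simulate the argument rewriting done by the replacement qemu-dm script
args="-m 512 -net nic,vlan=0,macaddr=00:16:3e:00:00:01,model=rtl8139 -boot c"
rewritten=${args//rtl8139/e1000}
echo "$rewritten"
```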

Warning

The “chattr” line above makes the replacement file “immutable”. This means that the file cannot be overwritten, preventing the loss of this modification in the event of a system update.

However, this may cause updates provided by Citrix to fail at the point of installation. An alternative approach would be to leave the file unprotected and re-apply this modification after Citrix-supplied updates have been applied.

To remove the protection from the file, do the following:

# chattr -i /usr/lib/xen/bin/qemu-dm

Device eth0 does not seem to be present, delaying initialization

Recently I’ve been deploying quite a few VMs for a wide range of different services.

When adding a new Ethernet card, or if you want to swap eth0 and eth1, the issue is that the MAC address is persistently mapped to a system network device name. When trying to bring up eth0 I got this error message:

Device eth0 does not seem to be present, delaying initialization  

This became quite common when working with cloned images.
dmesg revealed that eth0 had been renamed to eth1 (udev: renamed network interface eth0 to eth1).

The simple fix is to remove the persistent rule and reboot. Don’t forget to update the network script /etc/sysconfig/network-scripts/ifcfg-ethX.

Comment out or modify the UUID and MAC address

nano /etc/sysconfig/network-scripts/ifcfg-eth0  
rm -f /etc/udev/rules.d/70-persistent-net.rules  
reboot  

Now when the server reboots, the persistent-net.rules file will be regenerated on boot with the new MAC addresses.
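For reference, this is how a persistent-net rule pins a MAC address to an interface name. The sample line below is made up; the real ones live in /etc/udev/rules.d/70-persistent-net.rules:

```shell
# Extract the pinned MAC from a (sample) persistent-net rule line
rule='SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:12:34:56", NAME="eth0"'
mac=$(echo "$rule" | grep -o 'ATTR{address}=="[^"]*"' | cut -d'"' -f2)
echo "$mac"
```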

SSH and SFTP Chroot Jail

Let’s get Jailed!

Step 1: Create your chroot directories

I’ve seen a few strategies for this including placing the chroot directory under /var/chroot.

In this case all of the clients on this server have a public_html subdirectory structure under their home directories. To make it easy to see who’s been jailed we’ll put our chroot jail in /home.

#Create our directories
sudo mkdir -p /home/jail/{dev,etc,lib,lib64,usr,bin,home}
sudo mkdir -p /home/jail/usr/bin

#Set owner
sudo chown root:root /home/jail

#Needed for the OpenSSH ChrootDirectory directive to work
sudo chmod go-w /home/jail

Step 2: Choose your commands

We’ll offer a limited set of userspace applications. For these to work you need to copy the binary into its corresponding directory in the jail, as well as copy over any linked dependencies.

Allan Field pointed me to a handy script that can be used for bringing binary dependencies over for a given executable (as opposed to manually – via ldd and copying the results.)

The script can be found here…http://www.cyberciti.biz/files/lighttpd/l2chroot.txt.

We’re going to offer bash, ls, cp, mv, and mkdir to our jailed users (for starters), with clear added below.

#First the binaries
cd /home/jail/bin
sudo cp /bin/bash .
sudo cp /bin/ls .
sudo cp /bin/cp .
sudo cp /bin/mv .
sudo cp /bin/mkdir .

#Now our l2chroot script to bring over dependencies
sudo l2chroot.sh /bin/bash
sudo l2chroot.sh /bin/ls
sudo l2chroot.sh /bin/cp
sudo l2chroot.sh /bin/mv
sudo l2chroot.sh /bin/mkdir

(This should really be wrapped up into a single bash script that copies both the binary and its dependencies.)
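A rough sketch of what such a script could look like – copy_with_deps is a hypothetical name, and it assumes a Linux system where ldd resolves the shared libraries:

```shell
jail=$(mktemp -d)  # stand-in for /home/jail

# Copy a binary plus every shared library ldd resolves for it into the jail
copy_with_deps() {
  bin="$1"
  mkdir -p "$jail$(dirname "$bin")"
  cp "$bin" "$jail$bin"
  for lib in $(ldd "$bin" | grep -o '/[^ ]*'); do
    mkdir -p "$jail$(dirname "$lib")"
    cp "$lib" "$jail$lib"
  done
}

copy_with_deps /bin/ls
ls "$jail/bin"
```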

The clear command requires terminal definitions…

# clear command
cd /home/jail/usr/bin
sudo cp /usr/bin/clear .
sudo l2chroot.sh /usr/bin/clear
#Add terminal info files - so that clear, and other terminal aware commands will work.
cd /home/jail/lib
sudo cp -r /lib/terminfo .

Step 3: Create your user and jail group

Create the jail group sudo groupadd jail

You can either create a new user using sudo adduser --home /home/jail/home/username username, or copy (and then later remove) the home directory of an existing user into the /home/jail/home directory.

If you create a new user using sudo adduser --home /home/jail/home/username username – the home directory will be created in the jail, but the user’s home directory in /etc/passwd will need to be edited to return it to /home/username – since the jail root will put home at the root again once the user is logged in.
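The /etc/passwd edit can be illustrated on a sample line (user alice, the UIDs, and the paths are all hypothetical); field 6, the home directory, is the one to change:

```shell
# Rewrite field 6 (home dir) of a sample passwd line from the jail path back to /home/alice
line='alice:x:1001:1001::/home/jail/home/alice:/bin/bash'
fixed=$(echo "$line" | awk -F: 'BEGIN{OFS=":"} {$6="/home/alice"; print}')
echo "$fixed"
```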

Now add the user to the jail group sudo addgroup username jail

Step 4: Update sshd_config

We’re going to edit the sshd_config file, removing the ForceCommand internal-sftp directive – since we don’t want to limit our users to SFTP (you could maintain a second group and configuration for that).

Match Group jail
    ChrootDirectory /home/jail
    X11Forwarding no
    AllowTcpForwarding no

We’ve chrooted to /home/jail – and both SFTP and SSH logins will default to the user’s home directory below the jail.

Restart the sshd daemon and you’re ready to go: sudo /etc/init.d/ssh restart or service ssh restart

Try logging in via SSH or SFTP, and your jailed user will be dropped into their home directory under /home/jail/home, with a limited set of userspace applications, and no access to the parent environment.

Step 5: Bonus marks – give the user MySQL access

I’d like users to be able to upload and configure a site, including being able to perform MySQL dumps and restores. Here’s how to give them a MySQL prompt.

#Binaries for MySQL Client
sudo mkdir -p /home/jail/usr/local/mysql/bin
cd /home/jail/usr/local/mysql/bin
sudo cp /usr/local/mysql/bin/mysql .
sudo l2chroot.sh /usr/local/mysql/bin/mysql
cd /home/jail/lib/x86_64-linux-gnu
sudo cp /lib/x86_64-linux-gnu/libgcc_s.so.1 .

Note the ‘undiscovered’ dependency on libgcc_s.so.1.

Wrapping Up!

This is a nice and simple solution that works thanks to OpenSSH’s built-in ChrootDirectory directive. It doesn’t require any modification to the passwd file, and could fairly easily be wrapped up into a consolidated shell script for creating, updating, and adding applications to the jail.