Maanas Royy

Technology Trends and Talk

Firing an atq job from Apache on CentOS 6.x

On CentOS, SELinux is enabled by default and will not allow Apache to fire an atq job (or any other job).

First, we need to give Apache read/write access to the working directory by setting its SELinux context:

chcon -R -t httpd_sys_rw_content_t /home/agms/residuals/

Then we need to allow the action in the SELinux policy.

Change to root's home directory and fire this command:

grep atq /var/log/audit/audit.log | audit2allow -M agms_atq_pol

This will create a policy file agms_atq_pol.te (along with the compiled module agms_atq_pol.pp):

[root@dev02 ~]# cat agms_atq_pol.te 

module agms_atq_pol 1.0;

require {

type httpd_t;

class netlink_audit_socket create;

}

#============= httpd_t ==============

#!!!! This avc can be allowed using the boolean ‘allow_httpd_mod_auth_pam’

allow httpd_t self:netlink_audit_socket create;

Now load the compiled module:

semodule -i agms_atq_pol.pp

Then reboot, as the policy will not load into the kernel on its own. Yes, the policy goes into the kernel; we wasted some time trying to make it work without a reboot.
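After the reboot, you can confirm the module is registered:

semodule -l | grep agms_atq_pol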


Migrating GitLab to GitLab-Omnibus

Coming from a Manual Install (MySQL)

Some assumptions

  • You are coming from a manual GitLab installation that uses MySQL.
  • You know how to manually upgrade this existing install.
  • You've already installed the RPM or DEB version of GitLab on your new server.
  • Your local workstation has Python 2.7 installed.
  • You have upgraded the manual GitLab installation to the same version as the RPM or DEB version of GitLab.

Some notes

  • If you have any extra customizations (branding and such), these will not transfer.
  • If you are using LDAP, these settings will not transfer; you will have to add them again to the new install. (Make sure you are looking at the correct version of this doc. At the time of writing, I was working with 6.8.)

On your Existing Installation

The first thing you will want to do is get your manual install completely up-to-date (or at least to one of the versions available from the omnibus archives), as you can't restore across versions of GitLab. I'm going to assume you have done this already.
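One quick way to check which version a manual install is on (assuming the default /home/git/gitlab path):

cat /home/git/gitlab/VERSION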

Next, we need to back up our existing installation.

## Stop Server
sudo service gitlab stop
## Backup GitLab
cd /home/git/gitlab
sudo -u git -H bundle exec rake gitlab:backup:create RAILS_ENV=production

Now, dump the existing database like so. In this example, we are calling the dumped content current_install.mysql which is being dumped from our production database gitlabhq_production.

## You will need your MySQL admin password for this

mysqldump --compatible=postgresql --default-character-set=utf8 -r current_install.mysql -u root gitlabhq_production -p  

Make sure the backup archive and database dump are in a place you can get to from your local workstation.

## Example: moving our GitLab backup and our MySQL database dump to your user's home folder

cp /home/git/gitlab/tmp/1234567890_gitlab_backup.tar /home/your_user/

cp /path/to/current_install.mysql /home/your_user/  

On your Local Workstation

Pull down the backup archive and the database dump file.

scp your_user@existing.install.com:1234567890_gitlab_backup.tar .

scp your_user@existing.install.com:current_install.mysql .  

Extract the archive. You’ll want to do this in an empty directory. After it is extracted, you may delete the archive. (Be sure to write down the name of this file though, you will need it later)

mkdir ~/tar_me_from_inside && cd ~/tar_me_from_inside  
tar -xvf ~/1234567890_gitlab_backup.tar  
rm ~/1234567890_gitlab_backup.tar  

Clone the gitlab branch of the MySQL-to-PostgreSQL converter:

cd ~/
git clone -b gitlab https://github.com/gitlabhq/mysql-postgresql-converter.git

Convert the database dump file from the original install into a psql file compatible with the new install. You will need Python 2.7 for this.

cd ~/mysql-postgresql-converter  
python db_converter.py ~/current_install.mysql ~/current_install.psql  
ed -s ~/current_install.psql < move_drop_indexes.ed

Remove db/database.sql from your extracted backup and move the new file into its place, using the same name as the one you just deleted.

cd ~/tar_me_from_inside  
rm db/database.sql  
mv ~/current_install.psql db/database.sql  

Now, tar up this directory, naming it the same as the original backup archive file.

cd ~/tar_me_from_inside && tar -cvf ~/1234567890_gitlab_backup.tar . && cd ~/  

Finally, place the new backup archive on your new install server.

scp ~/1234567890_gitlab_backup.tar your_user@new.server.com:.  

On your New RPM/DEB Install

Move the backup to /var/opt/gitlab/backups

mv ~/1234567890_gitlab_backup.tar /var/opt/gitlab/backups  

Stop unicorn and sidekiq, then run the restore.

## Run this with sudo or as root

gitlab-ctl stop unicorn  
gitlab-ctl stop sidekiq  
gitlab-rake gitlab:backup:restore BACKUP=1234567890  

If all went well, you should be able to start GitLab back up, log in, and find all your projects just as you left them!

gitlab-ctl start  
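To confirm that all services came back up, a quick status check:

## Run this with sudo or as root
gitlab-ctl status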

Dockerfile: ENTRYPOINT vs CMD

When looking at the instructions that are available for use in a Dockerfile there are a few that may initially appear to be redundant (or, at least, have significant overlap). We’ve already covered ADD and COPY and now we’re going to look at ENTRYPOINT and CMD.

Both ENTRYPOINT and CMD allow you to specify the startup command for an image, but there are subtle differences between them. There are many times where you’ll want to choose one or the other, but they can also be used together. We’ll explore all these scenarios in the sections below.

ENTRYPOINT or CMD

Ultimately, both ENTRYPOINT and CMD give you a way to identify which executable should be run when a container is started from your image. In fact, if you want your image to be runnable (without additional docker run command line arguments) you must specify an ENTRYPOINT or CMD.

Trying to run an image which doesn't have an ENTRYPOINT or CMD declared will result in an error:


$ docker run alpine
FATA[0000] Error response from daemon: No command specified

Many of the Linux distro base images that you find on the Docker Hub will use a shell like /bin/sh or /bin/bash as the CMD executable. This means that anyone who runs those images will get dropped into an interactive shell by default (assuming, of course, that they used the -i and -t flags with the docker run command).

This makes sense for a general-purpose base image, but you will probably want to pick a more specific CMD or ENTRYPOINT for your own images.
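For example, with a stock distro image whose default CMD is a shell (ubuntu:trusty ships /bin/bash), a single command drops you into an interactive session:

$ docker run -it ubuntu:trusty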

Overrides

The ENTRYPOINT or CMD that you specify in your Dockerfile identifies the default executable for your image. However, the user has the option to override either of these values at run time.

For example, let's say that we have the following Dockerfile:

FROM ubuntu:trusty
CMD ping localhost

If we build this image (with tag “demo”) and run it we would see the following output:


$ docker run -t demo
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.051 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.038 ms
^C
--- localhost ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.032/0.039/0.008 ms

You can see that the ping executable was run automatically when the container was started. However, we can override the default CMD by specifying an argument after the image name when starting the container:


$ docker run demo hostname
6c1573c0d4c0

In this case, hostname was run in place of ping.

The default ENTRYPOINT can be similarly overridden but it requires the use of the --entrypoint flag:


$ docker run --entrypoint hostname demo
075a2fa95ab7

Given how much easier it is to override the CMD, the recommendation is to use CMD in your Dockerfile when you want the user of your image to have the flexibility to run whichever executable they choose when starting the container. For example, maybe you have a general Ruby image that will start up an interactive irb session by default (CMD irb) but you also want to give the user the option to run an arbitrary Ruby script (docker run ruby ruby -e 'puts "Hello"').

In contrast, ENTRYPOINT should be used in scenarios where you want the container to behave exclusively as if it were the executable it’s wrapping. That is, when you don’t want or expect the user to override the executable you’ve specified.

There are many situations where it may be convenient to use Docker as portable packaging for a specific executable. Imagine you have a utility implemented as a Python script that you need to distribute, but you don't want to burden the end-user with installing the correct interpreter version and dependencies. You could package everything in a Docker image with an ENTRYPOINT referencing your script. Now the user can simply docker run your image and it will behave as if they are running your script directly.
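As a sketch of what that end-user experience might look like (the image name and arguments here are hypothetical):

$ docker run my-python-util --input report.csv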

Of course you can achieve this same thing with CMD, but the use of ENTRYPOINT sends a strong message that this container is only intended to run this one command.

The utility of ENTRYPOINT will become clearer when we show how you can combine ENTRYPOINT and CMD together, but we’ll get to that later.

Shell vs. Exec

Both the ENTRYPOINT and CMD instructions support two different forms: the shell form and the exec form. In the example above, we used the shell form, which looks like this:

CMD executable param1 param2

When using the shell form, the specified binary is executed with an invocation of the shell using /bin/sh -c. You can see this clearly if you run a container and then look at the docker ps output:


$ docker run -d demo
15bfcddb11b5cde0e230246f45ba6eeb1e6f56edb38a91626ab9c478408cb615

$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED
15bfcddb4312 demo:latest "/bin/sh -c 'ping localhost'" 2 seconds ago

Here we’ve run the “demo” image again and you can see that the command which was executed was /bin/sh -c 'ping localhost'.

This appears to work just fine, but there are some subtle issues that can occur when using the shell form of either the ENTRYPOINT or CMD instruction. If we peek inside our running container and look at the running processes we will see something like this:


$ docker exec 15bfcddb ps -f
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 20:14 ? 00:00:00 /bin/sh -c ping localhost
root 9 1 0 20:14 ? 00:00:00 ping localhost
root 49 0 0 20:15 ? 00:00:00 ps -f

Note how the process running as PID 1 is not our ping command, but the /bin/sh executable. This can be problematic if we need to send any sort of POSIX signals to the container, since /bin/sh won't forward signals to child processes (for a detailed write-up, see Gracefully Stopping Docker Containers).

Beyond the PID 1 issue, you may also run into problems with the shell form if you’re building a minimal image which doesn’t even include a shell binary. When Docker is constructing the command to be run it doesn’t check to see if the shell is available inside the container — if you don’t have /bin/sh in your image, the container will simply fail to start.

A better option is to use the exec form of the ENTRYPOINT/CMD instructions which looks like this:

CMD ["executable","param1","param2"]

Note that the content appearing after the CMD instruction in this case is formatted as a JSON array.

When the exec form of the CMD instruction is used the command will be executed without a shell.

Let’s change our Dockerfile from the example above to see this in action:


FROM ubuntu:trusty
CMD ["/bin/ping","localhost"]

Rebuild the image and look at the command that is generated for the running container:


$ docker build -t demo .
[truncated]

$ docker run -d demo
90cd472887807467d699b55efaf2ee5c4c79eb74ed7849fc4d2dbfea31dce441

$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED
90cd47288780 demo:latest "/bin/ping localhost" 4 seconds ago

Now /bin/ping is being run directly without the intervening shell process (and, as a result, will end up as PID 1 inside the container).

Whether you're using ENTRYPOINT or CMD (or both), the recommendation is to always use the exec form so that it's obvious which command is running as PID 1 inside your container.

ENTRYPOINT and CMD

Up to this point, we’ve discussed how to use ENTRYPOINT or CMD to specify your image’s default executable. However, there are some cases where it makes sense to use ENTRYPOINT and CMD together.

Combining ENTRYPOINT and CMD allows you to specify the default executable for your image while also providing default arguments to that executable which may be overridden by the user. Let’s look at an example:


FROM ubuntu:trusty
ENTRYPOINT ["/bin/ping","-c","3"]
CMD ["localhost"]

Let’s build and run this image without any additional docker run arguments:


$ docker build -t ping .
[truncated]

$ docker run ping
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.051 ms

--- localhost ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.025/0.038/0.051/0.010 ms

$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED
82df66a2a9f1 ping:latest "/bin/ping -c 3 localhost" 6 seconds ago

Note that the command which was executed is a combination of the ENTRYPOINT and CMD values that were specified in the Dockerfile. When both an ENTRYPOINT and CMD are specified, the CMD string(s) will be appended to the ENTRYPOINT in order to generate the container’s command string. Remember that the CMD value can be easily overridden by supplying one or more arguments to `docker run` after the name of the image. In this case we could direct our ping to a different host by doing something like this:


$ docker run ping docker.io
PING docker.io (162.242.195.84) 56(84) bytes of data.
64 bytes from 162.242.195.84: icmp_seq=1 ttl=61 time=76.7 ms
64 bytes from 162.242.195.84: icmp_seq=2 ttl=61 time=81.5 ms
64 bytes from 162.242.195.84: icmp_seq=3 ttl=61 time=77.8 ms

--- docker.io ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 76.722/78.695/81.533/2.057 ms

$ docker ps -l --no-trunc
CONTAINER ID IMAGE COMMAND CREATED
0d739d5ea4e5 ping:latest "/bin/ping -c 3 docker.io" 51 seconds ago

Running the image starts to feel like running any other executable — you specify the name of the command you want to run followed by the arguments you want to pass to that command.

Note how the -c 3 argument that was included as part of the ENTRYPOINT essentially becomes a “hard-coded” argument for the ping command (the -c flag is used to limit the ping count to the specified number). It’s included in each invocation of the image and can’t be overridden in the same way as the CMD parameter.

Always Exec

When using ENTRYPOINT and CMD together it's important that you always use the exec form of both instructions. Trying to use the shell form, or mixing-and-matching the shell and exec forms, will almost never give you the result you want.

The table below shows the command string that results from combining the various forms of the ENTRYPOINT and CMD instructions.


Dockerfile                            Command
---------------------------------------------------------------------------
ENTRYPOINT /bin/ping -c 3
CMD localhost                         /bin/sh -c '/bin/ping -c 3' /bin/sh -c localhost

ENTRYPOINT ["/bin/ping","-c","3"]
CMD localhost                         /bin/ping -c 3 /bin/sh -c localhost

ENTRYPOINT /bin/ping -c 3
CMD ["localhost"]                     /bin/sh -c '/bin/ping -c 3' localhost

ENTRYPOINT ["/bin/ping","-c","3"]
CMD ["localhost"]                     /bin/ping -c 3 localhost

The only one of these that results in a valid command string is when the ENTRYPOINT and CMD are both specified using the exec form.

Conclusion

If you want your image to actually do anything when it is run, you should definitely configure some sort of ENTRYPOINT or CMD in your Dockerfile. However, remember that they aren't mutually exclusive. In many cases you can improve the user experience of your image by using them in combination.

No matter how you use these instructions you should always default to using the exec form.

PHP can't connect to MySQL with error 13 (but the command line can)

Could not connect: Can’t connect to MySQL server on ‘MYSQL.SERVER’ (13)

This problem is encountered on Scientific Linux machines, or any machine where security patches are applied automatically: SELinux is the reason such connections are not allowed.

setsebool -P httpd_can_network_connect=1

This command allows mysql_connect() calls made from within HTTP (Apache) requests to reach a remote MySQL database server. The httpd_can_network_connect boolean is disabled by default to prevent a compromised httpd from being used to attack other machines; SELinux itself is configured in /etc/selinux/config.
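You can verify the current state of the boolean with:

getsebool httpd_can_network_connect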

Squash several Git commits into a single commit

When working on any project in Git, a large number of commits accumulate over time. The best course of action is to always work in a branch, committing at your own pace. At some point you will want to merge the changes to the master branch and squash the multiple commits into a single one with a detailed description.

The easiest way to turn multiple commits in a feature branch into a single commit is to merge the feature branch into master, reset master to origin's state, and commit everything again.

# Switch to the master branch and make sure you are up to date.
git checkout master
git fetch # this may be necessary (depending on your git config) to receive updates on origin/master
git pull

# Merge the feature branch into the master branch.
git merge feature_branch

# Reset the master branch to origin's state.
# This is important: after the merge, this moves the master branch head back,
# so Git now considers all the merged changes as unstaged changes.
git reset origin/master

# We can add these changes as one commit.
# Adding --all will also add untracked files.
git add --all
git commit

Note that this is not touching the feature branch at all. If you merge the feature branch into master again at a later stage, all of its commits will reappear in the log.

You may also do it the other way round (merging master into the branch and resetting to the master state), but this will destroy your commits in the feature branch, meaning you cannot push it to origin.
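To sanity-check the result, you can confirm that master now carries exactly one new commit relative to origin:

git log --oneline origin/master..master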

How to Compile and Run Java Code from a Command Line

Being spoiled by IDEs and automated build tools, I recently realized that I don't know how to run Java code from a command line anymore. After playing a guessing game for an hour trying to compile a simple piece of code that took 5 minutes to write, I thought maybe it's time to do a little research.

Task

Let's say we have a fairly standard Java project that consists of three top level folders:

/bin – empty folder that will contain compiled .class files

/lib – contains third party .jar files

/src – contains .java source files

Our task would be to compile and launch the project from its root folder. We will use Windows as the example OS (on Unix systems the only difference would be the path separator symbol – ":" instead of ";").

Compiling Java Code

The first step is compiling plain text .java sources into Java Virtual Machine byte code (.class files). This is done with the javac utility that comes with the JDK.

Assuming we are at the application root folder trying to compile Application.java file from com.example package that uses lib1.jar and lib2.jar libraries from lib folder to a destination bin folder, compilation command should have the following format:

javac -d bin -sourcepath src -cp lib/lib1.jar;lib/lib2.jar src/com/example/Application.java

As a result, a bin/com/example/Application.class file should be created. If Application.java uses other classes from the project, they should all be automatically compiled and put into the corresponding folders.

Running Java Code

To launch a .class file we just compiled, another JDK utility called java would be needed.

Assuming we are at the application root folder trying to launch the Application.class file from the com.example package that uses the lib1.jar and lib2.jar libraries from the lib folder, the launch command should have the following format:

java -cp bin;lib/lib1.jar;lib/lib2.jar com.example.Application

Note that we don’t provide a filename here, only an actual class name that java would attempt to find based on provided classpath.

Some Notes About Classpath

Let's say that during Application.java compilation the compiler stumbles upon some com.example.Util class. How does it find it in the file system? According to Java file naming rules, the Util class has to be located somewhere in a Util.java file under a /com/example/ folder, but where should the search for this path start? This is where the classpath comes into play: it sets the starting folder for searching for classes. The classpath can be set in 3 different ways:

  • If no -cp (or -classpath) parameter is passed, the CLASSPATH environment variable is used
  • If the CLASSPATH environment variable is not found, the current folder (".") is used by default
  • If -cp is explicitly set as a command line parameter, it overrides all other values (see the sketch below)
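A minimal sketch of this precedence (the paths are placeholders):

# With no -cp flag, the CLASSPATH variable (if set) is searched
export CLASSPATH=/tmp/classes
javac com/example/Application.java

# An explicit -cp overrides both CLASSPATH and the "." default
javac -cp lib/lib1.jar com/example/Application.java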

The fact that classpath when set overrides default value (current folder) can cause some unexpected results.

For example, if we don't use any third party libraries, only our own com.example.Util class, and try to compile Application.java from the src folder:

javac com/example/Application.java

this would work, but then if we decide to add a third party library to the classpath:

javac -cp lib/lib1.jar com/example/Application.java

it would cause an error:

package com.example.Util does not exist

This happens because when we set -cp lib/lib1.jar we override the default value for the classpath – the current folder. The compiler will now look for classes only inside that jar file. To fix this we need to explicitly add the current folder to the classpath:

javac -cp .;lib/lib1.jar com/example/Application.java

Installing ColdFusion 11 on CentOS/SL 6.x 64-bit

  1. Installing the required libstdc++.so.5 C++ Library
  2. Running the ColdFusion installer
  3. Starting ColdFusion for the first time
  4. Adding hostname pointed to localhost
  5. Finish the ColdFusion installation in your browser

1. Install the required libstdc++.so.5 C++ Library

There's one thing you need to do before running the installer. ColdFusion requires the libstdc++.so.5 C++ library for a few features such as custom tags, web services and some cfimage functionality. Download and install the library using the package manager built into CentOS.

yum install libstdc++.so.5

There are several ways you can go about getting the installation file for ColdFusion 11. The easiest is probably logging into your Adobe account and downloading either the 32-bit install file (coldfusion_11_WWE_linux.bin) or the 64-bit install file (coldfusion_11_WWE_linux64.bin). I highly recommend you go with a 64-bit installation, as this will allow you to allocate much more RAM to the ColdFusion server than in a 32-bit environment. Of course, this requires that you have a 64-bit version of CentOS installed as well.

2. Make the install file executable and run the installer

Before you can run the installation file you need to make it executable. Navigate to the directory where you placed the file and run the chmod command to make it executable.

cd /install

# For the 32-bit installation
chmod +x coldfusion_11_WWE_linux.bin

# For the 64-bit installation
chmod +x coldfusion_11_WWE_linux64.bin
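Then launch the installer from the same directory (the 64-bit file is shown here):

./coldfusion_11_WWE_linux64.bin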

The install process may take a few minutes to get going depending on your server specs. You’ll be presented with a multi-page license agreement that you have to accept in order to continue the installation. After that, you are presented with installation questions.

After the installation is finished you will see a success screen that tells you to start ColdFusion and run the Configuration Wizard. The wizard isn’t something you have to run specifically, as the first time you launch the ColdFusion Administrator the wizard will run for you. You are given a URL for the ColdFusion Administrator:

http://[machinename]:8500/CFIDE/administrator/index.cfm

3. Starting ColdFusion for the first time

Change directory to the ColdFusion install location:

cd /opt/coldfusion11/cfusion/bin

and issue

./coldfusion start

4. Adding a hostname entry pointing to 127.0.0.1

You might run into a 500 error.

Issue this as root:

echo "127.0.0.1 $(hostname)" >> /etc/hosts

Then restart ColdFusion. It worked in my case.
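A restart using the same control script should do it:

cd /opt/coldfusion11/cfusion/bin
./coldfusion restart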

5. Finish the ColdFusion installation in your browser

Open your favorite browser and copy/paste the administrator URL from above into the address bar. Change machinename to the IP address of your server. A local IP such as 10.x.x.x or 192.x.x.x will work if you are connected to a server in your office or to an external server via VPN. You might also be able to use the external IP of the server. Or, if the server is already hosting a domain name, you could change machinename to yourdomain.com.

After loading this URL you should see a ColdFusion-branded Configuration and Settings wizard screen with a password prompt. Enter the password for the ColdFusion Administrator you created during installation and press enter. ColdFusion will do a few things and then show a new screen with an okay button. Press the button to go straight to the main screen of the ColdFusion Administrator.

SOAP request using Lua

Lua is a short and simple language exposing the power of C in a clean syntax. In a recent project I had to make Lua talk to a SOAP server on a payment gateway, and I was able to do this successfully. The Lua code is based on LuaSOAP. You can download the full sample code from the link below. It required three libraries to be built on the *nix system before the actual SOAP request could be made. Happy tweaking with Lua.

http://onlinepaymentprocessing.com/downloads/lua/agms_lua_example_1_0_0.zip
README.md

## Dependencies

LuaSOAP version >= 3.0.0 (https://github.com/tomasguisasola/luasoap)
LuaExpat version >= 1.3.0 (http://matthewwild.co.uk/projects/luaexpat)
LuaSocket version >= 3.0-rc1 (https://github.com/diegonehab/luasocket)
LuaSec version >= 0.4.1 (https://github.com/brunoos/luasec)

The libraries are included in the lib folder. Please note that luasec-0.5.0 does not work in a CentOS environment.


## Package Dependencies:
Ubuntu: lua-dev, libexpat-dev, openssl-dev
CentOS: lua-devel, expat-devel, openssl-devel


## Compile Instructions

The libexpat library can be built using `make`. Upload the libexpat folder to your development server and issue the `make` command.

The luasocket library can be built and installed using `make`. Upload the luasocket library to your development machine and issue `make {PLATFORM}` and then `make install`. Please note the paths CDIR (path to the *.so files) and LDIR (path to the Lua files); these need to be updated in the Lua script.

The luasec library can be built using `make`. Upload the luasec-0.4 folder to your development server and issue the `make` command.



## Paths to the Lua and C Libraries

The correct paths to these libraries (Lua & C) need to be set in run.lua for them to work.

agms.lua

#!/usr/bin/lua
-- Agms Module to interact with the Agms Gateway

-- @author: Maanas Royy
-- @copyright: Avant-Garde Marketing Solutions, Inc.

agms = {}

 -- Request Object
 agms.Request = {}
 agms.Request.TransactionType = ""
 agms.Request.GatewayUserName = ""
 agms.Request.GatewayPassword = ""
 agms.Request.PaymentType = ""
 agms.Request.Amount = ""
 agms.Request.Tax = ""
 agms.Request.Shipping = ""
 agms.Request.OrderDescription = ""
 agms.Request.OrderID = ""
 agms.Request.PONumber = ""
 agms.Request.CCNumber = ""
 agms.Request.CCExpDate = ""
 agms.Request.CVV = ""
 agms.Request.CheckName = ""
 agms.Request.CheckABA = ""
 agms.Request.CheckAccount = ""
 agms.Request.AccountHolderType = ""
 agms.Request.AccountType = ""
 agms.Request.SecCode = ""
 agms.Request.FirstName = ""
 agms.Request.LastName = ""
 agms.Request.Company = ""
 agms.Request.Address1 = ""
 agms.Request.Address2 = ""
 agms.Request.City = ""
 agms.Request.State = ""
 agms.Request.Zip = ""
 agms.Request.Country = ""
 agms.Request.Phone = ""
 agms.Request.Fax = ""
 agms.Request.EMail = ""
 agms.Request.Website = ""
 agms.Request.ShippingFirstName = ""
 agms.Request.ShippingLastName = ""
 agms.Request.ShippingCompany = ""
 agms.Request.ShippingAddress1 = ""
 agms.Request.ShippingAddress2 = ""
 agms.Request.ShippingCity = ""
 agms.Request.ShippingState = ""
 agms.Request.ShippingZip = ""
 agms.Request.ShippingCountry = ""
 agms.Request.ShippingEmail = ""
 agms.Request.ShippingPhone = ""
 agms.Request.ShippingFax = ""
 agms.Request.ProcessorID = ""
 agms.Request.TransactionID = ""
 agms.Request.Tracking_Number = ""
 agms.Request.Shipping_Carrier = ""
 agms.Request.IPAddress = ""
 agms.Request.Track1 = ""
 agms.Request.Track2 = ""
 agms.Request.Track3 = ""
 agms.Request.Track_Type = ""
 agms.Request.Custom_Field_1 = ""
 agms.Request.Custom_Field_2 = ""
 agms.Request.Custom_Field_3 = ""
 agms.Request.Custom_Field_4 = ""
 agms.Request.Custom_Field_5 = ""
 agms.Request.Custom_Field_6 = ""
 agms.Request.Custom_Field_7 = ""
 agms.Request.Custom_Field_8 = ""
 agms.Request.Custom_Field_9 = ""
 agms.Request.Custom_Field_10 = ""
 agms.Request.SAFE_Action = ""
 agms.Request.SAFE_ID = ""
 agms.Request.ReceiptType = ""
 agms.Request.MICR = ""
 agms.Request.MICRSymbolSet = ""
 agms.Request.CheckFrontTIFF = ""
 agms.Request.CheckBackTIFF = ""
 agms.Request.CheckNumber = ""
 agms.Request.Terminal_ID = ""
 agms.Request.CCNumber2 = ""
 agms.Request.Clerk_ID = ""
 agms.Request.Billing_Code = ""
 agms.Request.InvoiceID = ""
 agms.Request.BatchID = ""
 agms.Request.DLNumber = ""
 agms.Request.DLState = ""
 agms.Request.IdentityVerification = ""
 agms.Request.CourtesyCardID = ""
 agms.Request.MagData = ""

 -- Response Object
 agms.Response = {}
 agms.Response.STATUS_CODE = ""
 agms.Response.STATUS_MSG = ""
 agms.Response.TRANS_ID = ""
 agms.Response.AUTH_CODE = ""
 agms.Response.AVS_CODE = ""
 agms.Response.AVS_MSG = ""
 agms.Response.CVV2_CODE = ""
 agms.Response.CVV2_MSG = ""
 agms.Response.ORDERID = ""
 agms.Response.SAFE_ID = ""
 agms.Response.FULLRESPONSE = ""
 agms.Response.POSTSTRING = ""
 agms.Response.BALANCE = ""
 agms.Response.GIFTRESPONSE = ""
 agms.Response.MERCHANT_ID = ""
 agms.Response.CUSTOMER_MESSAGE = ""


 -- AgmsAPI method
 agms.AgmsAPI = {}
 agms.AgmsAPI.url = "https://gateway.agms.com/roxapi/agms.asmx"

 function agms.AgmsAPI.ProcessTransaction (request)

 -- Fabricate the SOAP Request Body
 local request_body = {tag = "objparameters"}
 for key, val in pairs(request) do
 if val ~= "" then
 table.insert(request_body, { tag = key, val})
 end
 end
 
 -- Soap Call, Important part add the namespace
 local ns, meth, ent = agms.soap_client.call(
 {url = agms.AgmsAPI.url,
 namespace = "https://gateway.agms.com/roxapi/",
 soapaction = "https://gateway.agms.com/roxapi/ProcessTransaction",
 method = "ProcessTransaction",
 entries = {request_body}
 })
 
 return ns, meth, ent
 
 end


return agms






run.lua

#!/usr/bin/lua
-- Lua code to interact with the Agms Gateway

-- @author: Maanas Royy
-- @copyright: Avant-Garde Marketing Solutions, Inc.

-- Define package path to load libraries
package.path = './lib/luasoap-3.0/src/?.lua;' .. package.path -- luasoap
package.path = './lib/luaexpat-1.3.0/src/?.lua;' .. package.path -- luaexpat
package.path = '/usr/local/share/lua/5.1/?.lua;' .. package.path -- luasocket
package.path = './lib/luasec-0.4.1/src/?.lua;' .. package.path -- luasec

-- Define package path to load C libraries
package.cpath = './lib/luaexpat-1.3.0/src/?.so;' .. package.cpath -- luaexpat
package.cpath = '/usr/local/lib/lua/5.1/?.so;' .. package.cpath -- luasocket
package.cpath = './lib/luasec-0.4.1/src/?.so;' .. package.cpath -- luasec

-- Import agms 
local agms = require "agms"
-- Import helper function to print lua table
local print_r = require "helper"

-- Import LuaSoap
agms.soap_client = require "client"
-- Import SSL. Version 0.4 requires us to require "https" before we can actually load ssl.https.
-- It exports the module as ssl.https
require "https"
local https = require "ssl.https"

-- Assign the https protocol in SOAP client
agms.soap_client.https = https

-- Prepare Request
agms.Request.GatewayUserName = "##########";
agms.Request.GatewayPassword = "##########";
agms.Request.TransactionType = "sale";
agms.Request.PaymentType = "creditcard";
agms.Request.Amount = "1.00";
agms.Request.CCNumber = "4111111111111111";
agms.Request.CCExpDate = "0120";
agms.Request.CVV = "123";

-- Debug request
-- print_r(agms.Request)

local ns, meth, ent = agms.AgmsAPI.ProcessTransaction(agms.Request);
print("namespace = ", ns, "element name = ", meth)
for i, elem in ipairs (ent[1]) do
 print (elem['tag'], '=', elem[1])
end

Install Oracle Java 8 (JDK 8u25) on CentOS/RHEL 6/5 and Fedora

Step 1: Download JAVA Archive

Download the latest Java SE Development Kit 8 release from its official download page, or use the following commands to download it from the shell.

For 64-bit

# cd /opt/
# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u25-b17/jdk-8u25-linux-x64.tar.gz"

# tar xzf jdk-8u25-linux-x64.tar.gz

For 32-bit

# cd /opt/
# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u25-b17/jdk-8u25-linux-i586.tar.gz"

# tar xzf jdk-8u25-linux-i586.tar.gz

Note: If the above wget command doesn't work for you, watch this screencast to download the JDK from the terminal.

Step 2: Install JAVA using Alternatives

After extracting the archive file, use the alternatives command to install it. The alternatives command is available in the chkconfig package.

# cd /opt/jdk1.8.0_25/
# alternatives --install /usr/bin/java java /opt/jdk1.8.0_25/bin/java 2
# alternatives --config java


There are 3 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*  1           /opt/jdk1.8.0/bin/java
 + 2           /opt/jdk1.7.0_55/bin/java
   3           /opt/jdk1.8.0_25/bin/java

Enter to keep the current selection[+], or type selection number: 3

At this point Java 8 has been successfully installed on your system. We also recommend setting up paths for the javac and jar commands using alternatives:

# alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_25/bin/jar 2
# alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_25/bin/javac 2
# alternatives --set jar /opt/jdk1.8.0_25/bin/jar
# alternatives --set javac /opt/jdk1.8.0_25/bin/javac

Step 3: Check the Java Version

Check the installed version of Java using the following command:

# java -version 

java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)

Step 4: Setup Environment Variables

Most Java-based applications rely on environment variables. Set the Java environment variables using the following commands:

    • Setup JAVA_HOME Variable
# export JAVA_HOME=/opt/jdk1.8.0_25
    • Setup JRE_HOME Variable
# export JRE_HOME=/opt/jdk1.8.0_25/jre
    • Setup PATH Variable
# export PATH=$PATH:/opt/jdk1.8.0_25/bin:/opt/jdk1.8.0_25/jre/bin
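These exports last only for the current shell session. To make them persistent across logins, you could place the same lines in a profile script (the file name here is just a convention):

# cat > /etc/profile.d/java.sh <<'EOF'
export JAVA_HOME=/opt/jdk1.8.0_25
export JRE_HOME=/opt/jdk1.8.0_25/jre
export PATH=$PATH:/opt/jdk1.8.0_25/bin:/opt/jdk1.8.0_25/jre/bin
EOF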

Enabling e1000 Gigabit device emulation in Citrix XenServer

The following howto describes modification to critical system software. If you choose to follow this guide, you do so at your own risk.

The problem

The commercial version of the Citrix XenServer does not allow you to choose the type of ethernet adapter to emulate within your VM. The standard device that is emulated is a Realtek 8139 (RTL8139), which is a 100Mbit/sec Fast Ethernet card.

Citrix themselves do not view this as a major issue, as they expect you to install paravirtualised drivers within your guest operating system. This is usually a very good idea and just fine if you’re using Windows, or a major supported OS such as Red Hat, CentOS or Ubuntu. Under these Linux operating systems, your entire kernel must be replaced by a Citrix supplied kernel. The paravirtualised drivers will outperform any emulated device.

However, if you’re running a system with a customised non-standard kernel that doesn’t support Citrix Xen paravirtualisation, you’ll be stuck with a 100Mbit/sec bottleneck in your network. Sure, you can go and rebuild your kernel with the right paravirtualised drivers, but that’s not always an option.

Those familiar with the open source version of Xen will know that the underlying QEMU device emulation that Xen uses can emulate an Intel 1Gbit/sec adapter, called "e1000". Apart from the additional speed, this device also supports jumbo ethernet frames. This emulation mode is available under Citrix XenServer, but it is a hidden feature, due to the hard-coding of the Realtek driver option.

Enabling e1000 emulation

You’ll need to ssh into your Citrix server and become root. Then do the following:

First rename /usr/lib/xen/bin/qemu-dm to /usr/lib/xen/bin/qemu-dm.orig

# mv /usr/lib/xen/bin/qemu-dm /usr/lib/xen/bin/qemu-dm.orig

Then make a replacement /usr/lib/xen/bin/qemu-dm file like this:

#!/bin/bash
# Wrapper script: rewrite any rtl8139 device argument to e1000,
# then hand off to the original qemu-dm binary.
oldstring=$@
newstring=${oldstring//rtl8139/e1000}
exec /usr/lib/xen/bin/qemu-dm.orig $newstring

Then chmod it (to make it executable) and chattr it (to stop it being overwritten):

# chmod 755 /usr/lib/xen/bin/qemu-dm
# chattr +i /usr/lib/xen/bin/qemu-dm

If you now shut down and restart your Citrix virtual machines, they will have an emulated e1000 device.
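From the XenServer console, a clean power-cycle of a guest can be done with the xe CLI (the VM name is a placeholder):

# xe vm-shutdown vm=my-vm
# xe vm-start vm=my-vm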

Warning

The "chattr" line above makes the replacement file "immutable". This means that the file cannot be overwritten, which prevents the loss of this modification in the event of a system update.

However, this may cause updates provided by Citrix to fail at the point of installation. An alternative approach would be to leave the file unprotected and re-apply this modification after Citrix-supplied updates have been applied.

To remove the protection from the file, do the following:

# chattr -i /usr/lib/xen/bin/qemu-dm