Ricardo Aravena's Blog

Blogging for DevOps hackers.

Docker First Impressions

| Comments

For the last few days I’ve been taking a crack at Docker, the recent container deployment tool that I’ve been hearing a lot of buzz about. In essence, it’s a wrapper on top of Linux LXC containers, written in Go, the friendly new (and not yet so popular) language developed at Google.

Just a little bit of background for those of you not familiar with LXC containers: they are often described as chroot on steroids. Basically, you can run isolated virtual environments on a single Linux machine and make it look like they are different machines. These environments give you the advantage of being isolated while still sharing the same Linux kernel and executables, which improves speed and footprint size.

Docker is pretty easy to try from its website: you can just click on the Get Started link and a UI terminal shows up where you can type Docker commands. Its CLI feels a lot like one written in Python or Ruby, yet it still doesn’t have the nice subtleties such as using a ? to get help. To get help you simply type docker help or docker command help.
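For example (exact flags can vary a bit between Docker versions):

$ docker help        # list all available subcommands
$ docker run --help  # usage for a single subcommand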

Setup varies depending on your platform. It can be a bit confusing if you work with multiple Linux distributions, but in my experience most people stick to a favorite one. Docker is also available for MacOS and Windows, but in reality it doesn’t run natively on either one, since LXC is only available for Linux. The way it runs on MacOS or Windows is inside a lightweight Linux VM.

As of this writing there’s a message on each one of the platform installation pages that says: Docker is still under heavy development! We don’t recommend using it in production yet, but we’re getting closer with each release. Please see our blog post, “Getting to Docker 1.0”

For me, it wasn’t that difficult to set up; I just followed the steps on the wiki. The steps aren’t hard, but they do require basic knowledge of the Linux command line. Docker requires Linux kernel 3.8, and in my case I tried Docker on Ubuntu 13.10, so I didn’t have to install an extra kernel package.

However, if you are running, say, Ubuntu 12.04 LTS (Precise), you are going to have to install the updated kernel packages and reboot the machine before you can use Docker:

$ # install the backported kernel from raring
$ sudo apt-get update
$ sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring
$ sudo reboot

For Ubuntu 13.10, the instructions say that if you want AUFS (AnotherUnionFS) support you need to install the linux-image-extra package:

$ sudo apt-get update
$ sudo apt-get install linux-image-extra-`uname -r`

It turns out that my Ubuntu 13.10 already had the linux-image-extra package, so I didn’t have to make any changes.
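If you want to check before installing anything, the running kernel version and the package status are easy to query:

$ uname -r
$ dpkg -l | grep linux-image-extra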

Next I had to run:

$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
$ sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update
$ sudo apt-get install lxc-docker

But if you prefer, there’s also a curl script that simplifies the previous steps.

$ curl -s https://get.docker.io/ubuntu/ | sudo sh
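Either way, a quick way to confirm the install worked is to ask the client and daemon for their version:

$ sudo docker version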

All in all, pretty easy. One last thing: if you have the Ubuntu firewall (ufw) enabled, you need to allow it to forward traffic in the /etc/default/ufw file:

# Change:
# DEFAULT_FORWARD_POLICY="DROP"
# to
DEFAULT_FORWARD_POLICY="ACCEPT"

and then run:

$ sudo ufw allow 4243/tcp
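For the forward policy change to take effect, reload ufw and double-check its status:

$ sudo ufw reload
$ sudo ufw status verbose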

Now we are all dandy and ready to try Docker commands. The first one that you want to run is the one to create a container:

$ sudo docker run -i -t ubuntu /bin/bash

This command will automatically download the default Ubuntu images that you can use to run your Ubuntu container. It does take a while, but then again it’s downloading full container images, each about 60MB compressed. Keep in mind that the -i option means “interactive” and -t means allocate a pseudo-tty.

This is the output of the command:

Unable to find image 'ubuntu' locally
Pulling repository ubuntu
5ac751e8d623: Download complete
9cd978db300e: Download complete
9cc9ea5ea540: Download complete
9f676bd305a4: Download complete
eb601b8965b8: Download complete
511136ea3c5a: Download complete
f323cf34fd77: Download complete
7a4f87241845: Download complete
1c7f181e78b9: Download complete
6170bb7b0ad1: Download complete
321f7f4200f4: Download complete

After it’s complete, it will display a bash shell in the container, with the prompt showing the container hash id, for example root@3b667578ce4f:/#. You can run almost any Linux command that your Ubuntu distro supports, including something like: apt-get update; apt-get install apache2.

I ran apt-get upgrade and somehow it couldn’t finish, complaining that some of the packages were missing dependencies, so in essence I fried the container.

No problem, I figured, I’ll just get rid of it. First hit the ctrl-p ctrl-q keys to detach from the container. Then run these Docker commands to find out the container id (if you somehow don’t remember it), stop the container, and delete it.

list-containers
$ docker ps

then:

$ docker stop 3b667578ce4f
$ docker rm 3b667578ce4f

and we are back to square one, so we can start with a clean sheet. You’ll notice that all the containers are stored under /var/lib/docker/containers, in a directory that matches each container’s specific hash id. You’ll also notice that after you run the docker rm command, the directory for that container is deleted.
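You can see this for yourself by listing that directory before and after removing a container (the hash below is just the one from my session):

$ sudo ls /var/lib/docker/containers
3b667578ce4f...   # one directory per container id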

There are other commands that are useful. For example:

$ docker images

will display all the downloaded images that can be used as a container. In my case, it’s a list of Ubuntu distros:

root@host:~# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu saucy 9f676bd305a4 4 weeks ago 182.1 MB
ubuntu 13.10 9f676bd305a4 4 weeks ago 182.1 MB
ubuntu 13.04 eb601b8965b8 4 weeks ago 170.2 MB
ubuntu raring eb601b8965b8 4 weeks ago 170.2 MB
ubuntu 12.10 5ac751e8d623 4 weeks ago 161.4 MB
ubuntu quantal 5ac751e8d623 4 weeks ago 161.4 MB
ubuntu 10.04 9cc9ea5ea540 4 weeks ago 183 MB
ubuntu lucid 9cc9ea5ea540 4 weeks ago 183 MB
ubuntu 12.04 9cd978db300e 4 weeks ago 204.7 MB
ubuntu latest 9cd978db300e 4 weeks ago 204.7 MB
ubuntu precise 9cd978db300e 4 weeks ago 204.7 MB

So now let’s say you want to use the quantal image; you can run it by tag (or pass the image id 5ac751e8d623 directly):

$ docker run -t -i ubuntu:quantal /bin/bash

Although out of the scope of this post, there are many other features you can use, including creating images and publishing them to a public repository.
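As a taste, turning a modified container into an image and publishing it looks roughly like this (myuser/myimage is a made-up repository name, and pushing requires an account on the public index):

$ sudo docker commit 3b667578ce4f myuser/myimage
$ sudo docker push myuser/myimage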

Conclusion

Docker is a very useful tool for people wanting to deploy applications in an isolated way so that they don’t interfere with the major functions of a particular server. It allows for easy creation and deletion of containers in case something goes wrong, or simply when ops folks are trying to quickly deploy something already prebaked into a Docker image. You can even deploy Docker containers on AWS EC2 instances to compartmentalize your application even further.

However, if you are concerned about Docker being in its early development stage (as of this writing), and if you don’t care about costs to a certain extent, the Docker/LXC approach is not very different from, say, using prebaked Amazon EC2 AMIs. LXC containers are pretty lightweight, but whatever you are running will still be CPU, disk I/O and network constrained on the same physical or virtualized machine.

There are also several tools available to create AMIs, including the infamous Aminator developed at Netflix. And if you happen to be a fan of Ansible like me, you can use the Ansible Aminator playbook from AnSwers.

Ansible Playbook for PaperTrail on Ubuntu

| Comments

This post describes how to create a simple Ansible task to set up PaperTrail on Ubuntu.

It’s a follow-up to a previous blog describing an Ansible playbook that sets up an HAProxy system. This Ansible task can be included in the HAProxy playbook, as well as any other playbook, with something like this:

papertrail.yml
---
# PLAYBOOK: Install papertrail on Ubuntu
- name: papertrail
  hosts: all
  user: <user-with-sudo>
  sudo: True
  tasks:
    - include: tasks/papertrail.yml

Next, we define the task, which installs the dependencies rsyslog-gnutls and libssl-dev. We also copy a papertrail-specific rsyslog configuration.

papertrail.yml
---
# TASK: Papertrail log aggregation
- name: Install dependencies for Papertrail
  apt: pkg=$item state=latest
  with_items:
    - libssl-dev
    - rsyslog-gnutls
- name: Copy rsyslog.conf
  copy: >
    src=files/rsyslog.conf
    dest=/etc/rsyslog.conf
    owner=root group=root mode=0444
  notify: restart rsyslog

And here’s the content of rsyslog.conf:

Next you need to include the papertrail certificate file if you want to encrypt your connection from rsyslog to PaperTrail. The link to the certificate file is here. You also need to tell Ansible to restart rsyslog when it installs this file, using the notify keyword.

papertrail.yml
- name: Papertrail certificate
  copy: >
    src=files/syslog.papertrail.crt
    dest=/etc/syslog.papertrail.crt
    owner=root group=root mode=0444
  notify: restart rsyslog

Here you include the specific papertrail configuration for rsyslog.

papertrail.yml
- name: Papertrail rsyslog config file
  copy: >
    src=files/papertrail.conf
    dest=/etc/rsyslog.d/70-papertrail.conf
    owner=root group=root mode=0444
  notify: restart rsyslog

The papertrail.conf file can be seen here:

Optionally, you can install Papertrail’s Ruby remote_syslog gem in case you’d like to send arbitrary log files from the machine to PaperTrail.

papertrail.yml
- name: Install Papertrail remote file logger
  shell: >
    executable=/bin/bash source /etc/profile.d/rvm.sh;
    gem install remote_syslog --no-ri --no-rdoc

Finally just run it: ansible-playbook -T 120 -i inventory-file papertrail.yml

Simple CloudFormation With Multiple AWS Accounts

| Comments

In this post I’ll describe how to create a simple AWS CloudFormation template that lets us deploy a stack using multiple AWS accounts. In other words, a common JSON CloudFormation template that can be used to bring up a stack in multiple accounts. The way we are able to do this is by having exact copies of the EC2 AMIs in all the accounts and regions where we are deploying our stack.
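As a sketch, replicating an AMI to another region can be done with the AWS CLI (the image id, regions and name here are placeholders; sharing an AMI with another account is a separate step):

$ aws ec2 copy-image --source-region us-east-1 --source-image-id ami-xxxxxxxx \
    --region us-west-2 --name my-webservice-ami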

With the new features from AWS, including the ability to link multiple accounts, many customers are starting to use separate accounts for different departments or for different purposes: production, QA, development, sales. So the motivation behind this template is the need for a single JSON file that works across all accounts.

For more information on CloudFormation you can visit the AWS Cloudformation Page as well as the AWS Cloudformation documentation page.

First we ask for parameters in the CloudFormation template:

cf.json
{
"AWSTemplateFormatVersion":"2010-09-09",
"Description":"My WebService",
"Parameters":{
"AwsAccount":{
"Description":"Account: Production, or Dev",
"Type":"String",
"Default":"Production",
"MinLength":"1",
"MaxLength":"10",
"AllowedValues":[
"Production",
"Dev"
],
"ConstraintDescription":"Must be either 'Production', or 'Dev'"
},
"InstanceType":{
"Description":"EC2 instance type to launch",
"Type":"String",
"Default":"m1.large"
},
"MinGroupSize":{
"Description":"Minimum number of servers to launch - Must match a multiple of avzones available in region",
"Type":"Number",
"Default":"3"
},
"MaxGroupSize":{
"Description":"Maximum number of servers to launch - Must match a multiple of avzones available in region",
"Type":"Number",
"Default":"30"
}
},

Next up is the mappings definition, where we define specific parameters for each account. Note that the accountId value is a generic placeholder; you have to substitute your own specific accountIds.

cf.json
"Mappings":{
"AWSAccountInfo":{
"Production":{
"accountId": 123456789012,
"hostedZone":"production.mydomain.com.",
"keypair":"production",
"envName":"production",
"name":"Production"
},
"Dev":{
"accountId": 123456789012,
"hostedZone":"dev.mydomain.com.",
"keypair":"dev",
"envName":"dev",
"name":"Dev"
}
},
"Production":{
"us-east-1":{
"ami":"ami-xxxxxxxx"
},
"us-west-1":{
"ami":"ami-xxxxxxxx"
},
"us-west-2":{
"ami":"ami-xxxxxxxx"
}
},
"Dev":{
"us-east-1":{
"ami":"ami-xxxxxxxx"
},
"us-west-1":{
"ami":"ami-xxxxxxxx"
},
"us-west-2":{
"ami":"ami-xxxxxxxx"
}
}
},

Now you want to set up your resources, starting with the autoscaling group. Notice how in the notification configuration we use parameters that identify our account.

cf.json
"Resources":{
"ServerGroup":{
"Type":"AWS::AutoScaling::AutoScalingGroup",
"Properties":{
"AvailabilityZones":{
"Fn::GetAZs":""
},
"LaunchConfigurationName":{
"Ref":"LaunchConfig"
},
"MinSize":{
"Ref":"MinGroupSize"
},
"MaxSize":{
"Ref":"MaxGroupSize"
},
"LoadBalancerNames":[
{
"Ref":"ElasticLoadBalancer"
}
],
"Cooldown":"120",
"Tags":[
{
"Key":"Name",
"Value":"MyServerType",
"PropagateAtLaunch":"true"
},
{
"Key":"User",
"Value":"Customers",
"PropagateAtLaunch":"true"
}
],
"NotificationConfiguration":{
"TopicARN":{
"Fn::Join":[
":",
[
"arn:aws:sns",
{
"Ref":"AWS::Region"
},
{
"Fn::FindInMap":[
"AWSAccountInfo",
{
"Ref":"AwsAccount"
},
"accountId"
]
},
"notification"
]
]
},
"NotificationTypes":[
"autoscaling:EC2_INSTANCE_LAUNCH",
"autoscaling:EC2_INSTANCE_LAUNCH_ERROR",
"autoscaling:EC2_INSTANCE_TERMINATE",
"autoscaling:EC2_INSTANCE_TERMINATE_ERROR"
]
}
}
},

Next we define the launch configuration for the instances in our autoscaling group. Notice how we set the environment in “UserData” to the one corresponding to the AWS account we are using.

cf.json
"LaunchConfig": {
"Type":"AWS::AutoScaling::LaunchConfiguration",
"Properties": {
"KeyName": {
"Fn::FindInMap": [
"AWSAccountInfo",
{
"Ref":"AwsAccount"
},
"keypair"
]
},
"ImageId":{
"Fn::FindInMap":[
{
"Ref":"AwsAccount"
},
{
"Ref":"AWS::Region"
},
"ami"
]
},
"SecurityGroups":[
{
"Ref":"InstanceSecurityGroup"
}
],
"InstanceType":{
"Ref":"InstanceType"
},
"IamInstanceProfile":{
"Ref":"DmpInstanceProfile"
},
"UserData": {
"Fn::Base64": {
"Fn::Join": [
"\n",
[ "#!/bin/bash",
{ "Fn::Join": [ "", [ "ENV='", { "Fn::FindInMap":[ "AWSAccountInfo", { "Ref":"AwsAccount" }, "envName" ] }, "'" ] ] }
]
]
}
}
}
},

Next we define the server scale up and scale down policies for AWS Autoscale.

cf.json
"ServerScaleUpPolicy":{
"Type":"AWS::AutoScaling::ScalingPolicy",
"Properties":{
"AdjustmentType":"ChangeInCapacity",
"AutoScalingGroupName":{
"Ref":"ServerGroup"
},
"Cooldown":"60",
"ScalingAdjustment": {
"Fn::Join":[
"", [
{
"Ref":"MinGroupSize"
}
]
]
}
}
},
"ServerScaleDownPolicy":{
"Type":"AWS::AutoScaling::ScalingPolicy",
"Properties":{
"AdjustmentType":"ChangeInCapacity",
"AutoScalingGroupName":{
"Ref":"ServerGroup"
},
"Cooldown":"60",
"ScalingAdjustment": {
"Fn::Join":[
"", [
"-",
{
"Ref":"MinGroupSize"
}
]
]
}
}
},

Then we define some alarms.

cf.json
"CPUAlarmHigh":{
"Type":"AWS::CloudWatch::Alarm",
"Properties":{
"AlarmDescription":"Scale-up if CPU > 70% for 10 minutes",
"MetricName":"CPUUtilization",
"Namespace":"AWS/EC2",
"Statistic":"Average",
"Period":"300",
"EvaluationPeriods":"2",
"Threshold":"70",
"AlarmActions":[
{
"Ref":"ServerScaleUpPolicy"
}
],
"Dimensions":[
{
"Name":"AutoScalingGroupName",
"Value":{
"Ref":"ServerGroup"
}
}
],
"ComparisonOperator":"GreaterThanThreshold"
}
},
"CPUAlarmLow":{
"Type":"AWS::CloudWatch::Alarm",
"Properties":{
"AlarmDescription":"Scale-down if CPU < 20% for 20 minutes",
"MetricName":"CPUUtilization",
"Namespace":"AWS/EC2",
"Statistic":"Average",
"Period":"300",
"EvaluationPeriods":"4",
"Threshold":"20",
"AlarmActions":[
{
"Ref":"ServerScaleDownPolicy"
}
],
"Dimensions":[
{
"Name":"AutoScalingGroupName",
"Value":{
"Ref":"ServerGroup"
}
}
],
"ComparisonOperator":"LessThanThreshold"
}
},

Now we define a load balancer. Note that an ELB health check requires a Target; the HTTP:80/status value below is an assumed placeholder, so point it at whatever health endpoint your instances actually serve.

cf.json
"ElasticLoadBalancer":{
"Type":"AWS::ElasticLoadBalancing::LoadBalancer",
"Properties":{
"AvailabilityZones":{
"Fn::GetAZs":""
},
"Listeners":[
{
"LoadBalancerPort":"80",
"InstancePort":"80",
"Protocol":"HTTP"
}
],
"HealthCheck":{
"Target":"HTTP:80/status",
"HealthyThreshold":"3",
"UnhealthyThreshold":"3",
"Interval":"30",
"Timeout":"5"
}
}
},

Now security groups and security policies. Notice the security group ingress rule for the RDS backend DB. An RDS instance can also be added to this template.

cf.json
"InstanceSecurityGroup":{
"Type":"AWS::EC2::SecurityGroup",
"Properties":{
"GroupDescription":"Server access",
"SecurityGroupIngress":[
{
"IpProtocol":"tcp",
"FromPort":"22",
"ToPort":"22",
"CidrIp":"0.0.0.0/0"
},
{
"IpProtocol":"tcp",
"FromPort":"80",
"ToPort":"80",
"SourceSecurityGroupOwnerId":{
"Fn::GetAtt":[
"ElasticLoadBalancer",
"SourceSecurityGroup.OwnerAlias"
]
},
"SourceSecurityGroupName":{
"Fn::GetAtt":[
"ElasticLoadBalancer",
"SourceSecurityGroup.GroupName"
]
}
}
]
}
},
"RdsIngress":{
"Type":"AWS::RDS::DBSecurityGroupIngress",
"Properties":{
"DBSecurityGroupName":"web-dbbackend",
"EC2SecurityGroupName":{
"Ref":"InstanceSecurityGroup"
}
}
}

Finally some outputs.

cf.json
},
"Outputs":{
"URL":{
"Description":"The URL of the ELB",
"Value":{
"Fn::Join":[
"",
[
{
"Fn::GetAtt":[
"ElasticLoadBalancer",
"DNSName"
]
}
]
]
}
}
}
}

To verify the syntax of your JSON template, save the full file to something like cf.json and run: cat cf.json | python -mjson.tool
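If you have the AWS CLI configured, you can also let CloudFormation itself validate the template and then launch a stack per account (the stack name and parameter value below are just examples):

$ aws cloudformation validate-template --template-body file://cf.json
$ aws cloudformation create-stack --stack-name my-webservice \
    --template-body file://cf.json \
    --parameters ParameterKey=AwsAccount,ParameterValue=Dev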

Ansible Playbook for Scout on Ubuntu

| Comments

This is a sample Ansible task (http://www.ansibleworks.com) showing how to set up Scout (https://www.scoutapp.com) on Ubuntu. It needs to be included in an Ansible playbook.

It’s a follow-up to a previous blog describing an Ansible playbook that sets up an HAProxy system. This Ansible task can be included in the HAProxy playbook, as well as any other playbook, with something like this:

scout.yml
---
# PLAYBOOK: Install scout on Ubuntu
- name: scout
  hosts: all
  user: user-with-sudo
  sudo: True
  vars:
    scout_key: YourScoutAPIKeyFromTheirWebsite
  tasks:
    - include: tasks/scout.yml

We start by defining a “task” file:

tasks/scout.yml
---
# TASK: ScoutApp Monitoring (https://scoutapp.com)
# Separate task to install Ruby
- include: ruby.yml
- name: Install scout + dependencies
  shell: >
    executable=/bin/bash source /etc/profile.d/rvm.sh;
    gem install scout scout_api --no-rdoc --no-ri
- name: Create scout home directory
  file: >
    dest=/root/.scout state=directory
    owner=root group=root mode=0700

In the same file add the crontab entry and logrotate entry for Scout.

tasks/scout.yml
- name: Scout cron script crontab
  template: >
    dest=/etc/cron.d/scout
    src=../packages/templates/scout/scout-crontab.j2
    owner=root group=root mode=0444
- name: Scout cron script logrotate
  copy: >
    dest=/etc/logrotate.d/scout
    src=../packages/files/scout/scout-logrotate
    owner=root group=root mode=0444

This is what scout-crontab.j2 looks like:

templates/scout-crontab.j2
# crontab for Scout monitoring run by root
* * * * * root /bin/bash -l -c 'scout -n "{{ ansible_fqdn }}" {{ scout_key }}' >> /var/log/scout.log 2>&1

And this is what scout-logrotate looks like:

files/scout-logrotate
/var/log/scout.log
{
    rotate 7
    daily
    compress
    delaycompress
    missingok
    notifempty
}

Now, to install Ruby using RVM, in case you don’t want to use the system Ruby (most of the time you don’t):

tasks/ruby.yml
---
# TASK: Install Ruby on Ubuntu
- name: Install Ruby dependencies
  apt: pkg=$item state=latest install_recommends=no
  with_items:
    - autoconf
    - automake
    - bison
    - build-essential
    - curl
    - libc6-dev
    - libgdbm-dev
    - libffi-dev
    - libncurses5-dev
    - libreadline6
    - libreadline6-dev
    - libsqlite3-dev
    - libssl-dev
    - libtool
    - libyaml-dev
    - libxml2-dev
    - libxslt1-dev
    - openssl
    - pkg-config
    - sqlite3
    - subversion
    - zlib1g
    - zlib1g-dev
- name: Install RVM
  shell: curl -L get.rvm.io | bash -s stable
- name: Install Ruby 2.0.0
  shell: >
    executable=/bin/bash source /etc/profile.d/rvm.sh;
    rvm install 2.0.0
- name: Set default ruby version
  shell: >
    executable=/bin/bash source /etc/profile.d/rvm.sh;
    rvm --default use 2.0.0

and now run it.

ansible-playbook -T 120 scout.yml

Upgrade Linux Kernel on Chromebook

| Comments

So after installing ChrUbuntu on my Acer C7 Chromebook, I’m very pleased that, with the help of this blog, I was able to upgrade the Linux kernel to 3.8.11.

uname -a
raravena@chromebook:~/git/blog-src$ uname -a
Linux chromebook 3.8.11 #3 SMP Thu Oct 17 07:41:20 PDT 2013 x86_64 x86_64 x86_64 GNU/Linux

These are the modified steps:

kernel-upgrade
#!/bin/bash
set -x
#
# Grab verified boot utilities from ChromeOS.
#
mkdir -p /usr/share/vboot
mount -o ro /dev/sda3 /mnt
cp /mnt/usr/bin/vbutil* /usr/bin
mkdir -p /usr/bin/old_bins
cp /mnt/usr/bin/old_bins/vbutil /usr/bin/old_bins/.
cp /mnt/usr/bin/dump_kernel_config /usr/bin
rsync -avz /mnt/usr/share/vboot/ /usr/share/vboot/
umount /mnt
#
# On the Acer C7, ChromeOS is 32-bit, so the verified boot binaries need a
# few 32-bit shared libraries to run under ChrUbuntu, which is 64-bit.
#
apt-get install libc6:i386 libssl1.0.0:i386
#
# Fetch ChromeOS kernel sources from the Git repo.
#
apt-get install git-core
cd /usr/src
git clone https://chromium.googlesource.com/chromiumos/third_party/kernel-next.git  # repo URL assumed
cd kernel-next
git checkout origin/chromeos-3.8
#
# Configure the kernel
#
# First we patch base.config to set CONFIG_SECURITY_CHROMIUMOS
# to n
cp ./chromeos/config/base.config ./chromeos/config/base.config.orig
sed -e \
's/CONFIG_SECURITY_CHROMIUMOS=y/CONFIG_SECURITY_CHROMIUMOS=n/' \
./chromeos/config/base.config.orig > ./chromeos/config/base.config
./chromeos/scripts/prepareconfig chromeos-intel-pineview
#
# … and then we proceed as per Olaf's instructions
#
yes "" | make oldconfig
#
# Build the Ubuntu kernel packages
#
apt-get install kernel-package
make-kpkg kernel_image kernel_headers
#
# Backup current kernel and kernel modules
#
tstamp=$(date +%Y-%m-%d-%H%M)
dd if=/dev/sda6 of=/kernel-backup-$tstamp
cp -Rp /lib/modules/3.4.0 /lib/modules/3.4.0-backup-$tstamp
#
# Install kernel image and modules from the Ubuntu kernel packages we
# just created.
#
dpkg -i /usr/src/linux-*.deb
#
# Extract old kernel config
#
vbutil_kernel --verify /dev/sda6 --verbose | tail -1 > /config-$tstamp-orig.txt
#
# Add disablevmx=off to the command line, so that VMX is enabled (for VirtualBox & Co)
#
sed -e 's/$/ disablevmx=off/' \
/config-$tstamp-orig.txt > /config-$tstamp.txt
#
# Wrap the new kernel with the verified block and with the new config.
#
vbutil_kernel --pack /newkernel \
--keyblock /usr/share/vboot/devkeys/kernel.keyblock \
--version 1 \
--signprivate /usr/share/vboot/devkeys/kernel_data_key.vbprivk \
--config=/config-$tstamp.txt \
--vmlinuz /boot/vmlinuz-3.8.11 \
--arch x86_64
#
# Make sure the new kernel verifies OK.
#
vbutil_kernel --verify /newkernel
#
# Copy the new kernel to the KERN-C partition.
#
dd if=/newkernel of=/dev/sda6

I ran into an error while compiling the kernel, but gladly I was able to fix it:

diffs
diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
index 467c1d1..4ba651d 100644
--- a/net/mac80211/tx.c
+++ b/net/mac80211/tx.c
@@ -1749,7 +1749,7 @@ netdev_tx_t ieee80211_subif_start_xmit(struct sk_buff *skb,
 	bool multicast;
 	u32 info_flags = 0;
 	u16 info_id = 0;
-	struct ieee80211_chanctx_conf *chanctx_conf;
+	struct ieee80211_chanctx_conf *chanctx_conf = NULL;
 	struct ieee80211_sub_if_data *ap_sdata;
 	enum ieee80211_band band;

Setup a Simple HAProxy Config

| Comments

Here’s a simple haproxy configuration to get you started; you probably want to stick this under /etc/haproxy/haproxy.cfg.

Simple HAProxy Config
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 4096
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000
    stats enable
    stats auth admin:password
    stats uri /monitor
    stats refresh 5s
    option httpchk GET /status
    retries 5
    option redispatch
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
    balance roundrobin   # each server is used in turn, according to assigned weight

listen http-in
    bind :80
    monitor-uri /haproxy   # endpoint to monitor HAProxy status (returns 200)
    # option httpclose
    server server1 server1.mydomain.com:8080 weight 1 maxconn 2000 check inter 4000
    server server2 server2.mydomain.com:8080 weight 1 maxconn 2000 check inter 4000
    server server3 server3.mydomain.com:8080 weight 1 maxconn 2000 check inter 4000
    rspidel ^Set-cookie:\ IP=   # do not let this cookie reveal our internal IP address
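Before restarting the service, it’s worth making sure the file parses cleanly; haproxy ships a check mode for exactly that:

$ haproxy -c -f /etc/haproxy/haproxy.cfg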

You also want to set up logging using rsyslog; you can use syslog-ng or other loggers as well, but the configuration is different.

Rsyslog HAproxy config
# put this in /etc/rsyslog.d/49-haproxy.conf:
local0.*    -/var/log/haproxy/haproxy_0.log
local1.*    -/var/log/haproxy/haproxy_1.log
& ~
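After dropping that file in place (and creating the log directory), restart rsyslog so it picks up the new rules:

$ sudo mkdir -p /var/log/haproxy
$ sudo service rsyslog restart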

Now, set up logrotate (usually under /etc/logrotate.d/haproxy):

HAProxy logrotate config
/var/log/haproxy/haproxy*.log
{
    rotate 7
    weekly
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        reload rsyslog >/dev/null 2>&1 || true
    endscript
}
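You can dry-run the rotation without touching any logs to confirm the config is picked up (-d is logrotate’s debug/no-op mode):

$ sudo logrotate -d /etc/logrotate.d/haproxy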

You can also use your favorite configuration management tool, such as Puppet, Chef or Ansible, to parameterize and template these configurations. I use Ansible (I’ll explain in a different blog).

How to Create an Ansible Playbook to Configure HAProxy

| Comments

This is a continuation of Setup a Simple HAProxy Config.

It explains how to create an Ansible playbook to automate the haproxy configuration.

If you’d like to find out more about Ansible you can read up on it on their website: http://www.ansibleworks.com

haproxy.yml
---
# Set up and configure an HAProxy server (Ubuntu flavor)
- name: haproxy
  hosts: all
  user: userwithsudoaccess
  sudo: True
  tags: haproxy
  vars_files:
    - "vars/main.yml"
  tasks:
    # haproxy package for Ubuntu
    - include: tasks/haproxy-apt.yml
    # Specific haproxy tasks follow here
    - name: Copy haproxy logrotate file
      action: >
        copy src=files/haproxy.logrotate dest=/etc/logrotate.d/haproxy
        mode=0644 owner=root group=root
    - name: Create haproxy rsyslog configuration
      action: >
        copy src=files/haproxy-rsyslog.conf
        dest=/etc/rsyslog.d/49-haproxy.conf
        mode=0644 owner=root group=root
      notify: restart rsyslog
    - name: Configure system rsyslog
      action: >
        copy src=files/rsyslog.conf
        dest=/etc/rsyslog.conf
        mode=0644 owner=root group=root
      notify: restart rsyslog
    - name: Create haproxy configuration file
      action: >
        template src=templates/haproxy.cfg.j2 dest=/etc/haproxy/haproxy.cfg
        mode=0644 owner=root group=root
      notify: restart haproxy

The following file contains the variables needed for the haproxy playbook; it should be located under vars (vars/main.yml):

vars/main.yml
---
haproxy_port: 8080
haproxy_servers:
  - server1.mydomain.com
  - server2.mydomain.com
  - server3.mydomain.com

The following is the tasks/haproxy-apt.yml file that is used to install haproxy on Ubuntu. If you are using CentOS or RedHat, you can use the ‘yum’ module instead of ‘apt’.

tasks/haproxy-apt.yml
---
# TASK: Install and configure HAProxy - Ubuntu style
#
- name: Install HAProxy
  action: apt pkg=$item state=latest
  with_items:
    - haproxy
- name: Enable HAProxy service
  action: service name=haproxy enabled=yes
- name: Copy Ubuntu default file
  action: >
    copy dest=/etc/default/haproxy
    src=../packages/files/haproxy/default
    owner=root group=root mode=0444
  notify: restart haproxy
# Note: the notify clause is handled by an
# Ansible handler (explained below)

The content for rsyslog.conf, haproxy.logrotate and 49-haproxy.conf can be found in the previous blog.

This time, however, we are templating haproxy.cfg with Jinja2, and the content is:

haproxy.cfg.j2
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000
    stats enable
    stats auth admin:password
    stats uri /monitor
    stats refresh 5s
    option httpchk GET /status
    retries 5
    option redispatch
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
    balance roundrobin   # each server is used in turn, according to assigned weight

listen http-in
    bind :80
    monitor-uri /haproxy   # endpoint to monitor HAProxy status (returns 200)
    # option httpclose
{% for haproxy_server in haproxy_servers %}
    server {{ haproxy_server }} {{ haproxy_server }}:{{ haproxy_port }} weight 1 maxconn 1000 check inter 4000
{% endfor %}
    rspidel ^Set-cookie:\ IP=   # do not let this cookie reveal our internal IP address
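With the vars/main.yml values above, the loop should render to something like:

server server1.mydomain.com server1.mydomain.com:8080 weight 1 maxconn 1000 check inter 4000
server server2.mydomain.com server2.mydomain.com:8080 weight 1 maxconn 1000 check inter 4000
server server3.mydomain.com server3.mydomain.com:8080 weight 1 maxconn 1000 check inter 4000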

Include handlers at the end of the file:

haproxy.yml
  handlers:
    - include: handlers/main.yml

The content of handlers/main.yml looks like this:

handlers/main.yml
---
# Ansible Handlers
- name: restart haproxy
  action: service name=haproxy state=restarted
- name: restart rsyslog
  action: service name=rsyslog state=restarted

Optional

Include Scout (https://scoutapp.com) and Papertrail (https://papertrailapp.com). More on this later…

haproxy.yml
    # Scout
    - include: tasks/scout.yml
      when: env == 'prod'
    # Papertrail for logging
    - include: tasks/papertrail.yml
      when: env == 'prod'

Now run it:

ansible-playbook -T 120 -i <inventory-file> haproxy.yml
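Here <inventory-file> is a plain Ansible inventory; a minimal one with hypothetical hostnames, plus a run that also sets the env variable used by the optional includes, might look like:

# inventory-file
[all]
lb1.mydomain.com
lb2.mydomain.com

$ ansible-playbook -T 120 -i inventory-file -e "env=prod" haproxy.yml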