
Deploying and Scaling Out a Distributed Build System for Release Engineering Management

I'd like to talk about something that is near and dear to my heart: build systems. Without them, building software means compiling locally, with different software versions on every machine. It can be a mess. Today we are going to build a small distributed build system with a couple of tools, and then look at how to properly scale it. We are not automating the scaling process in this article, so as not to overcomplicate things at this point. However, as we deploy the system and scale it by hand, we are laying the groundwork for automation. Why work hard later trying to automate a process when we can prep it for automation beforehand?
 
Some of what we will need:

  • Ubuntu Linux 14.04.1 Netboot (amd64)
  • Jenkins CI
  • Maven
  • OpenJDK 6 & 7
  • Ant
  • VMware vSphere (can be done anywhere. This is just what I have deployed.)
  • Sublime Text Editor (any will do. This is just my personal preference.)
 
The order of operations:
  1. Deploy Ubuntu.
  2. Deploy Jenkins.
  3. Write Kickstart file.
  4. Prepare Ubuntu template.
  5. Scale.
 
The first thing we will do is install the netboot version of Ubuntu Server 14.04.1. We are using netboot because it already has the appropriate networking modules enabled for initramfs. During the VM creation phase, here are my configurations:
  • 3GB vRAM
  • 16GB disk
  • 1 vCPU
  • Ubuntu Linux 64-bit
  • 1 NIC
 
Once that VM is deployed, we will attach the installation .iso, and start the process. Here is what I used:
  • Hostname is jenkins-master.build.reboot-three-times.com
  • User is "Build System"
  • Username is build
  • Password is buildme123!
  • LVM (use all free space, next to victory!)
  • Basic Ubuntu Server metapackage
  • OpenSSH Server metapackage
  • Proxy (as per my networking configurations)
  • GRUB
 
In my environment, I use Dynamic DNS with DHCP, so it's easier for me to deploy VMs willy-nilly. Static IPs are fine, but DHCP will work best for scaling from 2 VMs to 20,000 VMs with little difficulty. Now that Ubuntu has been deployed and installed, we will SSH into it with build/buildme123!. Once there, we update the repo and then install open-vm-tools for easy management.
sudo apt-get update
sudo apt-get install -y open-vm-tools



Since I am using Dynamic DNS and DHCP, we will run a shell script I keep around that sends a properly formatted nsupdate to our DNS server to create the A record, so the hostname maps properly. This is totally cheating, but it's a good workaround for static hostnames with DHCP. I usually label it updateme.

#!/bin/bash
# Pull the primary interface's IP address out of ifconfig (Ubuntu 14.04 output format).
IP_ADDR=`ifconfig | head -n 2 | grep -i inet | cut -d ":" -f 2 | cut -d " " -f 1`
HOST_NAME=`hostname`
# Split the address into octets so we can build the reverse (PTR) record.
FOURTH_OCTET=`echo $IP_ADDR | cut -d "." -f 4`
THIRD_OCTET=`echo $IP_ADDR | cut -d "." -f 3`
SECOND_OCTET=`echo $IP_ADDR | cut -d "." -f 2`
FIRST_OCTET=`echo $IP_ADDR | cut -d "." -f 1`
#echo $IP_ADDR
#echo $HOST_NAME
#echo $FOURTH_OCTET
#echo $THIRD_OCTET
#echo $SECOND_OCTET
#echo $FIRST_OCTET
# If an update file already exists, delete it.
if [ -f /tmp/updatemedata ]; then
    rm /tmp/updatemedata
fi
# Build the update file.
echo "server nameServerIPGoesHere" >> /tmp/updatemedata
echo "update delete $HOST_NAME A" >> /tmp/updatemedata
echo "update delete $HOST_NAME PTR" >> /tmp/updatemedata
echo "update add $HOST_NAME 86400 A $IP_ADDR" >> /tmp/updatemedata
echo "send" >> /tmp/updatemedata
echo "update add $FOURTH_OCTET.$THIRD_OCTET.$SECOND_OCTET.$FIRST_OCTET.in-addr.arpa 86400 PTR $HOST_NAME" >> /tmp/updatemedata
echo "send" >> /tmp/updatemedata
echo "quit" >> /tmp/updatemedata
# Send it!
nsupdate < /tmp/updatemedata


 

Next, we chmod a+x it, throw it in /usr/bin, and run it. Then we add it to a crontab so it runs every four hours, and reboot. The commands look something like the sketch below.
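A minimal sketch of those steps, assuming the script was saved as updateme in the current directory (note that piping into crontab replaces the build user's existing crontab, so adjust if you already have entries):

chmod a+x updateme
sudo mv updateme /usr/bin/updateme
# Run it once now to register the A and PTR records immediately.
/usr/bin/updateme
# Schedule it every four hours (this overwrites the current user's crontab).
echo "0 */4 * * * /usr/bin/updateme" | crontab -
sudo reboot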
 
Make sure to install the OpenJDK packages:

sudo apt-get install -y openjdk-7-jre openjdk-7-jdk openjdk-6-jre openjdk-6-jdk

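Since the symlinks we create later point at the JVM directories under /usr/lib/jvm, it's worth a quick check that the packages landed where we expect:

# Both JDKs should show up under /usr/lib/jvm.
ls -d /usr/lib/jvm/java-*-openjdk-amd64
java -version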

 

Now to install Jenkins CI on jenkins-master, we need to add the Jenkins repository key, add the repository to our sources list, update apt, and then install Jenkins. Run this command:

wget -q -O - https://jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add - && sudo sh -c 'echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list' && sudo apt-get update && sudo apt-get install -y jenkins && sudo apt-get upgrade -y


 

That leaves us with a fully upgraded system and a very basic Jenkins CI deployment, but no build tools. Any job would build locally and fail, and we have no convenient way to access the portal. Now we have to install and configure Apache as a reverse proxy so we don't have to specify a port. We will set this up over :80 and not :443. We could also use SSL; the instructions are here under mod_proxy with HTTPS:
https://wiki.jenkins-ci.org/display/JENKINS/Running+Jenkins+behind+Apache
 
Here is how I installed it.

sudo apt-get install -y apache2 && sudo a2enmod proxy && sudo a2enmod proxy_http && sudo a2dissite 000-default && sudo chmod 777 /etc/apache2/sites-available && sudo echo -e "<VirtualHost *:80> \n\tServerAdmin webmaster@localhost \n\tServerName jenkins-master.build.reboot-three-times.com \n\tServerAlias jenkins-master \n\tProxyRequests Off \n\t<Proxy *> \n\t\tOrder deny,allow \n\t\tAllow from all \n\t</Proxy> \n\tProxyPreserveHost on \n\tProxyPass / http://localhost:8080/ nocanon \n\tAllowEncodedSlashes NoDecode \n</VirtualHost>" >> /etc/apache2/sites-available/jenkins.conf && sudo a2ensite jenkins && sudo apache2ctl restart && sudo chmod 755 /etc/apache2/sites-available

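That one-liner is hard to read, so for reference, here is what it should leave behind in /etc/apache2/sites-available/jenkins.conf:

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ServerName jenkins-master.build.reboot-three-times.com
    ServerAlias jenkins-master
    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyPreserveHost on
    ProxyPass / http://localhost:8080/ nocanon
    AllowEncodedSlashes NoDecode
</VirtualHost>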

 

*Fun side note: While writing this article, I actually found a new bug with coreutils and how it interacts with the installed version of bash.
 
Now that we have Jenkins semi-ready to go, we should install all the versions of our tools using this directory structure:

/var/jenkins
  +- .ssh
  +- bin
  |   +- slave
  +- workspace (Jenkins creates this directory and stores all build data inside)
  +- tools
      +- Ant 1.9
      +- Maven 3.2
      +- Maven 3.1
      +- Maven 3.0
      +- OpenJDK 6 (symlink)
      +- OpenJDK 7 (symlink)



We can always install Sun Java or other Java distributions, as well as other compilers. It really just depends on what a user specifically needs, but keep the folder structure like this for sanity's sake, so everything stays in one place.
 
To simplify the process, use this command:

mkdir -p /var/jenkins/bin /var/jenkins/tools && cd /var/jenkins/tools/ && wget https://www.apache.org/dist/ant/binaries/apache-ant-1.9.4-bin.tar.gz && tar -zxvf apache-ant-1.9.4-bin.tar.gz && wget http://apache.tradebit.com/pub/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz && tar -zxvf apache-maven-3.2.5-bin.tar.gz && wget http://apache.tradebit.com/pub/maven/maven-3/3.1.1/binaries/apache-maven-3.1.1-bin.tar.gz && tar -zxvf apache-maven-3.1.1-bin.tar.gz && wget http://apache.tradebit.com/pub/maven/maven-3/3.0.5/binaries/apache-maven-3.0.5-bin.tar.gz && tar -zxvf apache-maven-3.0.5-bin.tar.gz && ln -s /usr/lib/jvm/java-6-openjdk-amd64 /var/jenkins/tools && ln -s /usr/lib/jvm/java-7-openjdk-amd64 /var/jenkins/tools


 

Now we have to configure Jenkins to use the compilers we've just added, as well as the OpenJDK. Under the "Manage Jenkins" menu, we will see where we can add the installations. Out of the box, Jenkins can download and place these on its own, but that's not how we're going to proceed. Part of the point of doing it manually is a) knowing where everything is, b) appreciating our hard work at the end, and c) being able to fix it when it breaks.
 
Here is how I have the configurations set up:

  • Name: Maven 3.2.5
    MAVEN_HOME: /var/jenkins/tools/apache-maven-3.2.5
  • Name: Maven 3.1.1
    MAVEN_HOME: /var/jenkins/tools/apache-maven-3.1.1
  • Name: Maven 3.0.5
    MAVEN_HOME: /var/jenkins/tools/apache-maven-3.0.5
  • Name: Ant 1.9
    ANT_HOME: /var/jenkins/tools/apache-ant-1.9.4
  • Name: OpenJDK 6
    JAVA_HOME: /var/jenkins/tools/java-6-openjdk-amd64
  • Name: OpenJDK 7
    JAVA_HOME: /var/jenkins/tools/java-7-openjdk-amd64
 
Now we have to install the Swarm plugin. This will allow us to deploy as many VMs or physical machines as we like, and they will auto-discover the Jenkins master once the client .jar is run on them. This is a great plugin, which can be found under the "Available" tab in the plugins section.
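If you want to sanity-check the plugin before automating anything, you can run the Swarm client by hand on any spare box. This assumes the client .jar is being served at the same URL our Kickstart file fetches it from later:

# Fetch the Swarm client from the master and start it; it auto-discovers the
# master via UDP broadcast on the local subnet and joins the slave pool.
wget http://jenkins-master.build.reboot-three-times.com/swarm-client-jar-with-dependencies.jar
sudo java -jar swarm-client-jar-with-dependencies.jar -name test-slave -description test-slave -fsroot /tmp/jenkins-test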
 
At this point, we are halfway done. We have a semi-vanilla Jenkins deployment with no projects, the compilers installed and configured, and the Swarm plugin enabled. We could start a new project in Jenkins right now and kick off the build process. It would build locally, but it would build. Technically we could leave it there, but we're going to prepare for scale before it's too late, because of an architectural quirk of Jenkins.
 
Typically, we start with a master-only installation and then much later, add slaves as the projects grow. When we enable the master/slave mode, Jenkins automatically configures all the existing projects to stick to the master node. This is a precaution to avoid disturbing existing projects, since most likely users can’t configure slaves correctly without trial and error. After we configure slaves successfully, we need to individually configure projects to let them roam freely. This is tedious, but it allows us to work on one project at a time. Projects that are newly created on master/slave-enabled Jenkins will be by default configured to roam freely. So by configuring Jenkins in a distributed fashion, albeit small, it prevents us from running into project roaming issues later.
 
Now we are going to start on the Kickstart file for our Ubuntu slave template. This will be the template VM from which we deploy the slaves that Jenkins distributes build processes to. The trick here is that each slave must be easy to deploy, easy to destroy, and completely unique. For all we know, we could need 20,000 slaves next week, and we have to be able to scale accordingly. So we'll start with a Kickstart file that we like. I'm going to paste the Kickstart file that I am using, and then I will explain what each segment does and why at the end.
#platform=AMD64 or Intel EM64T
#System language
lang en_US
#Language modules to install
langsupport en_US
#System keyboard
keyboard us
#System timezone
timezone --utc America/Denver
#Root password
rootpw imaslave123!
#Initial user
user build-slave --fullname "Build Slave" --password slave
#Reboot after installation
reboot
#Use text mode install
text
#Install OS instead of upgrade
install
#Use cdrom installation media
cdrom
#System bootloader configuration
bootloader --location=mbr
#Clear the Master Boot Record
zerombr yes
#Partition clearing information
clearpart --all --initlabel
# Advanced partition
preseed partman-auto-lvm/guided_size string 7680MB
part /boot --fstype=ext4 --size=512 --asprimary
part pv.1 --grow --size=1 --asprimary
volgroup vg0 --pesize=4096 pv.1
logvol / --fstype=ext4 --name=root --vgname=vg0 --size=1024
logvol /usr --fstype=ext4 --name=usr --vgname=vg0 --size=2048
logvol /var --fstype=ext4 --name=var --vgname=vg0 --size=1536
logvol swap --name=swap --vgname=vg0 --size=2048 --maxsize=2048
logvol /home --fstype=ext4 --name=home --vgname=vg0 --size=512
#System authorization information
auth  --useshadow  --enablemd5
#Network information
network --bootproto=dhcp --device=eth0
#Firewall configuration
firewall --disabled
#Do not configure the X Window System
skipx
%post --interpreter=/bin/bash
# Generate a random six-byte hex string so every clone gets a unique hostname.
shortname=`dd if=/dev/urandom bs=1 count=6 2>/dev/null | hexdump | awk '{print $2 $3 $4 $5}' | sed '/^\s*$/d'`;
sudo hostname "$shortname".build.reboot-three-times.com;
sudo apt-get install -y openjdk-7-jre openjdk-7-jdk;
IP_ADDR=`ifconfig | head -n 2 | grep -i inet | cut -d ":" -f 2 | cut -d " " -f 1`
HOST_NAME=`hostname`
FOURTH_OCTET=`echo $IP_ADDR | cut -d "." -f 4`;
THIRD_OCTET=`echo $IP_ADDR | cut -d "." -f 3`
SECOND_OCTET=`echo $IP_ADDR | cut -d "." -f 2`
FIRST_OCTET=`echo $IP_ADDR | cut -d "." -f 1`
echo -e "server nameServerIPGoesHere\nupdate delete $HOST_NAME A\nupdate delete $HOST_NAME PTR\nupdate add $HOST_NAME 86400 A $IP_ADDR\nsend\nupdate add $FOURTH_OCTET.$THIRD_OCTET.$SECOND_OCTET.$FIRST_OCTET.in-addr.arpa 86400 PTR $HOST_NAME\nsend\nquit" >> /tmp/updatemedata
nsupdate < /tmp/updatemedata
sudo mkdir /jenkins && cd /jenkins
wget http://jenkins-master.build.reboot-three-times.com/swarm-client-jar-with-dependencies.jar
sudo java -jar ./swarm-client-jar-with-dependencies.jar -description $HOST_NAME -name $HOST_NAME -retry -fsroot /jenkins


 

The first half of the Kickstart file is relatively straightforward; it's all the basics for the Ubuntu deployment. There are zero bells and whistles, as this VM just has to be able to run the slave workload. It's also designed to be blown away if there is an issue. There are enough packages to run the necessities, but if a slave is unresponsive, we should power it off, delete it from disk, and then reprovision. It will boot, immediately load the Swarm client, connect to Jenkins, and then be added to the pool of slaves. That's its sole purpose, so there is no point in troubleshooting a broken slave when we can just redeploy it.
 
The second part of the Kickstart is all the information needed for the LVM layout and the post-install configuration. We're booting with DHCP, setting a unique hostname, creating the nsupdate file, and updating DNS. After that, we're just configuring the slave itself: we create the directory, fetch the Swarm client, and launch it.
 
In order for this to work, we have two options: we can either build the Kickstart reference into isolinux.cfg, or we can just reference it at the boot prompt. I would recommend building it into the mini ISO so it finds the Kickstart file on boot: mount the ISO read-only, copy its contents, edit the isolinux.cfg file, and repackage it as a new ISO (a sketch of that process follows the config below).
 
Here is how my isolinux.cfg looks:

# D-I config version 2.0
include menu.cfg
default vesamenu.c32
prompt 0
timeout 0
#append ks=http://jenkins-master.build.reboot-three-times.com/ks.cfg

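Here is a rough sketch of that remaster process. The file names are illustrative (I'm assuming the netboot image is named mini.iso and that genisoimage is installed), so treat it as a starting point:

# Mount the original netboot ISO read-only and copy its contents somewhere writable.
mkdir /tmp/iso-ro /tmp/iso-rw
sudo mount -o loop,ro mini.iso /tmp/iso-ro
cp -rT /tmp/iso-ro /tmp/iso-rw
chmod -R u+w /tmp/iso-rw
# Edit /tmp/iso-rw/isolinux.cfg here to reference your ks.cfg, then
# repackage the tree as a bootable ISO.
genisoimage -r -J -l -b isolinux.bin -c boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table \
  -o mini-ks.iso /tmp/iso-rw
sudo umount /tmp/iso-ro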

 

To properly let the slave installation function, we should create the template VM with one network adapter, 8GB of disk space, and however much RAM we want it to have. Do not turn the template VM on! If it is powered on even once, it will run the Kickstart install and register itself, and we will have to recreate the VM, as that one is no longer usable as a template. The reason we are creating a template VM is so that we can clone it into the environment as needed, and each clone will add itself to the pool.
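If you later want to script the cloning itself, VMware's govc CLI is one way to do it. This is purely a hypothetical sketch (govc is not part of this setup, and the template name is made up):

#!/bin/bash
# Clone N slaves from the powered-off template; each clone boots, kickstarts
# itself with a random hostname, and joins the Jenkins swarm on its own.
# Assumes govc is configured via GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD.
TEMPLATE="jenkins-slave-template"   # hypothetical template VM name
COUNT=5
for i in $(seq 1 "$COUNT"); do
  govc vm.clone -vm "$TEMPLATE" -on=true "jenkins-slave-$i"
done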
 
At this point, we've deployed Ubuntu, deployed Jenkins with some basic software building tools, written a Kickstart file, edited the ISO for the Kickstart, and now we're done! At this point, we have a distributed architecture for Jenkins with easily deployable slaves. The rest is up to you!
 
Credit:
Pat Carmichael, Systems Engineer with Tintri; Pat wrote the original script for the DNS update metadata.
Kathryn Spencer, Teacher with ASD 20; Kathryn was my technical editor. Thanks for making me sound smart!
Todd Eddy, http://vrillusions.com/; I stole the LVM Kickstart lines from him.
Author: Mike Lloyd