In the previous post I set up my development tools and my virtual UAT environment, using Subversion to control configuration changes to UAT. Now I can introduce automation in my setup.
There are several platforms that can help with automation, and they all do pretty much the same thing (execute jobs) in slightly different ways, but my vote goes to Jenkins for a few reasons:
- It's free.
- It's a mature platform with a very large user base.
- It is plugin-based, so I can install and configure only the functionality that I actually need.
- I can see fairly easily which plugins are actively maintained and use only those.
- It's really easy to use.
I could install Jenkins on my laptop using apt-get, but since I already have a local Tomcat I will just download the war file and run it inside my Tomcat container. Later on I will use Tomcat to also run Artifactory, but I shall leave that part for another post.
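For reference, a minimal sketch of that war-file route (the download URL and the webapps path are assumptions based on the Jenkins mirror layout and a default Ubuntu tomcat6 package, not something shown in this post):

laptop$ wget http://mirrors.jenkins-ci.org/war/latest/jenkins.war
laptop$ sudo cp jenkins.war /var/lib/tomcat6/webapps/

Tomcat auto-deploys the war, so Jenkins then comes up at http://localhost:8080/jenkins.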
Jenkins is already a useful platform "out of the box", but for my setup I need to add more functionality. There are many plugins available that add new functionality to Jenkins and integrate it with different tools and platforms. However, for the purposes of configuring a job, I want to avoid using plugins as much as possible, relying on shell commands instead. This is not always feasible or practical, but that's the intention. The reason is that in a contingency situation where Jenkins might be unavailable, I can always retrieve the shell commands from the Jenkins configuration and execute them by hand if needed. If I had to rely on Jenkins plugins to do the work for me, I wouldn't be able to keep things going in a contingency scenario.

For example, if one of the steps in a Jenkins job was to publish a file to Artifactory, and I used the Artifactory plugin to perform that step, then in the event that Jenkins became unavailable I would have to devise a different way to publish the file (for example using Maven, or logging in to Artifactory and using the deployment page). That, however, contravenes one of the principles of continuous delivery: executing a task the same way every time. On the other hand, if I use a sequence of shell commands to achieve the same purpose, I can always retrieve that sequence from source control and execute it by hand if necessary, thereby still operating within the guidelines of continuous delivery. In many cases I could even temporarily use a cron job until I manage to get Jenkins back online, and nobody would even notice the difference.
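To illustrate that cron fallback, assuming the job's shell steps were saved into a script (update_uat.sh is a hypothetical name, not something from this series), a temporary crontab entry on the virtual server could run them on the same five-minute schedule the Jenkins job would use:

UAT$ crontab -e
# hypothetical stand-in while Jenkins is offline: run the job's shell steps every 5 minutes
*/5 * * * * /home/contdel/update_uat.sh >> /tmp/update_uat.log 2>&1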
Having said that, there are plugins related to Jenkins administration activities that I am very fond of. One of them is the SCM Sync Plugin. This plugin allows me to easily store and maintain all the Jenkins configurations (global and job-specific) in source control. Once configured, every change I make to a job's configuration or to the global configuration will be automatically checked in to source control. I create a new Subversion repository, configure the SCM Sync Plugin to hook up to it, and check that the initial import worked before I move on.
laptop$ mkdir -p ~/svnrepo/jenkins
laptop$ svnadmin create ~/svnrepo/jenkins
laptop$ svnserve -d -r ~/svnrepo --foreground --log-file ~/svnserve.log --listen-host 192.168.56.1
(in a different shell)
laptop$ svn list svn://192.168.56.1/jenkins/trunk
config.xml
hudson.maven.MavenModuleSet.xml
hudson.scm.CVSSCM.xml
hudson.scm.SubversionSCM.xml
hudson.tasks.Ant.xml
hudson.tasks.Mailer.xml
hudson.tasks.Maven.xml
hudson.tasks.Shell.xml
hudson.triggers.SCMTrigger.xml
scm-sync-configuration.xml
This helps tremendously in a disaster recovery scenario where I might have to rebuild the Jenkins server from scratch, in which case I would simply check out the configuration from source control before starting up Jenkins. This is a very efficient and effective way of reconstructing the pre-disaster state of Jenkins exactly the way it was. Even in normal circumstances, I will now be able to revert to a previous known good state in case I happen to make some configuration changes and mess it up really bad, or even just if I don't particularly like the end result for whatever reason.
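In that disaster recovery scenario, the rebuild boils down to a single checkout into the Jenkins home directory before the container starts. A sketch, assuming the default ~/.jenkins location (the actual path depends on where JENKINS_HOME points for the user running Tomcat):

laptop$ svn checkout svn://192.168.56.1/jenkins/trunk ~/.jenkins
laptop$ sudo service tomcat6 start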
Now I can create a new Jenkins job that will update the Tomcat configuration on my virtual server whenever an updated configuration exists in source control. As soon as I create the job and hit the save button, its details will automatically be checked into source control thanks to the SCM Sync Plugin. The new job will be called "UAT_Update_Env" and its logical steps are going to be:
1) check Subversion repositories for changes and trigger the job when an update is available
2) SSH into the virtual server
3) update Tomcat6 configuration directories
4) restart Tomcat6 service
5) test configuration changes and fail the job if the test is unsuccessful
In order to SSH into the virtual server from a non-interactive session, I first need to add the virtual server's fingerprint to my local known hosts. I can do this by running an SSH session from the terminal on my laptop. On the virtual server I need to set up a user dedicated to continuous delivery operations, and give this user some elevated privileges because it must be able to update the Tomcat configuration and restart Tomcat. I have chosen a user called "contdel".
laptop$ ssh contdel@192.168.56.2 pwd
The authenticity of host '192.168.56.2 (192.168.56.2)' can't be established.
ECDSA key fingerprint is 91:b2:ec:f9:15:c2:ac:cd:e2:01:d7:23:11:e0:17:db.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.56.2' (ECDSA) to the list of known hosts.
contdel@192.168.56.2's password:
/home/contdel
In order to avoid interactive password prompts during the execution of the automated job, I choose to use private/public key authentication with SSH, so I generate the key pair on the virtual server and scp it to my laptop so that it is visible to Jenkins. I could simply store it in Subversion, which is fine for the purposes of writing this series, but in a real production scenario I wouldn't do that.
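A minimal sketch of that key exchange, assuming OpenSSH defaults on both machines (the file names below are the standard ones plus a hypothetical local name for the copied key, not taken from this series):

UAT$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
UAT$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
UAT$ chmod 600 ~/.ssh/authorized_keys
laptop$ scp contdel@192.168.56.2:.ssh/id_rsa ~/.ssh/contdel_uat
laptop$ chmod 600 ~/.ssh/contdel_uat
laptop$ ssh -i ~/.ssh/contdel_uat contdel@192.168.56.2 pwd
/home/contdel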
There is one more thing I need to do on the virtual server: set up the correct access rights on the configuration files that I will update through automation, and on the restart script. I have already set up the user "contdel" on the virtual server, which is in the sudoers list. The permissions on the Tomcat configuration folder are set so that the owner (root) has read/write/execute rights, while the group (root) and everyone else have read/execute rights.
UAT$ ls -lsah /etc/tomcat6
total 100K
4.0K drwxr-xr-x 5 root root 4.0K May 8 22:18 .
4.0K drwxr-xr-x 94 root root 12K May 20 22:30 ..
4.0K drwxrwxr-x 4 root tomcat6 4.0K May 8 22:18 Catalina
4.0K -rw-r--r-- 1 root tomcat6 4.0K Jan 11 02:24 catalina.properties
4.0K -rw-r--r-- 1 root tomcat6 1.4K Aug 5 2007 context.xml
4.0K -rw-r--r-- 1 root tomcat6 2.4K Dec 13 2011 logging.properties
4.0K drwxr-xr-x 3 root tomcat6 4.0K May 8 22:18 policy.d
8.0K -rw-r--r-- 1 root tomcat6 6.6K Jan 11 02:24 server.xml
4.0K drwxr-xr-x 6 root root 4.0K May 8 22:18 .svn
4.0K -rw-r----- 1 root tomcat6 1.5K Nov 4 2010 tomcat-users.xml
52K -rw-r--r-- 1 root tomcat6 52K Nov 11 2011 web.xml
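For reference, the single command that yields the drwxr-xr-x permissions shown on the folder itself would be (755 maps to read/write/execute for the owner and read/execute for group and others):

UAT$ sudo chmod 755 /etc/tomcat6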
As it is, I still need sudo to do an svn update, which is fine from an interactive terminal, but it wouldn't work from a non-interactive Jenkins SSH session. If I tried, I would get an error similar to this:
sudo: no tty present and no askpass program specified
Sorry, try again.
This happens because the sudo command prompts for the user's password, which is obviously not going to work in a non-interactive session. What I can do is change the sudoers configuration to allow the user contdel to restart Tomcat via sudo without a password prompt. The instructions on how to do so are on the Ubuntu help site. In recent versions of Ubuntu, the sudoers file automatically includes any configuration files stored in /etc/sudoers.d/, so I can avoid editing /etc/sudoers directly (which is a good thing) and add my changes to /etc/sudoers.d/contdel instead; if I happen to mess up a directive I can easily revert by removing the new file and trying again.
UAT$ sudo visudo -f /etc/sudoers.d/contdel
...
# Allow user contdel to restart tomcat via sudo without a password prompt
contdel ALL=(ALL:ALL) NOPASSWD:/etc/init.d/tomcat6
...
laptop$ ssh contdel@192.168.56.2 sudo /etc/init.d/tomcat6 restart
contdel@192.168.56.2's password:
 * Stopping Tomcat servlet engine tomcat6
   ...done.
 * Starting Tomcat servlet engine tomcat6
   ...done.
Note that the password prompt comes from the local ssh command, not from the remote sudo command.
I can make it a little more secure by doing two things:
- Restrict the sudoers directive for contdel so that it only applies when the command comes from the local network.
- Use private/public key authentication for user contdel instead of username/password. This is particularly useful for Jenkins configurations that might or might not expose the password in the logs (or anywhere else). A sketch combining both measures follows this list.
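As a sketch of how the two measures could be combined, OpenSSH allows an authorized_keys entry to be restricted by origin with the from= option. The pattern below assumes the 192.168.56.* host-only network used in this series, and the key material is elided:

UAT$ cat ~/.ssh/authorized_keys
from="192.168.56.*" ssh-rsa AAAAB3... contdel@UAT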
However, I am still more interested in getting the update job going, so I'll go ahead and create the UAT_Update_Env job. In my previous post I set up the directory structure in source control that will hold all the relevant files in my project: not only the application code, but also all configuration files for the environment and the database script. The environment configuration files are in $SVN_ROOT/envsetup/[environmentID], which for UAT means $SVN_ROOT/envsetup/UAT. This job will therefore look for source control changes in svn://192.168.56.1/mywebapp/trunk/envsetup/UAT every five minutes, and if any changes are detected it will apply them via SSH to the UAT virtual server.
In terms of how the commands will actually be executed, I have two equally viable options:
- I tell Jenkins to open a local shell and execute remote commands on UAT via SSH.
- I tell Jenkins to start a slave on UAT that will execute shell commands locally.
Just to keep things simple at this point in time, I'll go for the second option and use the SSH Slaves plugin that comes with the latest Jenkins "out of the box". I therefore need to create a new Jenkins slave called "UAT" that will run on my virtual server. Under the hood, Jenkins will connect to my virtual server via SSH, upload the slave configuration and binaries, and launch the slave process on the virtual server. When I set up my job in Jenkins, I will configure it to run only on the newly configured slave, and by so doing all commands and scripts will be executed by the slave on the virtual server. The root Jenkins directory on the new node will receive all files from the Jenkins master. These files will change often, and after a job gets executed a number of times they can build up fairly quickly. I don't want to spend any effort cleaning them up regularly, so I'll just use /tmp as the root Jenkins directory. All files in there will disappear after each reboot.
The UAT_Update_Env job has the following characteristics:
- The job is restricted to run on the "UAT" node only.
- The job monitors the configuration files in Subversion for changes every 5 minutes.
- The job executes svn update /etc/tomcat6 followed by sudo /etc/init.d/tomcat6 restart, as shown in the build step below.
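The shell build step therefore contains exactly the two commands I would type in an interactive session:

svn update /etc/tomcat6
sudo /etc/init.d/tomcat6 restart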
Running the job verifies that it's all good.
Started by user anonymous
Building remotely on UAT in workspace /tmp/jenkins_slave/workspace/UAT_Update_Env
Cleaning local Directory .
Checking out svn://192.168.56.1/mywebapp/trunk/envsetup/UAT at revision '2013-05-23T16:59:19.322 +1000'
At revision 17
no change for svn://192.168.56.1/mywebapp/trunk/envsetup/UAT since the previous build
[UAT_Update_Env] $ /bin/sh -xe /tmp/hudson603902310922720127.sh
+ svn update /etc/tomcat6
At revision 17.
+ sudo /etc/init.d/tomcat6 restart
 * Stopping Tomcat servlet engine tomcat6
   ...done.
 * Starting Tomcat servlet engine tomcat6
   ...done.
Finished: SUCCESS
In order to test that Tomcat is actually working, I have to fire an HTTP request to the Tomcat server and check that it doesn't spit out an error. To do this I create a new Jenkins job called UAT_Verify_Env that uses curl to connect to the virtual server on port 8080. If a connection error occurs, curl will return an error and the job will fail. Since I am interested in verifying a successful connection and not in the actual output of the curl command, I can redirect the output to /dev/null. The UAT_Verify_Env job therefore executes one simple shell command:
curl http://192.168.56.2:8080 -o /dev/null
This job will be executed on the Jenkins master, not on the UAT slave, in order to verify external connectivity within the virtual network. Running this job on the UAT slave would only verify connectivity to the local network adapter.
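One caveat worth a sketch: by default curl exits with zero even when the server responds with an HTTP error status, so the command above only catches connection failures. If I also wanted HTTP-level errors (say, a 500 from a broken webapp) to fail the job, curl's -f/--fail flag would do it:

curl -sf http://192.168.56.2:8080 -o /dev/null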
Now I can test the full cycle: check out the Tomcat configuration to a local workspace on my laptop, then change the HTTP connector port in server.xml, commit the changes, wait for Jenkins to figure out that there are modifications in the source repository and fire the update job, then wait for the job to complete and use curl to check that Tomcat is now actually listening on the new port. If I change the port to 8081 I will see the job UAT_Verify_Env fail, which is a good thing because it proves that a wrong environment configuration is fed back quickly.
Started by user anonymous
Building on master in workspace /var/local/jenkins/jobs/UAT_Verify_Env/workspace
[workspace] $ /bin/sh -xe /tmp/tomcat6-tomcat6-tmp/hudson1161186187774233720.sh
+ curl http://192.168.56.2:8080 -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) couldn't connect to host
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Changing the port back to 8080 and checking in the changes to source control will get everything back on a happy path.
Started by user anonymous
Building on master in workspace /var/local/jenkins/jobs/UAT_Verify_Env/workspace
[workspace] $ /bin/sh -xe /tmp/tomcat6-tomcat6-tmp/hudson2858331979742011098.sh
+ curl http://192.168.56.2:8080 -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  1887  100  1887    0     0   260k      0 --:--:-- --:--:-- --:--:--  921k
Finished: SUCCESS
In a similar fashion as for Tomcat, I can extend this process to also update the MySQL configuration. I will skip the detailed description here because it follows exactly the same principles as above: make sure that user contdel can perform an svn update on the MySQL configuration folders and files, allow user contdel to restart MySQL via sudo without a password prompt, and amend the UAT_Update_Env and UAT_Verify_Env jobs in Jenkins with the relevant shell commands to account for changes in the MySQL configuration.
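A minimal sketch of the commands that get added, assuming the Ubuntu defaults of /etc/mysql for the configuration directory and /etc/init.d/mysql for the service script (neither path is shown in this post, so treat both as assumptions):

svn update /etc/mysql
sudo /etc/init.d/mysql restart

together with a matching NOPASSWD entry for /etc/init.d/mysql in /etc/sudoers.d/contdel.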