23 May 2013

Trying Continuous Delivery - Part 2

In the previous post I set up my development tools and my virtual UAT environment, using Subversion to control configuration changes to UAT. Now I can introduce automation in my setup.

There are several platforms that can help with automation, and they all do pretty much the same thing (execute jobs) in slightly different ways, but my vote goes to Jenkins for a few reasons:

  1. It's free.
  2. It's a mature platform with a very large user base.
  3. It is plugin-based, so I can install and configure only the functionality that I actually need.
  4. I can see fairly easily which plugins are actively maintained and use only those.
  5. It's really easy to use.

I could install Jenkins on my laptop using apt-get, but since I already have a local Tomcat I will just download the war file and run it inside my Tomcat container. Later on I will also use Tomcat to run Artifactory, but I shall leave that part for another post.

Jenkins is already a useful platform "out of the box", but for my setup I need to add more functionality. There are many plugins available to extend Jenkins and integrate it with different tools and platforms. However, for the purposes of configuring a job, I want to avoid plugins as much as possible and rely on shell commands instead. This is not always feasible or practical, but that's the intention. The reason is that in a contingency situation where Jenkins might be unavailable, I can always retrieve the shell commands from the Jenkins configuration and execute them by hand if needed. If I relied on Jenkins plugins to do the work for me, I wouldn't be able to keep things going in such a scenario. For example, if one of the steps in a Jenkins job was to publish a file to Artifactory, and I used the Artifactory plugin to perform that step, then in the event that Jenkins became unavailable I would have to devise a different way to publish the file (for example using Maven, or logging in to Artifactory and using the deployment page). That, however, contravenes one of the principles of continuous delivery: executing a task the same way every time. On the other hand, if I use a sequence of shell commands to achieve the same purpose, I can always retrieve that sequence of commands from source control and execute it by hand if necessary, thereby still operating within the guidelines of continuous delivery. In many cases I could even temporarily use a cron job until I manage to get Jenkins back online, and nobody would even notice the difference.
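
To illustrate the point: publishing a war file to Artifactory boils down to a single HTTP PUT, so the build step can be a plain curl command instead of a plugin. This is only a sketch, and the repository name, credentials and URL are made up for the example:

laptop$ # Deploy a war to a hypothetical Artifactory repository using a plain HTTP PUT
laptop$ curl -u deployer:secret -T mywebapp.war "http://192.168.56.1:8081/artifactory/libs-release-local/com/example/mywebapp/1.0/mywebapp-1.0.war"

The same command works from an interactive terminal, from a cron job, or from a Jenkins "Execute shell" step, which is exactly the point.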

Having said that, there are plugins that relate to Jenkins administration activities that I am very fond of. One of them is the SCM Sync Plugin. This plugin allows me to easily store and maintain all the Jenkins configurations (global and job-specific) in source control. Once configured, every change I make to a job's configuration or to the global configuration will be automatically checked in to source control. I create a new Subversion repository, configure the SCM Sync Plugin to hook up to it, and check that the initial import worked before I move on.

laptop$ mkdir -p ~/svnrepo/jenkins
laptop$ svnadmin create ~/svnrepo/jenkins
laptop$ svnserve -d -r ~/svnrepo --foreground --log-file ~/svnserve.log --listen-host 192.168.56.1


(a different shell)
laptop$ svn list svn://192.168.56.1/jenkins/trunk
config.xml
hudson.maven.MavenModuleSet.xml
hudson.scm.CVSSCM.xml
hudson.scm.SubversionSCM.xml
hudson.tasks.Ant.xml
hudson.tasks.Mailer.xml
hudson.tasks.Maven.xml
hudson.tasks.Shell.xml
hudson.triggers.SCMTrigger.xml
scm-sync-configuration.xml

This helps tremendously in a disaster recovery scenario where I might have to rebuild the Jenkins server from scratch: I would simply check out the configuration from source control before starting up Jenkins and get back exactly the pre-disaster state. Even in normal circumstances, I can now revert to a previous known good state if I make some configuration changes and mess things up badly, or even just if I don't particularly like the end result for whatever reason.
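
As a rough sketch of that recovery path (assuming Jenkins runs in my laptop's Tomcat and its home directory is /usr/share/tomcat6/.jenkins, which is just the default for the tomcat6 user on Ubuntu; adjust the path to wherever JENKINS_HOME actually lives):

laptop$ # Stop Tomcat, restore the Jenkins configuration from source control, and start it again
laptop$ sudo service tomcat6 stop
laptop$ sudo svn co svn://192.168.56.1/jenkins/trunk /usr/share/tomcat6/.jenkins --force
laptop$ sudo chown -R tomcat6:tomcat6 /usr/share/tomcat6/.jenkins
laptop$ sudo service tomcat6 start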

Now I can create a new Jenkins job that will update the Tomcat configuration on my virtual server whenever an updated configuration exists in source control. As soon as I create the job and hit the save button, its details will automatically be checked into source control thanks to the SCM Sync Plugin. The new job will be called "UAT_Update_Env" and its logical steps are going to be:

1) check Subversion repositories for changes and trigger the job when an update is available
2) SSH into the virtual server
3) update Tomcat6 configuration directories
4) restart Tomcat6 service
5) test configuration changes and fail the job if the test is unsuccessful

In order to SSH into the virtual server from a non-interactive session, I first need to add the virtual server's fingerprint to my local known hosts. I can do this by running an SSH session from the terminal on my laptop. On the virtual server I need to set up a user dedicated to continuous delivery operations, and give it some elevated privileges because it must be able to update the Tomcat configuration and restart Tomcat. I have chosen a user called "contdel".

laptop$ ssh contdel@192.168.56.2 pwd
The authenticity of host '192.168.56.2 (192.168.56.2)' can't be established.
ECDSA key fingerprint is 91:b2:ec:f9:15:c2:ac:cd:e2:01:d7:23:11:e0:17:db.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.56.2' (ECDSA) to the list of known hosts.
contdel@192.168.56.2's password: 
/home/contdel

In order to avoid interactive password prompts during the execution of the automated job, I choose to use private/public key authentication with SSH, so I generate the key pair on the virtual server and scp the private key to my laptop so that it is visible to Jenkins. I could simply store it in Subversion, which is OK for the purposes of writing this series, but in a real production scenario I wouldn't do that.
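
A minimal sketch of that key setup, under the assumption that the pair is generated as contdel on the virtual server and the private key is then copied over for Jenkins to use (the destination user and path on the laptop are placeholders):

UAT$ # Generate a key pair for contdel with no passphrase and authorise it for SSH logins
UAT$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
UAT$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
UAT$ chmod 600 ~/.ssh/authorized_keys

UAT$ # Copy the private key to the laptop so that Jenkins can point to it
UAT$ scp ~/.ssh/id_rsa myuser@192.168.56.1:/home/myuser/.ssh/contdel_id_rsa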

There is one more thing I need to do on the virtual server: set up the correct permissions on the configuration files that I will update through automation, and on the restart script. I have already set up the user "contdel" on the virtual server, which is in the sudoers list. The permissions on the Tomcat configuration folder are set so that the owner (root) has read/write/execute rights, the group (root) has read/execute rights, and everyone else has read/execute rights.

UAT$ ls -lsah /etc/tomcat6
total 100K
4.0K drwxr-xr-x   5 root root    4.0K May  8 22:18 .
4.0K drwxr-xr-x  94 root root     12K May 20 22:30 ..
4.0K drwxrwxr-x   4 root tomcat6 4.0K May  8 22:18 Catalina
4.0K -rw-r--r--   1 root tomcat6 4.0K Jan 11 02:24 catalina.properties
4.0K -rw-r--r--   1 root tomcat6 1.4K Aug  5  2007 context.xml
4.0K -rw-r--r--   1 root tomcat6 2.4K Dec 13  2011 logging.properties
4.0K drwxr-xr-x   3 root tomcat6 4.0K May  8 22:18 policy.d
8.0K -rw-r--r--   1 root tomcat6 6.6K Jan 11 02:24 server.xml
4.0K drwxr-xr-x   6 root root    4.0K May  8 22:18 .svn
4.0K -rw-r-----   1 root tomcat6 1.5K Nov  4  2010 tomcat-users.xml
52K -rw-r--r--   1 root tomcat6  52K Nov 11  2011 web.xml

As it is, I still need sudo to do an svn update, which is fine from an interactive terminal, but it wouldn't work from a non-interactive Jenkins ssh session. If I did that, I would get an error similar to this:

sudo: no tty present and no askpass program specified
Sorry, try again.

This happens because the sudo command prompts for the user's password, which is obviously not going to work in a non-interactive session. What I can do is change the sudoers configuration to allow the user contdel to restart Tomcat using sudo without a password prompt. The instructions on how to do so are on the Ubuntu help site. In recent versions of Ubuntu the sudoers file automatically includes any configuration files stored in /etc/sudoers.d/, so I can avoid editing /etc/sudoers directly (which is a good thing) and add my changes to /etc/sudoers.d/contdel instead; if I happen to mess up a directive I can easily revert by removing the new config file and trying again.

UAT$ sudo visudo -f /etc/sudoers.d/contdel
...
# Allow user contdel to restart tomcat via sudo without a password prompt
contdel ALL=(ALL:ALL) NOPASSWD:/etc/init.d/tomcat6
...

laptop$ ssh contdel@192.168.56.2 sudo /etc/init.d/tomcat6 restart
contdel@192.168.56.2's password: 
* Stopping Tomcat servlet engine tomcat6
...done.
* Starting Tomcat servlet engine tomcat6
...done.

Note that the password prompt comes from the local ssh command, not from the remote sudo command.

I can make it a little more secure by doing two things:

  1. Restrict the sudoers directive for contdel so that it only applies when the command comes from the local network.
  2. Use private/public key authentication for user contdel instead of username/password. This is particularly useful for Jenkins configurations that might or might not expose the password in the logs (or anywhere else).

However, I am still more interested in getting the update job going, so I'll go ahead and create the UAT_Update_Env job. In my previous post I set up the directory structure in source control that will hold all the relevant files in my project: not only the application code, but also all configuration files for the environment and the database script. The environment configuration files are in $SVN_ROOT/envsetup/[environmentID], which for UAT means $SVN_ROOT/envsetup/UAT. This job will therefore look for source control changes in svn://192.168.56.1/mywebapp/trunk/envsetup/UAT every five minutes, and if any changes are detected it will apply them via SSH to the UAT virtual server.

In terms of how the commands will actually be executed, I have two equally viable options:

  1. I tell Jenkins to open a local shell and execute remote commands on UAT via SSH.
  2. I tell Jenkins to start a slave on UAT that will execute shell commands locally.

Just to keep things simple at this point in time, I'll go for the second option and use the SSH Slaves plugin that comes with the latest Jenkins "out of the box". I therefore need to create a new Jenkins slave called "UAT" that will run on my virtual server. Under the hood, Jenkins will connect to my virtual server via SSH, upload the slave configuration and binaries, and launch the slave process on the virtual server. When I set up my job in Jenkins, I will configure it to run only on the newly configured slave, so that all commands and scripts will be executed by the slave on the virtual server. The root Jenkins directory on the new node will receive all files from the Jenkins master. These files change often, and after a job gets executed a number of times they can build up fairly quickly. I don't want to spend any effort cleaning them up regularly, so I'll just use /tmp as the root Jenkins directory: all files in there will disappear after each reboot.


The UAT_Update_Env job has the following characteristics.
  1. The job is restricted to run on the "UAT" node only.
  2. The job monitors the configuration files in Subversion for changes every 5 minutes.
  3. The job executes svn update /etc/tomcat6 followed by sudo /etc/init.d/tomcat6 restart (see the sketch below).
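
For reference, this is roughly what ends up in the job configuration; the polling schedule uses Jenkins' cron-style syntax, and the commands go into a plain "Execute shell" build step:

Source Code Management (Subversion repository URL):
    svn://192.168.56.1/mywebapp/trunk/envsetup/UAT

Build Triggers (Poll SCM schedule):
    */5 * * * *

Build (Execute shell):
    svn update /etc/tomcat6
    sudo /etc/init.d/tomcat6 restart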



Running the job verifies that it's all good.

Started by user anonymous
Building remotely on UAT in workspace /tmp/jenkins_slave/workspace/UAT_Update_Env
Cleaning local Directory .
Checking out svn://192.168.56.1/mywebapp/trunk/envsetup/UAT at revision '2013-05-23T16:59:19.322 +1000'
At revision 17
no change for svn://192.168.56.1/mywebapp/trunk/envsetup/UAT since the previous build
[UAT_Update_Env] $ /bin/sh -xe /tmp/hudson603902310922720127.sh
+ svn update /etc/tomcat6
At revision 17.
+ sudo /etc/init.d/tomcat6 restart
* Stopping Tomcat servlet engine tomcat6
...done.
* Starting Tomcat servlet engine tomcat6
...done.
Finished: SUCCESS

In order to test that Tomcat is actually working, I have to fire an HTTP request at the Tomcat server and check that it doesn't return an error. To do this I create a new Jenkins job called UAT_Verify_Env that uses curl to connect to the virtual server on port 8080. If a connection error occurs, curl will exit with a non-zero code and the job will fail. Since I am interested in verifying a successful connection and not in the actual output of the curl command, I can redirect the output to /dev/null. The UAT_Verify_Env job therefore executes one simple shell command:

curl http://192.168.56.2:8080 -o /dev/null

This job will be executed on the Jenkins master, not on the UAT slave, in order to verify external connectivity within the virtual network; running it on the UAT slave would only verify connectivity to the local network adapter.
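
One thing to keep in mind is that plain curl only fails on connection errors; it is perfectly happy with any HTTP response, including a 500 from a broken webapp. If I wanted a stricter check I could ask curl to treat HTTP error codes as failures and put a cap on the waiting time, something along these lines:

curl --silent --show-error --fail --max-time 10 http://192.168.56.2:8080 -o /dev/null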

Now I can test the full cycle: check out the Tomcat configuration to a local workspace on my laptop, change the HTTP connector port in server.xml, commit the changes, wait for Jenkins to notice the modifications in the source repository and fire the update job, then wait for the job to complete and use curl to check that Tomcat is actually listening on the new port. If I change the port to 8081 I will see the job UAT_Verify_Env fail, which is a good thing because it proves that I have a way to quickly get feedback on a wrong environment configuration.
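
The laptop side of that cycle looks something like this; the location of server.xml inside the UAT configuration tree is an assumption based on how I laid out the repository, and the one-line sed simply stands in for editing the file in Eclipse:

laptop$ svn co svn://192.168.56.1/mywebapp/trunk/envsetup/UAT ~/workspace/UAT
laptop$ sed -i 's/port="8080"/port="8081"/' ~/workspace/UAT/tomcat6/server.xml
laptop$ svn commit ~/workspace/UAT -m "Move the UAT HTTP connector to port 8081"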

Started by user anonymous
Building on master in workspace /var/local/jenkins/jobs/UAT_Verify_Env/workspace
[workspace] $ /bin/sh -xe /tmp/tomcat6-tomcat6-tmp/hudson1161186187774233720.sh
+ curl http://192.168.56.2:8080 -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) couldn't connect to host
Build step 'Execute shell' marked build as failure
Finished: FAILURE

Changing the port back to 8080 and checking in the changes to source control will get everything back on a happy path.

Started by user anonymous
Building on master in workspace /var/local/jenkins/jobs/UAT_Verify_Env/workspace
[workspace] $ /bin/sh -xe /tmp/tomcat6-tomcat6-tmp/hudson2858331979742011098.sh
+ curl http://192.168.56.2:8080 -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  1887  100  1887    0     0   260k      0 --:--:-- --:--:-- --:--:--  921k
Finished: SUCCESS

In a similar fashion as for Tomcat, I can extend this process to also update the MySQL configuration. I will skip the detailed description here, but it follows exactly the same principles as above: make sure that user contdel can perform an svn update on the MySQL configuration folders and files, allow user contdel to restart MySQL via sudo without a password prompt, and amend the UAT_Update_Env and UAT_Verify_Env jobs in Jenkins with the relevant shell commands to account for changes in the MySQL configuration.
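
A sketch of what those additions might look like, assuming the MySQL configuration lives in /etc/mysql and has been imported into Subversion in the same way as the Tomcat one:

UAT$ sudo visudo -f /etc/sudoers.d/contdel
...
# Allow user contdel to restart MySQL via sudo without a password prompt
contdel ALL=(ALL:ALL) NOPASSWD:/etc/init.d/mysql
...

(additional commands in the UAT_Update_Env build step)
svn update /etc/mysql
sudo /etc/init.d/mysql restart

The UAT_Verify_Env job can then gain a simple connection check against port 3306, for example with mysqladmin ping.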

13 May 2013

Trying Continuous Delivery - Part 1

Please note: I am making this stuff up as I go along, so I may have to update each post at a later stage to reflect some changes that I think might make things easier to manage.

In a previous post I introduced this series looking at continuous delivery from a practical standpoint, taking a simple web application and automating its lifecycle. This post outlines the initial setup of the continuous delivery system. These are the steps I will follow:

  1. Install VirtualBox, Subversion and Tomcat on my laptop
  2. Prepare Ubuntu Server virtual machine with Subversion and Tomcat
  3. Set up SVN repository and user access list on my laptop
  4. SVN checkin Tomcat settings from virtual server
  5. SVN checkout Tomcat settings onto my laptop
  6. SVN checkin modified Tomcat settings from my laptop
  7. SVN update Tomcat settings on the virtual server
  8. Restart Tomcat on the virtual server and verify changes

Installing VirtualBox and Subversion on my Ubuntu laptop is straightforward with apt-get, but the Ubuntu Software Centre makes it even easier.


I can then prepare the virtual server by downloading the relevant ISO image and hooking it up to the virtual DVD of a new virtual machine. After installing Ubuntu Server I have a new environment ready to be configured as my integration box.


By default, the virtual machine has a NAT network adapter. Later I will configure a host-only connection with a static IP address to my new virtual server, so I can work with it even when I'm not connected to my home network. A host-only connection is also useful to simulate environments that wouldn't normally be connected to the Internet, and that's ultimately the kind of adapter I want to be working with. Before I do that, I need to install/update the relevant tools on the virtual machine: SSH server, Subversion, Tomcat6, and MySQL, plus a few basic packages to facilitate administration and remote view.

VM$ sudo apt-get update
VM$ sudo apt-get install openssh-server subversion tomcat6 mysql-server xorg zerofree make gcc linux-headers-$(uname -r)

Now that I have my baseline software on the virtual machine, I can create a host-only adapter in VirtualBox (File / Preferences / Network), so I will have two adapters: one NAT adapter (the default) and one host-only adapter.
After all updates are installed, I won't need the NAT adapter any more (at least until the next major package upgrade), so I can either disable it in VirtualBox, or disconnect it (untick the "Cable Connected" check box in the VM settings) without actually removing it altogether. If I ever need to install a new package or perform a major upgrade, I can re-enable the NAT adapter and then disable it again right afterwards.
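
Unplugging the virtual cable can also be done from the host's command line, which is handy if I ever want to script it; adapter 1 is the NAT adapter in my setup, and "UbuntuSRV" is the name I gave the baseline VM:

laptop$ # Disconnect the cable of network adapter 1 (NAT) on the running VM
laptop$ VBoxManage controlvm "UbuntuSRV" setlinkstate1 off
laptop$ # Reconnect it when Internet access is needed again
laptop$ VBoxManage controlvm "UbuntuSRV" setlinkstate1 on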


I need my host-only adapter to have a fixed IP address (no DHCP). The reason is twofold: firstly, I want my continuous delivery scripts to be simple and deterministic without having to install a name server/DNS just for this system; and secondly, a production-like infrastructure would most likely use fixed/reserved IP addresses anyway. Since I don't want to install DNS or any other name resolution service, and I only need to set up a few virtual machines, I'll just maintain all hostnames and addresses in /etc/hosts.

127.0.0.1       localhost
192.168.56.1    VM-host      # The VM host
192.168.56.2    UbuntuSRV    # The baseline VM
192.168.56.3    Integration  # The integration VM
192.168.56.4    UAT          # The user acceptance VM
192.168.56.5    PROD         # The production VM

I decide to assign IP address 192.168.56.1/24 to the host-only adapter in VirtualBox, for no particular reason other than it being the default address that VirtualBox picked for me. It follows that the IP address on my virtual server will have to be in the 192.168.56.xxx range in order to communicate with the host (my laptop).

Change the interface configuration on the VM.

VM$ sudo vi /etc/network/interfaces

...
auto eth0
iface eth0 inet static
address 192.168.56.2
netmask 255.255.255.0
gateway 192.168.56.1
...

Restart the network interface on the VM.

VM$ sudo ifdown eth0
VM$ sudo ifup eth0
ssh stop/waiting
ssh start/running, process 1234

Test communication from virtual machine to laptop host.

VM$ ping -c 3 VM-host
PING VM-host (192.168.56.1) 56(84) bytes of data.
64 bytes from 192.168.56.1: icmp_req=1 ttl=64 time=0.828 ms
64 bytes from 192.168.56.1: icmp_req=2 ttl=64 time=1.03 ms
64 bytes from 192.168.56.1: icmp_req=3 ttl=64 time=1.893 ms

--- VM-host ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.828/0.919/1.037/0.090 ms


laptop$ ping -c 3 UbuntuSRV
PING UbuntuSRV (192.168.56.2) 56(84) bytes of data.
64 bytes from 192.168.56.2: icmp_req=1 ttl=64 time=1.54 ms
64 bytes from 192.168.56.2: icmp_req=2 ttl=64 time=0.760 ms
64 bytes from 192.168.56.2: icmp_req=3 ttl=64 time=0.737 ms

--- UbuntuSRV ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.737/1.012/1.540/0.374 ms

Now we can use the NAT adapter for Internet connections, and the host-only adapter for connections to 192.168.56.xxx addresses, so the routing table needs to reflect this separation.

VM$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.56.1    0.0.0.0         UG    100    0        0 eth1
10.0.2.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.56.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1

VM$ ping -c 3 www.google.com
PING www.google.com (74.125.237.178) 56(84) bytes of data.

--- www.google.com ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2014ms

It looks like the default route is through the host-only adapter, and no route leads to the Internet. I need to change that, so I'm going to use the VirtualBox NAT gateway 10.0.2.2 as the gateway for the default route. This way I can have a host-only connection and a NAT connection, both working, with Internet access. When I want a strict host-only connection, all I have to do is untick the "Cable Connected" box in the VirtualBox settings for my virtual server's NAT adapter.

VM$ sudo route del default
VM$ sudo route add default gw 10.0.2.2 dev eth0
VM$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.2.2        0.0.0.0         UG    100    0        0 eth1
10.0.2.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.56.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1

VM$ ping -c 3 www.google.com
PING www.google.com (74.125.237.178) 56(84) bytes of data.
64 bytes from 74.125.237.178: icmp_req=1 ttl=64 time=19.9 ms
64 bytes from 74.125.237.178: icmp_req=2 ttl=64 time=10.7 ms
64 bytes from 74.125.237.178: icmp_req=3 ttl=64 time=12.6 ms

VM$ ping -c 3 VM-host
PING VM-host (192.168.56.1) 56(84) bytes of data.
64 bytes from 192.168.56.1: icmp_req=1 ttl=64 time=0.834 ms
64 bytes from 192.168.56.1: icmp_req=2 ttl=64 time=1.11 ms
64 bytes from 192.168.56.1: icmp_req=3 ttl=64 time=1.876 ms

Looks good, so I'll just make this route persistent and move on.

VM$ sudo vi /etc/network/interfaces

...
...
up route add default gw 10.0.2.2 dev eth0
...
...

VM$ sudo service networking restart

So far so good. Let's get Subversion running and test connectivity from the virtual machine. We'll need a local repository first of all, then run the standalone Subversion server listening on the adapter used for the VirtualBox host-only connection.

laptop$ mkdir -p ~/svnrepo/mywebapp
laptop$ svnadmin create ~/svnrepo/mywebapp
laptop$ svnserve -d -r ~/svnrepo --foreground --log-file ~/svnserve.log --listen-host 192.168.56.1

VM$ svn info svn://VM-host/mywebapp
Path: mywebapp
URL: svn://VM-host/mywebapp
Repository Root: svn://VM-host/mywebapp
Repository UUID: cbcda313-59ce-438c-844d-f6a9db7a4d30
Revision: 0
Node Kind: directory
Last Changed Rev: 0
Last Changed Date: 2013-05-13 12:57:36 +1000 (Mon, 13 May 2013)

It looks like I have a good baseline virtual server that I can use as a template to create my other virtual environments. Let's start with the Integration environment (INT). I only have to clone the baseline VM in VirtualBox using the 'linked clone' option, also making sure I reinitialise the MAC addresses of all network cards.
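
The same cloning can be scripted from the host with VBoxManage; a linked clone has to branch off a snapshot, so this sketch assumes I first take one called "baseline" on the UbuntuSRV VM:

laptop$ # Take a snapshot of the baseline VM and create a linked clone from it
laptop$ VBoxManage snapshot "UbuntuSRV" take baseline
laptop$ VBoxManage clonevm "UbuntuSRV" --snapshot baseline --options link --name Integration --register

By default clonevm reinitialises the MAC addresses of the network cards, which is exactly what I want here.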





What I want to do now is to import the existing Tomcat configuration from the INT virtual machine into Subversion. This way I will be able to control any modifications to these settings from my development laptop. When I check in new versions of the settings, I will shut down the relevant services on the virtual machine, update the settings' working copies, and restart the services in order to pick up the new settings. In fact, I can even manage these changes from Eclipse, together with the application source code. It is easy to create an empty folder structure in Eclipse on my laptop host and then commit it to Subversion. I can then check out the relevant folders in my virtual machine, svn add the relevant configuration, and check it all in.

INT$ svn import /etc/tomcat6 svn://VM-host/mywebapp/trunk/envsetup/INT/tomcat6 -m "Initial import for tomcat6 config"
INT$ sudo svn co svn://VM-host/mywebapp/trunk/envsetup/INT/tomcat6 /etc/tomcat6 --force

I need to sudo the checkout because /etc/tomcat6 is owned by root, and checking out a working copy on top of it needs write access: Subversion automatically generates its synchronisation metadata files and folders (.svn) inside the working copy directory. Back on the laptop host, I now need to update the Eclipse project and I will be able to manage configuration changes from there.
To test that this arrangement actually works, I can use Eclipse to change the Tomcat HTTP port on the guest virtual machine. I'll change it back afterwards.
[server.xml (snippet)]
...
    <!-- changed from port 8080 to port 8081 -->
    <Connector port="8081" protocol="HTTP/1.1"
               connectionTimeout="20000"
               URIEncoding="UTF-8"
               redirectPort="8443" />
...


INT$ sudo /etc/init.d/tomcat6 stop
 * Stopping Tomcat servlet engine tomcat6 [OK]

INT$ sudo svn update /etc/tomcat6
U   /etc/tomcat6/server.xml
Updated to revision 6

INT$ sudo /etc/init.d/tomcat6 start
 * Starting Tomcat servlet engine tomcat6 [OK]

Now pointing my browser from the laptop host to the guest address http://Integration:8081 shows the Tomcat welcome page.


This means that now I can successfully drive any Tomcat configuration changes from my development environment.
Also, it means that I can safely revert to any previous revision of the Tomcat configuration if something goes wrong.
In fact, I'll just test that now.

[in my workspace folder]
laptop$ svn merge -r COMMITTED:PREV .
--- Reverse-merging r16 into '.':
U    envsetup/INT/tomcat6/server.xml

laptop$ svn commit -m "Reverting previous commit"

INT$ sudo svn update /etc/tomcat6
U    /etc/tomcat6/server.xml
Updated to revision 17.

INT$ sudo /etc/init.d/tomcat6 restart
 * Stopping Tomcat servlet engine tomcat6 [OK]
 * Starting Tomcat servlet engine tomcat6 [OK]

Now pointing my browser from the laptop host to the guest address http://Integration:8081 spits out a browser connection error, but http://Integration:8080 shows the Tomcat welcome page as expected.

To recap, this is what I have so far:
  • a physical host (my laptop) with Tomcat, Eclipse, and VirtualBox
  • a Subversion server running on the physical host, using a local directory as a repository
  • a virtual server (my baseline) with Tomcat, Subversion and MySQL, connected with the physical host using a host-only adapter
  • a virtual server (my integration server) cloned from my baseline, with its Tomcat configuration in the Subversion repository

and this is what I can do with the current setup:

  • change the Tomcat configuration of the virtual server from my development IDE on the physical host
  • easily pick up the latest Tomcat configuration changes on the virtual server
  • easily revert to any previous revision of the Tomcat configurations

The next post will introduce automation of configuration changes.