20 July 2013

City Quiz for iPhone

I liked this little quiz game. Here are the solutions.

  1. Dubai
  2. London
  3. Paris
  4. Amsterdam
  5. New York
  6. Los Angeles
  7. Moscow
  8. Rome
  9. Male
  10. Beijing
  11. Milan
  12. Florence
  13. Chicago
  14. Athens
  15. San Francisco
  16. Toronto
  17. Sydney
  18. Barcelona
  19. Delhi
  20. Puerto Iguazu
  21. Miami
  22. Munich
  23. Monaco
  24. San Diego
  25. Tokyo
  26. Cairo
  27. Istanbul
  28. Lhasa
  29. Brussels
  30. Las Vegas
  31. Hong Kong
  32. Prague
  33. Montreal
  34. Bangkok
  35. Geneva
  36. Melbourne
  37. Shanghai
  38. Rio de Janeiro
  39. Dublin
  40. Punta Cana
  41. Mecca
  42. Lisbon
  43. Vatican
  44. Houston
  45. Havana
  46. Venice
  47. Berlin
  48. Dallas
  49. Jerusalem
  50. Washington
  51. Madrid
  52. Taipei
  53. Atlanta
  54. Bucharest
  55. Edinburgh
  56. Valencia
  57. Lima
  58. Luxor
  59. Copenhagen
  60. Shenzhen
  61. Marrakesh
  62. Petra
  63. Stockholm
  64. Zurich
  65. Auckland
  66. Ho Chi Minh
  67. Seattle
  68. Singapore
  69. Chiang Mai
  70. Agra
  71. Palermo
  72. Macau
  73. Mexico City
  74. Ushuaia
  75. Buenos Aires
  76. Bogota
  77. Boston
  78. Siem Reap
  79. Bali
  80. Oslo
  81. Budapest
  82. Orlando
  83. Seville
  84. Kiev
  85. Cancun
  86. Vienna
  87. Manila
  88. Chennai
  89. Cannes
  90. Pattaya
  91. Kathmandu
  92. Antalya
  93. Jakarta
  94. Honolulu
  95. Warsaw
  96. Ha Noi
  97. Santorini
  98. Sao Paulo
  99. Seoul
  100. Riyadh
  101. Goa
  102. Phnom Penh
  103. St Petersburg
  104. New Orleans
  105. Kuala Lumpur
  106. Phuket
  107. Cape Town
  108. Vancouver
Enjoy!


23 May 2013

Trying Continuous Delivery - Part 2

In the previous post I set up my development tools and my virtual UAT environment, using Subversion to control configuration changes to UAT. Now I can introduce automation in my setup.

There are several platforms that can help with automation, and they all do pretty much the same thing (execute jobs) in slightly different ways, but my vote goes to Jenkins for a few reasons:

  1. It's free.
  2. It's a mature platform with a very large user base.
  3. It is plugin-based, so I can install and configure only the functionality that I actually need.
  4. I can see fairly easily which plugins are actively maintained and use only those.
  5. It's really easy to use.

I could install Jenkins on my laptop using apt-get, but since I already have a local Tomcat I will just download the war file and run it inside my Tomcat container. Later on I will also use Tomcat to run Artifactory, but I shall leave that part for another post.

Jenkins is already a useful platform "out of the box", but for my setup I need to add more functionality. There are many plugins available to extend Jenkins and integrate it with different tools and platforms. However, for the purposes of configuring a job, I want to avoid plugins as much as possible and rely on shell commands instead. This is not always feasible or practical, but that's the intention. The reason is that in a contingency situation where Jenkins might be unavailable, I can always retrieve the shell commands from the Jenkins configuration and execute them by hand if needed. If I relied on Jenkins plugins to do the work for me, I wouldn't be able to keep things going in such a scenario.

For example, if one of the steps in a Jenkins job was to publish a file to Artifactory, and I used the Artifactory plugin to perform that step, then in the event that Jenkins becomes unavailable I would have to devise a different way to publish the file (for example using Maven, or logging in to Artifactory and using the deployment page). This, however, contravenes one of the principles of continuous delivery: executing a task the same way every time. On the other hand, if I use a sequence of shell commands to achieve the same purpose, then I can always retrieve that sequence from source control and execute it by hand if necessary, still operating within the guidelines of continuous delivery. In many cases I could even use a temporary cron job until I manage to get Jenkins back online, and nobody would notice the difference.
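
To make the shell-command alternative concrete: publishing to Artifactory can be reduced to a single HTTP PUT with curl, which runs the same whether Jenkins executes it, cron does, or I type it by hand. This is only a sketch: the host, port, repository name and credentials below are made-up placeholders, not part of my actual setup.

laptop$ curl -u deployer:secret -T mywebapp.war "http://192.168.56.1:8080/artifactory/libs-release-local/mywebapp/1.0/mywebapp-1.0.war"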

Having said that, there are plugins related to Jenkins administration activities that I am very fond of. One of them is the SCM Sync Plugin, which allows me to easily store and maintain all the Jenkins configurations (global and job-specific) in source control. Once configured, every change I make to a job's configuration or to the global configuration will be automatically checked in to source control. I create a new Subversion repository, configure the SCM Sync Plugin to hook up to it, and check that the initial import worked before I move on.

laptop$ mkdir -p ~/svnrepo/jenkins
laptop$ svnadmin create ~/svnrepo/jenkins
laptop$ svnserve -d -r ~/svnrepo --foreground --log-file ~/svnserve.log --listen-host 192.168.56.1


(a different shell)
laptop$ svn list svn://192.168.56.1/jenkins/trunk
config.xml
hudson.maven.MavenModuleSet.xml
hudson.scm.CVSSCM.xml
hudson.scm.SubversionSCM.xml
hudson.tasks.Ant.xml
hudson.tasks.Mailer.xml
hudson.tasks.Maven.xml
hudson.tasks.Shell.xml
hudson.triggers.SCMTrigger.xml
scm-sync-configuration.xml

This helps tremendously in a disaster recovery scenario where I might have to rebuild the Jenkins server from scratch: I would simply check out the configuration from source control before starting up Jenkins, reconstructing its pre-disaster state exactly. Even in normal circumstances, I can now revert to a previous known good state if I make some configuration changes and mess things up badly, or if I just don't like the end result for whatever reason.
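
As a sketch, assuming Jenkins' home directory is ~/.jenkins (the default for a standalone installation; a Tomcat deployment may point JENKINS_HOME elsewhere), the recovery boils down to one checkout before starting Jenkins:

laptop$ svn co svn://192.168.56.1/jenkins/trunk ~/.jenkins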

Now I can create a new Jenkins job that will update the Tomcat configuration on my virtual server whenever an updated configuration exists in source control. As soon as I create the job and hit the save button, its details will automatically be checked into source control thanks to the SCM Sync Plugin. The new job will be called "UAT_Update_Env" and its logical steps are going to be:

1) check Subversion repositories for changes and trigger the job when an update is available
2) SSH into the virtual server
3) update Tomcat6 configuration directories
4) restart Tomcat6 service
5) test configuration changes and fail the job if the test is unsuccessful

In order to SSH into the virtual server from a non-interactive session, I first need to add the virtual server's fingerprint to my local known hosts. I can do this by running an SSH session from the terminal on my laptop. On the virtual server I need to set up a user dedicated to continuous delivery operations, and give this user some elevated privileges because it must be able to update the Tomcat configuration and restart Tomcat. I have chosen a user called "contdel".

laptop$ ssh contdel@192.168.56.2 pwd
The authenticity of host '192.168.56.2 (192.168.56.2)' can't be established.
ECDSA key fingerprint is 91:b2:ec:f9:15:c2:ac:cd:e2:01:d7:23:11:e0:17:db.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.56.2' (ECDSA) to the list of known hosts.
contdel@192.168.56.2's password: 
/home/contdel

In order to avoid interactive password prompts during the execution of the automated job, I choose to use private/public key authentication with SSH, so I generate the key pair on the virtual server and scp the private key to my laptop so that it is visible to Jenkins. I could simply store it in Subversion, which is ok for the purposes of writing this series, but in a real production scenario I wouldn't do that.
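
A minimal sketch of that key setup, assuming the default key location and a hypothetical laptop user "marco":

UAT$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa          # passwordless key pair for contdel
UAT$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # let the key log in as contdel
UAT$ scp ~/.ssh/id_rsa marco@192.168.56.1:~/contdel_id_rsa   # make the private key visible to Jenkins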

There is one more thing I need to do on the virtual server: set up the correct permissions on the configuration files that I will update through automation, and on the restart script. I have already set up the user "contdel" on the virtual server, which is in the sudoers list. The permissions on the Tomcat configuration folder are set so that the owner (root) has read/write/execute rights, the group (root) has read/execute rights, and everyone else has read/execute rights.

UAT$ ls -lsah /etc/tomcat6
total 100K
4.0K drwxr-xr-x   5 root root    4.0K May  8 22:18 .
4.0K drwxr-xr-x  94 root root     12K May 20 22:30 ..
4.0K drwxrwxr-x   4 root tomcat6 4.0K May  8 22:18 Catalina
4.0K -rw-r--r--   1 root tomcat6 4.0K Jan 11 02:24 catalina.properties
4.0K -rw-r--r--   1 root tomcat6 1.4K Aug  5  2007 context.xml
4.0K -rw-r--r--   1 root tomcat6 2.4K Dec 13  2011 logging.properties
4.0K drwxr-xr-x   3 root tomcat6 4.0K May  8 22:18 policy.d
8.0K -rw-r--r--   1 root tomcat6 6.6K Jan 11 02:24 server.xml
4.0K drwxr-xr-x   6 root root    4.0K May  8 22:18 .svn
4.0K -rw-r-----   1 root tomcat6 1.5K Nov  4  2010 tomcat-users.xml
52K -rw-r--r--   1 root tomcat6  52K Nov 11  2011 web.xml

As it is, I still need sudo to do an svn update, which is fine from an interactive terminal, but it wouldn't work from a non-interactive Jenkins ssh session. If I did that, I would get an error similar to this:

sudo: no tty present and no askpass program specified
Sorry, try again.

This happens because the sudo command prompts for the user's password, which is obviously not going to work in a non-interactive session. What I can do is change the sudoers configuration to allow the user contdel to restart Tomcat using sudo without a password prompt. The instructions on how to do so are on the Ubuntu help site. In recent versions of Ubuntu, the sudoers file automatically includes any configuration files stored in /etc/sudoers.d/, so I can avoid editing /etc/sudoers directly (which is a good thing) and add my changes to /etc/sudoers.d/contdel instead. If I happen to mess up a directive, I can easily revert by removing the new config file and trying again.

UAT$ sudo visudo -f /etc/sudoers.d/contdel
...
# Allow user contdel to restart tomcat via sudo without a password prompt
contdel ALL=(ALL:ALL) NOPASSWD:/etc/init.d/tomcat6
...

laptop$ ssh contdel@192.168.56.2 sudo /etc/init.d/tomcat6 restart
contdel@192.168.56.2's password: 
* Stopping Tomcat servlet engine tomcat6
...done.
* Starting Tomcat servlet engine tomcat6
...done.

Note that the password prompt comes from the local ssh command, not from the remote sudo command.

I can make it a little more secure by doing two things:

  1. Restrict the sudoers directive for contdel so that it only applies when the command comes from the local network.
  2. Use private/public key authentication for user contdel instead of username/password. This is particularly useful for Jenkins configurations that might or might not expose the password in the logs (or anywhere else). See the sketch below.
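
For the second point, the key entry itself can also be locked down so that it is only accepted from the host-only network. A minimal sketch of the authorized_keys entry on the virtual server (the from= option is standard OpenSSH syntax; the key material is truncated):

[/home/contdel/.ssh/authorized_keys (snippet)]
from="192.168.56.0/24" ssh-rsa AAAA...truncated... contdel@UAT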

However, I am still more interested in getting the update job going, so I'll go ahead and create the UAT_Update_Env job. In my previous post I set up the directory structure in source control that will hold all the relevant files in my project: not only the application code, but also all configuration files for the environment and the database script. The environment configuration files are in $SVN_ROOT/envsetup/[environmentID], which for UAT means $SVN_ROOT/envsetup/UAT. This job will therefore look for source control changes in svn://192.168.56.1/mywebapp/trunk/envsetup/UAT every five minutes, and if any changes are detected it will apply them via SSH to the UAT virtual server.

In terms of how the commands will actually be executed, I have two equally viable options:

  1. I tell Jenkins to open a local shell and execute remote commands on UAT via SSH.
  2. I tell Jenkins to start a slave on UAT that will execute shell commands locally.

Just to keep things simple at this point in time, I'll go for the second option and use the SSH Slaves plugin that comes with the latest Jenkins "out of the box". I therefore need to create a new Jenkins slave called "UAT" that will run on my virtual server. Under the hood, Jenkins will connect to my virtual server via SSH, upload the slave configuration and binaries, and launch the slave process on the virtual server. When I set up my job in Jenkins, I will configure it to run only on the newly configured slave, so that all commands and scripts will be executed by the slave on the virtual server. The root Jenkins directory on the new node will receive all files from the Jenkins master. These files will change often, and after a job gets executed a number of times they can build up fairly quickly. I don't want to spend any effort cleaning them up regularly, so I'll just use /tmp as the root Jenkins directory. All files in there will disappear after each reboot.


The UAT_Update_Env job has the following characteristics.
  1. The job is restricted to run on the "UAT" node only.
  2. The job monitors the configuration files in Subversion for changes every 5 minutes.
  3. The job executes svn update /etc/tomcat6 followed by sudo /etc/init.d/tomcat6 restart.



Running the job verifies that it's all good.

Started by user anonymous
Building remotely on UAT in workspace /tmp/jenkins_slave/workspace/UAT_Update_Env
Cleaning local Directory .
Checking out svn://192.168.56.1/mywebapp/trunk/envsetup/UAT at revision '2013-05-23T16:59:19.322 +1000'
At revision 17
no change for svn://192.168.56.1/mywebapp/trunk/envsetup/UAT since the previous build
[UAT_Update_Env] $ /bin/sh -xe /tmp/hudson603902310922720127.sh
+ svn update /etc/tomcat6
At revision 17.
+ sudo /etc/init.d/tomcat6 restart
* Stopping Tomcat servlet engine tomcat6
...done.
* Starting Tomcat servlet engine tomcat6
...done.
Finished: SUCCESS

In order to test that Tomcat is actually working, I have to fire an HTTP request to the Tomcat server and check that it doesn't spit out an error. To do this I create a new Jenkins job called UAT_Verify_Env that uses curl to connect to the virtual server on port 8080. If a connection error occurs, curl will return an error and the job will fail. Since I am interested in verifying a successful connection and not in the actual output of the curl command, I can redirect the output to /dev/null. The UAT_Verify_Env job therefore executes one simple shell command:

curl http://192.168.56.2:8080 -o /dev/null
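
One possible refinement, which I am not using here: plain curl only fails on connection errors, so a Tomcat that answers with an error page would still pass the test. Adding curl's --fail flag would also turn HTTP error responses (status 400 and above) into a non-zero exit code:

curl --fail http://192.168.56.2:8080 -o /dev/null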

This job will be executed on the Jenkins master, not on the UAT slave, in order to verify external connectivity within the virtual network. Running this job on the UAT slave will only verify connectivity to the local network adapter.

Now I can test the full cycle: check out the Tomcat configuration to a local workspace on my laptop, change the HTTP connector port in server.xml, commit the changes, wait for Jenkins to detect the modifications in the source repository and fire the update job, then wait for the job to complete and use curl to check that Tomcat is now listening on the new port. If I change the port to 8081 I will see the job UAT_Verify_Env fail, which is a good thing because it proves that I get quick feedback on a wrong environment configuration.

Started by user anonymous
Building on master in workspace /var/local/jenkins/jobs/UAT_Verify_Env/workspace
[workspace] $ /bin/sh -xe /tmp/tomcat6-tomcat6-tmp/hudson1161186187774233720.sh
+ curl http://192.168.56.2:8080 -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) couldn't connect to host
Build step 'Execute shell' marked build as failure
Finished: FAILURE

Changing the port back to 8080 and checking in the changes to source control will get everything back on a happy path.

Started by user anonymous
Building on master in workspace /var/local/jenkins/jobs/UAT_Verify_Env/workspace
[workspace] $ /bin/sh -xe /tmp/tomcat6-tomcat6-tmp/hudson2858331979742011098.sh
+ curl http://192.168.56.2:8080 -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  1887  100  1887    0     0   260k      0 --:--:-- --:--:-- --:--:--  921k
Finished: SUCCESS

In a similar fashion, I can extend this process to also update the MySQL configuration. I will skip the detailed description here because it follows exactly the same principles as above: make sure that user contdel can perform an svn update on the MySQL configuration folders and files, allow user contdel to restart MySQL via sudo without a password prompt, then amend the UAT_Update_Env and UAT_Verify_Env jobs in Jenkins to account for changes in the MySQL configuration. A sketch follows below.
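
Mirroring the Tomcat steps above, that sketch might look like this (the paths are the Ubuntu defaults; the sudoers line is my assumption, modelled on the tomcat6 one):

UAT$ sudo visudo -f /etc/sudoers.d/contdel
...
# Allow user contdel to restart mysql via sudo without a password prompt
contdel ALL=(ALL:ALL) NOPASSWD:/etc/init.d/mysql
...

UAT$ svn update /etc/mysql
UAT$ sudo /etc/init.d/mysql restart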

13 May 2013

Trying Continuous Delivery - Part 1

Please note: I am making this stuff up as I go along, so I may have to update each post at a later stage to reflect some changes that I think might make things easier to manage.

In a previous post I introduced this series looking at continuous delivery from a practical standpoint, taking a simple web application and automating its lifecycle. This post outlines the initial setup of the continuous delivery system. These are the steps I will follow:

  1. Install VirtualBox, Subversion and Tomcat on my laptop
  2. Prepare Ubuntu Server virtual machine with Subversion and Tomcat
  3. Set up SVN repository and user access list on my laptop
  4. SVN checkin Tomcat settings from virtual server
  5. SVN checkout Tomcat settings onto my laptop
  6. SVN checkin modified Tomcat settings from my laptop
  7. SVN update Tomcat settings on the virtual server
  8. Restart Tomcat on the virtual server and verify changes

Installing VirtualBox and Subversion on my Ubuntu laptop is straightforward with apt-get, but the Ubuntu Software Centre makes it even easier.


I can then prepare the virtual server by downloading the relevant ISO image and hooking it up to the virtual DVD of a new virtual machine. After installing Ubuntu Server I have a new environment ready to be configured as my integration box.


By default, the virtual machine has a NAT network adapter. Later I will configure a host-only connection with a static IP address to my new virtual server, so that I can work with it even when I'm not connected to my home network. A host-only connection is also useful to simulate environments that wouldn't normally be connected to the Internet, and that's ultimately the kind of adapter that I want to be working with. Before I do that, I need to install/update the relevant tools on the virtual machine: SSH server, Subversion, Tomcat6, and MySQL, plus a few basic packages to facilitate administration and remote viewing.

VM$ sudo apt-get update
VM$ sudo apt-get install openssh-server subversion tomcat6 mysql-server xorg zerofree make gcc linux-headers-$(uname -r)

Once the baseline software is on the virtual machine, I can create a host-only adapter in VirtualBox (File / Preferences / Network), so I will have two adapters: one NAT adapter (the default) and one host-only adapter.
After all updates are installed, I won't need the NAT adapter any more (at least until the next major package upgrade), so I can either disable it in VirtualBox or disconnect it (untick the "Cable Connected" check box in the VM settings) without removing it altogether. If I ever need to install a new package or perform a major upgrade, I can easily re-enable the NAT adapter and then disable it again right away.


I need my host-only adapter to have a fixed IP address (no DHCP). The reason for using fixed IP addresses is twofold: firstly, I want my continuous delivery scripts to be simple and deterministic without having to install a name server/DNS just for this system; and secondly, a production-like infrastructure would most likely use fixed/reserved IP addresses anyway. Since I only need to set up a few virtual machines, I'll just maintain all hostnames and addresses in /etc/hosts.

127.0.0.1       localhost
192.168.56.1    VM-host      # The VM host
192.168.56.2    UbuntuSRV    # The baseline VM
192.168.56.3    Integration  # The integration VM
192.168.56.4    UAT          # The user acceptance VM
192.168.56.5    PROD         # The production VM

I decide to assign IP address 192.168.56.1/24 to the host-only adapter in VirtualBox, for no particular reason other than that it's the default address VirtualBox offers. It follows that the IP address on my virtual server will have to be of the form 192.168.56.xxx in order to communicate with the host (my laptop). First, change the interface configuration on the VM.

VM$ sudo vi /etc/network/interfaces

...
auto eth0
iface eth0 inet static
address 192.168.56.2
netmask 255.255.255.0
gateway 192.168.56.1
...

Restart the network interface on the VM.

VM$ sudo ifdown eth0
VM$ sudo ifup eth0
ssh stop/waiting
ssh start/running, process 1234

Test communication from virtual machine to laptop host.

VM$ ping -c 3 VM-host
PING VM-host (192.168.56.1) 56(84) bytes of data.
64 bytes from 192.168.56.1: icmp_req=1 ttl=64 time=0.828 ms
64 bytes from 192.168.56.1: icmp_req=2 ttl=64 time=1.03 ms
64 bytes from 192.168.56.1: icmp_req=3 ttl=64 time=1.893 ms

--- VM-host ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.828/0.919/1.037/0.090 ms


laptop$ ping -c 3 UbuntuSRV
PING UbuntuSRV (192.168.56.2) 56(84) bytes of data.
64 bytes from 192.168.56.2: icmp_req=1 ttl=64 time=1.54 ms
64 bytes from 192.168.56.2: icmp_req=2 ttl=64 time=0.760 ms
64 bytes from 192.168.56.2: icmp_req=3 ttl=64 time=0.737 ms

--- UbuntuSRV ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.737/1.012/1.540/0.374 ms

Now I can use the NAT adapter for internet connections and the host-only adapter for connections to 192.168.56.xxx addresses, so the routing table needs to reflect this separation.

VM$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.56.1    0.0.0.0         UG    100    0        0 eth1
10.0.2.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.56.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1

VM$ ping -c 3 www.google.com
PING www.google.com (74.125.237.178) 56(84) bytes of data.

--- www.google.com ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2014ms

It looks like the default route is through the host-only adapter, and no route leads to the Internet. I need to change that, so I'm going to use the VirtualBox NAT gateway address 10.0.2.2 as the gateway for the default route. This way I can have a host-only connection and a NAT connection, both working, and with Internet access.
When I want a strict host-only connection, all I have to do is untick the "Cable Connected" box in the VirtualBox settings for my virtual server's NAT adapter.

VM$ sudo route del default
VM$ sudo route add default gw 10.0.2.2 dev eth0
VM$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.2.2        0.0.0.0         UG    100    0        0 eth0
10.0.2.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.56.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1

VM$ ping -c 3 www.google.com
PING www.google.com (74.125.237.178) 56(84) bytes of data.
64 bytes from 74.125.237.178: icmp_req=1 ttl=64 time=19.9 ms
64 bytes from 74.125.237.178: icmp_req=2 ttl=64 time=10.7 ms
64 bytes from 74.125.237.178: icmp_req=3 ttl=64 time=12.6 ms

VM$ ping -c 3 VM-host
PING VM-host (192.168.56.1) 56(84) bytes of data.
64 bytes from 192.168.56.1: icmp_req=1 ttl=64 time=0.834 ms
64 bytes from 192.168.56.1: icmp_req=2 ttl=64 time=1.11 ms
64 bytes from 192.168.56.1: icmp_req=3 ttl=64 time=1.876 ms

Looks good, so I'll just make this route persistent and move on.

VM$ sudo vi /etc/network/interfaces

...
...
up route add default gw 10.0.2.2 dev eth0
...
...

VM$ sudo service networking restart

So far so good. Let's get Subversion running and test connectivity from the virtual machine. We'll need a local repository first of all, then run the standalone Subversion server listening on the adapter used for the VirtualBox host-only connection.

laptop$ mkdir -p ~/svnrepo/mywebapp
laptop$ svnadmin create ~/svnrepo/mywebapp
laptop$ svnserve -d -r ~/svnrepo --foreground --log-file ~/svnserve.log --listen-host 192.168.56.1

VM$ svn info svn://VM-host/mywebapp
Path: mywebapp
URL: svn://VM-host/mywebapp
Repository Root: svn://VM-host/mywebapp
Repository UUID: cbcda313-59ce-438c-844d-f6a9db7a4d30
Revision: 0
Node Kind: directory
Last Changed Rev: 0
Last Changed Date: 2013-05-13 12:57:36 +1000 (Mon, 13 May 2013)

It looks like I have a good baseline virtual server that I can use as a template to create my other virtual environments. Let's start with the Integration environment (INT). I only have to clone the baseline VM in VirtualBox using the 'linked clone' option, making sure I reinitialise the MAC addresses of all network cards.





What I want to do now is to import the existing Tomcat configuration from the INT virtual machine into Subversion. This way I will be able to control any modifications to these settings from my development laptop. When I check in new versions of the settings, I will shut down the relevant services on the virtual machine, update the settings' working copies, and restart the services in order to pick up the new settings. In fact, I can even manage these changes from Eclipse, together with the application source code. It is easy to create an empty folder structure in Eclipse on my laptop host and then commit it to Subversion. I can then check out the relevant folders in my virtual machine, svn add the relevant configuration, and check it all in.

INT$ svn import /etc/tomcat6 svn://VM-host/mywebapp/trunk/envsetup/INT/tomcat6 -m "Initial import for tomcat6 config"
INT$ sudo svn co svn://VM-host/mywebapp/trunk/envsetup/INT/tomcat6 /etc/tomcat6 --force

I need to sudo these commands because these directories are usually owned by root, and adding them to Subversion needs write access: the operation automatically generates some synchronisation metadata files and folders in the working copy directory. Back on the laptop host, I now need to update the Eclipse project, and then I will be able to manage configuration changes from there.

To test that this arrangement actually works, I can use Eclipse to change the Tomcat HTTP port on the guest virtual machine. I'll change it back afterwards.
[server.xml (snippet)]
...
    <!-- changed from port 8080 to port 8081 -->
    <Connector port="8081" protocol="HTTP/1.1"
               connectionTimeout="20000"
               URIEncoding="UTF-8"
               redirectPort="8443" />
...


INT$ sudo /etc/init.d/tomcat6 stop
 * Stopping Tomcat servlet engine tomcat6 [OK]

INT$ sudo svn update /etc/tomcat6
U   /etc/tomcat6/server.xml
Updated to revision 6

INT$ sudo /etc/init.d/tomcat6 start
 * Starting Tomcat servlet engine tomcat6 [OK]

Now pointing my browser from the laptop host to the guest address http://Integration:8081 shows the Tomcat welcome page.


This means that now I can successfully drive any Tomcat configuration changes from my development environment. Also, it means that I can safely revert to any previous revision of the Tomcat configuration if something goes wrong. In fact, I'll just test that now.

[in my workspace folder]
laptop$ svn merge -r COMMITTED:PREV .
--- Reverse-merging r16 into '.':
U    envsetup/INT/tomcat6/server.xml

laptop$ svn commit -m "Reverting previous commit"

INT$ sudo svn update /etc/tomcat6
U    /etc/tomcat6/server.xml
Updated to revision 17.

INT$ sudo /etc/init.d/tomcat6 restart
 * Stopping Tomcat servlet engine tomcat6 [OK]
 * Starting Tomcat servlet engine tomcat6 [OK]

Now pointing my browser from the laptop host to the guest address http://Integration:8081 spits out a browser connection error, but http://Integration:8080 shows the Tomcat welcome page as expected.

To recap, this is what I have so far:
  • a physical host (my laptop) with Tomcat, Eclipse, and VirtualBox
  • a Subversion server running on the physical host, using a local directory as a repository
  • a virtual server (my baseline) with Tomcat, Subversion and MySQL, connected with the physical host using a host-only adapter
  • a virtual server (my integration server) cloned from my baseline, with its Tomcat configuration in the Subversion repository

and this is what I can do with the current setup:

  • change the Tomcat configuration of the virtual server from my development IDE on the physical host
  • easily pick up the latest Tomcat configuration changes on the virtual server
  • easily revert to any previous revision of the Tomcat configurations

The next post will introduce automation of configuration changes.

21 April 2013

Continuous Delivery in Practice - Introduction

I am a big fan of automation, and this passion of mine has also made me a big fan of Continuous Delivery. I may not have searched hard enough, but I can't seem to come across practical examples of Continuous Delivery, so I decided to make my own, for two reasons:
  1. to help myself learn more about the practical applications of Continuous Delivery
  2. to hopefully help others do the same
So this post is to start a multipart series on the practical implementation of Continuous Delivery for a very simple web application. I will not write much about the application itself because my main focus is to put together and describe the infrastructure and the mechanisms that keep the system always in a releasable state.
It is worth noting that Continuous Delivery does not imply Continuous Release. The idea is to make sure that the system as a whole (not just the application) is always in a releasable state. This way, a release becomes purely a business task that doesn't require lots of coordination efforts with a number of technical teams.

This post series is going to be loosely based on the book "Continuous Delivery" by Jez Humble and David Farley, which is (needless to say) a must read for anyone interested in this topic. The book outlines some fundamental principles of Continuous Delivery:
  1. Create a repeatable, reliable process for releasing software
  2. Automate (almost) everything
  3. Keep everything in version control
  4. If it hurts, do it more frequently and bring the pain forward
  5. Build quality in, right from the start
  6. Done means released
  7. Everybody is responsible for the delivery process
  8. Continuous improvement
I will explicitly refer to most of these principles as this post series evolves. It is worth noting that, because of principle no. 3, I will make a critical decision right away: if it can't be source-controlled, it won't be part of my technology stack. In other words, I will only use tools/packages/platforms driven by (mostly) text-based configuration (properties files, XML files, etc.).

In the first post I will set up some virtual environments, before any code is written, driving environment changes through source control.

Finally, the obligatory disclaimer:
Please note: I am making this stuff up as I go along, so I may have to update each post at a later stage to reflect some changes that I think might make things easier to manage.

20 April 2013

Useful Online Help

The point of this short post is that including some help tool with your product is not enough. Help tools are supposed to be helpful, and in order to be helpful they must be easily understood. I think that's common sense.

So why, oh why, do so many vendors, even the big names, avoid investing that little bit of extra effort in proof-reading instructions and documentation? Here is a classic example. The equipment is a Cisco EPC3925 EuroDocsis 3.0 2-PORT Voice Gateway.

"IP Flood Detection is hacker intrusion method when massive amount of packets are send to the networking equipment with a purpose to overwhelm it.This can expose weaknesses or cause failures.Enabling this future is essential part if intrusion prevention."

This is just one example; the rest of the documentation is written to the same standard.

I mean... really... Cisco... come on... Yes, I can understand what the statement actually means, but is this English? I can appreciate that translation into English from other languages might not be straightforward, I know, that's fine, but are you really unable to find one... ONE... English native-like speaker that can proof-read this stuff? I'm pretty sure you can afford it.


09 January 2013

Ossessione


cold stars
of a trembling evening
speak to me
of yellow skies and eternal storms
in the shadow of a roof of light

walls that dazzle
with an infinite darkness
inside living boxes that breathe lies
in the streets of the eyeless man

a flaming staff
a card painted with tears
a game that burns the world
a world that burns me

let us be worn out by ourselves

O-B-S-E-S-S-I-O-N

29 June 2012

The relationship between language and gender and the implications for language planning.



This essay will discuss the relationship between language and gender, and argue that some of the differences between men’s speech and women’s speech contribute to build and maintain a power imbalance between the two sexes. A summary will first be given of some generalizations about gender-language association, followed by an outline of the main sociolinguistic approaches used in the study of gender-related language variations. Some consideration will then be given to the influence of language on establishing and maintaining social structures and social divides, with some implications for language planning.

One of the study areas of Sociolinguistics is articulated around language varieties that are associated with one gender or the other. Since their distinguishing element is the association with gender, these varieties are also known as genderlects and their characteristics vary across countries, cultures, generations, religions, and social groups. For example, speech patterns associated with male western-European youth cultures are different from those associated with male eastern-European youth cultures, or male western-European adult cultures. Commonly, there are numerous stereotypes associated with male and female speech that give rise to generalizations about men and women, like the perception that women are more empathetic and polite while men are more factual and “dry”. 

Initial interest in gender-related speech patterns arose in the 1960s, when Labov published a study on the social motivation of sound changes (1963) and another one on the social stratification of language (1966). The objective of Labov’s studies was to examine the contexts in which certain phonological patterns would apply in relation to a set of social variables, among which were status, ethnic group, and sex. Among his findings there was a tendency of women to adopt standard variants and to “follow an extreme pattern of stylistic variation” (1966, p. 312), or to “hypercorrect”, in an effort to emulate other linguistic patterns that are perceived to be somewhat superior or set the benchmark. The systematic study of genderlects, however, seems to have started in the early 1970s, when Lakoff published a study on the frequency in which a number of linguistic features occurred in women’s speech compared to men’s speech (1973). Lakoff’s approach was labelled the “deficit approach”, because it alleged that women’s speech was characterized by “uncertainty” and “lack of confidence on the part of women” (Fromkin et al. 2011, p. 449). This view was supported by the notion (at the time) that male speech was considered to represent the unmarked standard form; therefore any variation from the unmarked form (like the variation brought by female speech) was - by definition - a marked form. As a result, with the male speech setting the benchmark by convention, any variation from it would automatically be perceived somewhat inferior.

Moving in a similar direction was the “dominance approach”, according to which male speech was dominant over the subordinate female speech, probably as a result of traditional western patriarchal social structures (Spender, 1985). Even though the dominance approach does not overtly assume female genderlects to be substandard or inadequate, it still reflects a concentration of power and authority in male speech, placing male society in a status of supremacy. 

The “difference approach” removes the ideas of “better” and “dominant” and looks at male and female genderlects as manifestations of two different sub-cultures: there is no differentiation in terms of power, but there is differentiation in terms of cultural contexts. Tannen (1990) looked at genderlect differences from a pragmatic point of view, looking at meta-messages: what is really being communicated as opposed to what is actually being said. For example, an offer for help might convey a meta-message that the person offering help is more capable and competent than the person being helped. This way, meta-messages can cause the listener to perceive either a difference or an equality in status between the conversation participants. Tannen’s findings show different areas of differentiation between male and female speech. For example, Tannen claims that women tend to focus more on intimacy and connection with their interlocutors, while men tend to shape their conversation around meta-messages of competition and independence. Also, men tend to be more comfortable than women when talking in public, while women are generally more comfortable than men when talking in private settings.

Cameron (2007) argues that historical approaches to the study of genderlects may be incomplete, especially with reference to both the dominance approach and the difference approach. One aspect of Cameron’s argument is that male genderlects are usually considered to be unmarked dialect forms, whilst female genderlects by reflection become the deviant forms. However, this is just an artificial benchmark, since there is no evidence to suggest that it could not be the reverse: female genderlects providing the unmarked form and male genderlects being deviant. Another aspect of the argument against the difference approach is that, although it tries to offer a more egalitarian or levelling perspective on genderlects by focusing on contextual differences rather than power and submission, it seems to miss the idea that those differences are still actual manifestations of relative two-way dominance. For example, conversational dominance is not just about the behaviour of certain participants of the conversation in the establishment of hierarchical prestige: it is also about the conscious choice of the other participants to allow that to happen. In general, Cameron argues, there seems to be no evidence to support the relationship between the roles of men and women in society, and their language use.

More recently, the social constructionist approach has developed a dynamic study of language and genders, following the principle that language is an expression of culture and, as a result, social constructs are directly linked to speech constructs. Speech constructs, however, go beyond being simply a projected expression of an underlying “fixed” social structure and become tools that the participants actively use to dynamically shape and maintain the social structure (Zimmermann and West, 1975). The use of different language varieties across specific social structures was examined in detail by Eckert and McConnell-Ginet (1999), who introduced the concept of Communities of Practice, in which groups of people are connected by common tasks or projects and over time construct a set of shared practices. As such communities are established and maintained, linguistic patterns and speech strategies are also established and maintained therein. When single-sex communities of practice come to exist, whether by coincidence or by some form of social or institutional barrier, it is easy to mistake for gender identity the linguistic identity that those communities build and develop for themselves. In other words, the association between a language form and a community of practice whose members just happen, coincidentally, to belong to the same sex can, over time, transform into an association between that language form and a specific gender, according to some kind of social projection where community of practice and gender are two faces of the same sociolinguistic coin.

As theories of language and gender developed, attention seems to have shifted away from physical and biological variables towards social, cultural, and historical variables. The deficit approach considered sex to be one of the variables that shaped and influenced language patterns, while the dynamic approach completely reversed the cause-effect relationship and considered linguistic variables to shape and influence the expression of gender.

In general, modern approaches look at linguistic patterns and conversational styles, and some generalizations have become popularized with various degrees of accuracy. For example, minimal responses are paralinguistic features in the form of vocal expressions that can be used for different purposes by men (showing agreement) and women (building collaborative discourse). Questioning is usually a strategy adopted by men to request information, without further motive; it is however adopted by women to build participation, elicit contributions, and generally engage in collaborative discourse. Disclosure of personal information is perceived by men as a sign of weakness and strongly avoided, but it is a subtle strategy used by women to connect and build intimacy and trust with other discourse participants. Attentiveness and listening are two important factors in the construction of meaning, because meaning is shaped in the mind of the listener thanks also to other types of input than just what is being said. Women’s tendency to build intimacy and trust makes them also “good listeners”, while men’s tendency to build up status through turn dominance makes them “bad listeners”.

While all the above strategies and tendencies may be verifiable in certain contexts and cultures, there seems to be no consensus on their universality. There appears to be, however, agreement on the notion that gender is one of the identity traits expressed through language. Language is used by individuals not only to negotiate meaning but also to negotiate identity, so gender is something that people “do” or construct through language. If it were true that standard forms were perceived as being more prestigious and desirable than non-standard forms, then there should be very little motivation for individuals to keep using non-standard forms. However, ordinary life experiences present strong evidence not only of the existence of numerous social groups that use non-standard forms, but also of their conscious effort to keep those forms instead of aiming at the adoption of standard forms. According to Labov (1966), this is due to group members perceiving and receiving covert prestige in the establishment and maintenance of group identity, instead of pursuing more standardized dialect variants.

There are two main areas of concern about genderlects that have emerged thus far in this essay: the first pertains to their initial development (how genderlects came to exist and separate in the first place), and the second concerns their preservation (how or why they still exist in modern societies). In terms of their initial development, genderlects may find the roots of their differentiation in the establishment and perpetuated existence of single-sex communities of practice in the past. For example, in many societies war used to be an activity reserved for men alone, while women would engage in “safer” activities like socializing and nurturing, and so it was for many centuries. It seems therefore logical to suppose that most dialects that developed in the communities of practice directly related to war activities eventually became male genderlects. With time, the establishment of numerous single-sex communities of practice contributed to the reinforcement of genderlect (and gender) differentiation in what Eckert and McConnell-Ginet (1999) call the “institutionalization of gender” (p. 189). In terms of genderlect preservation, Labov’s explanation that group identity is more prestigious than the adoption of standard forms of language seems to be very logical. It is important to stress that individuals can and do belong to multiple communities of practice, and that is why group identity is not simply adopted or built unilaterally: it is negotiated according to the prevailing context. Following a social constructionist view, it is important to consider the notion that humans participate in the construction of a dynamic society with multiple aspects or facets, and the construction of individual identity and group identity is part of the process. Since social facets are not natural constructs they need to be continuously maintained, and humans do this through the institutionalization of social phenomena and the construction of traditions which, when applied to single-sex communities of practice, serve to also maintain gender separation and relative dominance.

There is also a third area of concern about genderlect differentiation: the rise of discriminatory attitudes towards sex (as a biological trait) and gender (as a component of identity). Linguistic discrimination happens when people are treated differently based on the language they use. If certain linguistic forms are distinctly associated with one gender or the other, then linguistic discrimination can become a form of sex discrimination. The preservation of genderlects as instruments used to assert individual identities therefore may become (perhaps thanks to the popular “law of unintended consequences”) a mechanism to also preserve sex discrimination. Also, if the standard form of a language has a bias towards the use of male-specific or female-specific lexicon and constructs, this may well have an influence on the perception that one gender has a higher prestige than the other, shaping social attitudes accordingly and maintaining unequal states in society. For example, the Italian language is gender-biased in many ways, one of which is that there is no neutral grammatical gender, so all nouns are either masculine or feminine, and pronouns in a sentence are assumed to refer to masculine entities unless specifically intended otherwise. In modern societies there seems to be widespread acceptance of gender-discriminatory constructs even in languages that are not perceived to be gender-biased. For example, the English language is grammatically gender-unbiased (nouns can be masculine, feminine, or neuter), but it is not uncommon to find titles like “stewardess”, “fireman”, or “waitress” on job adverts in English-speaking countries, maintaining the historical gender connotations of various communities of practice.

Wardhaugh’s views on the development and preservation of gender-specific constructs are that

“[...] men’s and women’s speech differ because boys and girls are brought up differently and men and women often fill different roles in society. Moreover, most men and women know this and behave accordingly” (2006, p. 354).

In other words, there seems to be a strong traditional attitude in the sense that the social development of new generations has always been accomplished in a certain gender-biased way, and unless something is done to disrupt this attitudinal inertia it will endure in exactly the same way. Wardhaugh continues by proposing that less sexist child-rearing practices and role differentiation would be reflected in less sexist language practices and bring a “greater freedom of choice”.

According to Wardhaugh, language planning is “an attempt to interfere deliberately with a language or one of its varieties” (p. 379) and there are different ways in which it operates. Corpus planning is a form of structural engineering for a language: it involves processes that are aimed at altering linguistic constructs through graphization (the prescription of orthographic conventions), standardization (the elevation to standard form of one variety over the others) and modernization (the extension of a variety to meet new functions). Status planning aims at influencing the perceived status of a language variety with respect to other varieties, by means of association with specific functional domains, so that the language variety acquires the same status as the functional domain in which it is commonly used. It is perhaps thanks to a combination of both, channeled through the education system and the media, that progress can be made in the mitigation and control of sex discrimination.

This is in some ways an application of the Sapir-Whorf hypothesis of linguistic relativity: if the language we use influences thought and society, and if linguistic constructs determine cognitive constructs, then any changes in the language we use will determine some changes also in our social attitudes, so by carefully crafting a series of structural linguistic alterations around sexist dialectal forms we should be able to induce favourable changes in society’s sexist attitudes towards a higher degree of gender equality. If that is the case, then these principles can be used by language planners in the formulation of appropriate policies to reduce or even eliminate sex discrimination in society.

Marco Scata

References

Cameron, D. (2007). The Myth of Mars and Venus. Oxford: Oxford University Press.

Eckert, P., and McConnell-Ginet, S. (1999). New generalizations and explanations in language and gender research, Language in Society, 28 (2), 185-201. Cambridge: Cambridge University Press

Fromkin, V. et al. (2011). An Introduction to Language (9th ed, international). Wadsworth: Cengage Learning.

Kaplan, B., & Baldauf, B. (1997). A framework for planning: who does what to whom?. In Language planning from practice to theory (pp. 28-58). Philadelphia, PA: Multilingual Matters.

Labov, W. (1963). The social motivation of a sound change. Word 19, 273-309

Labov, W. (1966). The social stratification of English in New York city. Washington, D.C.: Center for Applied Linguistics.

Lakoff, R. (1973). Language and woman's place. Language in Society, 2 (1), 45-80. Cambridge: Cambridge University Press

Spender, D. (1985). Man made language (2nd ed.). London: Routledge & Kegan Paul.

Tannen, D. (1990). You just don’t understand. Women and men in conversation. New York: William Morrow.

Wardhaugh, R. (2006). An introduction to sociolinguistics (6th ed.). Oxford: Blackwell Publishers.

Zimmermann, D.H., and West, C. (1975). “Sex roles, interruptions and silences in conversation”. In Thorne, B. and Henley, N. (eds.), Language and Sex: Difference and Dominance, 105-29. Rowley, Massachusetts: Newbury House.

02 June 2012

Broken design series: Gamevil's Fantasy War

With this post I will start a series dedicated to broken design. What's broken design? It's about things that we use that are fundamentally flawed. It's about stuff that seems obvious to some people but not to product designers.
I will start the series with a post about a game for the iPhone that I recently tried. The game is called Fantasy War, produced by Gamevil, and can be found on the Gamevil website and on iTunes.
The purpose of the game is to become the leader of a fantasy realm. You can be human, orc or elf, and you progress through the game by completing quests, battling the forces of evil in some dungeon, and waging war against other players. As you accumulate experience and resources, you can improve the infrastructure (gold mine and lumber mill), learn new skills (cloth making, leatherworking, etc.), upgrade your weapons, and hire stronger and more powerful armies. The graphics resemble those of some 1990s arcade games, but they're still not bad for a mobile game. The rules are fairly quick to learn without explicit instructions and the level progression gradually unlocks more interesting content. The developers make money selling "runestones", which allow you to do a lot more stuff and get really powerful armies.
Sounds ok, right? Well I actually like the game. It's very easy to play and the content is not bad at all. So where's the problem? What's the broken design?
Well, the problem is that this is not an iPhone game, or at least it's not built like one. When you install the game, you actually install a simplified internet browser with little or no content. Every time you interact with the game, you actually navigate through a web page. All commands, all icons, all images, are just hyperlinks on a web page. Hang on, so where's the problem? I use the internet all the time with my iPhone, and I have a lot of games on my iPhone that need an internet connection to work. No big deal. No, indeed, it wouldn't be a big deal... if the game wasn't so connection-greedy! There is no content on the iPhone: everything needs to be downloaded from the internet, including all the images (and there are a LOT of images!). In other words, this game assumes that it is running on a permanently connected device at broadband speeds... which is exactly the opposite of what an iPhone is. My user experience with this game goes something like this:
  1. start the game
  2. watch the rotating "loading..." icon for a few seconds
  3. watch a dark screen behind the "loading..." icon
  4. watch the text labels of the home screen appear behind the "loading..." icon
  5. watch the graphic background appear behind the "loading..." icon
  6. watch the images loading in chunks behind the "loading..." icon
  7. home screen fully loaded
  8. tap on any activity
  9. watch the rotating "loading..." icon for a few seconds
  10. repeat from point 3
In the end, I spend more time watching the rotating "loading..." icon than actually playing the game. Add to this the fact that there is no fault tolerance built into this simple browser, and you also get to build familiarity with the "check your internet connection" error, after which you can only shut down the game and start it again.
Also add to this the fact that dungeons are time limited (you need to kill the boss within 'N' minutes, or you have to wait for an hour or more before you can try again) and you can see how frustration builds up quickly.
The icing on the cake is that if something goes wrong with the game you get a Korean error message popping up, which is ok if you can read and understand Korean (I can do neither).
The verdict: broken design!
This is a basic architectural requirement for anything that is supposed to run on an iPhone: it must (please note, "must", not "should") be built for a seldom-connected environment with less-than-broadband speeds. If that's not the case, then don't bother.







05 May 2012

Quality Control



There seems to (still) exist a great deal of confusion in many teams about the concept of continuous integration (CI).

The mighty Martin Fowler explains it in one of his seminal articles (http://martinfowler.com/articles/continuousIntegration.html)

Wikipedia defines CI as the principle of implementing a "continuous process of applying quality control" (http://en.wikipedia.org/wiki/Continuous_integration)

Just googling the term produces nearly 6 million results (http://www.google.com/search?hl=en&q=continuous+integration&btnG=Google+Search&meta=) so one might infer at least two things from this:

1) that the topic is well covered
2) that the topic is very popular

Why, then, have I found it so difficult to find a proper implementation of continuous integration?

Perhaps it depends on what I mean by "proper implementation", so I thought about what I would really expect to receive from a "proper" CI implementation. In other words, what is the quantifiable value that a good CI process adds to my team?

My concept is very much in tune with the Wikipedia definition, in that CI is one of many means to an end: quality control. The value of a CI process lies in quality control, so it is useless if it does not deliver the means to check the quality of the deliverables.

18 March 2012

Primal Noise


[image: the first 1000 prime numbers plotted in binary, rendered in the browser by JavaScript]

This is what the first 1000 prime numbers look like when you plot them using their binary representation, one after the other. Are there fractal patterns in this image, like noise on a phone line? I don't know. Maybe I'll discuss it in another post.

This image is created by first translating each number in its binary string (e.g. decimal 53 to binary 110101), then converting each '1' into a little black square and each '0' into a blank space.

The image is produced using simple JavaScript code. Have a look at the page source if you're interested. Search for 'var primes', 'function createRow' and 'function createTable'. I used jQuery to simplify it a little bit.
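
For anyone who doesn't want to dig through the page source, here is a minimal sketch of the idea in JavaScript (with jQuery). This is not the actual code from the page: the primes array is assumed to be populated elsewhere, and the 'black'/'blank' CSS class names are made up for illustration.

// assumes 'primes' already holds the first 1000 prime numbers
function createRow(n) {
    var row = $('<tr>');                                 // one table row per number
    n.toString(2).split('').forEach(function (bit) {     // binary digits, left to right
        // a '1' becomes a black cell, a '0' stays blank
        row.append($('<td>').addClass(bit === '1' ? 'black' : 'blank'));
    });
    return row;
}

function createTable(primes) {
    var table = $('<table>');
    primes.forEach(function (p) { table.append(createRow(p)); });
    $('body').append(table);
}

createTable(primes);  // 'primes' is defined elsewhere in the page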

27 May 2011

What's in a number?


Imagine this discussion.


Developer 1: In the past week our code coverage rose from x% to y%


Developer 2: That's pretty good. There is still quite a bit of dead code after the recent refactoring, so I think we'll get to z% by next week.


Developer 3: I still think we should aim for 100%.


User/Analyst/Sponsor: What does that mean?


Developer 1: It shows how much of the whole code base we can actually test as part of our build process.


User/Analyst/Sponsor: Ok, and why is that important?


Developer 2: It's important to know how much is not covered.



User/Analyst/Sponsor: Oh, so you know how much more you have to work before you get to 100%


Developer 3: Well, not exactly. Our policy is to aim for z%


User/Analyst/Sponsor: Hmmm, and what's the use of z%? What does that mean to us?


Developers: What does it mean to you? Well it means that... hmm... that you know... it means... well... er...




That's not quite my direct experience, but I recently had to explain code coverage to non-coders and I realised how little thought some developers have put into the practical importance of code coverage outside the developers' domain.

I think most developers should be familiar with the concept of code coverage, which is simply the number of lines of code that get executed as part of the test suite expressed as a percentage of the total number of lines of code written. Full coverage (100%) means that my tests manage to execute (cover) all of my code. Less than full coverage means that there is some code that is not being tested. I'm not just talking about unit tests: I am also including integration tests and automated UI tests into the picture.

There have been countless debates on how much coverage is good coverage, including various tools and methodologies, so I won't go into a lot of detail here and I don't want to entertain anyone with yet another post on the miracles of test-driven development (TDD). The main concern here is that we cannot measure how "important" a line of code is with respect to another line of code. If we could, we would be able to come up with an "ideal" coverage target and be happy with that. Unfortunately in practice when I have anything less than full coverage I cannot tell whether the "uncovered" code is going to eventually bite me with a vengeance, or whether it will ever get noticed at all. It all depends on usage.

What's the meaning of an x% code coverage to me as a developer? In all practical terms, if it's less than 100%, then it means nothing. So either I strive for full coverage, or why should I bother at all? That is the basis upon which code coverage is sometimes ignored altogether, or used only to produce colorful block charts for project status reports.


Now what does that mean to testers? Testers usually understand the concept of code coverage and many are wise enough to avoid taking code coverage as a measure of quality of deliverables. Why is that? Well, quite simply, my automated tests might well test 100% of my code base, but that doesn't mean they actually test the right thing in the correct way.


So what does code coverage mean to me as a tester? In general terms, a code coverage of x% means that, for every new feature I add to the product, or every bug I fix, I have roughly an x% degree of confidence that I am not going to break existing features or resurrect old bugs (provided my tests actually test behavior - not just classes and methods - and that each new bug is reproduced with a test before it gets fixed).

But what's the meaning of it to users and analysts? When they see the regular progress report on the system being developed, what information does code coverage convey to them?


The meaning of code coverage to me as a user is actually more important than one might think. Roughly speaking, a code coverage of x% means that with every action I perform on the system, there is roughly a (1-x)% probability that something will go wrong.
To put it slightly differently (or perhaps more precisely), an x% code coverage means there is roughly an x% probability that my interaction with the system produces the expected result because that's what it was programmed to do. Conversely, there is roughly a (1-x)% probability that my interaction with the system produces the expected result by chance. For example, at 90% coverage, roughly one interaction in ten exercises code whose behaviour has never been verified. The degree of criticality of the system then dictates the degree of acceptance or tolerance for failure (or tolerance to chance), which we can match against code coverage.

12 December 2010

Paste di Mandorla (Almond Biscuits)

Ingredients

  • 2 egg whites
  • 200g caster sugar
  • a few spoonfuls of icing sugar (for coating)
  • 400g almond meal
  • 1 teaspoon of almond essence
  • 1 teaspoon of honey (preferably orange honey)
  • 1 tablespoon of lemon juice
  • half a teaspoon of baking powder
  • half a teaspoon of sodium bicarbonate
  • 1 teaspoon of vanilla paste
  • pinch of salt


Preparation

  1. Whisk the egg whites until stiff
  2. Add the lemon juice, the honey, the vanilla paste and the almond essence.
  3. Whisk vigorously. The mixture will become much softer, but it should still be fluffy.
  4. Add the caster sugar, the baking powder, the sodium bicarbonate and the salt.
  5. Whisk vigorously.
  6. Add the almond meal.
  7. No more whisking. This time use a spoon to mix everything. The result should be a soft and somewhat loose cookie dough that will hold whatever shape you want to give it.
  8. Make small balls of about 4cm and roll them in the icing sugar. Then shape them however you like and put them on a baking tray lined with baking paper (no need to grease it).


Cooking


Bake at 180°C for 15 mins. The aim is to make thick cookies that are somewhat crunchy on the outside and soft and slightly chewy on the inside, so bear in mind that any shape less than 2cm thick will cook quickly and get crunchy quickly.
After baking, you can use any leftover icing sugar to dust the lot, or you can just enjoy them 'au naturel'.


Enjoy!
M.