24 December 2013

Trying Continuous Delivery - Part 3

Please note: I am making this stuff up as I go along, so I may have to update each post at a later stage to reflect some changes that I think might make things easier to manage.

In my previous post I set up the basic environment that will host my application for Integration work. Now I can start writing the application and put it into a continuous delivery pipeline. The purpose of these posts is only to put into practice some of the principles of continuous delivery, so I will not spend much time writing about the details of the web application. In fact, the application will be a simple noticeboard, where anyone can write a new post and anyone can read all posts in the system.

I set up my application in Eclipse as a Maven project and start writing my acceptance tests right away. I could use behavioral testing frameworks like JBehave or easyb, but I will keep it simple and use plain JUnit tests written in a format that still defines and asserts application behavior. Just as a useful practice, I like separating my tests into packages that reflect their general purpose:

  • mywebapp.tests.unit — tests for small pieces of functionality. They run very fast and are not supposed to cross multiple application layers, so they use test doubles (mocks, stubs, etc.) where needed.
  • mywebapp.tests.integration — tests that exercise multiple application layers (e.g. services and data access) and run a bit slower than unit tests.
  • mywebapp.tests.acceptance — arguably the most important type of tests. These are translated directly from requirements/stories, and they are usually the slowest to run.
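As a sketch of what a test in the unit package might look like (the NoticeRepository, StubNoticeRepository and NoticeboardService names are hypothetical here, not part of the real application yet), a fast unit test can replace the data-access layer with a hand-rolled test double:

```java
// Hypothetical sketch: a unit test that swaps the real data-access layer
// for an in-memory stub, so the test runs fast and touches a single layer.
import java.util.ArrayList;
import java.util.List;

interface NoticeRepository {
    void save(String notice);
    List<String> findAll();
}

// Test double: an in-memory stand-in for the real repository.
class StubNoticeRepository implements NoticeRepository {
    final List<String> saved = new ArrayList<>();
    public void save(String notice) { saved.add(notice); }
    public List<String> findAll() { return saved; }
}

// The small piece of functionality under test.
class NoticeboardService {
    private final NoticeRepository repository;
    NoticeboardService(NoticeRepository repository) { this.repository = repository; }
    void postNotice(String text) {
        if (text == null || text.trim().isEmpty()) {
            throw new IllegalArgumentException("notice text must not be empty");
        }
        repository.save(text);
    }
}

public class Main {
    public static void main(String[] args) {
        StubNoticeRepository stub = new StubNoticeRepository();
        NoticeboardService service = new NoticeboardService(stub);
        service.postNotice("hello");
        if (!stub.saved.contains("hello")) {
            throw new AssertionError("notice was not saved");
        }
        System.out.println("unit test passed");
    }
}
```

Because the stub lives entirely in memory, this style of test gives feedback in milliseconds, which is exactly what the early stages of the pipeline need.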

This separation strategy allows me to do a couple of things that will be very useful once I start shaping my delivery pipeline (which will be the subject of a later post):
  • Run individual test suites from Eclipse by right-clicking the relevant package and choosing Run As -> JUnit Test
  • Separate the test suites in Maven by using simple include directives for the Surefire Plugin and the FailSafe Plugin
  • Run faster tests early in the feedback cycle and slower tests only after all the faster tests have succeeded

More packages will eventually emerge once I start automating other types of tests (performance, capacity, etc.), but these will suffice for now.
All my acceptance tests will follow a given-when-then pattern, and my first acceptance test will check that anyone is able to add a new comment on the noticeboard.

package mywebapp.tests.acceptance;

import org.junit.Test;

public class AnyVisitorCanWriteNewEntryTest {

    /**
     * Loads the noticeboard page.
     */
    private void givenTheNoticeboardPage() {
        // ...
    }

    /**
     * Fills the noticeboard form with a new entry.
     */
    private void whenAnonymousUserWritesNewNotice() {
        // ...
    }

    /**
     * Submits the noticeboard form.
     */
    private void andSubmitsForm() {
        // ...
    }

    /**
     * Examines the noticeboard table and tries to find the
     * unique comment that was used to submit the form.
     */
    private void thenAnonymousUserCanSeeTheNewEntryOnNoticeboardPage() {
        // ...
    }

    @Test
    public void anonymousUserCanWriteNewEntry() throws Exception {
        givenTheNoticeboardPage();
        whenAnonymousUserWritesNewNotice();
        andSubmitsForm();
        thenAnonymousUserCanSeeTheNewEntryOnNoticeboardPage();
    }
}

The test will clearly fail because I haven't written any application code yet. It is meant to simulate user actions on a web interface, so it follows a WebUI-centric pattern:
  1. Load a web page
  2. Find some element on the page, e.g. an input box
  3. Perform some operation on the element, e.g. change one of its attributes
  4. Perform some action on the page, e.g. submit a form
  5. Check the result, e.g. different elements are displayed on the page
However, this doesn't necessarily have to be the case for acceptance tests on a web application: if the presentation layer is well decoupled from the business logic (as it should be), then acceptance tests could exercise the business logic directly instead. Either way, there are important tradeoffs to consider. A UI-centric approach will almost certainly depend on page elements having specific names and/or IDs, which can change many times during the application's development lifecycle; each change wastes valuable feedback cycles and development time on maintaining and fixing tests that seemingly break for no reason. On the other hand, one could argue that exercising the business logic directly and bypassing the presentation layer altogether does not actually simulate any user interaction, and is therefore not good enough to be considered a "proper" acceptance test. My purpose here is to provide an example for continuous delivery, so my choice of a UI-centred approach is incidental; the main point is that there must be automated acceptance tests as part of the continuous delivery feedback cycle.
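To make the alternative concrete, here is a sketch of the same acceptance criterion exercised against the business logic directly, bypassing the web UI entirely. The Noticeboard class and its methods are hypothetical stand-ins for whatever service layer the application ends up with:

```java
// Sketch: the acceptance criterion "any visitor can write a new entry",
// exercised against a hypothetical service-layer class instead of the UI.
import java.util.ArrayList;
import java.util.List;

class Noticeboard {
    private final List<String> notices = new ArrayList<>();
    void post(String text) { notices.add(text); }
    List<String> entries() { return notices; }
}

class AnyVisitorCanWriteNewEntryServiceTest {
    private Noticeboard noticeboard;
    private String newNotice;

    void givenANoticeboard() {
        noticeboard = new Noticeboard();
    }

    void whenAnonymousUserWritesNewNotice() {
        newNotice = "Bike for sale";
        noticeboard.post(newNotice);
    }

    void thenTheNewEntryAppearsOnTheNoticeboard() {
        if (!noticeboard.entries().contains(newNotice)) {
            throw new AssertionError("new entry not found on noticeboard");
        }
    }
}

public class Main {
    public static void main(String[] args) {
        AnyVisitorCanWriteNewEntryServiceTest test = new AnyVisitorCanWriteNewEntryServiceTest();
        test.givenANoticeboard();
        test.whenAnonymousUserWritesNewNotice();
        test.thenTheNewEntryAppearsOnTheNoticeboard();
        System.out.println("acceptance test passed");
    }
}
```

Note that the given-when-then structure survives intact; only the layer being driven changes, which is why the tradeoff is about fidelity to user interaction rather than about test readability.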

From the acceptance test I can work my way down to more granular tests that eventually translate into application code, using a test-driven approach. I won't go into any of those details here, but I will introduce some profiles in my pom.xml in order to neatly separate the different types of tests for my delivery lifecycle.
<profile>
  <id>unit-tests</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <!-- ... -->
        <configuration>
          <includes>
            <include>**/mywebapp/tests/unit/**/*.java</include>
          </includes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
<profile>
  <id>integration-tests</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          <skipTests>true</skipTests>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-failsafe-plugin</artifactId>
        <!-- ... -->
        <configuration>
          <includes>
            <include>**/mywebapp/tests/integration/**/*.java</include>
          </includes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
<profile>
  <id>acceptance-tests</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          <skipTests>true</skipTests>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-failsafe-plugin</artifactId>
        <!-- ... -->
        <configuration>
          <includes>
            <include>**/mywebapp/tests/acceptance/**/*.java</include>
          </includes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
It's worth noting the use of skipTests in the integration-tests and acceptance-tests profiles: it explicitly skips the Maven Surefire plugin, so that only the Failsafe plugin runs the tests in those profiles.

Why did I use profiles? Why didn't I just manage all tests in the main build block? I don't think there is one single right answer here, but my preference for continuous delivery is to neatly separate the different types of tests into individual jobs in Jenkins. By doing this, I can potentially distribute the load of slow acceptance tests across several Jenkins slaves running in parallel, based on the names of test classes or their sub-packages.
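As a sketch of how that split might look, the acceptance suite could be partitioned by sub-package into separate profiles, one per Jenkins job. The posting and reading sub-package names below are hypothetical; each profile mirrors the acceptance-tests profile but narrows the Failsafe includes to a disjoint slice of the suite:

```xml
<!-- Hypothetical split of the acceptance suite across two parallel Jenkins jobs -->
<profile>
  <id>acceptance-tests-posting</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-failsafe-plugin</artifactId>
        <configuration>
          <includes>
            <include>**/mywebapp/tests/acceptance/posting/**/*.java</include>
          </includes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
<profile>
  <id>acceptance-tests-reading</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-failsafe-plugin</artifactId>
        <configuration>
          <includes>
            <include>**/mywebapp/tests/acceptance/reading/**/*.java</include>
          </includes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
```

Each Jenkins slave would then run mvn verify with its own profile, halving (in this two-way example) the wall-clock time of the slowest stage.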

For now, I just need 3 Jenkins jobs to run my tests:

  • MyWebapp-Unit-Tests: mvn clean test -P unit-tests
  • MyWebapp-Integration-Tests: mvn verify -P integration-tests
  • MyWebapp-Acceptance-Tests: mvn verify -P acceptance-tests

In particular, the job running acceptance tests will package the application into a WAR file and deploy it to Tomcat before executing the tests. This can be accomplished with the Tomcat Maven plugin or Cargo. Yes, I could use Jetty just for testing, but I need my acceptance tests to run against an environment that is as similar as possible to production, so I don't want to test on Jetty when I already know I'm ultimately going to deploy to Tomcat.
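As one possible sketch using Cargo (the container id, manager URL and credentials below are assumptions, not values from this project), the acceptance-tests profile could bind a redeploy to the pre-integration-test phase so the WAR is on Tomcat before Failsafe runs:

```xml
<!-- Hypothetical Cargo configuration: redeploy the WAR to a running Tomcat
     before the Failsafe plugin executes the acceptance tests -->
<plugin>
  <groupId>org.codehaus.cargo</groupId>
  <artifactId>cargo-maven2-plugin</artifactId>
  <configuration>
    <container>
      <containerId>tomcat7x</containerId>
      <type>remote</type>
    </container>
    <configuration>
      <type>runtime</type>
      <properties>
        <cargo.remote.username>admin</cargo.remote.username>
        <cargo.remote.password>secret</cargo.remote.password>
        <cargo.tomcat.manager.url>http://localhost:8080/manager</cargo.tomcat.manager.url>
      </properties>
    </configuration>
  </configuration>
  <executions>
    <execution>
      <id>deploy-webapp</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>redeploy</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Binding the deployment to pre-integration-test keeps the whole deploy-then-test sequence inside a single mvn verify invocation, which is exactly what the Jenkins job needs.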

... TO BE CONTINUED ...