Replacing Hard Coded Processes Using Activiti


Posted on March 15, 2011, in Common.

A few weeks ago I was put in charge of replacing a piece of code in a software product. The code in question was the implementation of several business processes. The hard-wired nature of those processes made it difficult for consultants to customize the software to customers' requirements. So the mission was to integrate a workflow engine with separate workflow definitions for each process [1].

The heart of the old workflow implementation was an EJB 2.0 message-driven bean that looked something like this:

public class MyMessageDrivenBean {
  private static final String JMS_TYPE_MONDAY = "monday";
  private static final String JMS_TYPE_TUESDAY = "tuesday";
  // [...] other weekdays go here

  public void onMessage( Message message ) throws JMSException {
    String jmsType = message.getJMSType();
    if( JMS_TYPE_MONDAY.equals( jmsType ) ) {
      doMondayWorkflow();
    } else if( JMS_TYPE_TUESDAY.equals( jmsType ) ) {
      doTuesdayWorkflow();
    // [...] other weekdays go here
    }
  }

  private void doMondayWorkflow() {
    // here goes the important stuff that has to be done on mondays
  }

  private void doTuesdayWorkflow() {
    // here goes the important stuff that has to be done on tuesdays
  }
  // [...] other weekdays handler methods go here
}

As you can see, the message-driven bean worked as a central hub. Depending on the value of the message argument, it selected one of the workflow implementations. In the example above those workflows are represented by the methods following the doMondayWorkflow/doTuesdayWorkflow naming pattern. The methods themselves were somewhat unstructured tapeworms that used a Spring application context to reference collaborator beans. As the message-driven bean was managed by an EJB container, the collaborators could not be injected by Spring. Hence access to the Spring application context was provided by a utility class holding the context in a class variable.

  ApplicationContext context = AppContext.getApplicationContext();
  MyBean myBean = context.getBean( "myBean", MyBean.class );
  myBean.doSomethingImportant();
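
To make the pattern explicit, here is a framework-free sketch of the idea behind such a holder. BeanRegistry and AppContextHolder are hypothetical names standing in for Spring's ApplicationContext and the article's AppContext utility; the real class simply exposed the Spring-managed context statically to code the container did not manage.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for a bean container; plays the role Spring's
// ApplicationContext plays in the article.
class BeanRegistry {
  private final Map<String, Object> beans = new HashMap<String, Object>();

  <T> void register( String name, T bean ) {
    beans.put( name, bean );
  }

  <T> T getBean( String name, Class<T> type ) {
    // runtime-checked cast, mirroring context.getBean( name, type )
    return type.cast( beans.get( name ) );
  }
}

// Static holder that exposes the registry to non-managed code,
// just like the AppContext utility described above.
class AppContextHolder {
  private static BeanRegistry registry;

  static void setRegistry( BeanRegistry value ) {
    registry = value;
  }

  static BeanRegistry getRegistry() {
    return registry;
  }
}
```

The convenience comes at a price: every class that calls the static lookup is invisibly coupled to the container, which is exactly what made the workflow methods hard to test.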

Besides the Spring beans, the message-driven bean was also coupled to some other classes through static method calls. And those methods in turn referenced beans in the application context in the way described above.

  MyHelper.doSomethingImportant( myImportantParam );

Given this scenario, the decision was made to integrate the Activiti process engine as the new heart of the workflow control. Although it was a very new technology at the time, there were good reasons to choose Activiti: it integrates nicely with Spring, it uses BPMN 2.0 as its process definition language, and it was built by people with a lot of experience in this area.

That was about the time when I got involved to do the actual work. I did not know the system and there were no tests around. So I followed Martin Fowler's rule of "The First Step in Refactoring" [2]: build a solid set of tests for the section of code under rework.

I needed a couple of tests that covered all paths in the given section of code, ran all cases of possible input, and asserted the outcome for each input. Additionally those tests had to run fast, because I intended to run them very often: they were expected to give me early feedback if my changes broke the expected behavior. For this reason I decided to replace all the collaborators, whether Spring beans or static method calls, with stubs to avoid a heavyweight system setup. I also abandoned any container stuff and built a suite of plain and simple JUnit tests for the job.

To be honest, I simplified the stubbing by replacing the calls to static methods with beans that encapsulate those static calls:

public class MyHelperBean {
  public void doSomethingImportant( String myImportantParam ) {
    MyHelper.doSomethingImportant( myImportantParam );
  }
}

Usage:

  ApplicationContext context = AppContext.getApplicationContext();
  MyHelperBean helper = context.getBean( "myHelperBean", MyHelperBean.class );
  helper.doSomethingImportant( myImportantParam );

Now it was possible to double all the collaborators by stubbing the application context and the beans it provided. The stub setup required a certain amount of code that differed only slightly from test to test. Because of this I introduced a fixture class that did the basic setup and provided an API to modify certain stub values. Stubbing itself was done with Mockito for read-only collaborators. Simple value holder implementations were used to verify the outcome [3].

public class Fixture {
  private static final String DEFAULT_MY_BEAN_VALUE = "defaultValue";

  private ApplicationContext context;
  private MyBean myBean;

  public Fixture() {
    this.context = mock( ApplicationContext.class );
    this.myBean = mock( MyBean.class );
    when( context.getBean( "myBean", MyBean.class ) )
      .thenReturn( myBean );
    setMyBeanValue( DEFAULT_MY_BEAN_VALUE );
  }

  public void setMyBeanValue( String value ) {
    when( myBean.getValue() ).thenReturn( value );
  }
}

Note that the code above is only meant to show the basic idea. The real-world fixture is a little more complicated and wraps every bean stub in its own class. Note also that the fixture evolved while writing the tests and was not created in an up-front development effort.

Once the test suite was in place I felt confident enough to fight the tapeworms. Taking Robert C. Martin's advice about functions [4], I broke up the large workflow methods into smaller ones, each of them doing only one thing. Soon I got to the point where it was possible to move methods that belonged together into separate classes. After a while the workflow methods looked something like this:

  private void doMondayWorkflow() {
    Mood mood = getBean( "mood", Mood.class );
    Mailer mailer = getBean( "mailer", Mailer.class );

    if( mood.isTooLazy() ) {
      mailer.sendExcuses();
    }
  }

Before we lose track, remember that I did not practice this as an end in itself. Instead I was on my way to replace the workflow methods with BPMN 2.0 process definitions. The method in our example above can be expressed as a BPMN 2.0 process definition.
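
In XML, such a definition might look roughly like this. This is a hedged sketch rather than the article's original listing: the process id "monday" matches the key started elsewhere in this post, but the element ids and namespace details are my assumptions. Note the UEL expressions ${mood.tooLazy} and ${mailer.sendExcuses()} that reference the Spring beans:

```xml
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             xmlns:activiti="http://activiti.org/bpmn"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             targetNamespace="http://www.codeaffine.com/examples">

  <process id="monday" name="Monday Workflow">
    <startEvent id="start" />
    <sequenceFlow id="flow1" sourceRef="start" targetRef="lazyGateway" />

    <!-- branch on the mood bean via UEL -->
    <exclusiveGateway id="lazyGateway" />
    <sequenceFlow id="flow2" sourceRef="lazyGateway" targetRef="sendExcuses">
      <conditionExpression xsi:type="tFormalExpression">${mood.tooLazy}</conditionExpression>
    </sequenceFlow>
    <sequenceFlow id="flow3" sourceRef="lazyGateway" targetRef="end">
      <conditionExpression xsi:type="tFormalExpression">${!mood.tooLazy}</conditionExpression>
    </sequenceFlow>

    <!-- delegate to the mailer bean via UEL -->
    <serviceTask id="sendExcuses" activiti:expression="${mailer.sendExcuses()}" />
    <sequenceFlow id="flow4" sourceRef="sendExcuses" targetRef="end" />

    <endEvent id="end" />
  </process>
</definitions>
```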

Please have a look at the UEL expressions in the XML definition. They show why I provided the refactoring results (Mood and Mailer) as Spring beans: Activiti allows you to access Spring beans within a workflow definition using UEL.

The next step was to integrate Activiti so that it could execute our new workflow definitions. To do so I replaced the hub functionality in the message-driven bean. The replacement selects a workflow id instead of delegating to one of the old methods and hands that id over to Activiti, which executes the corresponding workflow.

public void onMessage( Message message ) throws JMSException {
  String workflowId = message.getJMSType();
  RuntimeService runtimeService
    = getBean( "runtimeService", RuntimeService.class );
  runtimeService.startProcessInstanceByKey( workflowId );
}

As you have probably guessed, the real implementation does some more sophisticated stuff, but I think you get the idea. Note how the RuntimeService instance of the Activiti engine is provided as a Spring bean. In fact you can configure the engine completely with Spring.
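
To illustrate, a Spring configuration along those lines might look as follows. This is a sketch based on Activiti's Spring integration classes; the dataSource and transactionManager beans are assumed to be defined elsewhere in the configuration:

```xml
<!-- the process engine configuration, wired as a Spring bean -->
<bean id="processEngineConfiguration"
      class="org.activiti.spring.SpringProcessEngineConfiguration">
  <property name="dataSource" ref="dataSource" />
  <property name="transactionManager" ref="transactionManager" />
  <property name="databaseSchemaUpdate" value="true" />
</bean>

<bean id="processEngine" class="org.activiti.spring.ProcessEngineFactoryBean">
  <property name="processEngineConfiguration" ref="processEngineConfiguration" />
</bean>

<!-- expose the engine services, e.g. the RuntimeService used above -->
<bean id="runtimeService" factory-bean="processEngine" factory-method="getRuntimeService" />
```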

Once I had the workflow engine in place I wrote BPMN 2.0 definitions as replacements for the existing workflow methods. I did this stepwise, one workflow at a time of course. The trouble was that my tests were written to run without the Spring container, so they did not work anymore. But the tests had evolved during the refactorings just as the code did. Extracting the new classes had also led to new unit tests that verified the functionality of those classes. So in fact only a few tests failed: those that tested the control flow of the workflows.

To get the control flow tests up and running again I had to start up Spring and Activiti for those tests. The following code shows the approach:

@RunWith( SpringJUnit4ClassRunner.class )
@ContextConfiguration( locations = {
  "workflow.test.context.xml"
} )
public class MyMondayWorkflowTest {
  @Autowired
  public RuntimeService runtimeService;
  @Autowired
  public ApplicationContext applicationContext;
  @Rule
  @Autowired
  public ActivitiRule activitiRule;

  @Test
  @Deployment( resources = {
    "biz/facon/activiti/workflow/example/monday.bpmn20.xml"
  } )
  public void testTooLazy() {
    Fixture fixture = new Fixture( applicationContext );
    fixture.getMood().setTooLazy( true );

    runtimeService.startProcessInstanceByKey( "monday" );

    assertTrue( fixture.getMailer().isUsed() );
  }

  @Test
  @Deployment( resources = {
    "biz/facon/activiti/workflow/example/monday.bpmn20.xml"
  } )
  public void testNotTooLazy() {
    Fixture fixture = new Fixture( applicationContext );
    fixture.getMood().setTooLazy( false );

    runtimeService.startProcessInstanceByKey( "monday" );

    assertFalse( fixture.getMailer().isUsed() );
  }
}

OK, that's quite a bit of code. For those who are interested in but not familiar with the Spring annotations used in the example above, I recommend the testing chapter of the Spring documentation. The same goes for the documentation of the Activiti-related annotations. For this post it is enough to understand that they ease the task of setting up the Spring container, the Activiti workflow engine, the database, and the handling of the workflow definition deployment.

The interesting part from our point of view is something else. I had to adapt the fixture implementation a little to make it work together with the Spring container. Remember that the fixture stubs an application context with bean doubles provided by Mockito. But now I had a real application context and had to get the bean stubs registered with that context. I solved this problem with the @Configuration and @Bean annotations provided by Spring. I modified the fixture code from above to show the basic idea.

@Configuration
public class Fixture {
  private static final String DEFAULT_MY_BEAN_VALUE = "defaultValue";

  private ApplicationContext context;
  private MyBean myBean;

  public Fixture() {
    this( null );
  }
  
  public Fixture( ApplicationContext applicationContext ) {
    initContext( applicationContext );
    myBean = registerBean( "myBean", myBean(), MyBean.class );
    setMyBeanValue( DEFAULT_MY_BEAN_VALUE );
  }

  @Bean
  public MyBean myBean() {
    return mock( MyBean.class );
  }
  
  public void setMyBeanValue( String value ) {
    when( myBean.getValue() ).thenReturn( value );
  }

  private void initContext( ApplicationContext applicationContext ) {
    context = applicationContext;
    if( context == null ) {
      context = mock( ApplicationContext.class );
    }
  }

  private <T> T registerBean( String name, T bean, Class<T> type ) {
    T result = getBean( name, type );
    if( result == null ) {
      result = bean;
      when( context.getBean( name, type ) ).thenReturn( result );
    } else {
      reset( result );
    }
    return result;
  }

  private <T> T getBean( String name, Class<T> type ) {
    return context.getBean( name, type );
  }
}

Enabling classpath scanning in the Spring configuration of the test above lets the container find the Fixture class annotated with @Configuration. Spring uses such configuration classes as bean factories: it registers the objects returned by the methods annotated with @Bean with the application context. The fixture instance used in the test case above takes such an application context as a constructor parameter. Instead of creating the stubs itself, the fixture uses those provided by the context. As the application context survives across all tests as a performance optimization, it is important to reset the bean stubs during fixture initialization.
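
The scanning setup in workflow.test.context.xml can be as simple as a component-scan element. A hedged sketch, reusing the package from the deployment resources above as a plausible base package:

```xml
<!-- lets Spring discover @Configuration classes such as Fixture -->
<context:component-scan base-package="biz.facon.activiti.workflow.example" />
```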

Conclusion
After all this work, folks now have a piece of software doing the same thing as before – sounds a little bit weird, doesn't it? :-)

But remember the mission statement given in the opening paragraph. With the new implementation they now have separate workflow definitions that can easily be customized. But in my opinion they got even more.

Because of the comprehensive test coverage they are now able to ensure during further development that the workflows actually do what they are supposed to do, without a lot of costly manual checks. They also got a test fixture that can easily be used to test the customized workflows as well. This can save a lot of time when checking whether an update of the core software is still compatible with the customizations.

So the investment did not only make the system more flexible; it also increased code quality and development speed.


  [1] I use the term process in the context of non-technical functional requirements and the term workflow in the context of technical solutions.
  [2] Refactoring: Improving the Design of Existing Code, ISBN-10: 0201485672, ISBN-13: 978-0201485677.
  [3] Following the conventions Martin Fowler uses in his article "Mocks Aren't Stubs", I use the term stub because the tests do state verification.
  [4] Clean Code, Chapter 3, ISBN-10: 0-13-235088-2, ISBN-13: 978-0-13-235088-4.
Frank Appel

Frank is a stalwart of agile methods and test-driven development in particular. He understands software development as a craft based on a well-balanced mix of knowledge and day-to-day experience.

fappel@codeaffine.com