BPM Error Handling Best Practice

Creating business processes with a BPM (in my case Bonitasoft BPM), we had the problem of handling failures the right way. At first we tried to catch all errors and handle them with an End/Terminate event. Looking back, that was an odd way to process exception states.

The focus should be on maintenance and, above all, on the customers using the system. Customers don’t want to reinitialize a process every time an error occurs. They want the maintainer to fix it and let the process continue. Maintainers don’t want to spend much effort on running processes.

Let me show two very common scenarios that happen in real life:

  1. All processes are based on tasks whose operations work over the network, e.g. sending mails or accessing a database. A loss of network connectivity (maybe only in one segment) will cause a lot of tasks to fail and trigger the error handling.
  2. A user creates a process instance with data that is simply wrong, but the data can only be validated later in the flow, for example a wrong customer or contract id.

The first case shows a technical problem. It should be fixed by the administrators, and then the processes should be restarted to do their work. It’s a technical failure.

The second case shows a business problem. We got wrong information from the user. A task will fail and cannot be completed, no matter how often it is retried. In this case the task’s error handler path must be followed. And important: different kinds of incidents can occur.

To implement this concept we changed the definition of automatic tasks with connectors. A connector should throw an exception if something went wrong technically. This exception should force the task to change its status to ‘failed’. In this case we can retry the task once the technical problem is solved.
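As a sketch of this idea, a connector could look like the following (assuming the Bonita connector API with AbstractConnector and ConnectorException; the class name and mail logic are illustrative):

import org.bonitasoft.engine.connector.AbstractConnector;
import org.bonitasoft.engine.connector.ConnectorException;
import org.bonitasoft.engine.connector.ConnectorValidationException;

public class SendMailConnector extends AbstractConnector {

  @Override
  public void validateInputParameters() throws ConnectorValidationException {
    // validate input parameters here if needed
  }

  @Override
  protected void executeBusinessLogic() throws ConnectorException {
    try {
      sendMail(); // placeholder for the real network operation
    } catch (Exception e) {
      // technical failure: let the exception escape so the task
      // changes its status to 'failed' and can be retried later
      throw new ConnectorException("mail server not reachable", e);
    }
  }

  private void sendMail() throws Exception {
    // ... send the mail over the network ...
  }
}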

Next we check the return values of the connector. If something went wrong on the business side, the returned data should carry that information, like “returncode=-5” or empty values such as “customerId=”. Small post-processor scripts can check the data and fire error events to jump into another part of the process.

The following example shows the behavior. I used a manual task to insert the ‘returned data’

[Figure: Bonita_BPM]

and validate it with post processors

[Figure: Bonita_BPM_Error_post]

The processors are very simple, e.g. ‘checkStatusNo’:

// a status of "no" is a business failure: the thrown exception
// triggers the error handler attached to the task
if (status.equals("no"))
  throw new Exception("status is no");

The ‘checkStatusError’ script instead changes the status of the task to ‘failed’. That is the situation of a technical failure.
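A ‘checkStatusError’ sketch could look much the same (hypothetical; per the description above, its exception ends in the ‘failed’ state instead of firing an error event):

// hypothetical 'checkStatusError': signals a technical failure so the
// task ends in the 'failed' state and can be retried later
if (status.equals("error"))
  throw new Exception("technical failure: status is error");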

This way processes are more robust, customers are happier and the visual representation is clearer. It’s a win-win situation 😉

Download the sample process.


Migrate from Karaf 3 to 4, Part 2

While migrating from Karaf 3 to 4 another funny thing happened: all my JDBC datasources, configured in the deploy folder, were gone. At first I was quite hysterical, because we wanted to migrate the production environment within the next few days. But in the next moment I realized that we had run all the test cases without any impact.

Playing around, and in the end a deeper look into the Karaf sources, showed me the solution. The new commands provided by Karaf use a more complex query and filter to find JDBC datasources. The new command jdbc:ds-list needs a property ‘dataSourceName’ to be defined on the service before it shows the datasource in the list. The datasource itself was present like before, just not shown.

First I reimplemented the old command jdbc:datasources to show all the datasources present as they are, by implemented interface (mhus-osgi-tools). Then I changed all the blueprint XML files and appended the required property

<entry key="dataSourceName" value="${name}"/>

to be compatible with the new JDBC commands of Karaf.

Shell: Simple bundle watch list

Creating a watch list can be laborious (what a word :-/). Therefore shell scripting can help a lot.

The first sample shows how to grep a list of interesting bundles to watch. In my case it’s all mhu-lib bundles (add ‘--color never’ to avoid ANSI escape sequences that would disturb further processing):

karaf@root()> bundle:list|grep --color never mhu-lib
 89 | Active |  80 | 3.3.0.SNAPSHOT     | mhu-lib-annotations
 90 | Active |  80 | 3.3.0.SNAPSHOT     | mhu-lib-core
 91 | Active |  80 | 3.3.0.SNAPSHOT     | mhu-lib-jms
 92 | Active |  80 | 3.3.0.SNAPSHOT     | mhu-lib-karaf
 93 | Active |  80 | 3.3.0.SNAPSHOT     | mhu-lib-persistence
karaf@root()>

I only need the bundle names, so cut the last column out of the result:

karaf@root()> bundle:list|grep --color never mhu-lib|cut -d '\|' -f 4 -t
mhu-lib-annotations
mhu-lib-core
mhu-lib-jms
mhu-lib-karaf
mhu-lib-persistence
karaf@root()>

Now we need to parse it line by line; a loop helps here. The results are used to add each bundle to the bundle:watch list:

bundle:list|grep --color never mhu-lib|cut -d '\|' -f 4 -t|run -c "for b in read *;bundle:watch \$b;done"

The ‘read *’ command reads everything from the pipe, and the for loop cuts it into lines and runs the loop body for every entry. The line content is stored in ‘b’. To stop the outer shell from replacing ‘$b’ immediately (it should be resolved later, inside the loop) you need to escape the ‘$’ character.

If you want a persistent bundle watch, use the ‘mhu osgi tool’ called ‘bundle:persistentwatch’. You need to add the entries to the persistent list:

bundle:list|grep --color never mhu-lib|cut -d '\|' -f 4 -t|run -c "for b in read *;bundle:persistentwatch add \$b;done"

Print the list using ‘list’:

karaf@root()> bundle:persistentwatch list
Bundle             
-------------------
mhu-lib-annotations
mhu-lib-core       
mhu-lib-jms        
mhu-lib-karaf      
mhu-lib-persistence


Karaf: Scheduling GoGo Commands Via Blueprint

A new feature in mhu-lib 3.3 is the Karaf scheduling service. The service is designed to be configured via blueprint and executes gogo shell commands. This way you can automate all regular maintenance tasks.

Use this sample blueprint to print a hello world every 2 minutes:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <bean id="cmd" 
          class="de.mhus.lib.karaf.services.ScheduleGogo" 
          init-method="init" destroy-method="destroy">
      <property name="name" value="cmd_hello"/>
      <property name="interval" value="*/2 * * * *"/>
      <property name="command" value="echo 'hello world!'"/>
      <property name="timerFactory" ref="TimerFactoryRef" />
    </bean>
    <reference
       id="TimerFactoryRef" 
       interface="de.mhus.lib.core.util.TimerFactory" />
    <service 
      interface="de.mhus.lib.karaf.services.SimpleServiceIfc" 
      ref="cmd">
        <service-properties>
            <entry key="osgi.jndi.service.name" value="cmd_hello"/>
        </service-properties>
    </service>
</blueprint>

Migrate shell commands from Karaf 3 to Karaf 4

Today the migration from Karaf 3 to version 4 brought some interesting new effects. One of them is source code full of yellow ‘blinking’ warnings wherever shell commands are implemented.

It looks like all the shell interfaces from version 3 are deprecated now. The reason is that the developers no longer want commands to be defined via blueprint definition files in the OSGI-INF folder. To establish the new way, a new interface was created and is now the focus.

To use the new interface you first have to change the Maven configuration of your project. Add the following properties:

 <felix.plugin.version>3.0.1</felix.plugin.version>
 <maven.version>2.0.9</maven.version>

And the following parts inside your main pom.xml:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.felix</groupId>
      <artifactId>maven-bundle-plugin</artifactId>
      <version>${felix.plugin.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.maven</groupId>
      <artifactId>maven-plugin-api</artifactId>
      <version>${maven.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.apache.karaf.tooling</groupId>
      <artifactId>karaf-services-maven-plugin</artifactId>
      <version>${karaf.version}</version>
      <executions>
        <execution>
          <id>service-metadata-generate</id>
          <phase>process-classes</phase>
          <goals>
            <goal>service-metadata-generate</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</pluginManagement>

Now you need to add the following build instruction to every sub project, in the build/plugins part of its pom.xml:

<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-services-maven-plugin</artifactId>
</plugin>

That was the basic configuration to instruct Maven to build everything right. Now you can remove the old blueprint.xml files, because they are no longer in use.

To create or update a command, add the following imports:

import org.apache.karaf.shell.api.action.Action;
import org.apache.karaf.shell.api.action.Argument;
import org.apache.karaf.shell.api.action.Command;
import org.apache.karaf.shell.api.action.Option;
import org.apache.karaf.shell.api.action.lifecycle.Reference;
import org.apache.karaf.shell.api.action.lifecycle.Service;
import org.apache.karaf.shell.api.console.Session;

Mark the class as a service and implement ‘Action’:

@Command(scope = "test", name = "cmd", description = "Test Command")
@Service
public class CmdTest implements Action {

The old interface had a method ‘execute(Session)’, but the new one has only ‘execute()’. The parameter is gone. To get access to the session you need to add a reference variable like this:

@Reference
private Session session;
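Putting it all together, a minimal command class could look like this sketch (the console output is just an illustration):

import org.apache.karaf.shell.api.action.Action;
import org.apache.karaf.shell.api.action.Command;
import org.apache.karaf.shell.api.action.lifecycle.Reference;
import org.apache.karaf.shell.api.action.lifecycle.Service;
import org.apache.karaf.shell.api.console.Session;

@Command(scope = "test", name = "cmd", description = "Test Command")
@Service
public class CmdTest implements Action {

  // the session is injected by the shell framework
  @Reference
  private Session session;

  @Override
  public Object execute() throws Exception {
    session.getConsole().println("hello from test:cmd");
    return null;
  }
}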

After building and deploying into the Karaf engine, the command is available as usual.

POJO handling with mhu-lib

mhu-lib brings a full featured POJO handler. The framework is able to parse POJO objects and uses the discovered attributes to get and set values. There is also a toolset to transform JSON or XML structures directly into/from POJO objects. It’s all located in the package ‘de.mhus.lib.core.pojo’.

The base class is the PojoParser. It does not do much work itself, but it brings everything together. The first important choice is how to parse. By default, parse strategies are implemented that look for attributes (AttributesStrategy) or functions (FunctionsStrategy). The default strategy (DefaultStrategy) combines both, but it’s possible to change the strategy object of the parser. Strategies also look for the @Embedded annotation and parse deeper inside such attributes. Important: the attribute-based strategy is also able to access ‘private’ declared fields! There is no need to declare them all ‘public’.

The strategy creates a PojoModel which can be manipulated by filters. The default filter (DefaultFilter) removes @Hidden tagged attributes. The resulting model allows the coder to access the attributes.


An example POJO:

public class MyPojo {
  private String name;
  private long id;
  private String displayName;

  public String getName() {
    return name;
  }
  public String getDisplayName() {
    return displayName;
  }
  public long getId() {
    return id;
  }
}

Now use the PojoParser to create the PojoModel:

model = new PojoParser().parse(MyPojo.class).getModel();

More complex: use only @Public tagged attributes and concatenate @Embedded attribute names with “_”:

model = new PojoParser().parse(MyPojo.class, "_", new Class[] {Public.class}).filter(true,false,true,false,true).getModel();

To set a value, get the attribute from the model. Identifiers are lower case; to access ‘displayName’ the identifier is ‘displayname’:

model.getAttribute("displayname").set(instance, "this is a sample");
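Reading works the same way through the attribute (assuming a get(...) counterpart to the set(...) call above):

// sketch: read the value back from the instance via the model attribute
Object value = model.getAttribute("displayname").get(instance);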

The framework is a fast, stable and flexible way to work with POJO objects in a uniform way.

Tip: Create the model once and reuse it for all POJOs of the same type.

Parameter Related Classes Tree


In mhu-lib there is a general focus on properties or attribute related objects. The implementation follows the philosophy that most things are attribute related and should be handled the same way. Properties and attributes are handled alike, not because they are the same, but because they show the same behavior.

First of all, the IProperties class (since mhu-lib 3.3 a real interface) defines the basic behavior to set and get different value types. All Java primitives are supported, plus the ‘Object’ type. The default implementation (e.g. MProperties) uses the getObject() variant and casts the object to the requested primitive using the ‘MCast’ utilities. This simple structure is a flat properties store.
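A minimal usage sketch, assuming typed accessors along the lines just described (the exact method names are an assumption for illustration):

// sketch: flat typed property store; typed getters fall back to the
// given default and internally cast via getObject()/MCast
IProperties p = new MProperties();
p.setString("name", "demo");
p.setInt("retries", 3);
int retries = p.getInt("retries", 0);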

The ‘ResourceNode’ and ‘WritableResourceNode’ classes extend the structure into a tree. With ‘getNodes()’, ‘getNode(key)’ and ‘getParent()’ it is possible to traverse through the tree structure.
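For example, a recursive walk over such a tree could look like this sketch (only getNodes() comes from the description above; getName() and the iterable return type are assumptions):

// sketch: print a ResourceNode tree recursively
void dump(ResourceNode node, String indent) throws Exception {
  System.out.println(indent + node.getName());
  for (ResourceNode child : node.getNodes())
    dump(child, indent + "  ");
}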

An interesting extension of ‘WritableResourceNode’ is ‘IConfig’, with a lot of implementations to load configuration information from different types of sources like XML, JSON and properties files, or a memory-only variant.

The ‘CaoNode’ from the Content Access Object implementation is also attribute based. This framework enables common access to different tree based content structures like the filesystem. (It is currently being recreated and not stable; in earlier versions (mhu-lib 2.x) an EMC Documentum and an Adobe AEM/CQ5 driver were also available.)