I’m looking at jBPM specifically, but I think this applies to other systems too.
As I understand it:
When placing an item into a workflow, one generally does not put a reference to that item into the workflow; one puts the whole serialized item or document in. This means the state of the workflow and of the item undergoing it can all be kept in the workflow system's database, making atomic updates easy.
At the end of a workflow, or sometimes at explicit steps along the way, calls can be made to downstream services, and these may change the state of their own databases. How does one handle this?
Is it:
- The workflow and services should all be deployed in a single application server container, and a transaction manager used to keep everything consistent?
- You call a service, then the workflow state is updated, but not in a transaction. Failure and restart of the workflow may cause duplicate calls to be made, if the state update failed.
- You decouple them using a transactional JMS queue. Downstream systems then read from that queue in their own transactions, so updates happen transactionally and in the correct order.
Or something else, or some combination of the above.
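To make the third option concrete, here is a toy sketch of the semantics I mean by a transactional queue. This is not real JMS code; with JMS you would use a transacted session (`connection.createSession(true, Session.SESSION_TRANSACTED)`) and call `session.commit()` or `session.rollback()`, and a broker would handle redelivery. The class below only illustrates the behaviour I'm relying on: a message consumed in a transaction that later rolls back goes straight back to the head of the queue, so nothing is lost and ordering survives a consumer failure and restart.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of a transacted queue consumer (illustration only, not JMS).
class TransactedQueue {
    private final Deque<String> messages = new ArrayDeque<>();
    private String inFlight; // delivered but neither committed nor rolled back

    void send(String msg) {
        messages.addLast(msg);
    }

    String receive() { // begins the consumer's "transaction"
        inFlight = messages.pollFirst();
        return inFlight;
    }

    void commit() { // downstream update succeeded: message is gone for good
        inFlight = null;
    }

    void rollback() { // downstream update failed: redeliver, order intact
        if (inFlight != null) {
            messages.addFirst(inFlight);
        }
        inFlight = null;
    }

    int depth() {
        return messages.size();
    }
}
```

The point is that the downstream system's "read a message, update my database" step becomes atomic from the queue's point of view: if the update fails and the consumer rolls back (or crashes before committing), the broker redelivers the same message rather than losing it or delivering the next one out of order.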
I should add that where I work we have micro-services (whatever that means). In practice it means we do not have an application server, so global transactions across service endpoints are not possible (no WS-TX either; the services are REST/JSON). Is the JMS solution the only one open to us?