Using Java Lambdas to implement a chain processor
Chain of Responsibility is a well-known design pattern; some people hate it,
some people love it [Wikipedia]. I use a variation of the Chain of Responsibility to create a processing
pipeline. In this variation there is still a series of processing objects, but
the key difference is that the command object is passed through the entire chain:
each processor applies its specific logic to the data in the command object and
then passes the command object on to the next processor.
A processor can stop the chain if something has gone wrong, either by
returning false from the interface method or by throwing an exception that
the caller of the chain's execute method can catch. I will go through
this variation step by step.
Step 1: Create the Link Interface
This is the only interface that will be necessary for this variation. The
Link interface is a functional interface and defines two items. The first is the
abstract method that each link of the chain will implement; I like to name
it `execute`, but it can be named anything that makes sense for the
processing pipeline. The second is a `default` method that composes
implementations of the Link interface into a chain.
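A minimal sketch of what this can look like; the boolean return type matches the "return false to stop the chain" behaviour described above, and the `andThen` name for the composing default method is my own placeholder:

```java
import java.util.Objects;

@FunctionalInterface
public interface Link {

    // Each link applies its logic to the shared context and returns
    // true to continue the chain or false to stop it.
    boolean execute(ProcessingContext context);

    // Composes this link with the next one; the resulting Link runs both
    // in order and short-circuits as soon as a link returns false.
    default Link andThen(Link next) {
        Objects.requireNonNull(next);
        return context -> this.execute(context) && next.execute(context);
    }
}
```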
Step 2: Create a processing context
In the above Link.java file, each Link is passed a ProcessingContext object. This is just a
class that extends LinkedHashMap. I like to have a named type for this purpose, but a plain
Java collection could be passed instead.
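A sketch of the context class, assuming String keys and Object values:

```java
import java.util.LinkedHashMap;

// Shared, mutable state that travels from link to link through the pipeline.
public class ProcessingContext extends LinkedHashMap<String, Object> {
}
```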
Step 3: Create a Link implementation
After the Link interface and the processing context are created, it is now a matter of writing
Link implementations that contain the processing logic.
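For example, a link that validates its input before the rest of the chain runs; the key names and validation logic here are illustrative only:

```java
public class ValidateOrderLink implements Link {

    @Override
    public boolean execute(ProcessingContext context) {
        Object amount = context.get("orderAmount");
        if (!(amount instanceof Number) || ((Number) amount).doubleValue() <= 0) {
            // Stop the chain: the order amount is missing or invalid.
            return false;
        }
        context.put("orderValidated", Boolean.TRUE);
        return true;
    }
}
```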
In the above sample, String objects are used as the keys. Normally I would recommend using
an enumeration instead, so that a misspelled key cannot silently return null and cause a
NullPointerException downstream.
Step 4: Chain the Link implementations
Once all the Link implementations have been written, it is time to chain them together into
the pipeline.
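Assuming the sketches above, the wiring might look something like this; the tax lambda and the audit method are illustrative placeholders, but they show that a class, a lambda, and a method reference can all serve as links:

```java
public class OrderPipeline {

    // Any method matching Link's signature can be plugged in as a method reference.
    static boolean audit(ProcessingContext context) {
        System.out.println("Processed keys: " + context.keySet());
        return true;
    }

    public static Link build() {
        return new ValidateOrderLink()                  // a concrete class
                .andThen(context -> {                   // a lambda for simple logic
                    double amount = ((Number) context.get("orderAmount")).doubleValue();
                    context.put("taxAmount", amount * 0.07);
                    return true;
                })
                .andThen(OrderPipeline::audit);         // a method reference
    }
}
```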
Step 5: Execute the pipeline
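Executing the pipeline is then a single call to `execute` on the composed Link; a sketch using the classes above:

```java
public class Main {

    public static void main(String[] args) {
        ProcessingContext context = new ProcessingContext();
        context.put("orderAmount", 100.00);

        boolean completed = OrderPipeline.build().execute(context);

        // false would mean one of the links stopped the chain early.
        System.out.println("Pipeline completed: " + completed);
        System.out.println("Context: " + context);
    }
}
```

And there you have it, a processing pipeline using a functional interface. The nice thing about using
a functional interface is that you do not even need to create concrete implementations; you can use
method references or lambdas if the processing logic is simple enough.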
What I like about this design
- Links can be classes, method references or lambdas
- Easy to isolate logic and adhere to the Single Responsibility Principle
- Unit testing Links is straightforward
- Pipeline logic is laid out in a single place, making it easy to insert or remove links in the pipeline
- Composing links reduces boilerplate compared to the classic approach of wiring each link to the next
What I do not like about this design
- The context object that is passed from Link to Link can feel like a "god" object
- Technically, Link implementations are not "pure" functions, since they mutate the shared context
- If you are not disciplined, the context object can become very messy; prefer putting a few well-named custom objects into it rather than stuffing it full of raw Java types