Continuing my chapter-by-chapter review of A Philosophy of Software Design by John Ousterhout, today I’m going to review Chapter 7.
Chapter 7: Different Layer, Different Abstraction
Right out of the gate we’ve got this great piece of advice:
In a well-designed system, each layer provides a different abstraction from the layers above and below it
Keep this quote in mind; we’ll be coming back to it.
Pass-through methods
A method which passes its parameters down to the next layer without making significant changes or adding value is a “pass-through” method. Pass-throughs are a problem precisely because different layers should have different abstractions.
This typically indicates that there is not a clean division of responsibility between the classes.
Yes, your Single Responsibility Principle senses should be tingling. A single method in a class which passes arguments down without modification might not be a problem. After all, an abstraction means that the underlying implementation is None Of Your Business, and the actual implementation might be a simple pass-through. I like to reiterate this point, because I believe it strongly: Abstraction is not saying “I have some hidden complexity that you don’t know about”; it’s saying “everything on this side of the line is None of Your Business”. The abstracted details might be “We’ve got this complicated algorithm and state”, or they might be “yeah, we’re just flipping a bit” or “we don’t actually need to do anything”. This, I think, is the real beauty of abstraction. If you can make the consumer think you’re doing something important, and you end up doing nothing at all (or nearly nothing), you’ve got a good abstraction and good performance to boot. After all, the fastest calculation is one where you don’t have to calculate anything.
But let’s get back to the point at hand. Pass-through methods do add complexity to the system by having at least two layers which perform only one task. That’s not a great ratio of layers to work, in general. One pass-through method on a complicated interface might not be a real issue, but a class with several of these methods is a good indication that you’ve drawn your abstraction boundaries in the wrong place.
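To make that concrete, here’s a minimal sketch of the kind of thing we’re talking about; the class names here are invented for illustration, not taken from the book:
public class TextBuffer
{
    private readonly StringBuilder _contents = new StringBuilder();

    public void Insert(int position, string text) => _contents.Insert(position, text);
    public void Delete(int position, int length) => _contents.Remove(position, length);
}

public class TextArea
{
    private readonly TextBuffer _buffer = new TextBuffer();

    // Pass-through methods: same signature, same abstraction, no value added
    public void Insert(int position, string text) => _buffer.Insert(position, text);
    public void Delete(int position, int length) => _buffer.Delete(position, length);
}
Every method on TextArea forwards straight down to TextBuffer at the same level of abstraction, which is exactly the smell John is describing.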
Adaptors and Facades seem like they should be immune to this criticism, by design. That is, the intended use of these patterns is to translate data from one form to another, and ideally the translation work is very small. Likewise, a Gateway that passes a request through to a database or a web service may look very much like a pass-through without causing too much of a problem.
Going back to our first quote, passing a list of method parameters through to a nearly-identical method in a lower layer also might not be a problem if the level of abstraction changes. It’s not just about what parameters you are passing and what your method signatures look like; it’s about the semantic meaning of those values. If the semantic meaning changes, and the abstraction changes with it, that seems like a sufficient difference to me.
Consider the example of making a URL shortening service in WebAPI, with a URL like /url/{id} and a controller method like:
[HttpGet("{id}")]
public IHttpActionResult Get(string id)
{
var newUrl = service.GetUrlByKey(id);
return Redirect(newUrl);
}
In our service we have something like:
public string GetUrlByKey(string key)
{
return _urlCache[key];
}
The same value is passed to a method with the same signature, but it has a different semantic meaning. In the first method, the value is the user-facing ID of the URL, while in the second method it is a lookup key for a cache. It could just as easily have been a Primary Key in a database somewhere.
Aside: Parrot’s C API
When I was still working on the Parrot project, one of the features I worked on extensively was the public API for C embedding. The goal was to be able to embed a Parrot interpreter into other applications and be able to make calls from C code into Parrot and vice-versa. The problem is that there is an immediate mismatch between the inside of Parrot (which uses its own Exception-handling mechanism) and the outside of Parrot, where C doesn’t have exceptions and C++ has its own Exception-handling mechanism.
The API layer often passed request parameters more-or-less unmolested into the underlying functions of the interpreter. What changed was the ambient error-handling strategy: The API layer would set up an exception handler and return error information as a false flag to the caller. Outside the API layer, code would operate with the normal C idioms of status flags, but inside the API layer code would operate with Exceptions. While the parameters were technically “pass through”, the abstraction was significantly different: on one side you could safely throw exceptions, and on the other side you couldn’t.
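The real code is C, but the general shape of that translation, sketched here in C# to match the rest of this post (the interpreter field, method names, and exception type are all invented for illustration), is something like:
public bool TryGetGlobal(string name, out string value)
{
    try
    {
        // Inside the boundary, exceptions are the normal error-handling idiom
        value = _interpreter.GetGlobal(name);
        return true;
    }
    catch (InterpreterException)
    {
        // Outside the boundary, callers only ever see a status flag
        value = null;
        return false;
    }
}
The parameters go straight through, but the caller is living in a completely different error-handling world than the code being called.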
Decorators
As promised, John does discuss the Decorator pattern directly (though he doesn’t mention other similarly-afflicted patterns like Facade, Bridge or Adaptor).
decorator classes tend to be shallow: they introduce a large amount of boilerplate for a small amount of new functionality. It’s easy to overuse the Decorator class…
Call it a bit of a heuristic, but I would try never to use the Decorator pattern for classes which have a large public interface. Interface Segregation is your friend here. Smaller public interfaces mean less boilerplate for your Decorators. I’d hate to draw too deep a line in the sand, but I probably wouldn’t use decorators if the interface I was wrapping had more than 4 or 5 public methods exposed. Your mileage may vary.
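To show why the interface size matters, here’s a hypothetical decorator over a one-method interface (the names are invented); the whole cost of the pattern is one constructor and one forwarding method:
public interface IMessageSender
{
    void Send(string message);
}

public class LoggingSender : IMessageSender
{
    private readonly IMessageSender _inner;

    public LoggingSender(IMessageSender inner)
    {
        _inner = inner;
    }

    public void Send(string message)
    {
        // The only new functionality: log, then delegate to the wrapped sender
        Console.WriteLine($"Sending: {message}");
        _inner.Send(message);
    }
}
Multiply that boilerplate by twenty methods and the same pattern starts to feel like a liability.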
Acquaintance makes heavy use of Decorators, but that system is all about composing pipelines. We use a sequence of decorators to add more complex behavior around a messaging pipeline, in a dynamic and composable way. Decorators are a fundamental part of the design, but the system has been designed around that idea from the bottom up. The interfaces which are decorated are intentionally kept small.
Pass-through variables
This next section, I think, is best given in John’s own words:
Another form of API duplication across layers is a pass-through variable, which is a variable that is passed down through a long chain of methods.
The solution I use most often is to introduce a context object… A context stores all of the application’s global state…
Contexts are far from an ideal solution
Unfortunately, I haven’t found a better solution than contexts
Aside: C2
First, as an aside, let’s try to be precise about what we are talking about. C2 defines the Context Object like this:
A context object encapsulates the references/pointers to services and configuration information used/needed by other objects.
That is, C2 sees the Context Object as being the context of the entire program at the time an operation is performed, sort of like a Service Locator but with stronger typing. Then, this definition in hand, C2 tells us that these Context Objects are Evil. I don’t disagree. By this definition, a Context Object is essentially a Service Locator with a slightly different usage pattern, and it is a bad idea. This isn’t what I think of when I talk about Context objects, and I don’t think it’s what John is talking about either.
When I talk about a Context object, I’m not talking about “the execution context of the application, as viewed by my object looking up”, I’m talking instead about “the context of a single operation, as viewed by the operation orchestrator looking down”. The Context object in my formulation (and, if there is severe confusion, perhaps we need a better name) contains the mutable state of a single operation to help with orchestration between the objects to which parts of the operation are delegated.
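A rough sketch of the difference, with invented names and types:
// C2's definition: the program's services and configuration, passed down so
// objects can look things up -- effectively a Service Locator.
public class ApplicationContext
{
    public IDatabase Database { get; set; }
    public ILogger Logger { get; set; }
    public ISettings Settings { get; set; }
}

// My definition: the mutable state of one operation, owned by the
// orchestrator and handed down to the steps it delegates to.
public class PlaceOrderContext
{
    public int OrderId { get; }
    public Order Order { get; set; }
    public bool PaymentCaptured { get; set; }

    public PlaceOrderContext(int orderId)
    {
        OrderId = orderId;
    }
}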
In his brief description, John seems to be talking more about my definition and less about the definition described on C2.
Context Objects
I’m in the same boat as John: I’ve never found a better solution to the problem than a context object, and I don’t feel like context objects are that great a solution. Context objects do offer a few advantages:
- They hoard all the mutable state, allowing many of your interactor and utility classes to be immutable and reusable without clobbering state
- They provide a common “language” for multiple interactor and utility classes on the same layer to speak
- They allow the easy extension of an operation without needing to modify existing steps
- They simplify the orchestration of a complex operation by allowing all the variables of state and all the parameter lists of delegated methods to be simple and standard.
A pattern that I’ve used pretty frequently is for a Service Layer method to create a Context for an operation and pass that context into an ordered list of steps in lower-level interactor objects. At the end of the service method we can .Commit() any pending operations or Units of Work and return our results. I have found this technique to be powerful, at least from the point of view of the service. The Context object, on the other hand, often becomes a dumping ground for unrelated bits of mutable state, and I haven’t quite figured out a better way to organize that state. In the end, just having enough discipline to not pass around things you don’t really need is the best piece of advice I can offer here. Consider this kind of code example for a service method which updates a product in an ecommerce site:
public void UpdateProduct(int id, Action<Product> update)
{
var context = new UpdateProductContext(id);
var operation = new UpdateProductOperation();
// Get the product by ID from the database
operation.LoadProductOrThrow(id, context);
// Update the loaded product model and stage changes in our Unit of Work
operation.StageDbChanges(update, context);
// With no exceptions thrown, we can commit our Unit of Work
operation.CommitDbChanges(context);
// Publish events so other applications can be aware of this:
operation.PublishEvents(context);
}
The Operation class contains named methods for each step in the sequence, and the Service method executes these steps in order, sort of like a Transaction Script. The Operation class holds all the logic for the complex operation and organizes each step as a single public method with a distinct name. The Context holds the mutable state, which means the Operation object itself is easily reusable and should be thread-safe. Further, it’s very easy to rearrange these steps to perform variant operations. If we want to validate an update, we could have a method which calls the load and stage methods but not .CommitDbChanges() and .PublishEvents(). Or, if we are worried about downstream systems becoming out of sync, we could have a method which only calls .PublishEvents() to make sure downstream event consumers are triggered. When I need to add new steps to the operation, such as a step to update a secondary data cache or perform additional validation, I add new public methods to the Operation object and call them where necessary. It’s not a perfect pattern and I wouldn’t use all this ceremony for very simple operations, but I’ve found it to be an extremely effective way of organizing the various steps in large operations.
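To fill in the rest of the picture, here is a rough sketch of what the Context and Operation classes behind that service method could look like. The repository, event bus, and exception types are invented for illustration, and while the service example above news up the operation with no arguments, here I’ll assume the dependencies come in through the constructor:
public class UpdateProductContext
{
    public int ProductId { get; }
    public Product Product { get; set; }
    public IUnitOfWork UnitOfWork { get; set; }

    public UpdateProductContext(int productId)
    {
        ProductId = productId;
    }
}

public class UpdateProductOperation
{
    private readonly IProductRepository _repository;
    private readonly IEventBus _bus;

    public UpdateProductOperation(IProductRepository repository, IEventBus bus)
    {
        _repository = repository;
        _bus = bus;
    }

    // Each step reads what it needs from the context and writes its results
    // back, so the Operation object itself holds no per-operation state.
    public void LoadProductOrThrow(int id, UpdateProductContext context)
    {
        context.Product = _repository.Get(id)
            ?? throw new ProductNotFoundException(id);
    }

    public void StageDbChanges(Action<Product> update, UpdateProductContext context)
    {
        update(context.Product);
        context.UnitOfWork = _repository.StageUpdate(context.Product);
    }

    public void CommitDbChanges(UpdateProductContext context)
    {
        context.UnitOfWork.Commit();
    }

    public void PublishEvents(UpdateProductContext context)
    {
        _bus.Publish(new ProductUpdatedEvent(context.ProductId));
    }
}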
I’ve experimented with callback lambdas to replace contexts and seen some interesting results, though ultimately I never thought they were superior to a context object in terms of either code simplicity or system performance.
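One shape that idea can take, simplified and with invented method signatures, is to thread results through callbacks instead of a shared context:
// Each step hands its result to the next step via a callback, so there is no
// shared context object; the operation's state lives in the closures instead.
public void UpdateProduct(int id, Action<Product> update)
{
    operation.LoadProduct(id, product =>
        operation.StageDbChanges(product, update, unitOfWork =>
        {
            unitOfWork.Commit();
            operation.PublishEvents(id);
        }));
}
The nesting tends to grow along with the operation, which hints at why this approach doesn’t obviously beat a context object for simplicity.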
Aside: Best format for the job
One piece of advice I was given early in my career which I have found very effective is this:
Convert your data into a format which is easy to work with, do your work, and then translate it back.
Often we get data input in the form of an array, but our operation would prefer it organized as a queue or a hashmap. Instead of working with the data in its native format, transform it into a format where the work is easy. Then, when your work is complete, transform it back into the format expected by the caller. Embrace the Mapper pattern: it allows you to change the representation of data (and the level of abstraction) in a repeatable and testable way so you can work with it.
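As a small, invented example of that advice: suppose order lines arrive as an array, but the consolidation work is much easier against a dictionary keyed by product ID.
public class OrderLine
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

public OrderLine[] ConsolidateLines(OrderLine[] lines)
{
    // Convert the input into a format that makes the work easy
    var byProduct = new Dictionary<int, OrderLine>();
    foreach (var line in lines)
    {
        if (byProduct.TryGetValue(line.ProductId, out var existing))
            existing.Quantity += line.Quantity;
        else
            byProduct[line.ProductId] = new OrderLine { ProductId = line.ProductId, Quantity = line.Quantity };
    }

    // Do the work, then convert back to the format the caller expects
    return byProduct.Values.ToArray();
}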
Convert your brain-dead DTOs from the wire into a Domain Request, pass that to your Services and Domain Models, and then convert your Domain Response back into a brain-dead DTO for sending back over the wire. Do this anywhere in your application where the abstraction needs to change significantly to make your work easier and your algorithms simpler. Embrace the Mapper.
Conclusion
Methods add complexity, and every method needs to add some kind of value to justify its own existence. Often this value should come in the form of a change of abstraction: either the method should change the form of the data, or at least it should change the semantic meaning of the data or the context in which the data is viewed.