Clean Architecture and the SOLID Principles

Jero
13 min read · Jun 4, 2023

The following is a set of personal notes about the book Clean Architecture by Robert C. Martin, along with my own thoughts on it:

The book starts by talking about the importance of having a well-defined structure for a system's architecture, and mainly about how architecture is a matter of making decisions. These decisions are based on business requirements, while also taking hardware limitations into consideration. Here is a set of quotes:

Architecture represents the significant design decisions that shape a system, where significant is measured by cost of change. — Grady Booch

Architecture is the decisions that you wish you could get right early in a project, but that you are not necessarily more likely to get them right than any other. — Ralph Johnson

A good architecture comes from understanding it more as a journey than as a destination, more as an ongoing process of enquiry than as a frozen artifact.

The only way to go fast, is to go well. — Robert C. Martin

Introduction

The introduction explains that making good programs is hard and takes time. But when software is done right, it takes a fraction of the effort to maintain, scale, and grow it, compared with harmful software that becomes a Big Ball of Mud. So the goal of good software design is:

The goal of software architecture is to minimize the human resources required to build and maintain the required system.

The measure of design quality is simply the measure of the effort required to meet the needs of the customer. If that effort is low and stays low throughout the system's lifetime, the design is good. If that effort grows with each new release, the design is bad. It’s as simple as that.

It also mentions some cases in which a company's productivity decreased dramatically because code quality was never taken into consideration: teams just shipped throwaway code to reach the market faster. The fact is that making messes is always slower than staying clean, no matter which time scale you are using.

So, to conclude this chapter: stop thinking like the overconfident Hare and start taking responsibility for the mess you've made.

The only way to go fast, is to go well. — Robert C. Martin

Behavior and Structure

Every software system provides two different values to the stakeholders: behavior and structure.

The urgent is the features themselves, the business requirements. On the other hand, we have the important: the structure, the code conventions, the architectural work, which determines how we organize those behaviors in the code. Let's review an interesting matrix:

Eisenhower matrix

So, we need to understand that it's essential to put the important before the urgent. If you only focus on the urgent, the important will keep being delayed until, in the long run, your company is not profitable at all. So the order should be the following:

1. Urgent and important
2. Not urgent and important
3. Urgent and not important
4. Not urgent and not important

So it's really important to take the structure of the code seriously. This task is the responsibility of every software engineer, and mainly of the architects, who create an architecture that allows those features and functions to be easily developed, easily modified, and easily extended.

Paradigm Overview

In this chapter we get a high-level view of the three most important paradigms:

Structured Programming: The first paradigm to be adopted, though not the first to be discovered, was identified by Edsger Wybe Dijkstra in 1968. Dijkstra removed the goto statement and replaced it with `if/then/else` and `do/while/until` constructs.

Structured programming imposes discipline on direct transfer of control.

Object Oriented Programming: The second paradigm to be adopted was actually discovered two years earlier, in 1966, by Ole Johan Dahl and Kristen Nygaard. These two programmers noticed that the function call stack frame in the ALGOL language could be moved to a heap, thereby allowing local variables declared by a function to exist long after the function returned.

Object-oriented programming imposes discipline on indirect transfer of control.

Functional Programming: This was actually the first to be invented, and it is strongly related to the work of Alonzo Church, who invented the λ-calculus in 1936; in 1958 John McCarthy based the Lisp language on it. A foundational notion of the λ-calculus is immutability: there is no assignment statement.

Functional programming imposes discipline upon assignment.

Tests and Conclusion

Dijkstra once said, “Testing shows the presence, not the absence, of bugs.”

All that tests can do, after sufficient testing effort, is allow us to deem a program to be correct enough for our purposes.

If such tests fail to prove incorrectness, then we deem the functions to be correct enough for our purposes.

Software development is not a mathematical endeavor, even though it seems to manipulate mathematical constructs. Rather, software is like a science. We show correctness by failing to prove incorrectness, despite our best efforts.

Software architects strive to define modules, components, and services that are easily falsifiable (testable). To do so, they employ restrictive disciplines similar to structured programming, albeit at a much higher level.

Plugin Architecture and Object Oriented Programming

Why did the UNIX operating system make IO devices plugins? Because we learned, in the late 1950s, that our programs should be device independent. Why? Because we wrote lots of programs that were device dependent, only to discover that we really wanted those programs to do the same job but use a different device.

The plugin architecture was invented to support this kind of IO device independence and has been implemented in almost every operating system since its introduction.

Even so, most programmers did not extend the idea to their own programs, because using pointers to functions was dangerous.

OO allows the plugin architecture to be used anywhere, for anything.

What is OO? There are many opinions and many answers to this question. To the software architect, however, the answer is clear: OO is the ability, through the use of polymorphism, to gain absolute control over every source code dependency in the system. It allows the architect to create a plugin architecture, in which modules that contain high-level policies are independent of modules that contain low-level details. The low-level details are relegated to plugin modules that can be deployed and developed independently from the modules that contain high-level policies.
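To make this concrete, here is a minimal TypeScript sketch of a plugin-style dependency (the names OutputDevice, ConsoleDevice, InMemoryDevice, and printReport are mine, not from the book). The high-level policy depends only on an abstraction, and the low-level devices plug in behind it:

interface OutputDevice {
  write(text: string): void;
}

// High-level policy: knows nothing about where the text ends up.
function printReport(lines: string[], device: OutputDevice): void {
  for (const line of lines) {
    device.write(line);
  }
}

// Low-level plugins: each can be developed and deployed independently.
class ConsoleDevice implements OutputDevice {
  write(text: string): void {
    console.log(text);
  }
}

class InMemoryDevice implements OutputDevice {
  buffer: string[] = [];
  write(text: string): void {
    this.buffer.push(text);
  }
}

// Swapping devices requires no change to printReport at all.
printReport(["line 1", "line 2"], new ConsoleDevice());

Note how the source code dependencies point from the plugins toward the abstraction the policy uses; that is the device independence UNIX taught us, expressed with polymorphism instead of function pointers.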

Functional Programming, Immutability and Plugin Architecture

This section describes how a piece of code implemented in Java (in a mutable fashion) and in Clojure (in an immutable fashion) can produce the same result:

Java:

public class Squint {
  public static void main(String args[]) {
    for (int i = 0; i < 25; i++)
      System.out.println(i * i);
  }
}

Clojure:

(println (take 25 (map (fn [x] (* x x)) (range))))

The whole point of this example is that the Clojure version is fully immutable, and it finishes by saying:

Variables in functional languages do not vary.

And the importance of being immutable is that you no longer have concurrency problems such as race conditions, deadlocks, and so on. The code is much simpler and more predictable.
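As a small illustration (my own sketch, not from the book), compare mutating a shared array with deriving new values; in the immutable style nothing is ever overwritten, so there is nothing for concurrent readers to race over:

// Shared mutable state: results depend on who pushes first.
const mutableSquares: number[] = [];
function addSquare(x: number): void {
  mutableSquares.push(x * x); // mutation
}

// Immutable style: each call derives a new value; the input is never touched.
function withSquare(squares: readonly number[], x: number): readonly number[] {
  return [...squares, x * x];
}

const s1 = withSquare([], 2); // [4]
const s2 = withSquare(s1, 3); // [4, 9]; s1 is still [4]
console.log(s1, s2);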

Segregation of Mutability

Here the book describes how to segregate an application into immutable and mutable parts. Let's take a closer look and compare it with a modern web application built with React & Redux:

Architects would be wise to push as much processing as possible into the immutable components and to drive as much code as possible out of those components that must allow mutation.

Jero’s comment: If we take a closer look, we can see that the architecture in question has an Immutable Component at the top. This component is often called a presentational component or a dumb component in the React world. It is a stateless component that is used for rendering purposes only.

On the other hand, at the bottom of the architecture, we have a Mutable Component, which is also known as a Container Component. This component is stateful and can perform side effects, such as making API calls or updating the state of the application.

Finally, we have the Transactional Memory, which can be compared to the Store in Redux. This memory is updated under very disciplined conditions, such as updating the Store through reducers in Redux. This ensures that the data flow is predictable and consistent throughout the application.

By separating the presentation logic from the state management logic, we can achieve a more modular and maintainable codebase. The Immutable Component can be reused across the application, while the Mutable Component can be customized and composed to fit different use cases. This architecture also enables us to test the presentation logic and the state management logic separately, which makes our tests more focused and easier to write.
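To make the analogy concrete, here is a minimal TypeScript sketch (my own; it imitates the Redux pattern without using the actual Redux library). The pure render function and the pure reducer are the immutable part; the tiny store is the single, disciplined place where state gets replaced:

type State = { count: number };
type Action = { type: "increment" } | { type: "reset" };

// Immutable part: a pure "presentational" function, state in, view out.
function render(state: State): string {
  return `Count: ${state.count}`;
}

// Immutable part: a pure reducer; it returns a new object, never mutates.
function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "increment":
      return { count: state.count + 1 };
    case "reset":
      return { count: 0 };
  }
}

// Mutable part: the one place where state is replaced, and only via the reducer
// (this plays the role of the Redux Store).
function createStore(initial: State) {
  let state = initial;
  return {
    getState: () => state,
    dispatch: (action: Action) => { state = reducer(state, action); },
  };
}

const store = createStore({ count: 0 });
store.dispatch({ type: "increment" });
console.log(render(store.getState())); // "Count: 1"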

Event Sourcing

Event sourcing is based on the idea that the more memory we have, and the faster our machines are, the less we need mutable state. Event sourcing is a strategy wherein we store the transactions, but not the state. When the state is required, we simply apply all the transactions from the beginning of time.

Of course, we can take shortcuts. For example, we can compute and save the state every midnight. Then, when the state information is required, we need only compute the transactions since midnight.

As a consequence, our applications are not CRUD; they are just CR. Also, because neither updates nor deletions occur in the data store, there cannot be any concurrent update issues.

If we have enough storage and enough processor power, we can make our applications entirely immutable—and, therefore, entirely functional.
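Here is a minimal TypeScript sketch of the idea (the bank-account domain and all names are mine, not from the book). The store only ever appends events; state is recomputed by folding over the history, and a snapshot plays the role of the "midnight" shortcut:

// Events are facts: appended, never updated or deleted (CR, not CRUD).
type DomainEvent =
  | { type: "deposited"; amount: number }
  | { type: "withdrawn"; amount: number };

const log: DomainEvent[] = [];

function append(event: DomainEvent): void {
  log.push(event); // the only write the store ever performs
}

// State is not stored; it is derived by folding over the transactions.
function balance(events: readonly DomainEvent[]): number {
  return events.reduce(
    (total, e) => (e.type === "deposited" ? total + e.amount : total - e.amount),
    0
  );
}

append({ type: "deposited", amount: 100 });
append({ type: "withdrawn", amount: 30 });
console.log(balance(log)); // 70

// The "midnight" shortcut: cache a balance, then fold only the newer events.
const snapshot = { balance: balance(log), upTo: log.length };
function currentBalance(): number {
  return snapshot.balance + balance(log.slice(snapshot.upTo));
}
console.log(currentBalance()); // still 70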

Conclusion

To summarize:

• Structured programming is discipline imposed upon direct transfer of control (over unrestrained jumps of control using goto).

• Object-oriented programming is discipline imposed upon indirect transfer of control.

• Functional programming is discipline imposed upon variable assignment.

Design Principles

Let's review the famous SOLID principles from an architecture perspective, because good software systems begin with clean code. On the one hand, if the bricks aren't well made, the architecture of the building doesn't matter much. On the other hand, you can make a substantial mess with well-made bricks. This is where the SOLID principles come in. But before talking about these principles, let's look at an important concept that underlies all of them: cohesion and modular design.

Cohesion and Modular Design

But where did this SOLID concept come from, and who first described it? Well, it was actually a long time ago, well before Uncle Bob's books were published. Here are some excerpts from earlier books and papers related to it:

The optimal modular design is one in which relationships between elements that are not in the same module are minimized. There are two ways to achieve this: minimize relationships between modules and maximize relationships between elements in the same module. These two methods are complementary and are used together.

Element in this sense means any form of a module part, such as a declaration, a segment, or a subfunction. Any program has certain relationships between all of its elements. The basic intent of modularity is to organize these items so that closely related items fall into a single module and unrelated items fall into separate modules.

SRP: The Single Responsibility Principle

An active corollary to Conway's law: the best structure for a software system is heavily influenced by the social structure of the organization that uses it, so that each software module has one, and only one, reason to change. When we say "reason", we mean one or more stakeholders who want a change; we call such a group an actor:

A module should be responsible to one, and only one, actor.

Let's review an example of a violation of this principle:

Three different people, belonging to three different teams, consume functions from the same module/class. This can end in a lot of errors and in virtually unnecessary dependencies between teams, where one team changes something and breaks another team's work. So we should try to avoid situations where we put code that different actors depend on into close proximity.

The SRP says to separate the code that different actors depend on.
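A minimal TypeScript sketch in the spirit of the book's Employee example (the class and field names here are mine). Instead of one class answering to three different actors, each actor gets its own module over shared, dumb data:

// Violation sketch: one class serving finance, HR, and the platform team.
class Employee {
  calculatePay(): number { /* rules owned by finance */ return 0; }
  reportHours(): string { /* rules owned by HR */ return ""; }
  save(): void { /* persistence owned by the platform team */ }
}

// Separated: the shared data stays dumb...
interface EmployeeData {
  id: string;
  hourlyRate: number;
  hoursWorked: number;
}

// ...and each actor owns its own module, with its own reason to change.
class PayCalculator {
  calculatePay(e: EmployeeData): number {
    return e.hourlyRate * e.hoursWorked;
  }
}

class HourReporter {
  reportHours(e: EmployeeData): string {
    return `${e.id}: ${e.hoursWorked}h`;
  }
}

class EmployeeRepository {
  save(e: EmployeeData): void {
    console.log(`saving ${e.id}`); // stand-in for real persistence
  }
}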

Jero's comment: In my opinion, in terms of the Single Responsibility Principle (SRP), each object in our model should represent a single entity in our domain, and each object should have a clear role in its context. By doing this, we get more cohesive objects with a single reason to change, which makes our code easier to understand and maintain.

Another important aspect of following the SRP is implementing separation of concerns in our code, to separate the HOW from the WHAT. Let's review a simple example of this separation and what it could look like:

BAD

function checkout(requestOrder: RequestOrder): OrderResponse {
  // Create the order
  const order = {
    name: requestOrder.name,
    items: requestOrder.items.map(item => ...),
    // ...
  };

  // Create the billing
  const billing = {
    total: requestOrder.items.reduce((total, item) => total + item.price, 0),
    date: new Date(),
    // ...
  };

  // Create a delivery note
  const deliveryNote = {
    // ...
  };

  return { order, billing, deliveryNote };
}

GOOD

function checkout(requestOrder: RequestOrder): OrderResponse {
  const order = createOrder(requestOrder);
  const billing = createBilling(order);
  const deliveryNote = createDeliveryNote(order);
  return { order, billing, deliveryNote };
}

In the previous code examples, there's a function called checkout that takes a RequestOrder object as input and returns an OrderResponse object. The first example shows the function doing too much in terms of HOW: it creates the order, the billing, and the delivery note all in one function. This violates the Single Responsibility Principle (SRP), which states that a function should have only one reason to change.

To improve the code, we can extract the sub-steps of the checkout into their own functions. In the second example, we created three functions: createOrder, createBilling, and createDeliveryNote. Each one declares WHAT it does rather than HOW it does it, and the checkout function simply composes them, so it reads as a declarative sequence of steps. By doing this, the checkout function becomes more cohesive and is no longer coupled to the implementation details of the three sub-functions.

If you would like to read more about good and bad design and some principles/heuristics to produce good design, you could read my other article here.

As you can see, the Single Responsibility Principle (SRP) is closely related to the idea of cohesion that we mentioned before.

OCP: The Open-Closed Principle

Bertrand Meyer made this principle famous in the 1980s. The gist is that for software systems to be easy to change, they must be designed to allow the behavior of those systems to be changed by adding new code, rather than changing existing code.

The arrow indicates the direction of the dependencies

In the image above, we see that FinancialReportInteractor does not depend on anyone; it is totally independent of FinancialReportController and even of FinancialDatabase.

This is how the OCP works at the architectural level. Architects separate functionality based on how, why, and when it changes, and then organize that separated functionality into a hierarchy of components. Higher-level components in that hierarchy are protected from the changes made to lower-level components.

Jero's comment: Let's review the original definition of this principle from Bertrand Meyer:

We should analyze the context of this principle. Back in the days of C++ in the '90s, implementing a single change in an existing codebase could take hours of compilation time. In the old days, the metaphor was more like building blocks; there was no fast feedback loop between the specs and the code like we have today. So this principle is strongly related to establishing and following contracts, for example through the use of polymorphism: the code can be extended by implementing a new polymorphic object, which prevents changing existing code and encourages adding new code instead.
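A minimal TypeScript sketch of that idea (the report-formatting scenario and names are mine, not from the book). Behavior is extended by adding a new implementation of a contract; the existing policy code stays untouched:

// The contract the policy is written against.
interface ReportFormatter {
  format(data: number[]): string;
}

// Closed for modification: this never changes when new formats appear.
function printFinancialReport(data: number[], formatter: ReportFormatter): void {
  console.log(formatter.format(data));
}

// Open for extension: new behavior arrives as new code.
class CsvFormatter implements ReportFormatter {
  format(data: number[]): string {
    return data.join(",");
  }
}

class JsonFormatter implements ReportFormatter {
  format(data: number[]): string {
    return JSON.stringify(data);
  }
}

printFinancialReport([1, 2, 3], new CsvFormatter());
printFinancialReport([1, 2, 3], new JsonFormatter());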

LSP: The Liskov Substitution Principle

Barbara Liskov's famous definition of subtypes dates from 1988:

What is wanted here is something like the following substitution property: If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2, then S is a subtype of T.

In short, this principle says that to build software systems from interchangeable parts, those parts must adhere to a contract that allows them to be substituted for one another.

Jero's comment: This is just the principle of least surprise applied to code substitution, and as such, it's quite simple. If I tell you that something is a valid subtype of what you have, then you should be able to assume that it will behave the same way in every respect that matters to you. Again, the use of polymorphic objects is strongly related here.
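The classic Rectangle/Square illustration, sketched here in TypeScript (a standard textbook example; this code is mine), shows the surprise: Square satisfies Rectangle's interface syntactically but breaks its behavioral contract:

class Rectangle {
  constructor(protected width: number, protected height: number) {}
  setWidth(w: number): void { this.width = w; }
  setHeight(h: number): void { this.height = h; }
  area(): number { return this.width * this.height; }
}

// A square keeps both sides equal, which surprises Rectangle's clients.
class Square extends Rectangle {
  setWidth(w: number): void { this.width = this.height = w; }
  setHeight(h: number): void { this.width = this.height = h; }
}

// A program written against Rectangle's contract...
function stretch(r: Rectangle): number {
  r.setWidth(5);
  r.setHeight(2);
  return r.area(); // the contract implies 10
}

console.log(stretch(new Rectangle(1, 1))); // 10
console.log(stretch(new Square(1, 1)));    // 4 — Square is not a valid substitute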

ISP: The Interface Segregation Principle

This principle advises software designers to avoid depending on things that they don’t use.

Consider, for example, an architect working on a system, S. He wants to include a certain framework, F, into the system. Now suppose that the authors of F have bound it to a particular database, D. So S depends on F, which depends on D:

Now suppose that D contains features that F does not use and, therefore, that S does not care about. Changes to those features within D may well force the redeployment of F and, therefore, the redeployment of S. Even worse, a failure of one of the features within D may cause failures in F and S.

The lesson here is that depending on something that carries baggage that you don’t need can cause you troubles that you didn’t expect.

Jero's comment: Here we see again that the concepts of cohesion and coupling are strongly related. You should separate your code so that the elements of one module don't depend on elements of other modules that they don't use. Maximize cohesion, minimize coupling!
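A small TypeScript sketch of the segregation (the printer/scanner scenario is a common illustration; the names are mine). Clients depend only on the narrow interfaces they actually use, so a change to scanning can never force a printing client to redeploy:

// Fat interface: every client would be forced to depend on all three operations.
interface Machine {
  print(doc: string): void;
  scan(doc: string): void;
  fax(doc: string): void;
}

// Segregated interfaces: each client sees only what it needs.
interface Printer {
  print(doc: string): void;
}

interface Scanner {
  scan(doc: string): void;
}

class SimplePrinter implements Printer {
  print(doc: string): void {
    console.log(`printing: ${doc}`);
  }
}

// A multifunction device simply implements several small interfaces.
class MultiFunctionDevice implements Printer, Scanner {
  print(doc: string): void { console.log(`printing: ${doc}`); }
  scan(doc: string): void { console.log(`scanning: ${doc}`); }
}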

DIP: The Dependency Inversion Principle

The code that implements high-level policy should not depend on the code that implements low-level details. Rather, details should depend on policies.

Note that the flow of control crosses the curved line in the opposite direction of the source code dependencies. The source code dependencies are inverted against the flow of control — which is why we refer to this principle as Dependency Inversion.
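A minimal TypeScript sketch (all names are mine, not from the book). At runtime, CheckoutService, the high-level policy, calls "down" into the repository; in the source code, SqlOrderRepository depends "up" on the abstraction that the policy owns, and that is the inversion:

// The high-level policy owns the abstraction...
interface OrderRepository {
  save(orderId: string): void;
}

class CheckoutService {
  constructor(private repo: OrderRepository) {}
  checkout(orderId: string): void {
    // ...business rules would live here...
    this.repo.save(orderId); // flow of control goes toward the detail
  }
}

// ...and the low-level detail depends on that abstraction:
// its source code dependency points against the flow of control.
class SqlOrderRepository implements OrderRepository {
  save(orderId: string): void {
    console.log(`INSERT order ${orderId}`); // stand-in for real SQL
  }
}

new CheckoutService(new SqlOrderRepository()).checkout("42");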

Please share your thoughts about this short article; I love to exchange ideas and learn from others, and I hope this article may be helpful to someone out there!

Also, you can follow me on Twitter, or contact me on LinkedIn.
