Insights into legacy software


Nowadays microservices are so popular that it is inevitable they get used in places where they should not be.

Microservices are a pattern with two main drivers: optimising for team size and for system load. However, they are often adopted out of fear of the monolith. The idea is that the only way to avoid a big ball of mud is to split the system into microservices from the start.


This is not mud, just sand. Photo from personal collection.

At first glance this makes sense. If I know that at some point I will need to split my system anyway, why not skip the monolith phase and jump directly to microservices? Imagine how much time and money I could save.

Unfortunately this is quite problematic. The main issue is that I rarely have enough knowledge about the product when I start building it. At that point I need to be able to change it quickly, and this is exactly what microservices do not enable when I have a small team. Yes, for a big organisation they enable scaling autonomous teams, but that is not what I need when I have just a single team.

Microservices is the most expensive way to police the architecture.

Things that are simple inside a single service/codebase, like moving code around, become multi-step micro-projects in the microservices world. Of course, microservices are very powerful for enforcing boundaries. However, that is something I really don't need or want when the organisation has just a single product team.

Using microservices effectively requires investing quite a bit into building the necessary level of operational maturity. The bigger the organisation, the easier it is to make that kind of investment. I want to make it at just the right time: not too early, and not too late either.

The main reason why monoliths turn into a big ball of mud is that while the product is going through rapid iterations, investing in technical scalability doesn't make much sense. However, at some point, as the product matures, the engineering organisation should shift gears and push harder for the system's long-term health. Noticing that point can be quite challenging, as it requires a change in the whole product organisation's mindset.

Below I list four alternatives, starting from lightweight ones that require relatively little upfront investment and moving towards heavier ones. As the organisation grows, I would move from the lighter alternatives to the heavier ones and finally switch to full-blown microservices.

A small, stable team should be able to agree on how to build things. If a team of 5–7 people is not able to establish some basic guidelines about their architecture, it is unlikely to succeed at anything else anyway.

Inside a single codebase it is perfectly possible to hide the internals of each module and minimise the public APIs between them. Usually the reason modular design is so hard is not that every team has some evil engineer who consciously adds code to the wrong module. It is that engineers are unable to find the proper place where new logic should be implemented. Microservices do not solve this dilemma in any way.

Let's say I have modules A, B, C. My challenge is adding new code that does not fit into any of them. Why should I believe that having services A, B, C will make it any easier? Most likely I will still add something to service A that should not be there at all. So over time service A becomes a big ball of mud, or even worse, I end up with a distributed monolith on my hands. With modules, the barrier to doing the right thing and refactoring the existing structure is much lower.

By the way, I can also split the data storage into multiple schemas or logical databases. When the time comes, I can split it across multiple database servers. This adds little cost up front, but it prepares me for a possible future where the database needs to be broken into smaller pieces.
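A small sketch of the idea using SQLite's attached databases as a stand-in for separate schemas (table and database names are illustrative): each module's tables live in their own logical database, and every cross-boundary query carries an explicit prefix, so moving one database to its own server later is a contained change.

```python
import sqlite3

# Primary database holds the "orders" tables; "customers" is attached as a
# separate logical database from day one.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS customers")

conn.execute("CREATE TABLE orders(id INTEGER PRIMARY KEY, customer_id INTEGER)")
conn.execute("CREATE TABLE customers.customers(id INTEGER PRIMARY KEY, name TEXT)")

conn.execute("INSERT INTO customers.customers VALUES (1, 'Ada')")
conn.execute("INSERT INTO orders VALUES (10, 1)")

# The join still works today, but the schema prefix keeps the boundary visible.
row = conn.execute(
    "SELECT c.name FROM orders o JOIN customers.customers c ON c.id = o.customer_id"
).fetchone()
print(row[0])  # Ada
```

Once the schemas need to live on different servers, the queries with an explicit prefix are exactly the ones that must become API calls or replicated data.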

If an agreement or a decision record alone is not enough, I can introduce tools that verify nobody is accidentally doing something they shouldn't.

Tools like ArchUnit or JDepend can be used to verify that there are no unwanted dependencies between the modules/subsystems. Neal Ford calls these architectural fitness functions.

This is a heavier solution, but it also allows enforcing the boundaries more strictly. At the same time, there is still no operational overhead of managing microservices: I still compose all module artifacts into one deployable unit.

The next option is technically already microservices, but I wanted to include it here as a lighter version of how microservices are typically implemented.

Instead of pushing each microservice into its own code repository, all services owned by a single team can reside in one repository. While this gives me all the benefits of strict isolation, I can still move code around more easily than with polyrepos.

The important point is that this is not a monorepo for the whole organisation, but just for one team. As soon as multiple teams work in a single repository, things get more complicated: teams start reusing what should not be shared across boundaries and therefore become more coupled to each other. With conscious effort this can work even with multiple teams, but already at three teams the benefits of this approach become questionable.

Microservices are very powerful but also very expensive. The bigger the organisation, the more value they offer. When the team is small and the product is still going through a lot of change, they are likely not the best choice. Instead of going for the heaviest solution, there are several much cheaper options to evaluate until the engineering organisation is big enough to benefit from microservices.

A monolith is not something to be afraid of. The key is recognising the point at which it starts to slow the organisation down. Splitting a single codebase into smaller pieces is not an easy endeavour, but it has benefits as well: it helps avoid having too much knowledge stuck in legacy systems, and it is an opportunity to renew the team's knowledge of the system.