Building better monoliths

How do you take the best benefits of service-oriented architecture and modularity, without getting wrapped up in deployment complexity? Some people favour making monolithic applications: one codebase that builds to one artefact, runs as one process, and does everything. Monoliths are the future, right?

The opposing viewpoint is that everything should be microservices. Microservices mean that you can scale each component horizontally and closely match actual demand, offering resilience not just by running across multiple hosts and zones, but also by forcing clients to account for some level of backend unavailability.

If you like spending more time getting the app to deploy than you did on writing it, microservices might be for you. Once you’ve got the groundwork sorted, adding another component is (or should be) easy, but laying those foundations takes a bunch of work you might not want or need.

So, if microservices aren’t the right fit and a monolith isn’t either, what do you do? Well, for a lot of cases you can start by building a better monolith. And by better, I mean “more modular”.


Loki, from Grafana, takes a decomposable approach to its application architecture. Loki is a logging solution inspired by Prometheus, using labels to let you query ingested log streams. The design of Loki lets you scale it by component in production, whilst letting you build and run the same code on a small local system, even away from good internet (Tom Wilkie calls this “airplane mode”).

From the outset, the authors set out to make Loki a good fit for these different scales. As a developer you can clone the code, build one container image, and run with minimal extra faff. On the other hand, maybe you work for a firm like Grafana Labs; you want to ingest, process and store logs without directly involving people in your availability and performance story. The same codebase lets you build Loki as modules and run it as a set of containers, sized for each component, and deployed across multiple failure zones.
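
Loki exposes that choice as a target you pick at start-up: run every component in one process, or run just the one you name. Here’s a minimal sketch of that shape in Go; it isn’t Loki’s actual code, and the component names and wiring are invented for illustration.

```go
package main

import (
	"flag"
	"log"
	"sync"
)

// component is one independently runnable part of the application.
// Everything here is invented; it shows the shape of the pattern only.
type component struct {
	name string
	run  func() error
}

func main() {
	target := flag.String("target", "all", "which component to run: all, ingester, or querier")
	flag.Parse()

	components := []component{
		{"ingester", runIngester},
		{"querier", runQuerier},
	}

	var wg sync.WaitGroup
	for _, c := range components {
		if *target != "all" && *target != c.name {
			continue // single-component mode: skip everything else
		}
		wg.Add(1)
		go func(c component) {
			defer wg.Done()
			if err := c.run(); err != nil {
				log.Fatalf("%s: %v", c.name, err)
			}
		}(c)
	}
	wg.Wait()
}

func runIngester() error {
	select {} // placeholder: block as if serving traffic
}

func runQuerier() error {
	select {} // placeholder
}
```

Run it with no flags and you get the whole application in one process; pass a single component name and you get something you can size and scale on its own.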

You don’t need to use containers to make a modular monolith. You might split your app so it builds either into separate packages (.deb / .rpm / your chosen technology) or as a single artefact for local development and testing. The point I’m making here is about a pattern, not any particular approach.

The advantage

Making your app easy to deploy in development means you spend more time on end-user value and less time wrestling with your local Docker daemon or Kubernetes cluster. If you’re writing code, you’ll usually be writing changes for just one, maybe two components at a time. The less you have to worry about the rest of your app, the more you can concentrate on coding.

Writing an app to work in this way has some overhead: it forces you to think up front about components and boundaries, and you need to implement a way for the different parts to communicate. What else do you get in return for the extra effort?

Just like a service-oriented or microservices architecture, the isolation between components helps limit the impact of changes. Those limits let you make more changes, more quickly.
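
In practice, both the communication and the isolation tend to come down to an interface at each component boundary. Here’s a minimal sketch in Go, assuming a hypothetical Indexer component (all names invented): the rest of the app depends on the interface, and at start-up you wire in either the in-process implementation (monolith mode) or a client for a separately deployed instance.

```go
package search

import (
	"bytes"
	"context"
	"fmt"
	"net/http"
)

// Indexer is a hypothetical component boundary: the rest of the app
// depends on this interface, not on how or where indexing happens.
type Indexer interface {
	Index(ctx context.Context, doc []byte) error
}

// localIndexer runs in the same process: monolith mode.
type localIndexer struct{}

func (l *localIndexer) Index(ctx context.Context, doc []byte) error {
	// ... index the document directly, in-process ...
	return nil
}

// remoteIndexer talks to a separately deployed indexing service.
type remoteIndexer struct {
	baseURL string
	client  *http.Client
}

func (r *remoteIndexer) Index(ctx context.Context, doc []byte) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodPost,
		r.baseURL+"/index", bytes.NewReader(doc))
	if err != nil {
		return err
	}
	resp, err := r.client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("index: unexpected status %s", resp.Status)
	}
	return nil
}

// NewIndexer picks an implementation at start-up, from configuration.
func NewIndexer(remoteURL string) Indexer {
	if remoteURL == "" {
		return &localIndexer{} // monolith mode: everything in one process
	}
	return &remoteIndexer{baseURL: remoteURL, client: http.DefaultClient}
}
```

The same pattern works with gRPC, a message queue, or whatever transport suits you; the point is that callers never know the difference.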

When someone publishes code that uses this pattern, I’m really happy. As a passive consumer it’s great to have options for deployment; as a potential project contributor, this approach helps make for a smooth journey to getting changes ready, tested, reviewed and merged.

Does this mean all your code needs to be in one programming language, or one source code repository? It doesn’t.

To make this pattern work, you do need a way to kick off one build that produces all the components of that workload. You’ll get the most out of it if that’s automated and it’s the same approach that your continuous delivery system uses. Big modular-monolith projects often have a few manual release steps that glue together really impressive amounts of automation.

The right kind of modularity depends on your context. Linus Torvalds is famously opposed to rewriting Linux as a microkernel. That’s kind of fine because people don’t try to run Linux as a distributed system at kernel level. However, if someone wants to pick a part of the kernel and try to improve it, there’s a high barrier to entry. Dodge that bullet if you can.

The Kubernetes project uses this pattern at several zoom levels. The overall cluster architecture separates out compute nodes from the control plane. Within the Kubernetes control plane, there are more components: a single API endpoint actually hides a separate persistence layer, general control loop processes, and the Kubernetes scheduler (another control loop, but split out).
Zoom in closer and you see that the general control loop processes (kube-controller-manager) are deliberately written so that you can, if you need to, split one out and scale it separately.
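
kube-controller-manager genuinely supports this: its --controllers flag lets you turn individual control loops off, so you can run one somewhere else. Here’s a hedged sketch of that registry-of-control-loops shape in Go, with invented controller names and placeholder reconciliation logic; it doesn’t reproduce the real flag’s full semantics.

```go
package main

import (
	"flag"
	"log"
	"strings"
	"time"
)

func main() {
	enabled := flag.String("controllers", "*", "comma-separated controllers to run; * means all")
	flag.Parse()

	// Several independent reconciliation loops in one binary, any of
	// which you could split out and run on its own. Names are invented.
	controllers := map[string]func(){
		"gc":     func() { /* garbage-collection pass */ },
		"quota":  func() { /* quota reconciliation pass */ },
		"deploy": func() { /* deployment reconciliation pass */ },
	}

	want := strings.Split(*enabled, ",")
	for name, loop := range controllers {
		if !selected(want, name) {
			continue
		}
		log.Printf("starting controller %q", name)
		go runForever(loop, time.Second)
	}
	select {} // block; real code would watch for signals and shut down cleanly
}

func selected(want []string, name string) bool {
	for _, w := range want {
		if w == "*" || w == name {
			return true
		}
	}
	return false
}

// runForever re-runs one control loop on a fixed interval.
func runForever(loop func(), every time.Duration) {
	for {
		loop()
		time.Sleep(every)
	}
}
```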

You don’t need to go modular for the early revisions of your code. If you’re working on a minimum viable demo, it makes sense to get that shippable before you think about modules and boundaries. Once you’ve actually tried implementing something you’ll have a much better idea of where those boundaries should be.

The promise

When this pattern works well, you get the best of both approaches. You can build a system that runs lean and still provides value. Kubernetes' modular design allowed Rancher to build k3s, a cut-down but nevertheless 100% certified-compatible Kubernetes platform that runs on anything from a Raspberry Pi upwards. Vanilla Kubernetes demands quite a bit more in the way of resources.

If you’re a SaaS provider with a tiered product offering, imagine you have a standard container image that you scale horizontally to run your free and basic product tiers. At the next level up, you’re offering extra facilities that demand additional tuning and separated deployments to make sure they scale, so you take the same codebase and build it a different way. Now you can scale out your advanced tier without increasing the cost of running the entry level.

Some vendors offer a “contact us” Enterprise model, where you’ll run a managed, but bespoke, flavour of your service with the integrations each tenant needs. Adopting a modular-monolith architecture lets you swap out the standard code for a custom component, whilst keeping the common parts consistent and easier to manage.
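
As a sketch of how that swap can look in code (in Go; the Invoicer boundary and both implementations are invented for illustration): keep the boundary as an interface, and let each build or deployment pick the flavour it ships. Build tags or separate main packages get you the same effect at compile time.

```go
package billing

import "fmt"

// Invoicer is a hypothetical boundary for the billing component.
type Invoicer interface {
	Invoice(tenantID string, amountPence int) error
}

// Each build registers the flavours it ships; the rest of the codebase
// only ever sees the Invoicer interface.
var flavours = map[string]func() Invoicer{
	"standard":   func() Invoicer { return &standardInvoicer{} },
	"enterprise": func() Invoicer { return &tenantERPInvoicer{} },
}

// New picks an implementation from configuration at start-up.
func New(flavour string) (Invoicer, error) {
	build, ok := flavours[flavour]
	if !ok {
		return nil, fmt.Errorf("no %q billing implementation in this build", flavour)
	}
	return build(), nil
}

// standardInvoicer is the shared, multi-tenant billing path.
type standardInvoicer struct{}

func (s *standardInvoicer) Invoice(tenantID string, amountPence int) error {
	return nil // placeholder
}

// tenantERPInvoicer stands in for a bespoke, per-tenant integration.
type tenantERPInvoicer struct{}

func (t *tenantERPInvoicer) Invoice(tenantID string, amountPence int) error {
	return nil // placeholder
}
```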

The catch

Maybe you already spotted a snag? This all works fine when your whole application runs using code you control. Whether that’s proprietary programming from the ground up, or a mixture of your own and open-source elements, what I’ve suggested so far relies on being able to build and launch the whole thing, then run it locally.

Does this mean you can’t or shouldn’t structure your app the way I’ve outlined? To my mind, it doesn’t. The cloud is here and isn’t going away any time soon.

The way I see it, you can think of three kinds of managed service:

  • fundamental infrastructure such as Amazon EBS. Swapping these out would mean few or no code changes.
  • complex managed components: the service has direct dependencies on other vendor components. For example, AWS Lake Formation depends on S3, Glue, and IAM - with ties to other services if that didn’t feel like enough.
  • simple managed components, such as Amazon DocumentDB. Behind the scenes this uses EBS, KMS, EC2, IAM and other services - but as the client using it, you don’t need to care. That matters.

For the complex managed components, my advice is almost never to integrate your app directly into something like that. At some point you’re likely to regret it and extricating your own code is going to take time.

For me the last scenario is the interesting one. I’ll wrap up the article by covering a way forward.

The fix

OK, there isn’t a fix: there are lots of different options here.

Using DocumentDB as an example, you can take an approach from my last article: run MongoDB in a container during development, then swap that out for MongoDB Atlas or DocumentDB when you deploy to live. If you’re choosing between managed components of the simple kind, and the vendor doesn’t offer an option for running a local development version in a container, keep looking a bit longer: you might find one that does.
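
With the official MongoDB Go driver, that swap can be nothing more than a different connection string. A minimal sketch, assuming a MONGODB_URI environment variable (my naming, not a convention the driver imposes):

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	// In development this might point at a local container; in production,
	// at a DocumentDB or Atlas endpoint (those typically add TLS options
	// to the URI). The code doesn't change, only the connection string.
	uri := os.Getenv("MONGODB_URI")
	if uri == "" {
		uri = "mongodb://localhost:27017" // local development default
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	if err := client.Ping(ctx, nil); err != nil {
		log.Fatalf("cannot reach %s: %v", uri, err)
	}
	log.Println("connected")
}
```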

Even if you’re not using containers for local development today, focusing on options that run in containers is a good idea. Apps that can run in containers tend to be less trouble than the ones that outright cannot. Developers who make the effort to publish a container image are showing that they are interested in supporting the community around their product.

Sometimes it doesn’t make sense to run the backend service locally. The way to decide comes down to how you use that managed service (and usually not what that component is or does). If you’re using Amazon S3 for object storage with no frills, that’s not so hard to switch out (maybe for MinIO or Google Cloud Storage). If you need all the nuances, including bucket policies, interaction with IAM, object locking, and more, then only one service fits: Amazon S3.
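
If you’re in the no-frills camp, a small interface keeps your options open. Here’s a sketch in Go using the AWS SDK v2; the ObjectStore boundary and s3Store names are mine, invented for illustration. MinIO can sit behind the same code because it speaks the S3 API.

```go
package storage

import (
	"context"
	"io"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// ObjectStore covers no-frills object storage. If this is all your app
// needs, S3, MinIO or Google Cloud Storage can sit behind it
// interchangeably. Once you depend on bucket policies, IAM interactions
// or object locking, an interface this small won't save you.
type ObjectStore interface {
	Put(ctx context.Context, key string, body io.Reader) error
	Get(ctx context.Context, key string) (io.ReadCloser, error)
}

// s3Store implements ObjectStore with the AWS SDK v2 S3 client.
type s3Store struct {
	client *s3.Client
	bucket string
}

func (s *s3Store) Put(ctx context.Context, key string, body io.Reader) error {
	_, err := s.client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
		Body:   body,
	})
	return err
}

func (s *s3Store) Get(ctx context.Context, key string) (io.ReadCloser, error) {
	out, err := s.client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return nil, err
	}
	return out.Body, nil
}
```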

How do you hide that complexity? Maybe you don’t have to. The right fit is going to depend a lot on your reason for building a modular monolith, and the value that this architecture provides once you have it. It’s OK to have components and code that leave in the rough edges of real-world services. Your modular system might not work exactly right as a pure monolith, and that’s OK, because all your real-world deployments can rely on the real cloud service.

It’s a powerful pattern. It’s extra effort that, when you need it, lets you build scalable systems you can actually maintain. Just remember, you aren’t always gonna need it.


