Organization throughput, delivery contention and distributed systems
The book Accelerate provides a model to measure the performance of software teams. Teams are classified as "high performers" or "low performers" based on two key attributes: stability and throughput.
Stability is tracked by two metrics: the change failure rate (the rate at which a change introduces a defect in production) and the time to restore service (how long it takes to recover from a failure).
Throughput is measured by lead time (how long it takes for a change to go from code committed to code running in production) and deployment frequency (how often changes are deployed).
Measuring stability is important because it indicates the quality of the work being done.
Throughput tells us how long it takes to get a change into the hands of users, and how often that is achieved.
Both stability and throughput are technical measures used to answer questions such as "What is the quality of our work?" and "How efficiently can we produce work of that quality?"
"Throughput is a measure of a team's efficiency at delivering ideas, in the form of working software." (Accelerate: The Science of Lean Software and DevOps)
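To make these four measures concrete, here is a minimal TypeScript sketch that computes them from per-deployment records. The Deployment shape and its field names are assumptions for illustration; the book does not prescribe any particular data model.

```typescript
// Hypothetical record of a single production deployment.
interface Deployment {
  committedAt: Date; // when the change was committed
  deployedAt: Date; // when the change reached production
  failed: boolean; // did this change cause a failure in production?
  restoredAt?: Date; // when service was restored, if it failed
}

const hours = (ms: number) => ms / 3_600_000;

// Computes the two throughput and two stability metrics over a period.
function doraMetrics(deploys: Deployment[], periodDays: number) {
  const failures = deploys.filter((d) => d.failed && d.restoredAt);
  const count = Math.max(deploys.length, 1); // avoid division by zero
  return {
    // Throughput
    deploymentFrequencyPerDay: deploys.length / periodDays,
    meanLeadTimeHours:
      deploys.reduce(
        (sum, d) => sum + hours(d.deployedAt.getTime() - d.committedAt.getTime()),
        0
      ) / count,
    // Stability
    changeFailureRate: failures.length / count,
    meanTimeToRestoreHours:
      failures.reduce(
        (sum, d) => sum + hours(d.restoredAt!.getTime() - d.deployedAt.getTime()),
        0
      ) / Math.max(failures.length, 1),
  };
}
```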
The book also notes that high performance on both attributes is achievable with any kind of system, provided that it is loosely coupled. Loose coupling is a key architectural property that can be achieved in many different ways. Achieving a loosely coupled, well-encapsulated architecture with an organizational structure to match (let's not forget Conway's law) is critical for delivery performance and for scaling an engineering organization while productivity grows linearly with the number of developers.
The orthodox view of scaling software development teams states that while adding developers to a team may increase overall productivity, individual developer productivity will in fact decrease due to communication and integration overheads.
In practice there are two ways to achieve this loose coupling: build a big system and optimize it for modularity, deployability, and testability, or break it into smaller, separately deployable pieces, a.k.a. microservices.
When lots of people work together, they can get in each other's way: differing ideas about when to deploy and who makes decisions lead to confusion. In the book 'Monolith to Microservices: Evolutionary Patterns to Transform Your Monolith', this problem is referred to as delivery contention. Having a monolith doesn't guarantee you'll have this problem, but a microservices architecture gives you clearer boundaries that make it easier to avoid.
Microservices are a common strategy for server-side development nowadays, but user interfaces are often left as a single, large monolithic layer.
If we want an architecture that makes it easy to deploy new features quickly, leaving the UI as a monolith can work against us and impose a cap on organization throughput. Splitting the user interface should be as valid an option for the frontend as microservices are for the backend.
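One way, among several, to do this split: each team ships an independently deployed bundle that registers a custom element, and a thin shell composes them at runtime. A minimal sketch follows; the bundle URLs and element tags are hypothetical.

```typescript
// Hypothetical shell that composes independently deployed microfrontends.
// Each team's bundle registers a custom element when it loads; the shell
// only needs to know the element tag and where the bundle is hosted.
const microfrontends = [
  { tag: "search-app", src: "https://static.example.com/search/main.js" },
  { tag: "checkout-app", src: "https://static.example.com/checkout/main.js" },
];

async function mountAll(root: HTMLElement): Promise<void> {
  for (const { tag, src } of microfrontends) {
    await import(src); // loading the bundle defines the custom element
    root.appendChild(document.createElement(tag));
  }
}

mountAll(document.getElementById("app")!).catch(console.error);
```

Because the shell resolves each bundle at runtime, every team can deploy on its own cadence; the shell itself only changes when a microfrontend is added or removed.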
A loosely coupled architecture for the frontend needs to shorten lead times and let us get feedback from users quickly.
The frontend is a battleground. It's where everything must come together as a cohesive whole for the user, so it receives the most feedback and experiences the most change. In this domain most code is ephemeral, meant to be replaced by the next A/B test or feature, so it should be optimized for change, deletability, and constant experimentation. We need a frontend architecture that optimizes for iteration speed.
I would argue that iteration speed is one of the most important architectural properties of a frontend application.
Microfrontends are a strategy for improving organization throughput that targets frontend applications, but they come with trade-offs. You need to decide carefully how to split an application: the granularity of the applications that come out of this strategy carries significant trade-offs, some due to constraints inherent in distributed systems and some due to constraints specific to frontend applications. Keep these things in mind, though, and I believe a distributed architecture for the frontend can be beneficial in a large organization.
"Everything in software architecture is a trade-off." (First Law of Software Architecture, Building Evolutionary Architectures)
Links:
A really nice set of talks on microfrontends:
https://portal.gitnation.org/tags/micro-frontends
A good article about Module Federation's shared API, one of the first I've seen that mentions the issues of resolving dependency versions with semver at runtime, although I don't think it stresses the problems with this approach enough (a sketch of the shared option follows the link):
https://dev.to/infoxicator/module-federation-shared-api-ach
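For context, here is a hedged sketch of a Module Federation config using the shared option; the remote name, exposed module, and versions are made up. The point is that requiredVersion is checked with semver when the page loads, so an incompatible version provided by the host surfaces at runtime, not at build time.

```typescript
// webpack.config.ts — illustrative Module Federation setup for one remote.
import { container } from "webpack";

export default {
  plugins: [
    new container.ModuleFederationPlugin({
      name: "checkout", // hypothetical remote
      filename: "remoteEntry.js",
      exposes: { "./Cart": "./src/Cart" },
      shared: {
        react: {
          singleton: true, // only one copy of react may run at a time
          requiredVersion: "^18.2.0", // resolved against the host's copy
          // via semver at runtime: a mismatch fails (or warns) when the
          // page loads, long after both builds went green.
        },
      },
    }),
  ],
};
```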