
Planning, Automation and Monorepo: How Monzo Does Code Migrations Across 2800 Microservices

Monzo products are supported by an extensive microservice-based platform of over 2800 services. The company relies on planning and heavy automation to drive code migrations at scale and leverages a configuration service to support gradual roll-forwards and quick rollbacks when issues arise. Migrations are managed by a central team rather than service-owner teams to avoid delays and inconsistencies.

Monzo's vast portfolio of microservices poses unique challenges when rolling out sweeping changes, such as updating library versions for consistency and freshness. The company opted to centralize code migrations in most cases to speed them up and improve consistency across services.

The team responsible for running migrations laid down principles for applying changes at scale, including making migrations transparent to service owners, avoiding downtime, rolling out changes gradually to reduce the blast radius, and applying the 80/20 rule to automation to avoid diminishing returns from tackling unusual use cases. The company has heavily standardized its development stack, choosing Go as its programming language and adopting a monorepo for all its source code.

Using a Library Wrapper to Switch Between Libraries (Source: Monzo Engineering Blog)

In the post, Will Sewell, platform engineer at Monzo, describes a typical process for rolling out a new library, using the migration from OpenTracing to OpenTelemetry as an example. For any extensive change, the migration team would present a proposal in Slack or during the architecture review meeting to raise awareness and solicit feedback from the engineering organization before starting work.

The team combines automatic code updates, performed with language-specific code-rewriting tools, with manual updates for any unusual use cases. To switch between external libraries, the team first ensures the library wrapper is deployed to all services. It then uses a configuration service to switch implementations gradually, for a particular set of users or a percentage of requests.
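As an illustration of that wrapper pattern, the sketch below hides the tracing backend behind a small interface and picks an implementation per request from a config lookup. The interface, the stubbed backends, and the config key are hypothetical stand-ins for illustration, not Monzo's actual APIs.

```go
// Package tracing sketches a library wrapper whose backend is chosen at
// request time from a configuration service.
package tracing

import "context"

// Tracer is the narrow interface services code against, so the backing
// library can change without touching call sites.
type Tracer interface {
	StartSpan(ctx context.Context, name string) (context.Context, func())
}

// openTracingBackend and openTelemetryBackend would delegate to the real
// libraries; they are stubbed here to keep the sketch self-contained.
type openTracingBackend struct{}

func (openTracingBackend) StartSpan(ctx context.Context, name string) (context.Context, func()) {
	return ctx, func() {}
}

type openTelemetryBackend struct{}

func (openTelemetryBackend) StartSpan(ctx context.Context, name string) (context.Context, func()) {
	return ctx, func() {}
}

// configLookup stands in for the config service: it reports whether the new
// backend is enabled for this request (e.g. by user cohort or percentage).
type configLookup func(ctx context.Context, key string) bool

const useOpenTelemetryKey = "tracing.use-opentelemetry"

// ForRequest picks the backend per request, so flipping the config flag rolls
// the change forward gradually and reverting it rolls the change back.
func ForRequest(ctx context.Context, enabled configLookup) Tracer {
	if enabled(ctx, useOpenTelemetryKey) {
		return openTelemetryBackend{}
	}
	return openTracingBackend{}
}
```

Because the decision is made per request, flipping the flag back in the configuration service acts as an immediate rollback without a redeploy, matching the gradual roll-forward and quick-rollback behavior described above.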

InfoQ contacted Sewell to learn more about Monzo's microservices platform, considering its size and the nature of the banking sector.

InfoQ: You mentioned Monzo uses a monorepo to host its entire codebase. Given the large number of services, how did you alter your CI/CD processes and tooling to support this approach?

Will Sewell: Having a monorepo is hugely helpful with managing some of the challenges commonly associated with microservices:

  • All services use a consistent set of dependency versions (both internal and external libraries). This reduces cognitive overhead when engineers switch between services, and also means they can be operated in the same way.
  • When we make any global changes, like changing libraries, or adding new CI checks, we can easily re-run tests against the entire code base.
  • Our RPC clients import the generated protobuf code of the servers. This means that we get immediate compile-time feedback for whole classes of breaking API changes. It also gives us some nice DevEx benefits like 'jump to definition' working across services.
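To make the compile-time feedback point concrete, here is a minimal, self-contained sketch; GetAccountRequest mimics a protoc-generated struct that a client would normally import from the owning service (the type and field names are hypothetical).

```go
package main

import "fmt"

// GetAccountRequest stands in for a protoc-generated request struct owned by
// the account service and imported directly by its RPC clients.
type GetAccountRequest struct {
	AccountID string
}

func main() {
	// If the owning team removed or renamed AccountID in the generated code,
	// this client would stop compiling, surfacing the breaking API change
	// across the monorepo before anything is deployed.
	req := GetAccountRequest{AccountID: "acc_123"}
	fmt.Println(req.AccountID)
}
```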

InfoQ: What advice would you give to companies struggling with the sprawl of microservices? Which practices, techniques, or tools should be their first priorities for improving the building and management of their microservices platform?

Will Sewell: We use a limited set of consistent technologies across all services: the same language/libraries and infrastructure. We also pin the versions of these technologies across all services. To maintain that consistency, we've had to build the kind of automation for upgrades and deployments that I discuss in the blog post. We also heavily use CI checks to enforce consistency (which are easier to implement if you have a monorepo).
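A simplified example of such a CI check is sketched below: it fails the build if any service in the repository still imports a deprecated library. The deprecated import path and the repository layout are assumptions for illustration, not Monzo's tooling.

```go
// Hypothetical CI check: scan the monorepo for a deprecated import and fail
// the build if it is still used anywhere.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

const deprecatedImport = `"github.com/opentracing/opentracing-go"`

func main() {
	var offenders []string
	err := filepath.Walk(".", func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() || !strings.HasSuffix(path, ".go") {
			return err
		}
		src, readErr := os.ReadFile(path)
		if readErr != nil {
			return readErr
		}
		if strings.Contains(string(src), deprecatedImport) {
			offenders = append(offenders, path)
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "walk failed:", err)
		os.Exit(1)
	}
	if len(offenders) > 0 {
		fmt.Println("deprecated import found in:")
		for _, f := range offenders {
			fmt.Println("  " + f)
		}
		os.Exit(1)
	}
}
```

Run against the monorepo on every build, a check like this keeps a finished migration from regressing once the old library has been removed.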

Services run on a mature and opinionated platform: this means that implementing a service tends to involve just coding the core business logic. For operating services, we have off-the-shelf CLI tools and observability UIs. Infrastructure like databases, queues, metrics, and logging is all available 'for free' without needing any separate provisioning. There are rare cases where services have unique requirements, but for 99% of services we find our opinionated platform is sufficient. The platform is also one of the things that reinforces our limited set of consistent technologies.

These are pretty fundamental things that can't be changed overnight if you don't have them, but it's definitely a state that you can step towards.

InfoQ: Are you working or planning to work on any improvements to the code migrations approach?

Will Sewell: One thing comes to mind. While we've converged on a set of principles for running migrations, we have so far tended to build quite a bit of the tooling ad hoc each time. For example, as part of our bigger infrastructure migrations, we have implemented 'migrator' services for orchestrating the migrations and providing a hands-off approach.

We've implemented this pattern enough times now that I think we could polish it off and provide it as a platform-level abstraction for all engineers to use as a framework. This isn't something we're actively working on at the moment, but I think it would be a great direction to head in!
