DevSecOps · 10 min read

From Monorepo to Multi-Repo (or Back): Making the Right Decision for Enterprise Teams

A decision framework for choosing between monorepo and multi-repo strategies in enterprise environments, covering tooling comparison, CI/CD implications, hybrid patterns, and practical migration guidance.


The monorepo debate generates more opinions than evidence. Advocates point to Google, Meta, and Microsoft as proof that monorepos scale. Critics point to the same companies and note they built custom tooling costing millions to make it work. The truth, as always, is that the right answer depends on your organization.

This post provides a decision framework based on real enterprise migration projects. We have helped teams move from multi-repo to monorepo, from monorepo to multi-repo, and — most commonly — to a hybrid pattern that takes the best of both.

Monorepo Benefits: What You Actually Get

Atomic Changes Across Services

This is the primary benefit and the one that justifies all the complexity. When a shared library changes, you update every consumer in the same pull request. No coordinated multi-repo releases. No "deploy library v2.3.1 then update service A then update service B" choreography.

Code
# Monorepo atomic change — single PR
packages/shared-auth/src/token.ts        # Library change
services/api-gateway/src/middleware.ts   # Consumer update
services/user-service/src/auth.ts        # Consumer update
services/order-service/src/auth.ts       # Consumer update

In a multi-repo setup, this same change requires four pull requests, four CI runs, and careful ordering.
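
To make the contrast concrete, here is a minimal, self-contained TypeScript sketch of such a change: a shared helper gains a required parameter, and a consumer is updated in the same commit. All names are illustrative, not from a real codebase.

```typescript
// packages/shared-auth/src/token.ts (sketch)
export interface TokenClaims {
  sub: string;
  exp: number;
}

// Breaking change: callers must now pass the expected audience.
// (A real implementation would verify a cryptographic signature;
// this sketch only parses a "sub:exp" payload to stay self-contained.)
export function verifyToken(token: string, audience: string): TokenClaims {
  if (!audience) {
    throw new Error("audience is required");
  }
  const [sub, exp] = token.split(":");
  return { sub, exp: Number(exp) };
}

// services/user-service/src/auth.ts — updated in the SAME pull request,
// so no consumer is ever compiled against the old signature.
export function authenticate(token: string): TokenClaims {
  return verifyToken(token, "user-service");
}
```

In a multi-repo layout, the same signature change ships as a new package version, and each consumer repository adopts it in its own pull request.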

Shared Tooling and Standards

A monorepo enforces consistency. ESLint config, TypeScript settings, Docker base images, CI/CD templates — they all live in one place. When you update a linting rule, every project picks it up.

Code
monorepo/
├── .eslintrc.js              # One config for all
├── tsconfig.base.json        # Shared TypeScript settings
├── Dockerfile.base           # Shared base image
├── nx.json                   # Build orchestration
├── packages/
│   ├── shared-auth/
│   ├── shared-logging/
│   └── shared-models/
├── services/
│   ├── api-gateway/
│   ├── user-service/
│   └── order-service/
└── infrastructure/
    ├── terraform/
    └── helm/

Cross-Project Visibility

Every developer can see every service. They can read the code, understand the architecture, and find examples of how to use a shared library. This sounds trivial, but in a 200-developer organization with 50 repositories, discovering how other teams solved a problem is genuinely difficult.

Multi-Repo Benefits: What You Actually Get

Team Autonomy

Each team owns their repository. They choose their branching strategy, their CI/CD pipeline structure, their release cadence. No coordination with other teams for builds, merges, or deployments.

Access Control

In regulated environments, some code must be accessible only to specific teams. Multi-repo makes this simple: set repository permissions. In a monorepo, you need CODEOWNERS files and careful path-based access control, which most Git hosting platforms support imperfectly.
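
The monorepo workaround mentioned above usually takes the form of a CODEOWNERS file. A minimal sketch, with hypothetical team names:

```
# .github/CODEOWNERS (GitHub syntax; other platforms differ)
/services/payment-api/    @org/payments-team
/packages/shared-auth/    @org/security-team
/infrastructure/          @org/platform-team
```

Note that CODEOWNERS enforces review ownership, not read access — it cannot hide code from other teams, which is one reason regulated codebases often stay multi-repo.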

Build Isolation

A broken test in one repository does not block another team's deployment. In a monorepo, a failing test in a shared package can block every team until it is fixed. Monorepo tooling mitigates this with affected-project detection, but it is never as clean as true isolation.

Simpler CI/CD

Each repository has its own pipeline. The pipeline knows exactly what to build, test, and deploy. No build graph analysis, no affected-project detection, no cache management. This simplicity matters for teams without dedicated platform engineers.

The Hybrid Pattern

Most enterprises end up here. The pattern:

Monorepo for tightly coupled services. A bounded context with 3-5 services that share models, deploy together, and are owned by a single team or closely collaborating teams.

Separate repositories for independent products. A payment platform, a customer portal, and an internal tool each get their own repository (or their own monorepo).

Shared libraries in a dedicated repository. Published as packages to a private registry (Azure Artifacts, GitHub Packages, npm private). Versioned with semantic versioning.

Code
Organization structure:
├── payment-platform/          # Monorepo (4 services, 1 team)
│   ├── services/
│   │   ├── payment-api/
│   │   ├── payment-processor/
│   │   ├── payment-reconciler/
│   │   └── payment-gateway/
│   └── packages/
│       ├── payment-models/
│       └── payment-utils/
├── customer-portal/           # Monorepo (3 services, 2 teams)
│   ├── services/
│   │   ├── portal-bff/
│   │   ├── portal-web/
│   │   └── portal-api/
│   └── packages/
│       └── portal-components/
├── shared-libraries/          # Multi-repo (published packages)
│   ├── auth-library/
│   ├── logging-library/
│   └── http-client/
└── infrastructure/            # Separate repo (platform team)
    ├── terraform-modules/
    ├── helm-charts/
    └── policy-definitions/
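
Consumers in other repositories then pull the published shared libraries through ordinary semver ranges. A hypothetical package.json fragment (scope and versions are illustrative):

```json
{
  "name": "portal-bff",
  "dependencies": {
    "@org/auth-library": "^2.3.0",
    "@org/logging-library": "~1.8.2"
  }
}
```

The caret range accepts any backwards-compatible 2.x release; the tilde range restricts updates to patch releases of 1.8.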

Tooling Comparison

Nx

Best for: TypeScript/JavaScript monorepos with 5-50 projects. Strong ecosystem integration (React, Angular, Next.js, Node.js).

JSON
// nx.json
{
  "targetDefaults": {
    "build": {
      "dependsOn": ["^build"],
      "cache": true
    },
    "test": {
      "cache": true
    },
    "lint": {
      "cache": true
    }
  },
  "namedInputs": {
    "default": ["{projectRoot}/**/*", "sharedGlobals"],
    "production": ["default", "!{projectRoot}/**/*.spec.ts"]
  }
}

Strengths:

  • Computation caching (local and remote via Nx Cloud)
  • Affected command — only build/test what changed
  • Code generators for scaffolding new projects
  • Dependency graph visualization
  • Plugin ecosystem for common frameworks

Weaknesses:

  • Heavy configuration for non-JavaScript projects
  • Nx Cloud required for remote caching (paid for large teams)
  • Learning curve for the plugin system
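
In day-to-day use, the configuration above is driven from the CLI. A sketch of common invocations — flag spellings vary across Nx major versions, so treat these as illustrative:

```shell
# Build and test only the projects affected by changes since main
npx nx affected -t build test --base=origin/main --head=HEAD

# List what Nx considers affected before running anything
npx nx show projects --affected --base=origin/main

# Open the interactive dependency graph
npx nx graph
```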

Turborepo

Best for: Simpler TypeScript/JavaScript monorepos where you want caching without heavy configuration.

JSON
// turbo.json
{
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**", ".next/**"]
    },
    "test": {
      "dependsOn": ["build"],
      "outputs": []
    },
    "lint": {
      "outputs": []
    },
    "deploy": {
      "dependsOn": ["build", "test", "lint"],
      "outputs": []
    }
  }
}

Strengths:

  • Minimal configuration
  • Fast remote caching via Vercel or self-hosted
  • Incremental builds with content-aware hashing
  • Simple mental model

Weaknesses:

  • Less mature than Nx for complex dependency graphs
  • Fewer code generation features
  • Primarily JavaScript/TypeScript focused
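
As with Nx, the pipeline above is exercised from the CLI. A sketch, assuming Turborepo 1.x syntax — check your installed version:

```shell
# Run build for a single workspace (dependencies build first via dependsOn)
npx turbo run build --filter=portal-bff

# Run the full pipeline; unchanged tasks are replayed from cache
npx turbo run build test lint
```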

Bazel

Best for: Large-scale polyglot monorepos (500+ projects, multiple languages). Enterprise teams with dedicated build infrastructure.

Python
# BUILD.bazel
load("@rules_dotnet//dotnet:defs.bzl", "csharp_library", "csharp_test")

csharp_library(
    name = "payment-models",
    srcs = glob(["src/**/*.cs"]),
    deps = [
        "//packages/shared-models",
        "@nuget//Newtonsoft.Json",
    ],
    visibility = ["//services/payment:__subpackages__"],
)

csharp_test(
    name = "payment-models-tests",
    srcs = glob(["tests/**/*.cs"]),
    deps = [
        ":payment-models",
        "@nuget//xunit",
        "@nuget//xunit.runner.visualstudio",
    ],
)

Strengths:

  • Hermetic builds (guaranteed reproducibility)
  • Language agnostic (Java, C#, Go, Python, TypeScript, C++)
  • Remote execution (distribute builds across a cluster)
  • Fine-grained dependency tracking at the file level

Weaknesses:

  • Steep learning curve (Starlark build language)
  • Significant infrastructure investment (remote execution service)
  • Community rulesets vary in quality
  • Overkill for teams under 100 developers
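
Assuming the BUILD.bazel file above lives in packages/payment-models, typical invocations would look roughly like this (labels are illustrative):

```shell
# Build the library target and its dependency closure
bazel build //packages/payment-models:payment-models

# Run its unit tests, replaying cached results where inputs are unchanged
bazel test //packages/payment-models:payment-models-tests

# Build the whole tree; Bazel rebuilds only what actually changed
bazel build //...
```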

CI/CD Implications

The repository strategy fundamentally shapes your CI/CD architecture.

Monorepo CI/CD

The challenge is avoiding "build everything on every commit." You need affected-project detection:

YAML
# GitHub Actions — Nx affected builds
name: CI
on:
  pull_request:
    branches: [main]

jobs:
  affected:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.set-matrix.outputs.matrix }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-node@v4
      - run: npm ci
      - id: set-matrix
        run: |
          AFFECTED=$(npx nx show projects --affected --base=origin/main --head=HEAD --json)
          echo "matrix=$AFFECTED" >> $GITHUB_OUTPUT

  build:
    needs: affected
    runs-on: ubuntu-latest
    strategy:
      matrix:
        project: ${{ fromJson(needs.affected.outputs.matrix) }}
    steps:
      - uses: actions/checkout@v4
      - run: npx nx build ${{ matrix.project }}
      - run: npx nx test ${{ matrix.project }}
YAML
# Azure DevOps — path-based triggers
trigger:
  branches:
    include: [main]
  paths:
    include:
      - services/payment-api/**
      - packages/shared-models/**

# This pipeline only runs when payment-api or its dependencies change

Multi-Repo CI/CD

Each repository has a simple, independent pipeline:

YAML
# azure-pipelines.yml — per-repository
trigger:
  branches:
    include: [main]

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: dotnet build
          - script: dotnet test
          - script: docker build -t payment-api .

  - stage: Deploy
    jobs:
      - deployment: DeployToAKS
        environment: production

The simplicity is compelling. But coordinating changes across multiple repos requires a shared library versioning strategy:

YAML
# Dependency update workflow
# 1. Merge change to shared-auth library
# 2. Library CI publishes shared-auth@2.3.1 to Azure Artifacts
# 3. Dependabot/Renovate detects new version in consumer repos
# 4. Automated PRs update package references
# 5. Consumer CI validates compatibility
# 6. Teams merge and deploy independently

This works but adds latency. A breaking change in a shared library takes hours or days to propagate, compared to minutes in a monorepo.
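
The detection-and-PR steps in that workflow are typically configured through Renovate along these lines — a minimal sketch with an illustrative package scope:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchPackagePatterns": ["^@org/"],
      "groupName": "internal shared libraries",
      "automerge": false
    }
  ]
}
```

Grouping internal packages into one PR per repository keeps the propagation visible in a single review rather than a stream of individual version bumps.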

Decision Framework

(Diagram not shown: repository strategy decision flowchart.)

Use this framework to guide your decision. Score each dimension for your organization:

Choose Monorepo When:

  • Team coupling is high — Teams frequently change each other's code or share models
  • Deployment coordination is painful — You spend significant time orchestrating multi-service releases
  • Consistency matters — Regulatory or compliance requirements demand uniform tooling and standards
  • You have platform engineering capacity — Someone can own the monorepo tooling (Nx, Bazel, CI/CD)
  • Your codebase is primarily one language — Nx and Turborepo work best in JavaScript/TypeScript ecosystems

Choose Multi-Repo When:

  • Team autonomy is critical — Teams have different tech stacks, release cadences, or compliance requirements
  • Access control is non-negotiable — Regulatory requirements demand strict code access boundaries
  • Your CI/CD is simple — Each service builds and deploys independently with no cross-dependencies
  • No platform team — You do not have the capacity to maintain monorepo tooling
  • Teams are geographically distributed — Remote teams benefit from smaller, faster clones and focused code review

Choose Hybrid When:

  • You have bounded contexts — Groups of 3-5 services that are tightly coupled within the group but loosely coupled between groups
  • Mixed tech stacks — Some teams use .NET, others use TypeScript, others use Python
  • Growing organization — You started multi-repo and specific teams are hitting coordination pain, but not everyone
  • Gradual migration — You want to move toward monorepo incrementally without a big-bang reorganization

Shared Library Update Flow Comparison

(Diagram not shown: shared-library update flow in monorepo vs. multi-repo.)

Migration Considerations

Multi-Repo to Monorepo

Bash
# Preserve history when merging repos
# For each repo, rewrite paths to target subdirectory
git clone https://github.com/org/payment-api.git
cd payment-api
git filter-repo --to-subdirectory-filter services/payment-api

# In the monorepo, add as remote and merge
cd ../monorepo
git remote add payment-api ../payment-api
git fetch payment-api
git merge payment-api/main --allow-unrelated-histories
git remote remove payment-api

Risk: Breaking CI/CD during migration. Mitigate by running both pipelines (old per-repo and new monorepo) in parallel for 2 weeks.

Monorepo to Multi-Repo

Bash
# Extract a service with full history
git clone https://github.com/org/monorepo.git service-extract
cd service-extract
git filter-repo --path services/payment-api/ --path packages/payment-models/
# This creates a new repo with only the relevant history

Risk: Breaking shared library references. Mitigate by publishing shared libraries as packages before extracting services.

Performance at Scale

Large monorepos hit performance walls. Here is what to expect and how to mitigate:

Scale           Challenge                        Mitigation
50 projects     None                             Standard Git works fine
100 projects    Slow CI (building everything)    Nx/Turborepo affected detection
500 projects    Slow clone, large working tree   Shallow clones, sparse checkout
1000+ projects  Git performance limits           VFS for Git, Bazel remote execution

Bash
# Sparse checkout — only checkout what you need
git clone --no-checkout --filter=blob:none https://github.com/org/monorepo.git
cd monorepo
git sparse-checkout init --cone
git sparse-checkout set services/payment-api packages/shared-models
git checkout main

Conclusion

The repository strategy is an infrastructure decision, not a religious one. Monorepos optimize for coordination and consistency. Multi-repos optimize for autonomy and simplicity. Hybrids trade some optimization for flexibility.

Start with the decision framework. Score your organization honestly on coupling, compliance, platform capacity, and team structure. The answer will be obvious once you stop treating it as a binary choice.

If you need help evaluating your repository strategy or planning a migration between monorepo and multi-repo, contact us at mbrahim@conceptualise.de. We have guided enterprise teams through both directions of this migration and can help you avoid the common traps.

Topics

monorepo vs multi-repo enterprise · Nx Turborepo Bazel comparison · monorepo CI/CD pipeline · repository strategy enterprise · monorepo migration guide

Frequently Asked Questions

Is a monorepo the same thing as a monolith?

No. A monorepo is a repository strategy, not an architecture pattern. You can have a monorepo containing dozens of independently deployable microservices, shared libraries, and infrastructure code. The code in a monorepo can be highly modular. A monolith is an architecture where all functionality is deployed as a single unit. You can have a monolith in a multi-repo setup and microservices in a monorepo.
