I built a monorepo template because I kept wasting the first two weeks

Every project starts the same way: auth, database, CI, Docker, tests. I got tired of redoing it, so I built a template. Here's what kept going wrong and how I fixed it.

Every project I’ve started in the last few years has had the same first two weeks. Not building the product. Building the stuff around the product. Auth. Database migrations. Docker Compose. CI pipeline. Linting config. Test setup. OpenTelemetry. Every single time.

Sometimes I’d copy from the last project. That meant spending two days untangling business logic from infrastructure code, renaming things, fixing the stuff that rotted since I last touched it. Sometimes I’d start fresh. That meant a week of setup before I could write a single line of domain logic.

I finally built a template. Clone it, docker compose up, start writing features. Here’s every problem that led to it.

“We’ll add auth later”

I’ve said this on three projects. It never goes well.

You start building with user.id = 1 hardcoded somewhere. Routes don’t check permissions. The frontend assumes you’re logged in. It works fine for two months. Then someone says “we need real users” and you discover that auth touches everything. Every API endpoint needs a middleware. Every frontend route needs a guard. The database schema needs tenant scoping. Two weeks of retrofitting.

The template ships with Zitadel (OIDC) fully configured. docker compose up starts Zitadel with its own Postgres, a login UI, and a Terraform container that auto-provisions the OIDC clients. The API has auth middleware from the first request. The frontend has oidc-client-ts wired in with token refresh and session management. Three test users with different roles are pre-created.
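To make “auth middleware from the first request” concrete, here’s a hypothetical sketch in plain TypeScript of the check every route goes through. The names are mine, not the template’s, and a real middleware must also verify the token signature against Zitadel’s JWKS endpoint, which this sketch deliberately skips:

```typescript
// Hypothetical sketch of a bearer-token check (not the template's actual code).
// A real middleware also verifies the signature against Zitadel's JWKS endpoint;
// this only shows the shape: no token, malformed token, or expired token => 401.

type AuthResult =
  | { ok: true; userId: string }
  | { ok: false; status: 401; reason: string };

function checkBearerToken(
  authorization: string | undefined,
  now: number = Date.now()
): AuthResult {
  if (!authorization?.startsWith("Bearer ")) {
    return { ok: false, status: 401, reason: "missing bearer token" };
  }
  const token = authorization.slice("Bearer ".length);
  const parts = token.split(".");
  if (parts.length !== 3) {
    return { ok: false, status: 401, reason: "malformed JWT" };
  }
  // Decode the payload (base64url). Signature verification is omitted here.
  const payload = JSON.parse(
    Buffer.from(parts[1]!, "base64url").toString("utf8")
  ) as { sub?: unknown; exp?: number };
  if (typeof payload.exp === "number" && payload.exp * 1000 < now) {
    return { ok: false, status: 401, reason: "token expired" };
  }
  return { ok: true, userId: String(payload.sub) };
}
```

The point of having this on day one is that no route ever exists without passing through it, so there is nothing to retrofit.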

I never want to retrofit auth again.

“I’ll set up observability when we need it”

You need it on the first production bug. You just don’t know it yet.

I’ve joined two projects where observability was added after launch. Both times it was a mess. Log lines with console.log("here"). No trace IDs connecting a frontend error to a backend failure to a database query. Someone asks “why was this request slow?” and the answer is “let me add some logging and deploy, then try again.”

The template has OpenTelemetry from line one. Every HTTP request gets a trace. The API logs structured JSON via Pino. Prometheus scrapes metrics. The setup is in api/src/core/telemetry/ and it takes zero effort per feature because the middleware handles it.

The trick that makes it painless: every core module has a fallback. No Prometheus running? Metrics go nowhere. No Jaeger? Traces go to stdout. The API starts and works regardless. You don’t have to run the full observability stack locally to develop. But when you need it, it’s already there.
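A minimal sketch of that fallback pattern, with illustrative names (the template’s actual interfaces will differ): feature code talks to a small interface and never knows whether a real backend is behind it. Startup picks the implementation once.

```typescript
// Illustrative sketch of the fallback idea, not the template's actual API.
// Feature code depends on this interface only.
interface Metrics {
  increment(name: string, value?: number): void;
}

// Fallback: no Prometheus running, metrics go nowhere. The API still works.
class NoopMetrics implements Metrics {
  increment(): void {
    /* intentionally empty */
  }
}

// Stand-in for a real backend (a real impl would use a Prometheus client).
class InMemoryMetrics implements Metrics {
  counts = new Map<string, number>();
  increment(name: string, value = 1): void {
    this.counts.set(name, (this.counts.get(name) ?? 0) + value);
  }
}

// Decided once at startup; every call site stays identical either way.
function createMetrics(prometheusConfigured: boolean): Metrics {
  return prometheusConfigured ? new InMemoryMetrics() : new NoopMetrics();
}
```

Because the choice happens once at startup, turning the full observability stack on later is a configuration change, not a refactor.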

“The tests will come later”

They won’t.

The template has three test layers baked in from the start:

Unit tests run with Bun’s built-in test runner. Fast, no setup. Each DDD feature has a test/ directory with tests for the domain and application layers.

Integration tests boot the real API against a real Postgres and make HTTP requests. These catch the bugs that unit tests miss: auth middleware rejecting valid tokens because the JWKS endpoint changed, a Drizzle migration that added a NOT NULL column the seeder doesn’t populate.

E2E tests use CodeceptJS with Playwright. The CI pipeline spins up all 15 Docker services, waits for health checks, runs the tests with 4 parallel workers, and collects coverage. It’s slow. But it catches “the login flow breaks when Zitadel returns a slightly different claim format” which nothing else catches.

The CI pipeline enforces all three. Quality checks (lint, format, typecheck) run in parallel per package. Unit tests run next. Integration tests after that. Docker builds only happen if tests pass. E2E runs last against the built images. If any stage fails, nothing deploys.

I don’t have to decide to add tests. They’re already there. I just write them alongside the feature because the infrastructure expects them.

“Docker Compose is just for dev, we’ll figure out prod later”

The template’s docker-compose.yml runs 15 services:

  • The API (Bun + Elysia)
  • The frontend (Vite dev server)
  • PostgreSQL (app database)
  • Redis + Valkey (cache)
  • Zitadel + its Postgres + login UI + Terraform provisioner (auth)
  • MailHog (dev email)
  • Prometheus (metrics)
  • Metabase (analytics dashboards)
  • PostHog (product analytics)

That sounds like a lot. It is. But every one of these exists because I had a project where we didn’t have it and paid for it later. No local email catcher? “I’ll test the password reset flow in staging,” and then a test run sends real emails to real users. No analytics? “How many people use this feature?” gets a shrug.

The Compose file has health checks on everything. The Terraform container waits for Zitadel to be healthy, provisions the OIDC clients, and writes a config file that the API reads on startup. The API waits for Postgres. It all comes up in the right order.
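The ordering mechanism is standard Compose: health checks plus `depends_on` conditions. A hypothetical fragment (service names and commands are illustrative, not the template’s actual file) showing the shape:

```yaml
# Illustrative fragment, not the template's actual compose file.
services:
  postgres:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
  api:
    build: ./api
    depends_on:
      postgres:
        condition: service_healthy   # api only starts once Postgres answers
```

With `condition: service_healthy` on every edge of the dependency graph, `docker compose up` resolves the whole startup order itself.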

New developer joins? git clone, docker compose up, write code. Not “install Go 1.22, Node 18, Postgres 15, set up Zitadel manually, configure 47 environment variables.”

“We’ll clean up the architecture when it gets messy”

It’s always messy. You just stop noticing.

The template enforces DDD from the start. Every feature has four layers:

api/src/features/<feature>/
├── domain/           # Entities, value objects, repository contracts
├── application/      # Use cases
├── infrastructure/   # Drizzle repos, external adapters
├── presentation/     # Routes, DTOs
├── container/        # DI bindings (Inversify)
└── test/

The domain defines interfaces. The infrastructure implements them. The application uses the interfaces. Inversify wires them together. If I want to test a use case without Postgres, I inject a fake repository. If I want to swap Redis for Memcached, I write a new adapter and change one DI binding.
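A stripped-down sketch of that wiring in plain TypeScript (names are illustrative, and the template uses Inversify rather than manual construction):

```typescript
// Illustrative sketch of the dependency-inversion pattern, hand-wired
// instead of Inversify-wired for brevity.

// domain/: the contract. Nothing here knows about Postgres or Drizzle.
interface UserRepository {
  findName(id: string): Promise<string | undefined>;
}

// application/: the use case depends only on the interface.
class GreetUser {
  constructor(private readonly users: UserRepository) {}
  async execute(id: string): Promise<string> {
    const name = await this.users.findName(id);
    return name ? `Hello, ${name}` : "Hello, stranger";
  }
}

// test/: a fake repository, so the use case is testable without a database.
class FakeUserRepository implements UserRepository {
  constructor(private readonly data: Record<string, string>) {}
  async findName(id: string) {
    return this.data[id];
  }
}
```

In the template, infrastructure/ holds the Drizzle-backed implementation of the same interface and container/ binds one or the other, which is why swapping a real dependency for a fake (or Redis for Memcached) is a one-binding change.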

Error handling uses Effect.ts. No thrown exceptions in business logic. Every operation returns a typed result: success or a typed error. The presentation layer maps these to HTTP status codes. Error paths are explicit, testable, and visible in the type system.

Is this over-engineered for a solo project? Maybe. But I’ve watched three codebases turn into unmaintainable spaghetti because “we’ll add structure later.” Later never comes. Starting with structure is cheaper than adding it retroactively.

“Turborepo is overkill for two packages”

It’s not, because of caching.

The monorepo has four packages: api, web, e2e, packages/shared. pnpm workspaces manage dependencies. Turborepo orchestrates builds.

The real value: if I change a file in web/, running pnpm test skips the API tests entirely. Turbo hashes the inputs and checks the cache. Unchanged packages get a cache hit. On a full pipeline that runs lint, typecheck, unit tests, integration tests, and two Docker builds, the cache cuts execution time in half for most changes. The turbo.json that drives it:

{
  "tasks": {
    "build": { "dependsOn": ["^build"], "outputs": ["dist/**"] },
    "test": { "dependsOn": ["^build"] },
    "lint": {},
    "dev": { "persistent": true, "cache": false }
  }
}

^build means “build my dependencies first.” So packages/shared builds before api and web, which depend on it. Everything else parallelizes.

CI uses Kaniko for Docker builds instead of Docker-in-Docker. No privileged containers, no Docker daemon in CI. Just a regular container that builds images. One less security problem.

The template ships with examples you delete

There’s a working organisation feature with full multi-tenancy, RBAC, member management. It goes through all four DDD layers with unit tests, integration tests, and benchmarks. There’s also a simpler example and an impersonation feature for staff access.

The point isn’t to keep them. It’s to read them, understand the patterns, then delete them and write your features. The docs/ directory has architecture guides for every core module and step-by-step feature implementation guides, because I kept forgetting my own patterns after a few weeks away.

Was it worth three weeks?

The template took about three weeks of focused work. Since then I’ve used it to start Dosismart (which is now a real product) and one project that didn’t go anywhere. Both times I went from “new repo” to “writing business logic” in under an hour. Auth worked. CI worked. Docker worked. Tests worked.

The maintenance cost is real. Bun, Elysia, Drizzle, Effect.ts all move fast. Dependencies need bumping every few weeks. Patterns evolve. Improvements from active projects need porting back. The template is alive, not a one-time scaffold.

But the alternative is doing the same two weeks of setup every time. And every time, forgetting one thing (auth, observability, test infrastructure, Docker health checks) that costs more to add later than it would have cost to include from the start.

I’ll take the maintenance.