Building a SaaS for radiation dosimetry in Go and React

Dosismart is a dose calculation platform I'm building for the French operational dosimetry market. Zitadel auth, OpenTelemetry tracing, 12 containers in the dev stack, and HDS certification ahead.

I’m building Dosismart, a radiation dose calculation and radionuclide management platform for the French operational dosimetry market. It’s a regulated domain: health data (HDS certification required), which means every architectural decision gets filtered through “can we prove this is secure and auditable.”

The backend

Go 1.25 with Gin for HTTP. The architecture is CSR (Controller-Service-Repository) with centralized utility packages. Controllers handle request parsing and response formatting. Services contain business logic. Repositories talk to PostgreSQL 17 via GORM.

The centralized utilities matter more than the name suggests. Without them, every domain (radionuclides, dose calculations, users, billing) would duplicate the same error handling, pagination, validation, and response-formatting logic. With a shared controller utility, adding a new domain means: define the model, write the service, plug it into the controller pattern. The boilerplate is already handled.

One important rule: unit strings are display-only. If a user enters a value in millimeters, the backend converts it to centimeters (the canonical unit) for storage and calculation, but keeps the original unit string for UI display. The calculation engine never sees anything except canonical units. This sounds obvious, but I’ve seen dose calculation software get it wrong. A unit confusion in dosimetry isn’t a rounding error, it’s a safety incident.
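The rule fits in a few lines. A sketch with invented names and only three length units (the real converter covers the full unit table):

```go
package main

import "fmt"

// Length stores the value in the canonical unit (cm); DisplayUnit is UI-only.
type Length struct {
	CanonicalCM float64
	DisplayUnit string
}

// toCM maps accepted input units to their factor into the canonical unit.
var toCM = map[string]float64{"mm": 0.1, "cm": 1, "m": 100}

// NewLength canonicalizes on the way in; unknown units are rejected, never guessed.
func NewLength(value float64, unit string) (Length, error) {
	factor, ok := toCM[unit]
	if !ok {
		return Length{}, fmt.Errorf("unknown unit %q", unit)
	}
	return Length{CanonicalCM: value * factor, DisplayUnit: unit}, nil
}

// Display converts back to the user's original unit at render time only.
func (l Length) Display() string {
	return fmt.Sprintf("%g %s", l.CanonicalCM/toCM[l.DisplayUnit], l.DisplayUnit)
}

func main() {
	l, _ := NewLength(25, "mm")
	fmt.Println(l.CanonicalCM) // the calculation engine only ever sees 2.5 (cm)
	fmt.Println(l.Display())   // the UI still shows "25 mm"
}
```

The constructor is the only door in, so a non-canonical value can’t reach the engine by accident.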

Auth: Zitadel over Keycloak

I picked Zitadel as the OIDC provider over Keycloak. The deciding factors were:

  1. Zitadel is lighter. Keycloak’s JVM footprint is significant when you’re already running 12 containers locally.
  2. The API is more straightforward. Creating OIDC applications, managing users, and configuring custom claims actions in Keycloak requires navigating a Java admin console. Zitadel has a cleaner REST API.
  3. The entire Zitadel setup is provisioned with Terraform: OIDC applications, users, roles, branding, custom claims. Tear down the dev environment, terraform apply, and you have auth back in 30 seconds.
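For a feel of what that Terraform looks like, here is a rough sketch using the zitadel/zitadel provider. Resource and attribute names are from memory and the values are invented, so treat it as the shape of the config, not something to copy-paste; verify against the provider docs.

```hcl
resource "zitadel_project" "dosismart" {
  org_id = zitadel_org.dev.id
  name   = "dosismart"
}

resource "zitadel_application_oidc" "web" {
  org_id         = zitadel_org.dev.id
  project_id     = zitadel_project.dosismart.id
  name           = "dosismart-web"
  redirect_uris  = ["https://localhost:5173/auth/callback"]
  response_types = ["OIDC_RESPONSE_TYPE_CODE"]
  grant_types    = ["OIDC_GRANT_TYPE_AUTHORIZATION_CODE"]
  app_type       = "OIDC_APP_TYPE_USER_AGENT"
}
```

Users, roles, and claims actions are more resources in the same file, which is what makes the 30-second rebuild possible.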

The dev environment has three pre-configured users: regular user, admin, and biller. Each has different role claims so I can test authorization boundaries without manual setup.

The OIDC flow is standard: React frontend redirects to Zitadel, user authenticates, Zitadel redirects back with tokens, Go backend validates them. The Caddy proxy in front of Zitadel’s login UI handles TLS termination. Getting the proxy config right took longer than the actual OIDC integration because Zitadel’s login UI is a separate Next.js app that expects specific headers.
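The proxy in front of the login UI is roughly this shape. A hypothetical Caddyfile sketch: hostname, upstream name, and port are my assumptions, and whichever headers the login UI actually needs would go in additional header_up lines. Caddy terminates TLS and sets the X-Forwarded-* headers automatically.

```
login.dosismart.localhost {
	reverse_proxy zitadel-login:3000 {
		header_up Host {host}
	}
}
```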

The container situation

The dev environment runs 12+ containers split across composable Compose files:

compose.yaml (orchestrator)
├── compose.app.yaml         # Vite frontend, Go backend, MailHog
├── compose.db.yaml          # PostgreSQL 17, pgAdmin
├── compose.zitadel.yaml     # Zitadel, login UI, Caddy proxy
├── compose.observability.yaml # OTel collector, Fluentd, OpenSearch, Data Prepper
├── compose.seed.yaml        # Database seeding
└── compose.metabase.yaml    # Analytics dashboard

Source code is bind-mounted with hot-reload for both frontend and backend. The Go binary rebuilds inside the container on file change. The React frontend uses Vite’s HMR. This means docker compose up gives you a full production-mirror environment with live editing. No “works on my machine” because everyone runs the same containers.

The split into separate Compose files is deliberate. I don’t always need Metabase running. I don’t always need the full observability stack. docker compose -f compose.yaml -f compose.app.yaml -f compose.db.yaml up gives me the minimum. Add observability when I’m debugging tracing issues.

Observability from day one

OpenTelemetry is integrated into the Go backend from the start. Every HTTP request generates a trace that flows through the OTel collector and into OpenSearch via Data Prepper. Fluentd handles log aggregation.
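That trace path translates into a collector pipeline roughly like the following sketch. The service names are assumptions from my own setup, and the port is Data Prepper’s default OTel trace source port; check both against your Compose files.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # the Go backend exports OTLP here
exporters:
  otlp/data-prepper:
    endpoint: data-prepper:21890 # Data Prepper's OTel trace source
    tls:
      insecure: true             # dev stack only
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/data-prepper]
```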

This sounds like overkill for a product that isn’t live yet. But I’ve joined too many teams where observability was added after launch and it’s always a mess. When a dose calculation endpoint returns the wrong value in production, I want a trace that shows exactly which service was called, what inputs it received, and where the logic diverged. Retrofitting that onto a running system is painful. Building it in from the start is cheap.

Testing strategy

Three layers:

Unit tests: Vitest for the React frontend, Go’s standard testing package for the backend. These run fast and cover business logic in isolation.

API functional tests: A separate TypeScript project that makes real HTTP requests to a running backend. This catches integration issues that unit tests miss, like “the auth middleware rejects valid tokens because the JWKS endpoint URL changed” or “the database migration added a NOT NULL column but the seeder doesn’t populate it.”

E2E tests: Playwright running headless against the full Docker Compose stack. The CI pipeline starts everything up, waits for health checks, runs Playwright, and tears it down. This is slow (minutes, not seconds) but it catches real user-facing bugs.

CI runs in GitLab: test, lint, build, E2E, deploy. The stages are sequential because there’s no point building Docker images if the tests fail.

Production builds

The Dockerfiles use multi-stage builds. Frontend: Node 22 slim for the build step (pnpm install, vite build), then copy static assets into nginx:alpine. Backend: Go 1.25 alpine for the build (static binary with CGO disabled), then copy into gcr.io/distroless/static:nonroot.

The distroless base image for the backend contains no shell, no package manager, nothing except the binary and CA certificates. The process runs as non-root (the distroless nonroot user, UID 65532). This matters for HDS certification because auditors want to see minimal attack surface and least-privilege execution.
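Sketched out, the backend Dockerfile looks something like this (paths, module layout, and the binary name are assumptions):

```dockerfile
# Build stage: static binary, no CGO
FROM golang:1.25-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /out/server ./cmd/server

# Runtime stage: no shell, no package manager, just the binary and CA certs
FROM gcr.io/distroless/static:nonroot
COPY --from=build /out/server /server
USER nonroot
ENTRYPOINT ["/server"]
```

The frontend Dockerfile follows the same pattern with Node 22 slim building into nginx:alpine.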

What’s left

The product architecture is done. The hard part ahead is HDS certification, which involves a security audit, a disaster recovery plan, hosting on French HDS-certified infrastructure, and documentation that proves the system handles personal health data correctly. The technical requirements are manageable. The paperwork is where it gets painful.

Stripe is wired in for billing. Metabase is there for analytics. The deployment will be on Scaleway (which has HDS certification) via the same ArgoCD/Terraform stack I use for everything else. The boring parts are boring by design.