The 12-Factor App in 2026 — Revisiting Cloud-Native Best Practices

Introduction

In 2011, Heroku published the 12-Factor App methodology. It remains the gold standard for building cloud-native applications. Some principles have evolved alongside Kubernetes and serverless, but the core ideas—strict separation of config and code, treating logs as event streams, disposability—are more relevant than ever.

I. Codebase: One Codebase, Many Deployments (Monorepo Considerations)

One codebase tracked in version control, deployed to dev, staging, production. In 2026, this often means a monorepo with multiple services:

// Example: Monorepo structure
repo/
├── packages/
│   ├── api-server/              # Deployed as separate service
│   │   ├── src/
│   │   ├── Dockerfile
│   │   └── package.json
│   ├── worker/                  # Deployed separately, same repo
│   │   ├── src/
│   │   ├── Dockerfile
│   │   └── package.json
│   ├── shared-lib/              # Shared code, published to npm
│   │   ├── src/
│   │   └── package.json
│   └── database/                # Migrations
├── infrastructure/              # IaC
├── lerna.json                   # Monorepo management
└── .github/workflows/           # CD for each service

// Problem 1: Each service has separate codebase (forbidden)
// repo-api/ (git repo)
// repo-worker/ (different git repo)
// repo-shared/ (third git repo)
// WRONG - violates factor I

// Problem 2: Different builds per environment
// api-server/Dockerfile.dev
// api-server/Dockerfile.prod
// WRONG - the same build artifact should run everywhere; only config differs

// Correct: One codebase, build once, deploy everywhere
// Same Docker image runs in dev, staging, prod
// Environment differences via config (Factor III)

II. Dependencies: Explicit Declaration

Declare all dependencies in a manifest. Never rely on system packages:

// Good: All dependencies explicit
// package.json
{
  "dependencies": {
    "express": "^4.18.2",
    "prisma": "^5.0.0",
    "pino": "^8.0.0"
  },
  "devDependencies": {
    "@types/node": "^20.0.0",
    "jest": "^29.0.0"
  }
}

// package-lock.json or pnpm-lock.yaml (locked versions, committed to git)

// Dockerfile: Install from manifest, never assume system packages
FROM node:20-alpine

WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["npm", "start"]

// Bad: Relying on system packages
FROM ubuntu:22.04

WORKDIR /app
RUN apt-get update && apt-get install -y nodejs npm  # Version unpredictable
COPY . .
RUN npm install
CMD ["npm", "start"]

// Why it matters:
// Developer on macOS with Homebrew Node 18
// CI/CD has Node 20
// Production has Node 16
// Bugs only appear in production due to version differences

III. Config: Strict Separation of Config and Code

Store configuration in environment variables, never in code:

// Good: Config from environment
import dotenv from 'dotenv'
dotenv.config()

const config = {
  port: parseInt(process.env.PORT || '3000', 10),
  nodeEnv: process.env.NODE_ENV || 'development',
  database: {
    url: process.env.DATABASE_URL,
    pool: parseInt(process.env.DB_POOL_SIZE || '20', 10)
  },
  redis: {
    url: process.env.REDIS_URL || 'redis://localhost:6379'
  },
  auth: {
    jwtSecret: process.env.JWT_SECRET,
    jwtExpiry: process.env.JWT_EXPIRY || '24h'
  },
  external: {
    stripeKey: process.env.STRIPE_SECRET_KEY,
    sendgridKey: process.env.SENDGRID_API_KEY
  }
}

// Validate required env vars once, at startup—fail fast before serving traffic
export function validateConfig() {
  if (!config.database.url) throw new Error('DATABASE_URL not set')
  if (!config.auth.jwtSecret) throw new Error('JWT_SECRET not set')
  if (process.env.NODE_ENV === 'production' && !config.external.stripeKey) {
    throw new Error('STRIPE_SECRET_KEY required in production')
  }
}

export const getConfig = () => config

// Bad: Hardcoded config
const config = {
  port: 3000,
  database: 'postgresql://user:password@localhost:5432/myapp',
  stripeKey: 'sk_live_abc123'
}

// Bad: Config per environment (violates Factor I)
const config =
  process.env.NODE_ENV === 'production'
    ? { /* prod config */ }
    : { /* dev config */ }
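The fail-fast validation pattern can be factored into a small helper so every required variable is read once, at startup, and missing values surface immediately. A minimal sketch (the `requireEnv` and `envOr` names are ours, not from any library):

```typescript
// Read a required variable or throw at startup (fail fast)
function requireEnv(name: string): string {
  const value = process.env[name]
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`)
  }
  return value
}

// Read an optional variable with a default
function envOr(name: string, fallback: string): string {
  return process.env[name] ?? fallback
}

// Usage: all reads happen at module load, never deep inside request handlers
// const databaseUrl = requireEnv('DATABASE_URL')
// const port = parseInt(envOr('PORT', '3000'), 10)
```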

IV. Backing Services: Treat as Attached Resources

Any resource the app consumes (database, cache, queue) is addressed by a URL from config, never wired in with environment-specific code:

// Good: Services as URLs
const dbUrl = new URL(process.env.DATABASE_URL!)
const redisUrl = new URL(process.env.REDIS_URL!)
const sqsUrl = new URL(process.env.SQS_QUEUE_URL!)

// Initialize from URLs
const prisma = new PrismaClient({
  datasources: {
    db: { url: process.env.DATABASE_URL }
  }
})

const redisClient = redis.createClient({ url: process.env.REDIS_URL })

// SQS: the queue URL is passed on each command, not as the client endpoint
const sqs = new SQSClient({})
// e.g. new SendMessageCommand({ QueueUrl: process.env.SQS_QUEUE_URL, ... })

// Swap implementations by changing URL
// In dev: PostgreSQL local
// In production: Aurora PostgreSQL
// Code unchanged

// Bad: Special casing for different backing services
let db
if (process.env.NODE_ENV === 'test') {
  db = new InMemoryDatabase()
} else if (process.env.NODE_ENV === 'production') {
  db = new PostgresDatabase()
} else {
  db = new SQLiteDatabase()
}

// Bad: Importing provider SDK directly
import { MongoClient } from 'mongodb'
import { DynamoDBClient } from '@aws-sdk/client-dynamodb'
// Now switching databases requires code changes
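Because a backing service is just a URL, its components can be pulled apart with the standard WHATWG `URL` class when a driver wants host, port, and database name separately. A sketch (the connection string is a made-up example; in the app it would come from `process.env.DATABASE_URL`):

```typescript
// Parse a backing-service URL into the pieces a driver may want
function parseDbUrl(raw: string) {
  const u = new URL(raw)
  return {
    host: u.hostname,
    port: parseInt(u.port, 10),
    user: u.username,
    password: u.password,
    database: u.pathname.slice(1)  // strip leading '/'
  }
}

const conn = parseDbUrl('postgresql://app:secret@db.internal:5432/orders')
// conn.host === 'db.internal', conn.port === 5432, conn.database === 'orders'
```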

V. Build, Release, Run: Strict Separation

Three distinct stages, each producing immutable artifacts:

# STAGE 1: BUILD (codebase + dependencies → runnable image)
docker build -t myapp:sha-abc123 .
# Output: Docker image (immutable)

# STAGE 2: RELEASE (image + config → release)
# Tag image with build number, copy config
docker tag myapp:sha-abc123 myapp:v1.2.3
docker push myapp:v1.2.3
# Output: Tagged image in registry (immutable)

# STAGE 3: RUN (release → running process)
docker run \
  -e DATABASE_URL=postgres://prod \
  -e NODE_ENV=production \
  myapp:v1.2.3
# Output: Running container

# Why three stages matter:
# - Builds never happen in production (slow, risky at deploy time)
# - Running code is never edited in place (releases are immutable)
# - The flow is one-way: Build → Release → Run; any change means a new release
// Example: CI/CD pipeline implementing this
// .github/workflows/deploy.yml

name: Build, Release, Run

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      image: ${{ steps.build.outputs.image }}
    steps:
      - uses: actions/checkout@v3

      - name: Build image
        id: build
        run: |
          IMAGE=myapp:${{ github.sha }}
          docker build -t $IMAGE .
          docker push $IMAGE
          echo "image=$IMAGE" >> $GITHUB_OUTPUT

  release:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Tag release
        run: |
          docker pull ${{ needs.build.outputs.image }}
          docker tag ${{ needs.build.outputs.image }} myapp:v${{ github.run_number }}
          docker push myapp:v${{ github.run_number }}

  deploy:
    needs: release
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to production
        run: |
          # Deploy release image to production
          kubectl set image deployment/myapp \
            app=myapp:v${{ github.run_number }}

VI. Processes: Stateless and Sharing Nothing

Processes don't store state. Session data goes to Redis or database:

// Bad: Storing state in process memory
let sessionStore = new Map()

app.post('/login', (req, res) => {
  sessionStore.set(req.body.userId, { loggedIn: true })
  res.json({ success: true })
})

app.get('/profile', (req, res) => {
  if (sessionStore.get(req.user.id)?.loggedIn) {
    // This only works if user hits same process instance!
    // If load balancer routes next request to different container, fails
    res.json({ profile: true })
  }
})

// Good: State in persistent layer
app.post('/login', async (req, res) => {
  const token = jwt.sign({ userId: req.body.userId }, process.env.JWT_SECRET)
  // Token is self-contained, doesn't need server-side session
  res.json({ token })
})

app.get('/profile', async (req, res) => {
  // Or if you must store session:
  const session = await redis.get(`session:${req.user.id}`)
  // Redis shared across all instances
  res.json({ profile: true })
})

// Benefit: Processes are interchangeable
// Kubernetes can kill any pod, spin up new one
// Traffic continues without dropped sessions
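The self-contained-token idea can be shown without a JWT library, using only Node's crypto: an HMAC over the payload lets any instance verify a token no matter which instance issued it. A sketch only—use a vetted JWT implementation in real code:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

const SECRET = process.env.JWT_SECRET ?? 'dev-only-secret'

// Token = base64url(payload) + '.' + HMAC(payload); stateless and verifiable anywhere
function sign(payload: Record<string, unknown>): string {
  const body = Buffer.from(JSON.stringify(payload)).toString('base64url')
  const mac = createHmac('sha256', SECRET).update(body).digest('base64url')
  return `${body}.${mac}`
}

function verify(token: string): Record<string, unknown> | null {
  const [body, mac] = token.split('.')
  if (!body || !mac) return null
  const expected = createHmac('sha256', SECRET).update(body).digest('base64url')
  if (mac.length !== expected.length) return null
  if (!timingSafeEqual(Buffer.from(mac), Buffer.from(expected))) return null
  return JSON.parse(Buffer.from(body, 'base64url').toString())
}
```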

VII. Port Binding: Self-Contained Service

The app exposes HTTP by binding to a port itself; it does not rely on a separate web server:

// Good: App includes HTTP server
import express from 'express'

const app = express()
app.get('/health', (req, res) => res.json({ ok: true }))

const port = process.env.PORT || 3000
app.listen(port, () => console.log(`Server running on ${port}`))

// Dockerfile runs app directly
CMD ["node", "dist/server.js"]

// Bad: Separate web server
# Dockerfile
FROM node:20
COPY . .
RUN npm install

# Expects separate Nginx reverse proxy
# App is not self-contained

VIII. Concurrency: Horizontal Scaling Via Multiple Processes

Scale by running more processes, not by making processes bigger:

// Process model:
// web: HTTP server (scales with traffic)
// worker: Background jobs (scales with queue depth)
// scheduler: Runs periodic tasks (single instance)

// One image supports multiple start commands, one per process type
# web process
CMD ["node", "dist/server.js"]
# worker process (alternative)
# CMD ["node", "dist/worker.js"]
# scheduler process (alternative)
# CMD ["node", "dist/scheduler.js"]

// Kubernetes Deployment specifies the process type
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-web
spec:
  replicas: 5  # 5 web processes
  template:
    spec:
      containers:
      - name: web
        image: myapp:v1.2.3
        env:
        - name: PROCESS_TYPE
          value: "web"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-worker
spec:
  replicas: 2  # 2 worker processes
  template:
    spec:
      containers:
      - name: worker
        image: myapp:v1.2.3
        env:
        - name: PROCESS_TYPE
          value: "worker"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-scheduler
spec:
  replicas: 1  # Only 1 scheduler (single instance)
  template:
    spec:
      containers:
      - name: scheduler
        image: myapp:v1.2.3
        env:
        - name: PROCESS_TYPE
          value: "scheduler"
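Inside the container, a single entrypoint can dispatch on PROCESS_TYPE so one image serves all three Deployments. A sketch (the module paths and `web` default are illustrative):

```typescript
// entrypoint.ts — one image, many process types (Factor VIII)
const ENTRYPOINTS: Record<string, string> = {
  web: './dist/server.js',
  worker: './dist/worker.js',
  scheduler: './dist/scheduler.js'
}

// Map PROCESS_TYPE to a module path, defaulting to the web process
function resolveEntrypoint(processType: string | undefined): string {
  const entry = ENTRYPOINTS[processType ?? 'web']
  if (!entry) {
    throw new Error(`Unknown PROCESS_TYPE: ${processType}`)
  }
  return entry
}

// At container start:
// await import(resolveEntrypoint(process.env.PROCESS_TYPE))
```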

IX. Disposability: Fast Startup, Graceful Shutdown

Processes should start quickly and handle SIGTERM gracefully:

// Good: Handle shutdown signals
import express from 'express'

const app = express()

const server = app.listen(3000, () => {
  console.log('Started')
})

// Handle graceful shutdown
process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down gracefully')

  // Safety net: force-exit if shutdown hangs past the timeout
  const timeout = setTimeout(() => {
    console.log('Timeout, force exiting')
    process.exit(1)
  }, 30000)  // 30 second timeout

  // Stop accepting new connections; wait for in-flight requests to finish
  server.close(async () => {
    console.log('HTTP server closed')

    // Close database connections
    await prisma.$disconnect()
    clearTimeout(timeout)

    console.log('Shutdown complete')
    process.exit(0)
  })
})

// Bad: Ignoring shutdown signals
app.listen(3000) // Doesn't handle SIGTERM
// Container gets killed with in-flight requests dropped

// Bad: Slow startup
app.use(async (req, res, next) => {
  // Connecting to database on every request
  const db = await connect()
  req.db = db
  next()
})
// Startup slow, Kubernetes thinks container is dead
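The fix for the per-request connection anti-pattern is to create the client once and reuse it across handlers. A sketch with a stand-in `connect` function (substitute your real driver; the counter exists only to show the connection happens once):

```typescript
// Stand-in for an expensive driver connection (TCP handshake, auth, TLS)
let connectCalls = 0
function connect(): { query: (sql: string) => string } {
  connectCalls++
  return { query: (sql) => `rows for: ${sql}` }
}

// Lazily create the connection once; every caller shares it
let db: ReturnType<typeof connect> | null = null
function getDb() {
  if (!db) db = connect()
  return db
}

// Handlers reuse the shared client instead of reconnecting per request:
// app.get('/orders', (req, res) => {
//   res.json(getDb().query('SELECT * FROM orders'))
// })
```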

X. Dev/Prod Parity: Keep Environments Nearly Identical

Same code, dependencies, and backing services locally and in production:

// Development setup
docker-compose up -d

# Brings up:
# - api-server (same image as production)
# - postgres (same version as production)
# - redis (same version as production)
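A compose file that mirrors production might look like this (service names, versions, and credentials are illustrative—pin them to whatever production actually runs):

```yaml
# docker-compose.yml — same engine versions as production
services:
  api-server:
    build: .                  # same Dockerfile as the production image
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://postgres:password@postgres:5432/dev
      REDIS_URL: redis://redis:6379
  postgres:
    image: postgres:16        # match the production PostgreSQL major version
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: dev
  redis:
    image: redis:7            # match the production Redis major version
```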

# src/index.ts
const prisma = new PrismaClient({
  datasources: {
    db: { url: process.env.DATABASE_URL }
  }
})

// In dev: DATABASE_URL=postgresql://postgres:password@localhost:5432/dev
// In prod: DATABASE_URL=postgresql://[prod-cluster]

// Same code connects to both

// Bad: Different tools locally vs production
# Local development
sqlite3 myapp.db   # SQLite

# Production
RDS Aurora PostgreSQL  # Different database entirely

# Code has conditional logic
if (process.env.NODE_ENV === 'production') {
  // PostgreSQL
} else {
  // SQLite
}

// This causes: "It works on my machine" bugs

XI. Logs: Treat as Event Streams

Write logs to stdout, let infrastructure handle routing:

// Good: Log to stdout as JSON
import pino from 'pino'

const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  // pino-pretty is a local-dev convenience; production writes raw JSON to stdout
  ...(process.env.NODE_ENV !== 'production' && {
    transport: {
      target: 'pino-pretty',
      options: { colorize: true }
    }
  })
})

app.get('/orders', (req, res) => {
  logger.info({ userId: req.user.id, action: 'list_orders' })
  res.json({ orders: [] })
})

// Output to stdout (line-delimited JSON; pino's default timestamp is epoch ms)
// {"level":30,"time":1773532800000,"userId":"123","action":"list_orders"}

// Infrastructure routes logs
# Dockerfile
CMD ["node", "dist/server.js"]  # Logs go to stdout

# Kubernetes catches stdout → logs to aggregator (ELK, Datadog, etc)

// Bad: Writing logs to files
fs.appendFileSync('/var/log/myapp.log', logLine)
// Logs are trapped inside the container's filesystem
// and lost when the container restarts

// Bad: Conditional logging
if (process.env.NODE_ENV === 'development') {
  console.log('debug info')  // Only in dev!
}
// Missing logs in production
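The stdout contract is small enough to satisfy without a library: one JSON object per line. A minimal sketch (pino adds levels, serializers, redaction, and child loggers on top of exactly this):

```typescript
// Minimal structured logger: one JSON object per line on stdout
function logLine(level: string, fields: Record<string, unknown>): string {
  const entry = { level, time: new Date().toISOString(), ...fields }
  const line = JSON.stringify(entry)
  process.stdout.write(line + '\n')
  return line
}

// logLine('info', { userId: '123', action: 'list_orders' })
```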

XII. Admin Processes: Run as One-Off Tasks

Database migrations, backups, data cleanup—run as separate processes:

// Migrations: Run once per deployment, before the new code serves traffic
# Same image, with the command overridden at run time:
docker run myapp:v1.2.3 node dist/migrate.js
# or as a separate Kubernetes Job:

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  template:
    spec:
      containers:
      - name: migrate
        image: myapp:v1.2.3
        command: ["npm", "run", "db:migrate"]
        env:
        - name: DATABASE_URL
          value: postgresql://prod
      restartPolicy: Never

---
# Main deployment runs after migration succeeds
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
      - name: app
        image: myapp:v1.2.3

// Database maintenance: Run as one-off task
npm run db:cleanup   # Delete old sessions, cache
npm run db:reindex   # Rebuild indices

// Don't run admin tasks in web processes
// Bad: Migration in startup
app.listen(3000, async () => {
  await prisma.$executeRawUnsafe('ALTER TABLE users ADD COLUMN ...')
  // Slow startup, blocks requests, risky if multiple instances start simultaneously
})
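A small task runner keeps admin code in the same codebase, running against the same config (per Factor XII), while staying out of the web process. Task names and bodies here are illustrative:

```typescript
// tasks.ts — one-off admin tasks, invoked as `node dist/tasks.js <name>`
const TASKS: Record<string, () => string> = {
  'db:cleanup': () => 'deleted expired sessions',
  'db:reindex': () => 'rebuilt indices'
}

// Look up and run a named task; unknown names fail loudly
function runTask(name: string): string {
  const task = TASKS[name]
  if (!task) throw new Error(`Unknown task: ${name}`)
  return task()
}

// Entrypoint: console.log(runTask(process.argv[2]))
```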

Checklist

  • Single codebase tracked in git, deployed to dev/staging/prod
  • All dependencies explicit in a manifest (package.json, Gemfile, etc.)
  • Config only from environment variables, no hardcoded secrets
  • Database, cache, queue accessed via URL, not special code
  • Build produces immutable image, release tags it, run executes it
  • No session state stored in process memory
  • App exports HTTP via port, self-contained
  • Scale via process replication, not per-process growth
  • Handles SIGTERM gracefully with timeout
  • Dev environment matches prod (Docker Compose mirrors prod stack)
  • Logs written to stdout as structured JSON
  • Database migrations run as separate one-off tasks before deployment

Conclusion

The 12-Factor App isn't about specific tools—it's about principles. Whether you deploy to Kubernetes, Lambda, or Heroku, these factors lead to systems that are reliable, scalable, and maintainable. They've only become more relevant as infrastructure abstractions have improved.