Integration Testing With TestContainers — Real Databases in Your CI Pipeline
Author: Sanjeev Sharma (@webcoderspeed1)
Introduction
Integration tests against mock databases are lies you tell yourself. Real tests need real dependencies. TestContainers spins up actual Docker containers per test—PostgreSQL, Redis, RabbitMQ—and tears them down afterward. No flakiness from stale state. No surprises when production uses different connection pooling behavior. This post covers TestContainers for Node.js with Vitest, database isolation strategies, parallel test execution, and the CI performance trade-offs.
- TestContainers Setup for PostgreSQL
- Per-Test Database Isolation
- Redis Container for Cache Testing
- Kafka Consumer Testing
- Parallel Test Execution
- CI Performance Optimization
- Checklist
- Conclusion
TestContainers Setup for PostgreSQL
Install dependencies:
npm install --save-dev testcontainers @testcontainers/postgresql vitest
npm install pg
Create a test setup file that spins up a container once per test suite:
// test/db.setup.ts
import { PostgreSqlContainer, StartedPostgreSqlContainer } from '@testcontainers/postgresql';
import { Pool } from 'pg';

let container: StartedPostgreSqlContainer;
let pool: Pool;

export async function setupDatabase() {
  container = await new PostgreSqlContainer()
    .withDatabase('testdb')
    .withUsername('testuser')
    .withPassword('testpass')
    .start();

  pool = new Pool({
    host: container.getHost(),
    port: container.getMappedPort(5432),
    user: 'testuser',
    password: 'testpass',
    database: 'testdb',
  });

  // Run migrations
  const client = await pool.connect();
  try {
    await client.query(`
      CREATE TABLE users (
        id SERIAL PRIMARY KEY,
        email VARCHAR(255) NOT NULL UNIQUE,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      );
    `);
  } finally {
    client.release();
  }

  return { container, pool };
}

export async function teardownDatabase() {
  await pool.end();
  await container.stop();
}

export function getPool(): Pool {
  return pool;
}
Use in your test file:
// test/users.test.ts
import { describe, it, expect, beforeAll, afterAll } from 'vitest';
import { setupDatabase, teardownDatabase, getPool } from './db.setup';
import { UserRepository } from '../src/users';

describe('UserRepository', () => {
  let repo: UserRepository;

  beforeAll(async () => {
    await setupDatabase();
    repo = new UserRepository(getPool());
  });

  afterAll(async () => {
    await teardownDatabase();
  });

  it('should create a user', async () => {
    const user = await repo.create({
      email: 'alice@example.com',
    });
    expect(user).toHaveProperty('id');
    expect(user.email).toBe('alice@example.com');
  });

  it('should find user by email', async () => {
    await repo.create({ email: 'bob@example.com' });
    const user = await repo.findByEmail('bob@example.com');
    expect(user).toBeDefined();
  });
});
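The tests import `UserRepository` from `../src/users`, which the post doesn't show. A minimal sketch of what it might look like — the `Queryable` interface is my own invention, kept structural so the sketch doesn't depend on `pg` directly (a real `pg.Pool` satisfies it):

```typescript
// Hypothetical UserRepository matching the tests above. Any object with a
// pg-style query(sql, params) method works, including pg.Pool.
export interface Queryable {
  query(sql: string, params?: unknown[]): Promise<{ rows: any[] }>;
}

export class UserRepository {
  constructor(private db: Queryable) {}

  async create(input: { email: string }) {
    // RETURNING * hands back the inserted row, including the generated id
    const { rows } = await this.db.query(
      'INSERT INTO users (email) VALUES ($1) RETURNING *',
      [input.email]
    );
    return rows[0];
  }

  async findByEmail(email: string) {
    const { rows } = await this.db.query(
      'SELECT * FROM users WHERE email = $1',
      [email]
    );
    return rows[0]; // undefined when no match
  }
}
```

Parameterized queries (`$1`) keep the sketch injection-safe; swap in your real implementation's column mapping as needed.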
Per-Test Database Isolation
Sharing database state between tests causes flakiness. Use transaction rollback for speed or table truncation for safety.
Transaction rollback approach (fastest):
// test/db.isolation.ts
import { beforeEach, afterEach } from 'vitest';
import { PoolClient } from 'pg';
import { getPool } from './db.setup';
let client: PoolClient;
export function isolateWithTransaction() {
  beforeEach(async () => {
    client = await getPool().connect();
    await client.query('BEGIN');
  });
  afterEach(async () => {
    await client.query('ROLLBACK');
    client.release();
  });
  return { getClient: () => client };
}
Use in tests (the function only registers hooks, so it's called synchronously inside describe):
describe('UserRepository with transaction isolation', () => {
  const { getClient } = isolateWithTransaction();

  it('test 1 - creates user', async () => {
    const client = getClient();
    await client.query(
      'INSERT INTO users (email) VALUES ($1)',
      ['user1@example.com']
    );
  });

  it('test 2 - user1 should not exist (rollback worked)', async () => {
    const client = getClient();
    const result = await client.query(
      'SELECT * FROM users WHERE email = $1',
      ['user1@example.com']
    );
    expect(result.rows).toHaveLength(0);
  });
});
Truncate approach (safer for schema changes):
export function isolateWithTruncate() {
  beforeEach(async () => {
    const client = await getPool().connect();
    try {
      await client.query('TRUNCATE TABLE users CASCADE');
    } finally {
      client.release();
    }
  });
}
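Once the schema grows beyond one table, issuing a single combined TRUNCATE sidesteps foreign-key ordering problems and resets sequences too. A small helper sketch (`buildTruncateSql` is my own, not from the post; table names are placeholders):

```typescript
// Builds one TRUNCATE covering every listed table. A single statement lets
// Postgres handle foreign-key dependencies; RESTART IDENTITY resets serials.
export function buildTruncateSql(tables: string[]): string {
  if (tables.length === 0) {
    throw new Error('no tables to truncate');
  }
  // Double-quote identifiers, escaping embedded quotes, to survive
  // mixed-case or reserved-word table names.
  const quoted = tables.map((t) => `"${t.replace(/"/g, '""')}"`);
  return `TRUNCATE TABLE ${quoted.join(', ')} RESTART IDENTITY CASCADE`;
}
```

Usage: `await client.query(buildTruncateSql(['users', 'orders']))` inside the beforeEach hook.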
Redis Container for Cache Testing
// test/redis.setup.ts
import { GenericContainer, StartedTestContainer, Wait } from 'testcontainers';
import { createClient } from 'redis';

let redisContainer: StartedTestContainer;
let redisClient: ReturnType<typeof createClient>;

export async function setupRedis() {
  redisContainer = await new GenericContainer('redis:7-alpine')
    .withExposedPorts(6379)
    .withHealthCheck({
      test: ['CMD', 'redis-cli', 'ping'],
      interval: 100, // milliseconds
      timeout: 3000,
      retries: 3,
    })
    // Without this, start() does not actually wait for the health check
    .withWaitStrategy(Wait.forHealthCheck())
    .start();

  redisClient = createClient({
    socket: {
      host: redisContainer.getHost(),
      port: redisContainer.getMappedPort(6379),
    },
  });
  await redisClient.connect();
  return redisClient;
}

export async function teardownRedis() {
  await redisClient.disconnect();
  await redisContainer.stop();
}
Test with Vitest:
import { describe, it, expect, beforeAll, afterAll, beforeEach } from 'vitest';
import { createClient } from 'redis';
import { setupRedis, teardownRedis } from './redis.setup';
import { CacheService } from '../src/cache'; // adjust to your cache module's path
describe('CacheService', () => {
  let redis: ReturnType<typeof createClient>;

  beforeAll(async () => {
    redis = await setupRedis();
  });

  afterAll(async () => {
    await teardownRedis();
  });

  beforeEach(async () => {
    await redis.flushAll();
  });

  it('should cache user data', async () => {
    const cacheService = new CacheService(redis);
    await cacheService.set('user:1', { id: 1, name: 'Alice' });
    const cached = await cacheService.get('user:1');
    expect(cached).toEqual({ id: 1, name: 'Alice' });
  });

  it('should expire keys', async () => {
    const cacheService = new CacheService(redis);
    await cacheService.setWithTTL('session:123', { data: 'xyz' }, 1);
    await new Promise(resolve => setTimeout(resolve, 1100));
    const expired = await cacheService.get('session:123');
    expect(expired).toBeNull();
  });
});
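The tests assume a `CacheService` with `set`, `get`, and `setWithTTL`, which the post doesn't show. A plausible sketch — hypothetical, typed against a structural client interface so it stands alone; the `{ EX: seconds }` option matches node-redis v4's `set` API:

```typescript
// Hypothetical CacheService matching the tests: JSON-serializes values into
// a Redis-like key-value client.
export interface KvClient {
  set(key: string, value: string, opts?: { EX?: number }): Promise<unknown>;
  get(key: string): Promise<string | null>;
}

export class CacheService {
  constructor(private client: KvClient) {}

  async set(key: string, value: unknown): Promise<void> {
    await this.client.set(key, JSON.stringify(value));
  }

  async setWithTTL(key: string, value: unknown, ttlSeconds: number): Promise<void> {
    // EX sets the expiry in seconds, so Redis evicts the key itself
    await this.client.set(key, JSON.stringify(value), { EX: ttlSeconds });
  }

  async get<T>(key: string): Promise<T | null> {
    const raw = await this.client.get(key);
    return raw === null ? null : (JSON.parse(raw) as T);
  }
}
```

JSON round-tripping is exactly what the real-Redis tests exercise: a value that serializes fine in memory can still break once it passes through strings.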
Kafka Consumer Testing
Test event-driven systems with Docker Compose:
// test/kafka.setup.ts
import {
  DockerComposeEnvironment,
  StartedDockerComposeEnvironment,
  Wait,
} from 'testcontainers';
import { Kafka } from 'kafkajs';
import path from 'path';

export async function setupKafka() {
  const env = await new DockerComposeEnvironment(
    path.join(__dirname, 'fixtures'),
    'docker-compose.kafka.yml'
  )
    // Compose suffixes container names with an index, hence 'kafka-1'
    .withWaitStrategy('kafka-1', Wait.forHealthCheck())
    .up();

  const kafka = new Kafka({
    clientId: 'test-client',
    brokers: ['localhost:9092'],
    connectionTimeout: 10000,
  });
  return { env, kafka };
}

export async function teardownKafka(env: StartedDockerComposeEnvironment) {
  await env.down();
}
Docker Compose file:
# test/fixtures/docker-compose.kafka.yml
version: '3.8'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.5.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:7.5.0
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      # Single broker, so the internal offsets topic can't replicate
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - zookeeper
    healthcheck:
      test: kafka-broker-api-versions --bootstrap-server localhost:9092
      interval: 5s
      timeout: 10s
      retries: 5
Test Kafka consumer:
import { describe, it, expect, beforeAll, afterAll } from 'vitest';
import { Kafka, Consumer } from 'kafkajs';
import { setupKafka, teardownKafka } from './kafka.setup';
import { OrderEventConsumer } from '../src/orders'; // adjust to your consumer's path

describe('OrderEventConsumer', () => {
  let env: Awaited<ReturnType<typeof setupKafka>>['env'];
  let kafka: Kafka;
  let consumer: Consumer;

  beforeAll(async () => {
    ({ env, kafka } = await setupKafka());
    consumer = kafka.consumer({ groupId: 'test-group' });
    await consumer.connect();
  });

  afterAll(async () => {
    await consumer.disconnect();
    await teardownKafka(env);
  });

  it('should process order events', async () => {
    const producer = kafka.producer();
    await producer.connect();
    const processor = new OrderEventConsumer(consumer);
    const events: unknown[] = [];
    processor.on('order:created', (event) => events.push(event));

    // fromBeginning avoids missing messages produced before the group join completes
    await consumer.subscribe({ topic: 'orders', fromBeginning: true });
    await consumer.run({
      eachMessage: async ({ message }) => {
        processor.handleMessage(JSON.parse(message.value!.toString()));
      },
    });

    await producer.send({
      topic: 'orders',
      messages: [
        {
          key: 'order-1',
          value: JSON.stringify({
            id: 'order-1',
            customerId: 'cust-123',
            total: 99.99,
          }),
        },
      ],
    });

    await new Promise(resolve => setTimeout(resolve, 1000));
    expect(events).toHaveLength(1);
    await producer.disconnect();
  });
});
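Fixed sleeps like the one-second pause before the assertion are the flakiest part of event-driven tests: consumer-group joins can take far longer under CI load. A generic polling helper (my own, not from the post) can replace fixed sleeps with a bounded retry:

```typescript
// Retries a predicate until it passes or the deadline expires. Fails fast
// when the condition is already true, instead of always sleeping.
export async function waitUntil(
  predicate: () => boolean | Promise<boolean>,
  timeoutMs = 5000,
  intervalMs = 50
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await predicate()) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`condition not met within ${timeoutMs}ms`);
}
```

Usage in the Kafka test: `await waitUntil(() => events.length === 1, 15000)` followed by the existing `expect`, which tolerates slow joins without always paying the full wait.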
Parallel Test Execution
Run tests in parallel with isolated containers:
{
  "scripts": {
    "test": "vitest run"
  }
}
Configure vitest.config.ts:
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    // Vitest 1+ moved threading under pool/poolOptions
    pool: 'threads',
    poolOptions: {
      threads: {
        maxThreads: 4,
        minThreads: 1,
      },
    },
    // Container startup is slow; default 5s timeouts will flake
    testTimeout: 30000,
    hookTimeout: 30000,
  },
});
Each test file gets its own container and pool—safe parallelization.
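An alternative is one container shared by all workers, with a separate database per worker. Vitest sets the `VITEST_POOL_ID` environment variable to a distinct value in each worker, so a name can be derived from it; a sketch (`workerDatabaseName` is my own helper):

```typescript
// Derives a per-worker database name so parallel workers sharing one
// PostgreSQL container never touch each other's tables.
export function workerDatabaseName(
  base: string,
  poolId: string = process.env.VITEST_POOL_ID ?? '0'
): string {
  return `${base}_worker_${poolId}`;
}
```

Each worker then runs `CREATE DATABASE` for its own name during setup, trading container startup cost for a little schema-migration duplication.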
CI Performance Optimization
Container startup dominates integration-test time. Keep the workflow lean and prefer small image tags (the alpine variants used above):
# .github/workflows/test.yml
jobs:
  integration-test:
    runs-on: ubuntu-latest  # Docker is preinstalled on GitHub-hosted runners
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'
      - run: npm ci
      - name: Run tests
        run: npm run test:integration
        env:
          TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE: /var/run/docker.sock
          TESTCONTAINERS_RYUK_DISABLED: 'false'
Reuse one container across test files in the same worker (advanced):
// test/shared-container.ts
import { PostgreSqlContainer, StartedPostgreSqlContainer } from '@testcontainers/postgresql';
import { Pool } from 'pg';

let sharedPool: Pool;
let sharedContainer: StartedPostgreSqlContainer;

export async function getSharedPool() {
  if (!sharedContainer) {
    sharedContainer = await new PostgreSqlContainer().start();
    sharedPool = new Pool({
      host: sharedContainer.getHost(),
      port: sharedContainer.getMappedPort(5432),
      user: sharedContainer.getUsername(),
      password: sharedContainer.getPassword(),
      database: sharedContainer.getDatabase(),
    });
  }
  return sharedPool;
}
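One caveat: if two test files in the same worker call the shared-pool function concurrently, the null check races and two containers can start. Memoizing the start-up promise rather than the finished result avoids that; a generic sketch (`once` is my own helper, not from the post):

```typescript
// Caches the factory's promise on first call: concurrent callers all await
// the same in-flight promise, so the factory runs at most once per worker.
export function once<T>(factory: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => (cached ??= factory());
}
```

Usage: `export const getSharedPool = once(async () => { /* start container, build pool */ });`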
Checklist
- Install TestContainers for your database
- Write container setup/teardown in shared test file
- Choose isolation strategy (transaction rollback vs truncate)
- Run containers in parallel with Vitest --threads
- Create Docker Compose for multi-service tests
- Test Kafka consumers with real brokers
- Cache node_modules in CI
- Keep Ryuk enabled so crashed runs still get cleaned up (disable only if CI forbids privileged containers)
- Set TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE only if your runner's Docker socket is in a nonstandard location
- Monitor test duration per suite
Conclusion
TestContainers eliminates the gap between test and production environments. Real databases catch queries that work in SQLite but fail in PostgreSQL. Real Redis catches serialization bugs. Your tests become trustworthy. Start with a single PostgreSQL container and expand to Redis and Kafka as your system grows. The investment in container isolation pays for itself in prevented production bugs.