# Testing Guide

## Overview
The backend's tests run against real, isolated PostgreSQL databases with migrations already applied, ensuring tests run against a production-identical environment. The suite was originally built on pgtestdb and has since been migrated to Testcontainers (see "Migration to Testcontainers" below); both setups are described in this guide.

## Prerequisites

### 1. PostgreSQL Server
You need a PostgreSQL server running for tests. Options:

#### Option A: Docker Compose (Recommended)

```bash
# Start PostgreSQL test server
docker compose -f docker-compose.yml up -d postgres

# Or use the test-specific compose file
docker compose -f docker-compose.test.yml up -d
```

#### Option B: Local PostgreSQL

Install PostgreSQL locally and ensure it's running on the default port (5432), or configure the connection via environment variables.

### 2. Environment Variables

Tests use these environment variables (with defaults matching Docker Compose):

```bash
POSTGRES_USER=turash        # Default: turash (from docker-compose.yml)
POSTGRES_PASSWORD=turash123 # Default: turash123 (from docker-compose.yml)
POSTGRES_HOST=localhost     # Default: localhost
POSTGRES_PORT=5432          # Default: 5432 (from docker-compose.yml)
POSTGRES_DB=postgres        # Default: postgres (used to create test databases)
```

Note: The `turash` user must have CREATEDB privileges. If tests fail with permission errors, grant privileges:

```sql
ALTER USER turash CREATEDB;
```
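
For illustration, a test helper can assemble its admin connection string from these variables using the same defaults. The sketch below is a hypothetical helper, not the project's actual code:

```go
package testutils

import (
    "fmt"
    "os"
)

// getenv returns the value of key, or fallback when the variable is unset.
func getenv(key, fallback string) string {
    if v := os.Getenv(key); v != "" {
        return v
    }
    return fallback
}

// adminConnString builds a connection string to the admin database that is
// used to create per-test databases, mirroring the defaults documented above.
func adminConnString() string {
    user := getenv("POSTGRES_USER", "turash")
    pass := getenv("POSTGRES_PASSWORD", "turash123")
    host := getenv("POSTGRES_HOST", "localhost")
    port := getenv("POSTGRES_PORT", "5432")
    db := getenv("POSTGRES_DB", "postgres")
    return fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable", user, pass, host, port, db)
}
```
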
## Running Tests

### Run All Tests

```bash
go test ./...
```

### Run Tests with Verbose Output

```bash
go test -v ./...
```

### Run Specific Test Package

```bash
go test ./internal/handler/...
go test ./internal/service/...
go test ./internal/repository/...
```

### Run Tests in Parallel

```bash
go test -parallel 4 ./...
```

## How It Works

### pgtestdb Architecture

- Template Database: On first run, pgtestdb creates a template database with all migrations applied
- Test Isolation: Each test gets a cloned database from the template (fast, milliseconds)
- Automatic Cleanup: Databases are automatically dropped after each test
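
For context, a pgtestdb-backed setup helper is typically wired up along these lines. This is a minimal sketch assuming the `pgx` driver and pgtestdb's golang-migrate migrator, with connection values mirroring the defaults above; the migrator choice, migrations directory, and helper name are assumptions, not the project's actual code:

```go
package testutils

import (
    "database/sql"
    "testing"

    _ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" database/sql driver

    "github.com/peterldowns/pgtestdb"
    "github.com/peterldowns/pgtestdb/migrators/golangmigrator"
)

// SetupTestDB returns a *sql.DB pointing at a freshly cloned, fully migrated
// database. pgtestdb builds the template once per migration hash and drops the
// clone when the test finishes.
func SetupTestDB(t *testing.T) *sql.DB {
    t.Helper()
    migrator := golangmigrator.New("migrations") // path to migration files (assumed)
    return pgtestdb.New(t, pgtestdb.Config{
        DriverName: "pgx",
        User:       "turash",
        Password:   "turash123",
        Host:       "localhost",
        Port:       "5432",
        Options:    "sslmode=disable",
    }, migrator)
}
```
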
### 🔒 Production Data Safety

IMPORTANT: Tests NEVER touch production data!

- ✅ Isolated Databases: Each test creates a unique temporary database (e.g., `pgtestdb_abc123`)
- ✅ Production Protected: The production `turash` database is NEVER accessed or modified
- ✅ Connection Strategy: Tests connect to the `postgres` admin database to CREATE test databases
- ✅ Automatic Cleanup: Test databases are automatically DROPPED after each test completes
- ✅ No Data Leakage: Tests run in complete isolation with no shared state

How it works:

- Test connects to the `postgres` database (the admin database, not production)
- pgtestdb creates a new temporary database: `pgtestdb_<random_id>`
- Migrations run on the temporary database
- Test executes against the temporary database
- The temporary database is automatically dropped when the test completes

The production database (`turash`) remains untouched!

### Test Setup Example

```go
func TestMyFeature(t *testing.T) {
    t.Parallel()

    // Setup PostgreSQL test database with testcontainers.
    // Spins up an isolated PostgreSQL container for this test.
    db := testutils.SetupTestDBWithTestcontainers(t)

    // Use database - migrations already applied.
    repo := repository.NewMyRepository(db)

    // Your test code here.

    // Container automatically cleaned up when test ends.
}
```
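
For reference, a helper like `SetupTestDBWithTestcontainers` can be built on the testcontainers-go Postgres module. The sketch below is an assumption about its general shape, not the project's actual implementation; it assumes a recent testcontainers-go release (where the module exposes `postgres.Run`; older releases call it `RunContainer`), and the image, credentials, and wait strategy are illustrative. Migration wiring is omitted:

```go
package testutils

import (
    "context"
    "database/sql"
    "testing"
    "time"

    _ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" database/sql driver

    "github.com/testcontainers/testcontainers-go"
    "github.com/testcontainers/testcontainers-go/modules/postgres"
    "github.com/testcontainers/testcontainers-go/wait"
)

// SetupTestDBWithTestcontainers starts a throwaway PostGIS container, waits
// for it to accept connections, and returns a *sql.DB. The container is
// terminated automatically when the test finishes.
func SetupTestDBWithTestcontainers(t *testing.T) *sql.DB {
    t.Helper()
    ctx := context.Background()

    container, err := postgres.Run(ctx, "postgis/postgis:15-3.4",
        postgres.WithDatabase("test"),
        postgres.WithUsername("test"),
        postgres.WithPassword("test"),
        testcontainers.WithWaitStrategy(
            wait.ForLog("database system is ready to accept connections").
                WithOccurrence(2).WithStartupTimeout(60*time.Second)),
    )
    if err != nil {
        t.Fatalf("starting postgres container: %v", err)
    }
    t.Cleanup(func() { _ = container.Terminate(ctx) })

    connStr, err := container.ConnectionString(ctx, "sslmode=disable")
    if err != nil {
        t.Fatalf("getting connection string: %v", err)
    }

    db, err := sql.Open("pgx", connStr)
    if err != nil {
        t.Fatalf("opening database: %v", err)
    }
    t.Cleanup(func() { _ = db.Close() })

    // Run migrations here so callers get a schema-complete database (omitted).
    return db
}
```

Because the container and the connection are registered with `t.Cleanup`, teardown happens automatically even when the test fails.
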
### Ginkgo/Gomega Tests

```go
BeforeEach(func() {
    // Setup PostgreSQL test database with testcontainers.
    // Each test gets its own isolated PostgreSQL container.
    db = testutils.SetupTestDBWithTestcontainers(GinkgoT())

    // Initialize repositories/services.
    repo = repository.NewMyRepository(db)
})
```

### Testify Suite Tests

```go
func (suite *MyTestSuite) SetupTest() {
    // Setup PostgreSQL test database with testcontainers.
    // Container automatically managed and cleaned up.
    suite.db = testutils.SetupTestDBWithTestcontainers(suite.T())
    suite.repo = repository.NewMyRepository(suite.db)
}
```

## Features Supported

✅ Full PostgreSQL Features (see the example after these lists):
- PostGIS spatial operations
- GIN indexes for JSONB
- JSONB queries and operations
- Complex constraints and checks
- All PostgreSQL-specific features
✅ Fast Execution:
- Template-based cloning (milliseconds per test)
- Parallel test execution
- No migration overhead per test
✅ Isolation:
- Each test gets a clean database
- No state leakage between tests
- Safe parallel execution
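
As a concrete illustration of these features, a test can run PostGIS and JSONB queries directly against its per-test database. The table, index, and queries below are hypothetical (they are not part of the project schema), and the snippet follows the same shape as the Test Setup Example above:

```go
func TestPostgresFeatures(t *testing.T) {
    t.Parallel()
    db := testutils.SetupTestDBWithTestcontainers(t)

    // Hypothetical schema: a GIN-indexed JSONB column plus a PostGIS point.
    stmts := []string{
        `CREATE EXTENSION IF NOT EXISTS postgis`,
        `CREATE TABLE places (
            id   serial PRIMARY KEY,
            tags jsonb NOT NULL,
            geom geometry(Point, 4326) NOT NULL
        )`,
        `CREATE INDEX places_tags_idx ON places USING gin (tags)`,
        `INSERT INTO places (tags, geom)
         VALUES ('{"kind": "cafe"}', ST_SetSRID(ST_MakePoint(30.52, 50.45), 4326))`,
    }
    for _, s := range stmts {
        if _, err := db.Exec(s); err != nil {
            t.Fatalf("setup statement failed: %v", err)
        }
    }

    // JSONB containment plus a spatial distance filter, both on real PostgreSQL.
    var count int
    err := db.QueryRow(`
        SELECT count(*) FROM places
        WHERE tags @> '{"kind": "cafe"}'
          AND ST_DWithin(geom::geography,
                         ST_SetSRID(ST_MakePoint(30.52, 50.45), 4326)::geography, 100)`).Scan(&count)
    if err != nil {
        t.Fatalf("query failed: %v", err)
    }
    if count != 1 {
        t.Fatalf("expected 1 matching place, got %d", count)
    }
}
```
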
## Troubleshooting

### Tests Fail with "connection refused"

Problem: PostgreSQL server is not running.

Solution:

```bash
# Start PostgreSQL
docker compose up -d postgres

# Or check if PostgreSQL is running
pg_isready -h localhost -p 5433
```

### Tests Fail with "PostGIS not available"

Problem: The PostGIS extension is not installed.

Solution: Install the PostGIS extension:

```sql
-- In PostgreSQL
CREATE EXTENSION IF NOT EXISTS postgis;
```

Or use the PostGIS Docker image:

```yaml
image: postgis/postgis:15-3.4
```

### Tests are Slow

Problem: Template database not being reused.

Solution:

- Ensure the PostgreSQL server is running
- Check that migrations haven't changed (Hash() method)
- Use a RAM-backed volume for Docker:

```yaml
volumes:
  - type: tmpfs
    target: /var/lib/postgresql/data
```

### Migration Errors

Problem: Migrations fail during template creation.
Solution:
- Check PostgreSQL logs
- Verify PostGIS extension is available
- Ensure user has CREATEDB privileges

## CI/CD Integration

### GitHub Actions Example

```yaml
services:
  postgres:
    image: postgis/postgis:15-3.4
    env:
      POSTGRES_PASSWORD: test123
    ports:
      - 5432:5432 # map the service port so tests on the runner can reach localhost:5432
    options: >-
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5

env:
  POSTGRES_HOST: localhost
  POSTGRES_PORT: 5432
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: test123
```

## Performance Tips

- Use Template Databases: pgtestdb automatically uses templates for speed
- Run Tests in Parallel: Tests are isolated, safe to run concurrently
- RAM-Backed Storage: Use tmpfs volumes for Docker for faster I/O
- Optimize PostgreSQL: Disable fsync and synchronous_commit for tests:

```yaml
command:
  - postgres
  - -c
  - fsync=off
  - -c
  - synchronous_commit=off
```

## Migration to Testcontainers

All tests have been migrated to use testcontainers for database isolation. Key changes:

- ✅ `SetupTestDBWithTestcontainers()` provides isolated PostgreSQL containers per test
- ✅ No local PostgreSQL setup required - works anywhere Docker is available
- ✅ Each test gets a fresh database with automatic cleanup
- ✅ Real PostgreSQL + PostGIS for accurate integration testing
- ✅ CI/CD runs all tests when Docker is available, and unit tests only when it is not (see the sketch below)
- ✅ Perfect test isolation prevents interference between tests
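
One common way to get that CI/CD behavior is to skip container-backed tests when no Docker daemon is reachable. The helper below is a sketch under that assumption; the function name and the `docker info` probe are illustrative, not the project's actual mechanism:

```go
package testutils

import (
    "os/exec"
    "testing"
)

// RequireDocker skips the calling test when no usable Docker daemon is found,
// so container-backed integration tests become no-ops on Docker-less CI
// runners while plain unit tests keep running.
func RequireDocker(t *testing.T) {
    t.Helper()
    if err := exec.Command("docker", "info").Run(); err != nil {
        t.Skipf("skipping: Docker is not available (%v)", err)
    }
}
```

Integration tests would call `RequireDocker(t)` before `SetupTestDBWithTestcontainers(t)`, so they skip cleanly on Docker-less runners while pure unit tests are unaffected.
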
## Database Backup

### Backup Production Database

Before running tests or making changes, back up your production database using the Cobra CLI:

#### Using Dev Mode (Docker Compose)

```bash
# Create a backup using Docker Compose configuration
make db-backup

# Or directly
go run ./cmd/backup --dev
```

#### Using Environment Variables

```bash
# Create a backup using environment variables
make db-backup-env

# Or directly
go run ./cmd/backup
```

#### Using Connection String

```bash
# Create a backup using a connection string
make db-backup-conn CONN="postgres://user:pass@host:port/db"

# Or directly
go run ./cmd/backup --conn "postgres://user:pass@host:port/db"
```

#### Backup Options

```bash
# Custom backup directory
go run ./cmd/backup --dev --dir /path/to/backups

# Keep more backups
go run ./cmd/backup --dev --keep 20
```

Backups are stored in the `./backups/` directory with timestamps (e.g., `turash_backup_20250124_120000.sql.gz`):

- Automatically keeps the last 10 backups (configurable)
- Compressed with gzip for space efficiency

### Restore Database

```bash
# Restore from backup (dev mode)
make db-restore BACKUP=backups/turash_backup_20250124_120000.sql.gz

# Or directly
go run ./cmd/backup restore backups/turash_backup_20250124_120000.sql.gz --dev

# Using environment variables
go run ./cmd/backup restore backups/turash_backup_20250124_120000.sql.gz

# Using connection string
go run ./cmd/backup restore backups/turash_backup_20250124_120000.sql.gz --conn "postgres://..."
```

⚠️ Warning: Restore will REPLACE all data in the production database. A safety backup is created automatically before the restore.