# Testing Guide
## Overview
The backend uses **pgtestdb** for PostgreSQL-based testing. Each test gets an isolated PostgreSQL database with migrations already applied, ensuring tests run against a production-identical environment.
## Prerequisites
### 1. PostgreSQL Server
You need a PostgreSQL server running for tests. Options:
#### Option A: Docker Compose (Recommended)
```bash
# Start PostgreSQL test server
docker compose -f docker-compose.yml up -d postgres
# Or use the test-specific compose file
docker compose -f docker-compose.test.yml up -d
```
#### Option B: Local PostgreSQL
Install PostgreSQL locally and ensure it's running on the default port (5432) or configure via environment variables.
### 2. Environment Variables
Tests use these environment variables (with defaults matching Docker Compose):
```bash
POSTGRES_USER=turash # Default: turash (from docker-compose.yml)
POSTGRES_PASSWORD=turash123 # Default: turash123 (from docker-compose.yml)
POSTGRES_HOST=localhost # Default: localhost
POSTGRES_PORT=5432 # Default: 5432 (from docker-compose.yml)
POSTGRES_DB=postgres # Default: postgres (used to create test databases)
```
**Note**: The `turash` user must have `CREATEDB` privileges. If tests fail with permission errors, grant privileges:
```sql
ALTER USER turash CREATEDB;
```
## Running Tests
### Run All Tests
```bash
go test ./...
```
### Run Tests with Verbose Output
```bash
go test -v ./...
```
### Run Specific Test Package
```bash
go test ./internal/handler/...
go test ./internal/service/...
go test ./internal/repository/...
```
### Run Tests in Parallel
```bash
go test -parallel 4 ./...
```
## How It Works
### pgtestdb Architecture
1. **Template Database**: On first run, pgtestdb creates a template database with all migrations applied
2. **Test Isolation**: Each test gets a cloned database from the template (fast, milliseconds)
3. **Automatic Cleanup**: Databases are automatically dropped after each test
### 🔒 **Production Data Safety**
**IMPORTANT**: Tests NEVER touch production data!
- ✅ **Isolated Databases**: Each test creates a unique temporary database (e.g., `pgtestdb_abc123`)
- ✅ **Production Protected**: The production `turash` database is NEVER accessed or modified
- ✅ **Connection Strategy**: Tests connect to the `postgres` admin database to CREATE test databases
- ✅ **Automatic Cleanup**: Test databases are automatically DROPPED after each test completes
- ✅ **No Data Leakage**: Tests run in complete isolation with no shared state
**How it works:**
1. Test connects to `postgres` database (admin database, not production)
2. pgtestdb creates a new temporary database: `pgtestdb_<random_id>`
3. Migrations run on the temporary database
4. Test executes against the temporary database
5. Temporary database is automatically dropped when test completes
**Production database (`turash`) remains untouched!**
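For orientation, a pgtestdb-backed helper wired to the environment variables above can be as small as the following sketch. This is not necessarily how the project's `testutils.SetupTestDB` is implemented; it assumes the golang-migrate migrator, a `migrations/` directory, the `pgx` driver, and a `*sql.DB` return value.
```go
package testutils

import (
	"database/sql"
	"os"
	"testing"

	_ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" database/sql driver
	"github.com/peterldowns/pgtestdb"
	"github.com/peterldowns/pgtestdb/migrators/golangmigrator"
)

// envOr returns the environment variable's value, or a fallback when unset.
func envOr(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

// SetupTestDB gives the calling test its own freshly cloned, fully migrated
// PostgreSQL database. pgtestdb drops it again when the test finishes.
func SetupTestDB(t testing.TB) *sql.DB {
	t.Helper()
	conf := pgtestdb.Config{
		DriverName: "pgx",
		User:       envOr("POSTGRES_USER", "turash"),
		Password:   envOr("POSTGRES_PASSWORD", "turash123"),
		Host:       envOr("POSTGRES_HOST", "localhost"),
		Port:       envOr("POSTGRES_PORT", "5432"),
		Database:   envOr("POSTGRES_DB", "postgres"),
		Options:    "sslmode=disable",
	}
	// Migrations are applied once to a template database and reused afterwards.
	migrator := golangmigrator.New("migrations")
	return pgtestdb.New(t, conf, migrator)
}
```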
### Test Setup Example
```go
func TestMyFeature(t *testing.T) {
	t.Parallel()

	// Setup PostgreSQL test database
	db := testutils.SetupTestDB(t)

	// Use database - migrations already applied
	repo := repository.NewMyRepository(db)

	// Your test code here
	_ = repo // replace with real assertions
}
```
### Ginkgo/Gomega Tests
```go
BeforeEach(func() {
	// Setup PostgreSQL test database
	db = testutils.SetupTestDB(GinkgoT())

	// Initialize repositories/services
	repo = repository.NewMyRepository(db)
})
```
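The `db =` / `repo =` assignments above rely on variables declared in the enclosing Ginkgo container. A fuller, purely illustrative layout, assuming dot-imports of `ginkgo/v2` and `gomega` and a `*sql.DB` from `SetupTestDB`, looks like this:
```go
var _ = Describe("MyRepository", func() {
	var (
		db   *sql.DB
		repo *repository.MyRepository // illustrative type
	)

	BeforeEach(func() {
		// Each spec gets its own isolated, migrated database.
		db = testutils.SetupTestDB(GinkgoT())
		repo = repository.NewMyRepository(db)
	})

	It("is wired against an isolated test database", func() {
		Expect(db).NotTo(BeNil())
		Expect(repo).NotTo(BeNil())
	})
})
```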
### Testify Suite Tests
```go
func (suite *MyTestSuite) SetupTest() {
	// Setup PostgreSQL test database
	suite.db = testutils.SetupTestDB(suite.T())
	suite.repo = repository.NewMyRepository(suite.db)
}
```
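For completeness, `SetupTest` belongs to a suite struct that is handed to `go test` through a single runner function (the field types here are assumptions, not the project's actual definitions):
```go
type MyTestSuite struct {
	suite.Suite
	db   *sql.DB
	repo *repository.MyRepository // illustrative type
}

// TestMyTestSuite is the entry point go test discovers; it runs every suite method.
func TestMyTestSuite(t *testing.T) {
	suite.Run(t, new(MyTestSuite))
}
```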
## Features Supported
**Full PostgreSQL Features**:
- PostGIS spatial operations
- GIN indexes for JSONB
- JSONB queries and operations
- Complex constraints and checks
- All PostgreSQL-specific features
**Fast Execution**:
- Template-based cloning (milliseconds per test)
- Parallel test execution
- No migration overhead per test
**Isolation**:
- Each test gets a clean database
- No state leakage between tests
- Safe parallel execution
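As a concrete illustration, a test can exercise PostGIS directly against its private database. The sketch below assumes `SetupTestDB` returns a `*sql.DB`:
```go
func TestPostGISIsAvailable(t *testing.T) {
	t.Parallel()
	db := testutils.SetupTestDB(t)

	// Geography distance in meters between two nearby points (lon lat order).
	var meters float64
	err := db.QueryRow(`SELECT ST_Distance(
		ST_GeogFromText('POINT(52.80 54.53)'),
		ST_GeogFromText('POINT(52.79 54.52)'))`).Scan(&meters)
	if err != nil {
		t.Fatalf("PostGIS query failed: %v", err)
	}
	if meters <= 0 {
		t.Fatalf("expected a positive distance, got %f", meters)
	}
}
```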
## Troubleshooting
### Tests Fail with "connection refused"
**Problem**: PostgreSQL server is not running.
**Solution**:
```bash
# Start PostgreSQL
docker compose up -d postgres
# Or check if PostgreSQL is running
pg_isready -h localhost -p 5432
```
### Tests Fail with "PostGIS not available"
**Problem**: PostGIS extension is not installed.
**Solution**: Install PostGIS extension:
```sql
-- Run against the PostgreSQL server (e.g. via psql)
CREATE EXTENSION IF NOT EXISTS postgis;
```
Or use PostGIS Docker image:
```yaml
image: postgis/postgis:15-3.4
```
### Tests are Slow
**Problem**: Template database not being reused.
**Solution**:
- Ensure PostgreSQL server is running
- Check that migrations haven't changed (Hash() method)
- Use RAM-backed volume for Docker:
```yaml
volumes:
  - type: tmpfs
    target: /var/lib/postgresql/data
```
### Migration Errors
**Problem**: Migrations fail during template creation.
**Solution**:
- Check PostgreSQL logs
- Verify PostGIS extension is available
- Ensure user has CREATEDB privileges
## CI/CD Integration
### GitHub Actions Example
```yaml
services:
  postgres:
    image: postgis/postgis:15-3.4
    env:
      POSTGRES_PASSWORD: test123
    ports:
      - 5432:5432 # publish the service so tests can reach localhost:5432
    options: >-
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
env:
  POSTGRES_HOST: localhost
  POSTGRES_PORT: 5432
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: test123
```
## Performance Tips
1. **Use Template Databases**: pgtestdb automatically uses templates for speed
2. **Run Tests in Parallel**: Tests are isolated, safe to run concurrently
3. **RAM-Backed Storage**: Use tmpfs volumes for Docker for faster I/O
4. **Optimize PostgreSQL**: Disable fsync and synchronous_commit for tests:
```yaml
command:
  - postgres
  - -c
  - fsync=off
  - -c
  - synchronous_commit=off
```
## Migration from SQLite
All tests have been migrated from SQLite to PostgreSQL. Key changes:
- ✅ `SetupTestDB()` now requires a `*testing.T` parameter
- ✅ Tests use real PostgreSQL with all features
- ✅ PostGIS spatial operations work correctly
- ✅ GIN indexes and JSONB queries supported
- ✅ No more SQLite compatibility issues
## Database Backup
### Backup Production Database
Before running tests or making changes, back up your production database using the Cobra CLI:
#### Using Dev Mode (Docker Compose)
```bash
# Create a backup using Docker Compose configuration
make db-backup
# Or directly
go run ./cmd/backup --dev
```
#### Using Environment Variables
```bash
# Create a backup using environment variables
make db-backup-env
# Or directly
go run ./cmd/backup
```
#### Using Connection String
```bash
# Create a backup using connection string
make db-backup-conn CONN="postgres://user:pass@host:port/db"
# Or directly
go run ./cmd/backup --conn "postgres://user:pass@host:port/db"
```
#### Backup Options
```bash
# Custom backup directory
go run ./cmd/backup --dev --dir /path/to/backups
# Keep more backups
go run ./cmd/backup --dev --keep 20
```
Backups are stored in the `./backups/` directory with timestamps:
- Example: `turash_backup_20250124_120000.sql.gz`
- The last 10 backups are kept automatically (configurable via `--keep`)
- Compressed with gzip for space efficiency (see the conceptual sketch below)
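Under the hood, this style of backup boils down to streaming `pg_dump` output through gzip into a timestamped file. The sketch below is conceptual only and is not the actual `cmd/backup` implementation; it assumes `pg_dump` is on `PATH` and that a connection string for the target database is available:
```go
import (
	"compress/gzip"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"time"
)

// dumpCompressed writes a gzip-compressed pg_dump of connString into dir and
// returns the backup path. The real CLI also rotates old backups.
func dumpCompressed(connString, dir string) (string, error) {
	name := fmt.Sprintf("turash_backup_%s.sql.gz", time.Now().Format("20060102_150405"))
	path := filepath.Join(dir, name)

	f, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	gz := gzip.NewWriter(f)
	cmd := exec.Command("pg_dump", "--no-owner", "--dbname", connString)
	cmd.Stdout = gz
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return "", err
	}
	// Flush the gzip trailer before the file is closed.
	return path, gz.Close()
}
```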
### Restore Database
```bash
# Restore from backup (dev mode)
make db-restore BACKUP=backups/turash_backup_20250124_120000.sql.gz
# Or directly
go run ./cmd/backup restore backups/turash_backup_20250124_120000.sql.gz --dev
# Using environment variables
go run ./cmd/backup restore backups/turash_backup_20250124_120000.sql.gz
# Using connection string
go run ./cmd/backup restore backups/turash_backup_20250124_120000.sql.gz --conn "postgres://..."
```
**⚠️ Warning**: Restore will REPLACE all data in the production database. A safety backup is created automatically before restore.
## References
- [pgtestdb Documentation](https://github.com/peterldowns/pgtestdb)
- [PostgreSQL Testing Best Practices](https://www.postgresql.org/developer/testing/)