Repository Structure:
- Move files from cluttered root directory into organized structure
- Create archive/ for archived data and scraper results
- Create bugulma/ for the complete application (frontend + backend)
- Create data/ for sample datasets and reference materials
- Create docs/ for comprehensive documentation structure
- Create scripts/ for utility scripts and API tools
Backend Implementation:
- Implement 3 missing backend endpoints identified in gap analysis:
* GET /api/v1/organizations/{id}/matching/direct - Direct symbiosis matches
* GET /api/v1/users/me/organizations - User organizations
* POST /api/v1/proposals/{id}/status - Update proposal status
- Add complete proposal domain model, repository, and service layers
- Create database migration for proposals table
- Fix CLI server command registration issue
API Documentation:
- Add comprehensive proposals.md API documentation
- Update README.md with Users and Proposals API sections
- Document all request/response formats, error codes, and business rules
Code Quality:
- Follow existing Go backend architecture patterns
- Add proper error handling and validation
- Match frontend expected response schemas
- Maintain clean separation of concerns (handler -> service -> repository)
10. Go 1.25 Stack & Backend Architecture
Recommended Stack
Core Stack (MVP): Go 1.25 + Neo4j + NATS/Redis Streams + PostgreSQL + Redis
Core Stack (Scale): Go 1.25 + Neo4j + Kafka + PostgreSQL + Redis
HTTP Framework Selection
Options (Choose based on requirements):
- Fiber: Fast, Express-inspired, lowest latency
- Gin: Mature, widely adopted, good balance
- Echo: Clean API, good middleware support
- Standard net/http: Simple, zero dependencies, full control
Recommendation: Start with Gin for the MVP (mature ecosystem); consider Fiber if low latency is critical
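As a rough illustration of the Gin recommendation, a minimal service exposing one of the endpoints listed earlier; the route and response shape are placeholders, not the project's actual schema:

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default() // logger + recovery middleware

	// Illustrative route mirroring one of the backend endpoints above.
	r.GET("/api/v1/organizations/:id/matching/direct", func(c *gin.Context) {
		id := c.Param("id")
		c.JSON(http.StatusOK, gin.H{"organization_id": id, "matches": []string{}})
	})

	r.Run(":8080") // listens on 0.0.0.0:8080
}
```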
API Gateway
- Kong (Lua-based, with Go plugin support) or Traefik (Go-native)
- Alternative: Build a lightweight gateway in Go using net/http or Gin (a minimal sketch follows this list)
  - Rate limiting, request routing, authentication
  - API versioning support
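A lightweight gateway can stay within the standard library plus golang.org/x/time/rate. This sketch assumes an upstream on localhost:8080 and shows versioned routing with a shared token-bucket rate limit; it is a starting point, not a production configuration:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"

	"golang.org/x/time/rate"
)

// rateLimit wraps a handler with a shared token-bucket limiter.
func rateLimit(next http.Handler, l *rate.Limiter) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !l.Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	backend, err := url.Parse("http://localhost:8080") // assumed upstream address
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	mux := http.NewServeMux()
	// API versioning: /api/v1/ requests are routed to the v1 backend.
	mux.Handle("/api/v1/", rateLimit(proxy, rate.NewLimiter(rate.Limit(100), 200)))

	log.Fatal(http.ListenAndServe(":8000", mux))
}
```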
Message Queue & Event Streaming
MVP Recommendation: Start with NATS or Redis Streams, migrate to Kafka at scale
- NATS (Recommended for MVP): Go-native messaging (nats.go); a publish/subscribe sketch follows the decision framework below
  - Benefits: 60-70% complexity reduction vs Kafka, similar capabilities
  - Use Case: Perfect for the MVP phase, real-time updates, pub/sub
  - Library: github.com/nats-io/nats.go
- Redis Streams (Alternative MVP): Simple pub/sub, job queues
  - Benefits: Minimal infrastructure overhead, integrates with the existing Redis cache
  - Use Case: Initial real-time features, job queues
  - Library: github.com/redis/go-redis/v9
- Kafka (Scale Phase): Industry standard for event streaming
  - Migration Trigger: When the platform reaches 1000+ businesses
  - Use Case: High-throughput event streaming, event sourcing
  - Libraries: confluent-kafka-go or shopify/sarama
- RabbitMQ: streadway/amqp for traditional message queues (not recommended)
Background Jobs: Use Go's context and goroutines, or asynq for distributed job queues
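For the asynq option just mentioned, a minimal sketch of the enqueue/worker pair; the task type match:recompute and the Redis address are assumptions, not project values:

```go
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	redis := asynq.RedisClientOpt{Addr: "localhost:6379"}

	// Producer: enqueue a background job.
	client := asynq.NewClient(redis)
	payload, _ := json.Marshal(map[string]string{"org_id": "42"})
	if _, err := client.Enqueue(asynq.NewTask("match:recompute", payload)); err != nil {
		log.Fatal(err)
	}

	// Consumer: a worker pool dispatching tasks by type.
	srv := asynq.NewServer(redis, asynq.Config{Concurrency: 10})
	mux := asynq.NewServeMux()
	mux.HandleFunc("match:recompute", func(ctx context.Context, t *asynq.Task) error {
		log.Printf("recomputing matches: %s", t.Payload())
		return nil
	})
	log.Fatal(srv.Run(mux))
}
```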
Decision Framework:
- < 100 businesses: Redis Streams (simplest)
- 100-1000 businesses: NATS (balanced performance/complexity)
- > 1000 businesses: Kafka (enterprise-grade, high-throughput)
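The NATS sketch promised above: minimal publish/subscribe with nats.go. The subject name matches.created and the payload are illustrative; error handling is trimmed for brevity:

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to a local NATS server (URL is an assumption).
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	// Subscribe: receive real-time match events.
	nc.Subscribe("matches.created", func(m *nats.Msg) {
		log.Printf("match event: %s", m.Data)
	})

	// Publish: emit an event to all subscribers.
	nc.Publish("matches.created", []byte(`{"org_id":"42"}`))
	nc.Flush()
}
```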
Database Layer
- Primary Graph DB: Neo4j using github.com/neo4j/neo4j-go-driver/v5
  - Connection pooling built-in
  - Transaction support
  - Prepared statements for performance
- Secondary RDBMS: PostgreSQL using github.com/jackc/pgx/v5 (a pooled-connection sketch follows this list)
  - Better performance than database/sql
  - Native PostGIS support via github.com/twpayne/go-geom
  - Connection pooling with pgxpool
- Time-Series:
  - TimescaleDB (PostgreSQL extension): use the pgx driver
  - InfluxDB using github.com/influxdata/influxdb-client-go/v2
- Cache: Redis using github.com/redis/go-redis/v9
  - Match results, sessions, rate limiting
  - Pub/sub for real-time updates
- Search:
  - Meilisearch: github.com/meilisearch/meilisearch-go (Go-native, fast)
  - Elasticsearch: github.com/elastic/go-elasticsearch/v8
  - Alternative: PostgreSQL full-text search for simpler deployments
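The pgxpool sketch referenced in the PostgreSQL item: opening a pooled connection and running a health-check query. The DSN is a placeholder:

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	ctx := context.Background()

	// DSN is an assumption; pgxpool manages connections automatically.
	pool, err := pgxpool.New(ctx, "postgres://user:pass@localhost:5432/app")
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	// Simple health-check query through the pool.
	var one int
	if err := pool.QueryRow(ctx, "SELECT 1").Scan(&one); err != nil {
		log.Fatal(err)
	}
	log.Println("database reachable")
}
```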
Go 1.25 Specific Features & Performance Targets
Critical: Upgrade to Go 1.25
Performance Benchmarks (Production Targets):
- Throughput: 10,000+ HTTP requests/second
- Latency: p95 <50ms API response time
- Memory: <100MB baseline, <200MB peak per instance
- CPU: <20% utilization at 1,000 req/s
- Concurrency: 10,000+ goroutines supported
- Experimental JSON v2 Package:

  ```go
  // Enable with: GOEXPERIMENT=jsonv2 go build
  // IMPORTANT: Build behind a feature flag (json_v2_enabled) with fallback to Go 1.23 stable features
  import "encoding/json/v2"
  ```

  - Performance: 3-10x faster JSON processing (50μs → 5-15μs per request)
  - Throughput: 50,000+ JSON operations/second
  - Use Case: High-throughput API responses, message serialization
  - Risk Mitigation: Feature flags for experimental features; fall back to Go 1.23 if not production-ready by Q1 2025
- GreenTea Garbage Collector:

  ```sh
  # Enable with: GOEXPERIMENT=greenteagc go build
  # IMPORTANT: Feature flag required; fall back to the standard GC
  ```

  - Performance: Reduces GC overhead by 10-40% (from 20% to 12-18% CPU)
  - Latency: 50% reduction in GC pause times (<1ms p99 pauses)
  - Use Case: Matching engine, event processors, graph query handlers
  - Memory: 15-30% reduction in heap allocations
  - Risk Mitigation: Feature-flag implementation required; fall back to the standard GC if the experimental collector is not production-ready
- Container-Aware GOMAXPROCS:
  - Resource Utilization: 90%+ CPU utilization in Kubernetes pods
  - Auto-scaling: Accurate horizontal pod autoscaling decisions
  - Efficiency: 25% improvement in resource allocation accuracy
- DWARF v5 Debug Information:
  - Binary Size: 10-20% reduction in compiled binary size
  - Build Time: 15% faster compilation and linking
  - Debugging: Improved Delve debugging experience
- WaitGroup.Go Method (worked example after this list):

  ```go
  // Simplified goroutine creation
  var wg sync.WaitGroup
  wg.Go(func() { /* work */ })
  ```

  - Code Quality: 30% reduction in boilerplate concurrency code
- Trace Flight Recorder API:

  ```go
  import "runtime/trace" // Continuous tracing with an in-memory ring buffer
  ```

  - Observability: <1% performance overhead for continuous tracing
  - Debugging: Capture 1-hour execution traces in 50MB of memory
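The WaitGroup.Go worked example referenced above: a runnable fan-out, assuming a Go 1.25 toolchain (the method does not exist in earlier releases):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Go 1.25: wg.Go replaces the wg.Add(1) / go func() { defer wg.Done() }() pattern.
	var wg sync.WaitGroup
	results := make([]int, 4)
	for i := range results {
		wg.Go(func() {
			results[i] = i * i // each goroutine fills its own slot
		})
	}
	wg.Wait()
	fmt.Println(results) // [0 1 4 9]
}
```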
Go-Specific Libraries & Patterns
Essential Libraries:
- Validation: github.com/go-playground/validator/v10 or Go 1.25 generics for type-safe validation (a sketch follows this list)
- Configuration Management: github.com/spf13/viper
- Logging: github.com/rs/zerolog (fast, structured) or github.com/sirupsen/logrus (feature-rich)
- HTTP Client: Standard net/http (Go 1.25 improvements) or github.com/go-resty/resty/v2
- Database Migration: github.com/golang-migrate/migrate/v4
- Testing: github.com/stretchr/testify, github.com/golang/mock, or github.com/vektra/mockery/v2
- WebSocket: github.com/gorilla/websocket, nhooyr.io/websocket, or github.com/gobwas/ws
- GraphQL: github.com/99designs/gqlgen (schema-first) or github.com/graphql-go/graphql (runtime-first)
- gRPC: google.golang.org/grpc for microservices
- Task Queues: github.com/hibiken/asynq (Redis-based distributed task queue)
- Observability: go.opentelemetry.io/otel, github.com/prometheus/client_golang
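A sketch of struct-tag validation with validator/v10; the Proposal fields and rules below are hypothetical, not the project's actual schema:

```go
package main

import (
	"fmt"

	"github.com/go-playground/validator/v10"
)

// Proposal loosely mirrors a request body for POST /api/v1/proposals/{id}/status
// (field names and rules are assumptions for illustration).
type Proposal struct {
	Status  string `validate:"required,oneof=pending accepted rejected"`
	Comment string `validate:"max=500"`
}

func main() {
	validate := validator.New()
	err := validate.Struct(Proposal{Status: "archived"})
	fmt.Println(err) // reports the failed 'oneof' rule on Status
}
```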
Go-Specific Architecture Patterns:
- Interface-Driven Design: Accept interfaces, return structs
- Context Propagation: Use context.Context for cancellation, timeouts, and request-scoped values
- Error Handling: Wrap errors with fmt.Errorf("operation failed: %w", err); use errors.Is() and errors.As()
- Concurrency Patterns: Channels for communication, sync.WaitGroup for coordination, worker pools for parallelism
- Graceful Shutdown: Handle SIGTERM/SIGINT, drain connections, finish in-flight requests, clean up resources (a sketch follows this list)
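The graceful-shutdown sketch referenced above, using only the standard library: signal.NotifyContext cancels on SIGTERM/SIGINT, and http.Server.Shutdown drains in-flight requests with a deadline:

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080", Handler: http.DefaultServeMux}

	// Cancel the context when SIGTERM or SIGINT arrives.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	<-ctx.Done() // block until a shutdown signal arrives

	// Drain connections and finish in-flight requests, with a deadline.
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := srv.Shutdown(shutdownCtx); err != nil {
		log.Printf("forced shutdown: %v", err)
	}
}
```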
Go Project Structure
/cmd # Application entrypoints
/internal # Private application code
/pkg # Public library code
/api # API definitions
/configs # Configuration files
/scripts # Build/deployment scripts
/docs # Documentation including ADRs