# Frontend Simplification - Backend AI Integration

## Overview

The frontend has been simplified to be "dumb": it no longer calls LLM providers directly. All AI/LLM operations are now handled by backend API endpoints.

## What Changed

### Removed

- Direct LLM provider calls from the frontend
- LLM abstraction layer initialization
- Frontend API key management
- Provider-specific logic in the frontend

### Added

- Backend API client for the AI endpoints (`services/ai-api.ts`)
- Simplified service layer that just calls the backend
- Clean separation of concerns

## Architecture

```
┌──────────────────────────────────────────┐
│         Frontend (React)                 │
│  - Components                            │
│  - Hooks (useGemini.ts)                  │
│  - Services (aiService.ts)               │
└──────────────┬───────────────────────────┘
               │ HTTP Requests
               ▼
┌──────────────────────────────────────────┐
│      Backend API Endpoints               │
│  /api/ai/extract/text                    │
│  /api/ai/extract/file                    │
│  /api/ai/analyze/symbiosis               │
│  /api/ai/web-intelligence                │
│  /api/ai/search-suggestions              │
│  /api/ai/generate/description            │
│  /api/ai/generate/historical-context     │
│  /api/ai/chat                            │
└──────────────┬───────────────────────────┘
               │
               ▼
┌──────────────────────────────────────────┐
│      Backend LLM Service                 │
│  - Provider abstraction                  │
│  - API key management                    │
│  - Rate limiting                         │
│  - Caching                               │
└──────────────────────────────────────────┘
```
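On the frontend side of this diagram, `services/ai-api.ts` can stay a thin wrapper around `fetch`. The sketch below is a hypothetical illustration: the endpoint paths come from the diagram, and `extractDataFromText` appears in the usage example later in this document, but the request/response shapes are assumptions; the actual contract is specified in `BACKEND_AI_ENDPOINTS.md`.

```typescript
// Hypothetical sketch of services/ai-api.ts: a thin client over the
// backend AI endpoints. Paths are from the diagram above; request and
// response shapes are assumptions (see BACKEND_AI_ENDPOINTS.md).
async function postJson<T>(path: string, body: unknown): Promise<T> {
  const res = await fetch(path, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  if (!res.ok) {
    throw new Error(`AI API request failed: ${res.status} ${res.statusText}`);
  }
  return res.json() as Promise<T>;
}

// One exported function per endpoint keeps call sites provider-agnostic.
export function extractDataFromText(req: { text: string }) {
  return postJson<Record<string, unknown>>('/api/ai/extract/text', req);
}

export function generateDescription(req: { prompt: string }) {
  return postJson<{ description: string }>('/api/ai/generate/description', req);
}
```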

## Benefits

1. Security: API keys stay on the backend
2. Simplicity: the frontend doesn't need to know about providers
3. Centralization: all AI logic lives in one place
4. Cost Control: the backend can manage rate limiting and caching
5. Easier Testing: mock backend endpoints instead of LLM providers (see the sketch after this list)
6. Better Error Handling: errors are handled centrally on the backend
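
To make the testing benefit concrete: because the frontend only issues HTTP requests, tests can stub the global `fetch` instead of mocking an LLM SDK. A minimal, framework-agnostic sketch using `extractDataFromText` from the sketch above (the response payload is an assumed shape):

```typescript
// Stub fetch so frontend AI logic can be exercised without any LLM
// provider. The { entities: [] } payload is an assumed response shape.
const realFetch = globalThis.fetch;
globalThis.fetch = async () =>
  new Response(JSON.stringify({ entities: [] }), {
    status: 200,
    headers: { 'Content-Type': 'application/json' },
  });

try {
  const data = await extractDataFromText({ text: 'sample input' });
  console.assert('entities' in (data as object));
} finally {
  globalThis.fetch = realFetch; // always restore the real fetch
}
```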

## Files Changed

### New Files

- `services/ai-api.ts` - Backend API client for AI endpoints
- `BACKEND_AI_ENDPOINTS.md` - Specification for the backend endpoints

### Modified Files

- `services/aiService.ts` - Now just calls the backend API
- `index.tsx` - Removed LLM initialization
- `lib/api-client.ts` - Added FormData support for file uploads (see the sketch below)
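
For the FormData change in `lib/api-client.ts`, the essential detail is to send multipart form data without setting the `Content-Type` header manually, so the browser can add the multipart boundary itself. A hedged sketch (the function and field names are assumptions):

```typescript
// Hypothetical helper for file uploads, e.g. to /api/ai/extract/file.
// Note: no Content-Type header; the browser sets the multipart boundary.
export async function postFile<T>(path: string, file: File): Promise<T> {
  const form = new FormData();
  form.append('file', file); // the field name 'file' is an assumption

  const res = await fetch(path, { method: 'POST', body: form });
  if (!res.ok) {
    throw new Error(`Upload failed: ${res.status}`);
  }
  return res.json() as Promise<T>;
}
```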

### Kept (for reference/future use)

- `lib/llm/` - LLM abstraction layer (can be used by the backend)

## Migration Path

1. Frontend updated to call backend endpoints
2. Backend needs to implement the AI endpoints (see `BACKEND_AI_ENDPOINTS.md`); illustrative contract types are sketched below
3. Backend can use the LLM abstraction from `lib/llm/` (if ported to Go) or implement its own
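
While the backend endpoints are being implemented, it can help to pin the contracts down as types on the frontend. The shapes below are purely illustrative assumptions; `BACKEND_AI_ENDPOINTS.md` remains the authoritative specification.

```typescript
// Illustrative contract types for two of the endpoints. All field
// names here are assumptions; BACKEND_AI_ENDPOINTS.md is authoritative.
export interface ExtractTextRequest {
  text: string;
}

export interface ExtractTextResponse {
  // Structured data extracted from the text; exact fields TBD by the spec.
  data: Record<string, unknown>;
}

export interface ChatRequest {
  messages: Array<{ role: 'user' | 'assistant'; content: string }>;
}

export interface ChatResponse {
  reply: string;
}
```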

## Example Usage

### Before (Direct LLM call)

```typescript
import { llmService } from './lib/llm/llmService';

const response = await llmService.generateContent({ ... });
```

### After (Backend API call)

```typescript
import * as aiApi from './services/ai-api';

const data = await aiApi.extractDataFromText({ text: '...' });
```

## Next Steps

1. Implement the backend AI endpoints (see `BACKEND_AI_ENDPOINTS.md`)
2. Add rate limiting and caching on the backend
3. Add monitoring and cost tracking
4. Consider streaming responses for chat (WebSocket or SSE); a consumption sketch follows
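
For step 4, if the backend streams chat responses over SSE, the frontend cannot use `EventSource` for a POST request, but it can read the response body incrementally. A sketch under that assumption (it treats each chunk as plain text; real SSE framing with `data:` lines would need parsing):

```typescript
// Sketch: consuming a streamed chat response from POST /api/ai/chat.
// Assumes the backend streams text chunks; adjust for real SSE framing.
export async function streamChat(
  messages: unknown,
  onChunk: (text: string) => void,
): Promise<void> {
  const res = await fetch('/api/ai/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages }),
  });
  if (!res.ok || !res.body) {
    throw new Error(`Chat request failed: ${res.status}`);
  }
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true }));
  }
}
```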