Frontend Simplification - Backend AI Integration
Overview
The frontend has been simplified to be "dumb": it no longer calls LLM providers directly. All AI/LLM operations are handled by backend API endpoints.
What Changed
Removed
- Direct LLM provider calls from frontend
- LLM abstraction layer initialization
- Frontend API key management
- Provider-specific logic in frontend
Added
- Backend API client for AI endpoints (services/ai-api.ts)
- Simplified service layer that just calls the backend
- Clean separation of concerns
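To make "just calls the backend" concrete, the simplified service layer can reduce to thin delegation. The sketch below is illustrative only; the exported function name and its signature are assumptions, not the actual aiService.ts API.

```typescript
// services/aiService.ts (sketch) -- the service layer now only delegates to the
// backend API client; no provider selection, API keys, or LLM SDKs live here.
import * as aiApi from './ai-api';

// Hypothetical wrapper; the real aiService.ts may expose different functions.
export async function extractData(text: string) {
  return aiApi.extractDataFromText({ text });
}
```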
Architecture
┌─────────────────────────────────────────┐
│ Frontend (React) │
│ - Components │
│ - Hooks (useGemini.ts) │
│ - Services (aiService.ts) │
└──────────────┬──────────────────────────┘
│ HTTP Requests
▼
┌─────────────────────────────────────────┐
│ Backend API Endpoints │
│ /api/ai/extract/text │
│ /api/ai/extract/file │
│ /api/ai/analyze/symbiosis │
│ /api/ai/web-intelligence │
│ /api/ai/search-suggestions │
│ /api/ai/generate/description │
│ /api/ai/generate/historical-context │
│ /api/ai/chat │
└──────────────┬──────────────────────────┘
│
▼
┌─────────────────────────────────────────┐
│ Backend LLM Service │
│ - Provider abstraction │
│ - API key management │
│ - Rate limiting │
│ - Caching │
└─────────────────────────────────────────┘
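A minimal sketch of the frontend-side client in services/ai-api.ts, assuming the backend exchanges JSON on the endpoints listed above. The request and response types are placeholders rather than the contract from BACKEND_AI_ENDPOINTS.md, and the real module would likely route through lib/api-client.ts instead of raw fetch.

```typescript
// services/ai-api.ts (sketch) -- thin wrappers around the backend AI endpoints.
// Payload shapes are illustrative placeholders, not the actual contract.
const AI_BASE = '/api/ai';

async function postJson<T>(path: string, body: unknown): Promise<T> {
  const res = await fetch(`${AI_BASE}${path}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  if (!res.ok) {
    throw new Error(`AI API error ${res.status}: ${await res.text()}`);
  }
  return res.json() as Promise<T>;
}

export interface ExtractTextRequest { text: string; }
export interface ExtractTextResponse { fields: Record<string, string>; }

export function extractDataFromText(req: ExtractTextRequest): Promise<ExtractTextResponse> {
  return postJson('/extract/text', req);
}

export interface ChatRequest { messages: { role: 'user' | 'assistant'; content: string }[]; }
export interface ChatResponse { reply: string; }

export function chat(req: ChatRequest): Promise<ChatResponse> {
  return postJson('/chat', req);
}
```

Because every function funnels through one helper, adding auth headers or retries later only touches a single place.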
Benefits
- Security: API keys stay on backend
- Simplicity: Frontend doesn't need to know about providers
- Centralization: All AI logic in one place
- Cost Control: Backend can manage rate limiting and caching
- Easier Testing: Mock backend endpoints instead of LLM providers
- Better Error Handling: Centralized error handling on backend
Files Changed
New Files
- services/ai-api.ts - Backend API client for AI endpoints
- BACKEND_AI_ENDPOINTS.md - Specification for the backend endpoints
Modified Files
- services/aiService.ts - Now just calls the backend API
- index.tsx - Removed LLM initialization
- lib/api-client.ts - Added FormData support for file uploads (see the sketch below)
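The FormData change matters for /api/ai/extract/file, which takes a file upload rather than JSON. The helper below is a hedged sketch; the function names and response shape are assumptions, not the actual lib/api-client.ts API.

```typescript
// lib/api-client.ts (sketch) -- hypothetical multipart helper for file uploads.
// No Content-Type header is set: the browser adds the multipart boundary itself.
export async function postFormData<T>(path: string, form: FormData): Promise<T> {
  const res = await fetch(path, { method: 'POST', body: form });
  if (!res.ok) {
    throw new Error(`Upload failed with status ${res.status}`);
  }
  return res.json() as Promise<T>;
}

// Example: sending a file to the extraction endpoint.
export function extractDataFromFile(file: File): Promise<unknown> {
  const form = new FormData();
  form.append('file', file, file.name);
  return postFormData('/api/ai/extract/file', form);
}
```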
Kept (for reference/future use)
- lib/llm/ - LLM abstraction layer (can be used by the backend)
Migration Path
- ✅ Frontend updated to call backend endpoints
- ⏳ Backend needs to implement the AI endpoints (see BACKEND_AI_ENDPOINTS.md)
- ⏳ Backend can use the LLM abstraction from lib/llm/ (if ported to Go) or implement its own
Example Usage
Before (Direct LLM call)
import { llmService } from './lib/llm/llmService';
const response = await llmService.generateContent({ ... });
After (Backend API call)
import * as aiApi from './services/ai-api';
const data = await aiApi.extractDataFromText({ text: '...' });
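A hook such as useGemini.ts (named in the architecture diagram) can then wrap these calls with loading and error state. The hook below is an illustrative sketch, not the real implementation; adjust names and import paths to the actual layout.

```typescript
// Illustrative React hook around the backend AI client (not the real useGemini.ts).
import { useCallback, useState } from 'react';
import * as aiApi from './services/ai-api';

export function useTextExtraction() {
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);

  const extract = useCallback(async (text: string) => {
    setLoading(true);
    setError(null);
    try {
      return await aiApi.extractDataFromText({ text });
    } catch (e) {
      setError(e instanceof Error ? e.message : 'Unknown error');
      return null;
    } finally {
      setLoading(false);
    }
  }, []);

  return { extract, loading, error };
}
```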
Next Steps
- Implement backend AI endpoints (see BACKEND_AI_ENDPOINTS.md)
- Add rate limiting and caching on the backend
- Add monitoring and cost tracking
- Consider streaming responses for chat (WebSocket or SSE)
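If chat responses end up being streamed, the frontend stays just as thin. The sketch below consumes a streamed /api/ai/chat response with the Fetch API; the `stream` flag and plain-text chunk format are assumptions about a contract that does not exist yet (SSE or WebSocket would look different).

```typescript
// Sketch: reading a streamed chat response chunk by chunk with the Fetch API.
// The request/response shape is assumed; the backend contract is not defined yet.
export async function streamChat(
  message: string,
  onChunk: (text: string) => void,
): Promise<void> {
  const res = await fetch('/api/ai/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages: [{ role: 'user', content: message }], stream: true }),
  });
  if (!res.ok || !res.body) {
    throw new Error(`Chat request failed with status ${res.status}`);
  }
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true }));
  }
}
```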