LLM Provider Abstraction Implementation
Overview
The application now uses a provider-agnostic abstraction layer for LLM services, allowing easy switching between different providers (Gemini, OpenAI, Anthropic, etc.) without changing application code.
What Changed
New Files
- lib/llm/types.ts - Core interfaces and types for LLM providers
- lib/llm/providers/gemini.ts - Gemini provider implementation
- lib/llm/providers/index.ts - Provider factory and registry
- lib/llm/llmService.ts - High-level service wrapper
- lib/llm/init.ts - Initialization utility
- services/aiService.ts - Refactored service layer (replaces geminiService.ts)
Modified Files
- hooks/useGemini.ts - Updated to use the new aiService instead of geminiService
- index.tsx - Added LLM service initialization on app startup
Deprecated Files
- services/geminiService.ts - Can be removed after migration (kept for reference)
Architecture
┌─────────────────────────────────────────┐
│           Application Code              │
│     (hooks, components, services)       │
└──────────────┬──────────────────────────┘
               │
               ▼
┌─────────────────────────────────────────┐
│             aiService.ts                │
│       (High-level business logic)       │
└──────────────┬──────────────────────────┘
               │
               ▼
┌─────────────────────────────────────────┐
│             llmService.ts               │
│            (Service wrapper)            │
└──────────────┬──────────────────────────┘
               │
               ▼
┌─────────────────────────────────────────┐
│          ILLMProvider Interface         │
│          (Provider abstraction)         │
└──────────────┬──────────────────────────┘
               │
      ┌────────┴───┬────────────┐
      ▼            ▼            ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│  Gemini  │ │  OpenAI  │ │ Anthropic│
│ Provider │ │ Provider │ │ Provider │
└──────────┘ └──────────┘ └──────────┘
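The exact contracts live in lib/llm/types.ts. As a reference point for the examples below, here is an illustrative sketch of the core types; any field names beyond those used in this document are assumptions:
// Illustrative sketch of lib/llm/types.ts — details beyond this doc are assumptions
export type LLMProvider = 'gemini' | 'openai' | 'anthropic';

export interface LLMProviderConfig {
  apiKey: string;
  model?: string;
  temperature?: number;
  maxTokens?: number;
}

export interface GenerateContentRequest {
  contents: string;
  systemInstruction?: string;
  responseFormat?: 'text' | 'json';
  jsonSchema?: unknown; // e.g. a zod schema when responseFormat is 'json'
}

export interface GenerateContentResponse {
  text: string;
  json?: unknown; // parsed object when responseFormat is 'json'
}

export interface LLMCapabilities {
  images: boolean;
  json: boolean;
  systemInstructions: boolean;
  tools: boolean;
}

export interface ILLMProvider {
  readonly name: LLMProvider;
  initialize(config: LLMProviderConfig): void;
  generateContent(request: GenerateContentRequest): Promise<GenerateContentResponse>;
  isInitialized(): boolean;
  getCapabilities(): LLMCapabilities;
}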
Usage
Environment Configuration
Set these environment variables to configure the LLM provider:
# Provider selection (default: gemini)
VITE_LLM_PROVIDER=gemini
# API credentials
VITE_LLM_API_KEY=your-api-key-here
# Optional: Model configuration
VITE_LLM_MODEL=gemini-2.5-flash
VITE_LLM_TEMPERATURE=0.7
VITE_LLM_MAX_TOKENS=2048
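These variables are read once at startup. A minimal sketch of what lib/llm/init.ts plausibly does, assuming Vite's import.meta.env; the function name and the llmService.initialize signature are illustrative, not the verbatim code:
// Sketch of lib/llm/init.ts — initLLMService and the initialize signature are assumptions
import { llmService } from './llmService';
import type { LLMProvider } from './types';

export function initLLMService(): void {
  const provider = (import.meta.env.VITE_LLM_PROVIDER ?? 'gemini') as LLMProvider;
  llmService.initialize(provider, {
    apiKey: import.meta.env.VITE_LLM_API_KEY ?? '',
    model: import.meta.env.VITE_LLM_MODEL,
    temperature: Number(import.meta.env.VITE_LLM_TEMPERATURE ?? 0.7),
    maxTokens: Number(import.meta.env.VITE_LLM_MAX_TOKENS ?? 2048),
  });
}
index.tsx invokes this once on startup, before the app renders.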
Using the Service
The service is automatically initialized on app startup. Use it in your code:
import { llmService } from './lib/llm/llmService';
// Simple text generation
const response = await llmService.generateContent({
  contents: 'Hello, world!',
  systemInstruction: 'You are a helpful assistant.',
  responseFormat: 'text',
});
// JSON mode with schema validation
import { z } from 'zod';
const schema = z.object({ name: z.string(), age: z.number() });
const jsonResponse = await llmService.generateContent({
  contents: 'Extract: John is 30',
  responseFormat: 'json',
  jsonSchema: schema,
});
console.log(jsonResponse.json); // { name: 'John', age: 30 }
High-Level Functions
Use the business logic functions in services/aiService.ts:
import {
  sendMessage,
  extractDataFromText,
  analyzeSymbiosis,
  getWebIntelligence,
} from './services/aiService';
// These functions are provider-agnostic
const description = await extractDataFromText(text, t);
const matches = await analyzeSymbiosis(org, allOrgs, t);
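Under the hood, these are thin wrappers over llmService. A sketch of the general shape (illustrative, not the verbatim aiService.ts code):
// Hypothetical shape of a wrapper in services/aiService.ts
import { llmService } from '../lib/llm/llmService';

export async function sendMessage(
  message: string,
  systemInstruction?: string,
): Promise<string> {
  const response = await llmService.generateContent({
    contents: message,
    systemInstruction,
    responseFormat: 'text',
  });
  return response.text;
}
Because this layer only references llmService, switching providers never touches it.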
Adding a New Provider
- Create a provider class in lib/llm/providers/:
// lib/llm/providers/openai.ts
import type {
  ILLMProvider,
  LLMProvider,
  LLMProviderConfig,
  GenerateContentRequest,
  GenerateContentResponse,
} from '../types';

export class OpenAIProvider implements ILLMProvider {
  readonly name: LLMProvider = 'openai';

  initialize(config: LLMProviderConfig): void {
    // Initialize the OpenAI client here
  }

  async generateContent(request: GenerateContentRequest): Promise<GenerateContentResponse> {
    // Implement OpenAI API calls and normalize the response
  }

  isInitialized(): boolean {
    /* ... */
  }

  getCapabilities() {
    /* ... */
  }
}
- Register it in the factory (lib/llm/providers/index.ts):
case 'openai':
  return new OpenAIProvider();
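For context, the surrounding factory plausibly looks like this (a sketch; the function name createProvider and the exact structure of lib/llm/providers/index.ts are assumptions):
// Sketch of the provider factory — createProvider is an assumed name
import type { ILLMProvider, LLMProvider } from '../types';
import { GeminiProvider } from './gemini';
import { OpenAIProvider } from './openai';

export function createProvider(name: LLMProvider): ILLMProvider {
  switch (name) {
    case 'gemini':
      return new GeminiProvider();
    case 'openai':
      return new OpenAIProvider();
    default:
      throw new Error(`Unknown LLM provider: ${name}`);
  }
}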
- Set the environment variables:
VITE_LLM_PROVIDER=openai
VITE_LLM_API_KEY=sk-...
Migration Notes
Before
import { sendMessageToGemini } from './services/geminiService';
const response = await sendMessageToGemini(message, systemInstruction);
After
import { sendMessage } from './services/aiService';
const response = await sendMessage(message, systemInstruction);
All hooks have been updated to use the new service. The old geminiService.ts can be removed after verifying everything works.
Provider Capabilities
Each provider reports its capabilities:
- Gemini: Images ✅, JSON ✅, System Instructions ✅, Tools ✅
- OpenAI (when implemented): Images ✅, JSON ✅, System Instructions ✅, Tools ✅
- Anthropic (when implemented): Images ✅, JSON ✅, System Instructions ✅, Tools ✅
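Calling code can gate provider-specific features on these flags before sending a request, for example (assuming llmService exposes the active provider's getCapabilities, which is an assumption):
// Skip image attachments if the active provider cannot handle them
const caps = llmService.getCapabilities();
if (!caps.images) {
  console.warn('Active LLM provider does not support image input; sending text only');
}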
Error Handling
All providers throw LLMProviderError with provider context:
import { LLMProviderError } from './lib/llm/types'; // import path assumed

try {
  const response = await llmService.generateContent({ ... });
} catch (error) {
  if (error instanceof LLMProviderError) {
    console.error(`Error from ${error.provider}:`, error.message);
  }
}
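The error type carries the provider name so callers can log or branch on it. A minimal sketch, assuming it is defined in lib/llm/types.ts alongside the other core types (the constructor shape is an assumption):
// Sketch of LLMProviderError — constructor shape is an assumption
export class LLMProviderError extends Error {
  constructor(
    public readonly provider: LLMProvider,
    message: string,
  ) {
    super(message);
    this.name = 'LLMProviderError';
  }
}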
Benefits
- Flexibility: Switch providers via environment variable
- Testability: Easy to mock providers for testing (see the stub sketch after this list)
- Future-proof: Add new providers without changing application code
- Cost optimization: Switch to cheaper providers when available
- Feature parity: Abstract away provider-specific differences
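For example, a test can substitute a stub that satisfies ILLMProvider without touching the network (a sketch; the import path and how the stub gets registered depend on the factory/service wiring):
// Hypothetical stub provider for tests — path and wiring are assumptions
import type {
  ILLMProvider,
  GenerateContentRequest,
  GenerateContentResponse,
} from '../lib/llm/types';

const stubProvider: ILLMProvider = {
  name: 'gemini',
  initialize: () => {},
  isInitialized: () => true,
  getCapabilities: () => ({
    images: false,
    json: true,
    systemInstructions: true,
    tools: false,
  }),
  async generateContent(req: GenerateContentRequest): Promise<GenerateContentResponse> {
    return { text: `echo: ${req.contents}` };
  },
};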
Next Steps
- Implement OpenAI provider (optional)
- Implement Anthropic provider (optional)
- Add provider-specific optimizations
- Add streaming support abstraction (for chat)
- Add retry logic and rate limiting
- Add cost tracking per provider