# LLM Provider Abstraction Implementation
## Overview

The application now uses a provider-agnostic abstraction layer for LLM services, allowing easy switching between different providers (Gemini, OpenAI, Anthropic, etc.) without changing application code.

## What Changed

### New Files
1. **`lib/llm/types.ts`** - Core interfaces and types for LLM providers (a rough sketch of these interfaces follows this list)
2. **`lib/llm/providers/gemini.ts`** - Gemini provider implementation
3. **`lib/llm/providers/index.ts`** - Provider factory and registry
4. **`lib/llm/llmService.ts`** - High-level service wrapper
5. **`lib/llm/init.ts`** - Initialization utility
6. **`services/aiService.ts`** - Refactored service layer (replaces `geminiService.ts`)

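For orientation, here is a minimal sketch of what the core abstractions in `lib/llm/types.ts` might look like. The shapes are inferred from the examples later in this document; treat the exact field names as assumptions rather than a copy of the real file.

```typescript
// Illustrative sketch of lib/llm/types.ts (field names are assumptions)
export type LLMProvider = 'gemini' | 'openai' | 'anthropic';

export interface LLMProviderConfig {
  apiKey: string;
  model?: string;
  temperature?: number;
  maxTokens?: number;
}

export interface GenerateContentRequest {
  contents: string;
  systemInstruction?: string;
  responseFormat?: 'text' | 'json';
  jsonSchema?: unknown; // e.g. a zod schema when responseFormat is 'json'
}

export interface GenerateContentResponse {
  text: string;
  json?: unknown; // populated when responseFormat is 'json'
}

export interface ILLMProvider {
  readonly name: LLMProvider;
  initialize(config: LLMProviderConfig): void;
  generateContent(request: GenerateContentRequest): Promise<GenerateContentResponse>;
  isInitialized(): boolean;
  getCapabilities(): Record<string, boolean>;
}
```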
### Modified Files

1. **`hooks/useGemini.ts`** - Updated to use the new `aiService` instead of `geminiService`
2. **`index.tsx`** - Added LLM service initialization on app startup (see the sketch after this list)

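The startup hook-up might look roughly like this; `initializeLLM` is an assumed export name for the utility in `lib/llm/init.ts`, so check the actual module before copying it.

```typescript
// index.tsx (sketch; the initializeLLM export name is an assumption)
import { initializeLLM } from './lib/llm/init';

// Read the VITE_LLM_* variables and register the configured provider
// before the application renders.
initializeLLM();
```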
### Deprecated Files

- **`services/geminiService.ts`** - Can be removed after migration (kept for reference)

## Architecture
```
┌─────────────────────────────────────────┐
│            Application Code             │
│      (hooks, components, services)      │
└──────────────┬──────────────────────────┘
               │
               ▼
┌─────────────────────────────────────────┐
│              aiService.ts               │
│      (High-level business logic)        │
└──────────────┬──────────────────────────┘
               │
               ▼
┌─────────────────────────────────────────┐
│             llmService.ts               │
│            (Service wrapper)            │
└──────────────┬──────────────────────────┘
               │
               ▼
┌─────────────────────────────────────────┐
│         ILLMProvider Interface          │
│         (Provider abstraction)          │
└──────────────┬──────────────────────────┘
               │
       ┌───────┴────────┬──────────────┐
       ▼                ▼              ▼
 ┌──────────┐     ┌──────────┐    ┌──────────┐
 │  Gemini  │     │  OpenAI  │    │ Anthropic│
 │ Provider │     │ Provider │    │ Provider │
 └──────────┘     └──────────┘    └──────────┘
```
## Usage

### Environment Configuration

Set these environment variables to configure the LLM provider:
```bash
# Provider selection (default: gemini)
VITE_LLM_PROVIDER=gemini

# API credentials
VITE_LLM_API_KEY=your-api-key-here

# Optional: model configuration
VITE_LLM_MODEL=gemini-2.5-flash
VITE_LLM_TEMPERATURE=0.7
VITE_LLM_MAX_TOKENS=2048
```
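As a rough illustration of how these variables feed the service, the initialization utility presumably reads them through Vite's `import.meta.env` and builds a provider config along these lines (the config field names are assumptions):

```typescript
// Sketch: mapping VITE_LLM_* variables onto a provider config object.
const config = {
  provider: import.meta.env.VITE_LLM_PROVIDER ?? 'gemini',
  apiKey: import.meta.env.VITE_LLM_API_KEY,
  model: import.meta.env.VITE_LLM_MODEL,
  temperature: Number(import.meta.env.VITE_LLM_TEMPERATURE ?? 0.7),
  maxTokens: Number(import.meta.env.VITE_LLM_MAX_TOKENS ?? 2048),
};
```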
### Using the Service

The service is automatically initialized on app startup. Use it in your code:
```typescript
import { llmService } from './lib/llm/llmService';
import { z } from 'zod';

// Simple text generation
const response = await llmService.generateContent({
  contents: 'Hello, world!',
  systemInstruction: 'You are a helpful assistant.',
  responseFormat: 'text',
});

// JSON mode with schema validation
const schema = z.object({ name: z.string(), age: z.number() });

const jsonResponse = await llmService.generateContent({
  contents: 'Extract: John is 30',
  responseFormat: 'json',
  jsonSchema: schema,
});
console.log(jsonResponse.json); // { name: 'John', age: 30 }
```
### High-Level Functions

Use the business logic functions in `services/aiService.ts`:
```typescript
import {
  sendMessage,
  extractDataFromText,
  analyzeSymbiosis,
  getWebIntelligence,
} from './services/aiService';

// These functions are provider-agnostic
const description = await extractDataFromText(text, t);
const matches = await analyzeSymbiosis(org, allOrgs, t);
```
## Adding a New Provider

1. **Create a provider class** in `lib/llm/providers/`:
```typescript
// lib/llm/providers/openai.ts
import type {
  GenerateContentRequest,
  GenerateContentResponse,
  ILLMProvider,
  LLMProvider,
  LLMProviderConfig,
} from '../types';

export class OpenAIProvider implements ILLMProvider {
  readonly name: LLMProvider = 'openai';

  initialize(config: LLMProviderConfig): void {
    // Initialize OpenAI client
  }

  async generateContent(request: GenerateContentRequest): Promise<GenerateContentResponse> {
    // Implement OpenAI API calls
  }

  isInitialized(): boolean {
    /* ... */
  }

  getCapabilities() {
    /* ... */
  }
}
```
2. **Register it in the factory** (`lib/llm/providers/index.ts`):
```typescript
// Add a case for the new provider in the factory's switch
case 'openai':
  return new OpenAIProvider();
```
3. **Set the environment variables**:
```bash
VITE_LLM_PROVIDER=openai
VITE_LLM_API_KEY=sk-...
```
## Migration Notes

### Before
```typescript
import { sendMessageToGemini } from './services/geminiService';

const response = await sendMessageToGemini(message, systemInstruction);
```
### After
```typescript
import { sendMessage } from './services/aiService';

const response = await sendMessage(message, systemInstruction);
```
All hooks have been updated to use the new service. The old `geminiService.ts` can be removed after verifying that everything works.
## Provider Capabilities

Each provider reports its capabilities:

- **Gemini**: Images ✅, JSON ✅, System Instructions ✅, Tools ✅
- **OpenAI** (when implemented): Images ✅, JSON ✅, System Instructions ✅, Tools ✅
- **Anthropic** (when implemented): Images ✅, JSON ✅, System Instructions ✅, Tools ✅

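Capabilities can be checked at runtime before issuing a request. The sketch below assumes the service wrapper re-exposes the active provider's `getCapabilities()` and that the returned object uses flags like `supportsImages`; the exact field names are assumptions.

```typescript
import { llmService } from './lib/llm/llmService';

// Hypothetical capability check before sending an image-bearing prompt.
const caps = llmService.getCapabilities();

if (!caps.supportsImages) {
  console.warn('Active provider does not accept images; falling back to a text-only prompt.');
}
```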
## Error Handling

All providers throw `LLMProviderError` with provider context:
```typescript
// LLMProviderError is assumed to be exported from the LLM types module
import { LLMProviderError } from './lib/llm/types';

try {
  const response = await llmService.generateContent({ ... });
} catch (error) {
  if (error instanceof LLMProviderError) {
    console.error(`Error from ${error.provider}:`, error.message);
  }
}
```
## Benefits
1. **Flexibility**: Switch providers via environment variable
2. **Testability**: Easy to mock providers for testing (see the mock sketch after this list)
3. **Future-proof**: Add new providers without changing application code
4. **Cost optimization**: Switch to cheaper providers when available
5. **Feature parity**: Abstract away provider-specific differences

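To make the testability point concrete, a test suite could register a canned provider instead of a real one. This mock follows the interface shape sketched earlier in this document; adjust it to the real `ILLMProvider` in `lib/llm/types.ts` (for instance, the `name` union may not include `'mock'`).

```typescript
// Minimal mock provider for unit tests; never calls a real API.
import type { GenerateContentRequest, GenerateContentResponse, LLMProviderConfig } from './lib/llm/types';

export class MockProvider {
  readonly name = 'mock';
  private initialized = false;

  initialize(_config: LLMProviderConfig): void {
    this.initialized = true;
  }

  async generateContent(request: GenerateContentRequest): Promise<GenerateContentResponse> {
    // Echo the prompt back as a canned response so assertions stay deterministic.
    return { text: `mock response for: ${String(request.contents)}` };
  }

  isInitialized(): boolean {
    return this.initialized;
  }

  getCapabilities() {
    return { supportsImages: false, supportsJson: true, supportsSystemInstructions: true, supportsTools: false };
  }
}
```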
## Next Steps

1. Implement OpenAI provider (optional)
2. Implement Anthropic provider (optional)
3. Add provider-specific optimizations
4. Add streaming support abstraction (for chat)
5. Add retry logic and rate limiting
6. Add cost tracking per provider