AI Integrations

FSS includes a built-in multi-provider AI module that works across the backend, frontend, and mobile packages. Activate any number of providers by adding their API keys — no other configuration is needed.

Architecture Overview

Supported Providers

| Provider | Env Variable | Default Model |
| --- | --- | --- |
| Anthropic (Claude) | ANTHROPIC_API_KEY | claude-sonnet-4-6 |
| OpenAI (GPT) | OPENAI_API_KEY | gpt-4o |
| Google Gemini | GOOGLE_AI_API_KEY | gemini-2.0-flash |
| Groq | GROQ_API_KEY | llama-3.3-70b-versatile |

A provider is active only when its API key is present in projects/fss/backend/.env. The GET /ai/providers endpoint reports configured: true/false for each.


Backend Setup

1. Add API Keys

```bash
# projects/fss/backend/.env
# Add keys for whichever providers you want
ANTHROPIC_API_KEY=sk-ant-api03-...
OPENAI_API_KEY=sk-...
GOOGLE_AI_API_KEY=AIzaSy...
GROQ_API_KEY=gsk_...
```

2. Verify

Start the backend and visit https://localhost:3443/api — the AI endpoints appear in Swagger. Or call the providers endpoint:

```bash
curl -H "Authorization: Bearer <token>" https://localhost:3443/ai/providers
```

Available Endpoints

All endpoints require Authorization: Bearer <accessToken>.

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /ai/providers | List all providers and configured status |
| POST | /ai/chat | Full AI completion (waits for the complete response) |
| POST | /ai/stream | SSE streaming (Content-Type: text/event-stream) |
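As an illustration, /ai/chat can also be called without the provided client helpers. The following is a hedged sketch: the base URL and token are placeholders, and the payload fields mirror the AIChatOptions interface shown later on this page.

```typescript
// Hedged sketch of a direct POST /ai/chat call. The base URL and token are
// placeholders; payload fields mirror the AIChatOptions interface shown later.
interface ChatPayload {
  provider: 'anthropic' | 'openai' | 'gemini' | 'groq';
  messages: { role: 'user' | 'assistant' | 'system'; content: string }[];
  maxTokens?: number;
  systemPrompt?: string;
}

interface RequestOptions {
  method: 'POST';
  headers: Record<string, string>;
  body: string;
}

// Building the request separately keeps the auth header and payload inspectable.
function buildChatRequest(accessToken: string, payload: ChatPayload): RequestOptions {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify(payload),
  };
}

const init = buildChatRequest('<accessToken>', {
  provider: 'anthropic',
  messages: [{ role: 'user', content: 'Hello' }],
  maxTokens: 128,
});
// const res = await fetch('https://localhost:3443/ai/chat', init);
console.log(init.headers.Authorization); // "Bearer <accessToken>"
```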

Using AIService in Your Own Modules

Import AIModule and inject AIService to use AI from any NestJS module:

```typescript
// your.module.ts
import { AIModule } from '../ai/ai.module';

@Module({ imports: [AIModule] })
export class YourModule {}
```

```typescript
// your.service.ts
import { AIService } from '../ai/ai.service';

constructor(private aiService: AIService) {}

// Full response
const result = await this.aiService.chat('anthropic', [
  { role: 'user', content: 'Summarize this document: ...' },
], { maxTokens: 512, systemPrompt: 'Be concise.' });
console.log(result.content);

// Streaming
const stream = this.aiService.stream('openai', messages, options);
for await (const chunk of stream) {
  process.stdout.write(chunk);
}
```

Adding a Custom Provider

  1. Create src/modules/ai/providers/<name>.service.ts implementing IAIProvider:

```typescript
import { IAIProvider, AIMessage, AICompletionOptions, AICompletionResult } from '../interfaces/ai-provider.interface';

export class MyProviderService implements IAIProvider {
  readonly provider = 'myprovider';
  readonly defaultModel = 'my-model-v1';

  isConfigured(): boolean {
    return !!process.env.MYPROVIDER_API_KEY;
  }

  async chat(messages: AIMessage[], options?: AICompletionOptions): Promise<AICompletionResult> {
    // call your provider's API
  }

  async *stream(messages: AIMessage[], options?: AICompletionOptions): AsyncGenerator<string> {
    // yield chunks
  }
}
```

  2. Add the service to the AIModule providers array
  3. Inject it in the AIService constructor and register it in the providers Map
  4. Add MYPROVIDER_API_KEY= to .env.example
  5. Add the provider name to the @IsEnum decorator in AIChatDto
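The registration in the providers Map can be sketched as follows. This is a minimal sketch: the Map field, constructor wiring, and getProvider lookup are assumptions about AIService internals based on the steps above, with NestJS decorators omitted.

```typescript
// Minimal sketch of registering a custom provider in AIService's providers Map.
// Class and field names here are assumptions; the real service is a NestJS injectable.
interface IAIProviderLike {
  readonly provider: string;
  readonly defaultModel: string;
  isConfigured(): boolean;
}

class MyProviderService implements IAIProviderLike {
  readonly provider = 'myprovider';
  readonly defaultModel = 'my-model-v1';
  isConfigured(): boolean {
    // The real implementation would check process.env.MYPROVIDER_API_KEY
    return true;
  }
}

class AIServiceSketch {
  private readonly providers = new Map<string, IAIProviderLike>();

  constructor(myProvider: MyProviderService /* , ...other injected providers */) {
    // Each injected provider is registered under its name for lookup by request DTOs
    this.providers.set(myProvider.provider, myProvider);
  }

  getProvider(name: string): IAIProviderLike {
    const p = this.providers.get(name);
    if (!p || !p.isConfigured()) throw new Error(`Provider not available: ${name}`);
    return p;
  }
}

const svc = new AIServiceSketch(new MyProviderService());
console.log(svc.getProvider('myprovider').defaultModel); // "my-model-v1"
```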

Chat vs Streaming

Use POST /ai/chat when you want the complete response in a single payload, and POST /ai/stream to receive the response incrementally over Server-Sent Events as it is generated.
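As an illustration of consuming the stream endpoint by hand (a hedged sketch: the `data: ` line framing is an assumption based on the text/event-stream content type, and the useAIChat hook below handles all of this for you):

```typescript
// Hedged sketch of buffering and parsing text/event-stream chunks by hand.
// The "data: " framing is an assumption; the useAIChat hook handles this for you.
function parseSSE(buffer: string): { events: string[]; rest: string } {
  const lines = buffer.split('\n');
  const rest = lines.pop() ?? ''; // keep any partial trailing line for the next chunk
  const events = lines
    .filter((line) => line.startsWith('data: '))
    .map((line) => line.slice('data: '.length));
  return { events, rest };
}

// Feed decoded network chunks through the parser, carrying the remainder:
const out: string[] = [];
let carry = '';
for (const chunk of ['data: Hel', 'lo\ndata: world\n']) {
  const { events, rest } = parseSSE(carry + chunk);
  carry = rest;
  out.push(...events);
}
console.log(out.join('')); // "Helloworld"
```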

Frontend Usage

The frontend ships with drop-in components and a React hook. No additional setup is needed beyond the backend having at least one provider configured.

AIChatInterface (Drop-in Component)

```tsx
// src/app/ai-chat/page.tsx
'use client';
import { useEffect, useState } from 'react';
import { ProtectedRoute } from '@/components/auth/ProtectedRoute';
import { AIChatInterface } from '@/components/ai';
import { aiAPI } from '@/lib/api';
import type { AIProviderInfo } from '@/types/ai';

export default function AIChatPage() {
  const [providers, setProviders] = useState<AIProviderInfo[]>([]);

  useEffect(() => {
    aiAPI.getProviders().then(setProviders).catch(console.error);
  }, []);

  return (
    <ProtectedRoute>
      <div className="max-w-3xl mx-auto px-4 py-8">
        <h1 className="text-2xl font-semibold mb-6">AI Assistant</h1>
        <AIChatInterface
          providers={providers}
          systemPrompt="You are a helpful assistant."
          streaming={true}
          className="h-[600px]"
        />
      </div>
    </ProtectedRoute>
  );
}
```

useAIChat Hook

Use useAIChat when you want full control over chat state without the pre-built UI:

```typescript
import { useAIChat } from '@/hooks/useAIChat';

const {
  messages,          // AIMessage[] — full conversation history
  isLoading,         // boolean — true during full (non-streaming) request
  isStreaming,       // boolean — true during SSE stream
  error,             // string | null
  sendMessage,       // async (text: string) => void — full response
  sendMessageStream, // async (text: string) => void — SSE streaming
  clearMessages,     // () => void
  stop,              // () => void — abort mid-stream
} = useAIChat({
  provider: 'openai',
  model: 'gpt-4o',
  systemPrompt: 'Be concise.',
  maxTokens: 512,
  temperature: 0.8,
});

// Stream a message
await sendMessageStream('Explain JWT in one sentence.');

// Abort if needed
stop();
```

Direct API Calls

```typescript
import { aiAPI } from '@/lib/api';

// Check configured providers
const providers = await aiAPI.getProviders();

// Single completion without UI
const result = await aiAPI.chat({
  provider: 'gemini',
  messages: [{ role: 'user', content: 'Summarize this article: ...' }],
  systemPrompt: 'Respond in bullet points.',
  maxTokens: 300,
});
console.log(result.content);
```

Available Models

```typescript
// src/types/ai.ts
export const AI_PROVIDER_MODELS = {
  anthropic: ['claude-sonnet-4-6', 'claude-opus-4-6', 'claude-haiku-4-5-20251001'],
  openai: ['gpt-4o', 'gpt-4o-mini', 'o1', 'o1-mini'],
  gemini: ['gemini-2.0-flash', 'gemini-2.5-pro', 'gemini-1.5-flash'],
  groq: ['llama-3.3-70b-versatile', 'llama-3.1-8b-instant', 'mixtral-8x7b-32768'],
};
```

Mobile Usage

The mobile app uses src/services/aiService.ts which wraps the backend /ai/* endpoints using the SSL-pinning API client.

note

Streaming (SSE) is not supported on mobile. Only full responses via aiAPI.chat() are available.

List Providers

```typescript
import { aiAPI } from '../services/aiService';

const providers = await aiAPI.getProviders();
// [{ provider: 'anthropic', defaultModel: 'claude-sonnet-4-6', configured: true }, ...]
```

Send a Chat Message

```typescript
const result = await aiAPI.chat({
  provider: 'anthropic',
  messages: [
    { role: 'user', content: 'What is the capital of France?' },
  ],
  model: 'claude-sonnet-4-6',  // optional — uses provider default if omitted
  maxTokens: 512,              // optional
  temperature: 0.7,            // optional (0–2)
  systemPrompt: 'Be concise.', // optional
});

console.log(result.content); // "Paris."
console.log(result.usage);   // { inputTokens: 10, outputTokens: 2 }
```

Full Chat Screen Example

```tsx
import React, { useState } from 'react';
import { View, Text, TextInput, TouchableOpacity, FlatList, ActivityIndicator } from 'react-native';
import { aiAPI, AIMessage } from '../services/aiService';
import InternalLayout from '../components/InternalLayout';

export default function AIChatScreen() {
  const [messages, setMessages] = useState<AIMessage[]>([]);
  const [input, setInput] = useState('');
  const [loading, setLoading] = useState(false);

  const send = async () => {
    const text = input.trim();
    if (!text || loading) return;
    setInput('');

    const updated: AIMessage[] = [...messages, { role: 'user', content: text }];
    setMessages(updated);
    setLoading(true);

    try {
      const result = await aiAPI.chat({ provider: 'anthropic', messages: updated });
      setMessages([...updated, { role: 'assistant', content: result.content }]);
    } catch (e) {
      // handle error
    } finally {
      setLoading(false);
    }
  };

  return (
    <InternalLayout>
      <FlatList
        className="flex-1 px-4"
        data={messages}
        keyExtractor={(_, i) => String(i)}
        renderItem={({ item }) => (
          <View
            className={`my-1 p-3 rounded-xl max-w-[80%] ${
              item.role === 'user' ? 'self-end bg-blue-500' : 'self-start bg-gray-100'
            }`}
          >
            <Text className={item.role === 'user' ? 'text-white' : 'text-gray-800'}>
              {item.content}
            </Text>
          </View>
        )}
      />
      {loading && <ActivityIndicator className="my-2" />}
      <View className="flex-row items-center px-4 py-3 border-t border-gray-200">
        <TextInput
          className="flex-1 border border-gray-300 rounded-xl px-3 py-2 mr-2"
          value={input}
          onChangeText={setInput}
          placeholder="Ask anything..."
          onSubmitEditing={send}
        />
        <TouchableOpacity
          onPress={send}
          disabled={loading}
          className="bg-blue-500 px-4 py-2 rounded-xl"
        >
          <Text className="text-white font-medium">Send</Text>
        </TouchableOpacity>
      </View>
    </InternalLayout>
  );
}
```

TypeScript Interfaces

These types are shared across the stack (each package has its own copy):

```typescript
type AIProvider = 'anthropic' | 'openai' | 'gemini' | 'groq';

interface AIMessage {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

interface AIChatOptions {
  provider: AIProvider;
  messages: AIMessage[];
  model?: string;
  maxTokens?: number;
  temperature?: number; // 0–2
  systemPrompt?: string;
}

interface AICompletionResult {
  content: string;
  model: string;
  provider: string;
  usage?: { inputTokens: number; outputTokens: number };
}

interface AIProviderInfo {
  provider: AIProvider;
  defaultModel: string;
  configured: boolean;
}
```