Overview

PolyChat-AI integrates with the OpenRouter API to provide access to multiple AI models through a unified interface. This guide covers the integration architecture, customization options, and extension patterns.
All API integration code is located in src/services/openRouter.ts

OpenRouter API Basics

API Endpoint

PolyChat-AI uses the OpenRouter chat completions endpoint:
const API_URL = 'https://openrouter.ai/api/v1/chat/completions';

Required Headers

Every API request includes these headers (see src/services/openRouter.ts:44-49):
{
  'Authorization': `Bearer ${apiKey}`,
  'Content-Type': 'application/json',
  'HTTP-Referer': window.location.origin,
  'X-Title': 'PolyChat AI'
}
  • Authorization: Authenticates your OpenRouter API key
  • Content-Type: Specifies JSON request body format
  • HTTP-Referer: Required by OpenRouter for attribution and analytics
  • X-Title: Identifies your application in OpenRouter’s dashboard

Core API Functions

Non-Streaming Requests

The fetchAIResponse() function handles standard API requests:
fetchAIResponse(
  messages: Message[],
  apiKey: string,
  model: string,
  systemPrompt?: string
): Promise<string | MessageContent[]>
Key features:
  • Converts app message format to OpenRouter API format
  • Injects system prompt as first message if provided
  • Handles both text and multimodal responses (text + images)
  • Returns either a string or array of MessageContent objects
See implementation at src/services/openRouter.ts:5-97
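
A minimal usage sketch follows; the message literal, model id, and key handling are illustrative and assume the simplified { role, content } message shape for brevity:

import { fetchAIResponse } from './services/openRouter';
import type { Message } from './types';

// Illustrative values; in the app these come from chat state and Settings
const apiKey = 'sk-or-v1-...';
const model = 'openai/gpt-4o-mini';
const messages: Message[] = [
  { role: 'user', content: 'Summarize the OpenRouter API in one sentence.' },
];

const reply = await fetchAIResponse(messages, apiKey, model, 'You are a concise assistant.');

if (typeof reply === 'string') {
  console.log(reply); // plain text response
} else {
  console.log(reply); // multimodal MessageContent[] (text + images)
}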

Streaming Requests

The streamAIResponse() function enables real-time response streaming:
streamAIResponse(
  messages: Message[],
  apiKey: string,
  model: string,
  onChunk: (delta: string) => void,
  systemPrompt?: string,
  abortController?: AbortController
): Promise<string | MessageContent[]>
Streaming features:
  • Real-time chunk delivery via onChunk callback
  • Automatic fallback to non-streaming on error
  • Cancellable via AbortController
  • Server-Sent Events (SSE) parsing
  • Returns complete text when done
See implementation at src/services/openRouter.ts:102-191
If streaming fails (network issues, unsupported model), the function automatically falls back to fetchAIResponse().
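
A usage sketch with illustrative values; the onChunk handler here only accumulates and logs deltas, where the app would append them to the chat UI:

import { streamAIResponse } from './services/openRouter';
import type { Message } from './types';

const apiKey = 'sk-or-v1-...';
const model = 'openai/gpt-4o-mini'; // any streaming-capable model
const messages: Message[] = [
  { role: 'user', content: 'Tell me a short story.' },
];

const controller = new AbortController();
let streamed = '';

const finalText = await streamAIResponse(
  messages,
  apiKey,
  model,
  (delta) => {
    streamed += delta;  // accumulate streamed text
    console.log(delta); // render each chunk as it arrives
  },
  undefined,            // no system prompt in this example
  controller            // allows cancellation via controller.abort()
);
// finalText holds the complete response once streaming (or the fallback) finishes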

Message Format Conversion

PolyChat-AI uses an internal message format that differs from OpenRouter’s API format. The conversion logic handles this transformation:

Internal Format → API Format

const apiMessages: ApiMessage[] = messages.map((message) => {
  let content = '';
  if (typeof message.content === 'string') {
    content = message.content;
  } else if (Array.isArray(message.content)) {
    // Extract text from multimodal content
    content = message.content
      .filter((item) => item.type === 'text')
      .map((item) => item.text || '')
      .join(' ');
  }
  return { role: message.role, content };
});
Why this conversion?
  • OpenRouter expects simple {role, content} objects
  • PolyChat-AI’s internal format supports multimodal content (text + images)
  • Text content is extracted and concatenated for API requests
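
For example, a multimodal internal message is flattened to a plain-text API message (the image item shape shown here is illustrative):

// Internal format (simplified)
const internal = {
  role: 'user',
  content: [
    { type: 'text', text: 'Describe this image' },
    { type: 'image_url', image_url: { url: 'data:image/png;base64,...' } },
  ],
};

// After conversion, only the text parts are sent to OpenRouter
const converted = { role: 'user', content: 'Describe this image' };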

System Prompt Injection

System prompts are automatically prepended to the message list:
if (systemPrompt && systemPrompt.trim()) {
  apiMessages.unshift({
    role: 'system',
    content: systemPrompt.trim(),
  });
}
This ensures the AI model receives context instructions before processing user messages.
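With a system prompt set, the array sent to the API therefore begins with the system message (values are illustrative):

const apiMessages = [
  { role: 'system', content: 'You are a helpful assistant.' }, // injected first
  { role: 'user', content: 'Hello!' },
];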
System prompts are set in Settings (Ctrl+K) and can be customized per user preference. See the Configuration documentation.

Advanced Features

Image Generation

PolyChat-AI includes specialized image generation functions:
generateImage(
  prompt: string,
  apiKey: string,
  model: string = 'google/gemini-2.5-flash-image-preview:free',
  options?: {
    size?: '1024x1024' | '512x512' | '256x256';
    style?: 'natural' | 'vivid' | 'digital_art';
    quality?: 'standard' | 'hd';
  }
): Promise<string | MessageContent[]>
Basic image generation with prompt optimization. See src/services/openRouter.ts:682-731
Image generation helpers:
  • isImageGenerationModel(modelId): Check if model supports image generation
  • getImageModels(): Fetch list of image-capable models from OpenRouter
  • optimizeImagePrompt(prompt): Enhance prompts for better quality
  • createAdvancedImagePrompt(): Build detailed prompts with style/mood/lighting options
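
Combining these, a usage sketch (the prompt, options, and key are illustrative; the model id is the default from the signature above):

import { generateImage, isImageGenerationModel } from './services/openRouter';

const apiKey = 'sk-or-v1-...';
const model = 'google/gemini-2.5-flash-image-preview:free';

if (isImageGenerationModel(model)) {
  const result = await generateImage(
    'A lighthouse at sunset, painted in watercolor',
    apiKey,
    model,
    { size: '1024x1024', style: 'vivid', quality: 'hd' }
  );
  // result is either a string or MessageContent[] containing the generated image
  console.log(result);
}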

Model Discovery

Fetch available models from OpenRouter:
// Get trending general-purpose models
const trendingModels = await getTopWeeklyModels();
// Returns: [{ id, name, desc, emoji, isFree? }]

// Get image generation models
const imageModels = await getImageModels();
// Returns: [{ id, name, desc, emoji }]
These functions fetch from https://openrouter.ai/api/v1/models and filter based on model capabilities. See implementations at:
  • src/services/openRouter.ts:316-466 (trending models)
  • src/services/openRouter.ts:218-314 (image models)

API Key Validation

Validate an OpenRouter API key before use:
const isValid = await validateApiKey('sk-or-v1-...');

if (!isValid) {
  console.error('Invalid API key');
}
This makes a test request to the models endpoint to verify the key works. See src/services/openRouter.ts:827-840

Customization Patterns

Adding Custom Headers

To add custom headers to all API requests:
  1. Locate the fetch calls: Open src/services/openRouter.ts and find the fetch() calls at lines 42 and 126.
  2. Extend the headers object:
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
      'HTTP-Referer': window.location.origin,
      'X-Title': 'PolyChat AI',
      'X-Custom-Header': 'your-value', // Add here
    }
  3. Apply to both streaming and non-streaming: Make sure to add the header to both the fetchAIResponse and streamAIResponse functions.

Modifying Request Payload

Customize the request body sent to OpenRouter:
const payload: OpenRouterPayload = {
  model,
  messages: apiMessages,
  // Add custom parameters:
  temperature: 0.7,
  max_tokens: 1000,
  top_p: 0.9,
};
  • temperature: Randomness (0-2, default 1)
  • top_p: Nucleus sampling (0-1, default 1)
  • top_k: Limits sampling to the K most likely tokens
  • frequency_penalty: Reduce repetition (-2 to 2)
  • presence_penalty: Encourage new topics (-2 to 2)
  • max_tokens: Response length limit
  • stop: Stop sequences
See OpenRouter API docs for complete list.

Creating a Custom API Service

To integrate a different API provider:
  1. Create a new service file: src/services/customProvider.ts
  2. Implement the same interface:
    export const fetchAIResponse = async (
      messages: Message[],
      apiKey: string,
      model: string,
      systemPrompt?: string
    ): Promise<string | MessageContent[]> => {
      // Your implementation
    };
    
  3. Update imports: Replace OpenRouter imports with your custom service
  4. Maintain type compatibility: Use the same Message and MessageContent types from src/types/index.ts
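
As a sketch only, a custom provider targeting a generic OpenAI-compatible chat endpoint could reuse the same conversion logic; the endpoint URL and response shape below are assumptions, not part of PolyChat-AI:

// src/services/customProvider.ts (hypothetical)
import type { Message, MessageContent } from '../types';

const CUSTOM_API_URL = 'https://example.com/v1/chat/completions'; // assumed endpoint

export const fetchAIResponse = async (
  messages: Message[],
  apiKey: string,
  model: string,
  systemPrompt?: string
): Promise<string | MessageContent[]> => {
  // Same flattening as the OpenRouter service: keep text parts only
  const apiMessages = messages.map((message) => ({
    role: message.role,
    content: typeof message.content === 'string'
      ? message.content
      : message.content.filter((item) => item.type === 'text').map((item) => item.text || '').join(' '),
  }));

  if (systemPrompt && systemPrompt.trim()) {
    apiMessages.unshift({ role: 'system', content: systemPrompt.trim() });
  }

  const response = await fetch(CUSTOM_API_URL, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ model, messages: apiMessages }),
  });

  if (!response.ok) {
    throw new Error(`API error: ${response.status}`);
  }

  const data = await response.json();
  // Assumes an OpenAI-style response body
  return data.choices[0].message.content as string;
};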

Error Handling

The API integration includes comprehensive error handling:

Error Types

try {
  const response = await fetchAIResponse(...);
} catch (error) {
  if (error instanceof Error) {
    // Check error.message for:
    // - "API error: 401" → Invalid API key
    // - "API error: 429" → Rate limit exceeded
    // - "API error: 500" → OpenRouter server error
    // - "Failed to fetch" → Network error
    console.error(error.message);
  }
}

Automatic Retry Pattern

The generateImageReliable() function demonstrates exponential backoff:
for (let attempt = 1; attempt <= maxRetries; attempt++) {
  try {
    return await generateImage(...);
  } catch (error) {
    if (attempt < maxRetries) {
      // Exponential backoff: 1s, 2s, 4s (max 5s)
      const delay = Math.min(1000 * Math.pow(2, attempt - 1), 5000);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
You can apply this pattern to other API functions for improved reliability.
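
The same idea can be factored into a small generic helper; this is a sketch, not an existing function in the codebase:

// Hypothetical helper: retry any async call with exponential backoff
async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxRetries) {
        // 1s, 2s, 4s ... capped at 5s, matching the pattern above
        const delay = Math.min(1000 * Math.pow(2, attempt - 1), 5000);
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  // All attempts failed: surface the last error to the caller
  throw lastError;
}

// Usage (illustrative):
// const reply = await withRetry(() => fetchAIResponse(messages, apiKey, model));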

Performance Optimization

Request Cancellation

Use AbortController to cancel in-flight requests:
const controller = new AbortController();

streamAIResponse(
  messages,
  apiKey,
  model,
  onChunk,
  systemPrompt,
  controller // Pass controller
);

// Cancel the request
controller.abort();
This prevents unnecessary API charges and improves UX when users navigate away.
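
If the UI layer is React (assumed here for illustration), the controller can be tied to component unmount so navigating away cancels the request:

import { useEffect } from 'react';
import { streamAIResponse } from './services/openRouter';
import type { Message } from './types';

// Hypothetical hook: stream a reply and abort it on unmount
function useStreamedReply(
  messages: Message[],
  apiKey: string,
  model: string,
  onChunk: (delta: string) => void
) {
  useEffect(() => {
    const controller = new AbortController();

    streamAIResponse(messages, apiKey, model, onChunk, undefined, controller)
      .catch(() => {
        // request was aborted or failed; handle as needed
      });

    // Abort the in-flight request when the component unmounts
    return () => controller.abort();
  }, [messages, apiKey, model, onChunk]);
}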

Caching Responses

Consider implementing a response cache for repeated queries:
const cache = new Map<string, string>();

function getCacheKey(messages: Message[], model: string): string {
  return JSON.stringify({ messages, model });
}

const key = getCacheKey(messages, model);
if (cache.has(key)) {
  return cache.get(key)!;
}

const response = await fetchAIResponse(...);
cache.set(key, response);
Be cautious with caching: it may return stale responses for non-deterministic models, so it is best suited to static reference queries.

Testing & Development

Mock API Responses

For testing without API calls:
const mockFetchAIResponse = async (
  messages: Message[],
  apiKey: string,
  model: string
): Promise<string> => {
  await new Promise(resolve => setTimeout(resolve, 1000));
  return `Mock response for: ${messages[messages.length - 1].content}`;
};

Environment-Based API URLs

Switch between production and development endpoints:
const API_URL = process.env.NODE_ENV === 'development'
  ? 'http://localhost:3000/api/v1/chat/completions'
  : 'https://openrouter.ai/api/v1/chat/completions';

  • OpenRouter Documentation: Official API reference and guides
  • Model Pricing: Browse available models and pricing
  • Settings Guide: Configure API keys and preferences
  • Troubleshooting: Common API issues and solutions