Model Registry Architecture
Auto-Update (Runtime)
The app fetches available models from the AI Gateway API at runtime. Results are cached for 1 hour. If the gateway is unreachable, the app falls back to the static snapshot in `lib/ai/models.generated.ts`.
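A minimal sketch of that flow, assuming a plain `fetch` against an OpenAI-compatible `/v1/models` endpoint and a `FALLBACK_MODELS` export from the generated snapshot (the endpoint URL, export name, and module path are assumptions, not the app's actual code):

```ts
// Illustrative only: the endpoint URL, export name, and module path are assumptions.
import { FALLBACK_MODELS } from "@/lib/ai/models.generated";

const ONE_HOUR_MS = 60 * 60 * 1000;
let cache: { models: unknown[]; fetchedAt: number } | null = null;

export async function getAvailableModels(): Promise<unknown[]> {
  // Serve the cached list for up to 1 hour.
  if (cache && Date.now() - cache.fetchedAt < ONE_HOUR_MS) {
    return cache.models;
  }
  try {
    const res = await fetch("https://ai-gateway.vercel.sh/v1/models", {
      headers: { Authorization: `Bearer ${process.env.AI_GATEWAY_API_KEY}` },
    });
    if (!res.ok) throw new Error(`Gateway responded with ${res.status}`);
    const { data } = await res.json();
    cache = { models: data, fetchedAt: Date.now() };
    return data;
  } catch {
    // Gateway unreachable: fall back to the build-time snapshot.
    return [...FALLBACK_MODELS];
  }
}
```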
Manual Update (Build-time)
To refresh the fallback snapshot, run the model generation script, which regenerates `lib/ai/models.generated.ts`. Run this periodically to keep the fallback fresh.
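The generated file is a static list of model metadata. Its exact contents depend on the script's output; a plausible shape (the export name and model entries below are illustrative) is:

```ts
// lib/ai/models.generated.ts — illustrative shape; the real file is produced
// by the generation script, and its export name and entries may differ.
export const FALLBACK_MODELS = [
  { id: "openai/gpt-4o", name: "GPT-4o", reasoning: false },
  { id: "anthropic/claude-sonnet-4", name: "Claude Sonnet 4", reasoning: true },
  // ...one entry per model known at build time
] as const;
```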
Model Configuration
All model settings are configured in `chat.config.ts` under the `models` key:
| Setting | Description |
|---|---|
| `providerOrder` | Provider sort order in model selector |
| `disabledModels` | Models hidden from all users |
| `curatedDefaults` | Models enabled by default for new users |
| `anonymousModels` | Models available to anonymous users |
| `defaults.*` | Default model for each task type |
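A sketch of what the `models` block might look like, assuming each setting takes gateway-style model IDs (the IDs, task-type keys, and export shape below are illustrative, not the project's actual config):

```ts
// chat.config.ts — illustrative values; model IDs and task-type keys are assumptions.
export const chatConfig = {
  models: {
    // Provider sort order in the model selector
    providerOrder: ["openai", "anthropic", "google"],
    // Hidden from all users
    disabledModels: ["openai/gpt-3.5-turbo"],
    // Enabled by default for new users
    curatedDefaults: ["openai/gpt-4o", "anthropic/claude-sonnet-4"],
    // Available to anonymous users
    anonymousModels: ["openai/gpt-4o-mini"],
    // Default model per task type
    defaults: {
      chat: "openai/gpt-4o",
      title: "openai/gpt-4o-mini",
      image: "openai/gpt-image-1",
    },
  },
};
```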
Image generation uses a separate `defaults.image` setting. Language models with the `image-generation` tag can also generate images inline. See Image Generation for details.

Reasoning Variants
Models that support extended thinking are automatically split into two variants:

- `{model-id}` (standard mode)
- `{model-id}-reasoning` (extended thinking enabled)
These variants are created by `buildAppModels()` for any model with `reasoning: true`.
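A sketch of how that split could look, assuming a flat list of gateway models with a boolean `reasoning` flag (the type names and the real `buildAppModels()` signature are assumptions):

```ts
// Illustrative sketch; the actual buildAppModels() signature and model shape may differ.
interface GatewayModel {
  id: string;
  name: string;
  reasoning?: boolean;
}

interface AppModel extends GatewayModel {
  reasoningEnabled: boolean;
}

function buildAppModels(models: GatewayModel[]): AppModel[] {
  return models.flatMap((model) => {
    const standard: AppModel = { ...model, reasoningEnabled: false };
    if (!model.reasoning) return [standard];
    // Models with reasoning: true also get a "-reasoning" variant
    // with extended thinking enabled.
    return [
      standard,
      {
        ...model,
        id: `${model.id}-reasoning`,
        name: `${model.name} (Reasoning)`,
        reasoningEnabled: true,
      },
    ];
  });
}
```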
Visibility Pipeline
Model visibility is determined through a pipeline (a sketch of both paths follows these lists):

Authenticated Users
- Remove disabled models (`models.disabledModels`)
- Apply defaults (`models.curatedDefaults` + any new API models not in `models.generated.ts`)
- Apply user overrides from saved preferences
Anonymous Users
- Remove disabled models
- Filter to `models.anonymousModels` only
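A sketch of both paths, assuming model IDs as strings and per-model boolean overrides saved in user preferences (all function and field names below are illustrative):

```ts
// Illustrative sketch of the visibility pipeline; names are assumptions.
interface ModelsConfig {
  disabledModels: string[];
  curatedDefaults: string[];
  anonymousModels: string[];
}

function visibleForAuthenticatedUser(
  allModels: string[],               // IDs from the gateway (or the fallback snapshot)
  snapshotModels: string[],          // IDs present in models.generated.ts
  config: ModelsConfig,
  userOverrides: Record<string, boolean>, // saved per-model on/off toggles
): string[] {
  return allModels
    // 1. Remove disabled models
    .filter((id) => !config.disabledModels.includes(id))
    .filter((id) => {
      // 2. Defaults: curated defaults are on, and so are new API models
      //    that are not yet in the generated snapshot.
      const enabledByDefault =
        config.curatedDefaults.includes(id) || !snapshotModels.includes(id);
      // 3. User overrides from saved preferences win over defaults.
      return userOverrides[id] ?? enabledByDefault;
    });
}

function visibleForAnonymousUser(allModels: string[], config: ModelsConfig): string[] {
  return allModels
    .filter((id) => !config.disabledModels.includes(id))  // remove disabled models
    .filter((id) => config.anonymousModels.includes(id)); // anonymous allow-list only
}
```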
User Settings
Users configure their model preferences at `/settings/models`. This page lets users toggle individual models on or off. A link to AI Registry provides detailed model information.
Environment Variables
The AI Gateway handles authentication with all providers. You do not need individual API keys for OpenAI, Anthropic, Google, etc. Instead, you authenticate with the gateway using one of these:

- `VERCEL_OIDC_TOKEN` (automatically provided on Vercel deployments)
- `AI_GATEWAY_API_KEY` (for non-Vercel deployments, obtain from Vercel AI Gateway)
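For example, a startup check along these lines (the check itself is illustrative, not part of the app) confirms that one of the two credentials is present:

```ts
// Illustrative startup check; the variable names come from the list above.
const hasGatewayAuth =
  Boolean(process.env.VERCEL_OIDC_TOKEN) || Boolean(process.env.AI_GATEWAY_API_KEY);

if (!hasGatewayAuth) {
  throw new Error(
    "Missing AI Gateway credentials: set AI_GATEWAY_API_KEY, or deploy on Vercel to receive VERCEL_OIDC_TOKEN automatically.",
  );
}
```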