---
name: dust-llm
description: Step-by-step guide for adding support for a new LLM in Dust. Use when adding a new model, or updating a previous one.
---
# Adding Support for a New LLM
This skill guides you through adding support for a newly released LLM.
## Quick Reference

### Files to Modify
| File | Purpose |
|---|---|
| `front/types/assistant/models/{provider}.ts` | Model ID + configuration |
| `front/lib/api/assistant/token_pricing.ts` | Pricing per million tokens |
| `front/types/assistant/models/models.ts` | Central registry |
| `front/lib/api/llm/clients/{provider}/types.ts` | Router whitelist |
| `sdks/js/src/types.ts` | SDK types |
| `front/components/providers/types.ts` | UI availability (optional) |
| `front/lib/api/llm/tests/llm.test.ts` | Integration tests |
## Prerequisites

Before adding, gather:

- **Model ID**: Exact provider identifier (e.g., `gpt-4-turbo-2024-04-09`)
- **Context size**: Total context window in tokens
- **Pricing**: Input/output cost per million tokens
- **Capabilities**: Vision, structured output, reasoning effort levels
- **Tokenizer**: Compatible tokenizer for token counting
## Step-by-Step: Adding an OpenAI Model

### Step 1: Add Model Configuration

Edit `front/types/assistant/models/openai.ts`:
```typescript
export const GPT_4_TURBO_2024_04_09_MODEL_ID =
  "gpt-4-turbo-2024-04-09" as const;

export const GPT_4_TURBO_2024_04_09_MODEL_CONFIG: ModelConfigurationType = {
  providerId: "openai",
  modelId: GPT_4_TURBO_2024_04_09_MODEL_ID,
  displayName: "GPT 4 turbo",
  contextSize: 128_000,
  recommendedTopK: 32,
  recommendedExhaustiveTopK: 64,
  largeModel: true,
  description: "OpenAI's GPT 4 Turbo model for complex tasks (128k context).",
  shortDescription: "OpenAI's second best model.",
  isLegacy: false,
  isLatest: false,
  generationTokensCount: 2048,
  supportsVision: true,
  minimumReasoningEffort: "none",
  maximumReasoningEffort: "none",
  defaultReasoningEffort: "none",
  supportsResponseFormat: false,
  tokenizer: { type: "tiktoken", base: "cl100k_base" },
};
```
### Step 2: Add Pricing

Edit `front/lib/api/assistant/token_pricing.ts`:
```typescript
const CURRENT_MODEL_PRICING: Record<BaseModelIdType, PricingEntry> = {
  // ... existing
  "gpt-4-turbo-2024-04-09": {
    input: 10.0, // USD per million input tokens
    output: 30.0, // USD per million output tokens
    cache_read_input_tokens: 1.0, // Optional: cached reads
    cache_creation_input_tokens: 12.5, // Optional: cache creation
  },
};
```
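As a sanity check on units (prices are USD per million tokens), here is a small worked example using the rates above; the request sizes are hypothetical:

```typescript
// Hypothetical cost check using the rates above (USD per million tokens).
const inputCost = (2_000 / 1_000_000) * 10.0; // $0.020 for 2,000 input tokens
const outputCost = (500 / 1_000_000) * 30.0; // $0.015 for 500 output tokens
const totalCost = inputCost + outputCost; // $0.035 total
```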
### Step 3: Register in Central Registry

Edit `front/types/assistant/models/models.ts`:
```typescript
export const MODEL_IDS = [
  // ... existing
  GPT_4_TURBO_2024_04_09_MODEL_ID,
] as const;

export const SUPPORTED_MODEL_CONFIGS: ModelConfigurationType[] = [
  // ... existing
  GPT_4_TURBO_2024_04_09_MODEL_CONFIG,
];
```
### Step 4: Update Router Whitelist

Edit `front/lib/api/llm/clients/openai/types.ts`:
```typescript
export const OPENAI_WHITELISTED_MODEL_IDS = [
  // ... existing
  GPT_4_TURBO_2024_04_09_MODEL_ID,
] as const;
```
### Step 5: Update SDK Types

Edit `sdks/js/src/types.ts`:
```typescript
const ModelLLMIdSchema = FlexibleEnumSchema<
  // ... existing
  | "gpt-4-turbo-2024-04-09"
>();
```
### Step 6: Add to UI (Optional)

Edit `front/components/providers/types.ts`:
```typescript
export const USED_MODEL_CONFIGS: readonly ModelConfig[] = [
  // ... existing
  GPT_4_TURBO_2024_04_09_MODEL_CONFIG,
] as const;
```
### Step 7: Test (Mandatory)

Edit `front/lib/api/llm/tests/llm.test.ts`:
```typescript
const MODELS = {
  // ... existing
  [GPT_4_TURBO_2024_04_09_MODEL_ID]: {
    runTest: true, // Enable for testing
    providerId: "openai",
  },
};
```
Run the test:

```bash
RUN_LLM_TEST=true npx vitest --config lib/api/llm/tests/vite.config.js lib/api/llm/tests/llm.test.ts --run
```
After the test passes, set `runTest: false` to avoid expensive CI runs.
## Adding Anthropic Models

Follow the same pattern with the Anthropic-specific files (see the sketch after this list):

1. `front/types/assistant/models/anthropic.ts` - Add `CLAUDE_X_MODEL_ID` and config
2. `front/lib/api/llm/clients/anthropic/types.ts` - Add to `ANTHROPIC_WHITELISTED_MODEL_IDS`
3. `front/types/assistant/models/models.ts` - Register in central registry
4. `front/lib/api/assistant/token_pricing.ts` - Add pricing
5. `sdks/js/src/types.ts` - Update SDK types
6. Test and validate
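For illustration, a new entry in `front/types/assistant/models/anthropic.ts` would mirror the OpenAI config above. Everything below is a hedged sketch: the model ID, display name, numeric limits, and tokenizer setting are placeholders, so copy the real values from the provider's documentation and from an existing Claude entry in that file.

```typescript
// Hypothetical example; all values are placeholders. Mirror an existing
// Claude entry in anthropic.ts for the real tokenizer and limits.
export const CLAUDE_EXAMPLE_MODEL_ID = "claude-example-20250101" as const;

export const CLAUDE_EXAMPLE_MODEL_CONFIG: ModelConfigurationType = {
  providerId: "anthropic",
  modelId: CLAUDE_EXAMPLE_MODEL_ID,
  displayName: "Claude Example",
  contextSize: 200_000, // placeholder: check the provider's documentation
  recommendedTopK: 32,
  recommendedExhaustiveTopK: 64,
  largeModel: true,
  description: "Anthropic's Claude Example model (200k context).",
  shortDescription: "Anthropic's example model.",
  isLegacy: false,
  isLatest: false,
  generationTokensCount: 2048,
  supportsVision: true,
  minimumReasoningEffort: "none",
  maximumReasoningEffort: "none",
  defaultReasoningEffort: "none",
  supportsResponseFormat: false,
  tokenizer: { type: "tiktoken", base: "cl100k_base" }, // placeholder: copy from an existing entry
};
```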
## Model Configuration Properties

| Property | Description |
|---|---|
| `supportsVision` | Can process images |
| `supportsResponseFormat` | Supports structured output (JSON) |
| `minimumReasoningEffort` | Min reasoning level (`"none"`, `"low"`, `"medium"`, `"high"`) |
| `maximumReasoningEffort` | Max reasoning level |
| `defaultReasoningEffort` | Default reasoning level |
| `tokenizer` | Tokenizer config for token counting |
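For a reasoning-capable model, the three effort properties bracket the levels users can select. A minimal sketch of how they might be set inside a model config (the values are illustrative, not taken from any shipped model):

```typescript
// Illustrative only: effort fields for a hypothetical reasoning model.
// Users can select any effort from the minimum up to the maximum.
const reasoningEffortExample = {
  minimumReasoningEffort: "low",
  maximumReasoningEffort: "high",
  defaultReasoningEffort: "medium",
} as const;
```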
## Validation Checklist

- Model config added to provider file
- Pricing updated (input, output, cache if applicable)
- Registered in central registry (`MODEL_IDS` + `SUPPORTED_MODEL_CONFIGS`)
- Router whitelist updated
- SDK types updated
- UI config added (if needed)
- Integration test passes
- Test disabled after validation
## Troubleshooting

**Model not in UI**: Check `USED_MODEL_CONFIGS` in `front/components/providers/types.ts`.

**API calls failing**: Verify the model ID matches the provider's exact identifier, and check the router whitelist.

**Token counting errors**: Validate the context size and tokenizer configuration.

**Pricing issues**: Ensure prices are in USD per million tokens.
## Reference

- See `front/types/assistant/models/openai.ts` and `anthropic.ts` for examples
- Provider docs: OpenAI, Anthropic, Google, Mistral