How Model Selection Works
When you create an iframe through the camelAI API, you specify which model to use. This model choice is locked for that specific chat conversation, ensuring consistency throughout the user’s interaction. If users access previous conversations, camelAI automatically uses the model that was originally selected for that chat.

Available Models
We currently support five models across two providers.

Model Comparison
| Model | Provider | Best For | Speed | Cost | Intelligence |
|---|---|---|---|---|---|
| GPT-5 (Default) | OpenAI | Complex multi-step workflows, long-context | Medium | Medium | Very High |
| O3 | OpenAI | Complex queries, tool-heavy workflows | Medium | Highest | Very High |
| Sonnet 4.0 | Anthropic | In-depth research, comprehensive analysis | Medium | Highest | Very High |
| O4 Mini | OpenAI | Simple data, cost-conscious deployments | Fast | Low | High |
| 4.1 | OpenAI | Speed-critical applications, straightforward data | Very Fast | Lowest | Medium |
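As a sketch, selecting one of these models when creating an iframe might look like the following. The model identifier strings, field names, and helper function here are illustrative assumptions, not the documented camelAI API; consult the API reference for the actual request shape.

```python
# Hypothetical sketch of choosing a model at iframe-creation time.
# Model identifiers and payload fields below are assumptions for
# illustration only, not the actual camelAI API.

SUPPORTED_MODELS = {"gpt-5", "o3", "sonnet-4.0", "o4-mini", "4.1"}

def build_iframe_request(model: str = "gpt-5") -> dict:
    """Build a JSON body for an iframe-creation call, validating the model.

    The chosen model is locked to the chat once the iframe is created,
    so validate it up front rather than after deployment.
    """
    if model not in SUPPORTED_MODELS:
        raise ValueError(f"Unsupported model: {model!r}")
    return {
        "model": model,
        # Other iframe options (user identity, data source, etc.)
        # would be added here per the real API reference.
    }

payload = build_iframe_request("o4-mini")
```

Validating the model name client-side is optional, but it surfaces typos before a chat is created with the wrong (and then locked-in) model.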
Model Characteristics
GPT-5 (Default)
Our recommended default model, offering state-of-the-art reasoning and reliability.
Strengths:
- Strong long-context understanding
- Excellent multi-turn coherence and instruction following
Considerations:
- Typically slower than lightweight models for simple tasks
Choose GPT-5 when:
- Your data requires complex joins or calculations
- Accuracy is more important than speed
Best Practices
Test Before Production
We strongly recommend experimenting with different models using your actual data before deploying to production. Model performance can vary significantly based on:
- Your data structure complexity
- Typical query patterns
- User expectations
- Performance requirements
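One way to act on this recommendation is a small comparison harness that runs the same representative queries against each candidate model and records latency alongside the answers. Everything below is a hedged sketch: `run_query` is a placeholder for whatever call your integration actually makes, and the model names and sample queries are illustrative.

```python
import time

# Hypothetical pre-production harness: run identical queries against
# each candidate model and collect timing plus answers for review.
# `run_query` is a stand-in; replace it with your real camelAI call.

CANDIDATE_MODELS = ["gpt-5", "o3", "o4-mini", "4.1"]
SAMPLE_QUERIES = [
    "Total revenue by month for the last year",
    "Top 10 customers by order count",
]

def run_query(model: str, query: str) -> str:
    # Placeholder implementation so the harness runs standalone.
    return f"[{model}] answer to: {query}"

def compare_models(models: list[str], queries: list[str]) -> dict:
    """Return per-model wall-clock latency and raw answers."""
    results = {}
    for model in models:
        start = time.perf_counter()
        answers = [run_query(model, q) for q in queries]
        results[model] = {
            "latency_s": time.perf_counter() - start,
            "answers": answers,
        }
    return results

report = compare_models(CANDIDATE_MODELS, SAMPLE_QUERIES)
```

Reviewing the collected answers by hand (or against known-good results) is what tells you whether a cheaper model like 4.1 holds up on your actual query patterns.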
Updates and Support
We continuously evaluate and update our model offerings to provide the best possible experience. This guide will be updated as new models become available or existing models are improved.

Need a specific model?
If you don’t see a model that meets your requirements, we’d love to hear from you. Contact us to request additional model support.