Model Settings
Configure temperature, tokens, and other model settings
Fine-tune how AI models behave by adjusting their settings. These options let you control creativity, response length, and other aspects of model behavior.
Understanding Model Settings
Temperature
What it does: Controls randomness and creativity.
| Setting | Behavior |
|---|---|
| 0.0 | Very focused, deterministic, same answer each time |
| 0.5 | Balanced between creativity and consistency |
| 0.7 | Default - good balance for most tasks |
| 1.0 | More creative, varied responses |
| 1.5+ | Highly creative, potentially unpredictable |
When to adjust:
- Lower for: Factual questions, coding, analysis
- Higher for: Creative writing, brainstorming, variety
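If you call a model directly through an OpenAI-compatible API rather than the in-app settings panel, temperature is simply a request parameter. A minimal sketch, assuming the OpenAI Python SDK; the model name and prompt are placeholders:

```python
# Sketch: comparing a low- and high-temperature request through an
# OpenAI-compatible chat API. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Suggest a name for a note-taking app."

for temp in (0.0, 1.0):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
        temperature=temp,
    )
    print(f"temperature={temp}: {response.choices[0].message.content}")
```

At 0.0 the two runs tend to return nearly identical answers; at 1.0 you will usually see more variation between runs.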
Max Tokens
What it does: Sets the maximum length of responses.
| Setting | Result |
|---|---|
| 256 | Very short responses |
| 1024 | Short to medium responses |
| 4096 | Standard length (default) |
| 8192+ | Long, detailed responses |
When to adjust:
- Lower for: Quick answers, concise responses
- Higher for: Detailed explanations, long-form content
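In an OpenAI-compatible API request the same limit is the `max_tokens` parameter, and the response's `finish_reason` tells you whether the answer was cut off by it. A minimal sketch with a placeholder model and prompt:

```python
# Sketch: capping response length and detecting truncation through an
# OpenAI-compatible chat API. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Explain what an API gateway does."}],
    max_tokens=256,       # very short responses, per the table above
)

choice = response.choices[0]
print(choice.message.content)
if choice.finish_reason == "length":
    print("Response hit the max-token limit and was cut off.")
```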
Top P (Nucleus Sampling)
What it does: Another way to control randomness. It limits word choices to the smallest set of most likely options whose combined probability reaches the top_p value.
| Setting | Behavior |
|---|---|
| 0.1 | Very restricted, conservative choices |
| 0.5 | Moderate variety |
| 0.9 | Default - good variety while staying sensible |
| 1.0 | Consider all possible words |
Tip: Usually, you only need to adjust either temperature OR top_p, not both. Temperature is more intuitive for most users.
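To make the idea concrete, here is a toy, self-contained sketch of nucleus sampling over a made-up word distribution (real models sample over tens of thousands of tokens, but the mechanism is the same):

```python
import random

# Toy next-word distribution (illustrative numbers only).
probs = {"the": 0.45, "a": 0.25, "its": 0.15, "this": 0.10, "such": 0.05}

def nucleus_sample(probs, top_p):
    """Keep the smallest set of words whose cumulative probability
    reaches top_p, then sample from that set (renormalized)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, total = [], 0.0
    for word, p in ranked:
        nucleus.append((word, p))
        total += p
        if total >= top_p:
            break
    words, weights = zip(*nucleus)
    return random.choices(words, weights=weights, k=1)[0]

print(nucleus_sample(probs, top_p=0.5))  # samples only from {"the", "a"}
print(nucleus_sample(probs, top_p=1.0))  # may pick any word
```

Lower top_p shrinks the pool of candidate words; top_p = 1.0 leaves every word in play.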
Frequency Penalty
What it does: Reduces repetition by penalizing words in proportion to how often they have already appeared in the response.
| Setting | Behavior |
|---|---|
| 0.0 | No penalty (default) |
| 0.5 | Light penalty - reduces obvious repetition |
| 1.0 | Moderate penalty - more word variety |
| 2.0 | Strong penalty - avoids repetition heavily |
When to adjust:
- Increase if responses are repetitive
- Keep low for technical content where repetition may be necessary
Presence Penalty
What it does: Encourages the model to talk about new topics by applying a one-time penalty to any word that has already appeared, regardless of how often.
| Setting | Behavior |
|---|---|
| 0.0 | No penalty (default) |
| 0.5 | Light encouragement for new topics |
| 1.0 | Moderate push for variety |
| 2.0 | Strong push for new topics |
When to adjust:
- Increase for brainstorming or exploring many ideas
- Keep low for focused, single-topic discussions
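Both penalties work by adjusting the model's token scores before sampling. Here is a simplified sketch of one common formulation (the scores and word history are illustrative, not real model output): the frequency penalty scales with how many times a word has already appeared, while the presence penalty is a flat deduction for any word that has appeared at all.

```python
from collections import Counter

def apply_penalties(logits, generated_tokens,
                    frequency_penalty=0.0, presence_penalty=0.0):
    """Return adjusted token scores: the frequency penalty scales with the
    count of prior appearances; the presence penalty is a flat deduction
    for any token that has appeared at least once."""
    counts = Counter(generated_tokens)
    adjusted = {}
    for token, score in logits.items():
        count = counts.get(token, 0)
        adjusted[token] = (
            score
            - frequency_penalty * count
            - presence_penalty * (1 if count > 0 else 0)
        )
    return adjusted

# Toy scores for candidate next words after the model has already said
# "great" twice and "useful" once.
logits = {"great": 2.0, "useful": 1.8, "novel": 1.5}
history = ["great", "useful", "great"]

print(apply_penalties(logits, history, frequency_penalty=1.0, presence_penalty=0.5))
# "great" drops by 2.5, "useful" by 1.5, "novel" is untouched.
```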
Accessing Model Settings
Open the Model Menu
Click the model selector in the chat input area.
Click Settings
Select "Model Settings" or the gear icon.
Adjust Parameters
Modify temperature, max tokens, or other settings.
Save or Apply
Changes apply to your current conversation or can be saved as defaults.
Preset Configurations
Instead of adjusting settings manually, use presets:
| Preset for | Temperature | Top P | Frequency Penalty | Result |
|---|---|---|---|---|
| Coding, facts, analysis | 0.2 | 0.9 | 0.0 | Focused, consistent, accurate responses |
| General tasks (default) | 0.7 | 0.9 | 0.0 | Good for most everyday tasks |
| Writing, brainstorming | 1.0 | 0.95 | 0.5 | More varied, creative responses |
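If you script against an OpenAI-compatible API instead of using the in-app presets, the same three configurations can be kept as plain parameter dictionaries. A minimal sketch; the preset names and the model are placeholders, not names used by the app:

```python
# Sketch: the presets above expressed as request parameters for an
# OpenAI-compatible API. Preset names and the model are placeholders.
from openai import OpenAI

PRESETS = {
    "precise":  {"temperature": 0.2, "top_p": 0.9,  "frequency_penalty": 0.0},
    "balanced": {"temperature": 0.7, "top_p": 0.9,  "frequency_penalty": 0.0},
    "creative": {"temperature": 1.0, "top_p": 0.95, "frequency_penalty": 0.5},
}

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Brainstorm five blog post titles about tea."}],
    **PRESETS["creative"],
)
print(response.choices[0].message.content)
```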
Practical Examples
For Coding
Settings:
- Temperature: 0.2
- Max Tokens: 4096
- Frequency Penalty: 0.0

Low temperature ensures consistent, correct code. Standard length allows for complete implementations.
For Creative Writing
Settings:
- Temperature: 0.9
- Max Tokens: 8192
- Frequency Penalty: 0.3
- Presence Penalty: 0.3

Higher temperature for creativity. Penalties reduce repetitive phrases and encourage variety.
For Quick Answers
Settings:
- Temperature: 0.5
- Max Tokens: 512
- Top P: 0.9

Lower max tokens encourages concise responses. Moderate temperature for reliability.
For Brainstorming
Settings:
- Temperature: 1.2
- Max Tokens: 4096
- Presence Penalty: 0.8

High temperature and presence penalty encourage diverse, wide-ranging ideas.
Default Model Settings
Set your preferred defaults:
Go to Settings
Open Settings → Models.
Find Default Settings
Scroll to "Default Model Settings."
Configure Defaults
Set your preferred temperature, max tokens, etc.
Save
These settings apply to all new conversations.
Managing Your Models
Set Default Model
Choose which model new chats use:
- Go to Settings → Models
- Select "Default Model"
- Choose your preferred model
Favorite Models
Star frequently-used models:
- Open the model selector
- Click the star next to models you use often
- Starred models appear at the top
Hide Models
Remove models you never use:
- Go to Settings → Models
- Find "Disabled Models"
- Toggle off models to hide them
Model-Specific Settings
Some settings only apply to certain models:
| Model Type | Unique Settings |
|---|---|
| Reasoning (o3, etc.) | Reasoning effort level |
| Image generation | Image size, style, quality |
| Code models | Language preferences |
These appear automatically when relevant models are selected.
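For example, when calling a reasoning model directly through an OpenAI-style API, the effort level is its own request parameter. A minimal sketch; the model name is a placeholder and the exact parameter name varies by provider:

```python
# Sketch: setting reasoning effort on a reasoning model through an
# OpenAI-style API. Model name and parameter support vary by provider.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="o3-mini",          # placeholder reasoning model
    reasoning_effort="high",  # "low", "medium", or "high"
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
)
print(response.choices[0].message.content)
```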
Next Steps
- Learn prompting techniques for better results
- Explore the model comparison guide
- Understand credits and pricing