Chat Settings
The chat settings affect only the current chat session and override the default chat settings.
Select the model to be used for the current chat session.
This defines the maximum size of the context, in tokens, that the model will use. Since different models calculate tokens differently, this is an approximation. If the field is empty, the context limit is set to the maximum possible value for the model.
Keep in mind: Your new, unsent messages and unsent attached files are not counted toward the context window limit. Adjust the limit accordingly when attaching large files.
Note: The context window limit does not account for system parameters required for the app to function. These typically add no more than 1000 tokens per request, so it is advisable to set the limit about 1000 tokens above your actual needs.
Hint: If your files were not included in the context, simply set the context window limit to a larger value and submit an empty message.
Important: Be cautious when using an empty context limit with long chats or large files, as this can quickly lead to high costs.
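The sizing advice above can be sketched in a few lines. This is not the app's internal logic, just an illustration: the characters-divided-by-four heuristic is a common rough approximation for English text, and actual token counts vary by model and tokenizer.

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token (approximation only)."""
    return max(1, len(text) // 4)

def recommended_context_limit(chat_text: str, attachments: list,
                              system_overhead: int = 1000) -> int:
    """Estimated tokens for the chat plus attachments, plus the ~1000-token
    buffer suggested above for system parameters."""
    total = estimate_tokens(chat_text)
    total += sum(estimate_tokens(a) for a in attachments)
    return total + system_overhead

# Example: a short chat with one large attached file.
limit = recommended_context_limit("Hello, model!", ["file contents " * 500])
```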
This defines the model's maximum response length in tokens.
This setting varies by model but generally influences the predictability or creativity of the model's output. Switching between models resets this value.
Available on certain models only. This setting controls the repetition of specific phrases or words in the generated text. Switching between models resets this value.
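For orientation, here is a hypothetical request payload showing how per-chat settings like these are typically passed to a model API. The field names are illustrative, not this app's actual API; names and value ranges vary by provider and model.

```python
# Hypothetical payload (field names are assumptions, not this app's API).
payload = {
    "model": "example-model",
    "temperature": 0.7,        # lower -> more predictable, higher -> more creative
    "frequency_penalty": 0.5,  # discourages repeated words/phrases (model-dependent)
    "max_tokens": 1024,        # maximum response length in tokens
}
```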
Click Change to open the context memory panel. To learn more, refer to the Memorize tool.
System prompts provide instructions and information to the model regarding the nature of the chat.
This is the place to set a persona for the model, if needed.
Available only for models that support tools.
Toggle this setting to enable or disable the model's use of tools.
Some tools may spawn an additional assistant to optimize task execution. This setting specifies which model to use for the assistant.
This sets the limit for knowledge retrieval in tokens, affecting the Search in Files and Extract Web Content tools.
Important: Since the retrieved knowledge is injected into the model’s context, it is crucial that the model’s Context Window Limit is greater than the Knowledge Window Limit.
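The constraint above can be expressed as a simple check. This is a sketch with hypothetical parameter names, not the app's own validation code:

```python
def validate_limits(context_window_limit: int, knowledge_window_limit: int) -> None:
    """Ensure retrieved knowledge can fit inside the model's context."""
    if knowledge_window_limit >= context_window_limit:
        raise ValueError(
            "Knowledge Window Limit must be smaller than the Context Window "
            "Limit, since retrieved knowledge is injected into the context."
        )

validate_limits(8000, 4000)  # a valid configuration passes silently
```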
By default, all tools are active for the model to use. However, you can disable specific tools as needed.
You can manually set and delete the Active Directory path. Refer to the Set Active Directory tool for more information.
The $PATH is utilized by the Run Command tool.
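As an illustration of what $PATH is for: a Run Command-style tool resolves an executable name by searching the directories listed in $PATH. The sketch below uses Python's standard `shutil.which` to show that lookup; it is not the app's implementation.

```python
import os
import shutil
from typing import Optional

def resolve_command(name: str) -> Optional[str]:
    """Resolve a command name to a full executable path by searching $PATH.

    Returns None if no matching executable is found.
    """
    return shutil.which(name, path=os.environ.get("PATH"))
```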
Available only for models that support tools.
Toggle this setting to allow or disallow the model's ability to generate images.
Select the model to be used for image generation.
This depends on the model. Choose the desired resolution for the generated images.
This depends on the model:
Standard: Default setting.
HD: Allocates more time for image generation, resulting in higher quality images but also increased latency and price.
This depends on the model:
Vivid: Encourages the model to generate hyper-real and dramatic images.
Natural: Promotes a more realistic, less hyper-real appearance in the generated images.
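The image settings above map naturally onto a request payload. The field names below are assumptions for illustration; the actual names and supported values depend on the selected image model and provider.

```python
# Hypothetical image-generation request (field names are assumptions).
image_request = {
    "model": "example-image-model",
    "size": "1024x1024",  # resolution; available options depend on the model
    "quality": "hd",      # "standard" (default) or "hd" (slower, costlier, higher quality)
    "style": "vivid",     # "vivid" (hyper-real, dramatic) or "natural" (more realistic)
}
```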