Chat Settings

The chat settings affect only the current chat session and override the default chat settings.

Chat

Model

Select the model to be used for the current chat session.

Context Window Limit

This defines the maximum size of the context, in tokens, that the model will use. Since different models calculate tokens differently, this is an approximation. If the field is empty, the context limit is set to the maximum possible value for the model.

Keep in mind: Your new, unsent messages and unsent attached files are not counted toward the context window limit. Adjust the limit accordingly when attaching large files.

Note: The context window limit does not account for the system parameters the app needs to function. These typically add no more than an extra 1000 tokens per request, so it is advisable to set the limit at least 1000 tokens higher than you actually need.

Hint: If your files were not included in the context, simply set the context window limit to a larger value and submit an empty message.

Important: Be cautious when using an empty context limit with long chats or large files, as this can quickly lead to high costs.
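For illustration only, here is a minimal sketch of how a context window limit might be applied, assuming tokens are approximated with the tiktoken cl100k_base encoding and the oldest messages are dropped first. The helper names are hypothetical and not part of the app:

```python
# Sketch: apply a context window limit by counting approximate tokens and
# keeping only the most recent messages that fit. Token counts are an
# approximation, since each model tokenizes text differently.
import tiktoken

ENCODING = tiktoken.get_encoding("cl100k_base")  # assumption: approximate tokenizer

def count_tokens(text: str) -> int:
    return len(ENCODING.encode(text))

def trim_to_context_limit(messages: list[dict], limit: int) -> list[dict]:
    """Keep the most recent messages whose combined token count fits the limit."""
    kept, used = [], 0
    for message in reversed(messages):      # walk from newest to oldest
        tokens = count_tokens(message["content"])
        if used + tokens > limit:
            break
        kept.append(message)
        used += tokens
    return list(reversed(kept))             # restore chronological order
```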

Output Limit

This defines the model's maximum response length in tokens.

Temperature

The exact behavior varies by model, but in general lower values make the output more predictable and higher values make it more creative. Switching between models resets this value.

Presence Penalty

Available on certain models only. Higher values discourage the model from repeating words or phrases that have already appeared in the generated text. Switching between models resets this value.
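For illustration, the Output Limit, Temperature, and Presence Penalty settings roughly correspond to per-request parameters of an OpenAI-compatible chat completions API. The mapping below is an assumption about how such settings are typically wired, and the values shown are only examples:

```python
# Sketch: how the chat settings could map to request parameters,
# assuming an OpenAI-compatible chat completions API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",        # Model setting (example value)
    max_tokens=1024,       # Output Limit: maximum response length in tokens
    temperature=0.7,       # Temperature: lower = more predictable, higher = more creative
    presence_penalty=0.5,  # Presence Penalty: discourages repeating words already used
    messages=[{"role": "user", "content": "Summarize the attached notes."}],
)
print(response.choices[0].message.content)
```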

Context Memory

Click Change to open the context memory panel. To learn more, refer to the Memorize tool.

System Prompt

System prompts provide instructions and information to the model regarding the nature of the chat.

This is the place to set a persona for the model, if needed.
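For example, a persona can be expressed as a short set of instructions. The snippet below is only illustrative and assumes the system prompt is sent as the first message of the conversation:

```python
# Illustrative persona set via the system prompt; the exact wiring inside
# the app is an assumption, but system prompts are conventionally sent as
# the first message of the conversation.
messages = [
    {
        "role": "system",
        "content": (
            "You are a senior technical editor. Answer concisely, "
            "prefer plain language, and ask for clarification when a request is ambiguous."
        ),
    },
    {"role": "user", "content": "Review this paragraph for passive voice."},
]
```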

Tools

Available only for models that support tools.

Enabled

Toggle this setting to enable or disable the model's use of tools.

Assistant Model

Some tools may spawn an additional assistant to optimize task execution. This setting specifies which model to use for the assistant.

Knowledge Window Limit

This sets the limit for knowledge retrieval in tokens, affecting the Search in Files and Extract Web Content tools.

Important: Since the retrieved knowledge is injected into the model’s context, it is crucial that the model’s Context Window Limit is greater than the Knowledge Window Limit.
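For illustration, a minimal sketch of this constraint follows; the function name and the 1000-token margin are assumptions, not the app's actual behavior:

```python
# Sketch: retrieved knowledge is injected into the model's context, so the
# knowledge window must fit inside the context window with room left over
# for the conversation itself.
def validate_limits(context_window_limit: int, knowledge_window_limit: int) -> None:
    margin = 1000  # rough allowance for system parameters and the conversation (assumption)
    if knowledge_window_limit + margin > context_window_limit:
        raise ValueError(
            "Context Window Limit should exceed Knowledge Window Limit "
            f"by at least {margin} tokens "
            f"(got context={context_window_limit}, knowledge={knowledge_window_limit})."
        )
```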

Active Tools

By default, all tools are active for the model to use. However, you can disable specific tools as needed.

Active Directory

You can manually set and delete the Active Directory path. Refer to the Set Active Directory tool for more information.

$PATH

The $PATH is utilized by the Run Command tool.
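On Unix-like systems, the value is a colon-separated list of directories searched for executables. The directories below are only an example, not a recommended value:

```
/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin
```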

Image

Available only for models that support tools.

Enabled

Toggle this setting to enable or disable image generation for the model.

Model

Select the model to be used for image generation.

Image Size

The available resolutions depend on the model. Choose the desired resolution for the generated images.

Quality

This depends on the model:

  • Standard: Default setting.

  • HD: Allocates more time for image generation, resulting in higher quality images but also increased latency and price.

Style

This depends on the model:

  • Vivid: Encourages the model to generate hyper-real and dramatic images.

  • Natural: Promotes a more realistic, less hyper-real appearance in the generated images.
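For illustration, these image options roughly correspond to the parameters of an OpenAI-compatible image generation API (the quality and style values shown match DALL·E 3). The mapping is an assumption about typical wiring, and the values are only examples:

```python
# Sketch: an image generation request using the settings described above,
# assuming an OpenAI-compatible images API.
from openai import OpenAI

client = OpenAI()

image = client.images.generate(
    model="dall-e-3",   # Model setting for image generation (example value)
    prompt="A lighthouse on a cliff at dusk, painted in watercolor",
    size="1024x1024",   # Image Size: allowed resolutions depend on the model
    quality="hd",       # Quality: "standard" (default) or "hd"
    style="natural",    # Style: "vivid" or "natural"
)
print(image.data[0].url)
```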
