Assistant Prompts
This widget configures how TellusR assistants process user queries and generate responses. It lets you fine-tune search behavior, response merging, and assistant setup to optimize interaction with your data.
TellusR supports two types of assistants:
- Standard Assistants – Enable direct interaction with a search project, allowing users to chat with content from a specific dataset.
- Stitch Assistants – Combine responses from multiple assistants into a single reply, either by stitching them together as separate answers or by merging them into a unified response using an LLM.
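The two assistant types can be pictured as configuration records. This is an illustrative sketch only: the field names below mirror the settings described on this page but are not the actual TellusR API schema.

```python
# Hypothetical configuration records for the two assistant types.
# Field names are illustrative, not the real TellusR schema.

standard_assistant = {
    "type": "standard",
    "searchProject": "product-docs",   # dataset the assistant chats with
    "topN": 5,                         # top-matching chunks used for the reply
    "semanticWeight": 0.7,             # 0 = keyword-based, 1 = semantic
    "highlightWindow": 1,              # neighboring chunks added for context
    "queryParsingMode": "normal",
}

stitch_assistant = {
    "type": "stitch",
    "stitchProfiles": ["docs-bot", "faq-bot"],  # assistants to combine
    "llmMerge": True,                           # merge replies into one via LLM
    "mergeStitchReplyPrompt": "Merge the answers and keep references intact.",
}
```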
Assistant Configuration
New assistants can be created by clicking the "+" sign next to the assistant selection menu.
When creating an assistant, you can choose between a Standard or Stitch assistant, depending on your needs.
Standard Assistant Settings
- Search Project: Selects the dataset the assistant will use for search-based chat.
- Top N: Specifies the number of top-matching chunks used for generating responses (excluding neighboring chunks).
- Semantic Weight: Controls the balance between semantic and keyword-based search (0 = keyword-based, 1 = semantic).
- Highlight Window: Determines how many neighboring chunks are included for additional context in responses.
- Query Parsing Mode:
  - Normal: Generates a search query suggestion from the user input.
  - Stable (Blend Three): Asks the LLM three times for query suggestions and blends the results.
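A simplified model can clarify how Semantic Weight and Highlight Window behave. TellusR's actual ranking is internal; the function names and scoring formula below are hypothetical illustrations of the settings above.

```python
# Illustrative sketch only: hybrid_score and expand_with_neighbors are
# hypothetical names, not part of the TellusR API.

def hybrid_score(keyword_score: float, semantic_score: float,
                 semantic_weight: float) -> float:
    """Blend keyword and semantic relevance; 0 = pure keyword, 1 = pure semantic."""
    return (1 - semantic_weight) * keyword_score + semantic_weight * semantic_score

def expand_with_neighbors(chunk_index: int, highlight_window: int,
                          total_chunks: int) -> list:
    """Include `highlight_window` neighboring chunks on each side for context."""
    start = max(0, chunk_index - highlight_window)
    end = min(total_chunks - 1, chunk_index + highlight_window)
    return list(range(start, end + 1))

# With semantic_weight = 0.5 the two signals contribute equally.
score = hybrid_score(keyword_score=0.8, semantic_score=0.4, semantic_weight=0.5)

# A top-matching chunk at index 10 with highlight_window = 1 also pulls in
# chunks 9 and 11 as extra context.
context = expand_with_neighbors(10, 1, total_chunks=100)  # [9, 10, 11]
```

Under this model, Top N limits how many top-matching chunks are scored in, while Highlight Window then widens each of them with neighbors.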
Stitch Assistant Settings
- Stitch Profiles: Defines which assistants are included in the stitch assistant.
- Merge Stitch Reply Prompt: When LLM Merge = true, this prompt guides how the LLM merges responses while keeping references intact.
- Enable LLM Merge of Stitch Reply:
  - False → Replies from multiple assistants are combined as separate answers.
  - True → An LLM merges responses into a single, cohesive reply.
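The two stitch modes can be sketched as follows. This is a hypothetical illustration: the actual merging happens server-side in TellusR, and the placeholder string concatenation below stands in for the real LLM call guided by the Merge Stitch Reply Prompt.

```python
# Hypothetical sketch of the two stitch modes; stitch_replies is not a real
# TellusR function, and the LLM step is simulated with a placeholder.

def stitch_replies(replies: list, llm_merge: bool,
                   merge_prompt: str = "Merge the answers, keep references.") -> str:
    """Combine replies from several assistants into one response."""
    if not llm_merge:
        # LLM Merge = false: keep each assistant's answer as a separate section.
        return "\n\n".join(f"[{r['assistant']}] {r['answer']}" for r in replies)
    # LLM Merge = true: an LLM would merge the answers into one cohesive reply,
    # guided by the merge prompt. Simulated here by joining the inputs.
    merged_input = "\n".join(r["answer"] for r in replies)
    return f"{merge_prompt}\n---\n{merged_input}"

replies = [
    {"assistant": "docs-bot", "answer": "See the install guide [1]."},
    {"assistant": "faq-bot", "answer": "Restart the service after upgrading [2]."},
]
separate = stitch_replies(replies, llm_merge=False)  # two labeled answers
merged = stitch_replies(replies, llm_merge=True)     # one combined reply
```

Note that in both modes the references ([1], [2]) from the individual assistants are preserved in the combined output.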