TellusR can serve as the retrieval component in a Retrieval Augmented Generation (RAG) setup, letting you combine its advanced retrieval capabilities with cutting-edge generative models.

TellusR provides a predefined search pipeline, tellusrRag, designed to supply relevant context to an LLM. Instead of returning the top-ranked documents themselves, it delivers the most fitting text excerpts for the user's query. Retrieval parameters, such as the number of top documents to retrieve, the number of relevant text chunks per document, and the size of each chunk, can be customized for tailored LLM-context retrieval.
Given a SERVER_URL and a PROJECT, the following request returns the top chunks for a query, controlled by the parameters described below.

curl -X GET "$SERVER_URL/tellusr/api/v1/$PROJECT/compute/tellusrRag?q=mathematics&topDocsN=4&subResultsN=3&highlightWindow=3&rag.simplify=true"
  • q: the query
  • topDocsN: the number of top documents to consider.
  • subResultsN: the maximum number of chunk hits to consider per top document. (The search may return fewer than this number if a document does not contain enough matching chunks.)
  • highlightWindow: for each search hit on a chunk, expand the chunk by also returning surrounding chunks.
  • rag.simplify: if set to true, connected chunks are concatenated and grouped as continuous text in the response. If set to false (the default), each chunk carries detailed metadata about its origin, such as page number, which may be needed to provide references as links.
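The same request can be issued from code. The sketch below builds the request URL with Python's standard library; the SERVER_URL and PROJECT values are placeholders you would replace with your own, and percent-encoding via urlencode keeps queries with spaces or special characters safe.

```python
from urllib.parse import urlencode

# Placeholder values; substitute your own TellusR server URL and project name.
SERVER_URL = "http://localhost:8989"
PROJECT = "my_project"

params = {
    "q": "mathematics",      # the query
    "topDocsN": 4,           # number of top documents to consider
    "subResultsN": 3,        # max chunk hits per top document
    "highlightWindow": 3,    # surrounding chunks returned per hit
    "rag.simplify": "true",  # concatenate connected chunks into continuous text
}

# urlencode percent-encodes the values, so arbitrary query strings
# (e.g. multi-word queries) are transmitted safely.
url = f"{SERVER_URL}/tellusr/api/v1/{PROJECT}/compute/tellusrRag?{urlencode(params)}"
print(url)
```

From here, the URL can be fetched with any HTTP client (for example urllib.request.urlopen or the curl command shown above) and the returned chunks passed to the LLM as context.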