This page describes all the settings for the Jarvis v2 query engine.
Whether this query engine is enabled.
The modules to be used in the query engine.
Must contain a minimum of 1 item
Settings for our standard IR-based module.
Configure how we retrieve, score, and rank answers.
Configure how we rerank answers.
Rerankers are executed one after the other.
No Additional Items
Minimum cutoff reranker function. All resulting candidates will have scores greater than or equal to the cutoff.
Conditions to check on documents before we apply the reranker. Documents that fail the conditions will be passed through to the next reranker.
No Additional Items
A condition to check before applying a reranker.
The comparison operator.
The value to compare against.
The key in the score vector to use for the comparison. When the key doesn't exist or is None, we return false.
Must be at least 1 character long
The score value to cut at. If specified as a percentage, the cutoff is calculated as a percentage of the minimum/maximum score.
^\d+(\.\d+)?\%$
If all documents are discarded, keep the original candidates.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
The key in the score vector to use for the cutoff.
Must be at least 1 character long
"min-cutoff-reranker"
Whether to log verbose debugging information.
Maximum cutoff reranker function. All resulting candidates will have scores less than or equal to the cutoff.
Conditions to check on documents before we apply the reranker. Documents that fail the conditions will be passed through to the next reranker.
No Additional Items
The score value to cut at. If specified as a percentage, the cutoff is calculated as a percentage of the minimum/maximum score.
^\d+(\.\d+)?\%$
If all documents are discarded, keep the original candidates.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
The key in the score vector to use for the cutoff.
Must be at least 1 character long
"max-cutoff-reranker"
Whether to log verbose debugging information.
Reranker that keeps the top K candidates based on distinct score values and discards the rest.
Keep the top K candidates.
Value must be strictly greater than 0
Whether to keep candidates whose scores tie for the top K. Defaults to true.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
The key in the score vector to use for the cutoff.
Must be at least 1 character long
"top-k-reranker"
Whether to log verbose debugging information.
A pseudo-reranker that calls the load method on all candidate documents.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"load-full-document-reranker"
Whether to log verbose debugging information.
Reorder the candidates based on scoring keys.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
The scoring fields to order the candidates by. Prefix with a +/- for ascending/descending order. You can also access sv, doc, and source fields using JMESPath syntax, e.g., sv.missed_tokens_count, doc.name, source.phone, etc.
Must contain a minimum of 1 item
Must be at least 1 character long
"order-by-reranker"
Whether to log verbose debugging information.
Reranker that keeps the first N candidates and discards the rest.
Keep the first N candidates.
Value must be strictly greater than 0
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"first-n-reranker"
Whether to log verbose debugging information.
Reranker that keeps the last N candidates and discards the rest.
Keep the last N candidates.
Value must be strictly greater than 0
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"last-n-reranker"
Whether to log verbose debugging information.
Reranker that reverses the order of the candidates.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"reverse-reranker"
Whether to log verbose debugging information.
Reranker that keeps entries containing matching answer fields.
The field in the document to use as the answer.
Must contain a minimum of 1 item
Answer field that we expect to be matched in the answer. Answer fields are specified in <answer_type>.<answer_field> format.
^[\w\-]+\.[\w\_]+$
"intent"
If all documents are discarded, keep the original candidates.
Whether to match all or any of the answer fields.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"answer-field-reranker"
Whether to log verbose debugging information.
Reranker that shortlists or discards entries based on a condition.
The field that decides whether to discard or keep the entries.
The fields in the document to check for the values.
No Additional Items
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"discard-or-keep-information-reranker"
The field based on which discard or keep is decided.
Whether to log verbose debugging information.
Whether to log verbose debugging information.
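As a hypothetical illustration, a reranker pipeline might be configured like this. The YAML keys are inferred from the field descriptions above and may not match the schema's actual property names:

```yaml
rerankers:
  - name: min-cutoff-reranker
    key: phrase_score          # score-vector key to cut on
    cutoff: "80%"              # percentage of the maximum score
    keep_original_if_empty: true
    conditions:                # documents failing this pass through unchanged
      - key: matched_tokens_count
        op: ">="
        value: 2
  - name: top-k-reranker
    key: phrase_score
    k: 3                       # keep top 3 distinct score values
    keep_ties: true
  - name: order-by-reranker
    order_by:
      - "-sv.phrase_score"         # descending score-vector field
      - "+sv.missed_tokens_count"  # ascending tiebreak via JMESPath
      - "+doc.name"
```

Since rerankers run one after the other, order matters: here the cutoff prunes first, top-K trims next, and the final ordering is applied last.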
Configure how we retrieve answers.
Retrievers are executed concurrently and their results are concatenated afterwards.
Must contain a minimum of 1 item
Settings for retrieving answer phrases.
The name of the retriever function. This is used to identify the retriever functions used.
The size parameter to use when retrieving documents from the KB. Note that scan is not used.
Value must be greater than or equal to 0
Configure how we perform fuzzy matching when retrieving phrases.
Settings for tiered fuzzy matching.
Whether to allow transpositions in fuzzy matching. See https://opensearch.org/docs/latest/query-dsl/full-text/match/#transpositions
A tuple [a, b] where string lengths in [0, a] do not allow fuzzy matching, lengths in (a, b] allow 1 Levenshtein distance, and lengths in (b, ∞) allow 2 Levenshtein distances.
Must contain a minimum of 2 items
Must contain a maximum of 2 items
Value must be greater than or equal to 0
Value must be strictly greater than 0
The length of the prefix to use for fuzzy matching. See https://opensearch.org/docs/latest/query-dsl/full-text/match/#prefix-length
Value must be greater than or equal to 0
A tuple [a, b, c] where a is the score for 2 Levenshtein distances, b is the score for 1 Levenshtein distance, and c is the score for exact matches.
Must contain a minimum of 3 items
Must contain a maximum of 3 items
"answer-phrase-retriever"
Whether to log verbose debugging information.
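A hypothetical retriever configuration with tiered fuzzy matching could look like the following; the key names are illustrative, not the schema's actual property names:

```yaml
- type: answer-phrase-retriever
  size: 100                  # OpenSearch `size` parameter; `scan` is not used
  fuzzy_match_settings:
    transpositions: true
    length_tiers: [3, 6]     # [0,3]: exact only; (3,6]: distance 1; (6,∞): distance 2
    prefix_length: 1         # first character must match exactly
    scores: [0.6, 0.8, 1.0]  # distance 2, distance 1, exact match
```

The tiering keeps short strings strict (where a single edit changes the word entirely) while tolerating typos in longer strings.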
Settings for retrieving intent phrases.
The name of the retriever function. This is used to identify the retriever functions used.
The size parameter to use when retrieving documents from the KB. Note that scan is not used.
Value must be greater than or equal to 0
Configure how we perform fuzzy matching when retrieving phrases.
Settings for tiered fuzzy matching.
Same definition as TieredFuzzyMatchSettings
"intent-phrase-retriever"
Whether to log verbose debugging information.
Settings for retrieving answers.
The maximum number of answer phrase clauses to use in a query. When there are more clauses, multiple queries will be executed concurrently.
Value must be greater than or equal to 0
The name of the retriever function. This is used to identify the retriever functions used.
The size parameter to use when retrieving documents from the KB. Note that scan is not used.
Value must be greater than or equal to 0
"answer-retriever"
Whether to log verbose debugging information.
The weights to use for each field in the retrieval function. By default, each field is weighted 1.
No Additional Items
Settings for retrieval weights.
The field in the answer where the object is used.
The answer type where the object is used.
A multiplier applied to the weight.
The weight to assign to the field. It can also be one of the supported phrase scoring keys or a constant weight value.
Whether to log verbose debugging information.
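A hypothetical sketch of retrieval weights, assuming field names inferred from the descriptions above (by default every field is weighted 1):

```yaml
retrieval_weights:
  - answer_type: directory
    answer_field: department
    weight: 2.0        # can also be a supported phrase scoring key
    multiply: true     # combine with, rather than replace, the base weight
```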
Configure how we score answers.
Scorer functions are executed concurrently and their results are merged together afterwards.
Must contain a minimum of 1 item
Settings for a phrase scorer.
The name of the scorer function. This is used to identify the scorer functions that have been applied.
"phrase-scorer"
Whether to log verbose debugging information.
Score an answer based on standard token-level P/R/F metrics.
The name of the scorer function. This is used to identify the scorer functions that have been applied.
"answer-scorer"
Whether to log verbose debugging information.
Score an answer based on the tenant's institution affinities.
The list of institution affinities.
No Additional Items
The list of institutions, ordered from highest to lowest affinity. This field is case insensitive.
Must contain a minimum of 1 item
The tenant for which the priority order is defined.
The cleo app tenant applicable to this object.
^cleo\:[a-zA-Z0-9][\w\-\_]*$
The hospital app tenant applicable to this object.
^hospital\:[a-zA-Z0-9][\w\-\_]*$
The name of the scorer function. This is used to identify the scorer functions that have been applied.
"directory-answer-scorer"
Whether to log verbose debugging information.
Whether to log verbose debugging information.
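A hypothetical scorer configuration; the key names are inferred from the descriptions and the tenant value is made up, but it follows the ^cleo\:… pattern documented above:

```yaml
scorers:
  - name: phrase-scorer
  - name: answer-scorer          # token-level P/R/F metrics
  - name: directory-answer-scorer
    institution_affinities:
      - tenant: "cleo:acme-health"   # cleo app tenant, per the regex above
        institutions:                # highest to lowest affinity, case insensitive
          - General Hospital
          - Eastside Clinic
```

Since scorers run concurrently and are merged, each scorer contributes its own keys to the candidate's score vector.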
Configure how we retrieve, score, and rank answer phrases. This is done per answer type/field. Multiple finders for each answer type/field are supported.
No Additional Items
Configure how we retrieve, score, and rank answer phrases in the module.
The field in the answer where the object is used.
The answer type where the object is used.
Configure how we rerank phrases.
Rerankers are executed one after the other.
No Additional Items
Minimum cutoff reranker function. All resulting candidates will have scores greater than or equal to the cutoff.
Conditions to check on documents before we apply the reranker. Documents that fail the conditions will be passed through to the next reranker.
No Additional Items
A condition to check before applying a reranker.
The comparison operator.
The value to compare against.
The key in the score vector to use for the comparison. When the key doesn't exist or is None, we return false.
Must be at least 1 character long
The score value to cut at. If specified as a percentage, the cutoff is calculated as a percentage of the minimum/maximum score.
^\d+(\.\d+)?\%$
If all documents are discarded, keep the original candidates.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
The key in the score vector to use for the cutoff.
Must be at least 1 character long
"min-cutoff-reranker"
Whether to log verbose debugging information.
Maximum cutoff reranker function. All resulting candidates will have scores less than or equal to the cutoff.
Conditions to check on documents before we apply the reranker. Documents that fail the conditions will be passed through to the next reranker.
No Additional Items
The score value to cut at. If specified as a percentage, the cutoff is calculated as a percentage of the minimum/maximum score.
^\d+(\.\d+)?\%$
If all documents are discarded, keep the original candidates.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
The key in the score vector to use for the cutoff.
Must be at least 1 character long
"max-cutoff-reranker"
Whether to log verbose debugging information.
Reranker that keeps the top K candidates based on distinct score values and discards the rest.
Keep the top K candidates.
Value must be strictly greater than 0
Whether to keep candidates whose scores tie for the top K. Defaults to true.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
The key in the score vector to use for the cutoff.
Must be at least 1 character long
"top-k-reranker"
Whether to log verbose debugging information.
A pseudo-reranker that calls the load method on all candidate documents.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"load-full-document-reranker"
Whether to log verbose debugging information.
Reorder the candidates based on scoring keys.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
The scoring fields to order the candidates by. Prefix with a +/- for ascending/descending order. You can also access sv, doc, and source fields using JMESPath syntax, e.g., sv.missed_tokens_count, doc.name, source.phone, etc.
Must contain a minimum of 1 item
Must be at least 1 character long
"order-by-reranker"
Whether to log verbose debugging information.
Reranker that keeps the first N candidates and discards the rest.
Keep the first N candidates.
Value must be strictly greater than 0
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"first-n-reranker"
Whether to log verbose debugging information.
Reranker that keeps the last N candidates and discards the rest.
Keep the last N candidates.
Value must be strictly greater than 0
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"last-n-reranker"
Whether to log verbose debugging information.
Reranker that reverses the order of the candidates.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"reverse-reranker"
Whether to log verbose debugging information.
Reranker that keeps entries containing matching answer fields.
The field in the document to use as the answer.
Must contain a minimum of 1 item
Answer field that we expect to be matched in the answer. Answer fields are specified in <answer_type>.<answer_field> format.
^[\w\-]+\.[\w\_]+$
"intent"
If all documents are discarded, keep the original candidates.
Whether to match all or any of the answer fields.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"answer-field-reranker"
Whether to log verbose debugging information.
Reranker that shortlists or discards entries based on a condition.
The field that decides whether to discard or keep the entries.
The fields in the document to check for the values.
No Additional Items
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"discard-or-keep-information-reranker"
The field based on which discard or keep is decided.
Whether to log verbose debugging information.
Whether to log verbose debugging information.
Configure how we score phrases.
Scorer functions are executed concurrently and their results are merged together afterwards.
Must contain a minimum of 1 item
Settings for a phrase scorer.
The name of the scorer function. This is used to identify the scorer functions that have been applied.
"phrase-scorer"
Whether to log verbose debugging information.
Score an answer based on standard token-level P/R/F metrics.
The name of the scorer function. This is used to identify the scorer functions that have been applied.
"answer-scorer"
Whether to log verbose debugging information.
Score an answer based on the tenant's institution affinities.
The list of institution affinities.
No Additional Items
The list of institutions, ordered from highest to lowest affinity. This field is case insensitive.
Must contain a minimum of 1 item
The tenant for which the priority order is defined.
The cleo app tenant applicable to this object.
^cleo\:[a-zA-Z0-9][\w\-\_]*$
The hospital app tenant applicable to this object.
^hospital\:[a-zA-Z0-9][\w\-\_]*$
The name of the scorer function. This is used to identify the scorer functions that have been applied.
"directory-answer-scorer"
Whether to log verbose debugging information.
Whether to log verbose debugging information.
"answer"
Configure how we retrieve, score, and rank intent phrases.
Configure how we rerank phrases.
Rerankers are executed one after the other.
No Additional Items
Minimum cutoff reranker function. All resulting candidates will have scores greater than or equal to the cutoff.
Conditions to check on documents before we apply the reranker. Documents that fail the conditions will be passed through to the next reranker.
No Additional Items
A condition to check before applying a reranker.
The comparison operator.
The value to compare against.
The key in the score vector to use for the comparison. When the key doesn't exist or is None, we return false.
Must be at least 1 character long
The score value to cut at. If specified as a percentage, the cutoff is calculated as a percentage of the minimum/maximum score.
^\d+(\.\d+)?\%$
If all documents are discarded, keep the original candidates.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
The key in the score vector to use for the cutoff.
Must be at least 1 character long
"min-cutoff-reranker"
Whether to log verbose debugging information.
Maximum cutoff reranker function. All resulting candidates will have scores less than or equal to the cutoff.
Conditions to check on documents before we apply the reranker. Documents that fail the conditions will be passed through to the next reranker.
No Additional Items
The score value to cut at. If specified as a percentage, the cutoff is calculated as a percentage of the minimum/maximum score.
^\d+(\.\d+)?\%$
If all documents are discarded, keep the original candidates.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
The key in the score vector to use for the cutoff.
Must be at least 1 character long
"max-cutoff-reranker"
Whether to log verbose debugging information.
Reranker that keeps the top K candidates based on distinct score values and discards the rest.
Keep the top K candidates.
Value must be strictly greater than 0
Whether to keep candidates whose scores tie for the top K. Defaults to true.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
The key in the score vector to use for the cutoff.
Must be at least 1 character long
"top-k-reranker"
Whether to log verbose debugging information.
A pseudo-reranker that calls the load method on all candidate documents.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"load-full-document-reranker"
Whether to log verbose debugging information.
Reorder the candidates based on scoring keys.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
The scoring fields to order the candidates by. Prefix with a +/- for ascending/descending order. You can also access sv, doc, and source fields using JMESPath syntax, e.g., sv.missed_tokens_count, doc.name, source.phone, etc.
Must contain a minimum of 1 item
Must be at least 1 character long
"order-by-reranker"
Whether to log verbose debugging information.
Reranker that keeps the first N candidates and discards the rest.
Keep the first N candidates.
Value must be strictly greater than 0
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"first-n-reranker"
Whether to log verbose debugging information.
Reranker that keeps the last N candidates and discards the rest.
Keep the last N candidates.
Value must be strictly greater than 0
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"last-n-reranker"
Whether to log verbose debugging information.
Reranker that reverses the order of the candidates.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"reverse-reranker"
Whether to log verbose debugging information.
Reranker that keeps entries containing matching answer fields.
The field in the document to use as the answer.
Must contain a minimum of 1 item
Answer field that we expect to be matched in the answer. Answer fields are specified in <answer_type>.<answer_field> format.
^[\w\-]+\.[\w\_]+$
"intent"
If all documents are discarded, keep the original candidates.
Whether to match all or any of the answer fields.
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"answer-field-reranker"
Whether to log verbose debugging information.
Reranker that shortlists or discards entries based on a condition.
The field that decides whether to discard or keep the entries.
The fields in the document to check for the values.
No Additional Items
The name of the reranker function. This is used to identify the reranker functions that have been applied.
"discard-or-keep-information-reranker"
The field based on which discard or keep is decided.
Whether to log verbose debugging information.
Whether to log verbose debugging information.
Configure how we score phrases.
Scorer functions are executed concurrently and their results are merged together afterwards.
Must contain a minimum of 1 item
Settings for a phrase scorer.
The name of the scorer function. This is used to identify the scorer functions that have been applied.
"phrase-scorer"
Whether to log verbose debugging information.
Score an answer based on standard token-level P/R/F metrics.
The name of the scorer function. This is used to identify the scorer functions that have been applied.
"answer-scorer"
Whether to log verbose debugging information.
Score an answer based on the tenant's institution affinities.
The list of institution affinities.
No Additional Items
The list of institutions, ordered from highest to lowest affinity. This field is case insensitive.
Must contain a minimum of 1 item
The tenant for which the priority order is defined.
The cleo app tenant applicable to this object.
^cleo\:[a-zA-Z0-9][\w\-\_]*$
The hospital app tenant applicable to this object.
^hospital\:[a-zA-Z0-9][\w\-\_]*$
The name of the scorer function. This is used to identify the scorer functions that have been applied.
"directory-answer-scorer"
Whether to log verbose debugging information.
Whether to log verbose debugging information.
"intent"
The intents that will likely trigger this module.
No Additional Items
The intent phrase.
Must be at least 1 character long
The responder to use for this module. Module responders evaluate all the candidate answers to form a cohesive response for a module.
Settings for our directory module responder.
The maximum number of candidate bare answers that we will load in order to sort and display to the user. Defaults to None, which means all answer candidates are loaded.
Value must be strictly greater than 0
The maximum number of candidate bare answers that we will render and display to the user.
Value must be strictly greater than 0
"directory-ir"
Settings for our clinical trial module responder.
The maximum number of candidate bare answers that we will load in order to sort and display to the user. Defaults to None, which means all answer candidates are loaded.
Value must be strictly greater than 0
The maximum number of candidate bare answers that we will render and display to the user.
Value must be strictly greater than 0
"clinical-trial-ir"
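A hypothetical responder configuration (key names inferred from the descriptions above):

```yaml
responder:
  type: directory-ir
  max_loaded_answers: 50     # omit (None) to load all candidates
  max_rendered_answers: 5    # how many bare answers are shown to the user
```

Loading more candidates than are rendered lets the responder sort the full pool before picking what to display.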
Stopwords used in this module.
No Additional Items
Configure stopwords on a per answer type/field level.
The field in the answer where the object is used.
The answer type where the object is used.
The stopwords for this answer type/field. It can be a list of phrases or a stopword set name.
"en-standard"
Must be at least 1 character long
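A hypothetical stopword configuration; the key names are illustrative:

```yaml
stopword_settings:
  - answer_type: directory
    answer_field: department
    stopwords: en-standard              # a named stopword set
  - answer_type: directory
    answer_field: name
    stopwords: ["the", "of", "and"]     # or an explicit phrase list
```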
Synonyms used in this module.
No Additional Items
Configure synonyms on a per answer type/field level.
The field in the answer where the object is used.
The answer type where the object is used.
The number of rounds to expand the synonyms. Each round will generate more candidate synonyms.
Value must be greater than or equal to 0
The synonyms for this answer type/field as a list of synonym groups.
Must contain a minimum of 1 item
Describes how synonyms are generated.
In a one-way synonym group, all phrases are synonyms of the first phrase but not the other way round.
Each additional property must conform to the following schema
Type: array of string
Synonyms of the first phrase.
Must contain a minimum of 1 item
In a two-way synonym group, all pairs of phrases are synonyms of each other and can be used interchangeably.
Must contain a minimum of 2 items
A single synonym phrase.
Must be at least 1 character long
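A hypothetical synonym configuration showing both group types; the key names are inferred from the descriptions, not taken from the schema:

```yaml
synonym_settings:
  - answer_type: directory
    answer_field: department
    expansion_rounds: 1
    synonyms:
      - two_way: [cardiology, heart clinic]   # interchangeable phrases
      - one_way:
          ER: [emergency room, emergency department]  # synonyms of "ER" only
```

The one-way group would map "emergency room" to "ER" but never the reverse, which is useful when an abbreviation is the canonical indexed form.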
The name of the module. This should be unique within the query engine.
"standard-ir"
Settings for our document LLM module.
Defines whether to allow early termination (before the LLM call). True when stock phrase functionality is requested. Defaults to False.
Name of the model to be used; it should correspond with one of the values in embedding_model.
Defines settings for the embedding model during data and query flow.
No Additional Items
Defines settings for the embedding model during data and query flow.
Context size.
Prompt for the embedding.
Dimension of the embedding.
Where the model is from.
HNSW settings
Defines settings for the HNSW graph. See https://github.com/run-llama/llamaindex/blob/977d60a058c691957dae3eb3c66c1894faea24ac/llama-index-integrations/vectorstores/llama-index-vector-stores-postgres/llamaindex/vectorstores/postgres/base.py#L570
Distance metric to use. Note that by default PGVectorStore.buildquery calls cosine_distance.
Size of the dynamic candidate list for constructing the graph. Higher value provides better recall at the cost of speed
Size of the dynamic candidate list for search. Higher value provides better recall at the cost of speed.
Max number of connections per layer.
Name of the embedding model to use.
Must be at least 1 character long
Instruction for the query.
Defines settings for the LLM model during query flow.
Context size, a.k.a. context length.
Template for the context.
Name of the AWS guardrail, if applicable. Should start with arn:aws:bedrock:us-west-2:.
AWS guardrail version (string), if applicable.
Max tokens returned by the LLM.
Name of the LLM as per HuggingFace.
Must be at least 1 character long
Similarity score cutoff for node postprocessing based on the reranker score.
Similarity score cutoff for node postprocessing based on the embedding score.
Number of nodes to return after retrieval.
Restricts the bot to answering only these languages. Provide a list of 2-letter language codes, and double-check that Amazon Comprehend / Translate supports them.
No Additional Items
Prompt for the LLM.
Temperature.
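A hypothetical sketch of the LLM model settings; the key names and the model name are illustrative, not values from the schema:

```yaml
llm_model_settings:
  model_name: mistralai/Mistral-7B-Instruct-v0.2  # HuggingFace model name (illustrative)
  context_size: 8192
  max_tokens: 512
  temperature: 0.1
  top_n: 5                        # nodes returned after retrieval
  reranker_score_cutoff: 0.5
  embedding_score_cutoff: 0.3
  supported_languages: [en, es]   # 2-letter codes supported by Comprehend/Translate
```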
The name of the module. This should be unique within the query engine.
Configure which postprocessor functions to use.
Postprocessor functions to be applied one after another in the model.
No Additional Items
Language Remover LLM Postprocessor function.
"query_language="
The name of the LLM Postprocessor function. This is used to identify the LLM postprocessor functions that have been applied.
"language-remover"
Amazon Translator LLM Postprocessor function.
AWS region name.
The name of the LLM Postprocessor function. This is used to identify the LLM postprocessor functions that have been applied.
"amazon-translator"
Configure which preprocessor functions to use.
Preprocessor functions to be applied one after another.
No Additional Items
Truncate Preprocessor function.
Maximum length of the query string.
The name of the preprocessor function. This is used to identify the preprocessor functions that have been applied.
"truncate"
Language Filter Preprocessor function.
AWS region name.
The name of the preprocessor function. This is used to identify the preprocessor functions that have been applied.
The language codes that are supported for this client. By default, if this is empty, all languages are supported. If a language is not supported, the query will be nullified.
No Additional Items
"language-filter"
Configure which query feature functions to use.
Query Feature functions to be applied one after another.
No Additional Items
Language Detector Query Feature function.
AWS region name.
The name of the query feature function. This is used to identify the query feature functions that have been applied.
"language-detector"
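Putting the three pipelines together, a hypothetical configuration might read as follows; the key names and region are illustrative assumptions:

```yaml
preprocessor_settings:
  - name: truncate
    max_length: 512                  # maximum query string length
  - name: language-filter
    region_name: us-west-2
    supported_languages: [en, es]    # empty means all languages are allowed
query_feature_settings:
  - name: language-detector
    region_name: us-west-2
llm_postprocessor_settings:
  - name: language-remover
    prefix: "query_language="
  - name: amazon-translator
    region_name: us-west-2
```

Each list is applied in order: preprocessors clean the query, query features annotate it, and LLM postprocessors transform the model's output.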
"document-llm"
"document-llm"
Settings for our Cleo dialogue LLM module.
Defines whether to allow early termination (before the LLM call). True when stock phrase functionality is requested. Defaults to False.
Name of the model to be used; it should correspond with one of the values in embedding_model.
Defines settings for the embedding model during data and query flow.
No Additional Items
Defines settings for the embedding model during data and query flow.
Same definition as Embedding Model Settings
Defines settings for the LLM model during query flow.
Context size, a.k.a. context length.
Template for the context.
Name of the AWS guardrail, if applicable. Should start with arn:aws:bedrock:us-west-2:.
AWS guardrail version (string), if applicable.
Max tokens returned by the LLM.
Name of the LLM as per HuggingFace.
Must be at least 1 character long
Similarity score cutoff for node postprocessing based on the reranker score.
Similarity score cutoff for node postprocessing based on the embedding score.
Number of nodes to return after retrieval.
Restricts the bot to answering only these languages. Provide a list of 2-letter language codes, and double-check that Amazon Comprehend / Translate supports them.
No Additional Items
Prompt for the LLM.
Temperature.
Configure the valid keys that will be used to filter metadata. By default, these should include at least the types from QueryFeatureSettings.
No Additional Items
The name of the module. This should be unique within the query engine.
Configure which postprocessor functions to use.
Postprocessor functions to be applied one after another in the model.
No Additional Items
Language Remover LLM Postprocessor function.
"query_language="
The name of the LLM Postprocessor function. This is used to identify the LLM postprocessor functions that have been applied.
"language-remover"
Amazon Translator LLM Postprocessor function.
AWS region name.
The name of the LLM Postprocessor function. This is used to identify the LLM postprocessor functions that have been applied.
"amazon-translator"
Configure which preprocessor functions to use.
Preprocessor functions to be applied one after another.
No Additional Items
Truncate Preprocessor function.
Maximum length of the query string.
The name of the preprocessor function. This is used to identify the preprocessor functions that have been applied.
"truncate"
Language Filter Preprocessor function.
AWS region name.
The name of the preprocessor function. This is used to identify the preprocessor functions that have been applied.
The language codes that are supported for this client. By default, if this is empty, all languages are supported. If a language is not supported, the query will be nullified.
No Additional Items
"language-filter"
Configure how we generate query features.
Same definition as query_feature_settings
"cleo-dialogue-llm"
"dialogue-llm"
Defines settings for Agentic modules.
name: agentic-module-query-engine
tenants: [all]
modules:
- type: agentic-module
name: agentic-module-v0
storage_prefix: agentic-module-v0-
#end agentic-module-v0
Defines whether to allow early termination (before the LLM call). True when stock phrase functionality is requested. Defaults to False.
Name of the model to be used; it should correspond with one of the values in embedding_model.
Defines settings for the embedding model during data and query flow.
No Additional Items
Defines settings for the embedding model during data and query flow.
Same definition as Embedding Model Settings
Defines settings for the LLM model during query flow.
Context size, a.k.a. context length.
Template for the context.
Name of the AWS guardrail, if applicable. Should start with arn:aws:bedrock:us-west-2:.
AWS guardrail version (string), if applicable.
Max tokens returned by the LLM.
Name of the LLM as per HuggingFace.
Must be at least 1 character long
Similarity score cutoff for node postprocessing based on the reranker score.
Similarity score cutoff for node postprocessing based on the embedding score.
Number of nodes to return after retrieval.
Restricts the bot to answering only these languages. Provide a list of 2-letter language codes, and double-check that Amazon Comprehend / Translate supports them.
No Additional Items
Prompt for the LLM.
Temperature.
Configure the valid keys that will be used to filter metadata. By default, these should include at least the types from QueryFeatureSettings.
No Additional Items
The name of the module. This should be unique within the query engine.
Configure which postprocessor functions to use.
Postprocessor functions to be applied one after another in the model.
No Additional Items
Language Remover LLM Postprocessor function.
"query_language="
The name of the LLM Postprocessor function. This is used to identify the LLM postprocessor functions that have been applied.
"language-remover"
Amazon Translator LLM Postprocessor function.
AWS region name.
The name of the LLM Postprocessor function. This is used to identify the LLM postprocessor functions that have been applied.
"amazon-translator"
Configure which preprocessor functions to use.
Preprocessor functions to be applied one after another.
No Additional Items
Truncate Preprocessor function.
Maximum length of the query string.
The name of the preprocessor function. This is used to identify the preprocessor functions that have been applied.
"truncate"
Language Filter Preprocessor function.
AWS region name.
The name of the preprocessor function. This is used to identify the preprocessor functions that have been applied.
The language codes that are supported for this client. By default, if this is empty, all languages are supported. If a language is not supported, the query will be nullified.
No Additional Items
"language-filter"
Configure how we generate query features.
Same definition as query_feature_settings
"agentic-module-v0-"
"agentic-module"
Noop module settings.
The name of the module. This should be unique within the query engine.
"noop"
The name of the query engine. This should be globally unique.
Version of this query engine based on the Kondo resource version.
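Tying the top-level fields together, a minimal hypothetical query engine configuration could look like this; the key names follow the field descriptions above and the values are made up:

```yaml
name: example-query-engine     # globally unique
enabled: true
version: 3                     # based on the Kondo resource version
modules:
  - type: standard-ir
    name: directory-search     # unique within the query engine
  - type: noop
    name: fallback
```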