Required Arguments
Each test case must be a ModelTestCase instance with the following fields:
- input: The user's query.
- actual_output: The generated response (used for formatting or logging; not used in score computation).
- retrieval_context: A list of strings representing the context chunks retrieved from your knowledge base.
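Below is a minimal sketch of constructing such a test case. The import path is hypothetical and should be adjusted to match your installation; only the field names come from this page.

```python
# Hypothetical import path -- adjust to wherever ModelTestCase lives
# in your installation.
from my_evals.test_case import ModelTestCase

test_case = ModelTestCase(
    # The user's query the retriever was asked to serve.
    input="What are the shipping costs for orders over $50?",
    # The generated response; logged alongside the score but not used
    # in the contextual relevancy computation itself.
    actual_output="Orders over $50 ship for free within the continental US.",
    # Context chunks returned by your retriever / knowledge base.
    retrieval_context=[
        "Orders over $50 qualify for free standard shipping in the continental US.",
        "Our support team is available 24/7 via live chat.",
    ],
)
```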
Optional Arguments
| Argument | Type | Description | Default |
|---|---|---|---|
| threshold | float | Minimum passing score. | 0.5 |
| model | str | Name of the LLM used for scoring (e.g., 'gpt-4o') or a custom DeepEvalBaseLLM. | 'gpt-4o' |
| include_reason | bool | If True, includes a human-readable explanation for the score. | True |
| strict_mode | bool | Enforces a binary score: 1 for full relevance, 0 otherwise. | False |
| async_mode | bool | Enables parallel processing during scoring. | True |
| verbose_mode | bool | If True, logs detailed scoring steps to the console. | False |
| evaluation_template | ContextualRelevancyTemplate | Override prompt logic for the LLM judge. | Internal default |
Usage Example
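The sketch below scores a test case with the metric, using the optional arguments listed above. Class and field names follow this page, but the import paths and the measure()/score/reason API shown here are assumptions modeled on common DeepEval-style conventions and may differ in your installation.

```python
# Hypothetical import paths -- adjust to match your installation.
from my_evals.metrics import ContextualRelevancyMetric
from my_evals.test_case import ModelTestCase

# Configure the metric with the optional arguments described above.
metric = ContextualRelevancyMetric(
    threshold=0.5,        # minimum passing score
    model="gpt-4o",       # LLM judge used for scoring
    include_reason=True,  # attach a human-readable explanation
)

test_case = ModelTestCase(
    input="What are the shipping costs for orders over $50?",
    actual_output="Orders over $50 ship for free within the continental US.",
    retrieval_context=[
        "Orders over $50 qualify for free standard shipping in the continental US.",
        "Our support team is available 24/7 via live chat.",
    ],
)

metric.measure(test_case)
print(metric.score)   # float in [0, 1]
print(metric.reason)  # populated when include_reason=True
```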
How It Works
- Extract statements from the retrieval_context using the selected LLM.
- Classify each statement as either relevant or not relevant to the input.
- Compute the Contextual Relevancy score as:

$$\text{Contextual Relevancy} = \frac{\text{Number of Relevant Statements}}{\text{Total Number of Statements}}$$
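As a rough illustration of that final step (not the library's actual implementation), the score is simply the fraction of extracted statements that the judge marked relevant:

```python
def contextual_relevancy_score(verdicts: list[bool]) -> float:
    """Fraction of retrieval-context statements judged relevant to the input.

    `verdicts` holds one boolean per extracted statement: True if the
    LLM judge classified the statement as relevant, False otherwise.
    """
    if not verdicts:
        return 0.0
    return sum(verdicts) / len(verdicts)


# Example: 3 of 4 extracted statements judged relevant -> score of 0.75
print(contextual_relevancy_score([True, True, False, True]))
```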
Use Cases
Use ContextualRelevancyMetric when you want to:
- Optimize retriever precision by penalizing irrelevant or off-topic context.
- Measure retrieval drift when irrelevant content is included in context.
- Improve user satisfaction by keeping retrieved context short and focused.
