# Agensight

## Docs

- [Agensight Tracing](https://pype-db52d533.mintlify.app/core-concepts/tracing.md): How to initialize Agensight tracing, define main operations with @trace, and add granular details with @span.
- [Conversation Completeness](https://pype-db52d533.mintlify.app/evaluations/conversation_completness.md)
- [Conversational GEval](https://pype-db52d533.mintlify.app/evaluations/conversation_geval.md)
- [Conversation Relevancy](https://pype-db52d533.mintlify.app/evaluations/conversation_relevancy.md)
- [GEval](https://pype-db52d533.mintlify.app/evaluations/geval.md)
- [Introduction](https://pype-db52d533.mintlify.app/evaluations/intro.md)
- [Image Editing](https://pype-db52d533.mintlify.app/evaluations/multi-model-metrics/image_editing.md)
- [Image Helpfulness](https://pype-db52d533.mintlify.app/evaluations/multi-model-metrics/image_helpfulness.md)
- [Image Reference](https://pype-db52d533.mintlify.app/evaluations/multi-model-metrics/image_reference.md)
- [Multimodal Answer Relevancy](https://pype-db52d533.mintlify.app/evaluations/multi-model-metrics/multimodal_answer_relevancy.md)
- [Multimodal Contextual Precision](https://pype-db52d533.mintlify.app/evaluations/multi-model-metrics/multimodal_contextual_precision.md)
- [Multimodal Contextual Recall](https://pype-db52d533.mintlify.app/evaluations/multi-model-metrics/multimodal_contextual_recall.md)
- [Multimodal Contextual Relevancy](https://pype-db52d533.mintlify.app/evaluations/multi-model-metrics/multimodal_contextual_relevancy.py.md)
- [Multimodal Faithfulness](https://pype-db52d533.mintlify.app/evaluations/multi-model-metrics/multimodal_faithfulness.py.md)
- [Multimodal Tool Correctness](https://pype-db52d533.mintlify.app/evaluations/multi-model-metrics/multimodal_tool_correctness.md)
- [Text To Image](https://pype-db52d533.mintlify.app/evaluations/multi-model-metrics/text_to_image.md)
- [Contextual Precision](https://pype-db52d533.mintlify.app/evaluations/rag-metrics/contextual_precision.md): The ContextualPrecisionMetric is a reference-based metric designed to evaluate the ranking quality of retrieved context nodes in a Retrieval-Augmented Generation (RAG) system. It uses an LLM-as-a-judge approach to assess whether relevant context chunks are ranked higher than irrelevant ones, ens…
- [Contextual Recall](https://pype-db52d533.mintlify.app/evaluations/rag-metrics/contextual_recall.md): The ContextualRecallMetric is a reference-based RAG evaluation metric designed to measure how completely your retrieval context covers the information needed to generate the ideal answer. It uses LLM-as-a-judge to assess whether statements in the expected_output are attributable to the retrieved con…
- [Contextual Relevancy](https://pype-db52d533.mintlify.app/evaluations/rag-metrics/contextual_relevancy.md): The ContextualRelevancyMetric is a referenceless RAG evaluation metric that assesses how relevant the retrieved information is in response to a given user input. It uses LLM-as-a-judge to score the retrieval_context based on its topical alignment with the original query, regardless of the actual or…
- [Task Completion](https://pype-db52d533.mintlify.app/evaluations/task_completion.md)
- [Tool Correctness](https://pype-db52d533.mintlify.app/evaluations/tool_correctness.md)
- [Introduction](https://pype-db52d533.mintlify.app/introduction.md): Trace, debug, and optimize your LLM workflows with developer-first observability.
- [Quickstart](https://pype-db52d533.mintlify.app/quickstart.md): This guide walks you through installing Agensight, capturing your first LLM trace, and setting up the MCP server to auto-extract agents, tools, and prompts from your codebase.