# Quickstart
This guide walks you through installing Agensight, capturing your first LLM trace, and setting up the MCP server to auto-extract agents, tools, and prompts from your codebase.
## Prerequisites
- Python ≥ 3.10
## Step 1: Install Agensight and Capture Your First Trace
Let’s start by installing the SDK and writing a small script to trace an OpenAI API call.
1. Create a virtual environment
2. Install Agensight
3. Write your first trace
Create a file `my_llm_app.py`:
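The file's contents were lost during extraction. A minimal sketch, assuming Agensight exposes an `init()` function alongside the `@trace()` and `@span()` decorators mentioned later in this guide, and that the `openai` package is installed, might look like:

```python
# my_llm_app.py -- minimal traced OpenAI call.
# Sketch only: the exact Agensight import path and init() signature are assumptions.
from openai import OpenAI
from agensight import init, trace, span  # assumed import path

init(name="my-first-app")  # assumed initializer; registers the tracer

client = OpenAI()  # reads OPENAI_API_KEY from the environment


@span()
def call_llm(prompt: str) -> str:
    # Spans appear as nested entries under their parent trace in the dashboard
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


@trace("generate_greeting")
def generate_greeting() -> str:
    return call_llm("Say hello in one sentence.")


if __name__ == "__main__":
    print(generate_greeting())
```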
Make sure your `OPENAI_API_KEY` environment variable is set before running the script:
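The setup commands for the numbered steps above were dropped during extraction. A typical flow, assuming Agensight is published on PyPI under the name `agensight`, would be:

```shell
# 1) Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate

# 2) Install Agensight (package name assumed)
pip install agensight

# 3) Provide your OpenAI key, then run the script
export OPENAI_API_KEY="sk-..."
python my_llm_app.py
```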
## Step 2: View the Trace in Agensight
After running the script:
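The command itself was lost in extraction. Assuming the SDK ships a CLI with a dashboard subcommand (the exact command name may differ; check the Agensight docs), it would be along the lines of:

```shell
agensight view
```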
This opens http://localhost:5001, where you can explore:
- Token usage
- Function spans
- Full LLM call trace
## Step 3: Set Up MCP Server with Cursor to Auto-Generate Config
To enable full tracing (agents, tools, prompts), you’ll need to generate an `agensight.config.json` file for your project.
1. Clone and set up the MCP server
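The clone-and-setup commands (including the repository URL) were omitted here. A typical flow, with the URL and directory name left as placeholders rather than real values, might be:

```shell
# Placeholder URL and directory -- substitute the actual Agensight MCP repository
git clone <agensight-mcp-repo-url>
cd <agensight-mcp-dir>

# Isolate the server's dependencies (requirements file name assumed)
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```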
2. Connect MCP server to Cursor
In `~/.cursor/mcp.json`, add:
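The JSON snippet was lost during extraction. Based on the surrounding instructions about the Python binary and `server.py` paths, a Cursor MCP server entry generally takes this shape (the server name and both paths are illustrative):

```json
{
  "mcpServers": {
    "agensight": {
      "command": "/path/to/your/venv/bin/python",
      "args": ["/path/to/agensight-mcp/server.py"]
    }
  }
}
```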
Replace the paths with the actual paths to your Python binary and `server.py` file.
3. Ask Cursor to generate config
In Cursor, open your project and ask:
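The example prompt was dropped during extraction; a request along these lines should work:

```
Use the Agensight MCP server to analyze this project and generate an
agensight.config.json describing my agents, tools, and prompts.
```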
Cursor will run the MCP server and create a valid `agensight.config.json` in your project root.
## What’s Next?
- Add more `@trace()` and `@span()` decorators to trace your pipelines in detail.
- Use the config file to enhance dashboard grouping for agents, tools, and prompts.
- Learn about advanced features in the Core Concepts guide.