Monitor your LLM requests and application functions.
1. Installation
First, you’ll need a Parea API key. See Authentication to get started. After you’ve followed those steps, you are ready to install the Parea SDK client.
pip install parea-ai
2. Start Logging
Use any of the code snippets below to start logging your LLM requests. Parea supports automatic logging for OpenAI, Anthropic, LangChain, or any model when using Parea’s completion method. The @trace decorator allows you to associate multiple functions into a single trace.
```python
from openai import OpenAI
from parea import Parea

client = OpenAI(api_key="OPENAI_API_KEY")  # replace with your API key
p = Parea(api_key="PAREA_API_KEY")  # replace with your API key
p.wrap_openai_client(client)  # if OpenAI python version < 1.0.0: p.wrap_openai_client(openai)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=0.5,
    messages=[
        {
            "role": "user",
            "content": "Write a Hello World program in Python using FastAPI.",
        }
    ],
)
print(response.choices[0].message.content)
```
3. View Logs
Now you can view your trace logs on the Logs page. You will see a table of your logs, and any chains will be expandable. The log table supports search, filtering, and sorting.

If you click a log, it will open the detailed trace view. Here, you can step through each span and view the inputs, outputs, messages, metadata, and other key metrics associated with a given trace.

You can also add these traces to a test collection to test new prompts on historical inputs, or open the trace in the Playground to iterate on the example.