1. Installation

First, you'll need a Parea API key; see Authentication to get started. After you've followed those steps, you are ready to install the Parea SDK client.
pip install parea-ai
2. Start Logging

Use any of the code snippets below to start logging your LLM requests.
Parea supports automatic logging for OpenAI, Anthropic, LangChain, or any model when using Parea's completion method. The @trace decorator allows you to associate multiple functions into a single trace. Snippets are available in Python, TypeScript, and cURL; the Python version is shown below.
from openai import OpenAI
from parea import Parea

client = OpenAI(api_key="OPENAI_API_KEY")  # replace with your API key
p = Parea(api_key="PAREA_API_KEY")  # replace with your API key
p.wrap_openai_client(client)  # if OpenAI python version < 1.0.0: p.wrap_openai_client(openai)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=0.5,
    messages=[
        {
            "role": "user",
            "content": "Write a Hello World program in Python using FastAPI.",
        }
    ],
)
print(response.choices[0].message.content)
3. View Logs

Now you can view your trace logs on the Logs page. You will see a table of your logs, and any chains will be expandable. The log table supports search, filtering, and sorting.

[Image: trace log table]

If you click a log, it opens the detailed trace view. Here, you can step through each span and view the inputs, outputs, messages, metadata, and other key metrics associated with a given trace. You can also add these traces to a test collection to test new prompts on historical inputs, or open the trace in the Playground to iterate on the example.

[Image: detailed trace log]

What’s Next?

Explore our cookbooks for more examples of how to use Parea's SDKs, and learn how to use evaluation metrics to run experiments.