1. Installation

First, you’ll need a Parea API key. See Authentication to get started.

After you’ve followed those steps, you are ready to install the Parea SDK client.

pip install parea-ai
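You can pass the API key to the client directly, but a common pattern is to keep it in an environment variable instead of hardcoding it. A minimal sketch (the variable name PAREA_API_KEY is just a convention here):

```python
import os

# Read the key from the environment; returns None if it isn't set.
# Set it beforehand with: export PAREA_API_KEY=your-key
api_key = os.getenv("PAREA_API_KEY")
```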
2. Start Logging

Use the code snippet below to start logging your LLM requests. Parea also provides TypeScript and cURL equivalents.

Parea supports automatic logging for OpenAI and LangChain, and can log any model when you use Parea’s completion method.

The @trace decorator allows you to associate multiple functions into a single trace.

from parea import Parea
from parea.schemas import LLMInputs, Message, ModelParams, Role, Completion

p = Parea(api_key="PAREA_API_KEY")  # replace with your API key

response = p.completion(
    Completion(llm_configuration=LLMInputs(
        model="gpt-3.5-turbo",  # this can be any model enabled on Parea
        model_params=ModelParams(temp=0.5),
        messages=[Message(
            role=Role.user,
            content="Write a Hello World program in Python using FastAPI.",
        )],
    ))
)
print(response.content)
3. View Logs

Now you can view your trace logs on the Logs page. You will see a table of your logs, and any chains will be expandable. The log table supports search, filtering, and sorting.

[Screenshot: trace log table]

If you click a log, it will open the detailed trace view. Here, you can step through each span and view inputs, outputs, messages, metadata, and other key metrics associated with a given trace.

You can also add these traces to a test collection to try new prompts against historical inputs, or open a trace in the Playground to iterate on the example.

[Screenshot: detailed trace view]

What’s Next?

Explore our cookbooks for more examples on how to use Parea’s SDKs.

Learn how to use evaluation metrics to run experiments.