Logging and Tracing
Record discrete or related events in your application, from LLM requests and chains to plain functions.
To build production-ready LLM applications, developers need to understand the state of their systems. You want to track key LLM metrics, such as request inputs and outputs, as well as model-specific metadata: model parameters, tokens, cost, etc. But you also want to track the functions that manipulate data from one LLM and chain it into another. Parea makes it easy to get this deep visibility into any LLM stack.
Prerequisites
- First, you’ll need a Parea API key. See Authentication to get started.
- For any model you want to use with the SDK, set up your Provider API keys.
- Install the Parea SDK.
Logging
Parea automatically logs all LLM requests when you use the SDK or OpenAI’s API.
Usage
Parea supports automatic logging for OpenAI, Anthropic, Langchain, or any model if using Parea’s completion method (schema definition).
OpenAI API
If you want to use OpenAI directly, you can still get automatic logging using Parea’s wrap_openai_client helper.
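Conceptually, a client-wrapping helper intercepts each request, records the inputs and outputs, and then returns the original response unchanged. The sketch below illustrates that pattern with a stand-in client; it is not Parea’s actual implementation, and `wrap_client_method`, `FakeChatClient`, and `log_fn` are hypothetical names used only for illustration.

```python
import functools

def wrap_client_method(method, log_fn):
    """Hypothetical sketch of what a wrapping helper does:
    intercept a client method, record inputs/outputs, return the result."""
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        result = method(*args, **kwargs)
        log_fn({"inputs": kwargs, "output": result})  # record the call
        return result
    return wrapper

# Stand-in for a real LLM client (NOT the OpenAI client).
class FakeChatClient:
    def create(self, **kwargs):
        return "hello from the model"

logs = []
client = FakeChatClient()
client.create = wrap_client_method(client.create, logs.append)
response = client.create(model="gpt-4o", messages=[{"role": "user", "content": "Hi"}])
```

Because the wrapper forwards all arguments and the return value, existing calling code keeps working unmodified, which is why wrapping is a low-friction way to add logging.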
Anthropic API
If you want to use Anthropic’s Claude directly, you can still get automatic logging using Parea’s wrap_anthropic_client helper.
Parea Completion Method
The completion method allows you to call any LLM model you have access to on Parea with the same API interface.
You have granular control over what is logged via the parameters on Parea’s completion method.
log_omit_inputs: bool = field(default=False)  # omit the inputs to the LLM call
log_omit_outputs: bool = field(default=False)  # omit the outputs from the LLM call
log_omit: bool = field(default=False)  # do not log anything
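The flags above read like dataclass fields that default to False. A minimal sketch of how such flags behave, using the field names from the schema above (the surrounding `LogControls` class is hypothetical, not part of Parea’s API):

```python
from dataclasses import dataclass, field

@dataclass
class LogControls:
    # Hypothetical container; field names and defaults mirror the schema above.
    log_omit_inputs: bool = field(default=False)   # omit the inputs to the LLM call
    log_omit_outputs: bool = field(default=False)  # omit the outputs from the LLM call
    log_omit: bool = field(default=False)          # do not log anything

# Override only what you need; everything else keeps its False default.
controls = LogControls(log_omit_inputs=True)
```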
LangChain Framework
Parea also supports frameworks such as LangChain. You can use PareaAILangchainTracer as a callback to automatically log all requests and responses.
Tracing
If your LLM application has complex abstractions such as chains, agents, retrieval, tool usage, or external functions that modify or connect prompts, then you will want a trace to associate all your related logs. A Trace captures the entire lifecycle of a request and consists of one or more spans, representing different sub-steps.
Usage
The @trace decorator allows you to associate multiple processes into a single parent trace.
You only need to add the decorator to the top-level function, or to any non-LLM function that you also want to track.
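To make the parent/child relationship concrete, here is a stdlib-only sketch of how a trace decorator can nest spans with contextvars: the decorator records which span is “current” before calling the function, so any decorated function called inside it becomes a child span. This illustrates the mechanism only; it is not Parea’s implementation, and the `spans` list stands in for sending logs to a backend.

```python
import contextvars
import functools
import uuid

# Holds the id of the span we are currently inside (None at the top level).
_current_trace = contextvars.ContextVar("current_trace", default=None)
spans = []  # stand-in for a logging backend

def trace(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        parent = _current_trace.get()          # who called us?
        span_id = str(uuid.uuid4())
        token = _current_trace.set(span_id)    # we are now the current span
        try:
            result = fn(*args, **kwargs)
        finally:
            _current_trace.reset(token)        # restore the parent span
        spans.append({"id": span_id, "parent": parent, "name": fn.__name__})
        return result
    return wrapper

@trace
def call_llm(prompt):
    return f"answer to: {prompt}"

@trace
def pipeline(question):
    # Because pipeline is the current span here, call_llm nests under it.
    return call_llm(question)

pipeline("What is tracing?")
```

Note that the inner span finishes (and is recorded) before the outer one, which is why real tracing backends reassemble the tree from parent ids rather than from ordering.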
OpenAI API
If you want to use OpenAI directly, you can still get automatic logging using Parea’s wrap_openai_client helper.
Parea Completion Method
Limitations
Python: Threading & Multi-processing
The trace decorator relies on Python’s contextvars to create traces.
However, when spawning threads from inside a trace, the decorator will not work correctly because the contextvars are not copied to the new threads or processes.
There is an existing issue in Python’s standard library and a great explanation in the FastAPI repo that discusses this limitation.
For example, when a @trace-decorated function uses a ThreadPoolExecutor to make concurrent LLM requests, the context that holds important information about the nesting hierarchy (“we are inside another trace”) is not copied over to the child threads.
As a result, the created generations will not be linked to the trace and will be ‘orphaned’; in the UI, you will see a trace missing those generations.
A workaround is to manually copy the context to the new threads or processes via contextvars.copy_context.
This is the recommended approach when using threading or multi-processing in Python.
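The workaround can be demonstrated with the standard library alone. A new worker thread starts with an empty context, so a ContextVar set in the main thread falls back to its default there; snapshotting the context with contextvars.copy_context and running the task via ctx.run carries the value across. The `request_id` variable below is just an illustration of any trace state:

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the trace state that the decorator keeps in a ContextVar.
request_id = contextvars.ContextVar("request_id", default=None)

def read_request_id():
    return request_id.get()

request_id.set("trace-123")

with ThreadPoolExecutor(max_workers=1) as pool:
    # Naive submit: the worker thread has an empty context,
    # so the ContextVar falls back to its default (None).
    naive = pool.submit(read_request_id).result()

    # Workaround: snapshot the current context, then run the task inside it.
    ctx = contextvars.copy_context()
    copied = pool.submit(ctx.run, read_request_id).result()
```

The same `ctx.run(...)` pattern applies to any callable you submit to an executor, which is why copying the context restores the parent/child linkage for traces.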
Disabling/sampling logging
You can either disable logging or only store a percentage of all logs in Parea.
In Python, you can disable logging by setting the environment variable TURN_OFF_PAREA_LOGGING to True.
Alternatively, you can deactivate logging by using the parea.helpers.TurnOffPareaLogging context manager.
To reduce the number of logs stored in Parea, you can specify the log_sample_rate in the trace decorator or the completion function.
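Taken together, these two controls amount to a kill switch plus probabilistic sampling. A minimal sketch of that decision logic, assuming the environment variable named above; the `should_log` helper itself is hypothetical, not part of Parea’s API:

```python
import os
import random

def should_log(sample_rate: float = 1.0) -> bool:
    """Hypothetical sketch: env-var kill switch, then probabilistic sampling."""
    if os.environ.get("TURN_OFF_PAREA_LOGGING", "").lower() == "true":
        return False  # logging globally disabled
    # Keep roughly sample_rate of all logs; random() is in [0.0, 1.0).
    return random.random() < sample_rate

os.environ["TURN_OFF_PAREA_LOGGING"] = "True"
disabled = should_log(sample_rate=1.0)   # kill switch wins over sampling
del os.environ["TURN_OFF_PAREA_LOGGING"]
always = should_log(sample_rate=1.0)     # rate 1.0 keeps every log
never = should_log(sample_rate=0.0)      # rate 0.0 drops every log
```

A sample rate between 0 and 1 keeps that fraction of logs on average, which is usually enough to monitor cost and latency trends without storing every request.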
Streaming
What’s Next
Now that you know how to create a trace you can enrich it with metadata or learn how to: