Record discrete events or groups of related events in your application, from LLM requests and chains down to individual functions.
To automatically log all LLM requests and responses, wrap your OpenAI client with the `wrap_openai_client` helper, or your Anthropic client with the `wrap_anthropic_client` helper.
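A minimal sketch of wrapping a client, assuming the `Parea` client is initialized with an API key from the environment (the model name is illustrative):

```python
import os

from openai import OpenAI
from parea import Parea

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Instrument the client; subsequent calls are logged to Parea automatically.
p = Parea(api_key=os.getenv("PAREA_API_KEY"))
p.wrap_openai_client(client)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Hello!"}],
)
```

Wrapping an Anthropic client works the same way via `p.wrap_anthropic_client(client)`.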
You can control what is logged with the following options:

```python
log_omit_inputs: bool = field(default=False)   # omit the inputs to the LLM call
log_omit_outputs: bool = field(default=False)  # omit the outputs from the LLM call
log_omit: bool = field(default=False)          # do not log anything
```
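A sketch of using one of these flags, under the assumption that they are accepted as keyword arguments by the `trace` decorator:

```python
from parea import trace

# Assumption: the log-omission flags above can be passed to @trace.
@trace(log_omit_inputs=True)  # the inputs to this call will not be logged
def summarize(document: str) -> str:
    ...
```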
If you are using LangChain, pass the `PareaAILangchainTracer` as a callback to automatically log all requests and responses.
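A sketch of attaching the tracer as a callback; the import path may vary between SDK versions, and `langchain_openai` is assumed to be installed:

```python
import os

from langchain_openai import ChatOpenAI
from parea import Parea
from parea.utils.trace_integrations.langchain import PareaAILangchainTracer

Parea(api_key=os.getenv("PAREA_API_KEY"))
handler = PareaAILangchainTracer()

llm = ChatOpenAI()
# Every request/response in this run is logged via the callback.
llm.invoke("Hello!", config={"callbacks": [handler]})
```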
The `@trace` decorator allows you to associate multiple processes into a single parent trace. You only need to add the decorator to the top-level function, or to any non-LLM-call function that you also want to track, and wrap your OpenAI client with the `wrap_openai_client` helper.
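A sketch of nesting, assuming a wrapped OpenAI client as above (function names and model are illustrative):

```python
import os

from openai import OpenAI
from parea import Parea, trace

client = OpenAI()
Parea(api_key=os.getenv("PAREA_API_KEY")).wrap_openai_client(client)

@trace  # a non-LLM step that is tracked as part of the parent trace
def build_prompt(question: str) -> str:
    return f"Answer concisely: {question}"

@trace  # top-level function: the LLM call below is nested under this trace
def answer(question: str) -> str:
    prompt = build_prompt(question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```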
The `trace` decorator relies on Python's `contextvars` to create traces. However, when spawning threads or processes from inside a trace, the decorator will not work correctly, because the `contextvars` are not copied to the new threads or processes. There is an existing issue in Python's standard library, and a great explanation in the FastAPI repo, that discusses this limitation.
For example, when a `@trace`-decorated function uses a `ThreadPoolExecutor` to make concurrent LLM requests, the context that holds important information about the nesting hierarchy (“we are inside another trace”) is not copied over to the child threads. The created generations are therefore not linked to the trace and become ‘orphaned’; in the UI, you will see a trace missing those generations. A workaround is to manually copy the context to the new threads or processes via `contextvars.copy_context`, as shown below. This is the recommended approach when using threading or multiprocessing in Python.
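A minimal sketch of the workaround, reusing the hypothetical `answer` function from the earlier sketch: each task is submitted through a copy of the current context, so the child generations stay linked to the parent trace.

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

from parea import trace

@trace
def fan_out(questions: list[str]) -> list[str]:
    with ThreadPoolExecutor(max_workers=4) as executor:
        futures = []
        for question in questions:
            # Copy the current context (including the trace hierarchy)
            # and run the task inside that copy on the worker thread.
            ctx = contextvars.copy_context()
            futures.append(executor.submit(ctx.run, answer, question))
        return [future.result() for future in futures]
```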
To deactivate logging, set the environment variable `TURN_OFF_PAREA_LOGGING` to `True`. Alternatively, you can deactivate logging by using the `parea.helpers.TurnOffPareaLogging` context manager.
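For example, to suppress logging for a specific block of code (reusing the hypothetical `answer` function from above):

```python
from parea.helpers import TurnOffPareaLogging

with TurnOffPareaLogging():
    # Calls made inside this block are not logged to Parea.
    answer("What is tracing?")
```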
To reduce the number of logs stored in Parea, you can specify a `log_sample_rate` in the `trace` decorator or in the `completion` function.
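A sketch, assuming `log_sample_rate` is a fraction between 0 and 1:

```python
from parea import trace

# Assumption: with log_sample_rate=0.1, roughly 10% of invocations
# of this function are logged to Parea.
@trace(log_sample_rate=0.1)
def answer(question: str) -> str:
    ...
```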