Log
Log an (LLM) response to visualize inference results or chains.
Body
Unique UUID4 identifier of this trace/log.
Ex: e3267953-a16f-47f5-b37e-622dbb29d730
Unique UUID4 identifier of the root trace/log (needed to create hierarchical traces).
Unique UUID4 identifier of the parent trace/log (needed to create hierarchical traces).
Depth of this log in the trace hierarchy. Note that the root log has depth 0. The children of the root log have depth 1, and so on.
Order of execution of this log in the trace hierarchy. Note that the root log has execution order 0. The first child of the root log has execution order 1, and so on.
Name to identify a trace. Will be visible in logs for filtering.
Key-value pairs of the function input. Note, there is a special field to capture messages in LLM calls. You can still use this field in the case of LLM calls to track the key-value pairs for prompt templates.
Response of this step/log/function. If response isn’t a string, it needs to be serialized to a string.
Name of the project this log is associated with.
Time in seconds it took to execute the LLM/function call of this log.
Datetime from a POSIX timestamp for when the request started.
Ex. 2023-07-23 13:48:34
Datetime from a POSIX timestamp for when the request completed.
Ex. 2023-07-23 13:48:34
Unique UUID4 identifier. If provided, this log will be associated with the experiment.
Any images which are associated with this log and should be displayed in the logs.
Any additional metadata to record.
The target or “gold standard” response for the inputs of this log.
List of string tags to associate with this log.
Unique identifier for an end-user. Will be visible in logs for filtering.
Unique identifier for a session. Can be used to associate multiple logs in, e.g., chat applications. Will be visible in logs for filtering.
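The body fields above can be sketched as a payload dict. Note the field names used here (`trace_id`, `depth`, `execution_order`, etc.) are illustrative assumptions, not confirmed schema keys; consult the request schema for the exact names.

```python
import uuid
from datetime import datetime, timezone

# Minimal sketch of a root log payload; key names are assumptions.
trace_id = str(uuid.uuid4())  # unique UUID4 identifier for this trace/log
payload = {
    "trace_id": trace_id,
    "root_trace_id": trace_id,    # a root log is its own root
    "parent_trace_id": trace_id,
    "depth": 0,                   # root log has depth 0
    "execution_order": 0,         # root log has execution order 0
    "trace_name": "summarize-article",            # visible for filtering
    "inputs": {"article": "Some article text."},  # key-value pairs of the input
    "output": "A short summary.",                 # non-strings must be serialized
    "project_name": "default",
    "latency": 1.23,              # seconds to execute the call
    "start_timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S"),
    "end_user_identifier": "user-42",
    "session_id": "session-7",
}
```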
Score and evaluation metric fields
The following are the fields that can be used to attach scores and evaluation metrics to a log.
A list of scores attached to this log.
Any feedback or score attached to this log. Ranges from 0.0 to 1.0.
A list of evaluation metric names which are deployed through Parea and should be executed on this log.
If provided, this output will be used for deployed evaluation metrics instead of the output field.
If provided, with a value between 0.0 and 1.0, this is the likelihood that the evaluation metrics will be applied to this log.
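The score and evaluation fields above might be attached as follows. Field names (`scores`, `target`, `apply_eval_frac`) are assumptions for illustration, and the sampling helper is a sketch of how a likelihood between 0.0 and 1.0 could gate evaluation.

```python
import random

# Illustrative score/evaluation fields; key names are assumptions.
log_update = {
    "scores": [{"name": "helpfulness", "score": 0.9}],  # scores range 0.0-1.0
    "target": "The ideal answer for these inputs.",      # gold-standard response
    "apply_eval_frac": 0.5,  # evaluate roughly half of all logs
}

def should_evaluate(frac: float) -> bool:
    """Sketch: apply deployed evaluation metrics with the given likelihood."""
    return random.random() < frac
```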
LLM-call specific fields
Any fields that are specific to an LLM completion request are listed below.
LLM completion request configuration if this was an LLM request.
This contains the model, provider, messages, model_params, functions, and function_call.
Time in seconds to generate the first token (only applies to streamed LLM responses).
Token count of input prompt if it was an LLM request.
Token count of output completion if it was an LLM request.
Token count of input prompt + output completion if it was an LLM request.
Cost of this completion in USD if it was an LLM request.
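The LLM-call specific fields above could be sketched like this. The configuration keys (model, provider, messages, model_params) come from the field list above; the outer key names and the sample values are assumptions for illustration.

```python
# Illustrative LLM completion request configuration.
configuration = {
    "model": "gpt-4o",
    "provider": "openai",
    "messages": [{"role": "user", "content": "Say hi."}],
    "model_params": {"temperature": 0.0},
}

# Illustrative LLM-specific log fields; outer key names are assumptions.
llm_fields = {
    "configuration": configuration,
    "time_to_first_token": 0.35,  # seconds; streamed responses only
    "input_tokens": 12,           # token count of the input prompt
    "output_tokens": 5,           # token count of the output completion
    "total_tokens": 17,           # input + output
    "cost": 0.00021,              # USD
}
```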