Get Experiment Trace Logs
Fetches all trace logs for an experiment.
Args: experiment_uuid (str): UUID of the experiment.
Returns: list[TraceLogTreeSchema]: List of trace logs for the experiment.
Authorizations
Path Parameters
Body
Field to filter on. To filter by a score, use the format 'score:{score_name}'. To filter by an annotation, use the format 'annotation:{annotation_type}:{annotation_id}'.
Key to filter on when filtering a map field.
Filter operator. One of: equals, not_equals, like, greater_than_or_equal, less_than_or_equal, greater_than, less_than, is_null, exists, in, between.
Filter value
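For orientation, a minimal request sketch follows. This page documents only field descriptions, so the request path, the x-api-key auth header, and the JSON key names for the filter fields are assumptions for illustration; check them against your Parea API reference before use.

# Minimal sketch of calling this endpoint with a score filter.
# ASSUMPTIONS (not confirmed by this page): the request path and the
# "x-api-key" header are illustrative placeholders, as are the
# filter_* key names.
import requests

PAREA_API_KEY = "pk-..."  # your Parea API key
experiment_uuid = "e3267953-a16f-47f5-b37e-622dbb29d730"  # UUID of the experiment

# Body: a filter selecting logs whose score "accuracy" is >= 0.8.
# The 'score:{score_name}' field format and the operator names come
# from the Body section above.
payload = {
    "filter_field": "score:accuracy",           # hypothetical key for "Field to filter on"
    "filter_operator": "greater_than_or_equal", # one of the operators listed above
    "filter_value": "0.8",                      # hypothetical key for "Filter value"
}

resp = requests.post(
    f"https://parea-api.example.com/experiment/{experiment_uuid}/trace_logs",  # assumed path
    headers={"x-api-key": PAREA_API_KEY},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
trace_logs = resp.json()  # list[TraceLogTreeSchema]
print(f"Fetched {len(trace_logs)} root trace logs")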
Response
Start timestamp
UUID of the trace log. Ex: e3267953-a16f-47f5-b37e-622dbb29d730
Any annotations on the log which were collected on the Parea frontend. Maps annotation criterion ID to a dictionary mapping user_id (Parea user ID) to annotation.
Annotation creation timestamp
Annotation criterion ID
UUID of associated trace
Parea user ID
Annotation name
Annotation ID
Annotation score
User email address
Annotation value
If specified, evals given with evaluation_metric_names will be applied to this log with this fraction.
Whether the cache was hit for this log.
UUIDs of any children.
IDs of any children. Will be automatically populated.
Children logs. Each child is itself a trace log with the same fields as this response item; the TraceLogTreeSchema is recursive (see the traversal sketch after the field list below).
Any comments on the log which were collected on the Parea frontend.
Comment
Comment creation timestamp
Comment ID
Trace ID
User ID
User email address
If this log was an LLM call, this will contain the configuration used for the call.
Controls how the model responds to function calls. "auto" means the model can pick between responding to the end-user or calling a function. To specify a particular function by name, use a dictionary with key="name" and value="my_function"; this will force the model to call that function.
A list of functions the model may generate JSON inputs for. Assumes every item in the list has the keys 'name', 'description', and 'parameters'.
Messages to LLM
Model name
Parameters such as temperature. Contains:
Frequency penalty
Max. number of completion tokens
Model name
Presence penalty
Response format. See OpenAI docs for definition.
Used for Mistral.
Temperature
Top p
Provider name
If this was an LLM call, this will contain the cost of the call.
Optionally, provide the ID of the deployed prompt used in this log.
Depth/level of nesting of the span in the overall trace. The root-level trace is 0, and depth increments by 1 per level of nesting.
End timestamp of the span.
Unique identifier for an end-user.
If status = error, this should contain any additional information such as the stacktrace.
Deprecated
Names of evaluation metrics deployed on Parea which should be applied to this log.
The execution number of the span in the trace. It starts at 0 and increments by 1 with every span.
If given, will be used to associate this log with an experiment.
Any captured (user) feedback on this log.
Deprecated
Any images associated with the trace.
URL of image
Caption of image
If this was an LLM call, this will contain the number of tokens in the input.
Key-value pair inputs of this trace. Note: there is a special field to capture messages in LLM calls; for LLM calls you can still use this field to track the key-value pairs of prompt templates.
Latency of this log in seconds.
If specified, this log and its entire associated trace will be logged with this probability. Must be between 0 and 1 (inclusive). Defaults to 1.0 (i.e., keeping all logs).
0 ≤ x ≤ 1
Any additional key-value pairs which provide context or are useful for filtering.
Organization ID associated with the Parea API key. Will be automatically determined from the API key.
Response of this step/log/function. If response isn't a string, it needs to be serialized to a string.
If provided, will be used as output for any specified evaluation metric.
If this was an LLM call, this will contain the number of tokens in the output.
If given, the current trace will be a child of this trace. If the current trace is not a child, parent_trace_id should be equal to trace_id.
Name of the project with which the trace/log should be associated. Must be provided if project_uuid is not provided.
UUID of the project with which this log is associated. Will be automatically filled in by SDKs.
This is the UUID of the root trace/span of this trace. If the current trace is the root trace, root_trace_id must be equal to trace_id.
Any scores/eval results associated with this log.
Name of the score / evaluation
Value of the score
Will be automatically populated if this score was from a deployed evaluation metric.
Reason for this score
Unique identifier for a session. Can be used to associate multiple logs, e.g. in chat applications.
Whether the trace was a success or error.
List of tags which provide additional context or are useful for filtering.
The target or "gold standard" response for the inputs of this log.
If this was an LLM call, this will contain the time taken to generate the first token.
If this was an LLM call, this will contain the total number of tokens in the input and output.
The name of this span.
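Because children logs nest the same schema recursively, consumers typically walk the tree. Below is a minimal traversal sketch, assuming the response is plain JSON and that children live under a key like "children_logs"; all key names here are assumptions inferred from the field descriptions above, so verify them against an actual response.

# Minimal sketch of flattening the recursive TraceLogTreeSchema tree.
# ASSUMPTION: "children_logs", "cost", "latency", and "trace_name" are
# hypothetical JSON key names inferred from the descriptions above.
from typing import Any, Iterator

def iter_spans(log: dict[str, Any]) -> Iterator[dict[str, Any]]:
    """Yield a trace log and all of its descendants, depth-first."""
    yield log
    for child in log.get("children_logs") or []:
        yield from iter_spans(child)

# Example: report per-span latency and LLM cost across each trace tree.
for root in trace_logs:  # trace_logs from the request sketch above
    for span in iter_spans(root):
        cost = span.get("cost") or 0.0  # only set for LLM calls, per the field list
        latency = span.get("latency")   # latency of the span in seconds
        print(span.get("trace_name"), latency, cost)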