Get Experiment Trace Logs
Fetches all trace logs for an experiment.
Args: experiment_uuid (str): UUID of the experiment.
Returns: list[TraceLogTreeSchema]: List of trace logs for the experiment.
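For orientation, here is a minimal request sketch. The base URL, route, header name, and body shape below are assumptions for illustration only; the Authorizations, Path Parameters, and Body sections below are authoritative.

```python
import requests

# All names below are assumptions; consult the Authorizations,
# Path Parameters, and Body sections for the real values.
BASE_URL = "https://YOUR-PAREA-API-HOST/api"  # assumed; replace with the actual host
API_KEY = "your-parea-api-key"                # assumed API-key auth scheme

experiment_uuid = "e3267953-a16f-47f5-b37e-622dbb29d730"

resp = requests.post(
    f"{BASE_URL}/experiment/{experiment_uuid}/trace_logs",  # assumed route
    headers={"x-api-key": API_KEY},                         # assumed header name
    json={"filters": []},                                   # assumed body key
)
resp.raise_for_status()
trace_logs = resp.json()  # list[TraceLogTreeSchema]
print(f"Fetched {len(trace_logs)} root trace logs")
```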
Authorizations
Path Parameters
Body
Field to filter on. To filter by a score, use the format 'score:{score_name}'. To filter by an annotation, use the format 'annotation:{annotation_type}:{annotation_id}'.
Key to filter by when filtering on a map field
Filter operator. One of: equals, not_equals, like, greater_than_or_equal, less_than_or_equal, greater_than, less_than, is_null, exists, in, between
Filter value
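As a hedged illustration of how these filter fields fit together (the JSON key names below are assumptions inferred from the descriptions above, not confirmed by this page):

```python
# Key names ("field", "key", "operator", "value") are assumptions.
filters = [
    # Filter by a score, using the 'score:{score_name}' format
    {"field": "score:accuracy", "operator": "greater_than_or_equal", "value": "0.8"},
    # Filter a map field by key; 'in' takes a list of candidate values
    {"field": "metadata", "key": "env", "operator": "in", "value": ["prod", "staging"]},
    # 'between' takes a two-element range
    {"field": "latency", "operator": "between", "value": [0.5, 2.0]},
]
```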
Response
If this log was an LLM call, this will contain the configuration used for the call.
Model name
Provider name
Parameters such as temperature.
Model name
Temperature
Top p
Frequency penalty
Presence penalty
Max. number of completion tokens
Response format. See OpenAI docs for definition
Used for Mistral.
A list of functions the model may generate JSON inputs for. Assumes every item in the list has keys 'name', 'description', and 'parameters'.
Controls how the model responds to function calls. "auto" means the model can pick between responding to the end-user or calling a function. To specify a particular function, pass a dictionary with key "name" and value "my_function"; this forces the model to call that function.
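To make these two fields concrete, a sketch of plausible values (the shapes follow the descriptions above; the weather function itself is hypothetical):

```python
# Every item must carry 'name', 'description', and 'parameters'.
functions = [
    {
        "name": "get_weather",  # hypothetical example function
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

function_call = "auto"                   # let the model decide
function_call = {"name": "get_weather"}  # or force this specific function
```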
Key-value pair inputs of this trace. Note that there is a dedicated field to capture messages in LLM calls; for LLM calls, you can still use this field to track the key-value pairs of prompt templates.
Response of this step/log/function. If the response isn't a string, it must be serialized to a string.
The target or "gold standard" response for the inputs of this log.
Latency of this log in seconds.
If this was an LLM call, this will contain the time taken to generate the first token.
If this was an LLM call, this will contain the number of tokens in the input.
If this was an LLM call, this will contain the number of tokens in the output.
If this was an LLM call, this will contain the total number of tokens in the input and output.
If this was an LLM call, this will contain the cost of the call.
UUID of the trace log. Ex: e3267953-a16f-47f5-b37e-622dbb29d730
Start timestamp
If given, the current trace will be a child of this trace. If the current trace is not a child, parent_trace_id should be equal to trace_id.
The UUID of the root trace/span of this trace. If the current trace is the root trace, root_trace_id must be equal to trace_id.
Name of the project with which the trace/log should be associated. Must be provided if project_uuid is not provided.
Whether the trace was a success or an error.
If status=error, this should contain any additional information, such as the stack trace.
If provided, will be used as output for any specified evaluation metric.
Names of evaluation metrics deployed on Parea which should be applied to this log.
Any scores/eval results associated with this log.
Any captured (user) feedback on this log
If specified, evals given with evaluation_metric_names will be applied to this log with this fraction.
If specified, this log and its entire associated trace will be logged with this probability. Must be between 0 and 1 (inclusive). Defaults to 1.0 (i.e., keeping all logs).
0 < x < 1
Optionally, provide the ID of the used deployed prompt in this log.
If the cache was hit for this log.
The name of this span.
UUIDs of any children.
IDs of any children. Will be automatically populated.
End timestamp of span.
Unique identifier for an end-user.
Unique identifier for a session. Can be used to associate multiple logs, e.g. in chat applications.
Any additional key-value pairs which provide context or are useful for filtering.
List of tags which provide additional context or are useful for filtering.
If given, will be used to associate this log with an experiment.
Any images associated with trace.
URL of image
Caption of image
Any comments on log which were collected on Parea frontend.
Any annotations on the log which were collected on the Parea frontend. Maps annotation criterion ID to a dictionary mapping user_id (Parea user ID) to the annotation; see the sketch after the annotation fields below.
UUID of associated trace
Annotation criterion ID
Annotation creation timestamp
Parea user ID
Annotation score
Annotation value
Annotation ID
User email address
Annotation name
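Assembled from the annotation fields above, the annotations mapping plausibly looks like this sketch (key names are inferred from the field descriptions, not a verbatim schema):

```python
# Shape: {annotation criterion ID: {Parea user ID: annotation}}.
# All key names below are assumptions inferred from the field list.
annotations = {
    "criterion-uuid-1": {
        "user-uuid-1": {
            "trace_id": "e3267953-a16f-47f5-b37e-622dbb29d730",
            "annotation_criterion_id": "criterion-uuid-1",
            "created_at": "2024-01-01T00:00:00Z",
            "user_id": "user-uuid-1",
            "score": 1.0,
            "value": "helpful",
            "id": "annotation-uuid-1",
            "user_email_address": "reviewer@example.com",
            "name": "helpfulness",
        }
    }
}
```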
Nesting depth of the span within the overall trace. The root-level trace has depth 0, and each level of nesting increments it by 1.
The execution order of the span within the trace. It starts at 0 and increments by 1 with every span.
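To illustrate how the two numbers differ, a small sketch that assigns both over a trace tree (the 'children_logs' key name is an assumption):

```python
from itertools import count

def assign_depth_and_order(span: dict, depth: int = 0, counter=None) -> None:
    # depth: nesting level, root span = 0, +1 per level of nesting.
    # execution_order: single 0-based counter over every span in the trace.
    if counter is None:
        counter = count()
    span["depth"] = depth
    span["execution_order"] = next(counter)
    for child in span.get("children_logs", []):  # key name assumed
        assign_depth_and_order(child, depth + 1, counter)
```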
Deprecated
Deprecated
UUID of the project with which this log is associated. Will be automatically filled in by the SDKs.
Organization ID associated with the Parea API key. Will be automatically determined from the API key.
Children logs
If this log was an LLM call, this will contain the configuration used for the call.
Model name
Provider name
Parameters such as temperature.
Model name
Temperature
Top p
Frequency penalty
Presence penalty
Max. number of completion tokens
Response format. See OpenAI docs for definition
Used for Mistral.
A list of functions the model may generate JSON inputs for. Assumes every item in the list has keys 'name', 'description', and 'parameters'.
Controls how the model responds to function calls. "auto" means the model can pick between responding to the end-user or calling a function. To specify a particular function, pass a dictionary with key "name" and value "my_function"; this forces the model to call that function.
Key-value pair inputs of this trace. Note that there is a dedicated field to capture messages in LLM calls; for LLM calls, you can still use this field to track the key-value pairs of prompt templates.
Response of this step/log/function. If the response isn't a string, it must be serialized to a string.
The target or "gold standard" response for the inputs of this log.
Latency of this log in seconds.
If this was an LLM call, this will contain the time taken to generate the first token.
If this was an LLM call, this will contain the number of tokens in the input.
If this was an LLM call, this will contain the number of tokens in the output.
If this was an LLM call, this will contain the total number of tokens in the input and output.
If this was an LLM call, this will contain the cost of the call.
UUID of the trace log. Ex: e3267953-a16f-47f5-b37e-622dbb29d730
Start timestamp
If given, the current trace will be a child of this trace. If the current trace is not a child, parent_trace_id should be equal to trace_id.
The UUID of the root trace/span of this trace. If the current trace is the root trace, root_trace_id must be equal to trace_id.
Name of the project with which the trace/log should be associated. Must be provided if project_uuid is not provided.
Whether the trace was a success or an error.
If status=error, this should contain any additional information, such as the stack trace.
If provided, will be used as output for any specified evaluation metric.
Names of evaluation metrics deployed on Parea which should be applied to this log.
Any scores/eval results associated with this log.
Name of the score / evaluation
Value of the score
Will be automatically populated if this score was from a deployed evaluation metric.
Reason for this score
Any captured (user) feedback on this log
If specified, evals given with evaluation_metric_names will be applied to this log with this fraction.
If specified, this log and its entire associated trace will be logged with this probability. Must be between 0 and 1 (inclusive). Defaults to 1.0 (i.e., keeping all logs).
0 < x < 1
Optionally, provide the ID of the used deployed prompt in this log.
If the cache was hit for this log.
The name of this span.
UUIDs of any children.
IDs of any children. Will be automatically populated.
End timestamp of span.
Unique identifier for an end-user.
Unique identifier for a session. Can be used to associate multiple logs, e.g. in chat applications.
Any additional key-value pairs which provide context or are useful for filtering.
List of tags which provide additional context or are useful for filtering.
If given, will be used to associate this log with an experiment.
Any images associated with trace.
URL of image
Caption of image
Any comments on log which were collected on Parea frontend.
Trace ID
Comment
Comment ID
User ID
User email address
Comment creation timestamp
Any annotations on the log which were collected on the Parea frontend. Maps annotation criterion ID to a dictionary mapping user_id (Parea user ID) to the annotation.
UUID of associated trace
Annotation criterion ID
Annotation creation timestamp
Parea user ID
Annotation score
Annotation value
Annotation ID
User email address
Annotation name
Nesting depth of the span within the overall trace. The root-level trace has depth 0, and each level of nesting increments it by 1.
The execution order of the span within the trace. It starts at 0 and increments by 1 with every span.
Deprecated
Deprecated
UUID of the project with which this log is associated. Will be automatically filled in by the SDKs.
Organization ID associated with the Parea API key. Will be automatically determined from the API key.
Children logs
If this log was an LLM call, this will contain the configuration used for the call.
Model name
Provider name
Parameters such as temperature.
Model name
Temperature
Top p
Frequency penalty
Presence penalty
Max. number of completion tokens
Response format. See OpenAI docs for definition
Used for Mistral.
A list of functions the model may generate JSON inputs for. Assumes every item in the list has keys 'name', 'description', and 'parameters'.
Controls how the model responds to function calls. "auto" means the model can pick between responding to the end-user or calling a function. To specify a particular function, pass a dictionary with key "name" and value "my_function"; this forces the model to call that function.
Key-value pair inputs of this trace. Note that there is a dedicated field to capture messages in LLM calls; for LLM calls, you can still use this field to track the key-value pairs of prompt templates.
Response of this step/log/function. If the response isn't a string, it must be serialized to a string.
The target or "gold standard" response for the inputs of this log.
Latency of this log in seconds.
If this was an LLM call, this will contain the time taken to generate the first token.
If this was an LLM call, this will contain the number of tokens in the input.
If this was an LLM call, this will contain the number of tokens in the output.
If this was an LLM call, this will contain the total number of tokens in the input and output.
If this was an LLM call, this will contain the cost of the call.
UUID of the trace log. Ex: e3267953-a16f-47f5-b37e-622dbb29d730
Start timestamp
If given, the current trace will be a child of this trace. If the current trace is not a child, parent_trace_id should be equal to trace_id.
The UUID of the root trace/span of this trace. If the current trace is the root trace, root_trace_id must be equal to trace_id.
Name of the project with which the trace/log should be associated. Must be provided if project_uuid is not provided.
Whether the trace was a success or an error.
If status=error, this should contain any additional information, such as the stack trace.
If provided, will be used as output for any specified evaluation metric.
Names of evaluation metrics deployed on Parea which should be applied to this log.
Any scores/eval results associated with this log.
Name of the score / evaluation
Value of the score
Will be automatically populated if this score was from a deployed evaluation metric.
Reason for this score
Any captured (user) feedback on this log
If specified, evals given with evaluation_metric_names will be applied to this log with this fraction.
If specified, this log and its entire associated trace will be logged with this probability. Must be between 0 and 1 (inclusive). Defaults to 1.0 (i.e., keeping all logs).
0 < x < 1
Optionally, provide the ID of the used deployed prompt in this log.
If the cache was hit for this log.
The name of this span.
UUIDs of any children.
IDs of any children. Will be automatically populated.
End timestamp of span.
Unique identifier for an end-user.
Unique identifier for a session. Can be used to associate multiple logs, e.g. in chat applications.
Any additional key-value pairs which provide context or are useful for filtering.
List of tags which provide additional context or are useful for filtering.
If given, will be used to associate this log with an experiment.
Any images associated with trace.
URL of image
Caption of image
Any comments on log which were collected on Parea frontend.
Trace ID
Comment
Comment ID
User ID
User email address
Comment creation timestamp
Nesting depth of the span within the overall trace. The root-level trace has depth 0, and each level of nesting increments it by 1.
The execution order of the span within the trace. It starts at 0 and increments by 1 with every span.
Deprecated
Deprecated
UUID of the project with which this log is associated. Will be automatically filled in by the SDKs.
Organization ID associated with the Parea API key. Will be automatically determined from the API key.
Children logs
If this log was an LLM call, this will contain the configuration used for the call.
Model name
Provider name
Parameters such as temperature.
Messages to LLM
A list of functions the model may generate JSON inputs for. Assumes every item in the list has keys 'name', 'description', and 'parameters'.
Controls how the model responds to function calls. "auto" means the model can pick between responding to the end-user or calling a function. To specify a particular function, pass a dictionary with key "name" and value "my_function"; this forces the model to call that function.
Key-value pair inputs of this trace. Note that there is a dedicated field to capture messages in LLM calls; for LLM calls, you can still use this field to track the key-value pairs of prompt templates.
Response of this step/log/function. If the response isn't a string, it must be serialized to a string.
The target or "gold standard" response for the inputs of this log.
Latency of this log in seconds.
If this was an LLM call, this will contain the time taken to generate the first token.
If this was an LLM call, this will contain the number of tokens in the input.
If this was an LLM call, this will contain the number of tokens in the output.
If this was an LLM call, this will contain the total number of tokens in the input and output.
If this was an LLM call, this will contain the cost of the call.
UUID of the trace log. Ex: e3267953-a16f-47f5-b37e-622dbb29d730
Start timestamp
If given, the current trace will be a child of this trace. If the current trace is not a child, parent_trace_id should be equal to trace_id.
The UUID of the root trace/span of this trace. If the current trace is the root trace, root_trace_id must be equal to trace_id.
Name of the project with which the trace/log should be associated. Must be provided if project_uuid is not provided.
Whether the trace was a success or an error.
If status=error, this should contain any additional information, such as the stack trace.
If provided, will be used as output for any specified evaluation metric.
Names of evaluation metrics deployed on Parea which should be applied to this log.
Any scores/eval results associated with this log.
Name of the score / evaluation
Value of the score
Will be automatically populated if this score was from a deployed evaluation metric.
Reason for this score
Any captured (user) feedback on this log
If specified, evals given with evaluation_metric_names will be applied to this log with this fraction.
If specified, this log and its entire associated trace will be logged with this probability. Must be between 0 and 1 (inclusive). Defaults to 1.0 (i.e., keeping all logs).
0 < x < 1
Optionally, provide the ID of the used deployed prompt in this log.
If the cache was hit for this log.
The name of this span.
UUIDs of any children.
IDs of any children. Will be automatically populated.
End timestamp of span.
Unique identifier for an end-user.
Unique identifier for a session. Can be used to associate multiple logs, e.g. in chat applications.
Any additional key-value pairs which provide context or are useful for filtering.
List of tags which provide additional context or are useful for filtering.
If given, will be used to associate this log with an experiment.
Any images associated with trace.
URL of image
Caption of image
Any comments on log which were collected on Parea frontend.
Trace ID
Comment
Comment ID
User ID
User email address
Comment creation timestamp
Any annotations on the log which were collected on the Parea frontend. Maps annotation criterion ID to a dictionary mapping user_id (Parea user ID) to the annotation.
Nesting depth of the span within the overall trace. The root-level trace has depth 0, and each level of nesting increments it by 1.
The execution order of the span within the trace. It starts at 0 and increments by 1 with every span.
Deprecated
Deprecated
UUID of the project with which this log is associated. Will be automatically filled in by the SDKs.
Organization ID associated with the Parea API key. Will be automatically determined from the API key.
Children logs
If this log was an LLM call, this will contain the configuration used for the call.
Key-value pair inputs of this trace. Note that there is a dedicated field to capture messages in LLM calls; for LLM calls, you can still use this field to track the key-value pairs of prompt templates.
Response of this step/log/function. If the response isn't a string, it must be serialized to a string.
The target or "gold standard" response for the inputs of this log.
Latency of this log in seconds.
If this was an LLM call, this will contain the time taken to generate the first token.
If this was an LLM call, this will contain the number of tokens in the input.
If this was an LLM call, this will contain the number of tokens in the output.
If this was an LLM call, this will contain the total number of tokens in the input and output.
If this was an LLM call, this will contain the cost of the call.
UUID of the trace log. Ex: e3267953-a16f-47f5-b37e-622dbb29d730
Start timestamp
If given, the current trace will be a child of this trace. If the current trace is not a child, parent_trace_id should be equal to trace_id.
The UUID of the root trace/span of this trace. If the current trace is the root trace, root_trace_id must be equal to trace_id.
Name of the project with which the trace/log should be associated. Must be provided if project_uuid is not provided.
Whether the trace was a success or an error.
If status=error, this should contain any additional information, such as the stack trace.
If provided, will be used as output for any specified evaluation metric.
Names of evaluation metrics deployed on Parea which should be applied to this log.
Any scores/eval results associated with this log.
Any captured (user) feedback on this log
If specified, evals given with evaluation_metric_names will be applied to this log with this fraction.
If specified, this log and its entire associated trace will be logged with this probability. Must be between 0 and 1 (inclusive). Defaults to 1.0 (i.e., keeping all logs).
0 < x < 1
Optionally, provide the ID of the used deployed prompt in this log.
If the cache was hit for this log.
The name of this span.
UUIDs of any children.
IDs of any children. Will be automatically populated.
End timestamp of span.
Unique identifier for an end-user.
Unique identifier for a session. Can be used to associate multiple logs, e.g. in chat applications.
Any additional key-value pairs which provide context or are useful for filtering.
List of tags which provide additional context or are useful for filtering.
If given, will be used to associate this log with an experiment.
Any images associated with trace.
Any comments on log which were collected on Parea frontend.
Any annotations on the log which were collected on the Parea frontend. Maps annotation criterion ID to a dictionary mapping user_id (Parea user ID) to the annotation.
Nesting depth of the span within the overall trace. The root-level trace has depth 0, and each level of nesting increments it by 1.
The execution order of the span within the trace. It starts at 0 and increments by 1 with every span.
Deprecated
Deprecated
UUID of the project with which this log is associated. Will be automatically filled in by the SDKs.
Organization ID associated with the Parea API key. Will be automatically determined from the API key.
Children logs