POST /api/parea/v1/trace_log
curl --location 'https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/trace_log' \
--header 'Content-Type: application/json' \
--header 'x-api-key: <<PAREA_API_KEY>>' \
--data '{
    "trace_id": "<<UUID>>",
    "root_trace_id": "<<SAME_UUID>>",
    "parent_trace_id": "<<SAME_UUID>>",
    "depth": 0,
    "execution_order": 0,
    "trace_name":"test",
    "project_name":"default",
    "inputs": {
        "x": "Golang",
        "y": "Fiber"
    },
    "start_timestamp":"2024-05-30 13:48:34",
    "end_timestamp":"2024-05-30 13:48:35",
    "status": "success",
    "output": "Some logged output",
    "metadata": {"purpose": "testing"}
}'
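The same request can be sketched in Python using only the standard library (the backend URL and `x-api-key` header match the curl example above; the API key remains a placeholder you must supply):

```python
import json
import urllib.request
import uuid
from datetime import datetime, timezone

URL = "https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/trace_log"


def build_trace_log() -> dict:
    """Build a minimal root-level trace log payload."""
    trace_id = str(uuid.uuid4())
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    return {
        "trace_id": trace_id,
        # For a root log, root/parent trace ids point at the log itself.
        "root_trace_id": trace_id,
        "parent_trace_id": trace_id,
        "depth": 0,
        "execution_order": 0,
        "trace_name": "test",
        "project_name": "default",
        "inputs": {"x": "Golang", "y": "Fiber"},
        "start_timestamp": now,
        "end_timestamp": now,
        "status": "success",
        "output": "Some logged output",
        "metadata": {"purpose": "testing"},
    }


def post_trace_log(api_key: str):
    """POST the payload with the required headers; returns the HTTP response."""
    req = urllib.request.Request(
        URL,
        data=json.dumps(build_trace_log()).encode(),
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )
    return urllib.request.urlopen(req)
```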

Body

trace_id
string
required

Unique UUID (version 4) identifier of this trace/log.

Ex: e3267953-a16f-47f5-b37e-622dbb29d730

root_trace_id
string
required

Unique UUID (version 4) identifier of the root trace/log (needed to create hierarchical traces).

parent_trace_id
string
required

Unique UUID (version 4) identifier of the parent trace/log (needed to create hierarchical traces).

children
list[string]
default: []

List of UUID (version 4) identifiers of the children logs (needed to create hierarchical traces). Alternatively, you can set fill_children to true to fill this field automatically.

fill_children
boolean
default: false

Will automatically fill the children field with the children of this log.

depth
integer
required

Depth of this log in the trace hierarchy. Note that the root log has depth 0, its children have depth 1, and so on.

execution_order
integer
required

Order of execution of this log in the trace hierarchy. Note that the root log has execution order 0, the first child of the root log has execution order 1, and so on.
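To make the hierarchy fields concrete, here is an illustrative sketch of a two-level trace: one root log with a single child. Field names follow the reference above; the trace names and project name are made up:

```python
import uuid

root_id = str(uuid.uuid4())

# Root log: root_trace_id and parent_trace_id point at the log itself,
# depth 0, execution order 0.
root = {
    "trace_id": root_id,
    "root_trace_id": root_id,
    "parent_trace_id": root_id,
    "depth": 0,
    "execution_order": 0,
    "trace_name": "pipeline",      # illustrative name
    "project_name": "default",
}

# First child: parented to the root, one level deeper, next in execution order.
child = {
    "trace_id": str(uuid.uuid4()),
    "root_trace_id": root_id,
    "parent_trace_id": root_id,
    "depth": 1,
    "execution_order": 1,
    "trace_name": "step-1",        # illustrative name
    "project_name": "default",
}

# Link the child explicitly; alternatively, set fill_children to true on the
# root log and omit this list.
root["children"] = [child["trace_id"]]
```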

trace_name
string
required

Name to identify a trace. Will be visible in logs for filtering.

inputs
dict

Key-value pairs of the function input. Note that there is a special field to capture messages in LLM calls; for LLM calls you can still use inputs to track the key-value pairs of prompt templates.

output
string

Response of this step/log/function. If response isn’t a string, it needs to be serialized to a string.
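Since output must be a string, non-string responses need serializing first. One common approach (illustrative, not mandated by the API) is json.dumps:

```python
import json

# A non-string function result that should be logged as `output`.
response = {"answer": 42, "sources": ["a", "b"]}

# Serialize to a string before placing it in the `output` field.
output = response if isinstance(response, str) else json.dumps(response)
```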

project_name
string
required

Name of the project this log is associated with.

latency
float

Time in seconds it took to execute the LLM/function call of this log.

start_timestamp
string

Datetime from a POSIX timestamp for when the request started.

Ex. 2023-07-23 13:48:34

end_timestamp
string

Datetime from a POSIX timestamp for when the request completed.

Ex. 2023-07-23 13:48:34
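A POSIX timestamp can be converted into the datetime format shown above with strftime (a minimal sketch; the example timestamp is chosen to match the documented value):

```python
from datetime import datetime, timezone

# POSIX timestamp, e.g. from time.time(); this one corresponds to the
# example datetime in the field description above (UTC).
posix_ts = 1690120114

stamp = datetime.fromtimestamp(posix_ts, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
# stamp == "2023-07-23 13:48:34"
```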

experiment_uuid
string

Unique UUID (version 4) identifier. If provided, this log will be associated with the experiment.

images
list

Any images which are associated with this log and should be displayed in the logs.

metadata
dict

Any additional metadata to record.

target
string

The target or “gold standard” response for the inputs of this log.

tags
list

List of string tags to associate with this log.

end_user_identifier
string

Unique identifier for an end-user. Will be visible in logs for filtering.

session_id
string

Unique identifier for a session. Can be used to associate multiple logs, e.g. in chat applications. Will be visible in logs for filtering.

Score and evaluation metric fields

The following are the fields that can be used to attach scores and evaluation metrics to a log.

scores
list[object]

A list of scores attached to this log.

feedback_score
float

Any feedback or score attached to this log. Ranges from 0.0 to 1.0.

evaluation_metric_names
list[string]

A list of evaluation metric names which are deployed through Parea and should be executed on this log.

output_for_eval_metrics
string

If provided, this output will be used for deployed evaluation metrics instead of the output field.

apply_eval_frac
float

If provided (a value between 0.0 and 1.0), this is the probability that the evaluation metrics will be applied to this log.
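A hedged sketch of attaching the scalar evaluation fields to a payload (the shape of objects in the scores list is not specified in this reference, so it is omitted; the metric names are purely illustrative):

```python
eval_fields = {
    "feedback_score": 0.8,                                    # must lie in [0.0, 1.0]
    "evaluation_metric_names": ["relevance", "conciseness"],  # illustrative names
    "apply_eval_frac": 0.25,  # apply eval metrics to roughly 25% of logs
}


def eval_fields_valid(p: dict) -> bool:
    """Check the documented ranges for the scalar eval fields."""
    return (
        0.0 <= p["feedback_score"] <= 1.0
        and 0.0 <= p["apply_eval_frac"] <= 1.0
    )
```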

LLM-call specific fields

Any fields that are specific to an LLM completion request are listed below.

configuration
object

LLM completion request configuration if this was an LLM request. This contains the model, provider, messages, model_params, functions, and function_call.

time_to_first_token
float

Time in seconds to generate the first token (only applies to streamed LLM responses).

input_tokens
integer

Token count of the input prompt if this was an LLM request.

output_tokens
integer

Token count of the output completion if this was an LLM request.

total_tokens
integer

Token count of the input prompt + output completion if this was an LLM request.

cost
float

Cost of this completion in USD if this was an LLM request.
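An illustrative fragment showing how the LLM-specific fields fit into a log payload. The configuration keys follow the field description above; the model name, token counts, latency, and cost are made-up values:

```python
llm_fields = {
    "configuration": {
        "model": "gpt-4o",   # hypothetical model name
        "provider": "openai",
        "messages": [{"role": "user", "content": "Hello"}],
        "model_params": {"temperature": 0.0},
    },
    "time_to_first_token": 0.21,  # seconds; only for streamed responses
    "latency": 0.9,               # seconds for the whole call
    "input_tokens": 8,
    "output_tokens": 12,
    "cost": 0.00031,              # USD
}

# total_tokens is simply the sum of prompt and completion token counts.
llm_fields["total_tokens"] = llm_fields["input_tokens"] + llm_fields["output_tokens"]
```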
