Trace your app & log experiments via API
The /completion endpoint automatically takes care of things like caching & retries.
Each log also has a configuration field that you can use to log the LLM configuration.
More details on /trace_log can be found here.
The fields used to build the trace tree are:
- trace_id: The UUID of the current trace log.
- parent_trace_id: The UUID of the parent of the current trace log. If the current trace log is the root, this field will be the same as trace_id.
- root_trace_id: The UUID of the root trace log. If the current trace log is the root, this field will be the same as trace_id.

More details on the trace decorator of the Python SDK can be found here.
General Algorithm
- Use the same root_trace_id for all logs in the trace.
- Increase the depth of a log for every level of nesting.
- Sibling logs use the same parent_trace_id as the new log's parent_trace_id, i.e. the trace_id of their shared parent.
Algorithm Walkthrough
You will use the same UUID for the trace_id, parent_trace_id, and root_trace_id of the first log, also called the root log. This log will associate all the logs in the trace.
When you create the first child log, you will use the trace_id of the root log as the parent_trace_id and root_trace_id.
If the next (3rd) log is:
- a child of the first child log, use the trace_id of the first child as the new log's parent_trace_id and the trace_id of the root log as the new log's root_trace_id
- another child of the root log, use the trace_id of the root log as the new log's parent_trace_id and root_trace_id
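As a minimal sketch of this ID bookkeeping (no API calls; the trace_name values are only illustrative):

```python
import uuid

def new_log(name, parent=None):
    # Every log gets its own trace_id.
    log = {"trace_id": str(uuid.uuid4()), "trace_name": name}
    if parent is None:
        # Root log: parent_trace_id and root_trace_id equal its own trace_id.
        log["parent_trace_id"] = log["trace_id"]
        log["root_trace_id"] = log["trace_id"]
    else:
        # Child log: parent_trace_id is the parent's trace_id,
        # root_trace_id is inherited from the parent.
        log["parent_trace_id"] = parent["trace_id"]
        log["root_trace_id"] = parent["root_trace_id"]
    return log

root = new_log("root")                      # root log
child = new_log("first-child", root)        # first child of the root
grandchild = new_log("nested-call", child)  # child of the first child
sibling = new_log("second-child", root)     # another child of the root
```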
Example
Create root log
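A minimal sketch of posting the root log to the /trace_log endpoint; the base URL, auth header, and all payload fields other than the three IDs are assumptions, so check the API docs for the exact schema:

```python
import uuid
import requests

BASE_URL = "https://YOUR_API_HOST/api"    # assumption: replace with the real API host
HEADERS = {"x-api-key": "YOUR_API_KEY"}   # assumption: replace with your auth header

root_trace_id = str(uuid.uuid4())

root_log = {
    # Root log: all three IDs are the same UUID.
    "trace_id": root_trace_id,
    "parent_trace_id": root_trace_id,
    "root_trace_id": root_trace_id,
    # Illustrative extra fields:
    "trace_name": "qa-chain",
    "inputs": {"question": "What is the capital of France?"},
}

resp = requests.post(f"{BASE_URL}/trace_log", json=root_log, headers=HEADERS)
resp.raise_for_status()
```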
Create LLM log
Note, we create a new trace_id for this log but keep the same root_trace_id and parent_trace_id as the root log.
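A sketch of the LLM child log, assuming the same /trace_log endpoint as above; the configuration contents and output field are illustrative:

```python
import uuid
import requests

BASE_URL = "https://YOUR_API_HOST/api"    # assumption
HEADERS = {"x-api-key": "YOUR_API_KEY"}   # assumption
root_trace_id = "<trace_id of the root log created above>"

llm_log = {
    "trace_id": str(uuid.uuid4()),        # new trace_id for this log
    "parent_trace_id": root_trace_id,     # same as the root log
    "root_trace_id": root_trace_id,       # same as the root log
    # Illustrative: use the configuration field to log the LLM configuration.
    "configuration": {"model": "gpt-4o", "temperature": 0.0},
    "output": "Paris",
}

requests.post(f"{BASE_URL}/trace_log", json=llm_log, headers=HEADERS).raise_for_status()
```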
Update root log (optional)
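A sketch of the optional root-log update, assuming the /trace_log endpoint updates an existing log when it receives the same trace_id again (if your API version exposes a dedicated update route, use that instead):

```python
import requests

BASE_URL = "https://YOUR_API_HOST/api"    # assumption
HEADERS = {"x-api-key": "YOUR_API_KEY"}   # assumption
root_trace_id = "<trace_id of the root log>"

updated_root_log = {
    "trace_id": root_trace_id,
    "parent_trace_id": root_trace_id,
    "root_trace_id": root_trace_id,
    # Illustrative: attach the final output of the chain to the root log.
    "output": "Paris",
}

requests.post(f"{BASE_URL}/trace_log", json=updated_root_log, headers=HEADERS).raise_for_status()
```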
To log an experiment, you need to associate every log with the experiment via the experiment_uuid field.
Get the project UUID
First, we need the project_uuid of the project we want to associate the experiment with. You can get the project_uuid by calling the /project endpoint.
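A sketch of looking up the project_uuid via the /project endpoint; the HTTP method, query parameter, and response field name are assumptions, so see the linked API docs for the exact contract:

```python
import requests

BASE_URL = "https://YOUR_API_HOST/api"    # assumption
HEADERS = {"x-api-key": "YOUR_API_KEY"}   # assumption

# Assumption: the project is looked up by name via a query parameter.
resp = requests.get(f"{BASE_URL}/project", params={"name": "my-project"}, headers=HEADERS)
resp.raise_for_status()
project_uuid = resp.json()["uuid"]        # assumption: response field name
```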
The full API docs can be found here.
Create experiment
Using the project_uuid, we can create an experiment to get the experiment_uuid.
Note, the run name must be unique for every experiment in the project and only contain alphanumeric characters, dashes, and underscores.
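A sketch of creating the experiment, assuming a POST to an /experiment endpoint that takes the project_uuid and a run name and returns the experiment_uuid; the endpoint path and field names are assumptions:

```python
import requests

BASE_URL = "https://YOUR_API_HOST/api"    # assumption
HEADERS = {"x-api-key": "YOUR_API_KEY"}   # assumption
project_uuid = "<project_uuid from the /project call>"

payload = {
    "project_uuid": project_uuid,
    # Run names must be unique per project and may only contain
    # alphanumeric characters, dashes, and underscores.
    "run_name": "baseline-run-1",
    "name": "baseline-experiment",        # assumption: experiment display name
}

resp = requests.post(f"{BASE_URL}/experiment", json=payload, headers=HEADERS)
resp.raise_for_status()
experiment_uuid = resp.json()["uuid"]     # assumption: response field name
```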
The full API docs can be found here.
Log experiment logs
Now we log the trace as before, but add the experiment_uuid to every log to associate it with the experiment.
If you want to report any scores of a particular step, you can do so by adding a scores field.
Note, any scores of child logs will be automatically propagated up to the parent & root logs.
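A sketch of an experiment log: the same /trace_log payload as before, plus the experiment_uuid and an optional scores field; the shape of each score object is an assumption:

```python
import uuid
import requests

BASE_URL = "https://YOUR_API_HOST/api"    # assumption
HEADERS = {"x-api-key": "YOUR_API_KEY"}   # assumption
experiment_uuid = "<experiment_uuid from the created experiment>"

trace_id = str(uuid.uuid4())
experiment_log = {
    "trace_id": trace_id,
    "parent_trace_id": trace_id,          # single-log trace: this log is its own root
    "root_trace_id": trace_id,
    "experiment_uuid": experiment_uuid,   # associates the log with the experiment
    # Optional per-step scores; child scores are propagated up to parent & root logs.
    "scores": [{"name": "accuracy", "score": 1.0}],
    "inputs": {"question": "What is the capital of France?"},
    "output": "Paris",
}

requests.post(f"{BASE_URL}/trace_log", json=experiment_log, headers=HEADERS).raise_for_status()
```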
The full API docs can be found here.
Finish experiment
Finally, call the /experiment/{experiment_uuid}/finished endpoint to automatically calculate the average statistics of all logged scores as well as cost, latency, token usage, etc.
You can optionally log any dataset-level metrics such as balanced accuracy, Pearson correlations, etc.
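A sketch of finishing the experiment via the /experiment/{experiment_uuid}/finished endpoint; the shape of the optional dataset-level-metrics body is an assumption:

```python
import requests

BASE_URL = "https://YOUR_API_HOST/api"    # assumption
HEADERS = {"x-api-key": "YOUR_API_KEY"}   # assumption
experiment_uuid = "<experiment_uuid of the experiment to finish>"

# Assumption: optional dataset-level metrics such as balanced accuracy or
# Pearson correlations are passed in the request body.
body = {"dataset_level_stats": [{"name": "balanced_accuracy", "value": 0.87}]}

resp = requests.post(
    f"{BASE_URL}/experiment/{experiment_uuid}/finished", json=body, headers=HEADERS
)
resp.raise_for_status()
```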
The full API docs can be found here.