REST API Walkthrough
Trace your app & log experiments via API
In this cookbook we will walk you through how to log, trace, and run experiments via the API. Note that the API docs document additional endpoints, e.g. for managing datasets.
LLM Proxy
You can use the LLM gateway to run a deployed prompt or to interact with many LLM providers through a unified API.
The `/completion` endpoint automatically takes care of things like caching & retries.
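For illustration, a call might look like the following sketch using Python's `requests` library. The base URL, auth header, and payload shape are assumptions for this example; consult the `/completion` API docs for the exact schema.

```python
import os
import requests

# Placeholder base URL and auth header: substitute your deployment's values.
BASE_URL = "https://api.example.com/v1"
HEADERS = {"x-api-key": os.environ["API_KEY"]}

# Hypothetical payload shape: the gateway forwards the request to the
# configured LLM provider and handles caching & retries for you.
response = requests.post(
    f"{BASE_URL}/completion",
    headers=HEADERS,
    json={
        "llm_configuration": {
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": "Hello!"}],
        }
    },
)
print(response.json())
```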
Logging
You can log any kind of LLM call, as well as other events, via the API.
Note that for LLM calls there is a special field, `configuration`, that you can use to log the LLM configuration.
More details on `/trace_log` can be found here.
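As a minimal sketch (reusing `BASE_URL` and `HEADERS` from the proxy example above), a log request could look like this. Every field except `trace_id` and `configuration` is an illustrative assumption, so check the `/trace_log` docs for the real schema.

```python
import uuid
import requests

trace_id = str(uuid.uuid4())

requests.post(
    f"{BASE_URL}/trace_log",
    headers=HEADERS,
    json={
        "trace_id": trace_id,
        "trace_name": "generate-answer",             # hypothetical field
        "inputs": {"question": "What is a trace?"},  # hypothetical field
        "output": "A trace is a tree of logs.",      # hypothetical field
        # Special field for LLM calls: the configuration of the call.
        "configuration": {"model": "gpt-4o", "temperature": 0.0},
    },
)
```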
Update a log
Sometimes it’s necessary to update a log after it has been created. See the full details on the PUT endpoint here.
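A sketch of such an update, continuing from the log created above (the path shape and payload field are assumptions):

```python
# Overwrite selected fields of an existing log, addressed by its trace_id.
requests.put(
    f"{BASE_URL}/trace_log/{trace_id}",  # path shape is an assumption
    headers=HEADERS,
    json={"output": "A corrected answer."},  # hypothetical field
)
```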
Tracing: Hierarchical Logging
If you use the API directly, you will need to manually associate the logs to create a trace. To do that, we rely on the following fields:
- `trace_id`: The UUID of the current trace log.
- `parent_trace_id`: The UUID of the parent of the current trace log. If the current trace log is the root, this field will be the same as `trace_id`.
- `root_trace_id`: The UUID of the root trace log. If the current trace log is the root, this field will be the same as `trace_id`.
To implement this in your application, you need to keep track of these fields and pass them to the API when creating a log.
You can see an example implementation in the `trace` decorator of the Python SDK here.
General Algorithm
Apply the following logic when you add a new log (a sketch of this bookkeeping follows the list):
- Always use the same `root_trace_id` for all logs in the trace.
- If the new log is a sibling of the previous one:
  - keep `depth` the same
  - use the previous log's `parent_trace_id` as the new log's `parent_trace_id`
- If the new log is a child of the previous one:
  - increase `depth` by 1
  - use the previous log's `trace_id` as the new log's `parent_trace_id`
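One way to do this bookkeeping in your application is a small helper like the sketch below. The names are ours, not part of the API; it only shows how the three IDs and `depth` evolve as you add logs.

```python
import uuid
from dataclasses import dataclass


@dataclass
class TraceContext:
    """The linkage fields to send with each log."""
    trace_id: str
    parent_trace_id: str
    root_trace_id: str
    depth: int


def root_context() -> TraceContext:
    # The root log uses the same UUID for all three ID fields.
    tid = str(uuid.uuid4())
    return TraceContext(tid, tid, tid, depth=0)


def child_context(previous: TraceContext) -> TraceContext:
    # Nested log: the previous log becomes the parent, depth increases by 1.
    return TraceContext(
        trace_id=str(uuid.uuid4()),
        parent_trace_id=previous.trace_id,
        root_trace_id=previous.root_trace_id,
        depth=previous.depth + 1,
    )


def sibling_context(previous: TraceContext) -> TraceContext:
    # Sequential log: same parent and same depth as the previous log.
    return TraceContext(
        trace_id=str(uuid.uuid4()),
        parent_trace_id=previous.parent_trace_id,
        root_trace_id=previous.root_trace_id,
        depth=previous.depth,
    )
```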
Algorithm Walkthrough
You will start by creating a UUID and using it as the `trace_id`, `parent_trace_id`, and `root_trace_id` of the first log, also called the root log. The root log is what ties all the logs of the trace together.
When you create the first child log, you will use the `trace_id` of the root log as the child's `parent_trace_id` and `root_trace_id`.
If the next (third) log …
- … is nested within the first child log, then:
  - set the `trace_id` of the first child as the new log's `parent_trace_id`
  - set the `trace_id` of the root log as the new log's `root_trace_id`
  - set the new log's `depth` to the first child's `depth` + 1
- … isn't nested within the first child but is a sequential step, then:
  - set the `trace_id` of the root log as the new log's `parent_trace_id` and `root_trace_id`
  - set the new log's `depth` to the same `depth` as the first child
Example
In this example, we will create a trace with two logs: the first is the root log and the second is a logged LLM call. At the end, we will update the root log with the output of the LLM call.
Create root log
We will first create a root log.
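A sketch of the request, reusing `BASE_URL` and `HEADERS` from earlier; all fields except the three trace IDs are illustrative:

```python
import uuid
import requests

root_trace_id = str(uuid.uuid4())

# For the root log, all three ID fields carry the same UUID.
requests.post(
    f"{BASE_URL}/trace_log",
    headers=HEADERS,
    json={
        "trace_id": root_trace_id,
        "parent_trace_id": root_trace_id,
        "root_trace_id": root_trace_id,
        "trace_name": "run-pipeline",                # hypothetical field
        "inputs": {"question": "What is a trace?"},  # hypothetical field
    },
)
```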
Create LLM log
Note that we create a new `trace_id` but keep the same `root_trace_id` and `parent_trace_id`.
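Continuing the sketch from the root log above:

```python
llm_trace_id = str(uuid.uuid4())

# A fresh trace_id for this log; parent_trace_id and root_trace_id
# both point at the root log created above.
requests.post(
    f"{BASE_URL}/trace_log",
    headers=HEADERS,
    json={
        "trace_id": llm_trace_id,
        "parent_trace_id": root_trace_id,
        "root_trace_id": root_trace_id,
        "configuration": {"model": "gpt-4o", "temperature": 0.0},
        "output": "A trace is a tree of logs.",  # hypothetical field
    },
)
```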
Update root log (optional)
Now that the LLM call has finished and we have done some post-processing, we can optionally update the root log.
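For example (the path shape and payload field are assumptions):

```python
# Attach the post-processed output to the root log after the fact.
requests.put(
    f"{BASE_URL}/trace_log/{root_trace_id}",  # path shape is an assumption
    headers=HEADERS,
    json={"output": "A trace is a tree of logs."},  # hypothetical field
)
```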
Experiments
You can also log experiments via the API to get benefits such as tracking metrics over time and comparing different runs.
An experiment is essentially a special view of logs grouped by the `experiment_uuid` field.
Get the project UUID
In order to create an experiment, we need to know the `project_uuid` of the project we want to associate the experiment with.
You can get the `project_uuid` by calling the `/project` endpoint.
The full API docs can be found here.
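As a sketch (the query parameter and response field names are assumptions):

```python
import requests

# Look up the project to read its UUID.
resp = requests.get(
    f"{BASE_URL}/project",
    headers=HEADERS,
    params={"name": "my-project"},  # hypothetical query parameter
)
project_uuid = resp.json()["uuid"]  # hypothetical response field
```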
Create experiment
Now that we have the `project_uuid`, we can create an experiment to get the `experiment_uuid`.
Note that the run name must be unique for every experiment in the project and may only contain alphanumeric characters, dashes, and underscores.
The full API docs can be found here.
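A sketch, assuming a POST to an `/experiment` endpoint (the path, payload, and response field names are illustrative):

```python
# Create an experiment run; the run name must be unique within the project.
resp = requests.post(
    f"{BASE_URL}/experiment",  # endpoint path is an assumption
    headers=HEADERS,
    json={
        "project_uuid": project_uuid,
        "run_name": "prompt-v2-run-1",
    },
)
experiment_uuid = resp.json()["uuid"]  # hypothetical response field
```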
Log experiment logs
With every log we send, we need to attach the `experiment_uuid` to associate it with the experiment.
If you want to report any scores for a particular step, you can do so by adding a `scores` field.
Note that any scores of child logs are automatically propagated up to the parent and root logs.
The full API docs can be found here.
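Sketch of one experiment log (the shape of the `scores` entries is an assumption):

```python
import uuid

# Each log carries the experiment_uuid; child scores roll up automatically.
requests.post(
    f"{BASE_URL}/trace_log",
    headers=HEADERS,
    json={
        "trace_id": str(uuid.uuid4()),
        "experiment_uuid": experiment_uuid,
        "output": "42",                                  # hypothetical field
        "scores": [{"name": "accuracy", "score": 1.0}],  # shape is an assumption
    },
)
```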
Finish experiment
After logging all your traces, call the `/experiment/{experiment_uuid}/finished` endpoint to automatically calculate the average statistics of all logged scores, as well as cost, latency, token usage, etc.
You can optionally log any dataset-level metrics such as balanced accuracy, Pearson correlations, etc.
The full API docs can be found here.
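A closing sketch (the dataset-level metrics field name and shape are assumptions):

```python
# Mark the experiment as finished so aggregate statistics get computed.
requests.post(
    f"{BASE_URL}/experiment/{experiment_uuid}/finished",
    headers=HEADERS,
    json={
        # Optional dataset-level metrics; field name and shape are assumptions.
        "dataset_level_stats": [{"name": "balanced_accuracy", "value": 0.87}],
    },
)
```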