REST API Walkthrough
Trace your app & log experiments via API
In this cookbook we walk you through how to log, trace, and run experiments via the API. Note that the API docs document additional endpoints, e.g., for managing datasets.
LLM Proxy
You can use the LLM gateway to use a deployed prompt or interact with many LLM providers through a unified API.
The `/completion` endpoint automatically takes care of things like caching & retries.
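Below is a minimal sketch of calling the proxy with Python's `requests`. The base URL, auth header, and payload field names are assumptions for illustration; check the API reference for the exact schema of your deployment.

```python
import os

import requests

# Assumed base URL and auth header -- substitute your deployment's values.
BASE_URL = "https://api.example.com/v1"
HEADERS = {"x-api-key": os.environ["API_KEY"]}

# Hypothetical payload: reference a deployed prompt and fill its template
# variables; the gateway routes the call to the configured provider and
# handles caching & retries for you.
payload = {
    "deployment_id": "your-deployed-prompt-id",
    "llm_inputs": {"question": "What is a vector database?"},
}

resp = requests.post(f"{BASE_URL}/completion", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```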
Logging
You can log any kind of LLM calls and other events via the API.
Note that for LLM calls there is a special `configuration` field that you can use to log the LLM configuration.
More details on `/trace_log` can be found here.
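As a sketch, a log for an LLM call might be posted to `/trace_log` like this; the nested keys of `configuration` and the timestamp format are assumptions, so consult the endpoint docs for the authoritative schema.

```python
import os
import uuid
from datetime import datetime, timezone

import requests

BASE_URL = "https://api.example.com/v1"  # assumed
HEADERS = {"x-api-key": os.environ["API_KEY"]}

log = {
    "trace_id": str(uuid.uuid4()),
    "trace_name": "chat-completion",
    "start_timestamp": datetime.now(timezone.utc).isoformat(),
    # Special field for LLM calls: the model, its parameters, and the
    # messages sent to it (the nested keys shown here are illustrative).
    "configuration": {
        "model": "gpt-4o",
        "model_params": {"temperature": 0.0},
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    "output": "Hi! How can I help you today?",
}

resp = requests.post(f"{BASE_URL}/trace_log", json=log, headers=HEADERS)
resp.raise_for_status()
```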
Update a log
Sometimes it’s necessary to update a log after it has been created. See the full details on the PUT endpoint here.
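For example, you might correct a log's output after human review. The exact path and body of the PUT request below are assumptions; see the endpoint docs for the real contract.

```python
import os

import requests

BASE_URL = "https://api.example.com/v1"  # assumed
HEADERS = {"x-api-key": os.environ["API_KEY"]}

# Hypothetical update body: identify the log by its trace_id,
# overwrite the output, and attach extra metadata.
update = {
    "trace_id": "the-uuid-of-the-log-to-update",
    "output": "Corrected answer after human review",
    "metadata": {"reviewed": True},
}

resp = requests.put(f"{BASE_URL}/trace_log", json=update, headers=HEADERS)
resp.raise_for_status()
```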
Tracing: Hierarchical Logging
If you use the API directly, you will need to manually associate the logs to create a trace. To do that, we rely on the following fields:
- `trace_id`: The UUID of the current trace log.
- `parent_trace_id`: The UUID of the parent of the current trace log. If the current trace log is the root, this field will be the same as `trace_id`.
- `root_trace_id`: The UUID of the root trace log. If the current trace log is the root, this field will be the same as `trace_id`.
To implement this in your application, you need to keep track of these fields and pass them to the API when creating a log.
You can see an example implementation in the `trace` decorator of the Python SDK here.
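Here is a minimal sketch of one way to do that bookkeeping with `contextvars`, similar in spirit to the SDK's decorator; the `traced` name and the commented-out POST are illustrative, not part of the API.

```python
import contextvars
import uuid
from functools import wraps

# Holds (trace_id, root_trace_id) of the log that is currently active.
_current = contextvars.ContextVar("current_trace", default=None)

def traced(fn):
    """Hypothetical decorator that wires up the trace/parent/root IDs."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        trace_id = str(uuid.uuid4())
        active = _current.get()
        if active is None:
            # This is the root log: parent and root both point to itself.
            parent_trace_id = root_trace_id = trace_id
        else:
            # Child log: parent is the caller's log; root stays fixed.
            parent_trace_id, root_trace_id = active
        token = _current.set((trace_id, root_trace_id))
        try:
            return fn(*args, **kwargs)
        finally:
            _current.reset(token)
            # After the call, create the log with all three fields, e.g.:
            # requests.post(f"{BASE_URL}/trace_log", headers=HEADERS, json={
            #     "trace_id": trace_id,
            #     "parent_trace_id": parent_trace_id,
            #     "root_trace_id": root_trace_id,
            # })
    return wrapper
```

Nested calls to `traced` functions then produce child logs whose `parent_trace_id` points at the caller's log and whose `root_trace_id` stays fixed for the whole request.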
Experiments
You can also log experiments via the API to get benefits such as tracking metrics over time and comparing different runs.
An experiment is essentially a special view of logs grouped by the `experiment_uuid` field.
Get the project UUID
In order to create an experiment, we need to know the `project_uuid` of the project we want to associate the experiment with.
You can get the `project_uuid` by calling the `/project` endpoint.
The full API docs can be found here.
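A sketch of that lookup, assuming the project is addressed by name and the response contains a `uuid` key:

```python
import os

import requests

BASE_URL = "https://api.example.com/v1"  # assumed
HEADERS = {"x-api-key": os.environ["API_KEY"]}

# Assumed query shape: look the project up by its name.
resp = requests.get(
    f"{BASE_URL}/project", params={"name": "my-project"}, headers=HEADERS
)
resp.raise_for_status()
project_uuid = resp.json()["uuid"]  # response key is an assumption
```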
Create experiment
Now that we have the `project_uuid`, we can create an experiment to get the `experiment_uuid`.
Note that the run name must be unique for every experiment in the project and may only contain alphanumeric characters, dashes, and underscores.
The full API docs can be found here.
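A sketch of the create call, reusing `project_uuid` from the previous step; the `/experiment` path and field names are assumptions:

```python
import os

import requests

BASE_URL = "https://api.example.com/v1"  # assumed
HEADERS = {"x-api-key": os.environ["API_KEY"]}

payload = {
    "project_uuid": project_uuid,  # from the /project lookup above
    # Unique per project; alphanumerics, dashes & underscores only.
    "run_name": "baseline-run-1",
}

resp = requests.post(f"{BASE_URL}/experiment", json=payload, headers=HEADERS)
resp.raise_for_status()
experiment_uuid = resp.json()["uuid"]  # response key is an assumption
```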
Log experiment logs
With every log we send, we need to attach the `experiment_uuid` to associate it with the experiment.
If you want to report any scores for a particular step, you can do so by adding a `scores` field.
Note that any scores on child logs will be automatically propagated up to the parent & root logs.
The full API docs can be found here.
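Putting it together, each experiment log is a regular `/trace_log` payload plus the `experiment_uuid`; the shape of the `scores` entries shown here is an assumption:

```python
import os
import uuid

import requests

BASE_URL = "https://api.example.com/v1"  # assumed
HEADERS = {"x-api-key": os.environ["API_KEY"]}

log = {
    "experiment_uuid": experiment_uuid,  # from the create step above
    "trace_id": str(uuid.uuid4()),
    "inputs": {"question": "What is a vector database?"},
    "output": "A database optimized for similarity search.",
    # Scores attached here roll up to the parent & root logs automatically.
    "scores": [{"name": "accuracy", "score": 1.0}],
}

resp = requests.post(f"{BASE_URL}/trace_log", json=log, headers=HEADERS)
resp.raise_for_status()
```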
Finish experiment
After logging all your traces, call the `/experiment/{experiment_uuid}/finished` endpoint to automatically calculate the average statistics of all logged scores, as well as cost, latency, token usage, etc.
You can optionally log dataset-level metrics such as balanced accuracy, Pearson correlation, etc.
The full API docs can be found here.
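A sketch of the finishing call; the body key for dataset-level metrics is an assumption:

```python
import os

import requests

BASE_URL = "https://api.example.com/v1"  # assumed
HEADERS = {"x-api-key": os.environ["API_KEY"]}

resp = requests.post(
    f"{BASE_URL}/experiment/{experiment_uuid}/finished",
    headers=HEADERS,
    # Optional dataset-level metrics computed on your side.
    json={"dataset_level_stats": [{"name": "balanced_accuracy", "value": 0.87}]},
)
resp.raise_for_status()
```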