In this cookbook we will walk you through how to log, trace and run experiments via the API.
Note that additional endpoints, e.g. for managing datasets, are documented in the API docs.
You can use the LLM gateway to use a deployed prompt or interact with many LLM providers through a unified API.
The /completion endpoint automatically takes care of things like caching & retries.
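As a rough sketch of what a gateway call could look like (the body fields below, llm_configuration in particular, are assumptions modeled on the configuration object used for trace logs later in this cookbook; consult the API docs for the exact schema):

# Hedged sketch: the request body shape is an assumption, not the documented schema.
curl --request POST \
  --url https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/completion \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <<PAREA_API_KEY>>' \
  --data '{
    "llm_configuration": {
      "model": "gpt-4o",
      "model_params": {"temp": 0.5},
      "messages": [{"role": "user", "content": "Write a hello world program in Python using FastAPI."}]
    }
  }'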
You can log any kind of LLM calls and other events via the API.
Note that for LLM calls there is a special configuration field that you can use to log the LLM configuration.
More details on /trace_log can be found here.
If you use the API directly, you will need to manually associate the logs to create a trace.
To do that, we rely on the following fields:
- trace_id: The UUID of the current trace log.
- parent_trace_id: The UUID of the parent of the current trace log. If the current trace log is the root, this field will be the same as trace_id.
- root_trace_id: The UUID of the root trace log. If the current trace log is the root, this field will be the same as trace_id.
To implement this in your application, you need to keep track of these fields and pass them to the API when creating a log.
You can see an example implementation in the trace decorator of the Python SDK here.
Apply the following logic when you add a new log:
- Always use the same root_trace_id for all logs in the trace.
- If the new log is a sibling of the previous one (i.e., the depth stays the same), use the previous log's parent_trace_id as the new log's parent_trace_id.
You will start by creating a UUID and using it as the trace_id, parent_trace_id, and root_trace_id of the first log, also called the root log. This root log ties all the other logs in the trace together.
When you create the first child log, you will use the trace_id of the root log as the parent_trace_id and root_trace_id.
If the next (i.e., third) log…
- …is nested within the first child log, then:
  - set the trace_id of the first child as the new log's parent_trace_id
  - set the trace_id of the root log as the new log's root_trace_id
- …isn't nested within the first child but is a sequential step, then:
  - set the trace_id of the root log as the new log's parent_trace_id and root_trace_id
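In shell terms, this bookkeeping amounts to generating one UUID per log and wiring the fields up accordingly; a minimal sketch:

# One UUID per log; the comments show which ID fields each log gets.
ROOT_UUID=$(uuidgen)      # root log:        trace_id = parent_trace_id = root_trace_id = ROOT_UUID
CHILD_UUID=$(uuidgen)     # first child:     trace_id = CHILD_UUID, parent_trace_id = ROOT_UUID, root_trace_id = ROOT_UUID
NESTED_UUID=$(uuidgen)    # nested in child: trace_id = NESTED_UUID, parent_trace_id = CHILD_UUID, root_trace_id = ROOT_UUID
SIBLING_UUID=$(uuidgen)   # sequential step: trace_id = SIBLING_UUID, parent_trace_id = ROOT_UUID, root_trace_id = ROOT_UUID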
In this example, we will create a trace with two logs: the first is the root log and the second is a logged LLM call. At the end, we will update the root log with the output of the LLM call.
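First, we create the root log. A minimal sketch of the request, using <<ROOT_UUID>> for all three ID fields (the payload fields besides the IDs are illustrative):

curl --location 'https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/trace_log' \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <<PAREA_API_KEY>>' \
  --data '{
    "trace_id": "<<ROOT_UUID>>",
    "root_trace_id": "<<ROOT_UUID>>",
    "parent_trace_id": "<<ROOT_UUID>>",
    "trace_name": "chain",
    "project_name": "default",
    "inputs": {"x": "Python", "y": "FastAPI"},
    "status": "success",
    "start_timestamp": "2024-08-05 13:48:34"
  }'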
Note that for this second log we create a new trace_id but keep the same root_trace_id and parent_trace_id.
curl --location 'https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/trace_log' \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <<PAREA_API_KEY>>' \
  --data '{
    "trace_id": "<<NEW_UUID>>",
    "root_trace_id": "<<ROOT_UUID>>",
    "parent_trace_id": "<<ROOT_UUID>>",
    "trace_name": "LLM",
    "project_name": "default",
    "inputs": {"x": "Python", "y": "FastAPI"},
    "configuration": {
      "model": "gpt-4o",
      "model_params": {"temp": 0.5},
      "messages": [{"role": "user", "content": "Write a hello world program in Python using FastAPI."}]
    },
    "status": "success",
    "output": "Some LLM output",
    "start_timestamp": "2024-08-05 13:48:34",
    "end_timestamp": "2024-08-05 13:48:43"
  }'
3
Update root log (optional)
Now that the LLM call has finished and we have done some post-processing, we can optionally update the root log.
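A sketch of such an update, assuming the /trace_log endpoint accepts a second write with the root log's trace_id (all three ID fields unchanged); the exact update semantics are an assumption here, see the API docs:

# Hedged sketch: assumes /trace_log upserts on trace_id.
curl --location 'https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/trace_log' \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <<PAREA_API_KEY>>' \
  --data '{
    "trace_id": "<<ROOT_UUID>>",
    "root_trace_id": "<<ROOT_UUID>>",
    "parent_trace_id": "<<ROOT_UUID>>",
    "output": "Post-processed LLM output",
    "status": "success",
    "end_timestamp": "2024-08-05 13:48:45"
  }'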
You can also log experiments via the API to get benefits such as tracking metrics over time and comparing different runs.
An experiment is essentially a special view of logs grouped by the experiment_uuid field.
1
Get the project UUID
In order to create an experiment, we need to know the project_uuid of the project we want to associate the experiment with.
You can get the project_uuid by calling the /project endpoint.
The full API docs can be found here.
curl --request POST \
  --url https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/project \
  --header 'Content-Type: application/json' \
  --header 'x-user-id: <api-key>' \
  --data '{"name": "default"}'
2
Create experiment
Now that we have the project_uuid, we can create an experiment to get the experiment_uuid.
Note that the run name must be unique for every experiment in the project and may only contain alphanumeric characters, dashes, and underscores.
The full API docs can be found here.
curl --request POST \
  --url https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/experiment \
  --header 'Content-Type: application/json' \
  --header 'x-user-id: <api-key>' \
  --data '{
    "name": "Test Experiment",
    "project_uuid": "...",
    "run_name": "test-experiment",
    "metadata": {"dataset": "hello world dataset"}
  }'
3
Log experiment logs
With every log we send, we need to attach the experiment_uuid to associate it with the experiment.
If you want to report any scores of a particular step, you can do so by adding a scores field.
Note that any scores on child logs will automatically be propagated up to the parent and root logs.
The full API docs can be found here.
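Putting this together, here is a sketch of a single experiment log (the shape of the scores entries mirrors the dataset_level_stats entries in the final step and is an assumption; other payload fields are illustrative):

curl --location 'https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/trace_log' \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <<PAREA_API_KEY>>' \
  --data '{
    "trace_id": "<<NEW_UUID>>",
    "root_trace_id": "<<NEW_UUID>>",
    "parent_trace_id": "<<NEW_UUID>>",
    "experiment_uuid": "<<EXPERIMENT_UUID>>",
    "trace_name": "eval step",
    "project_name": "default",
    "output": "Some LLM output",
    "scores": [{"name": "accuracy", "score": 1.0}],
    "status": "success"
  }'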
After logging all your traces, call the /experiment/{experiment_uuid}/finished endpoint to automatically calculate the average statistics of all logged scores as well as cost, latency, token usage, etc.
You can optionally log dataset-level metrics such as balanced accuracy, Pearson correlation, etc.
The full API docs can be found here.
curl --request POST \
  --url https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/experiment/<<EXPERIMENT_UUID>>/finished \
  --header 'Content-Type: application/json' \
  --header 'x-user-id: <api-key>' \
  --data '{
    "status": "completed",
    "dataset_level_stats": [{"name": "pearson_correlation", "score": 0.8}]
  }'