In this cookbook we will walk you through how to log, trace, and run experiments via the API. Note that the API docs document additional endpoints, e.g. for managing datasets.

LLM Proxy

You can use the LLM gateway to call a deployed prompt or to interact with many LLM providers through a unified API. The /completion endpoint automatically takes care of things like caching & retries.
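
As a rough sketch, calling a deployed prompt through the gateway could look like the request below. The deployment_id and llm_inputs field names are assumptions based on the SDK's Completion object and are not confirmed here; check the /completion API reference for the exact request schema.

# Sketch only: "deployment_id" and "llm_inputs" are assumed field names, see the /completion API reference.
curl --request POST \
    --url https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/completion \
    --header 'Content-Type: application/json' \
    --header 'x-api-key: <<PAREA_API_KEY>>' \
    --data '{
        "deployment_id": "<<DEPLOYMENT_ID>>",
        "llm_inputs": {
            "x": "Golang",
            "y": "Fiber"
        }
    }'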

Logging

You can log any kind of LLM call and other events via the API. Note that for LLM calls there is a special configuration field that you can use to log the LLM configuration. More details on /trace_log can be found here.
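
For illustration, a standalone log for an LLM call could look like the request below. The top-level fields mirror the experiment logging example later in this cookbook; the sub-fields of configuration (model, provider, messages) are assumptions here, so check the /trace_log reference for the exact schema.

# Sketch only: the sub-fields of "configuration" are assumed, see the /trace_log API reference.
curl --request POST \
    --url https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/trace_log \
    --header 'Content-Type: application/json' \
    --header 'x-api-key: <<PAREA_API_KEY>>' \
    --data '{
        "trace_id": "<<UUID>>",
        "root_trace_id": "<<UUID>>",
        "trace_name": "llm-call",
        "project_name": "default",
        "status": "success",
        "start_timestamp": "2024-05-30 13:48:34",
        "end_timestamp": "2024-05-30 13:48:35",
        "output": "Some logged output",
        "configuration": {
            "model": "gpt-4o",
            "provider": "openai",
            "messages": [{"role": "user", "content": "Hello"}]
        }
    }'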

Update a log

Sometimes it’s necessary to update a log after it has been created. See the full details on the PUT endpoint here.

curl --request PUT \
    --url https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/trace_log \
    --header 'Content-Type: application/json' \
    --header 'x-user-id: <api-key>' \
    --data '{
        "field_name_to_value_map": {
            "error": "Some error message",
            "status": "error"
        },
        "trace_id": "<<UUID>>",
        "root_trace_id": "<<ROOT_TRACE_UUID>>"
    }'

Tracing: Hierarchical Logging

If you use the API directly, you will need to manually associate the logs to create a trace. To do that, we rely on the following fields:

  • trace_id: The UUID of the current trace log.
  • parent_trace_id: The UUID of the parent of the current trace log. If the current trace log is the root, this field will be the same as trace_id.
  • root_trace_id: The UUID of the root trace log. If the current trace log is the root, this field will be the same as trace_id.

To implement this in your application, you need to keep track of these fields and pass them to the API when creating a log. You can see an example implementation in the trace decorator of the Python SDK here.
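
For illustration, assuming you generate the UUIDs up front, a root log and a child log could be linked as shown below (non-tracing fields are trimmed for brevity; see the /trace_log reference for all required fields).

# Root log: trace_id, parent_trace_id and root_trace_id all carry the same UUID.
curl --request POST \
    --url https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/trace_log \
    --header 'Content-Type: application/json' \
    --header 'x-api-key: <<PAREA_API_KEY>>' \
    --data '{
        "trace_id": "<<ROOT_UUID>>",
        "parent_trace_id": "<<ROOT_UUID>>",
        "root_trace_id": "<<ROOT_UUID>>",
        "trace_name": "parent-step",
        "project_name": "default",
        "status": "success"
    }'

# Child log: points back to the root via parent_trace_id and root_trace_id.
curl --request POST \
    --url https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/trace_log \
    --header 'Content-Type: application/json' \
    --header 'x-api-key: <<PAREA_API_KEY>>' \
    --data '{
        "trace_id": "<<CHILD_UUID>>",
        "parent_trace_id": "<<ROOT_UUID>>",
        "root_trace_id": "<<ROOT_UUID>>",
        "trace_name": "child-step",
        "project_name": "default",
        "status": "success"
    }'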

Experiments

You can also log experiments via the API to get benefits such as tracking metrics over time and comparing different runs. An experiment is essentially a special view of logs grouped by the experiment_uuid field.

1. Get the project UUID

In order to create an experiment, we need to know the project_uuid of the project we want to associate the experiment with. You can get the project_uuid by calling the /project endpoint. The full API docs can be found here.

curl --request POST \
    --url https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/project \
    --header 'Content-Type: application/json' \
    --header 'x-user-id: <api-key>' \
    --data '{
        "name": "default"
    }'

2. Create experiment

Now that we have the project_uuid, we can create an experiment to get the experiment_uuid. Note that the run name must be unique for every experiment in the project and may only contain alphanumeric characters, dashes, and underscores. The full API docs can be found here.

curl --request POST \
    --url https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/experiment \
    --header 'Content-Type: application/json' \
    --header 'x-user-id: <api-key>' \
    --data '{
        "name": "Test Experiment",
        "project_uuid": "...",
        "run_name": "test-experiment",
        "metadata": {
            "dataset": "hello word dataset"
        }
    }'

3. Log experiment logs

With every log we send, we need to attach the experiment_uuid to associate it with the experiment. If you want to report scores for a particular step, you can do so by adding a scores field. Note that any scores of child logs will automatically be propagated up to the parent & root logs. The full API docs can be found here.

curl --request POST \
    --url https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/trace_log \
    --header 'Content-Type: application/json' \
    --header 'x-api-key: <<PAREA_API_KEY>>' \
    --data '{
        "trace_id": "<<UUID>>",
        "root_trace_id": "<<SAME_UUID>>",
        "parent_trace_id": "<<SAME_UUID>>",
        "trace_name": "test",
        "project_name": "default",
        "inputs": {
            "x": "Golang",
            "y": "Fiber"
        },
        "start_timestamp": "2024-05-30 13:48:34",
        "end_timestamp": "2024-05-30 13:48:35",
        "status": "success",
        "output": "Some logged output",
        "experiment_uuid": "<<EXPERIMENT_UUID>>",
        "scores": [
            {
                "name": "accuracy",
                "score": 0.8
            }
        ]
    }'

4. Finish experiment

After logging all your traces, call the /experiment/{experiment_uuid}/finished endpoint to automatically calculate the average statistics of all logged scores as well as cost, latency, token usage, etc. You can optionally log any dataset-level metrics such as balanced accuracy, Pearson correlation, etc. The full API docs can be found here.

curl --request POST \
    --url https://parea-ai-backend-us-9ac16cdbc7a7b006.onporter.run/api/parea/v1/experiment/<<EXPERIMENT_UUID>>/finished \
    --header 'Content-Type: application/json' \
    --header 'x-user-id: <api-key>' \
    --data '{
        "status": "completed",
        "dataset_level_stats": [
            {
              "name": "pearson_correlation",
              "score": 0.8
            }
        ]
    }'