Pre-requisites

Ensure you’ve followed the Authentication steps to set up an organization and API key.
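For reference, here is a minimal sketch of what that setup can look like in Python. It assumes the parea-ai package is installed and that the API key created in the Authentication guide is exported as a PAREA_API_KEY environment variable (the variable name here is just an example):

import os

from parea import Parea

# Assumes the API key was created as described in the Authentication guide
# and exported as PAREA_API_KEY (example name).
p = Parea(api_key=os.getenv("PAREA_API_KEY"))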

Using deployed evaluation functions

After creating an evaluation function on the platform, you can use it to automatically track the performance of the components of your LLM app. To do so, wrap the function you want to track with the trace decorator; the evaluation function will then be executed in the backend in a non-blocking way:

import { trace } from "parea-ai";

...

// Wrap the function you want to track; the evaluation functions listed in
// evalFuncNames refer to evals deployed on the platform and run in the backend.
const tracedFunction = trace(
  'TraceName',
  functionToTrace,
  {
    evalFuncNames: ['Harmfullness Detector'],
  },
);

For a full example, see our TypeScript cookbook or Python cookbook.

Using evaluation functions from your code base

For Python, you can also use evaluation functions defined in your local codebase. Specify them in the trace decorator, and they will be executed locally in a separate thread:

from typing import Dict, Optional

from parea.utils.trace_utils import trace


# A local evaluation function: it receives the traced function's inputs,
# its output, and an optional target, and returns a score.
def usefulness(inputs: Dict, output: str, target: Optional[str] = None) -> float:
    return 1.0 if output == target else 0.0


@trace(eval_funcs=[usefulness])
def function_to_trace(*args, **kwargs):
    ...

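To make the mechanics concrete, here is a slightly fuller, self-contained sketch. The function and eval names (answer_question, is_concise) are made up for illustration, and the function body is a stand-in rather than a real LLM call; the point is that calling the traced function triggers the listed eval function in a separate thread and attaches its score to the trace:

from typing import Dict, Optional

from parea.utils.trace_utils import trace


def is_concise(inputs: Dict, output: str, target: Optional[str] = None) -> float:
    # Example score: reward answers shorter than 20 words.
    return 1.0 if len(output.split()) < 20 else 0.0


@trace(eval_funcs=[is_concise])
def answer_question(question: str) -> str:
    # Hypothetical stand-in for an LLM call.
    return "Paris" if "capital of France" in question else "I don't know"


# The call returns immediately; is_concise is evaluated in the background
# and its score shows up on the corresponding trace.
print(answer_question(question="What is the capital of France?"))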
For a full working example, check out the Python cookbook.

Visualization of scores in trace view

The scores for any step are visualized on the right side of the trace view:

Trace View