Integrations
LiteLLM Proxy
Instrumenting your LiteLLM proxy with Parea AI
Parea AI works seamlessly with the LiteLLM proxy using the wrap_openai_client (Python) / patchOpenAI (TypeScript) methods.
Quickstart
Assuming you have a LiteLLM config.yaml file with the following content:
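For example, a minimal proxy configuration might look like this (the model names and providers below are illustrative placeholders; any model LiteLLM supports can be listed):

```yaml
model_list:
  # Public alias the proxy exposes -> underlying provider model
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-3-opus
    litellm_params:
      model: anthropic/claude-3-opus-20240229
      api_key: os.environ/ANTHROPIC_API_KEY
```

With this file in place, the proxy is started with `litellm --config config.yaml` and serves all listed models behind a single OpenAI-compatible endpoint.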
You could wrap the OpenAI client with Parea AI as follows:
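A minimal Python sketch, assuming the proxy is running locally on LiteLLM's default port 4000 and that a PAREA_API_KEY environment variable is set (the proxy URL and model alias are placeholders matching the example config):

```python
import os

from openai import OpenAI
from parea import Parea

# Point the OpenAI client at the LiteLLM proxy instead of api.openai.com.
# The api_key value is unused by the proxy itself, which holds the real
# provider keys from its config.
client = OpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:4000",
)

# Wrap the client so every completion call is traced in Parea.
p = Parea(api_key=os.environ["PAREA_API_KEY"])
p.wrap_openai_client(client)

# Calls now route through the proxy and show up in your Parea logs.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello, world!"}],
)
print(response.choices[0].message.content)
```

Because the proxy speaks the OpenAI API, swapping the model name in the request (e.g. to claude-3-opus from the config above) is enough to trace calls to a different provider with no other code changes.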
Visualizing your traces
In your Parea logs dashboard, you can visualize your traces and see the detailed steps the LLM took across the various models.