Templated LLM Calls
Using templated messages in auto-traced OpenAI & Anthropic calls
It’s good practice to use templated messages for your LLM calls.

To associate inputs directly with a templated LLM call, pass a `template_inputs` argument to the wrapped OpenAI or Anthropic call, containing key-value pairs of template variables and their corresponding values. When you do this, the `template_inputs` will be visualized in the trace and used to format the messages in your LLM call.

Note that any variable in the template message must be enclosed in double curly braces (`{{variable}}`) in order to be replaced.
See the OpenAI example below:
OpenAI
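The original code example did not survive extraction, so here is a minimal sketch of the substitution behavior it describes: `{{variable}}` placeholders in the message content are replaced with the matching values from `template_inputs` before the request is sent. The helper name `render_messages` is illustrative, not the library's actual API.

```python
import re


def render_messages(messages, template_inputs):
    """Replace {{variable}} placeholders in message content with
    the corresponding values from template_inputs."""
    def render(text):
        return re.sub(
            r"\{\{\s*(\w+)\s*\}\}",
            lambda m: str(template_inputs[m.group(1)]),
            text,
        )

    return [{**msg, "content": render(msg["content"])} for msg in messages]


# Mirrors a templated OpenAI chat call: the wrapped client would perform
# this substitution and record template_inputs on the trace.
messages = [
    {"role": "user", "content": "Answer this question: {{question}}"}
]
rendered = render_messages(messages, {"question": "What is the capital of France?"})
print(rendered[0]["content"])
# → Answer this question: What is the capital of France?
```

In a wrapped call, you would pass the raw templated `messages` plus `template_inputs` directly and let the instrumentation do this rendering for you.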
This will result in the following trace: