POST /api/parea/v1/deployed-prompt

Body

deployment_id
string
required

This is the ID for a specific deployed prompt. You can find your deployed prompts on the Deployments tab.

llm_inputs
dict

If you would like to fill in your prompt template with inputs, you can provide them as a dictionary. The keys should match the names of the deployed prompt template’s variables.
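As a minimal sketch, a request might look like the following. Only the path and the deployment_id / llm_inputs fields come from this reference; the host, the authentication header name, the deployment ID, and the template variable name are placeholders or assumptions to replace with your own values.

```python
import requests

BASE_URL = "https://<your-parea-host>"  # placeholder: your Parea host
API_KEY = "<your-parea-api-key>"        # placeholder: your API key

body = {
    "deployment_id": "p-abc123",  # placeholder ID, copied from the Deployments tab
    # Optional: keys must match the deployed prompt template's variable names.
    "llm_inputs": {"topic": "observability"},  # "topic" is a hypothetical variable name
}

response = requests.post(
    f"{BASE_URL}/api/parea/v1/deployed-prompt",
    headers={"x-api-key": API_KEY},  # assumption: your setup may use a different auth header
    json=body,
)
response.raise_for_status()
print(response.json())
```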

Response

deployment_id
string

This is the ID for a specific deployed prompt. You can find your deployed prompts on the Deployments tab.

This is the version number of the deployed prompt.

A name for this completion. Will be visible in logs for filtering.

functions
list[string]

A list of functions the model may generate JSON inputs for.

function_call
string

Controls how the model responds to function calls. “auto” means the model can pick between responding to the end-user or calling a function. To specify a particular function, use a dictionary with key=“name” and value=“my_function”. This will force the model to call that function.
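For illustration, the two accepted forms described above could be written as:

```python
# Let the model choose between replying to the end-user or calling a function.
function_call = "auto"

# Force the model to call a specific function ("my_function" is the placeholder name from above).
function_call = {"name": "my_function"}
```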

prompt
object

LLM completion request configuration

model_params
dict

Key-value pairs of the model hyperparameters to use for this completion.
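As an illustration, model_params is a flat mapping of hyperparameter names to values; the specific keys below are common examples and assumptions, not a list defined by this endpoint.

```python
# Hypothetical hyperparameters; use whichever settings your model and provider support.
model_params = {
    "temperature": 0.7,
    "max_tokens": 256,
    "top_p": 1.0,
}
```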

model
string

The model that will complete your prompt, e.g. gpt-3.5-turbo.

provider
string

Supported model providers: openai, anthropic, azure
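Continuing the request sketch above, one way to inspect the returned configuration might look like the snippet below; it assumes the documented model and provider fields sit at the top level of the parsed JSON response, which this page does not spell out.

```python
# Continuing from the request sketch above: inspect the returned configuration.
data = response.json()

# Assumption: these fields are read from the top level of the parsed response.
provider = data.get("provider")
model = data.get("model")

if provider in ("openai", "anthropic", "azure"):  # the supported providers listed above
    print(f"Deployed prompt targets {provider} model {model}")
else:
    print(f"Unrecognized provider: {provider}")
```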