Get Deployed Prompt
Fetch a deployed prompt
Body
This is the ID for a specific deployed prompt. You can find your deployed prompts on the Deployments tab.
If you would like to fill in your prompt template with inputs, you can also provide them as a dictionary. The keys should match the names of the deployed prompt template's variables.
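As a minimal sketch of the request body described above — note that the field names (`deployment_id`, `inputs`) and the placeholder values are assumptions for illustration, not confirmed by this page:

```python
# Hypothetical request body for fetching a deployed prompt.
# Field names and values are illustrative assumptions.
body = {
    "deployment_id": "dp_abc123",  # the ID from the Deployments tab (placeholder)
    "inputs": {
        # Keys must match the deployed prompt template's variable names
        "customer_name": "Ada",
        "product": "widgets",
    },
}
```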
Response
This is the ID for a specific deployed prompt. You can find your deployed prompts on the Deployments tab.
This is the version number of the deployed prompt.
A name for this completion. Will be visible in logs for filtering.
A list of functions the model may generate JSON inputs for.
Controls how the model responds to function calls. "auto" means the model can pick between responding to the end user or calling a function. To specify a particular function, use a dictionary with key="name" and value="my_function". This forces the model to call that function.
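The two fields above can be sketched as follows. The function name and parameter schema are illustrative assumptions; the structure mirrors the common JSON-schema convention for function definitions:

```python
# A list of functions the model may generate JSON inputs for.
# The function name and schema here are placeholders.
functions = [
    {
        "name": "my_function",
        "description": "Example function the model can call",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }
]

# "auto" lets the model choose; a dict with key "name" forces that function.
function_call = {"name": "my_function"}
```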
LLM completion request configuration
Key value pairs for the model hyper parameters to use for this completion.
The model that will complete your prompt, e.g. gpt-3.5-turbo.
Supported model providers: openai, anthropic, azure
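Putting the configuration fields together, a completion request configuration might look like the sketch below. The hyperparameter names follow common provider conventions and are assumptions, not confirmed by this page:

```python
# Hypothetical LLM completion request configuration.
# Hyperparameter names are illustrative, not confirmed by the docs.
model_config = {
    "provider": "openai",        # one of: openai, anthropic, azure
    "model": "gpt-3.5-turbo",    # the model that will complete your prompt
    "hyperparameters": {         # key/value pairs of model hyperparameters
        "temperature": 0.7,
        "max_tokens": 256,
    },
}
```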