POST /api/parea/v1/deployed-prompt
from parea import Parea
from parea.schemas.models import UseDeployedPrompt, UseDeployedPromptResponse

p = Parea(api_key="PAREA_API_KEY")  # replace with your API key

def main() -> UseDeployedPromptResponse:
    return p.get_prompt(
        UseDeployedPrompt(
            deployment_id="p-qZrnFesaeCpqcXJ_yL3wi",
            llm_inputs={"x": "Python", "y": "Flask"},
        )
    )
Example response:

{
    "deployment_id": "p-qZrnFesaeCpqcXJ_yL3wi",
    "version_number": 1.0,
    "name": "hello world",
    "functions": [],
    "function_call": null,
    "prompt": {
        "raw_messages": [{
            "role": "user",
            "content": "I want a Hello World program in {{x}}. Using the {{y}} framework."
        }],
        "messages": [{
            "content": "I want a Hello World program in Python. Using the Flask framework.",
            "role": "user"
        }],
        "inputs": {
            "x": "Python",
            "y": "Flask"
        }
    },
    "model": "gpt-3.5-turbo-0613",
    "provider": "openai",
    "model_params": {
        "temp": 0.5,
        "top_p": 1.0,
        "max_length": null,
        "presence_penalty": 0.0,
        "frequency_penalty": 0.0,
        "response_format": null
    }
}
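The `prompt.messages` array in the response holds the fully rendered chat messages, ready to forward to the model provider. A minimal sketch, using plain dict manipulation on a payload shaped like the example above (this helper is illustrative, not part of the Parea SDK):

```python
# `payload` mirrors the example response above.
payload = {
    "model": "gpt-3.5-turbo-0613",
    "prompt": {
        "messages": [{
            "role": "user",
            "content": "I want a Hello World program in Python. Using the Flask framework.",
        }],
    },
}

def to_chat_request(payload: dict) -> dict:
    """Build the argument dict for a chat-completion call from a deployed-prompt response."""
    return {
        "model": payload["model"],
        "messages": payload["prompt"]["messages"],
    }

req = to_chat_request(payload)
print(req["model"])          # gpt-3.5-turbo-0613
print(len(req["messages"]))  # 1
```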

Body

deployment_id
string
required

This is the ID for a specific deployed prompt. You can find your deployed prompts on the Deployments tab.

llm_inputs
dict

If you would like to fill in your prompt template with inputs, you can provide them as a dictionary. The keys should match the names of the deployed prompt template’s variables.
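For illustration, the `{{variable}}` placeholders behave like simple string substitution against the `llm_inputs` keys. A minimal sketch of that behavior (this is not the server’s actual templating engine):

```python
import re

def render_template(template: str, llm_inputs: dict) -> str:
    # Replace each {{key}} placeholder with the matching value from llm_inputs;
    # placeholders with no matching key are left untouched.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(llm_inputs.get(m.group(1), m.group(0))),
        template,
    )

raw = "I want a Hello World program in {{x}}. Using the {{y}} framework."
print(render_template(raw, {"x": "Python", "y": "Flask"}))
# I want a Hello World program in Python. Using the Flask framework.
```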

Response

deployment_id
string

This is the ID for a specific deployed prompt. You can find your deployed prompts on the Deployments tab.

version_number
float

This is the version number of the deployed prompt.

name
string

A name for this completion. Will be visible in logs for filtering.

functions
list string

A list of functions the model may generate JSON inputs for.

function_call
string

Controls how the model responds to function calls. “auto” means the model picks between responding to the end user or calling a function. To force a call to a particular function, use a dictionary with key=“name” and value=“my_function”.
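The two accepted shapes side by side, as plain Python literals (“my_function” is just the placeholder name from the description above):

```python
# Let the model decide between responding to the end user or calling a function:
function_call_auto = "auto"

# Force the model to call one specific function:
function_call_forced = {"name": "my_function"}
```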

prompt
object

LLM completion request configuration

model_params
dict

Key value pairs for the model hyper parameters to use for this completion.
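Note that the keys use Parea’s names (`temp`, `max_length`) rather than OpenAI’s (`temperature`, `max_tokens`). If you forward these params to a provider yourself, you would translate them first; a hedged sketch under that assumption (the rename table below is illustrative, not confirmed by the SDK):

```python
# Map Parea-style model_params onto OpenAI-style keyword arguments.
# The rename table is an assumption for illustration.
RENAMES = {"temp": "temperature", "max_length": "max_tokens"}

def to_openai_kwargs(model_params: dict) -> dict:
    return {
        RENAMES.get(key, key): value
        for key, value in model_params.items()
        if value is not None  # drop unset params like max_length / response_format
    }

params = {
    "temp": 0.5,
    "top_p": 1.0,
    "max_length": None,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "response_format": None,
}
print(to_openai_kwargs(params))
# {'temperature': 0.5, 'top_p': 1.0, 'presence_penalty': 0.0, 'frequency_penalty': 0.0}
```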

model
string

The model that will complete your prompt. Ex. gpt-3.5-turbo

provider
string

Supported model providers: openai, anthropic, azure
