Take advantage of Parea’s version management with deployed prompts. Deploying a prompt makes it available via the SDK. All prompt parameters are bundled with a deployment, including function call schemas and model parameters.

Prerequisites

  1. First, you’ll need a Parea API key. See Authentication to get started.
  2. For any model you want to use with the SDK, set up your Provider API keys.
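In practice, both keys are usually exposed to your code as environment variables. The variable names below are a sketch, not authoritative (`PAREA_API_KEY` follows the Parea SDK examples; provider key names vary by provider):

```shell
# Hedged example — variable names are assumptions; check your provider's docs.
export PAREA_API_KEY="your-parea-api-key"
export OPENAI_API_KEY="your-openai-api-key"   # provider key, e.g. for OpenAI models
```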

How to deploy a prompt

Visit the Playground to view or create prompts. After testing your prompt, you can click the blue rocket icon to deploy it.

You must save your prompt before deploying it.

DeployPrompt

Afterward, provide a name for the deployment and copy the deployment ID. (Don’t worry; you can retrieve the deployment ID later from the Deployments tab.)

DeployModal

Prompt deployments are pinned to a specific version. In the Playground, saving a change to a prompt or its parameters automatically creates a new version. You can bump or roll back the deployed version while keeping the same deployment ID, which makes it easy to change your prompt without updating your code.

Bumping / reverting a deployment

In the Playground, if a prompt has been previously deployed and you click the rocket icon again, you will be asked to confirm whether to bump the deployment (if the current version is newer than the deployed one) or revert it (if the current version is older). You can also create an entirely new deployment with a different name and ID.
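The bump-or-revert choice is just a comparison between the version you are deploying and the version currently live under the same deployment ID. An illustrative sketch (not the Parea SDK, and the function name is hypothetical):

```python
# Illustrative only: which confirmation the Playground would ask for.
def deploy_action(current_version: int, deployed_version: int) -> str:
    """Compare the version being deployed against the live one."""
    if current_version > deployed_version:
        return "bump"    # move the deployment forward to a newer version
    if current_version < deployed_version:
        return "revert"  # roll the deployment back to an older version
    return "no-op"       # already deployed at this version

print(deploy_action(3, 2))  # → bump
```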

BumpRevert

How to use a deployed prompt

You can interact with your deployed prompts in two ways: use the prompt directly, or fetch the prompt’s data to use in your code.

Use deployed prompt

Using the completion endpoint, you can set the deployment_id parameter and provide the prompt template inputs.

import os

from parea import Parea
from parea.schemas.models import Completion, CompletionResponse

p = Parea(api_key=os.getenv("PAREA_API_KEY"))


def deployed_critic(argument: str) -> CompletionResponse:
    return p.completion(
        Completion(
            deployment_id="p-PSOwRyIPaQRq4xQW3MbpV",
            llm_inputs={"argument": argument},
        )
    )

Fetch deployed prompt

You can fetch the prompt data and use it in your code via the get deployed prompt endpoint. The response includes the raw prompt, the prompt filled in with the provided inputs, and other metadata.

from parea.schemas.models import UseDeployedPrompt, UseDeployedPromptResponse


# assumes `p` is an initialized Parea client
def get_critic_prompt(argument: str) -> UseDeployedPromptResponse:
    return p.get_prompt(
        UseDeployedPrompt(
            deployment_id="p-PSOwRyIPaQRq4xQW3MbpV",
            llm_inputs={"argument": argument},
        )
    )

# Response:
# UseDeployedPromptResponse(
#     deployment_id="p-PSOwRyIPaQRq4xQW3MbpV",
#     name="critic-2",
#     functions=[],
#     function_call=None,
#     prompt=Prompt(
#         raw_messages=[
#             {
#                 "role": "system",
#                 "content": "You are a critic.\n"
#                            "What unresolved questions or criticism do you have "
#                            "after reading the following argument?"
#                            "\nProvide a concise summary of your feedback.",
#             },
#             {"role": "system", "content": "Argument: {{argument}}"},
#         ],
#         messages=[
#             {
#                 "content": "You are a critic.\n"
#                            "What unresolved questions or criticism do you have "
#                            "after reading the following argument?"
#                            "\nProvide a concise summary of your feedback.",
#                 "role": "system",
#             },
#             {"content": "Argument: Hello World", "role": "system"},
#         ],
#         inputs={"argument": "Hello World"},
#     ),
#     model="gpt-3.5-turbo-0613",
#     provider="openai",
#     model_params={"temp": 0.0, "top_p": 1.0, "max_length": None, "presence_penalty": 0.0, "frequency_penalty": 0.0},
# )
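The relationship between `raw_messages` and `messages` in the response is simple template substitution: each `{{variable}}` placeholder is replaced with the matching entry from `llm_inputs`. A minimal stdlib sketch of that substitution (illustrative only, not Parea’s actual implementation):

```python
import re

def fill_template(raw_messages: list[dict], inputs: dict) -> list[dict]:
    """Replace {{name}} placeholders in each message's content with inputs[name]."""
    def fill(text: str) -> str:
        return re.sub(r"\{\{(\w+)\}\}", lambda m: str(inputs[m.group(1)]), text)
    return [{**msg, "content": fill(msg["content"])} for msg in raw_messages]

raw = [{"role": "system", "content": "Argument: {{argument}}"}]
filled = fill_template(raw, {"argument": "Hello World"})
# filled[0]["content"] == "Argument: Hello World"
```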

More examples

View Deployed Prompts and Trace Logs

deployment_table

You can view your deployed prompts and their associated trace logs in the Deployments tab.

On the detailed deployment view, you can see the prompt’s parameters and recent trace logs.

DeploymentView