Stream Completion
Get a completion response either by using one of your organization's deployed prompts, or by providing the full completion details, including the prompt and inputs, in the request.
This endpoint acts as an LLM gateway/proxy to generate completions from different LLMs.
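For orientation, the request flow looks roughly like the sketch below. This is a hedged illustration, not an official client: the host, path, and API-key header name are assumptions, so substitute the values from your Parea account.

```python
# Minimal sketch of calling this endpoint over HTTP and printing the stream.
# The host, path, and API-key header name are illustrative assumptions;
# replace them with the values from your Parea account.
import os

import requests

URL = "https://YOUR-PAREA-HOST/completion/stream"     # hypothetical host/path
headers = {"x-api-key": os.environ["PAREA_API_KEY"]}  # header name assumed

body = {
    "project_name": "my-project",
    "llm_configuration": {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Say hello."}],
    },
}

with requests.post(URL, json=body, headers=headers, stream=True) as resp:
    resp.raise_for_status()
    # Print streamed chunks as they arrive.
    for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
        print(chunk, end="", flush=True)
```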
Authorizations
Body
Key-value pairs used as inputs to the prompt template. Only needs to be provided if deployment_id is provided or llm_configuration.messages are templated.
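To illustrate templated messages, here is a sketch of a request body where a value from inputs fills a placeholder in llm_configuration.messages. The {{city}} placeholder syntax is an assumption for illustration; check the prompt-template documentation for the exact syntax.

```python
# Sketch: a value from `inputs` is substituted into a templated message.
# The {{city}} placeholder syntax is assumed for illustration.
body = {
    "inputs": {"city": "Lisbon"},  # key-value pairs for the template
    "llm_configuration": {
        "model": "gpt-4o",
        "messages": [
            {"role": "user", "content": "What is the weather like in {{city}}?"}
        ],
    },
}
```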
LLM configuration parameters, such as messages and functions.
Project name which is used to associate the log with a project.
Project UUID which is used to associate the log with a project. Does not need to be provided if the project_name is provided.
Experiment UUID which is used to associate the log with an experiment.
UUID of the parent log. If given, will be used to associate the generation in a chain and create hierarchical nested logs.
UUID of the root log. If given, will be used to associate the generation in a chain and create hierarchical nested logs.
Special field to track the end user who is interacting with your LLM app.
This is the ID for a specific deployed prompt. You can find your deployed prompts on the Deployments tab. If a deployment_id is provided, Parea will fetch all of the associated configuration, including the model name, model parameters, and any associated functions. Any information provided in the llm_configuration field will take precedence over the corresponding fields of the deployed prompt.
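As a sketch of the override behavior just described, the body below references a deployed prompt and overrides only one model parameter. The deployment ID value and the key inside model_params are hypothetical placeholders.

```python
# Sketch: use a deployed prompt's configuration, overriding one model
# parameter. The ID and the "temperature" key are assumed for illustration.
body = {
    "deployment_id": "p-XXXXXXXX",     # from the Deployments tab
    "inputs": {"city": "Lisbon"},      # fills the deployed prompt's template
    "llm_configuration": {
        "model_params": {"temperature": 0.2},  # takes precedence over the deployed value
    },
}
```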
List of evaluation metric IDs deployed on Parea which should be used to evaluate the completion output.
Key-value pairs to be associated with the log.
List of tags to be associated with the log.
Optional ground truth output for the inputs. Will be used for evaluation and can be used when creating a test case from the log.
UUID of the generation log. If not given, will be auto-generated.
Name of the generation log. If not given, will be auto-generated in the format llm-{provider}.
Provider API key used to generate the response. If not given, API keys saved on the platform will be used.
If true, the completion will be cached to avoid latency and cost for any subsequent completion using the same inputs.
If true, the inputs, llm_configuration.messages, llm_configuration.functions, and llm_configuration.model_params will not be logged.
If true, the generated response will not be logged.
Equivalent to setting both log_omit_inputs and log_omit_outputs to true.
If specified, this log and its entire associated trace will be logged with this probability. Must be between 0 and 1 (inclusive). Defaults to 1.0 (i.e., all logs are kept).
0 <= x <= 1
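Putting the logging controls together, a request body might combine the omission flag and the sampling rate as in the sketch below (field values are illustrative):

```python
# Sketch: privacy and sampling controls for logging.
body = {
    "llm_configuration": {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Say hello."}],
    },
    "log_omit": True,         # same as log_omit_inputs=True and log_omit_outputs=True
    "log_sample_rate": 0.25,  # keep roughly one in four traces; must be in [0, 1]
}
```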
Deprecated field which is the same as trace_id.
Deprecated field.
Deprecated field.
Deprecated field.
Deprecated field. Use /completion/stream instead.
Response
The response is of type any.