Class Prompt (1.95.1)
Note: Some or all of the information on this page might not apply to Trusted Cloud. For a list of services that are available in Trusted Cloud, see Services available for Trusted Cloud.
Prompt(
    prompt_data: typing.Optional[PartsType] = None,
    *,
    variables: typing.Optional[typing.List[typing.Dict[str, PartsType]]] = None,
    prompt_name: typing.Optional[str] = None,
    generation_config: typing.Optional[
        vertexai.generative_models._generative_models.GenerationConfig
    ] = None,
    model_name: typing.Optional[str] = None,
    safety_settings: typing.Optional[
        vertexai.generative_models._generative_models.SafetySetting
    ] = None,
    system_instruction: typing.Optional[PartsType] = None,
    tools: typing.Optional[
        typing.List[vertexai.generative_models._generative_models.Tool]
    ] = None,
    tool_config: typing.Optional[
        vertexai.generative_models._generative_models.ToolConfig
    ] = None
)
A prompt which may be a template with variables.

The `Prompt` class allows users to define a template string with variables represented in curly braces, e.g. `{variable}`. Each variable name must be a valid Python identifier (no spaces; it must start with a letter). These placeholders can be replaced with specific values using the `assemble_contents` method, providing flexibility in generating dynamic prompts.
Usage:
Generate content from a single set of variables:
```
prompt = Prompt(
    prompt_data="Hello, {name}! Today is {day}. How are you?",
    variables=[{"name": "Alice", "day": "Monday"}],
    generation_config=GenerationConfig(
        temperature=0.1,
        top_p=0.95,
        top_k=20,
        candidate_count=1,
        max_output_tokens=100,
    ),
    model_name="gemini-1.0-pro-002",
    safety_settings=[SafetySetting(
        category=SafetySetting.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=SafetySetting.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        method=SafetySetting.HarmBlockMethod.SEVERITY,
    )],
    system_instruction="Please answer in a short sentence.",
)

# Generate content using the assembled prompt.
prompt.generate_content(
    contents=prompt.assemble_contents(**prompt.variables[0])
)
```
Generate content with multiple sets of variables:
```
prompt = Prompt(
    prompt_data="Hello, {name}! Today is {day}. How are you?",
    variables=[
        {"name": "Alice", "day": "Monday"},
        {"name": "Bob", "day": "Tuesday"},
    ],
    generation_config=GenerationConfig(
        temperature=0.1,
        top_p=0.95,
        top_k=20,
        candidate_count=1,
        max_output_tokens=100,
    ),
    model_name="gemini-1.0-pro-002",
    safety_settings=[SafetySetting(
        category=SafetySetting.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=SafetySetting.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        method=SafetySetting.HarmBlockMethod.SEVERITY,
    )],
    system_instruction="Please answer in a short sentence.",
)

# Generate content using the assembled prompt for each variable set.
for i in range(len(prompt.variables)):
    prompt.generate_content(
        contents=prompt.assemble_contents(**prompt.variables[i])
    )
```
Methods
Prompt
Prompt(
    prompt_data: typing.Optional[PartsType] = None,
    *,
    variables: typing.Optional[typing.List[typing.Dict[str, PartsType]]] = None,
    prompt_name: typing.Optional[str] = None,
    generation_config: typing.Optional[
        vertexai.generative_models._generative_models.GenerationConfig
    ] = None,
    model_name: typing.Optional[str] = None,
    safety_settings: typing.Optional[
        vertexai.generative_models._generative_models.SafetySetting
    ] = None,
    system_instruction: typing.Optional[PartsType] = None,
    tools: typing.Optional[
        typing.List[vertexai.generative_models._generative_models.Tool]
    ] = None,
    tool_config: typing.Optional[
        vertexai.generative_models._generative_models.ToolConfig
    ] = None
)
Initializes the Prompt with the given prompt data and variables.
__repr__
__repr__() -> str
Returns a string representation of the unassembled prompt.
__str__
__str__() -> str
Returns the prompt data as a string, without any variables replaced.
assemble_contents
assemble_contents(
    **variables_dict: PartsType,
) -> typing.List[vertexai.generative_models._generative_models.Content]
Returns the prompt data, as a List[Content], assembled with variables if applicable.
The result can be passed to model.generate_content to make API calls.
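Since calling the real method requires the Vertex AI SDK, the substitution behavior can be illustrated with a plain-Python sketch. The `assemble` helper below is hypothetical (not part of the SDK) and returns a string rather than `List[Content]`; it mirrors the documented rule that each placeholder must be a valid Python identifier:

```python
import string


def assemble(template: str, **variables: str) -> str:
    """Hypothetical stand-in for assemble_contents, for illustration only.

    Substitutes {placeholder} fields with the supplied keyword values.
    """
    # Validate that every {placeholder} is a valid Python identifier,
    # as required by the Prompt class docs.
    for _, field, _, _ in string.Formatter().parse(template):
        if field is not None and not field.isidentifier():
            raise ValueError(f"invalid variable name: {field!r}")
    return template.format(**variables)


print(assemble("Hello, {name}! Today is {day}.", name="Alice", day="Monday"))
# Hello, Alice! Today is Monday.
```

A template such as `"Hello, {bad name}!"` would raise `ValueError`, because `bad name` is not a valid identifier.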
generate_content
generate_content(
    contents: ContentsType,
    *,
    generation_config: typing.Optional[GenerationConfigType] = None,
    safety_settings: typing.Optional[SafetySettingsType] = None,
    model_name: typing.Optional[str] = None,
    tools: typing.Optional[
        typing.List[vertexai.generative_models._generative_models.Tool]
    ] = None,
    tool_config: typing.Optional[
        vertexai.generative_models._generative_models.ToolConfig
    ] = None,
    stream: bool = False,
    system_instruction: typing.Optional[PartsType] = None
) -> typing.Union[
    vertexai.generative_models._generative_models.GenerationResponse,
    typing.Iterable[vertexai.generative_models._generative_models.GenerationResponse],
]
Generates content using the saved Prompt configs.
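Per the signature above, the return type depends on `stream`: a single response when `stream=False`, or an iterable of partial responses when `stream=True`. A real call requires the SDK and credentials, so the sketch below uses hypothetical stand-ins (`FakeResponse`, `fake_generate_content`) purely to show how callers consume each shape:

```python
from typing import Iterable, Union


class FakeResponse:
    """Hypothetical stand-in for GenerationResponse, exposing only .text."""

    def __init__(self, text: str) -> None:
        self.text = text


def fake_generate_content(
    contents: str, *, stream: bool = False
) -> Union[FakeResponse, Iterable[FakeResponse]]:
    # Stand-in for Prompt.generate_content: no API call is made here.
    if stream:
        # Streaming mode returns an iterable of partial responses.
        return iter([FakeResponse("Hello, "), FakeResponse("Alice!")])
    return FakeResponse("Hello, Alice!")


# Non-streaming: a single response object.
response = fake_generate_content("Hello, {name}!")
print(response.text)  # Hello, Alice!

# Streaming: accumulate chunks as they arrive.
chunks = fake_generate_content("Hello, {name}!", stream=True)
print("".join(chunk.text for chunk in chunks))  # Hello, Alice!
```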
get_unassembled_prompt_data
get_unassembled_prompt_data() -> PartsType
Returns the prompt data, without any variables replaced.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-28 UTC.