```python
CodeGenerationModel(model_id: str, endpoint_name: typing.Optional[str] = None)
```

A language model that generates code.
Examples
Getting answers:
```python
generation_model = CodeGenerationModel.from_pretrained("code-bison@001")
print(generation_model.predict(
    prefix="Write a function that checks if a year is a leap year.",
))
```
Code completion:

```python
completion_model = CodeGenerationModel.from_pretrained("code-gecko@001")
print(completion_model.predict(
    prefix="def reverse_string(s):",
))
```
Methods
CodeGenerationModel
```python
CodeGenerationModel(model_id: str, endpoint_name: typing.Optional[str] = None)
```

Creates a LanguageModel.
This constructor should not be called directly.
Use `LanguageModel.from_pretrained(model_name=...)` instead.
from_pretrained
```python
from_pretrained(model_name: str) -> vertexai._model_garden._model_garden_models.T
```

Loads a _ModelGardenModel.
Exceptions

| Type | Description |
|---|---|
| ValueError | If `model_name` is unknown. |
| ValueError | If the model does not support this class. |
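For example, a minimal sketch of catching these exceptions (assumes `vertexai.init(project=..., location=...)` has already been called; the model name is taken from the examples above):

```python
from vertexai.language_models import CodeGenerationModel

try:
    model = CodeGenerationModel.from_pretrained("code-bison@001")
except ValueError as err:
    # Raised if the model name is unknown, or if the named model
    # is not compatible with CodeGenerationModel.
    print(f"Could not load model: {err}")
```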
predict
```python
predict(
    prefix: str,
    suffix: typing.Optional[str] = None,
    *,
    max_output_tokens: typing.Optional[int] = None,
    temperature: typing.Optional[float] = None,
    stop_sequences: typing.Optional[typing.List[str]] = None
) -> vertexai.language_models.TextGenerationResponse
```

Gets model response for a single prompt.
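For example, a minimal sketch of a blocking call (the parameter values are illustrative assumptions, not recommended defaults):

```python
from vertexai.language_models import CodeGenerationModel

model = CodeGenerationModel.from_pretrained("code-bison@001")
response = model.predict(
    prefix="Write a function that checks if a year is a leap year.",
    max_output_tokens=256,       # assumed token budget
    temperature=0.2,             # lower values yield more deterministic code
    stop_sequences=["\n\n\n"],   # assumed stop marker
)
print(response.text)
```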
predict_async
```python
predict_async(
    prefix: str,
    suffix: typing.Optional[str] = None,
    *,
    max_output_tokens: typing.Optional[int] = None,
    temperature: typing.Optional[float] = None,
    stop_sequences: typing.Optional[typing.List[str]] = None
) -> vertexai.language_models.TextGenerationResponse
```

Asynchronously gets model response for a single prompt.
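For example, a minimal sketch of calling the model from an async context (same assumptions as above; the token budget is arbitrary):

```python
import asyncio

from vertexai.language_models import CodeGenerationModel

async def main() -> None:
    model = CodeGenerationModel.from_pretrained("code-gecko@001")
    # predict_async returns a coroutine; await it to get the response.
    response = await model.predict_async(
        prefix="def reverse_string(s):",
        max_output_tokens=64,  # assumed budget for a short completion
    )
    print(response.text)

asyncio.run(main())
```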
predict_streaming
```python
predict_streaming(
    prefix: str,
    suffix: typing.Optional[str] = None,
    *,
    max_output_tokens: typing.Optional[int] = None,
    temperature: typing.Optional[float] = None,
    stop_sequences: typing.Optional[typing.List[str]] = None
) -> typing.Iterator[vertexai.language_models.TextGenerationResponse]
```

Predicts the code based on previous code.
The result is a stream (generator) of partial responses.
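For example, a minimal sketch that prints partial responses as they arrive (parameter values are illustrative assumptions):

```python
from vertexai.language_models import CodeGenerationModel

model = CodeGenerationModel.from_pretrained("code-bison@001")
for chunk in model.predict_streaming(
    prefix="Write a function that checks if a year is a leap year.",
    max_output_tokens=256,  # assumed token budget
):
    # Each chunk is a partial TextGenerationResponse.
    print(chunk.text, end="", flush=True)
print()
```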