Reference documentation and code samples for the Google Cloud AI Platform V1 Client class ComputeTokensRequest.
Request message for ComputeTokens RPC call.
Generated from protobuf message `google.cloud.aiplatform.v1.ComputeTokensRequest`
Namespace

Google \ Cloud \ AIPlatform \ V1

Methods
__construct
Constructor.
| Name | Description |
|---|---|
| `data` | `mixed` |
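As a brief sketch (assuming the `google/cloud-aiplatform` package is installed; the project, location, and endpoint values below are placeholders), the message can be populated through the constructor's `data` array:

```php
<?php
// Hypothetical example: populating ComputeTokensRequest via the constructor.
// The endpoint name is a placeholder, and the instance payload must match
// the prediction schema of the target model.
use Google\Cloud\AIPlatform\V1\ComputeTokensRequest;
use Google\Protobuf\Value;

$instance = new Value();
$instance->setStringValue('How many tokens does this sentence use?');

$request = new ComputeTokensRequest([
    'endpoint'  => 'projects/my-project/locations/us-central1/endpoints/my-endpoint',
    'instances' => [$instance],
]);
```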
getEndpoint
Required. The name of the Endpoint requested to get lists of tokens and token ids.
| Type | Description |
|---|---|
| `string` | |
setEndpoint
Required. The name of the Endpoint requested to get lists of tokens and token ids.
| Name | Description |
|---|---|
| `var` | `string` |

| Type | Description |
|---|---|
| `$this` | |
getInstances
Optional. The instances that are the input to the token-computing API call.
The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.
| Type | Description |
|---|---|
| `Google\Protobuf\RepeatedField<Google\Protobuf\Value>` | |
setInstances
Optional. The instances that are the input to the token-computing API call.
The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.
| Name | Description |
|---|---|
| `var` | `array<Google\Protobuf\Value>` |

| Type | Description |
|---|---|
| `$this` | |
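Because each setter returns `$this`, a request can also be built fluently. A minimal sketch (the endpoint name is a placeholder, and the instance payload is assumed to match the model's prediction schema):

```php
<?php
use Google\Cloud\AIPlatform\V1\ComputeTokensRequest;
use Google\Protobuf\Value;

// Hypothetical instance payload; real payloads follow the model's schema.
$instance = new Value();
$instance->setStringValue('Sample text to tokenize.');

// Setters return $this, so calls can be chained.
$request = (new ComputeTokensRequest())
    ->setEndpoint('projects/my-project/locations/us-central1/endpoints/my-endpoint')
    ->setInstances([$instance]);
```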
getModel
Optional. The name of the publisher model requested to serve the prediction. Format: `projects/{project}/locations/{location}/publishers/*/models/*`
| Type | Description |
|---|---|
| `string` | |
setModel
Optional. The name of the publisher model requested to serve the prediction. Format: `projects/{project}/locations/{location}/publishers/*/models/*`
| Name | Description |
|---|---|
| `var` | `string` |

| Type | Description |
|---|---|
| `$this` | |
getContents
Optional. Input content.
| Type | Description |
|---|---|
| `Google\Protobuf\RepeatedField<Content>` | |
setContents
Optional. Input content.
| Name | Description |
|---|---|
| `var` | `array<Content>` |

| Type | Description |
|---|---|
| `$this` | |
static::build
| Name | Description |
|---|---|
| `endpoint` | `string`. Required. The name of the Endpoint requested to get lists of tokens and token ids. Please see LlmUtilityServiceClient::endpointName() for help formatting this field. |
| `instances` | `array<Google\Protobuf\Value>`. Optional. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models. |
| Type | Description |
|---|---|
| `ComputeTokensRequest` | |
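As a sketch of the `build` helper (the project, location, and endpoint IDs below are placeholders), `LlmUtilityServiceClient::endpointName()` can be used to format the `endpoint` argument:

```php
<?php
use Google\Cloud\AIPlatform\V1\Client\LlmUtilityServiceClient;
use Google\Cloud\AIPlatform\V1\ComputeTokensRequest;
use Google\Protobuf\Value;

// Format the fully qualified endpoint name from its components.
$endpoint = LlmUtilityServiceClient::endpointName(
    'my-project', 'us-central1', 'my-endpoint'
);

// Hypothetical instance payload; real payloads follow the model's schema.
$instance = new Value();
$instance->setStringValue('Text whose tokens we want to compute.');

$request = ComputeTokensRequest::build($endpoint, [$instance]);
```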