public static final class InferenceParameter.Builder extends GeneratedMessageV3.Builder<InferenceParameter.Builder> implements InferenceParameterOrBuilder

The parameters of inference.

Protobuf type `google.cloud.dialogflow.v2beta1.InferenceParameter`
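
A minimal usage sketch, assuming the google-cloud-dialogflow artifact is on the classpath; the wrapper class is only for runnability, and the parameter values are illustrative, chosen from within the documented ranges:

```java
import com.google.cloud.dialogflow.v2beta1.InferenceParameter;

public class InferenceParameterExample {
  public static void main(String[] args) {
    // All four fields are optional; unset fields fall back to server-side defaults.
    InferenceParameter params =
        InferenceParameter.newBuilder()
            .setMaxOutputTokens(256) // cap generator output length
            .setTemperature(0.2)     // lower = less random
            .setTopK(20)             // within the documented [1, 40]
            .setTopP(0.8)            // within the documented [0.0, 1.0]
            .build();
    System.out.println(params);
  }
}
```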
Inheritance
Object > AbstractMessageLite.Builder<MessageType,BuilderType> > AbstractMessage.Builder<BuilderType> > GeneratedMessageV3.Builder > InferenceParameter.Builder

Implements
InferenceParameterOrBuilder

Static Methods
getDescriptor()
public static final Descriptors.Descriptor getDescriptor()

Returns
| Type | Description |
|---|---|
| Descriptor | |
Methods
addRepeatedField(Descriptors.FieldDescriptor field, Object value)
public InferenceParameter.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)

Parameters
| Name | Description |
|---|---|
| field | FieldDescriptor |
| value | Object |

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | |
build()
public InferenceParameter build()

Returns
| Type | Description |
|---|---|
| InferenceParameter | |
buildPartial()
public InferenceParameter buildPartial()

Returns
| Type | Description |
|---|---|
| InferenceParameter | |
clear()
public InferenceParameter.Builder clear()

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | |
clearField(Descriptors.FieldDescriptor field)
public InferenceParameter.Builder clearField(Descriptors.FieldDescriptor field)

Parameter
| Name | Description |
|---|---|
| field | FieldDescriptor |

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | |
clearMaxOutputTokens()
public InferenceParameter.Builder clearMaxOutputTokens()

Optional. Maximum number of output tokens for the generator.

`optional int32 max_output_tokens = 1 [(.google.api.field_behavior) = OPTIONAL];`

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | This builder for chaining. |
clearOneof(Descriptors.OneofDescriptor oneof)
public InferenceParameter.Builder clearOneof(Descriptors.OneofDescriptor oneof)

Parameter
| Name | Description |
|---|---|
| oneof | OneofDescriptor |

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | |
clearTemperature()
public InferenceParameter.Builder clearTemperature()

Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.

`optional double temperature = 2 [(.google.api.field_behavior) = OPTIONAL];`

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | This builder for chaining. |
clearTopK()
public InferenceParameter.Builder clearTopK()

Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from among the 3 most probable tokens (using temperature). At each token-selection step, the top K tokens with the highest probabilities are sampled, then further filtered based on topP, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [1, 40]; the default is 40.

`optional int32 top_k = 3 [(.google.api.field_behavior) = OPTIONAL];`

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | This builder for chaining. |
clearTopP()
public InferenceParameter.Builder clearTopP()

Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least (within the top K tokens; see the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model selects either A or B as the next token (using temperature) and doesn't consider C. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [0.0, 1.0]; the default is 0.95.

`optional double top_p = 4 [(.google.api.field_behavior) = OPTIONAL];`

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | This builder for chaining. |
clone()
public InferenceParameter.Builder clone()

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | |
getDefaultInstanceForType()
public InferenceParameter getDefaultInstanceForType()

Returns
| Type | Description |
|---|---|
| InferenceParameter | |
getDescriptorForType()
public Descriptors.Descriptor getDescriptorForType()

Returns
| Type | Description |
|---|---|
| Descriptor | |
getMaxOutputTokens()
public int getMaxOutputTokens()

Optional. Maximum number of output tokens for the generator.

`optional int32 max_output_tokens = 1 [(.google.api.field_behavior) = OPTIONAL];`

Returns
| Type | Description |
|---|---|
| int | The maxOutputTokens. |
getTemperature()
public double getTemperature()

Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.

`optional double temperature = 2 [(.google.api.field_behavior) = OPTIONAL];`

Returns
| Type | Description |
|---|---|
| double | The temperature. |
getTopK()
public int getTopK()

Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from among the 3 most probable tokens (using temperature). At each token-selection step, the top K tokens with the highest probabilities are sampled, then further filtered based on topP, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [1, 40]; the default is 40.

`optional int32 top_k = 3 [(.google.api.field_behavior) = OPTIONAL];`

Returns
| Type | Description |
|---|---|
| int | The topK. |
getTopP()
public double getTopP()

Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least (within the top K tokens; see the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model selects either A or B as the next token (using temperature) and doesn't consider C. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [0.0, 1.0]; the default is 0.95.

`optional double top_p = 4 [(.google.api.field_behavior) = OPTIONAL];`

Returns
| Type | Description |
|---|---|
| double | The topP. |
hasMaxOutputTokens()
public boolean hasMaxOutputTokens()

Optional. Maximum number of output tokens for the generator.

`optional int32 max_output_tokens = 1 [(.google.api.field_behavior) = OPTIONAL];`

Returns
| Type | Description |
|---|---|
| boolean | Whether the maxOutputTokens field is set. |
hasTemperature()
public boolean hasTemperature()

Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.

`optional double temperature = 2 [(.google.api.field_behavior) = OPTIONAL];`

Returns
| Type | Description |
|---|---|
| boolean | Whether the temperature field is set. |
hasTopK()
public boolean hasTopK()

Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from among the 3 most probable tokens (using temperature). At each token-selection step, the top K tokens with the highest probabilities are sampled, then further filtered based on topP, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [1, 40]; the default is 40.

`optional int32 top_k = 3 [(.google.api.field_behavior) = OPTIONAL];`

Returns
| Type | Description |
|---|---|
| boolean | Whether the topK field is set. |
hasTopP()
public boolean hasTopP()

Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least (within the top K tokens; see the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model selects either A or B as the next token (using temperature) and doesn't consider C. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [0.0, 1.0]; the default is 0.95.

`optional double top_p = 4 [(.google.api.field_behavior) = OPTIONAL];`

Returns
| Type | Description |
|---|---|
| boolean | Whether the topP field is set. |
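Because all four fields are declared `optional`, the generated message exposes these hazzers alongside the getters; a getter alone cannot distinguish an unset field from an explicit 0. A short sketch (values illustrative):

```java
import com.google.cloud.dialogflow.v2beta1.InferenceParameter;

public class PresenceCheckExample {
  public static void main(String[] args) {
    InferenceParameter params =
        InferenceParameter.newBuilder().setTemperature(0.7).build();

    // hasTemperature() is true only because the field was set explicitly.
    if (params.hasTemperature()) {
      System.out.println("temperature = " + params.getTemperature());
    }
    // topK was never set, so the hazzer reports false even though
    // getTopK() would return 0.
    if (!params.hasTopK()) {
      System.out.println("topK unset; the documented default of 40 applies");
    }
  }
}
```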
      
internalGetFieldAccessorTable()
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()

Returns
| Type | Description |
|---|---|
| FieldAccessorTable | |
isInitialized()
public final boolean isInitialized()

Returns
| Type | Description |
|---|---|
| boolean | |
mergeFrom(InferenceParameter other)
public InferenceParameter.Builder mergeFrom(InferenceParameter other)

Parameter
| Name | Description |
|---|---|
| other | InferenceParameter |

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | |
mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
public InferenceParameter.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)

Parameters
| Name | Description |
|---|---|
| input | CodedInputStream |
| extensionRegistry | ExtensionRegistryLite |

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | |

Exceptions
| Type | Description |
|---|---|
| IOException | |
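A round-trip sketch for this overload, using only standard protobuf runtime classes; serializing first is simply a way to obtain wire bytes to parse back:

```java
import com.google.cloud.dialogflow.v2beta1.InferenceParameter;
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.ExtensionRegistryLite;
import java.io.IOException;

public class MergeFromExample {
  public static void main(String[] args) throws IOException {
    InferenceParameter original =
        InferenceParameter.newBuilder().setTopK(10).setTopP(0.9).build();

    // Serialize, then parse the wire bytes back through the builder.
    byte[] wire = original.toByteArray();
    InferenceParameter.Builder builder = InferenceParameter.newBuilder();
    builder.mergeFrom(
        CodedInputStream.newInstance(wire),
        ExtensionRegistryLite.getEmptyRegistry());

    System.out.println(original.equals(builder.build())); // true
  }
}
```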
mergeFrom(Message other)
public InferenceParameter.Builder mergeFrom(Message other)

Parameter
| Name | Description |
|---|---|
| other | Message |

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | |
mergeUnknownFields(UnknownFieldSet unknownFields)
public final InferenceParameter.Builder mergeUnknownFields(UnknownFieldSet unknownFields)

Parameter
| Name | Description |
|---|---|
| unknownFields | UnknownFieldSet |

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | |
setField(Descriptors.FieldDescriptor field, Object value)
public InferenceParameter.Builder setField(Descriptors.FieldDescriptor field, Object value)

Parameters
| Name | Description |
|---|---|
| field | FieldDescriptor |
| value | Object |

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | |
setMaxOutputTokens(int value)
public InferenceParameter.Builder setMaxOutputTokens(int value)

Optional. Maximum number of output tokens for the generator.

`optional int32 max_output_tokens = 1 [(.google.api.field_behavior) = OPTIONAL];`

Parameter
| Name | Description |
|---|---|
| value | int: The maxOutputTokens to set. |

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | This builder for chaining. |
setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
public InferenceParameter.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)

Parameters
| Name | Description |
|---|---|
| field | FieldDescriptor |
| index | int |
| value | Object |

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | |
setTemperature(double value)
public InferenceParameter.Builder setTemperature(double value)

Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.

`optional double temperature = 2 [(.google.api.field_behavior) = OPTIONAL];`

Parameter
| Name | Description |
|---|---|
| value | double: The temperature to set. |

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | This builder for chaining. |
setTopK(int value)
public InferenceParameter.Builder setTopK(int value)

Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from among the 3 most probable tokens (using temperature). At each token-selection step, the top K tokens with the highest probabilities are sampled, then further filtered based on topP, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [1, 40]; the default is 40.

`optional int32 top_k = 3 [(.google.api.field_behavior) = OPTIONAL];`

Parameter
| Name | Description |
|---|---|
| value | int: The topK to set. |

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | This builder for chaining. |
setTopP(double value)
public InferenceParameter.Builder setTopP(double value)

Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least (within the top K tokens; see the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model selects either A or B as the next token (using temperature) and doesn't consider C. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [0.0, 1.0]; the default is 0.95.

`optional double top_p = 4 [(.google.api.field_behavior) = OPTIONAL];`

Parameter
| Name | Description |
|---|---|
| value | double: The topP to set. |

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | This builder for chaining. |
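
Taken together, the four fields describe a decoding pipeline: keep the top K tokens, filter those by cumulative top-p probability mass, then sample with temperature. The sketch below illustrates only that documented order; it is not Dialogflow's implementation, and every name in it is hypothetical:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;
import java.util.stream.IntStream;

public class SamplingSketch {
  // Returns the index of the sampled token. Assumes temperature > 0.
  static int sample(double[] logits, int topK, double topP, double temperature, Random rng) {
    int n = logits.length;

    // Temperature-scaled softmax over the logits.
    double max = Arrays.stream(logits).max().orElse(0.0);
    double[] p = new double[n];
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
      p[i] = Math.exp((logits[i] - max) / temperature);
      sum += p[i];
    }
    for (int i = 0; i < n; i++) p[i] /= sum;

    // Step 1: keep the top K most probable tokens.
    Integer[] byProb = IntStream.range(0, n).boxed().toArray(Integer[]::new);
    Arrays.sort(byProb, (a, b) -> Double.compare(p[b], p[a]));
    int k = Math.min(topK, n);

    // Step 2: within those, keep the smallest prefix whose probabilities
    // reach the top-p mass.
    List<Integer> kept = new ArrayList<>();
    double cumulative = 0.0;
    for (int i = 0; i < k && cumulative < topP; i++) {
      kept.add(byProb[i]);
      cumulative += p[byProb[i]];
    }

    // Step 3: sample from the surviving tokens, renormalized.
    double r = rng.nextDouble() * cumulative;
    double acc = 0.0;
    for (int token : kept) {
      acc += p[token];
      if (r <= acc) return token;
    }
    return kept.get(kept.size() - 1);
  }

  public static void main(String[] args) {
    double[] logits = {2.0, 1.5, 0.5, -1.0};
    System.out.println(sample(logits, 3, 0.5, 1.0, new Random(42)));
  }
}
```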
      
setUnknownFields(UnknownFieldSet unknownFields)
public final InferenceParameter.Builder setUnknownFields(UnknownFieldSet unknownFields)

Parameter
| Name | Description |
|---|---|
| unknownFields | UnknownFieldSet |

Returns
| Type | Description |
|---|---|
| InferenceParameter.Builder | |