public static final class StreamingDetectIntentRequest.Builder extends GeneratedMessageV3.Builder<StreamingDetectIntentRequest.Builder> implements StreamingDetectIntentRequestOrBuilder

The top-level message sent by the client to the Sessions.StreamingDetectIntent method.
Multiple request messages should be sent in order:
- The first message must contain session, query_input plus optionally query_params. If the client wants to receive an audio response, it should also contain output_audio_config. The message must not contain input_audio.
- If query_input was set to query_input.audio_config, all subsequent messages must contain input_audio to continue with Speech recognition. If you decide to detect an intent from text input instead, after you have already started Speech recognition, please send a message with query_input.text. However, note that:
  - Dialogflow will bill you for the audio duration so far.
  - Dialogflow discards all Speech recognition results in favor of the input text.
  - Dialogflow will use the language code from the first message.
- After you have sent all input, you must half-close or abort the request stream.
 Protobuf type google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest
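The following is a minimal sketch of the message ordering described above, assuming an audio query. The project and session identifiers, audio parameters, and placeholder audio chunks are illustrative assumptions, and the actual gRPC streaming call (for example, a bidirectional stream obtained from SessionsClient) is omitted.

```java
import com.google.cloud.dialogflow.v2beta1.AudioEncoding;
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;
import com.google.cloud.dialogflow.v2beta1.OutputAudioConfig;
import com.google.cloud.dialogflow.v2beta1.OutputAudioEncoding;
import com.google.cloud.dialogflow.v2beta1.QueryInput;
import com.google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest;
import com.google.protobuf.ByteString;
import java.util.ArrayList;
import java.util.List;

public class StreamingRequestOrderExample {
  public static void main(String[] args) {
    // Placeholder session name; replace with your own project and session ID.
    String session = "projects/my-project/agent/sessions/my-session-id";

    // First message: session + query_input (+ optional output_audio_config), no input_audio.
    StreamingDetectIntentRequest first =
        StreamingDetectIntentRequest.newBuilder()
            .setSession(session)
            .setQueryInput(
                QueryInput.newBuilder()
                    .setAudioConfig(
                        InputAudioConfig.newBuilder()
                            .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
                            .setSampleRateHertz(16000)
                            .setLanguageCode("en-US")))
            .setOutputAudioConfig(
                OutputAudioConfig.newBuilder()
                    .setAudioEncoding(OutputAudioEncoding.OUTPUT_AUDIO_ENCODING_LINEAR_16))
            .build();

    // Subsequent messages: input_audio only (total audio must stay under one minute).
    List<StreamingDetectIntentRequest> requests = new ArrayList<>();
    requests.add(first);
    for (byte[] chunk : new byte[][] {new byte[3200], new byte[3200]}) { // placeholder audio
      requests.add(
          StreamingDetectIntentRequest.newBuilder()
              .setInputAudio(ByteString.copyFrom(chunk))
              .build());
    }

    // After the last chunk, half-close the request stream on whatever streaming
    // call you use (for example, onCompleted() on a request observer).
    System.out.println("Built " + requests.size() + " streaming requests");
  }
}
```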
Inheritance
Object > AbstractMessageLite.Builder<MessageType,BuilderType> > AbstractMessage.Builder<BuilderType> > GeneratedMessageV3.Builder > StreamingDetectIntentRequest.Builder

Implements

StreamingDetectIntentRequestOrBuilder

Static Methods
getDescriptor()
public static final Descriptors.Descriptor getDescriptor()

| Returns | |
|---|---|
| Type | Description | 
| Descriptor | |
Methods
addRepeatedField(Descriptors.FieldDescriptor field, Object value)
public StreamingDetectIntentRequest.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)

| Parameters | |
|---|---|
| Name | Description | 
| field | FieldDescriptor | 
| value | Object | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
build()
public StreamingDetectIntentRequest build()

| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest | |
buildPartial()
public StreamingDetectIntentRequest buildPartial()

| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest | |
clear()
public StreamingDetectIntentRequest.Builder clear()

| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
clearEnableDebuggingInfo()
public StreamingDetectIntentRequest.Builder clearEnableDebuggingInfo()

If true, StreamingDetectIntentResponse.debugging_info will get populated.
 bool enable_debugging_info = 8;
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | This builder for chaining. | 
clearField(Descriptors.FieldDescriptor field)
public StreamingDetectIntentRequest.Builder clearField(Descriptors.FieldDescriptor field)

| Parameter | |
|---|---|
| Name | Description | 
| field | FieldDescriptor | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
clearInputAudio()
public StreamingDetectIntentRequest.Builder clearInputAudio()

The input audio content to be recognized. Must be sent if
 query_input was set to a streaming input audio config. The complete audio
 over all streaming messages must not exceed 1 minute.
 bytes input_audio = 6;
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | This builder for chaining. | 
clearOneof(Descriptors.OneofDescriptor oneof)
public StreamingDetectIntentRequest.Builder clearOneof(Descriptors.OneofDescriptor oneof)

| Parameter | |
|---|---|
| Name | Description | 
| oneof | OneofDescriptor | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
clearOutputAudioConfig()
public StreamingDetectIntentRequest.Builder clearOutputAudioConfig()

Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
 .google.cloud.dialogflow.v2beta1.OutputAudioConfig output_audio_config = 5;
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
clearOutputAudioConfigMask()
public StreamingDetectIntentRequest.Builder clearOutputAudioConfigMask()

Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level.
If unspecified or empty, output_audio_config replaces the agent-level config in its entirety.
 .google.protobuf.FieldMask output_audio_config_mask = 7;
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
clearQueryInput()
public StreamingDetectIntentRequest.Builder clearQueryInput()

Required. The input specification. It can be set to:
- an audio config which instructs the speech recognizer how to process the speech audio, 
- a conversational query in the form of text, or 
- an event that specifies which intent to trigger. 
 
 .google.cloud.dialogflow.v2beta1.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
clearQueryParams()
public StreamingDetectIntentRequest.Builder clearQueryParams()

The parameters of this query.
 .google.cloud.dialogflow.v2beta1.QueryParameters query_params = 2;
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
clearSession()
public StreamingDetectIntentRequest.Builder clearSession()

Required. The name of the session the query is sent to. Supported formats:

- projects/<Project ID>/agent/sessions/<Session ID>
- projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>
- projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>
- projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we are using "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters. For more information, see the API interactions guide. Note: Always use agent versions for production traffic. See Versions and environments.
 
 string session = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.resource_reference) = { ... }
 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | This builder for chaining. | 
clearSingleUtterance() (deprecated)
public StreamingDetectIntentRequest.Builder clearSingleUtterance()

Deprecated. google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.single_utterance is deprecated. See google/cloud/dialogflow/v2beta1/session.proto;l=568
 DEPRECATED. Please use
 InputAudioConfig.single_utterance
 instead. If false (default), recognition does not cease until the client
 closes the stream. If true, the recognizer will detect a single spoken
 utterance in input audio. Recognition ceases when it detects the audio's
 voice has stopped or paused. In this case, once a detected intent is
 received, the client should close the stream and start a new request with a
 new stream as needed. This setting is ignored when query_input is a piece
 of text or an event.
 bool single_utterance = 4 [deprecated = true];
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | This builder for chaining. | 
clone()
public StreamingDetectIntentRequest.Builder clone()

| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
getDefaultInstanceForType()
public StreamingDetectIntentRequest getDefaultInstanceForType()

| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest | |
getDescriptorForType()
public Descriptors.Descriptor getDescriptorForType()

| Returns | |
|---|---|
| Type | Description | 
| Descriptor | |
getEnableDebuggingInfo()
public boolean getEnableDebuggingInfo()

If true, StreamingDetectIntentResponse.debugging_info will get populated.
 bool enable_debugging_info = 8;
| Returns | |
|---|---|
| Type | Description | 
| boolean | The enableDebuggingInfo. | 
getInputAudio()
public ByteString getInputAudio()

The input audio content to be recognized. Must be sent if
 query_input was set to a streaming input audio config. The complete audio
 over all streaming messages must not exceed 1 minute.
 bytes input_audio = 6;
| Returns | |
|---|---|
| Type | Description | 
| ByteString | The inputAudio. | 
getOutputAudioConfig()
public OutputAudioConfig getOutputAudioConfig()

Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
 .google.cloud.dialogflow.v2beta1.OutputAudioConfig output_audio_config = 5;
| Returns | |
|---|---|
| Type | Description | 
| OutputAudioConfig | The outputAudioConfig. | 
getOutputAudioConfigBuilder()
public OutputAudioConfig.Builder getOutputAudioConfigBuilder()

Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
 .google.cloud.dialogflow.v2beta1.OutputAudioConfig output_audio_config = 5;
| Returns | |
|---|---|
| Type | Description | 
| OutputAudioConfig.Builder | |
getOutputAudioConfigMask()
public FieldMask getOutputAudioConfigMask()

Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level.
If unspecified or empty, output_audio_config replaces the agent-level config in its entirety.
 .google.protobuf.FieldMask output_audio_config_mask = 7;
| Returns | |
|---|---|
| Type | Description | 
| FieldMask | The outputAudioConfigMask. | 
getOutputAudioConfigMaskBuilder()
public FieldMask.Builder getOutputAudioConfigMaskBuilder()

Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level.
If unspecified or empty, output_audio_config replaces the agent-level config in its entirety.
 .google.protobuf.FieldMask output_audio_config_mask = 7;
| Returns | |
|---|---|
| Type | Description | 
| Builder | |
getOutputAudioConfigMaskOrBuilder()
public FieldMaskOrBuilder getOutputAudioConfigMaskOrBuilder()

Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level.
If unspecified or empty, output_audio_config replaces the agent-level config in its entirety.
 .google.protobuf.FieldMask output_audio_config_mask = 7;
| Returns | |
|---|---|
| Type | Description | 
| FieldMaskOrBuilder | |
getOutputAudioConfigOrBuilder()
public OutputAudioConfigOrBuilder getOutputAudioConfigOrBuilder()

Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
 .google.cloud.dialogflow.v2beta1.OutputAudioConfig output_audio_config = 5;
| Returns | |
|---|---|
| Type | Description | 
| OutputAudioConfigOrBuilder | |
getQueryInput()
public QueryInput getQueryInput()

Required. The input specification. It can be set to:
- an audio config which instructs the speech recognizer how to process the speech audio, 
- a conversational query in the form of text, or 
- an event that specifies which intent to trigger. 
 
 .google.cloud.dialogflow.v2beta1.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
 
| Returns | |
|---|---|
| Type | Description | 
| QueryInput | The queryInput. | 
getQueryInputBuilder()
public QueryInput.Builder getQueryInputBuilder()

Required. The input specification. It can be set to:
- an audio config which instructs the speech recognizer how to process the speech audio, 
- a conversational query in the form of text, or 
- an event that specifies which intent to trigger. 
 
 .google.cloud.dialogflow.v2beta1.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
 
| Returns | |
|---|---|
| Type | Description | 
| QueryInput.Builder | |
getQueryInputOrBuilder()
public QueryInputOrBuilder getQueryInputOrBuilder()

Required. The input specification. It can be set to:
- an audio config which instructs the speech recognizer how to process the speech audio, 
- a conversational query in the form of text, or 
- an event that specifies which intent to trigger. 
 
 .google.cloud.dialogflow.v2beta1.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
 
| Returns | |
|---|---|
| Type | Description | 
| QueryInputOrBuilder | |
getQueryParams()
public QueryParameters getQueryParams()

The parameters of this query.
 .google.cloud.dialogflow.v2beta1.QueryParameters query_params = 2;
| Returns | |
|---|---|
| Type | Description | 
| QueryParameters | The queryParams. | 
getQueryParamsBuilder()
public QueryParameters.Builder getQueryParamsBuilder()

The parameters of this query.
 .google.cloud.dialogflow.v2beta1.QueryParameters query_params = 2;
| Returns | |
|---|---|
| Type | Description | 
| QueryParameters.Builder | |
getQueryParamsOrBuilder()
public QueryParametersOrBuilder getQueryParamsOrBuilder()

The parameters of this query.
 .google.cloud.dialogflow.v2beta1.QueryParameters query_params = 2;
| Returns | |
|---|---|
| Type | Description | 
| QueryParametersOrBuilder | |
getSession()
public String getSession()

Required. The name of the session the query is sent to. Supported formats:

- projects/<Project ID>/agent/sessions/<Session ID>
- projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>
- projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>
- projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we are using "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters. For more information, see the API interactions guide. Note: Always use agent versions for production traffic. See Versions and environments.
 
 string session = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.resource_reference) = { ... }
 
| Returns | |
|---|---|
| Type | Description | 
| String | The session. | 
getSessionBytes()
public ByteString getSessionBytes()

Required. The name of the session the query is sent to. Supported formats:

- projects/<Project ID>/agent/sessions/<Session ID>
- projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>
- projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>
- projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we are using "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters. For more information, see the API interactions guide. Note: Always use agent versions for production traffic. See Versions and environments.
 
 string session = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.resource_reference) = { ... }
 
| Returns | |
|---|---|
| Type | Description | 
| ByteString | The bytes for session. | 
getSingleUtterance() (deprecated)
public boolean getSingleUtterance()

Deprecated. google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.single_utterance is deprecated. See google/cloud/dialogflow/v2beta1/session.proto;l=568
 DEPRECATED. Please use
 InputAudioConfig.single_utterance
 instead. If false (default), recognition does not cease until the client
 closes the stream. If true, the recognizer will detect a single spoken
 utterance in input audio. Recognition ceases when it detects the audio's
 voice has stopped or paused. In this case, once a detected intent is
 received, the client should close the stream and start a new request with a
 new stream as needed. This setting is ignored when query_input is a piece
 of text or an event.
 bool single_utterance = 4 [deprecated = true];
| Returns | |
|---|---|
| Type | Description | 
| boolean | The singleUtterance. | 
hasOutputAudioConfig()
public boolean hasOutputAudioConfig()

Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
 .google.cloud.dialogflow.v2beta1.OutputAudioConfig output_audio_config = 5;
| Returns | |
|---|---|
| Type | Description | 
| boolean | Whether the outputAudioConfig field is set. | 
hasOutputAudioConfigMask()
public boolean hasOutputAudioConfigMask()

Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level.
If unspecified or empty, output_audio_config replaces the agent-level config in its entirety.
 .google.protobuf.FieldMask output_audio_config_mask = 7;
| Returns | |
|---|---|
| Type | Description | 
| boolean | Whether the outputAudioConfigMask field is set. | 
hasQueryInput()
public boolean hasQueryInput()

Required. The input specification. It can be set to:
- an audio config which instructs the speech recognizer how to process the speech audio, 
- a conversational query in the form of text, or 
- an event that specifies which intent to trigger. 
 
 .google.cloud.dialogflow.v2beta1.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
 
| Returns | |
|---|---|
| Type | Description | 
| boolean | Whether the queryInput field is set. | 
hasQueryParams()
public boolean hasQueryParams()

The parameters of this query.
 .google.cloud.dialogflow.v2beta1.QueryParameters query_params = 2;
| Returns | |
|---|---|
| Type | Description | 
| boolean | Whether the queryParams field is set. | 
internalGetFieldAccessorTable()
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()

| Returns | |
|---|---|
| Type | Description | 
| FieldAccessorTable | |
isInitialized()
public final boolean isInitialized()

| Returns | |
|---|---|
| Type | Description | 
| boolean | |
mergeFrom(StreamingDetectIntentRequest other)
public StreamingDetectIntentRequest.Builder mergeFrom(StreamingDetectIntentRequest other)

| Parameter | |
|---|---|
| Name | Description | 
| other | StreamingDetectIntentRequest | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
public StreamingDetectIntentRequest.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)

| Parameters | |
|---|---|
| Name | Description | 
| input | CodedInputStream | 
| extensionRegistry | ExtensionRegistryLite | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
| Exceptions | |
|---|---|
| Type | Description | 
| IOException | |
mergeFrom(Message other)
public StreamingDetectIntentRequest.Builder mergeFrom(Message other)

| Parameter | |
|---|---|
| Name | Description | 
| other | Message | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
mergeOutputAudioConfig(OutputAudioConfig value)
public StreamingDetectIntentRequest.Builder mergeOutputAudioConfig(OutputAudioConfig value)

Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
 .google.cloud.dialogflow.v2beta1.OutputAudioConfig output_audio_config = 5;
| Parameter | |
|---|---|
| Name | Description | 
| value | OutputAudioConfig | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
mergeOutputAudioConfigMask(FieldMask value)
public StreamingDetectIntentRequest.Builder mergeOutputAudioConfigMask(FieldMask value)

Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level.
If unspecified or empty, output_audio_config replaces the agent-level config in its entirety.
 .google.protobuf.FieldMask output_audio_config_mask = 7;
| Parameter | |
|---|---|
| Name | Description | 
| value | FieldMask | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
mergeQueryInput(QueryInput value)
public StreamingDetectIntentRequest.Builder mergeQueryInput(QueryInput value)

Required. The input specification. It can be set to:
- an audio config which instructs the speech recognizer how to process the speech audio, 
- a conversational query in the form of text, or 
- an event that specifies which intent to trigger. 
 
 .google.cloud.dialogflow.v2beta1.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
 
| Parameter | |
|---|---|
| Name | Description | 
| value | QueryInput | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
mergeQueryParams(QueryParameters value)
public StreamingDetectIntentRequest.Builder mergeQueryParams(QueryParameters value)

The parameters of this query.
 .google.cloud.dialogflow.v2beta1.QueryParameters query_params = 2;
| Parameter | |
|---|---|
| Name | Description | 
| value | QueryParameters | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
mergeUnknownFields(UnknownFieldSet unknownFields)
public final StreamingDetectIntentRequest.Builder mergeUnknownFields(UnknownFieldSet unknownFields)

| Parameter | |
|---|---|
| Name | Description | 
| unknownFields | UnknownFieldSet | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
setEnableDebuggingInfo(boolean value)
public StreamingDetectIntentRequest.Builder setEnableDebuggingInfo(boolean value)

If true, StreamingDetectIntentResponse.debugging_info will get populated.
 bool enable_debugging_info = 8;
| Parameter | |
|---|---|
| Name | Description | 
| value | boolean The enableDebuggingInfo to set. | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | This builder for chaining. | 
setField(Descriptors.FieldDescriptor field, Object value)
public StreamingDetectIntentRequest.Builder setField(Descriptors.FieldDescriptor field, Object value)

| Parameters | |
|---|---|
| Name | Description | 
| field | FieldDescriptor | 
| value | Object | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
setInputAudio(ByteString value)
public StreamingDetectIntentRequest.Builder setInputAudio(ByteString value)

The input audio content to be recognized. Must be sent if
 query_input was set to a streaming input audio config. The complete audio
 over all streaming messages must not exceed 1 minute.
 bytes input_audio = 6;
| Parameter | |
|---|---|
| Name | Description | 
| value | ByteString The inputAudio to set. | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | This builder for chaining. | 
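A brief sketch of a follow-up audio message, assuming the first message of the stream (with session and query_input) has already been sent and the imports from the class-level sketch above are in scope; chunk stands in for a buffer read from a microphone or audio file.

```java
byte[] chunk = new byte[3200]; // placeholder audio buffer
StreamingDetectIntentRequest audioMessage =
    StreamingDetectIntentRequest.newBuilder()
        .setInputAudio(ByteString.copyFrom(chunk)) // input_audio only; no other fields
        .build();
```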
setOutputAudioConfig(OutputAudioConfig value)
public StreamingDetectIntentRequest.Builder setOutputAudioConfig(OutputAudioConfig value)

Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
 .google.cloud.dialogflow.v2beta1.OutputAudioConfig output_audio_config = 5;
| Parameter | |
|---|---|
| Name | Description | 
| value | OutputAudioConfig | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
setOutputAudioConfig(OutputAudioConfig.Builder builderForValue)
public StreamingDetectIntentRequest.Builder setOutputAudioConfig(OutputAudioConfig.Builder builderForValue)

Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
 .google.cloud.dialogflow.v2beta1.OutputAudioConfig output_audio_config = 5;
| Parameter | |
|---|---|
| Name | Description | 
| builderForValue | OutputAudioConfig.Builder | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
setOutputAudioConfigMask(FieldMask value)
public StreamingDetectIntentRequest.Builder setOutputAudioConfigMask(FieldMask value)

Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level.
If unspecified or empty, output_audio_config replaces the agent-level config in its entirety.
 .google.protobuf.FieldMask output_audio_config_mask = 7;
| Parameter | |
|---|---|
| Name | Description | 
| value | FieldMask | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
setOutputAudioConfigMask(FieldMask.Builder builderForValue)
public StreamingDetectIntentRequest.Builder setOutputAudioConfigMask(FieldMask.Builder builderForValue)

Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level.
If unspecified or empty, output_audio_config replaces the agent-level config in its entirety.
 .google.protobuf.FieldMask output_audio_config_mask = 7;
| Parameter | |
|---|---|
| Name | Description | 
| builderForValue | Builder | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
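As a sketch of how the mask interacts with output_audio_config, the request below overrides only the speaking rate and keeps the rest of the agent-level speech settings. The path string is an assumption about OutputAudioConfig's field layout, not something stated on this page; SynthesizeSpeechConfig (same package) and com.google.protobuf.FieldMask are assumed imported.

```java
StreamingDetectIntentRequest.Builder builder =
    StreamingDetectIntentRequest.newBuilder()
        // Request-level synthesis override.
        .setOutputAudioConfig(
            OutputAudioConfig.newBuilder()
                .setSynthesizeSpeechConfig(
                    SynthesizeSpeechConfig.newBuilder().setSpeakingRate(1.25)))
        // Only the masked path overrides the agent-level config; other settings are kept.
        .setOutputAudioConfigMask(
            FieldMask.newBuilder().addPaths("synthesize_speech_config.speaking_rate"));
```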
setQueryInput(QueryInput value)
public StreamingDetectIntentRequest.Builder setQueryInput(QueryInput value)

Required. The input specification. It can be set to:
- an audio config which instructs the speech recognizer how to process the speech audio, 
- a conversational query in the form of text, or 
- an event that specifies which intent to trigger. 
 
 .google.cloud.dialogflow.v2beta1.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
 
| Parameter | |
|---|---|
| Name | Description | 
| value | QueryInput | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
setQueryInput(QueryInput.Builder builderForValue)
public StreamingDetectIntentRequest.Builder setQueryInput(QueryInput.Builder builderForValue)

Required. The input specification. It can be set to:
- an audio config which instructs the speech recognizer how to process the speech audio, 
- a conversational query in the form of text, or 
- an event that specifies which intent to trigger. 
 
 .google.cloud.dialogflow.v2beta1.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
 
| Parameter | |
|---|---|
| Name | Description | 
| builderForValue | QueryInput.Builder | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
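A short sketch of switching to a text query after speech recognition has started, as the class description above allows; the query text and language code are placeholders, and TextInput is assumed imported from the same package.

```java
StreamingDetectIntentRequest textMessage =
    StreamingDetectIntentRequest.newBuilder()
        .setQueryInput(
            QueryInput.newBuilder()
                .setText(
                    TextInput.newBuilder()
                        .setText("book a table for two")
                        .setLanguageCode("en-US")))
        .build();
// Per the class description, Dialogflow then discards the Speech recognition results
// so far, bills for the audio already streamed, and keeps the first message's language code.
```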
setQueryParams(QueryParameters value)
public StreamingDetectIntentRequest.Builder setQueryParams(QueryParameters value)

The parameters of this query.
 .google.cloud.dialogflow.v2beta1.QueryParameters query_params = 2;
| Parameter | |
|---|---|
| Name | Description | 
| value | QueryParameters | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
setQueryParams(QueryParameters.Builder builderForValue)
public StreamingDetectIntentRequest.Builder setQueryParams(QueryParameters.Builder builderForValue)

The parameters of this query.
 .google.cloud.dialogflow.v2beta1.QueryParameters query_params = 2;
| Parameter | |
|---|---|
| Name | Description | 
| builderForValue | QueryParameters.Builder | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
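A minimal sketch of attaching request-level query parameters to the first message; the time zone is an illustrative value, and QueryParameters is assumed imported from the same package.

```java
StreamingDetectIntentRequest.Builder builder =
    StreamingDetectIntentRequest.newBuilder()
        .setQueryParams(
            QueryParameters.newBuilder()
                .setTimeZone("America/Los_Angeles")); // placeholder IANA time zone
```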
setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
public StreamingDetectIntentRequest.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)

| Parameters | |
|---|---|
| Name | Description | 
| field | FieldDescriptor | 
| index | int | 
| value | Object | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |
setSession(String value)
public StreamingDetectIntentRequest.Builder setSession(String value)

Required. The name of the session the query is sent to. Supported formats:

- projects/<Project ID>/agent/sessions/<Session ID>
- projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>
- projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>
- projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we are using "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters. For more information, see the API interactions guide. Note: Always use agent versions for production traffic. See Versions and environments.
 
 string session = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.resource_reference) = { ... }
 
| Parameter | |
|---|---|
| Name | Description | 
| value | String The session to set. | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | This builder for chaining. | 
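A sketch of composing the session name in the first supported format; "my-project" is a placeholder project ID, and a random UUID is used so the 36-character limit on the Session ID is respected.

```java
String session =
    String.format(
        "projects/%s/agent/sessions/%s",
        "my-project",                  // placeholder project ID
        java.util.UUID.randomUUID());  // 36-character session ID
StreamingDetectIntentRequest.Builder builder =
    StreamingDetectIntentRequest.newBuilder().setSession(session);
```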
setSessionBytes(ByteString value)
public StreamingDetectIntentRequest.Builder setSessionBytes(ByteString value)

Required. The name of the session the query is sent to. Supported formats:

- projects/<Project ID>/agent/sessions/<Session ID>
- projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>
- projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>
- projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we are using "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters. For more information, see the API interactions guide. Note: Always use agent versions for production traffic. See Versions and environments.
 
 string session = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.resource_reference) = { ... }
 
| Parameter | |
|---|---|
| Name | Description | 
| value | ByteString The bytes for session to set. | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | This builder for chaining. | 
setSingleUtterance(boolean value) (deprecated)
public StreamingDetectIntentRequest.Builder setSingleUtterance(boolean value)

Deprecated. google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.single_utterance is deprecated. See google/cloud/dialogflow/v2beta1/session.proto;l=568
 DEPRECATED. Please use
 InputAudioConfig.single_utterance
 instead. If false (default), recognition does not cease until the client
 closes the stream. If true, the recognizer will detect a single spoken
 utterance in input audio. Recognition ceases when it detects the audio's
 voice has stopped or paused. In this case, once a detected intent is
 received, the client should close the stream and start a new request with a
 new stream as needed. This setting is ignored when query_input is a piece
 of text or an event.
 bool single_utterance = 4 [deprecated = true];
| Parameter | |
|---|---|
| Name | Description | 
| value | boolean The singleUtterance to set. | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | This builder for chaining. | 
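Since this field is deprecated, the following sketch shows the recommended alternative: set single_utterance on the InputAudioConfig inside query_input of the first message. The identifiers and audio parameters are placeholders, and the imports from the class-level sketch above are assumed.

```java
StreamingDetectIntentRequest first =
    StreamingDetectIntentRequest.newBuilder()
        .setSession("projects/my-project/agent/sessions/my-session-id") // placeholder
        .setQueryInput(
            QueryInput.newBuilder()
                .setAudioConfig(
                    InputAudioConfig.newBuilder()
                        .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
                        .setSampleRateHertz(16000)
                        .setLanguageCode("en-US")
                        .setSingleUtterance(true))) // preferred over the deprecated field
        .build();
```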
setUnknownFields(UnknownFieldSet unknownFields)
public final StreamingDetectIntentRequest.Builder setUnknownFields(UnknownFieldSet unknownFields)

| Parameter | |
|---|---|
| Name | Description | 
| unknownFields | UnknownFieldSet | 
| Returns | |
|---|---|
| Type | Description | 
| StreamingDetectIntentRequest.Builder | |