StreamingDetectIntentRequest(
    mapping=None, *, ignore_unknown_fields=False, **kwargs
)

The top-level message sent by the client to the Sessions.StreamingDetectIntent method.
Multiple request messages should be sent in order:

- The first message must contain session, query_input, and optionally query_params. If the client wants to receive an audio response, it should also contain output_audio_config. The message must not contain input_audio.
- If query_input was set to query_input.audio_config, all subsequent messages must contain input_audio to continue with speech recognition. If you decide to detect an intent from text input instead after you have already started speech recognition, send a message with query_input.text. However, note that:
  - Dialogflow will bill you for the audio duration so far.
  - Dialogflow discards all speech recognition results in favor of the input text.
  - Dialogflow will use the language code from the first message.

After you have sent all input, you must half-close or abort the request stream.
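The ordering rules above can be sketched as a request generator. The dict-based messages and the `build_requests` helper below are illustrative assumptions, not part of the library; with the real client, each dict would be a `StreamingDetectIntentRequest`, and a gRPC Python client half-closes the request stream when the generator is exhausted.

```python
def build_requests(session, audio_chunks, language_code="en-US",
                   sample_rate_hertz=16000):
    """Yield the messages of one StreamingDetectIntent call, in order."""
    # First message: session and query_input (an audio config here),
    # never input_audio.
    yield {
        "session": session,
        "query_input": {
            "audio_config": {
                "audio_encoding": "AUDIO_ENCODING_LINEAR_16",
                "sample_rate_hertz": sample_rate_hertz,
                "language_code": language_code,
            }
        },
    }
    # Subsequent messages: input_audio chunks continuing speech recognition.
    for chunk in audio_chunks:
        yield {"input_audio": chunk}
    # Generator exhaustion is how a gRPC Python client half-closes the stream.


msgs = list(build_requests("projects/my-project/agent/sessions/abc123",
                           [b"\x00\x01", b"\x02\x03"]))
```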
Attributes

| Name | Description |
|---|---|
| session | str. Required. The name of the session the query is sent to. Supported formats: `projects/<Project ID>/agent/sessions/<Session ID>`, `projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>`, `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>`, `projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>`. If Location ID is not specified, we assume the default 'us' location. If Environment ID is not specified, we assume the default 'draft' environment. If User ID is not specified, we use "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters. For more information, see the API interactions guide. |
| query_params | google.cloud.dialogflow_v2beta1.types.QueryParameters. The parameters of this query. |
| query_input | google.cloud.dialogflow_v2beta1.types.QueryInput. Required. The input specification. It can be set to: 1. an audio config which instructs the speech recognizer how to process the speech audio, 2. a conversational query in the form of text, or 3. an event that specifies which intent to trigger. |
| single_utterance | bool. DEPRECATED. Please use InputAudioConfig.single_utterance instead. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects that the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when query_input is a piece of text or an event. |
| output_audio_config | google.cloud.dialogflow_v2beta1.types.OutputAudioConfig. Instructs the speech synthesizer how to generate the output audio. If this field is not set and the agent-level speech synthesizer is not configured, no output audio is generated. |
| output_audio_config_mask | google.protobuf.field_mask_pb2.FieldMask. Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level. If unspecified or empty, output_audio_config replaces the agent-level config in its entirety. |
| input_audio | bytes. The input audio content to be recognized. Must be sent if query_input was set to a streaming input audio config. The complete audio over all streaming messages must not exceed 1 minute. |
| enable_debugging_info | bool. If true, StreamingDetectIntentResponse.debugging_info will get populated. |
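As a rough illustration of the output_audio_config_mask semantics described in the table, the sketch below models the override behavior with flat dicts standing in for OutputAudioConfig and a list of field paths standing in for the FieldMask. `merge_output_audio_config` is a hypothetical helper for illustration, not a library function:

```python
def merge_output_audio_config(agent_config, request_config, mask_paths):
    """Sketch of FieldMask override semantics over flat dict configs."""
    # Unspecified or empty mask: the request-level config replaces the
    # agent-level config in its entirety.
    if not mask_paths:
        return dict(request_config)
    # Non-empty mask: only the listed settings are overridden; everything
    # else keeps its agent-level value.
    merged = dict(agent_config)
    for path in mask_paths:
        if path in request_config:
            merged[path] = request_config[path]
    return merged
```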