InputAudioConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Instructs the speech recognizer how to process the audio content.
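A minimal construction sketch, assuming the `google.cloud.dialogflow_v2` package layout in which message types and enums are re-exported at the package root; the encoding, sample rate, and language values are illustrative choices, not required defaults:

```python
from google.cloud import dialogflow_v2

# Configure the recognizer for 16 kHz linear PCM audio in US English.
audio_config = dialogflow_v2.InputAudioConfig(
    audio_encoding=dialogflow_v2.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
    sample_rate_hertz=16000,
    language_code="en-US",
    enable_automatic_punctuation=True,
)

# The config is attached to a query via QueryInput when detecting
# intent from audio rather than text.
query_input = dialogflow_v2.QueryInput(audio_config=audio_config)
```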
Attributes

| Name | Description |
|---|---|
| audio_encoding | `google.cloud.dialogflow_v2.types.AudioEncoding` Required. Audio encoding of the audio content to process. |
| sample_rate_hertz | `int` Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to the Cloud Speech API documentation for details. |
| language_code | `str` Required. The language of the supplied audio. Dialogflow does not do translations. See the Language Support documentation for the currently supported language codes. |
| enable_word_info | `bool` If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information. |
| phrase_hints | `MutableSequence[str]` A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for details. |
| speech_contexts | `MutableSequence[google.cloud.dialogflow_v2.types.SpeechContext]` Context information to assist speech recognition. See the Cloud Speech documentation for details. |
| model | `str` Optional. Which Speech model to select for the given request. For more information, see the Speech models documentation. |
| model_variant | `google.cloud.dialogflow_v2.types.SpeechModelVariant` Which variant of the [Speech model][google.cloud.dialogflow.v2.InputAudioConfig.model] to use. |
| single_utterance | `bool` If false (default), recognition does not cease until the client closes the stream. If true, the recognizer detects a single spoken utterance in the input audio and ceases recognition when it detects that the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: this setting is relevant only for streaming methods; see the streaming sketch after this table. Note: when specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance. |
| disable_no_speech_recognized_event | `bool` Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger a NO_SPEECH_RECOGNIZED event to the Dialogflow agent. |
| enable_automatic_punctuation | `bool` Enable the automatic punctuation option at the speech backend. |
| phrase_sets | `MutableSequence[str]` A collection of phrase set resources to use for speech adaptation. |
| opt_out_conformer_model_migration | `bool` If true, the request will opt out of STT conformer model migration. This field will be deprecated once forced migration takes place in June 2024. Please refer to Dialogflow ES Speech model migration. |
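A hedged sketch of how `single_utterance`, `enable_word_info`, and `speech_contexts` come into play on the streaming path, assuming the usual SessionsClient.streaming_detect_intent request-generator pattern; the project ID, session ID, audio source, and hint phrases below are placeholders, not values from this reference:

```python
from google.cloud import dialogflow_v2

def request_generator(session, audio_chunks):
    # The first streaming request carries the session and audio config;
    # every later request carries only raw audio bytes.
    audio_config = dialogflow_v2.InputAudioConfig(
        audio_encoding=dialogflow_v2.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
        sample_rate_hertz=16000,
        language_code="en-US",
        single_utterance=True,   # stop recognition after one utterance
        enable_word_info=True,   # request SpeechWordInfo in results
        speech_contexts=[
            # Hypothetical hint phrases to bias recognition.
            dialogflow_v2.SpeechContext(phrases=["Dialogflow", "webhook"]),
        ],
    )
    yield dialogflow_v2.StreamingDetectIntentRequest(
        session=session,
        query_input=dialogflow_v2.QueryInput(audio_config=audio_config),
    )
    for chunk in audio_chunks:
        yield dialogflow_v2.StreamingDetectIntentRequest(input_audio=chunk)

client = dialogflow_v2.SessionsClient()
session = client.session_path("my-project", "my-session")  # placeholders

responses = client.streaming_detect_intent(
    requests=request_generator(session, audio_chunks=[])  # supply real audio
)
end_of_utterance = (
    dialogflow_v2.StreamingRecognitionResult.MessageType.END_OF_SINGLE_UTTERANCE
)
for response in responses:
    result = response.recognition_result
    if result.message_type == end_of_utterance:
        # With single_utterance=True the recognizer has stopped listening.
        # The final query_result arrives on a later response; after that,
        # close this stream and open a new one for the next utterance.
        pass
    for word in result.speech_word_info:
        # Populated only when enable_word_info=True.
        print(word.word, word.start_offset, word.end_offset)
```

The generator shape mirrors the precedence note in the table: because `single_utterance` is set on InputAudioConfig rather than on StreamingDetectIntentRequest, the config-level value is the one the recognizer honors.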