SpeechAsyncClient(*, credentials: Optional[google.auth.credentials.Credentials] = None, transport: Union[str, google.cloud.speech_v1p1beta1.services.speech.transports.base.SpeechTransport] = 'grpc_asyncio', client_options: Optional[google.api_core.client_options.ClientOptions] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO)

Service that implements the Google Cloud Speech API.
Methods
SpeechAsyncClient
SpeechAsyncClient(*, credentials: Optional[google.auth.credentials.Credentials] = None, transport: Union[str, google.cloud.speech_v1p1beta1.services.speech.transports.base.SpeechTransport] = 'grpc_asyncio', client_options: Optional[google.api_core.client_options.ClientOptions] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO)

Instantiate the speech client.
| Parameters | |
|---|---|
| Name | Description | 
| credentials | `Optional[google.auth.credentials.Credentials]` The authorization credentials to attach to requests. These credentials identify the application to the service; if none are specified, the client will attempt to ascertain the credentials from the environment. |
| transport | `Union[str, SpeechTransport]` The transport to use. If set to None, a transport is chosen automatically. |
| client_options | `ClientOptions` Custom options for the client. It won't take effect if a `transport` instance is provided. |
| Exceptions | |
|---|---|
| Type | Description | 
| google.auth.exceptions.MutualTlsChannelError | If mutual TLS transport creation failed for any reason. | 
custom_class_path
custom_class_path(project: str, location: str, custom_class: str)

Return a fully-qualified custom_class string.
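As a sketch of what this helper returns, the path follows the standard Google Cloud resource-name convention (the exact pattern shown here is an assumption based on that convention, and the function below is a hypothetical stand-in, not the library's implementation):

```python
# Hypothetical re-implementation of SpeechAsyncClient.custom_class_path,
# shown only to illustrate the resource-name format the helper produces.
def custom_class_path(project: str, location: str, custom_class: str) -> str:
    return f"projects/{project}/locations/{location}/customClasses/{custom_class}"

print(custom_class_path("my-project", "global", "my-class"))
# projects/my-project/locations/global/customClasses/my-class
```

`parse_custom_class_path` (below) performs the inverse operation, splitting such a string back into its `project`, `location`, and `custom_class` segments.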
from_service_account_file
from_service_account_file(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.
| Parameter | |
|---|---|
| Name | Description | 
| filename | `str` The path to the service account private key JSON file. | 
| Returns | |
|---|---|
| Type | Description | 
| SpeechAsyncClient | The constructed client. | 
from_service_account_json
from_service_account_json(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.
| Parameter | |
|---|---|
| Name | Description | 
| filename | `str` The path to the service account private key JSON file. | 
| Returns | |
|---|---|
| Type | Description | 
| SpeechAsyncClient | The constructed client. | 
get_transport_class
get_transport_class()

Return an appropriate transport class.
long_running_recognize
long_running_recognize(request: Optional[google.cloud.speech_v1p1beta1.types.cloud_speech.LongRunningRecognizeRequest] = None, *, config: Optional[google.cloud.speech_v1p1beta1.types.cloud_speech.RecognitionConfig] = None, audio: Optional[google.cloud.speech_v1p1beta1.types.cloud_speech.RecognitionAudio] = None, retry: google.api_core.retry.Retry = gapic_v1.method.DEFAULT, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())

Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface. Returns either an `Operation.error` or an `Operation.response` which contains a `LongRunningRecognizeResponse` message. For more information on asynchronous speech recognition, see the [how-to](https://cloud.google.com/speech-to-text/docs/async-recognize).
| Parameters | |
|---|---|
| Name | Description | 
| request | `LongRunningRecognizeRequest` The request object. The top-level message sent by the client for the `LongRunningRecognize` method. | 
| config | `RecognitionConfig` Required. Provides information to the recognizer that specifies how to process the request. This corresponds to the `config` field on the `request` instance; if `request` is provided, this should not be set. | 
| audio | `RecognitionAudio` Required. The audio data to be recognized. This corresponds to the `audio` field on the `request` instance; if `request` is provided, this should not be set. | 
| retry | `google.api_core.retry.Retry` Designation of what errors, if any, should be retried. | 
| timeout | `float` The timeout for this request. | 
| metadata | `Sequence[Tuple[str, str]]` Strings which should be sent along with the request as metadata. | 
| Returns | |
|---|---|
| Type | Description | 
| google.api_core.operation_async.AsyncOperation | An object representing a long-running operation. The result type for the operation will be `LongRunningRecognizeResponse`, the message returned to the client by the `LongRunningRecognize` method. It contains the result as zero or more sequential `SpeechRecognitionResult` messages. It is included in the `result.response` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service. | 
parse_custom_class_path
parse_custom_class_path(path: str)

Parse a custom_class path into its component segments.
parse_phrase_set_path
parse_phrase_set_path(path: str)

Parse a phrase_set path into its component segments.
phrase_set_path
phrase_set_path(project: str, location: str, phrase_set: str)

Return a fully-qualified phrase_set string.
recognize
recognize(request: Optional[google.cloud.speech_v1p1beta1.types.cloud_speech.RecognizeRequest] = None, *, config: Optional[google.cloud.speech_v1p1beta1.types.cloud_speech.RecognitionConfig] = None, audio: Optional[google.cloud.speech_v1p1beta1.types.cloud_speech.RecognitionAudio] = None, retry: google.api_core.retry.Retry = gapic_v1.method.DEFAULT, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())

Performs synchronous speech recognition: receive results after all audio has been sent and processed.
| Parameters | |
|---|---|
| Name | Description | 
| request | `RecognizeRequest` The request object. The top-level message sent by the client for the `Recognize` method. | 
| config | `RecognitionConfig` Required. Provides information to the recognizer that specifies how to process the request. This corresponds to the `config` field on the `request` instance; if `request` is provided, this should not be set. | 
| audio | `RecognitionAudio` Required. The audio data to be recognized. This corresponds to the `audio` field on the `request` instance; if `request` is provided, this should not be set. | 
| retry | `google.api_core.retry.Retry` Designation of what errors, if any, should be retried. | 
| timeout | `float` The timeout for this request. | 
| metadata | `Sequence[Tuple[str, str]]` Strings which should be sent along with the request as metadata. | 
| Returns | |
|---|---|
| Type | Description | 
| google.cloud.speech_v1p1beta1.types.RecognizeResponse | The only message returned to the client by the `Recognize` method. It contains the result as zero or more sequential `SpeechRecognitionResult` messages. | 
streaming_recognize
streaming_recognize(requests: Optional[AsyncIterator[google.cloud.speech_v1p1beta1.types.cloud_speech.StreamingRecognizeRequest]] = None, *, retry: google.api_core.retry.Retry = gapic_v1.method.DEFAULT, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())

Performs bidirectional streaming speech recognition: receive results while sending audio. This method is only available via the gRPC API (not REST).
| Parameters | |
|---|---|
| Name | Description | 
| requests | `AsyncIterator[StreamingRecognizeRequest]` The request object AsyncIterator. The top-level message sent by the client for the `StreamingRecognize` method. Multiple `StreamingRecognizeRequest` messages are sent: the first message must contain a `streaming_config` message and must not contain `audio_content`; all subsequent messages must contain `audio_content` and must not contain a `streaming_config` message. | 
| retry | `google.api_core.retry.Retry` Designation of what errors, if any, should be retried. | 
| timeout | `float` The timeout for this request. | 
| metadata | `Sequence[Tuple[str, str]]` Strings which should be sent along with the request as metadata. | 
| Returns | |
|---|---|
| Type | Description | 
| AsyncIterable[google.cloud.speech_v1p1beta1.types.StreamingRecognizeResponse] | `StreamingRecognizeResponse` is the only message returned to the client by `StreamingRecognize`. A series of zero or more `StreamingRecognizeResponse` messages are streamed back to the client. If there is no recognizable audio, and `single_utterance` is set to false, then no messages are streamed back to the client. Here's an example of a series of ten `StreamingRecognizeResponse`s that might be returned while processing audio: 1. results { alternatives { transcript: "tube" } stability: 0.01 } 2. results { alternatives { transcript: "to be a" } stability: 0.01 } 3. results { alternatives { transcript: "to be" } stability: 0.9 } results { alternatives { transcript: " or not to be" } stability: 0.01 } 4. results { alternatives { transcript: "to be or not to be" confidence: 0.92 } alternatives { transcript: "to bee or not to bee" } is_final: true } 5. results { alternatives { transcript: " that's" } stability: 0.01 } 6. results { alternatives { transcript: " that is" } stability: 0.9 } results { alternatives { transcript: " the question" } stability: 0.01 } 7. results { alternatives { transcript: " that is the question" confidence: 0.98 } alternatives { transcript: " that was the question" } is_final: true } Notes: Only two of the above responses (#4 and #7) contain final results; they are indicated by `is_final: true`. Concatenating these together generates the full transcript: "to be or not to be that is the question". The others contain interim `results`. #3 and #6 contain two interim `results`: the first portion has a high stability and is less likely to change; the second portion has a low stability and is very likely to change. A UI designer might choose to show only high-stability `results`. The specific `stability` and `confidence` values shown above are only for illustrative purposes; actual values may vary. In each response, only one of these fields will be set: `error`, `speech_event_type`, or one or more (repeated) `results`. |