public interface StreamingRecognizeRequestOrBuilder extends MessageOrBuilder

Implements

MessageOrBuilder

Methods
getAudioContent()

public abstract ByteString getAudioContent()

The audio data to be recognized. Sequential chunks of audio data are sent
in sequential StreamingRecognizeRequest messages. The first
StreamingRecognizeRequest message must not contain audio_content data
and all subsequent StreamingRecognizeRequest messages must contain
audio_content data. The audio bytes must be encoded as specified in
RecognitionConfig. Note: as with all bytes fields, proto buffers use a
pure binary representation (not base64). See content limits.

bytes audio_content = 2;

| Returns | |
|---|---|
| Type | Description |
| ByteString | The audioContent. |
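As a minimal sketch of the rule above, every request after the first carries only audio bytes; the `audioChunk` byte array here is a hypothetical value supplied by the caller (for example, read from a file or microphone), not part of this interface:

```java
import com.google.cloud.speech.v1.StreamingRecognizeRequest;
import com.google.protobuf.ByteString;

public class AudioRequestSketch {
  // Builds one follow-up request carrying a single chunk of raw audio bytes.
  // Follow-up requests must not carry a streaming_config.
  static StreamingRecognizeRequest audioRequest(byte[] audioChunk) {
    return StreamingRecognizeRequest.newBuilder()
        .setAudioContent(ByteString.copyFrom(audioChunk))
        .build();
  }
}
```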
getStreamingConfig()

public abstract StreamingRecognitionConfig getStreamingConfig()

Provides information to the recognizer that specifies how to process the
request. The first StreamingRecognizeRequest message must contain a
streaming_config message.

.google.cloud.speech.v1.StreamingRecognitionConfig streaming_config = 1;

| Returns | |
|---|---|
| Type | Description |
| StreamingRecognitionConfig | The streamingConfig. |
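A minimal sketch of the first request in a stream, which carries only the streaming_config and no audio; the encoding, sample rate, and language code values are illustrative assumptions, not requirements of this interface:

```java
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognizeRequest;

public class ConfigRequestSketch {
  // Builds the first request of the stream; it carries the config and no audio.
  static StreamingRecognizeRequest configRequest() {
    RecognitionConfig config =
        RecognitionConfig.newBuilder()
            .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
            .setSampleRateHertz(16000)
            .setLanguageCode("en-US")
            .build();
    return StreamingRecognizeRequest.newBuilder()
        .setStreamingConfig(
            StreamingRecognitionConfig.newBuilder().setConfig(config).build())
        .build();
  }
}
```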
getStreamingConfigOrBuilder()

public abstract StreamingRecognitionConfigOrBuilder getStreamingConfigOrBuilder()

Provides information to the recognizer that specifies how to process the
request. The first StreamingRecognizeRequest message must contain a
streaming_config message.

.google.cloud.speech.v1.StreamingRecognitionConfig streaming_config = 1;

| Returns | |
|---|---|
| Type | Description |
| StreamingRecognitionConfigOrBuilder | |
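A small sketch of reading through the OrBuilder view, so the same code accepts either a built message or a Builder; it assumes the standard generated getConfigOrBuilder() and getLanguageCode() accessors on the nested config:

```java
import com.google.cloud.speech.v1.StreamingRecognitionConfigOrBuilder;
import com.google.cloud.speech.v1.StreamingRecognizeRequestOrBuilder;

public class OrBuilderSketch {
  // Reads the language code through the read-only OrBuilder view, which is
  // implemented by both StreamingRecognizeRequest and its Builder.
  static String languageCode(StreamingRecognizeRequestOrBuilder request) {
    StreamingRecognitionConfigOrBuilder config = request.getStreamingConfigOrBuilder();
    return config.getConfigOrBuilder().getLanguageCode();
  }
}
```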
getStreamingRequestCase()

public abstract StreamingRecognizeRequest.StreamingRequestCase getStreamingRequestCase()

| Returns | |
|---|---|
| Type | Description |
| StreamingRecognizeRequest.StreamingRequestCase | |
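A sketch of dispatching on the oneof case; the constant names STREAMING_CONFIG, AUDIO_CONTENT, and STREAMINGREQUEST_NOT_SET are assumed to follow the standard names protoc generates for the streaming_request oneof:

```java
import com.google.cloud.speech.v1.StreamingRecognizeRequest;

public class RequestCaseSketch {
  // Dispatches on whichever oneof field the request actually carries.
  static String describe(StreamingRecognizeRequest request) {
    switch (request.getStreamingRequestCase()) {
      case STREAMING_CONFIG:
        return "first message: carries the streaming_config";
      case AUDIO_CONTENT:
        return "follow-up message: carries " + request.getAudioContent().size() + " audio bytes";
      case STREAMINGREQUEST_NOT_SET:
      default:
        return "neither field is set";
    }
  }
}
```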
hasAudioContent()

public abstract boolean hasAudioContent()

The audio data to be recognized. Sequential chunks of audio data are sent
in sequential StreamingRecognizeRequest messages. The first
StreamingRecognizeRequest message must not contain audio_content data
and all subsequent StreamingRecognizeRequest messages must contain
audio_content data. The audio bytes must be encoded as specified in
RecognitionConfig. Note: as with all bytes fields, proto buffers use a
pure binary representation (not base64). See content limits.

bytes audio_content = 2;

| Returns | |
|---|---|
| Type | Description |
| boolean | Whether the audioContent field is set. |
hasStreamingConfig()

public abstract boolean hasStreamingConfig()

Provides information to the recognizer that specifies how to process the
request. The first StreamingRecognizeRequest message must contain a
streaming_config message.

.google.cloud.speech.v1.StreamingRecognitionConfig streaming_config = 1;

| Returns | |
|---|---|
| Type | Description |
| boolean | Whether the streamingConfig field is set. |
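A sketch that uses the two has* accessors to check the ordering rule quoted above (a config-only first message, audio-only messages afterwards); the isFirst flag is a hypothetical value tracked by the caller:

```java
import com.google.cloud.speech.v1.StreamingRecognizeRequest;

public class RequestOrderSketch {
  // Checks the ordering rule: the first request must carry streaming_config
  // and no audio; every later request must carry audio and no config.
  static boolean isValid(StreamingRecognizeRequest request, boolean isFirst) {
    if (isFirst) {
      return request.hasStreamingConfig() && !request.hasAudioContent();
    }
    return request.hasAudioContent() && !request.hasStreamingConfig();
  }
}
```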