# Cloud Speech-to-Text v2 API - Class StreamingRecognizeRequest (1.5.0)

public sealed class StreamingRecognizeRequest : IMessage&lt;StreamingRecognizeRequest&gt;, IEquatable&lt;StreamingRecognizeRequest&gt;, IDeepCloneable&lt;StreamingRecognizeRequest&gt;, IBufferMessage, IMessage
Reference documentation and code samples for the Cloud Speech-to-Text v2 API class StreamingRecognizeRequest.
Request message for the [StreamingRecognize][google.cloud.speech.v2.Speech.StreamingRecognize] method. Multiple [StreamingRecognizeRequest][google.cloud.speech.v2.StreamingRecognizeRequest] messages are sent in one call.

If the [Recognizer][google.cloud.speech.v2.Recognizer] referenced by [recognizer][google.cloud.speech.v2.StreamingRecognizeRequest.recognizer] contains a fully specified request configuration, then the stream may contain only messages in which only [audio][google.cloud.speech.v2.StreamingRecognizeRequest.audio] is set.

Otherwise, the first message must contain a [recognizer][google.cloud.speech.v2.StreamingRecognizeRequest.recognizer] and a [streaming_config][google.cloud.speech.v2.StreamingRecognizeRequest.streaming_config] that together fully specify the request configuration, and it must not contain [audio][google.cloud.speech.v2.StreamingRecognizeRequest.audio]. All subsequent messages must have only [audio][google.cloud.speech.v2.StreamingRecognizeRequest.audio] set.
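The following sketch illustrates this message sequencing with the generated `SpeechClient`. It is a minimal, illustrative example rather than an official sample: it assumes a recognizer without a fully specified stored configuration (using the `_` implicit Recognizer), and the project ID, location, audio file path, and configuration values are placeholders.

```csharp
using Google.Cloud.Speech.V2;
using Google.Protobuf;
using System;
using System.IO;
using System.Threading.Tasks;

public static class StreamingRecognizeSketch
{
    public static async Task RunAsync(string projectId, string audioFilePath)
    {
        SpeechClient client = await SpeechClient.CreateAsync();
        SpeechClient.StreamingRecognizeStream stream = client.StreamingRecognize();

        // First message: recognizer and streaming_config, no audio.
        // "_" selects the empty implicit Recognizer; configuration values are illustrative.
        await stream.WriteAsync(new StreamingRecognizeRequest
        {
            Recognizer = $"projects/{projectId}/locations/global/recognizers/_",
            StreamingConfig = new StreamingRecognitionConfig
            {
                Config = new RecognitionConfig
                {
                    AutoDecodingConfig = new AutoDetectDecodingConfig(),
                    LanguageCodes = { "en-US" },
                    Model = "long",
                },
            },
        });

        // Subsequent messages: audio only, chunked to stay under the
        // 15 KB per-request limit on the Audio field.
        byte[] audio = File.ReadAllBytes(audioFilePath);
        const int chunkSize = 8 * 1024;
        for (int offset = 0; offset < audio.Length; offset += chunkSize)
        {
            int length = Math.Min(chunkSize, audio.Length - offset);
            await stream.WriteAsync(new StreamingRecognizeRequest
            {
                Audio = ByteString.CopyFrom(audio, offset, length),
            });
        }
        await stream.WriteCompleteAsync();

        // Print transcripts as responses arrive.
        await foreach (StreamingRecognizeResponse response in stream.GetResponseStream())
        {
            foreach (var result in response.Results)
            {
                if (result.Alternatives.Count > 0)
                {
                    Console.WriteLine(result.Alternatives[0].Transcript);
                }
            }
        }
    }
}
```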
Properties
----------

### Recognizer

public string Recognizer { get; set; }

Required. The name of the Recognizer to use during recognition. The expected format is `projects/{project}/locations/{location}/recognizers/{recognizer}`. The `{recognizer}` segment may be set to `_` to use an empty implicit Recognizer.
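For illustration, the recognizer can be supplied either as a raw string in this format or through the `RecognizerAsRecognizerName` typed view documented below. The project, location, and recognizer IDs here are placeholders, and `RecognizerName.FromProjectLocationRecognizer` is assumed to follow the usual GAX resource-name pattern.

```csharp
using Google.Cloud.Speech.V2;

// The "_" segment selects the empty implicit Recognizer.
var implicitRequest = new StreamingRecognizeRequest
{
    Recognizer = "projects/my-project/locations/global/recognizers/_",
};

// For a named recognizer, the typed view can be used instead of the raw string.
var namedRequest = new StreamingRecognizeRequest
{
    RecognizerAsRecognizerName =
        RecognizerName.FromProjectLocationRecognizer("my-project", "us-central1", "my-recognizer"),
};
```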
### StreamingConfig

public StreamingRecognitionConfig StreamingConfig { get; set; }

StreamingRecognitionConfig to be used in this recognition attempt. If provided, it will override the default RecognitionConfig stored in the Recognizer.
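A short sketch of this override behavior: assuming an existing Recognizer that already stores a default RecognitionConfig, supplying `StreamingConfig` in the first request replaces that stored configuration for this recognition attempt only. The language code and the interim-results setting are illustrative values, assuming the v2 `StreamingRecognitionFeatures` fields.

```csharp
using Google.Cloud.Speech.V2;

// First request of a stream that targets an existing recognizer but overrides
// its stored RecognitionConfig for this attempt only.
var firstRequest = new StreamingRecognizeRequest
{
    Recognizer = "projects/my-project/locations/global/recognizers/my-recognizer",
    StreamingConfig = new StreamingRecognitionConfig
    {
        Config = new RecognitionConfig
        {
            AutoDecodingConfig = new AutoDetectDecodingConfig(),
            LanguageCodes = { "fr-FR" },   // e.g. switch language for this session
        },
        StreamingFeatures = new StreamingRecognitionFeatures
        {
            InterimResults = true,         // ask for partial hypotheses while streaming
        },
    },
};
```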
### Audio

public ByteString Audio { get; set; }

Inline audio bytes to be Recognized. Maximum size for this field is 15 KB per request.

### HasAudio

public bool HasAudio { get; }

Gets whether the "audio" field is set.

### RecognizerAsRecognizerName

public RecognizerName RecognizerAsRecognizerName { get; set; }

RecognizerName-typed view over the Recognizer resource name property.

### StreamingRequestCase

public StreamingRecognizeRequest.StreamingRequestOneofCase StreamingRequestCase { get; }

Constructors
------------

### StreamingRecognizeRequest()

public StreamingRecognizeRequest()

### StreamingRecognizeRequest(StreamingRecognizeRequest)

public StreamingRecognizeRequest(StreamingRecognizeRequest other)

Inheritance
-----------

object > StreamingRecognizeRequest

Implements
----------

IMessage&lt;StreamingRecognizeRequest&gt;, IEquatable&lt;StreamingRecognizeRequest&gt;, IDeepCloneable&lt;StreamingRecognizeRequest&gt;, IBufferMessage, IMessage

Inherited Members
-----------------

object.GetHashCode()
object.GetType()
object.ToString()

Namespace
---------

Google.Cloud.Speech.V2

Assembly
--------

Google.Cloud.Speech.V2.dll

Last updated 2025-08-07 UTC.