public sealed class StreamingRecognizeResponse : IMessage<StreamingRecognizeResponse>, IEquatable<StreamingRecognizeResponse>, IDeepCloneable<StreamingRecognizeResponse>, IBufferMessage, IMessage
Reference documentation and code samples for the Cloud Speech-to-Text v2 API class StreamingRecognizeResponse.
`StreamingRecognizeResponse` is the only message returned to the client by
`StreamingRecognize`. A series of zero or more `StreamingRecognizeResponse`
messages is streamed back to the client. If there is no recognizable
audio, no messages are streamed back to the client.
Here are some examples of `StreamingRecognizeResponse`s that might
be returned while processing audio:

1. results { alternatives { transcript: "tube" } stability: 0.01 }

2. results { alternatives { transcript: "to be a" } stability: 0.01 }

3. results { alternatives { transcript: "to be" } stability: 0.9 }
   results { alternatives { transcript: " or not to be" } stability: 0.01 }

4. results { alternatives { transcript: "to be or not to be"
                            confidence: 0.92 }
             alternatives { transcript: "to bee or not to bee" }
             is_final: true }

5. results { alternatives { transcript: " that's" } stability: 0.01 }

6. results { alternatives { transcript: " that is" } stability: 0.9 }
   results { alternatives { transcript: " the question" } stability: 0.01 }

7. results { alternatives { transcript: " that is the question"
                            confidence: 0.98 }
             alternatives { transcript: " that was the question" }
             is_final: true }
Notes:

- Only two of the above responses, #4 and #7, contain final results; they are
  indicated by `is_final: true`. Concatenating these together generates the
  full transcript: "to be or not to be that is the question".

- The others contain interim `results`. #3 and #6 contain two interim
  `results`: the first portion has a high stability and is less likely to
  change; the second portion has a low stability and is very likely to
  change. A UI designer might choose to show only high-stability `results`.

- The specific `stability` and `confidence` values shown above are for
  illustrative purposes only. Actual values may vary.

- In each response, only one of these fields will be set:
  `error`, `speech_event_type`, or one or more (repeated) `results`.
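The concatenation described in the notes can be sketched as follows. This is a minimal, hypothetical helper, not part of the library; it assumes the usual protobuf C# naming conventions for the generated message (`Results`, `IsFinal`, `Alternatives`, `Transcript`) and that the caller has already collected responses from an open `StreamingRecognize` call:

```csharp
using System.Collections.Generic;
using System.Text;
using Google.Cloud.Speech.V2;

static class TranscriptBuilder
{
    // Builds the full transcript from a sequence of streaming responses,
    // keeping only the top alternative of each result marked is_final: true.
    // Interim results are skipped because later responses may revise them.
    public static string Build(IEnumerable<StreamingRecognizeResponse> responses)
    {
        var transcript = new StringBuilder();
        foreach (var response in responses)
        {
            foreach (var result in response.Results)
            {
                if (result.IsFinal && result.Alternatives.Count > 0)
                {
                    transcript.Append(result.Alternatives[0].Transcript);
                }
            }
        }
        return transcript.ToString();
    }
}
```

Applied to the seven example responses above, only #4 and #7 would contribute text, yielding "to be or not to be that is the question".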
public RepeatedField<StreamingRecognitionResult> Results { get; }

This repeated list contains zero or more results that
correspond to consecutive portions of the audio currently being processed.
It contains zero or one result with
[is_final][google.cloud.speech.v2.StreamingRecognitionResult.is_final]=`true`
(the newly settled portion), followed by zero or more results with
[is_final][google.cloud.speech.v2.StreamingRecognitionResult.is_final]=`false`
(the interim results).
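This ordering suggests a simple pattern for a live-caption UI: treat any leading final result as settled text and render the remaining interim results provisionally. The sketch below is a hypothetical helper (not library API); the property names follow the usual protobuf C# codegen conventions and are an assumption here:

```csharp
using Google.Cloud.Speech.V2;

static class CaptionSplitter
{
    // Splits a single streaming response into settled text (is_final: true)
    // and a provisional tail (interim results) for incremental display.
    public static (string Settled, string Interim) Split(StreamingRecognizeResponse response)
    {
        string settled = "", interim = "";
        foreach (var result in response.Results)
        {
            if (result.Alternatives.Count == 0) continue;
            string text = result.Alternatives[0].Transcript;
            if (result.IsFinal) settled += text;  // at most one per response
            else interim += text;                 // may change in later responses
        }
        return (settled, interim);
    }
}
```

A caller could append `Settled` to its durable transcript and overwrite the displayed tail with `Interim` on every response, optionally hiding interim results whose `Stability` is low.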
Inheritance
-----------

object > StreamingRecognizeResponse

Implements
----------

IMessage<StreamingRecognizeResponse>, IEquatable<StreamingRecognizeResponse>, IDeepCloneable<StreamingRecognizeResponse>, IBufferMessage, IMessage

Inherited Members
-----------------

object.GetHashCode()
object.GetType()
object.ToString()

Namespace
---------

Google.Cloud.Speech.V2

Assembly
--------

Google.Cloud.Speech.V2.dll

Constructors
------------

### StreamingRecognizeResponse()

    public StreamingRecognizeResponse()

### StreamingRecognizeResponse(StreamingRecognizeResponse)

    public StreamingRecognizeResponse(StreamingRecognizeResponse other)

Properties
----------

### Metadata

    public RecognitionResponseMetadata Metadata { get; set; }

Metadata about the recognition.

### SpeechEventOffset

    public Duration SpeechEventOffset { get; set; }

Time offset between the beginning of the audio and event emission.

### SpeechEventType

    public StreamingRecognizeResponse.Types.SpeechEventType SpeechEventType { get; set; }

Indicates the type of speech event.

Last updated 2025-08-07 UTC.