public sealed class StreamingRecognitionResult : IMessage<StreamingRecognitionResult>, IEquatable<StreamingRecognitionResult>, IDeepCloneable<StreamingRecognitionResult>, IBufferMessage, IMessage

Contains a speech recognition result corresponding to a portion of the audio that is currently being processed, or an indication that this is the end of the single requested utterance.
Example:

1. transcript: "tube"
2. transcript: "to be a"
3. transcript: "to be"
4. transcript: "to be or not to be" is_final: true
5. transcript: " that's"
6. transcript: " that is"
7. message_type: END_OF_SINGLE_UTTERANCE
8. transcript: " that is the question" is_final: true

Only two of the responses contain final results (#4 and #8, indicated by
is_final: true). Concatenating these generates the full transcript: "to be
or not to be that is the question".
In each response we populate:

- for TRANSCRIPT: transcript and possibly is_final.
- for END_OF_SINGLE_UTTERANCE: only message_type.
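The concatenation described above can be sketched as follows. This is a minimal, self-contained illustration: the `Result` record here is a stand-in for the generated `StreamingRecognitionResult` class (which lives in the Google.Cloud.Dialogflow.V2 package), and `FullTranscript` is a hypothetical helper, not part of the API.

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Stand-in for StreamingRecognitionResult, for illustration only.
record Result(string Transcript, bool IsFinal);

static class Demo
{
    // Concatenate only the final hypotheses; interim results may still change.
    public static string FullTranscript(IEnumerable<Result> results)
    {
        var sb = new StringBuilder();
        foreach (var r in results)
            if (r.IsFinal)
                sb.Append(r.Transcript);
        return sb.ToString();
    }

    public static void Main()
    {
        // The response sequence from the example above (TRANSCRIPT messages only).
        var responses = new[]
        {
            new Result("tube", false),
            new Result("to be a", false),
            new Result("to be", false),
            new Result("to be or not to be", true),
            new Result(" that's", false),
            new Result(" that is", false),
            new Result(" that is the question", true),
        };
        Console.WriteLine(FullTranscript(responses));
        // → to be or not to be that is the question
    }
}
```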
Implements

IMessage<StreamingRecognitionResult>, IEquatable<StreamingRecognitionResult>, IDeepCloneable<StreamingRecognitionResult>, IBufferMessage, IMessage

Namespace
Google.Cloud.Dialogflow.V2

Assembly
Google.Cloud.Dialogflow.V2.dll
Constructors
StreamingRecognitionResult()
public StreamingRecognitionResult()

StreamingRecognitionResult(StreamingRecognitionResult)

public StreamingRecognitionResult(StreamingRecognitionResult other)

| Parameter | |
|---|---|
| Name | Description |
| other | StreamingRecognitionResult |
Properties
Confidence
public float Confidence { get; set; }

The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set.
This field is typically only provided if is_final is true and you should
not rely on it being accurate or even set.
| Property Value | |
|---|---|
| Type | Description |
| Single | |
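Since Confidence may be unset (the 0.0 sentinel) and is only meaningful on final results, callers typically need a small guard before acting on it. The helper below is hypothetical, not part of the API; the 0.5 floor is an arbitrary assumption for illustration.

```csharp
using System;

// Hypothetical helper: decide whether to act on a recognition hypothesis,
// treating Confidence == 0.0 as "not set" per the documented sentinel.
static class ConfidenceGate
{
    public static bool IsUsable(bool isFinal, float confidence, float floor = 0.5f)
    {
        if (!isFinal) return false;           // interim results may still change
        if (confidence == 0.0f) return true;  // sentinel: confidence was not provided
        return confidence >= floor;
    }

    public static void Main()
    {
        Console.WriteLine(IsUsable(isFinal: true, confidence: 0.83f)); // True
        Console.WriteLine(IsUsable(isFinal: true, confidence: 0.0f));  // True (unset)
        Console.WriteLine(IsUsable(isFinal: false, confidence: 0.9f)); // False
    }
}
```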
IsFinal
public bool IsFinal { get; set; }

If false, the StreamingRecognitionResult represents an
interim result that may change. If true, the recognizer will not return
any further hypotheses about this piece of the audio. May only be populated
for message_type = TRANSCRIPT.
| Property Value | |
|---|---|
| Type | Description |
| Boolean | |
LanguageCode
public string LanguageCode { get; set; }

Detected language code for the transcript.
| Property Value | |
|---|---|
| Type | Description |
| String | |
MessageType
public StreamingRecognitionResult.Types.MessageType MessageType { get; set; }

Type of the result message.
| Property Value | |
|---|---|
| Type | Description |
| StreamingRecognitionResult.Types.MessageType | |
SpeechEndOffset
public Duration SpeechEndOffset { get; set; }

Time offset of the end of this Speech recognition result relative to the
beginning of the audio. Only populated for message_type = TRANSCRIPT.
| Property Value | |
|---|---|
| Type | Description |
| Duration | |
SpeechWordInfo
public RepeatedField<SpeechWordInfo> SpeechWordInfo { get; }

Word-specific information for the words recognized by Speech in
[transcript][google.cloud.dialogflow.v2.StreamingRecognitionResult.transcript]. Populated if and only if message_type = TRANSCRIPT and
[InputAudioConfig.enable_word_info] is set.
| Property Value | |
|---|---|
| Type | Description |
| RepeatedField<SpeechWordInfo> | |
Transcript
public string Transcript { get; set; }

Transcript text representing the words that the user spoke.
Populated if and only if message_type = TRANSCRIPT.
| Property Value | |
|---|---|
| Type | Description |
| String | |