Reference documentation and code samples for the Google Cloud Speech v1 API enum RecognitionConfig.Types.AudioEncoding.
The encoding of the audio data sent in the request.
All encodings support only 1 channel (mono) audio, unless the
audio_channel_count and enable_separate_recognition_per_channel fields
are set.
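As a rough sketch of how these fields come together with the .NET client (the file name, sample rate, and channel count below are illustrative assumptions, not part of this reference), per-channel recognition of a stereo LINEAR16 file might look like this:

```csharp
using Google.Cloud.Speech.V1;

// Sketch: recognize a hypothetical two-channel (stereo) LINEAR16 WAV file,
// requesting a separate transcript for each channel.
var client = SpeechClient.Create();
var config = new RecognitionConfig
{
    Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
    SampleRateHertz = 16000,                  // assumed rate of the source audio
    LanguageCode = "en-US",
    AudioChannelCount = 2,                    // the audio_channel_count field
    EnableSeparateRecognitionPerChannel = true
};
var audio = RecognitionAudio.FromFile("stereo-call.wav");   // hypothetical file
var response = client.Recognize(config, audio);
foreach (var result in response.Results)
{
    // ChannelTag identifies which channel produced each result.
    System.Console.WriteLine($"Channel {result.ChannelTag}: {result.Alternatives[0].Transcript}");
}
```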
For best results, the audio source should be captured and transmitted using
a lossless encoding (FLAC or LINEAR16). The accuracy of the speech
recognition can be reduced if lossy codecs are used to capture or transmit
audio, particularly if background noise is present. Lossy codecs include
MULAW, AMR, AMR_WB, OGG_OPUS, SPEEX_WITH_HEADER_BYTE, MP3,
and WEBM_OPUS.
The FLAC and WAV audio file formats include a header that describes the
included audio content. You can request recognition for WAV files that
contain either LINEAR16 or MULAW encoded audio.
If you send FLAC or WAV audio files in your request, you do not need to
specify an AudioEncoding; the audio encoding format is determined from the
file header. If you do specify an AudioEncoding when you send FLAC or WAV
audio, the encoding configuration must match the encoding described in the
audio header; otherwise the request returns a
google.rpc.Code.INVALID_ARGUMENT error code.
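As a minimal illustration (the file name and language code are assumptions, not part of this reference), a WAV request can leave the encoding unspecified and let the file header determine it:

```csharp
using Google.Cloud.Speech.V1;

// Sketch: recognize a hypothetical LINEAR16 WAV file without setting Encoding.
// The service reads the encoding and sample rate from the WAV header, so
// Encoding stays at its default value, EncodingUnspecified.
var client = SpeechClient.Create();
var config = new RecognitionConfig
{
    LanguageCode = "en-US"
    // Encoding and SampleRateHertz intentionally omitted for WAV/FLAC input.
};
var audio = RecognitionAudio.FromFile("meeting-recording.wav");  // hypothetical file
var response = client.Recognize(config, audio);
foreach (var result in response.Results)
{
    System.Console.WriteLine(result.Alternatives[0].Transcript);
}
// If you do set Encoding here, it must match the WAV header (for example,
// Linear16 or Mulaw); a mismatch returns INVALID_ARGUMENT.
```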
Amr
Adaptive Multi-Rate Narrowband codec. sample_rate_hertz must be 8000.
AmrWb
Adaptive Multi-Rate Wideband codec. sample_rate_hertz must be 16000.
EncodingUnspecified
Not specified.
Flac
FLAC (Free Lossless Audio
Codec) is the recommended encoding because it is
lossless, so recognition is not compromised, and it
requires only about half the bandwidth of LINEAR16. FLAC stream
encoding supports 16-bit and 24-bit samples; however, not all fields in
STREAMINFO are supported.
Linear16
Uncompressed 16-bit signed little-endian samples (Linear PCM).
Mulaw
8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
OggOpus
Opus-encoded audio frames in an Ogg container
(OggOpus).
sample_rate_hertz must be one of 8000, 12000, 16000, 24000, or 48000.
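A minimal sketch of an OGG_OPUS request, assuming a 16000 Hz Opus file (the file name and rate are illustrative assumptions):

```csharp
using Google.Cloud.Speech.V1;

// Sketch: OGG_OPUS input requires an explicit sample rate from the allowed
// set (8000, 12000, 16000, 24000, or 48000 Hz).
var client = SpeechClient.Create();
var config = new RecognitionConfig
{
    Encoding = RecognitionConfig.Types.AudioEncoding.OggOpus,
    SampleRateHertz = 16000,   // assumed rate of the source file
    LanguageCode = "en-US"
};
var audio = RecognitionAudio.FromFile("voicemail.ogg");   // hypothetical file
var response = client.Recognize(config, audio);
```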
SpeexWithHeaderByte
Although the use of lossy encodings is not recommended, if a very low
bitrate encoding is required, OGG_OPUS is highly preferred over
Speex encoding. The Speex encoding supported by the
Cloud Speech API has a header byte in each block, as in MIME type
audio/x-speex-with-header-byte.
It is a variant of the RTP Speex encoding defined in
RFC 5574.
The stream is a sequence of blocks, one block per RTP packet. Each block
starts with a byte containing the length of the block, in bytes, followed
by one or more frames of Speex data, padded to an integral number of
bytes (octets) as specified in RFC 5574. In other words, each RTP header
is replaced with a single byte containing the block length. Only Speex
wideband is supported. sample_rate_hertz must be 16000.
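The framing described above can be sketched as follows; this is only an illustration of the block layout, not library code, and it assumes the Speex payload is already padded to whole octets as RFC 5574 requires:

```csharp
// Sketch of the header-byte framing: each block is the Speex payload of one
// RTP packet, prefixed by a single byte giving the block length in bytes.
static class SpeexHeaderByteFraming
{
    public static byte[] ToBlock(byte[] speexPayload)
    {
        // The block length must fit in the single header byte.
        if (speexPayload.Length > byte.MaxValue)
            throw new System.ArgumentException("Speex block exceeds 255 bytes.");

        var block = new byte[speexPayload.Length + 1];
        block[0] = (byte)speexPayload.Length;   // length byte replaces the RTP header
        System.Array.Copy(speexPayload, 0, block, 1, speexPayload.Length);
        return block;
    }
}
```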
WebmOpus
Opus-encoded audio frames in a WebM container. sample_rate_hertz must be
one of 8000, 12000, 16000, 24000, or 48000.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-07 UTC."],[[["\u003cp\u003eThis webpage provides reference documentation for the \u003ccode\u003eRecognitionConfig.Types.AudioEncoding\u003c/code\u003e enum within the Google Cloud Speech v1 API, detailing various audio encoding options for speech recognition.\u003c/p\u003e\n"],["\u003cp\u003eThe latest version available is 3.8.0, and the page lists various versions of the API documentation, ranging from 2.2.0 to 3.8.0 for reference.\u003c/p\u003e\n"],["\u003cp\u003eThe enum outlines different audio encoding types such as \u003ccode\u003eFLAC\u003c/code\u003e, \u003ccode\u003eLINEAR16\u003c/code\u003e, \u003ccode\u003eMULAW\u003c/code\u003e, \u003ccode\u003eOGG_OPUS\u003c/code\u003e, and \u003ccode\u003eAMR\u003c/code\u003e, each with specific requirements and recommendations for optimal use in speech recognition, with lossless encodings like \u003ccode\u003eFLAC\u003c/code\u003e being highly recommended.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003eRecognitionConfig.Types.AudioEncoding\u003c/code\u003e section specifies that only 1 audio channel (mono) is supported, and notes that \u003ccode\u003eFLAC\u003c/code\u003e or \u003ccode\u003eWAV\u003c/code\u003e audio file formats do not require specified encoding if it can be determined by the file header.\u003c/p\u003e\n"],["\u003cp\u003eThe different audio encoding fields in this page include their own required sample rates for usage, for example \u003ccode\u003eAMR\u003c/code\u003e must use 8000 while \u003ccode\u003eAMR_WB\u003c/code\u003e must use 16000.\u003c/p\u003e\n"]]],[],null,[]]