Lyra (codec)

    Lyra is a lossy audio codec developed by Google, designed for compressing speech at very low bitrates. Unlike most other audio formats, it compresses data using a machine learning-based algorithm.


    Features


    The Lyra codec is designed to transmit speech in real time when bandwidth is severely restricted, such as over slow or unreliable network connections. It runs at fixed bitrates of 3.2, 6, and 9 kbit/s and is intended to provide better quality than codecs that use traditional waveform-based algorithms at similar bitrates. Instead of coding the waveform directly, compression is achieved via a machine learning algorithm that encodes the input with feature extraction and then reconstructs an approximation of the original using a generative model. This model was trained on thousands of hours of speech recorded in over 70 languages so that it works across a wide variety of speakers. Because generative models are more computationally demanding than traditional codecs, a simple model that processes different frequency ranges in parallel is used to obtain acceptable performance. Lyra imposes 20 ms of latency due to its frame size. Google's reference implementation is available for Android and Linux.
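    Because both the bitrate and the 20 ms frame size are fixed, the bit budget available to each frame is also fixed. The sketch below is purely illustrative arithmetic, not code from the Lyra implementation:

      # Illustrative arithmetic only; not part of the Lyra codebase.
      FRAME_MS = 20  # Lyra's frame size, which sets its per-frame latency

      def bits_per_frame(bitrate_bps: int, frame_ms: int = FRAME_MS) -> float:
          """Bit budget available for each frame at a given fixed bitrate."""
          return bitrate_bps * frame_ms / 1000

      for rate_bps in (3200, 6000, 9000):  # Lyra's fixed bitrates
          print(f"{rate_bps / 1000} kbit/s -> {bits_per_frame(rate_bps):.0f} bits per 20 ms frame")
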


    = Quality =
    Lyra's initial version performed significantly better than traditional codecs at similar bitrates. Ian Buckley at MakeUseOf said, "It succeeds in creating almost eerie levels of audio reproduction with bitrates as low as 3 kbps." Google claims that it reproduces natural-sounding speech, and that Lyra at 3 kbit/s beats Opus at 8 kbit/s. Tsahi Levent-Levi writes that Satin, Microsoft's AI-based codec, outperforms it at higher bitrates.


    History


    In December 2017, Google researchers published a preprint paper on replacing the Codec 2 decoder with a WaveNet neural network. They found that a neural network was able to extrapolate features of the voice not described in the Codec 2 bitstream, giving better audio quality, and that using conventional features kept the neural network computation simpler than in a purely waveform-based network. Lyra version 1 would reuse this overall framework of feature extraction, quantization, and neural synthesis.
    Lyra was first announced in February 2021, and in April, Google released the source code of their reference implementation. The initial version had a fixed bitrate of 3 kbit/s and around 90 ms latency. The encoder calculates a log mel spectrogram and performs vector quantization to store the spectrogram in a data stream. The decoder is a WaveNet neural network that takes the spectrogram and reconstructs the input audio.
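    A minimal sketch of that style of front end is shown below; the sample rate, mel band count, hop length, and codebook are illustrative assumptions (using the librosa library), not Lyra's actual parameters or code:

      # Sketch of a v1-style front end: log mel features plus plain vector quantization.
      # All parameter values here are illustrative, not Lyra's.
      import numpy as np
      import librosa

      def encode_frame_features(audio: np.ndarray, sr: int = 16000) -> np.ndarray:
          """Compute a log mel spectrogram, one feature vector per hop."""
          mel = librosa.feature.melspectrogram(
              y=audio, sr=sr, n_fft=400, hop_length=320, n_mels=64)
          return np.log(mel + 1e-6).T              # shape: (num_frames, n_mels)

      def vector_quantize(features: np.ndarray, codebook: np.ndarray) -> np.ndarray:
          """Map each feature vector to the index of its nearest codeword."""
          # codebook shape: (num_codewords, n_mels)
          dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
          return dists.argmin(axis=1)              # these indices are what gets transmitted

      # The decoder side (not shown) is the generative model: it looks the indices back up
      # in the codebook and synthesizes a waveform approximating the original speech.
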
    A second version (v2/1.2.0), released in September 2022, improved sound quality, latency, and performance, and permitted multiple bitrates. V2 uses a "SoundStream" structure in which both the encoder and decoder are neural networks, forming a kind of autoencoder. A residual vector quantizer is used to turn the feature values into transferable data.
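    Residual vector quantization is a general technique: each stage quantizes the error left over by the previous stages, so adding or dropping stages trades quality against bits. The sketch below illustrates the idea with random placeholder codebooks and an assumed stage count; it is not SoundStream's trained quantizer:

      # Minimal numpy sketch of residual vector quantization (RVQ). Codebooks here are
      # random placeholders; a real codec trains them jointly with the autoencoder.
      import numpy as np

      def rvq_encode(vec: np.ndarray, codebooks: list[np.ndarray]) -> list[int]:
          """Return one codeword index per stage for a single feature vector."""
          residual = vec.copy()
          indices = []
          for cb in codebooks:                       # cb shape: (num_codewords, dim)
              dists = np.linalg.norm(cb - residual, axis=1)
              idx = int(dists.argmin())
              indices.append(idx)
              residual = residual - cb[idx]          # pass the leftover error to the next stage
          return indices

      def rvq_decode(indices: list[int], codebooks: list[np.ndarray]) -> np.ndarray:
          """Sum the selected codewords to reconstruct the feature vector."""
          return sum(cb[i] for cb, i in zip(codebooks, indices))

      # Using more or fewer stages changes how many bits are spent per vector, which is
      # one way a SoundStream-style codec can offer several bitrates from one model.
      rng = np.random.default_rng(0)
      codebooks = [rng.normal(size=(256, 16)) for _ in range(3)]   # 3 stages, 8 bits each
      feature = rng.normal(size=16)
      codes = rvq_encode(feature, codebooks)
      approx = rvq_decode(codes, codebooks)
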


    Support




    = Implementations =
    Google's implementation is available on GitHub under the Apache License. Written in C++, it is optimized for 64-bit ARM but also runs on x86, on either Android or Linux.


    = Applications =
    Google Meet uses Lyra to transmit sound for video chats when bandwidth is limited.


    External links


    Lyra: A New Very Low-Bitrate Codec for Speech Compression – Google blog post with a demonstration comparing codecs


    See also


    Satin (codec), an AI-based codec developed by Microsoft
    Comparison of audio coding formats
    Speech coding
    Videotelephony
