diff --git a/tutorials/tts/Audio_Codec_Inference.ipynb b/tutorials/tts/Audio_Codec_Inference.ipynb
new file mode 100644
index 000000000000..8eff02916737
--- /dev/null
+++ b/tutorials/tts/Audio_Codec_Inference.ipynb
@@ -0,0 +1,478 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "7X-TwhdTGmlc"
+   },
+   "source": [
+    "# License"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "fCQUeZRPGnoe"
+   },
+   "source": [
+    "> Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.\n",
+    ">\n",
+    "> Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\n",
+    ">\n",
+    "> http://www.apache.org/licenses/LICENSE-2.0\n",
+    ">\n",
+    "> Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "rtBDkKqVGZJ8"
+   },
+   "source": [
+    "# Introduction"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "pZ2QSsXuGbMe"
+   },
+   "source": [
+    "In this tutorial, we show how to use NeMo **neural audio codecs** at inference time. To learn more about training and finetuning neural audio codecs in NeMo, check the [Audio Codec Training tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/tts/Audio_Codec_Training.ipynb).\n",
+    "\n",
+    "An audio codec typically consists of an encoder, a quantizer, and a decoder; a typical architecture is depicted below.\n",
+    "An audio codec can be used to encode an input audio signal into a sequence of discrete values.\n",
+    "In this tutorial, the discrete values will be referred to as **audio tokens**.\n",
+    "The obtained audio tokens can be decoded into an output audio signal.\n",
+    "\n",
+    "Audio tokens can be used to represent the input audio for an automatic speech recognition (ASR) model [[1](https://arxiv.org/abs/2309.10922), [2](https://arxiv.org/pdf/2407.03495)], or to represent the output audio of a text-to-speech (TTS) system [[3](https://arxiv.org/abs/2406.05298), [4](https://arxiv.org/pdf/2406.17957)].\n",
+    "\n",
+    "NeMo provides several neural audio codec models, including audio codecs and mel codecs at different sampling rates.\n",
+    "The list of the available models can be found [here](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemotoolkit/tts/checkpoints.html#codec-models).\n",
+    "\n",
+    "*Figure: typical architecture of a neural audio codec (encoder, quantizer, decoder).*"
\n", + "\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "3OZassNG5xff" + }, + "source": [ + "# Setup" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "WZvQvPkIhRi3" + }, + "outputs": [], + "source": [ + "BRANCH = 'main'\n", + "# Install NeMo library. If you are running locally (rather than on Google Colab), follow the instructions at https://github.com/NVIDIA/NeMo#Installation\n", + "\n", + "if 'google.colab' in str(get_ipython()):\n", + " !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "v8NGOM0EzK8W" + }, + "outputs": [], + "source": [ + "import math\n", + "import wget\n", + "import os\n", + "import librosa\n", + "import torch\n", + "import numpy as np\n", + "import IPython.display as ipd\n", + "import matplotlib.pyplot as plt\n", + "from pathlib import Path\n", + "\n", + "\n", + "# Utility for displaying signals and metrics\n", + "def show_signal(signal: np.ndarray, sample_rate: int = 16000, tag: str = 'Signal'):\n", + " \"\"\"Show the time-domain signal and its spectrogram.\n", + " \"\"\"\n", + " fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 2.5))\n", + "\n", + " # show waveform\n", + " t = np.arange(0, len(signal)) / sample_rate\n", + "\n", + " ax[0].plot(t, signal)\n", + " ax[0].set_xlim(0, t.max())\n", + " ax[0].grid()\n", + " ax[0].set_xlabel('time / s')\n", + " ax[0].set_ylabel('amplitude')\n", + " ax[0].set_title(tag)\n", + "\n", + " n_fft = 1024\n", + " hop_length = 256\n", + "\n", + " D = librosa.amplitude_to_db(np.abs(librosa.stft(signal, n_fft=n_fft, hop_length=hop_length)), ref=np.max)\n", + " img = librosa.display.specshow(D, y_axis='linear', x_axis='time', sr=sample_rate, n_fft=n_fft, hop_length=hop_length, ax=ax[1])\n", + " ax[1].set_title(tag)\n", + "\n", + " plt.tight_layout()\n", + " plt.colorbar(img, format=\"%+2.f dB\", ax=ax)\n", + "\n", + "\n", + "# Utility for displaying a latent representation\n", + "def show_latent(latent: np.ndarray, tag: str):\n", + " plt.figure(figsize = (16, 3))\n", + " img = plt.imshow(latent, aspect='equal')\n", + " plt.colorbar(img, ax=plt.gca())\n", + " plt.title(tag)\n", + " plt.xlabel('Time frame')\n", + " plt.ylabel('Latent vector index')\n", + " plt.tight_layout()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "8ZKDMTwsEY1K" + }, + "outputs": [], + "source": [ + "# Working directory\n", + "ROOT_DIR = Path().absolute() / 'codec_tutorial'\n", + "\n", + "# Create dataset directory\n", + "DATA_DIR = ROOT_DIR / 'data'\n", + "DATA_DIR.mkdir(parents=True, exist_ok=True)\n", + "\n", + "audio_path = DATA_DIR / 'LJ023-0089.wav'\n", + "audio_url = \"https://multilangaudiosamples.s3.us-east-2.amazonaws.com/LJ023-0089.wav\"\n", + "\n", + "if not os.path.exists(audio_path):\n", + " wget.download(audio_url, audio_path.as_posix())" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "KAbH7N427FdT" + }, + "source": [ + "# Load a model from NGC" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ODgdGgsAAUku" + }, + "source": [ + "Any of the [pretrained checkpoints](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemotoolkit/tts/checkpoints.html#codec-models) could be used for inference.\n", + "Here, we use `mel_codec_22khz_fullband_medium`, which works for 22.05 kHz audio signals.\n", + "\n", + "The model can be easily restored from NGC:" + ] + }, + { + "cell_type": "code", + 
"execution_count": null, + "metadata": { + "id": "XqAYWR65aKTx" + }, + "outputs": [], + "source": [ + "from nemo.collections.tts.models.audio_codec import AudioCodecModel\n", + "\n", + "# Optionally specify a pretrained model to fine-tune from. To train from scratch, set this to 'None'.\n", + "model_name = 'mel_codec_22khz_fullband_medium'\n", + "codec_model = AudioCodecModel.from_pretrained(model_name)\n", + "codec_model.freeze()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ZnnjL28pEY1L" + }, + "source": [ + "Show information about the loaded model:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "4xsfeHVyEY1L" + }, + "outputs": [], + "source": [ + "print(f'Loaded model from NeMo:')\n", + "print(f'\\tmodel name : {model_name}')\n", + "print(f'\\tsample rate : {codec_model.sample_rate} Hz')\n", + "print(f'\\tlatent dimension : {codec_model.vector_quantizer.codebook_dim}')\n", + "\n", + "print('\\n\\nModel summary:')\n", + "print(codec_model.summarize())" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "fM4QPsLTnzK7" + }, + "source": [ + "# Inference" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "tkZC6Dl7KRl6" + }, + "source": [ + "## Processing audio\n", + "\n", + "Here we use the codec model to process the input audio by applying the complete model. The input signal is encoded, quantized, dequantized and decoded. Finally, a reconstructed signal is obtained." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "sYzvAYr2vo1K" + }, + "outputs": [], + "source": [ + "input_audio, sr = librosa.load(audio_path, sr=codec_model.sample_rate)\n", + "\n", + "# Shape (batch, time)\n", + "input_audio_tensor = torch.from_numpy(input_audio).unsqueeze(dim=0).to(codec_model.device)\n", + "\n", + "# Shape (batch,)\n", + "input_audio_len = torch.tensor([input_audio_tensor.size(-1)]).to(codec_model.device)\n", + "\n", + "# Process audio using the codec model\n", + "output_audio_tensor, _ = codec_model(audio=input_audio_tensor, audio_len=input_audio_len)\n", + "\n", + "# Output audio\n", + "output_audio = output_audio_tensor.squeeze().cpu().numpy()\n", + "\n", + "# Show signals\n", + "show_signal(input_audio, tag='Input audio', sample_rate=codec_model.sample_rate)\n", + "show_signal(output_audio, tag='Output audio', sample_rate=codec_model.sample_rate)\n", + "\n", + "# Play audio\n", + "print('Input audio')\n", + "ipd.display(ipd.Audio(input_audio, rate=codec_model.sample_rate))\n", + "\n", + "print('Output audio')\n", + "ipd.display(ipd.Audio(output_audio, rate=codec_model.sample_rate))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "rynZYwg2VP5d" + }, + "source": [ + "## Audio tokens\n", + "\n", + "Audio tokens can be easily computed by using the `encode` method of the `AudioCodec` model." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "ASKM_jKVEY1L" + }, + "outputs": [], + "source": [ + "# Convert audio to tokens\n", + "tokens, tokens_len = codec_model.encode(audio=input_audio_tensor, audio_len=input_audio_len)\n", + "\n", + "print('tokens information:')\n", + "print(f'\\tshape (batch, codebook, time frame) : {tokens.size()}')\n", + "print(f'\\tdtype : {tokens.dtype}')\n", + "print(f'\\tmin : {tokens.min()}')\n", + "print(f'\\tmax : {tokens.max()}')\n", + "\n", + "# Number of codebooks should match the number of codebooks/groups\n", + "if hasattr(codec_model.vector_quantizer, 'num_groups'):\n", + " # Group FSQ\n", + " assert tokens.size(1) == codec_model.vector_quantizer.num_groups\n", + " print(f'\\tnum_groups : {tokens.size(1)}')\n", + "elif hasattr(codec_model.vector_quantizer, 'codebooks'):\n", + " # RVQ\n", + " assert tokens.size(1) == len(codec_model.vector_quantizer.codebooks)\n", + " print(f'\\tnum_codebooks : {tokens.size(1)}')" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "CmliPMnDEY1L" + }, + "source": [ + "Similarly, audio can be easily reconstructed from audio tokens using the `decode` method of the `AudioCodec` models." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "RTQ1M9PMEY1L" + }, + "outputs": [], + "source": [ + "# Convert tokens back to audio\n", + "output_audio_from_tokens_tensor, _ = codec_model.decode(tokens=tokens, tokens_len=tokens_len)\n", + "output_audio_from_tokens = output_audio_from_tokens_tensor.squeeze().cpu().numpy()\n", + "\n", + "# Show signals\n", + "show_signal(output_audio_from_tokens, tag='Output audio from tokens', sample_rate=codec_model.sample_rate)\n", + "show_signal(output_audio_from_tokens - output_audio, tag='Difference compared to forward pass', sample_rate=codec_model.sample_rate)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "kGqotZkqEY1M" + }, + "source": [ + "## Latent representation\n", + "\n", + "Continuous (non-discrete) latent representation at the output of the encoder can be easily computed using the `encode_audio` method of the `AudioCodec` model." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "r-89-gG3EY1M" + }, + "outputs": [], + "source": [ + "# Convert audio to the encoded representation\n", + "encoded, encoded_len = codec_model.encode_audio(audio=input_audio_tensor, audio_len=input_audio_len)\n", + "\n", + "print('encoded information:')\n", + "print(f'\\tshape (batch, codebook, time frame) : {encoded.size()}')\n", + "print(f'\\tdtype : {encoded.dtype}')\n", + "print(f'\\tmin : {encoded.min()}')\n", + "print(f'\\tmax : {encoded.max()}')\n", + "\n", + "\n", + "# Show the encoded representation\n", + "show_latent(encoded.squeeze().cpu().numpy(), tag='Encoder output')" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "3Ory1U1uEY1M" + }, + "source": [ + "The encoded representation can be easily converted to tokens, dequantized into a continuous latent representation and decoded back to audio." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "btmqUWNkEY1M" + }, + "outputs": [], + "source": [ + "# Encoder output to tokens\n", + "tokens = codec_model.quantize(encoded=encoded, encoded_len=encoded_len)\n", + "\n", + "# Tokens back to a continuous vector\n", + "dequantized = codec_model.dequantize(tokens=tokens, tokens_len=encoded_len)\n", + "\n", + "# Reconstruct audio\n", + "output_audio_from_latent_tensor, _ = codec_model.decode_audio(inputs=dequantized, input_len=encoded_len)\n", + "output_audio_from_latent = output_audio_from_latent_tensor.squeeze().cpu().numpy()\n", + "\n", + "# Show dequantized latent representation\n", + "show_latent(dequantized.squeeze().cpu().numpy(), tag='Decoder input')\n", + "\n", + "# Show signals\n", + "show_signal(output_audio_from_latent, tag='Output audio from latent', sample_rate=codec_model.sample_rate)\n", + "show_signal(output_audio_from_latent - output_audio, tag='Difference compared to forward pass', sample_rate=codec_model.sample_rate)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "cMvU0WxlEY1M" + }, + "source": [ + "# Related information" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "_LtyHHuLkNDv" + }, + "source": [ + "To learn more about audio codec models in NeMo, look at our [documentation](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemotoolkit/tts/models.html#codecs).\n", + "\n", + "For more information on training and finetuning neural audio codecs in NeMo, check the [Audio Codec Training tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/tts/Audio_Codec_Training.ipynb)." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "LeqV3VvJVOb-" + }, + "source": [ + "# References" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Rvu4w2x_3RSY" + }, + "source": [ + "1. [Discrete Audio Representation as an Alternative to Mel-Spectrograms for Speaker and Speech Recognition](https://arxiv.org/abs/2309.10922)\n", + "2. [Codec-ASR: Training Performant Automatic Speech Recognition Systems with Discrete Speech Representations](https://arxiv.org/pdf/2407.03495)\n", + "3. [Spectral Codecs: Spectrogram-Based Audio Codecs for High Quality Speech Synthesis](https://arxiv.org/abs/2406.05298)\n", + "4. [Improving Robustness of LLM-based Speech Synthesis by Learning Monotonic Alignment](https://arxiv.org/pdf/2406.17957)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.12" + }, + "colab": { + "provenance": [], + "toc_visible": true + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} \ No newline at end of file