Baseline


Baseline code and checkpoint

Please refer to Baselines in ESPnet below for more details.


Basic Framework

[Figure: overview of the basic framework, showing the distortion model (simulation stage), adaptive STFT-based SFI SE models (upper right), and conventional SE models (lower right)]

The basic framework is detailed in the URGENT challenge 2024 description paper.

As depicted in the figure above, we design a distortion model (simulation stage) \mathcal{F}(\cdot) to unify the data format across different distortion types, so that different speech enhancement (SE) sub-tasks share consistent input/output processing. In particular, we ensure that the sampling frequency (SF) of the distortion model's output (degraded speech) is always equal to that of its input.
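
As a rough illustration only (not the official simulation code), the sketch below shows the general shape of such a distortion model: a hypothetical distort() function applies one example degradation (bandwidth limitation via down-/up-sampling plus additive noise) and returns a signal at the same SF as its input. The function name, parameters, and the specific distortions are assumptions made for this sketch.

```python
import numpy as np
from scipy.signal import resample_poly

def distort(speech: np.ndarray, fs: int, noise: np.ndarray,
            snr_db: float = 10.0, limited_fs: int = 8000) -> np.ndarray:
    """Hypothetical distortion model F(.): bandwidth limitation + additive noise.

    The output always has the same sampling frequency `fs` as the input,
    mirroring the constraint described above.
    """
    # Bandwidth limitation: down-sample to `limited_fs`, then back up to `fs`.
    degraded = resample_poly(speech, limited_fs, fs)
    degraded = resample_poly(degraded, fs, limited_fs)
    degraded = degraded[: len(speech)]

    # Additive noise at the requested SNR (noise is assumed to already be at `fs`).
    noise = noise[: len(degraded)]
    scale = np.sqrt(
        (np.sum(degraded**2) / (np.sum(noise**2) + 1e-10)) / (10 ** (snr_db / 10))
    )
    return degraded + scale * noise
```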

During training and inference, different SFs are supported for both conventional SE models (lower right), which usually operate at a single SF, and adaptive STFT-based sampling-frequency-independent (SFI) SE models (upper right), which can directly handle different SFs.

  • For conventional SE models (e.g., Conv-TasNet), we always upsample the input (degraded speech) to the highest SF (48 kHz), so that the model only needs to operate at 48 kHz. The model output (48 kHz) is then downsampled back to the SF of the degraded speech (a minimal sketch of this resampling wrapper is given after this list).
  • For adaptive STFT-based SFI SE models (e.g., BSRNN, TF-GridNet), we directly feed the degraded speech at its original SF into the model, which adaptively adjusts its STFT/iSTFT configuration according to the SF and generates the enhanced signal at the same SF (see the second sketch after this list).
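
For the first case, a minimal sketch of the resampling wrapper might look as follows; `model` stands for any fixed-rate SE model (e.g., a Conv-TasNet trained at 48 kHz), and torchaudio's resampling routine is used here only as one possible choice. This is an assumed illustration, not the baseline recipe code.

```python
import torch
import torchaudio.functional as AF

def enhance_fixed_rate(model: torch.nn.Module, degraded: torch.Tensor,
                       fs: int, model_fs: int = 48000) -> torch.Tensor:
    """Run a fixed-rate SE model at 48 kHz and return audio at the input SF."""
    # Upsample the degraded speech to the model's operating rate (48 kHz).
    x = AF.resample(degraded, orig_freq=fs, new_freq=model_fs)
    with torch.no_grad():
        enhanced = model(x)
    # Downsample the enhanced output back to the original sampling frequency.
    return AF.resample(enhanced, orig_freq=model_fs, new_freq=fs)
```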
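For the second case, the key idea is that SFI models keep their STFT window and hop fixed in milliseconds and convert them to samples from the input SF at run time. The sketch below illustrates this rescaling; the 32 ms / 16 ms settings are illustrative assumptions, not the exact baseline configuration.

```python
import torch

def sfi_stft(wav: torch.Tensor, fs: int,
             win_ms: float = 32.0, hop_ms: float = 16.0) -> torch.Tensor:
    """STFT whose window/hop are fixed in milliseconds, so the time-frequency
    resolution stays consistent across sampling frequencies."""
    # e.g., a 32 ms window is 512 samples at 16 kHz but 1536 samples at 48 kHz.
    win_length = int(round(win_ms * fs / 1000))
    hop_length = int(round(hop_ms * fs / 1000))
    window = torch.hann_window(win_length, device=wav.device)
    return torch.stft(wav, n_fft=win_length, hop_length=hop_length,
                      win_length=win_length, window=window, return_complex=True)
```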


Baselines in ESPnet

We provide official baselines and the corresponding recipe (egs2/urgent25/enh1) based on the ESPnet toolkit.

How to run training and inference is described in egs2/urgent25/enh1/README.md.

To install the ESPnet toolkit for model training, please follow the instructions at https://espnet.github.io/espnet/installation.html.

You can check “A quick tutorial on how to use ESPnet” for a quick overview of how to use ESPnet for speech enhancement.