The Interspeech 2025 URGENT challenge (Universality, Robustness, and Generalizability for EnhancemeNT) is a speech enhancement challenge held in conjunction with Interspeech 2025. We aim to build universal speech enhancement models that unify speech processing across a wide variety of conditions.
Data preparation script: https://github.com/urgent-challenge/urgent2025_challenge
Official validation set for leaderboard submission: https://drive.google.com/file/d/1Ip-C5tUNGCssT8KAjHUUoh99jkzRH6nm/view
Leaderboard: https://urgent-challenge.com
Baseline script: https://github.com/kohei0209/espnet/tree/urgent2025/egs2/urgent25/enh1
Baseline pre-trained model: https://huggingface.co/kohei0209/tfgridnet_urgent25/tree/main
Slack channel: https://join.slack.com/t/urgentchallenge/shared_invite/zt-2jy2stg7q-79AGeAY0CpKHRl7r4X0e6g
The Interspeech 2025 URGENT challenge aims to bring more attention to constructing Universal, Robust, and Generalizable speech EnhancemeNT models. This year's challenge focuses on broader distortion coverage, multilingual data, and data scaling, as detailed below.
The task of this challenge is to build a single speech enhancement system to adaptively handle input speech with different distortions (corresponding to different SE subtasks) and different input formats (e.g., sampling frequencies) in different acoustic environments (e.g., noise and reverberation).
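As an illustration of handling different input formats, the sketch below shows one common way a single system can serve inputs at arbitrary sampling frequencies: resample to the model's internal rate, enhance, and resample back. This is only a minimal sketch, not the official baseline; `enhance` is a hypothetical stand-in for any trained SE network, and the 48 kHz internal rate is an assumption.

```python
# Minimal sketch: sampling-frequency-agnostic inference via resampling.
import soundfile as sf
import soxr  # high-quality resampling library (python-soxr)

MODEL_SR = 48000  # assumed internal sampling rate of the model


def enhance_any_sr(path_in: str, path_out: str, enhance) -> None:
    speech, sr = sf.read(path_in)
    # Resample to the model's internal rate if the input rate differs.
    x = soxr.resample(speech, sr, MODEL_SR) if sr != MODEL_SR else speech
    y = enhance(x)  # hypothetical model call; returns audio at MODEL_SR
    # Write the output back at the original input rate.
    if sr != MODEL_SR:
        y = soxr.resample(y, MODEL_SR, sr)
    sf.write(path_out, y, sr)
```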
The training data will consist of several public corpora of speech, noise, and RIRs. Only the specified set of data can be used during the challenge. We encourage participants to apply data augmentation techniques such as dynamic mixing to achieve the best generalizability; a minimal sketch is given below. The data preparation scripts are released in our GitHub repository. See the Data tab for more information.
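A minimal sketch of dynamic mixing, assuming pre-loaded single-channel speech and noise waveforms at a common sampling rate; the function and SNR range below are illustrative and not part of the official data preparation scripts.

```python
# Minimal sketch: on-the-fly mixing so each training step sees a fresh mixture.
import random
import numpy as np


def dynamic_mix(speech_list, noise_list, snr_range=(-5.0, 20.0)):
    speech = random.choice(speech_list)
    noise = random.choice(noise_list)
    # Tile or trim the noise so its length matches the speech.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]
    # Scale the noise to a randomly drawn SNR (in dB).
    snr = random.uniform(*snr_range)
    speech_power = np.mean(speech**2) + 1e-10
    noise_power = np.mean(noise**2) + 1e-10
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr / 10)))
    return speech + scale * noise, speech  # (noisy mixture, clean target)
```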
We will evaluate the enhanced audio with a variety of metrics to comprehensively understand the capacity of existing generative and discriminative methods. They include four categories of metrics: non-intrusive, intrusive, downstream-task-independent, and downstream-task-dependent metrics. More details about the evaluation plan can be found in the Rules tab.
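As a concrete example of an intrusive metric, the sketch below computes SI-SDR between a reference and an enhanced signal. It is shown purely for illustration; refer to the Rules tab and the official scoring scripts for the exact metrics used in the challenge.

```python
import numpy as np


def si_sdr(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Scale-invariant SDR (in dB) between a reference and an enhanced signal."""
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    # Project the estimate onto the reference to obtain the scaled target.
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + 1e-10)
    target = alpha * reference
    distortion = estimate - target
    return 10 * np.log10(np.sum(target**2) / (np.sum(distortion**2) + 1e-10))
```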
The URGENT 2025 challenge has two tracks with different data scales. Participants may take part in Track 1, Track 2, or both. The same test set will be used in both tracks.
We characterize Track 1 as the primary track, which compares techniques under the same data, and Track 2 as a secondary track that provides insights into data scaling. Since the data scale of Track 2 is huge, we encourage participants to start with Track 1. However, if you are interested in data scaling or data-hungry methods (e.g., speech LMs or SSL-based methods), Track 2 welcomes you!
Note that although the Track 2 dataset is huge, participants do not have to use all the data. Participants may want to start by training the best system developed in Track 1 with some data from Track 2. Data selection techniques (e.g., based on diversity or estimated MOS scores) can help train the model efficiently; a minimal sketch follows.
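A minimal sketch of MOS-based data selection under stated assumptions: `estimate_mos` is a hypothetical non-intrusive quality predictor (e.g., a DNSMOS-style model) and the threshold is arbitrary; neither is prescribed by the challenge.

```python
# Minimal sketch: keep only "clean enough" speech files as training targets.
import soundfile as sf


def select_clean_subset(paths, estimate_mos, mos_threshold=3.5):
    selected = []
    for path in paths:
        wav, sr = sf.read(path)
        # estimate_mos is a hypothetical predictor returning an estimated MOS.
        if estimate_mos(wav, sr) >= mos_threshold:
            selected.append(path)
    return selected
```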
We have a Slack channel for real-time communication.
Recent decades have witnessed the rapid development of deep learning-based speech enhancement (SE) techniques, which achieve impressive performance in matched conditions. However, most conventional SE approaches focus on only a limited range of conditions, such as single-channel, multi-channel, anechoic, or monolingual settings.
In many existing works, researchers tend to train SE models on only one or two common datasets, such as the VoiceBank+DEMAND dataset.
The evaluation is often done only on simulated conditions similar to the training setting. Meanwhile, in earlier SE challenges such as the DNS series, the choice of training data was often left to the participants. This led to situations where models trained on huge amounts of private data were compared with models trained on a small public dataset, which greatly impedes a comprehensive understanding of the generalizability and robustness of SE methods. In addition, the model design may be biased towards a specific limited condition if only a small amount of data is used, and the resultant SE model may have limited capacity to handle more complicated scenarios.
Apart from conventional discriminative methods, generative methods have also attracted much attention in recent years. They are good at handling different distortions with a single model.
Meanwhile, recent efforts
Existing speech enhancement challenges have fostered the development of speech enhancement models for specific conditions, such as denoising and dereverberation.
The first edition of the URGENT challenge series, the NeurIPS 2024 URGENT challenge, aimed to comprehensively evaluate the universality, robustness, and generalizability of SE systems. Systems handling (i) 4 types of distortions (additive noise, reverberation, clipping, and bandwidth limitation) and (ii) various sampling frequencies were trained on a large-scale dataset and evaluated with as many as 12 objective metrics as well as a human listening test. We provided one of the state-of-the-art SE models, TF-GridNet, as the baseline, and 14 teams improved over this strong baseline, which greatly contributes to the further development of universal SE systems. However, the first challenge had some limitations: (i) it covered only 4 types of distortions, (ii) it had only English data, and (iii) the data filtering applied to some noisy corpora did not perfectly filter out noisy samples.
To go a step further, the Interspeech 2025 URGENT challenge extends the former one as follows: (i) it covers 3 more types of distortions (7 in total) to improve universality, (ii) it includes multilingual speech to investigate the language dependency of SE models, particularly that of generative models, and (iii) it intentionally includes some noisy samples to investigate how to effectively utilize or filter them out. In addition, the challenge has two tracks with different data scales, allowing participants to investigate the scalability of SE systems, which has so far been under-explored in the SE community. We believe that building robust and generalizable universal SE systems is one of the important topics in the field, and the challenge greatly contributes to future SE research.