Singing Voice Separation via Recurrent Inference and Skip-Filtering connections

Support material and source code for the method described in : S.I. Mimilakis, K. Drossos, J.F. Santos, G. Schuller, T. Virtanen, Y. Bengio, "Monaural Singing Voice Separation with Skip-Filtering Connections and Recurrent Inference of Time-Frequency Mask", in arXiv:1711.01437 [cs.SD], Nov. 2017. This work has been accepted for poster presentation at ICASSP 2018.
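As a rough orientation only, the snippet below sketches the two ideas named in the title under our own assumptions (the layer sizes, GRU masker, ReLU mask, and convergence test are illustrative and are not taken from the paper or from this repository): a mask predicted by a recurrent network multiplies the input mixture magnitude spectrogram (skip-filtering connection), and the masking step is re-applied iteratively (recurrent inference). Please refer to the paper and the source code for the actual architecture.

```python
import torch
import torch.nn as nn

class SkipFilteringSketch(nn.Module):
    """Illustrative sketch, not the authors' model."""
    def __init__(self, freq_bins=1025, hidden=512):
        super(SkipFilteringSketch, self).__init__()
        self.rnn = nn.GRU(freq_bins, hidden, batch_first=True, bidirectional=True)
        self.to_mask = nn.Linear(2 * hidden, freq_bins)

    def forward(self, mix_mag, max_iters=10, tol=1e-3):
        # mix_mag: magnitude spectrogram of the mixture, shape (batch, frames, freq_bins)
        est = mix_mag
        prev_mask = torch.zeros_like(mix_mag)
        for _ in range(max_iters):                      # recurrent inference: re-apply the masker
            h, _ = self.rnn(est)
            mask = torch.relu(self.to_mask(h))          # predicted time-frequency mask
            est = mask * mix_mag                        # skip-filtering connection: filter the input mixture
            if (mask - prev_mask).abs().mean() < tol:   # stop once the mask stops changing
                break
            prev_mask = mask
        return est, mask

# Example: voice_mag, mask = SkipFilteringSketch()(torch.rand(1, 60, 1025))
```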

Please use the above citation if you find any of the code useful.

Listening Examples : https://js-mim.github.io/mss_pytorch/

Extensions :

  • An improvement of this work, which includes a novel regularization technique using TwinNetworks, can be found here: https://github.com/dr-costas/mad-twinnet .
  • A new branch, "nmr_eval", contains our L1-penalized model as an alternative to the recurrent inference algorithm; that system was submitted to SiSEC-MUS18 and is denoted as "MDL1" & "MDLT". It also makes it possible to use an additional variable, the inverse masking threshold, inside the cost function; the latter approach is ongoing work on perceptual evaluation. A hedged sketch of an L1-penalized objective is given after this list.
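As a rough, non-authoritative illustration of what an L1-penalized objective could look like (the function name, the choice of reconstruction loss, and the penalty weight are assumptions and are not taken from the "nmr_eval" branch):

```python
import torch
import torch.nn.functional as F

def l1_penalised_objective(est_mag, target_mag, mask, l1_weight=1e-2):
    # Reconstruction term on the masked mixture plus an L1 penalty on the
    # predicted time-frequency mask, encouraging sparse masks instead of
    # refining them with a recurrent inference loop.
    reconstruction = F.mse_loss(est_mag, target_mag)
    sparsity = mask.abs().mean()
    return reconstruction + l1_weight * sparsity
```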

Requirements :

Usage :

  • Clone the repository.
  • Add the base directory to your Python path.
  • With "mss_pytorch" as your current directory, simply execute the "processes_scripts/main_script.py" file (iPython is preferred if the base directory was not set up correctly); see the sketch after this list.
  • Arguments for training and testing are given inside the main function of the "processes_scripts/main_script.py" file.
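For orientation, a minimal sketch of the steps above run programmatically (the clone location is an assumption, and the training/testing arguments are supplied inside the script's own main function):

```python
import sys
import runpy

base_dir = "/path/to/mss_pytorch"   # assumed clone location; adjust to your setup
sys.path.insert(0, base_dir)        # add the base directory to the Python path

# Execute processes_scripts/main_script.py as if it were run from the command line.
runpy.run_path(base_dir + "/processes_scripts/main_script.py", run_name="__main__")
```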

Acknowledgements :

The research leading to these results has received funding from the European Union's H2020 Framework Programme (H2020-MSCA-ITN-2014) under grant agreement no 642685 MacSeNet.