If you are interested in our work, or use this code or any part of it, please cite us.
Consider citing:
@inproceedings{li2021speaker,
title={Speaker and Direction Inferred Dual-Channel Speech Separation},
author={Li, Chenxing and Xu, Jiaming and Mesgarani, Nima and Xu, Bo},
booktitle={ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={5779--5783},
year={2021},
organization={IEEE}
}
For a more detailed description, please refer to the full paper via this link.
PyTorch >= 1.1.0
resampy
soundfile
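The Python dependencies can typically be installed with `pip install resampy soundfile`; install PyTorch separately, following the official instructions for your CUDA version.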
Please refer to predata_WSJ_lcx.py. A more detailed dataset preparation procedure will be updated soon.
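As a rough illustration of the kind of preprocessing involved (a hypothetical sketch, not the actual code in predata_WSJ_lcx.py; the function name and 8 kHz target rate are assumptions), dual-channel mixtures can be loaded and resampled with the soundfile and resampy packages listed above:

```python
# Hypothetical sketch only -- see predata_WSJ_lcx.py for the actual preparation pipeline.
import soundfile as sf
import resampy

def load_dual_channel(wav_path, target_sr=8000):
    """Load a two-channel mixture and resample it to target_sr (8 kHz assumed here)."""
    audio, sr = sf.read(wav_path)  # audio: (num_samples, num_channels)
    if sr != target_sr:
        audio = resampy.resample(audio, sr, target_sr, axis=0)
    return audio  # shape: (num_samples, 2) for a dual-channel recording
```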
For train:
python train_WSJ0_SDNet.py
For test:
python test_WSJ0_SDNet.py
Please modify the model path in test_WSJ0_SDNet.py before running.
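For example (the variable name and checkpoint path below are placeholders; check test_WSJ0_SDNet.py for the actual names used in the script):

```python
# Placeholder example -- the real variable name in test_WSJ0_SDNet.py may differ.
model_path = './checkpoints/SDNet_best.pt'  # point this at your trained checkpoint
```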
If you have any questions, please contact:
Email: lichenxing007@gmail.com
- A brief implementation of SDNet
- Pretrained models
- Separated samples