My open-source toolkit for speech-related tasks, e.g., single- and multi-channel speech enhancement, separation and recognition. The goal is to simplify the training and evaluation procedure and make it easy and flexible to run experiments and verify neural network based methods.
```shell
git clone https://github.com/funcwj/aps
# set up the Python environment, either by
# 1) running "pip install -r requirements.txt", or
# 2) creating a conda environment based on requirements.txt (recommended, see docker/Dockerfile)
cd aps && pip install -r requirements.txt # optional packages are not listed in requirements.txt
```
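For option 2), a minimal sketch of the conda workflow (the environment name and Python version below are illustrative assumptions, not fixed by the project):

```shell
# create and activate a dedicated conda environment, then install the pinned requirements
# (the environment name "aps" and Python 3.8 are illustrative choices)
conda create -n aps python=3.8
conda activate aps
pip install -r requirements.txt
```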
For developers (those who want to make commits or open PRs), continue by running the following to set up the development environment:
```shell
# set up the git hook scripts
pip install -r requirements-dev.txt && pre-commit install
```
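Once installed, the hooks run automatically on each `git commit`; an optional manual check before opening a PR:

```shell
# run all configured hooks against the whole repository
pre-commit run --all-files
```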
To build the C++ sources and demo commands, run:
```shell
mkdir build && cd build
cmake .. && make -j
```
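If you want more control over the build, a hedged variant using standard CMake/make options (the build type and parallelism flags below are generic choices, not project requirements):

```shell
# hypothetical variant: out-of-source Release build using all available cores
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j $(nproc)
```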
The project was started in early 2019 when the author was a master's student in the Audio, Speech and Language Processing Group (ASLP) at Northwestern Polytechnical University (NWPU), Xi'an, China. It was originally used to collect the source code of the author's past experiments.