WeNet

WeNet is a Transformer-based ASR engine; the name means "we share net together." We borrowed a lot of code from ESPnet, and we referred to OpenTransformer for batch inference.

Installation

WeNet requires PyTorch 1.6.0+.

# 1. Set up your own Python 3 virtual environment; miniconda is recommended.
# 2. Install PyTorch: conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
# 3. Install the requirements: pip install -r requirements.txt
# 4. Link Kaldi in the root directory of this repo: ln -s YOUR_KALDI_PATH kaldi
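
After installation, a quick sanity check like the following (a minimal sketch, not part of the repo) confirms that the installed PyTorch version and the CUDA toolkit are visible from the virtual environment:

import torch

# Quick sanity check: print the installed PyTorch version (must be 1.6.0+)
# and confirm that CUDA is visible from this environment.
print('PyTorch version:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())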

Python code principles

We want our PyTorch model to be directly exportable by the torch.jit.script method, which is essential for deploying the model to production.

See the PyTorch documentation on deploying models in production for details.

To ensure this, we try to export the model before the training stage. If the export fails, the training code must be modified to satisfy the export requirements.

# See wenet/bin/train.py: script the model before training starts and save
# the TorchScript archive, so unscriptable code fails fast.
script_model = torch.jit.script(model)
script_model.save(os.path.join(args.model_dir, 'init.zip'))
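
For reference, the exported archive can later be loaded for inference without the Python model definition. The snippet below is a minimal sketch; the path is hypothetical and should point to the init.zip (or a later checkpoint) written by train.py.

import torch

# Load the TorchScript archive produced above (hypothetical path).
script_model = torch.jit.load('exp/your_model_dir/init.zip')
script_model.eval()
# Inspect the TorchScript code generated for the model's forward method.
print(script_model.code)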

Two principles should be taken into consideration when contributing Python code to WeNet, especially for subclasses of torch.nn.Module and their forward functions.

  1. Know what is allowed and what is disallowed in TorchScript.

  2. Use explicit typing as much as possible. Type checking can also be enforced with typeguard; see https://typeguard.readthedocs.io/en/latest/userguide.html for details. A small example of both principles is sketched below.
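
The following is a minimal sketch (not taken from the WeNet codebase) of a torch.nn.Module subclass whose forward function uses explicit type annotations, so that torch.jit.script can export it without modification:

from typing import Optional, Tuple

import torch


class MaskedLinear(torch.nn.Module):
    """Toy layer: a linear projection that zeroes out padded positions."""

    def __init__(self, idim: int, odim: int):
        super().__init__()
        self.linear = torch.nn.Linear(idim, odim)

    def forward(
        self,
        xs: torch.Tensor,
        mask: Optional[torch.Tensor] = None,
    ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
        # Explicit annotations tell TorchScript exactly which types to expect;
        # inside the "is not None" branch the Optional is narrowed to Tensor.
        ys = self.linear(xs)
        if mask is not None:
            ys = ys.masked_fill(~mask, 0.0)
        return ys, mask


# The explicitly typed module scripts without modification.
script_model = torch.jit.script(MaskedLinear(80, 256))

During eager-mode unit tests, the same annotations can be checked at runtime with typeguard.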
