Almost State-of-the-art Automatic Speech Recognition in Tensorflow 2
TensorFlowASR implements several automatic speech recognition architectures, such as DeepSpeech2, Jasper, RNN Transducer, ContextNet, and Conformer. These models can be converted to TFLite to reduce memory and computation costs for deployment 😄
What's New?
Table of Contents
- What's New?
- Table of Contents
- 😋 Supported Models
- Installation
- Training & Testing Tutorial
- Feature Extraction
- Augmentations
- TFLite Conversion
- Pretrained Models
- Corpus Sources
- How to contribute
- References & Credits
- Contact
😋 Supported Models
Baselines
- Transducer Models (end-to-end models trained with RNN-T loss; currently supports Conformer, ContextNet, and Streaming Transducer)
- CTCModel (end-to-end models trained with CTC loss; currently supports DeepSpeech2 and Jasper)
Publications
- Conformer Transducer (Reference: https://arxiv.org/abs/2005.08100) See examples/models/transducer/conformer
- Streaming Conformer (Reference: http://arxiv.org/abs/2010.11395) See examples/models/transducer/conformer
- ContextNet (Reference: http://arxiv.org/abs/2005.03191) See examples/models/transducer/contextnet
- RNN Transducer (Reference: https://arxiv.org/abs/1811.06621) See examples/models/transducer/rnnt
- Deep Speech 2 (Reference: https://arxiv.org/abs/1512.02595) See examples/models/ctc/deepspeech2
- Jasper (Reference: https://arxiv.org/abs/1904.03288) See examples/models/ctc/jasper
Installation
For training and testing, install from a git clone so that the required packages from other authors (ctc_decoders, rnnt_loss, etc.) are set up:
NOTE ONLY FOR APPLE SILICON: TensorFlowASR requires Python >= 3.12
See the requirements.[extra].txt files for extra dependencies
```bash
git clone https://github.com/TensorSpeech/TensorFlowASR.git
cd TensorFlowASR
./setup.sh [apple|tpu|gpu] [dev]
```
Running in a container
Training & Testing Tutorial
- For training, please read tutorial_training
- For testing, please read tutorial_testing
FYI: Keras built-in training uses an infinite (repeated) dataset, which avoids a potentially partial final batch.
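The infinite-dataset pattern can be sketched with stock TensorFlow APIs: repeat the dataset and tell Keras how many steps make up an epoch, so no short final batch is ever produced. The shapes and model below are illustrative, not taken from this repo.

```python
import tensorflow as tf

# Toy data standing in for batched ASR features/labels.
x = tf.random.normal((100, 8))
y = tf.random.normal((100, 1))

# .repeat() makes the dataset infinite, so every batch has the full size;
# steps_per_epoch then defines the epoch boundary explicitly.
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(32).repeat()

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")
history = model.fit(ds, epochs=1, steps_per_epoch=4, verbose=0)
```

Without `.repeat()`, the last batch here would hold only 100 - 3*32 = 4 samples; with it, Keras simply draws 4 full batches per epoch.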
See examples for some predefined ASR models and results
Feature Extraction
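The library's feature extraction is configured through its speech featurizer; as an illustration of what log-mel feature extraction computes, here is a minimal NumPy sketch (function name and parameter defaults are typical values chosen for illustration, not taken from this repo):

```python
import numpy as np

def log_mel_spectrogram(signal, sample_rate=16000, frame_len=400,
                        frame_step=160, n_fft=512, n_mels=80):
    """Compute log-mel features: frame -> window -> FFT -> mel filterbank -> log."""
    # Slice the waveform into overlapping frames and apply a Hann window.
    num_frames = 1 + (len(signal) - frame_len) // frame_step
    frames = np.stack([signal[i * frame_step: i * frame_step + frame_len]
                       for i in range(num_frames)])
    frames = frames * np.hanning(frame_len)

    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2

    # Triangular mel filterbank, evenly spaced on the mel scale.
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)

    # Project onto the mel bands and compress with a log.
    return np.log(power @ fbank.T + 1e-6)
```

For a 1-second, 16 kHz signal with these defaults this yields a (98, 80) feature matrix: 98 frames of 80 mel bins each.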
Augmentations
See augmentations
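As a flavor of what spectrogram augmentation does, here is a minimal SpecAugment-style masking sketch in NumPy (the function name and default widths are illustrative, not this library's API):

```python
import numpy as np

def spec_augment(spec, num_freq_masks=2, freq_mask_width=10,
                 num_time_masks=2, time_mask_width=20, rng=None):
    """Zero out random frequency bands and time spans of a (time, freq) spectrogram."""
    if rng is None:
        rng = np.random.default_rng()
    spec = spec.copy()  # never mutate the caller's features
    t, f = spec.shape
    for _ in range(num_freq_masks):
        w = int(rng.integers(0, freq_mask_width + 1))
        start = int(rng.integers(0, max(f - w, 1)))
        spec[:, start:start + w] = 0.0
    for _ in range(num_time_masks):
        w = int(rng.integers(0, time_mask_width + 1))
        start = int(rng.integers(0, max(t - w, 1)))
        spec[start:start + w, :] = 0.0
    return spec
```

Masking is applied on-the-fly during training only, so each epoch sees differently corrupted copies of the same utterance.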
TFLite Conversion
After conversion, the TFLite model acts as a single function that maps an audio signal directly to text and tokens.
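The conversion step itself can be sketched with stock TensorFlow APIs; the toy module below stands in for a trained ASR model (class name, shapes, and signature are illustrative, not from this repo):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for a trained model: one linear layer wrapped in a
# tf.function with a fixed input signature so the converter can trace it.
class ToyModel(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.random.normal([16, 4]))

    @tf.function(input_signature=[tf.TensorSpec([1, 16], tf.float32)])
    def __call__(self, x):
        return tf.matmul(x, self.w)

model = ToyModel()
concrete = model.__call__.get_concrete_function()
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete], model)
tflite_bytes = converter.convert()

# The converted model runs end to end in the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros((1, 16), np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
```

For the real models, the traced function would take a waveform tensor and return decoded tokens, so the deployed artifact needs no Python-side feature or decoding code.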
Pretrained Models
See the results in each example folder, e.g. ./examples/models/transducer/conformer/results/sentencepiece/README.md
Corpus Sources
English
| Name | Source | Hours |
|---|---|---|
| LibriSpeech | http://www.openslr.org/12 | 970h |
| Common Voice | https://commonvoice.mozilla.org | 1932h |
Vietnamese
| Name | Source | Hours |
|---|---|---|
| Vivos | https://ailab.hcmus.edu.vn/vivos | 15h |
| InfoRe Technology 1 | InfoRe1 (passwd: BroughtToYouByInfoRe) | 25h |
| InfoRe Technology 2 (used in VLSP2019) | InfoRe2 (passwd: BroughtToYouByInfoRe) | 415h |
| VietBud500 | https://huggingface.co/datasets/linhtran92/viet_bud500 | 500h |
How to contribute
- Fork the project
- Install for development
- Create a branch
- Make a pull request to this repo
References & Credits
- NVIDIA OpenSeq2Seq Toolkit
- https://github.com/noahchalifour/warp-transducer
- Sequence Transduction with Recurrent Neural Networks
- End-to-End Speech Processing Toolkit in PyTorch
- https://github.com/iankur/ContextNet
Contact
Huy Le Nguyen
Email: nlhuy.cs.16@gmail.com