---
tags:
- tensorflowtts
- audio
- text-to-speech
- text-to-mel
language: eng
license: apache-2.0
datasets:
- LJSpeech
widget:
- text: "How are you?"
---

# FastSpeech2 trained on LJSpeech (Eng)
This repository provides a pretrained [FastSpeech2](https://arxiv.org/abs/2006.04558) model trained on the LJSpeech dataset (ENG). For details of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS).

## Install TensorFlowTTS
First of all, please install TensorFlowTTS with the following command:
```
pip install TensorFlowTTS
```
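
If you need changes that have not reached the PyPI release yet, installing straight from the GitHub repository should also work (assuming the repository's default branch remains pip-installable):
```
pip install git+https://github.com/TensorSpeech/TensorFlowTTS.git
```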

### Converting your Text to Mel Spectrogram
```python
import tensorflow as tf

from tensorflow_tts.inference import AutoProcessor
from tensorflow_tts.inference import TFAutoModel

# Load the text processor and the pretrained FastSpeech2 model from the Hub.
processor = AutoProcessor.from_pretrained("tensorspeech/tts-fastspeech2-ljspeech-en")
fastspeech2 = TFAutoModel.from_pretrained("tensorspeech/tts-fastspeech2-ljspeech-en")

text = "How are you?"

# Convert the input text to a sequence of token IDs.
input_ids = processor.text_to_sequence(text)

mel_before, mel_after, duration_outputs, _, _ = fastspeech2.inference(
    input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
    speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
    speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
    f0_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
    energy_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
)
```
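
The `speed_ratios`, `f0_ratios`, and `energy_ratios` inputs scale the model's predicted durations, pitch, and energy; `1.0` leaves the predictions unchanged. Note that FastSpeech2 produces a mel spectrogram, not a waveform, so you still need a vocoder to get audio. Below is a minimal sketch assuming the companion MB-MelGAN checkpoint `tensorspeech/tts-mb_melgan-ljspeech-en` and the `soundfile` package (both are assumptions here, not part of this card; check the TensorSpeech hub page for the exact vocoder name):
```python
import soundfile as sf
from tensorflow_tts.inference import TFAutoModel

# Assumed companion vocoder checkpoint trained on the same dataset.
mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-ljspeech-en")

# Turn the mel spectrogram from the example above into a waveform.
audio = mb_melgan.inference(mel_after)[0, :, 0]

# LJSpeech audio is sampled at 22.05 kHz.
sf.write("./audio.wav", audio.numpy(), 22050, "PCM_16")
```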

#### Referencing FastSpeech 2
```
@misc{ren2021fastspeech,
      title={FastSpeech 2: Fast and High-Quality End-to-End Text to Speech},
      author={Yi Ren and Chenxu Hu and Xu Tan and Tao Qin and Sheng Zhao and Zhou Zhao and Tie-Yan Liu},
      year={2021},
      eprint={2006.04558},
      archivePrefix={arXiv},
      primaryClass={eess.AS}
}
```

#### Referencing TensorFlowTTS
```
@misc{TFTTS,
      author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He},
      title = {TensorflowTTS},
      year = {2020},
      publisher = {GitHub},
      journal = {GitHub repository},
      howpublished = {\url{https://github.com/TensorSpeech/TensorFlowTTS}},
}
```