---
language:
- en
task_categories:
- text-to-audio
- audio-to-audio
tags:
- music
- midi
- chroma
- music-generation
- geometric-deep-learning
size_categories:
- 1K<n<10K
---
# MIDI Chroma Dataset
**Note:** This version still requires the genre labels to be fixed and the data restructured before it is compatible with training.
A pre-processed version of [foldl/midi](https://huggingface.co/datasets/foldl/midi) with chroma features extracted directly from MIDI note events.
## Dataset Description
This dataset contains **4,719 songs** with pre-computed chroma features for efficient music-generation training.
### Features
- **name**: Song title (string)
- **genre**: List of genres (list of strings)
- **chroma**: Pre-computed chroma features, shape `[128, 12]` (float32 array)
  - 128 time steps
  - 12 pitch classes (C, C#, D, D#, E, F, F#, G, G#, A, A#, B)
  - Values normalized to sum to 1.0 per timestep (see the sanity check below)
- **text**: Text description for conditioning (string)
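As a quick sanity check on the normalization claim, the per-timestep sums can be verified directly. This is a minimal sketch; it assumes `numpy` is available and that silent timesteps are stored as all-zero rows (an assumption, not stated above):

```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("AbstractPhil/foldl-midi", split="train")
chroma = np.asarray(dataset[0]["chroma"], dtype=np.float32)  # [128, 12]

# Each active timestep should sum to ~1.0; silent frames (assumption) stay all-zero.
sums = chroma.sum(axis=1)
active = sums > 0
assert np.allclose(sums[active], 1.0, atol=1e-4)
print(f"{active.sum()} of {len(sums)} timesteps contain notes")
```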
### Extraction Method
Chroma features are extracted **directly from MIDI note events**, without audio synthesis (a sketch follows the list below):
- Notes are mapped to their pitch class (0-11)
- Velocity is used for intensity weighting
- Temporal resolution: ~10 FPS
- Much faster than audio-based extraction
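The actual preprocessing script is not included here; the following is a minimal sketch of the approach described above, assuming `pretty_midi`. The function name, fixed-length padding/truncation policy, and frame rounding are illustrative choices, not the dataset's exact pipeline:

```python
import numpy as np
import pretty_midi

def extract_chroma(midi_path: str, n_frames: int = 128, fps: int = 10) -> np.ndarray:
    pm = pretty_midi.PrettyMIDI(midi_path)
    n_steps = max(1, int(pm.get_end_time() * fps))
    chroma = np.zeros((n_steps, 12), dtype=np.float32)

    for inst in pm.instruments:
        if inst.is_drum:  # drums carry no pitch class
            continue
        for note in inst.notes:
            pc = note.pitch % 12                      # pitch class 0-11
            start = int(note.start * fps)
            end = min(max(start + 1, int(note.end * fps)), n_steps)
            chroma[start:end, pc] += note.velocity    # velocity-weighted intensity

    # Pad or truncate to a fixed 128 frames (assumption: simplest policy)
    if n_steps >= n_frames:
        chroma = chroma[:n_frames]
    else:
        chroma = np.pad(chroma, ((0, n_frames - n_steps), (0, 0)))

    # Normalize each timestep to sum to 1.0; silent frames stay zero
    sums = chroma.sum(axis=1, keepdims=True)
    return np.divide(chroma, sums, out=np.zeros_like(chroma), where=sums > 0)
```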
### Usage
```python
from datasets import load_dataset
import torch

# Load the dataset
dataset = load_dataset("AbstractPhil/foldl-midi")

# Access a sample
sample = dataset['train'][0]
chroma = torch.tensor(sample['chroma'])  # [128, 12]
text = sample['text']  # "rock, pop: Genesis - The Light Dies Down"

print(f"Text: {text}")
print(f"Chroma shape: {chroma.shape}")
```
### Training ChromaLyra
This dataset is designed for training **ChromaLyra**, a geometric VAE for music generation:
```python
from geovocab2.train.model.chroma.chroma_lyra import ChromaLyra, ChromaLyraConfig

config = ChromaLyraConfig(
    n_chroma=12,
    seq_len=128,
    latent_dim=256,
    hidden_dim=384,
)
model = ChromaLyra(config)

# Train with text conditioning...
```
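The training loop itself depends on ChromaLyra's interface, which is not documented here. As a hedged starting point, this sketch only batches the dataset's confirmed fields (`chroma`, `text`) into tensors with a standard PyTorch `DataLoader`; the batch size and collation scheme are assumptions:

```python
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader

train = load_dataset("AbstractPhil/foldl-midi", split="train")

def collate(batch):
    chroma = torch.stack([torch.tensor(item["chroma"]) for item in batch])  # [B, 128, 12]
    texts = [item["text"] for item in batch]  # conditioning strings
    return chroma, texts

loader = DataLoader(train, batch_size=32, shuffle=True, collate_fn=collate)
chroma, texts = next(iter(loader))
```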
## Dataset Creation
Created by extracting chroma from the valid MIDI files in the foldl/midi dataset (filter criteria sketched below):
- Filtered to songs between 1 second and 3 minutes in duration
- Skipped empty and drum-only tracks
- Original: ~20K MIDI files → this dataset: 4,719 valid samples
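A minimal sketch of these filter criteria, assuming `pretty_midi`; the helper name is illustrative, not the actual preprocessing code:

```python
import pretty_midi

def is_valid(pm: pretty_midi.PrettyMIDI) -> bool:
    duration = pm.get_end_time()
    if not (1.0 <= duration <= 180.0):  # keep 1 s - 3 min
        return False
    pitched = [inst for inst in pm.instruments if not inst.is_drum]
    return any(inst.notes for inst in pitched)  # reject empty / drum-only files
```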
## Citation
Original dataset:
```bibtex
@misc{foldl-midi,
  author    = {foldl},
  title     = {MIDI Dataset},
  year      = {2023},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/foldl/midi}
}
```
Geometric approach:
```bibtex
@misc{abstract-phil-geovocab,
  author = {AbstractPhil},
  title  = {GeoVocab: Geometric Deep Learning for Music Generation},
  year   = {2025},
  url    = {https://github.com/AbstractPhil/geovocab2}
}
```
## License
Same as the original [foldl/midi](https://huggingface.co/datasets/foldl/midi) dataset.
## Acknowledgments
- Original MIDI dataset: foldl
- Chroma extraction: pretty_midi library
- Geometric VAE architecture: AbstractPhil/GeoVocab2