---
language:
  - en
task_categories:
  - text-to-audio
  - audio-to-audio
tags:
  - music
  - midi
  - chroma
  - music-generation
  - geometric-deep-learning
size_categories:
  - 1K<n<10K
---

# MIDI Chroma Dataset

**Note:** This version requires the genres to be fixed and the dataset restructured before it is compatible with training.

A pre-processed version of [foldl/midi](https://huggingface.co/datasets/foldl/midi) with chroma features extracted directly from MIDI note events.

## Dataset Description

This dataset contains 4,719 songs with pre-computed chroma features for efficient music-generation training.

### Features

- `name`: Song title (string)
- `genre`: List of genres (list of strings)
- `chroma`: Pre-computed chroma features of shape `[128, 12]` (float32 array)
  - 128 time steps (first axis)
  - 12 pitch classes (C, C#, D, D#, E, F, F#, G, G#, A, A#, B) (second axis)
  - Values normalized to sum to 1.0 per time step (see the sanity check after this list)
- `text`: Text description for conditioning (string)
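
A quick way to verify these invariants on a loaded record (a minimal sketch; `sample` is one record obtained as in the Usage section below, and the tolerance and silent-frame handling are assumptions):

```python
import numpy as np

chroma = np.asarray(sample["chroma"], dtype=np.float32)
assert chroma.shape == (128, 12)  # [time steps, pitch classes]

# Each frame should sum to ~1.0; frames with no active notes may be all zeros
frame_sums = chroma.sum(axis=-1)
assert np.all((np.abs(frame_sums - 1.0) < 1e-4) | (frame_sums == 0.0))

# The text field follows "genre1, genre2: Artist - Title"
genres, _, title = sample["text"].partition(": ")
print(genres.split(", "), title)
```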

## Extraction Method

Chroma features are extracted directly from MIDI note events, without audio synthesis (see the sketch after this list):

- Notes are mapped to their pitch class (0-11)
- Velocity is used for intensity weighting
- Temporal resolution: ~10 frames per second
- Much faster than audio-based extraction
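
The extraction script itself is not reproduced here; the sketch below shows the general approach using the `pretty_midi` library (function name and details are illustrative, not the exact code used to build the dataset):

```python
import numpy as np
import pretty_midi

def midi_to_chroma(path, fps=10, n_frames=128):
    """Velocity-weighted chroma from MIDI note events, no audio synthesis (sketch)."""
    pm = pretty_midi.PrettyMIDI(path)
    chroma = np.zeros((n_frames, 12), dtype=np.float32)
    for inst in pm.instruments:
        if inst.is_drum:  # drum tracks carry no pitch-class information
            continue
        for note in inst.notes:
            start = int(note.start * fps)
            end = max(start + 1, int(note.end * fps))
            pc = note.pitch % 12  # pitch class 0-11
            chroma[start:min(end, n_frames), pc] += note.velocity
    # Normalize each active frame to sum to 1.0; silent frames stay zero
    totals = chroma.sum(axis=-1, keepdims=True)
    np.divide(chroma, totals, out=chroma, where=totals > 0)
    return chroma
```

`pretty_midi` also provides `PrettyMIDI.get_chroma`, which aggregates the piano roll across octaves and could serve a similar purpose.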

## Usage

```python
from datasets import load_dataset
import torch

# Load dataset
dataset = load_dataset("AbstractPhil/foldl-midi")

# Access samples
sample = dataset['train'][0]
chroma = torch.tensor(sample['chroma'])  # [128, 12]
text = sample['text']                    # "rock, pop: Genesis - The Light Dies Down"

print(f"Text: {text}")
print(f"Chroma shape: {chroma.shape}")
```

## Training ChromaLyra

This dataset is designed for training ChromaLyra, a geometric VAE for music generation:

```python
from geovocab2.train.model.chroma.chroma_lyra import ChromaLyra, ChromaLyraConfig

config = ChromaLyraConfig(
    n_chroma=12,
    seq_len=128,
    latent_dim=256,
    hidden_dim=384
)

model = ChromaLyra(config)
# Train with text conditioning...
```
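
The geovocab2 repository defines ChromaLyra's actual training interface. As a rough illustration only, a generic VAE training step might look like the sketch below; the `(recon, mu, logvar)` forward signature and the KL weight are assumptions, not ChromaLyra's real API:

```python
import torch
import torch.nn.functional as F

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(chroma):                    # chroma: [B, 128, 12]
    recon, mu, logvar = model(chroma)      # assumed VAE-style forward; check geovocab2
    recon_loss = F.mse_loss(recon, chroma)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon_loss + 1e-3 * kl          # KL weight is illustrative
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```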

## Dataset Creation

Created by extracting chroma from the valid MIDI files in the foldl/midi dataset (a sketch of the filter follows the list):

- Filtered to songs between 1 s and 3 min in duration
- Skipped empty and drum-only tracks
- Original: ~20K MIDI files → this dataset: 4,719 valid samples
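
A minimal sketch of such a filter with `pretty_midi` (illustrative only; the actual build script may differ):

```python
import pretty_midi

def is_valid_midi(path, min_dur=1.0, max_dur=180.0):
    """Illustrative filter mirroring the criteria above; not the actual build script."""
    try:
        pm = pretty_midi.PrettyMIDI(path)
    except Exception:
        return False  # unparseable file
    if not (min_dur <= pm.get_end_time() <= max_dur):
        return False  # outside the 1 s - 3 min window
    # Require at least one non-drum note (skips empty and drum-only files)
    return any(inst.notes for inst in pm.instruments if not inst.is_drum)
```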

## Citation

Original dataset:

```bibtex
@misc{foldl-midi,
  author = {foldl},
  title = {MIDI Dataset},
  year = {2023},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/foldl/midi}
}
```

Geometric approach:

```bibtex
@misc{abstract-phil-geovocab,
  author = {AbstractPhil},
  title = {GeoVocab: Geometric Deep Learning for Music Generation},
  year = {2025},
  url = {https://github.com/AbstractPhil/geovocab2}
}
```

## License

Same as the original [foldl/midi](https://huggingface.co/datasets/foldl/midi) dataset.

## Acknowledgments

- Original MIDI dataset: [foldl](https://huggingface.co/datasets/foldl/midi)
- Chroma extraction: the `pretty_midi` library
- Geometric VAE architecture: AbstractPhil/GeoVocab2