---
license: cc0-1.0
task_categories:
- visual-question-answering
- image-to-text
tags:
- comics
- metadata
- book-level
- tiny-dataset
- testing
size_categories:
- n<1K
---
# Comic Books Tiny Dataset v0 - Books (Testing)
A small test dataset of book-level metadata for rapid development and testing.

⚠️ **This is a TINY dataset for testing only.** For production, use `comix_v0_books`.
## Dataset Description
- **Total Books**: Unknown
- **Format**: WebDataset (tar files)
- **Content**: Metadata only (NO images)
- **License**: Public Domain (CC0-1.0)
- **Purpose**: Fast testing and development
## What's Included

Each book has:

- `{book_id}.json` - Book metadata with page references
## Purpose

This dataset provides book-level metadata to group pages from `comix_v0_tiny_pages`.
Workflow:

1. Download `comix_v0_tiny_pages` (with images)
2. Download `comix_v0_tiny_books` (metadata only)
3. Use a WebDataset pipeline to group pages by book (see the sketch after this list)
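A minimal sketch of step 3, assuming the `webdataset` library and locally downloaded shards. Only the pages shard filename appears in the schema below; the books shard filename is an assumption:

```python
import json
from collections import defaultdict

import webdataset as wds

# Illustrative local shard paths -- adjust to wherever the tars were downloaded.
PAGES_TAR = "comix-v0-tiny-pages-00000.tar"   # name taken from the schema below
BOOKS_TAR = "comix-v0-tiny-books-00000.tar"   # assumed name for the books shard

# Index page samples by their tar key (e.g. "c00004_p000").
pages_by_id = {sample["__key__"]: sample for sample in wds.WebDataset(PAGES_TAR)}

# Group page samples under their book using the book-level metadata.
book_to_pages = defaultdict(list)
for sample in wds.WebDataset(BOOKS_TAR):
    book_data = json.loads(sample["json"])
    for page_ref in book_data["pages"]:
        page = pages_by_id.get(page_ref["page_id"])
        if page is not None:
            book_to_pages[book_data["book_id"]].append(page)

print({book_id: len(pages) for book_id, pages in book_to_pages.items()})
```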
## Quick Start
```python
from datasets import load_dataset
import json

# Load tiny books dataset
books = load_dataset(
    "emanuelevivoli/comix_v0_tiny_books",
    split="train",
    streaming=True,
)

# Iterate through books
for book in books:
    book_data = json.loads(book["json"])
    book_id = book_data["book_id"]
    total_pages = book_data["book_metadata"]["total_pages"]

    # Get page references
    for page_ref in book_data["pages"]:
        page_id = page_ref["page_id"]
        dataset = page_ref["dataset"]    # "comix_v0_tiny_pages"
        tar_file = page_ref["tar_file"]
        files = page_ref["files"]        # json, jpg, seg.npz (if available)

    print(f"Book {book_id}: {total_pages} pages")
```
## Dataset Structure

### Book JSON Schema (v0)
```json
{
  "book_id": "c00004",
  "split": "train",
  "book_metadata": {
    "series_title": null,
    "issue_number": null,
    "publication_date": null,
    "publisher": null,
    "total_pages": 68,
    "license_status": "Public Domain",
    "digital_source": "Digital Comic Museum"
  },
  "pages": [
    {
      "page_number": 0,
      "page_id": "c00004_p000",
      "dataset": "comix_v0_tiny_pages",
      "tar_file": "comix-v0-tiny-pages-00000.tar",
      "files": {
        "json": "c00004_p000.json",
        "jpg": "c00004_p000.jpg",
        "seg.npz": "c00004_p000.seg.npz"
      }
    },
    ...
  ],
  "segments": [],   // v1: story segments (to be added)
  "characters": []  // v1: character bank (to be added)
}
```
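For downstream code, this schema could be mapped onto typed records; a minimal sketch, with hypothetical class names that are not part of the dataset:

```python
from dataclasses import dataclass

# Hypothetical typed views of the v0 schema; names are illustrative only.
@dataclass
class PageRef:
    page_number: int
    page_id: str
    dataset: str
    tar_file: str
    files: dict[str, str]

@dataclass
class Book:
    book_id: str
    split: str
    book_metadata: dict   # v0: mostly null placeholder fields
    pages: list[PageRef]
    segments: list        # empty in v0
    characters: list      # empty in v0

def parse_book(raw: dict) -> Book:
    """Build a Book from one decoded {book_id}.json record."""
    return Book(
        book_id=raw["book_id"],
        split=raw["split"],
        book_metadata=raw["book_metadata"],
        pages=[PageRef(**p) for p in raw["pages"]],
        segments=raw.get("segments", []),
        characters=raw.get("characters", []),
    )
```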
## Data Splits

| Split | Books |
|---|---|
| Train | Unknown |
| Validation | Unknown |
| Test | Unknown |
| Total | Unknown |
## Use Cases

- ✅ **Testing**: Rapid iteration on book-level structure
- ✅ **Development**: Quick validation of page grouping
- ✅ **Debugging**: Small dataset for troubleshooting
- ✅ **Prototyping**: Fast experimentation
- ❌ **NOT for**: Training production models
## Version 0 (Primordial)

This is a primordial v0 with:

- ✅ Basic structure
- ✅ Page references
- ⏳ Empty `book_metadata` fields (to be populated in v1)
- ⏳ Empty `segments` (to be added in v1)
- ⏳ Empty `characters` (to be added in v1)
Future (v1+) will include:
- Bibliographic information (series, issue, publisher)
- Story segments with summaries
- Character banks with appearances
## Companion Dataset

- `comix_v0_tiny_pages`: Individual pages with images
## Full Dataset

For production use: `comix_v0_books` (~19K books)
## Example: Group Pages by Book
```python
from datasets import load_dataset
import json

# Load both datasets
pages = load_dataset("emanuelevivoli/comix_v0_tiny_pages", split="train")
books = load_dataset("emanuelevivoli/comix_v0_tiny_books", split="train")

# Create page index (the "json" field is a JSON string, so parse it first)
page_index = {json.loads(p["json"])["page_id"]: p for p in pages}

# Get a book and its pages
book = books[0]
book_data = json.loads(book["json"])

book_pages = []
for page_ref in book_data["pages"]:
    page_id = page_ref["page_id"]
    if page_id in page_index:
        book_pages.append(page_index[page_id])

print(f"Book {book_data['book_id']}: {len(book_pages)} pages loaded")
```
## Citation

```bibtex
@dataset{comix_v0_tiny_books_2025,
  title={Comic Books Tiny Dataset v0 - Books},
  author={Emanuele Vivoli},
  year={2025},
  publisher={Hugging Face},
  note={Testing dataset - metadata only},
  url={https://huggingface.co/datasets/emanuelevivoli/comix_v0_tiny_books}
}
```
## License

Public Domain (CC0-1.0) - Digital Comic Museum
## Updates

- **v0** (2025-11-18): Initial release
  - Unknown number of books
  - Primordial version with placeholder fields
  - For testing only