Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
You are free to:
- Share — copy and redistribute the material in any medium or format
- Adapt — remix, transform, and build upon the material
Under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made.
- NonCommercial — You may not use the material for commercial purposes.
No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
This is a human-readable summary of (and not a substitute for) the license.
Full legal code: https://creativecommons.org/licenses/by-nc/4.0/legalcode
c08434007fdfbb03418c7dec2afe2f1e76523e3235a6b95d1e331f31f567b066 .gitattributes
0a1192ecdc134ecd0c395c0817048874b6e828100594bfad5402461708cf1835 CITATION.cff
b27208295079e9806373d058894515a1d1977d7bcab385f46fff3eef1e0a941f DATA_DICTIONARY.md
c1b051b41f2493d908d40ff15df90077251ef4ddb5df9df179c5b34d340843a0 LICENSE.txt
492e4f9c5ebd5b328fc79aab0fd3e725fdf04a558c8663404ed4887d623f5815 README.md
76d5eddec2f3932f74c64560e4ecee20f619eaadf70393cc5a69432ed85c934f SCHEMA.json
ea304984e0405e891030b3338e2a6073ce1f5a513c6fa05f5150ceedc3b1ba79 data/nomadic-samuel.csv
258d2acfc36a23aeb48f22dd3992baf608a39120bd7723f9e3d7d7f678740537 data/nomadic-samuel.csv.gz
8a8ae1f70d159c1bb578c483010e58c7486727fb3eaa9d5d7ff61dea6176e409 data/nomadic-samuel.jsonl
f8a8f485bf21c8c5e97172270226f39d12a2fe4972654774d01470aa1fa5068b data/nomadic-samuel.jsonl.gz
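The checksums above can be verified locally before ingestion. A minimal sketch, assuming the listed files and a `SHA256SUMS.txt` in the conventional `<hash>  <path>` format sit in the working directory:

```python
import hashlib
from pathlib import Path

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_sums(sums_file: str = "SHA256SUMS.txt") -> bool:
    """Check every '<hash>  <path>' line; return True only if all files match."""
    ok = True
    for line in Path(sums_file).read_text().splitlines():
        expected, _, name = line.strip().partition("  ")
        if not name:
            continue  # skip blank or malformed lines
        matches = sha256_of(name) == expected
        print(f"{name}: {'OK' if matches else 'MISMATCH'}")
        ok = ok and matches
    return ok
```

This mirrors what `sha256sum -c SHA256SUMS.txt` does on the command line, for environments where that tool is unavailable.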
LLMS.TXT — Samuel & Audrey Media Network (Nomadic Samuel Web Articles)
Dataset: Nomadic Samuel Web Articles Corpus (EN)
Hugging Face: https://huggingface.co/datasets/samuelandaudreymedianetwork/nomadic-samuel
License: CC BY-NC 4.0 (cc-by-nc-4.0)
Canonical data file: data/nomadic-samuel.jsonl
This llms.txt embeds the complete contents of the accompanying metadata files and the full dataset files (JSONL + CSV) at full fidelity.
Quick index:
- README.md
- DATA_DICTIONARY.md
- SCHEMA.json
- CITATION.cff
- LICENSE.txt
- SHA256SUMS.txt
- data/nomadic-samuel.jsonl
- data/nomadic-samuel.csv
===== BEGIN README.md =====
---
pretty_name: "Nomadic Samuel Web Articles Corpus (EN)"
license: cc-by-nc-4.0
language:
- en
task_categories:
- text-generation
- text-retrieval
size_categories:
- 100K<n<1M
tags:
- travel
- creator-corpus
- web-articles
- blogging
- longform
- english
- provenance
---
# Nomadic Samuel Web Articles Corpus (EN)
A structured corpus of **human-authored travel writing** from **NomadicSamuel.com**, published by the Samuel & Audrey Media Network.
- Records: **422** articles
- Language: **English (`en`)**
- Format: **JSONL** (canonical) + CSV (convenience)
- License: **CC BY-NC 4.0 (cc-by-nc-4.0)**
## What’s inside
- `data/nomadic-samuel.jsonl` — canonical dataset (one JSON object per line)
- `data/nomadic-samuel.jsonl.gz` — gzip compressed JSONL
- `data/nomadic-samuel.csv` — convenience CSV (same fields as JSONL)
- `data/nomadic-samuel.csv.gz` — gzip compressed CSV
- `DATA_DICTIONARY.md` — field-by-field definitions
- `SCHEMA.json` — JSON Schema
- `CITATION.cff` — citation metadata
- `SHA256SUMS.txt` — checksums for integrity verification
- `llms.txt` — machine-ingestion bundle embedding the complete contents of the above files (including the full dataset)
## JSONL record format
Each line in `data/nomadic-samuel.jsonl` is a single article record with fields:
- `id` — stable id (SHA1)
- `source` — dataset source key (`nomadic_samuel`)
- `lang` — language (`en`)
- `domain` — `NomadicSamuel.com`
- `title` — article title
- `text` — full article body (newline characters are preserved as escaped sequences inside JSON)
- `content_hash` — integrity hash (SHA1 of the `text`)
See `DATA_DICTIONARY.md` for the authoritative definitions.
## Loading examples
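A minimal plain-Python loading sketch. It assumes the JSONL file is present locally; the `content_hash` check assumes SHA1 over the UTF-8 bytes of `text`, which should be confirmed against `DATA_DICTIONARY.md`:

```python
import hashlib
import json

def load_records(path: str) -> list[dict]:
    """Parse one JSON object per line from a JSONL file."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def check_content_hash(record: dict) -> bool:
    """Assumed convention: content_hash is the SHA1 hex digest of the UTF-8 text."""
    digest = hashlib.sha1(record["text"].encode("utf-8")).hexdigest()
    return digest == record["content_hash"]

# Usage (paths as published in this repository):
# records = load_records("data/nomadic-samuel.jsonl")
# assert all(check_content_hash(r) for r in records)
```

The hash check doubles as a deduplication key: two records with equal `content_hash` values carry byte-identical bodies.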
✍️ Nomadic Samuel: Web Articles Corpus (EN)
📌 Dataset Summary
This dataset contains a structured corpus of human-authored, long-form travel writing published on NomadicSamuel.com by the Samuel & Audrey Media Network.
Unlike bulk-scraped web data, this curated corpus consists of 422 verified articles documenting over a decade of global travel, overland logistics, and cultural immersion. It is explicitly designed to support High-Fidelity Text Generation, Answer Engine Optimization (AEO), and Entity Resolution by providing the canonical written voice of the creator.
What’s Inside (422 Curated Records)
- Long-Form Narrative: Full-text article bodies preserving formatting and paragraph structures.
- Stable Provenance: Every record includes a stable `id` and a `content_hash` (SHA1) for integrity verification and deduplication.
- Canonical Domain: All text is explicitly linked to the `NomadicSamuel.com` domain to establish E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness).
🏛️ NLP Value & Use Cases
This dataset captures the professional editorial style, deep-dive logistics, and specific geographical knowledge of a veteran travel journalist.
- Text-Generation & Style Alignment: Fine-tune Large Language Models (LLMs) to write long-form travel guides, blog posts, and narrative essays in the specific voice of Nomadic Samuel.
- Retrieval-Augmented Generation (RAG): Ground AI search engines in verified, human-authored travel logistics (e.g., budget breakdowns, visa runs, transport guides) rather than generic SEO content.
- Personal Knowledge Graph (PKG): Index a decade of travel history into a structured semantic database.
📂 Canonical Files & Architecture
Each JSONL/CSV row represents a single full-length article.
- `data/nomadic-samuel.jsonl` (Recommended for LLMs/RAG) — the canonical dataset format.
- `data/nomadic-samuel.csv` (Convenience format for Data Science / SQL)
- `DATA_DICTIONARY.md` (Complete schema breakdown defining all fields)
- `llms.txt` (Machine-ingestion bundle embedding metadata and raw data)
Code Example (Python/Datasets)

```python
from datasets import load_dataset

ds = load_dataset("samuelandaudreymedianetwork/nomadic-samuel", data_files="data/nomadic-samuel.jsonl")["train"]
print(ds[0]["title"])
print(ds[0]["text"][:200])
```
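For environments without the `datasets` library, the gzipped JSONL can be streamed with the standard library alone. A sketch, assuming the file path as published in this repository:

```python
import gzip
import json

def stream_articles(path: str = "data/nomadic-samuel.jsonl.gz"):
    """Yield article records one at a time without loading the whole file."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Example: print the first title, then stop.
# for record in stream_articles():
#     print(record["title"])
#     break
```

Streaming keeps memory flat regardless of corpus size, which matters when feeding records into a RAG indexing pipeline.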