Dataset Viewer (auto-converted to Parquet)

Sample rows from the preview (the full preview lists many more):

| first_name | last_name | gender | country_code | tokens | ner_tags |
| --- | --- | --- | --- | --- | --- |
| Avory | Gentles | null | CA | ["Gentles", ",", "Avory"] | [2, 4, 0] |
| Madhusudhan | Rao | M | CA | ["Madhusudhan", "Rao"] | [0, 2] |
| Neesha | Aujla | F | CA | ["Aujla", "Neesha"] | [2, 0] |
| Zeeshan Khan | Jc | M | CA | ["Jc", "Zeeshan", "Khan"] | [2, 0, 1] |
| Nitturi Narender | Nari | M | CA | ["Nitturi", "Narender", "Nari"] | [0, 1, 2] |
| Marouska | Marte Morillo | F | CA | ["Marte", "Morillo", "Marouska"] | [2, 3, 0] |
| Marie-Josée | Boucher | F | CA | ["Marie-Josée", "Boucher"] | [0, 2] |

Dataset Card for Person Full Name NER Parsing

This dataset contains 3,383,944 curated and augmented person names, designed specifically for training Token Classification (NER) models. The primary task is to parse a full name string into its FirstName and LastName components, correctly handling multi-word names and different ordering formats.

Dataset Details

Dataset Description

This dataset is built to train robust models that can understand and segment human names. To improve real-world performance, the data has been augmented to include examples in three distinct formats:

  • FirstName LastName (50% of the data)

  • LastName, FirstName (25% of the data)

  • LastName FirstName (25% of the data, the most ambiguous case)

  • Curated by: ele-sage

  • Language(s) (NLP): English, French

  • License: MIT

Dataset Sources

Uses

Direct Use

This dataset is intended for training token-classification models for Named Entity Recognition on person names. The primary goal is to create a "name splitter" or "name parser" that can handle various formats. It can be loaded directly using the 🤗 Datasets library:

```python
from datasets import load_dataset

# Load the dataset directly from the Hub
dataset = load_dataset("ele-sage/person-full-name-ner-parsing")

print(dataset["train"][0])
# Example output (the dataset is shuffled, so your first record may differ):
# {'first_name': 'Andrew', 'last_name': 'Burt', 'gender': 'M', 'country_code': 'CA', 'tokens': ['Burt', 'Andrew'], 'ner_tags': [2, 0]}
```
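
Because the tokens column is already split into words, most subword tokenizers will break individual words into several pieces, so the ner_tags need to be realigned before fine-tuning. Below is a minimal, library-agnostic sketch of that standard alignment step; the `word_ids` argument mimics what 🤗 tokenizers return from `encoding.word_ids()`, and the helper name is illustrative, not part of the dataset:

```python
# Sketch of the standard label-alignment step for subword tokenizers.
# `word_ids` is None for special tokens, else the index of the source word.
def align_labels(word_ids, ner_tags):
    labels = []
    prev = None
    for wid in word_ids:
        if wid is None:            # special token ([CLS], [SEP]): ignore in loss
            labels.append(-100)
        elif wid != prev:          # first subword of a word keeps its tag
            labels.append(ner_tags[wid])
        else:                      # remaining subwords are masked out
            labels.append(-100)
        prev = wid
    return labels

# "Gentles , Avory" tagged [2, 4, 0]; suppose "Gentles" splits into 2 subwords:
print(align_labels([None, 0, 0, 1, 2, None], [2, 4, 0]))
# [-100, 2, -100, 4, 0, -100]
```

Only the first subword of each word keeps its tag; the rest receive -100, which PyTorch's cross-entropy loss ignores by default.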

Out-of-Scope Use

  • General NER: This dataset is highly specialized and should not be used to train models for general-purpose NER (e.g., finding locations, organizations, dates).
  • Company Name Parsing: The dataset does not contain company or organization names. The cleaning process was explicitly designed to remove them.

Dataset Structure

The dataset is provided in a train / test split (90/10). Each record is a JSON object with the following fields:

  • first_name: (string) The original first name from the source data.
  • last_name: (string) The original last name from the source data.
  • gender: (string) The gender associated with the name in the source data (M, F, or null).
  • country_code: (string) The country code from the source data (always CA).
  • tokens: (list of strings) The full name, pre-tokenized into a list of words, potentially reordered and with a comma.
  • ner_tags: (list of integers) The corresponding IOB-style tags for each token in the tokens list. The mapping is as follows:
    • 0: B-FNAME (Beginning of a First Name)
    • 1: I-FNAME (Inside a First Name)
    • 2: B-LNAME (Beginning of a Last Name)
    • 3: I-LNAME (Inside a Last Name)
    • 4: O (Outside, for non-entity tokens like commas)
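
Given that mapping, the structured name can be recovered from a tagged example with a simple loop. The helper below is an illustrative sketch, not code shipped with the dataset:

```python
# Reassemble first/last name spans from a tokens/ner_tags pair, using the
# tag mapping above (0/1 = first name, 2/3 = last name, 4 = O).
def decode_name(tokens, ner_tags):
    first, last = [], []
    for token, tag in zip(tokens, ner_tags):
        if tag in (0, 1):      # B-FNAME / I-FNAME
            first.append(token)
        elif tag in (2, 3):    # B-LNAME / I-LNAME
            last.append(token)
        # tag 4 (O) covers separator tokens such as commas; skip them
    return " ".join(first), " ".join(last)

# The same function handles all three ordering formats:
print(decode_name(["Gentles", ",", "Avory"], [2, 4, 0]))   # ('Avory', 'Gentles')
print(decode_name(["Madhusudhan", "Rao"], [0, 2]))         # ('Madhusudhan', 'Rao')
```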

Dataset Creation

Curation Rationale

The primary motivation was to create a large-scale, high-quality dataset for the specific task of parsing human names. Standard NER datasets are too general, and simple rule-based parsers fail on ambiguous or multi-word names. This dataset was created to train a robust AI model that could overcome these limitations by learning from a diverse set of augmented examples.

Source Data

The dataset originates from a single source: a large CSV file of over 3.4 million Canadian names, originally from a Facebook data leak. Source Link.

Data Collection and Processing

The creation of this dataset involved a multi-stage curation pipeline to ensure high quality and robustness:

  1. AI-Powered Cleaning: The raw 3.4M row CSV was first processed using the ele-sage/distilbert-base-uncased-name-classifier model. Any entry that the model did not classify as a "Person" was discarded. This was a critical step to remove a significant amount of non-name text and noise (e.g., company names, phrases, spam).

  2. Rule-Based Filtering: The AI-cleaned data was further filtered to:

    • Remove entries where the first and last names were identical.
    • Remove entries containing characters outside a pre-defined set of English, French, and common punctuation characters.
  3. Data Augmentation & Tagging: The final, cleaned dataset of 3,383,944 names was programmatically tagged and augmented to create three different name formats:

    • 50% was formatted as FirstName LastName.
    • 25% was formatted as LastName, FirstName (with the comma token tagged as O).
    • 25% was formatted as the ambiguous LastName FirstName (without a comma).
  4. Shuffling: The final combined dataset was then shuffled.
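
The exact curation script is not published; under the proportions above, the tagging-and-augmentation step (3) could be sketched roughly as follows (the `augment` helper is hypothetical):

```python
import random

# Illustrative sketch of the tagging/augmentation step. Tag ids follow the
# card's mapping: 0 = B-FNAME, 1 = I-FNAME, 2 = B-LNAME, 3 = I-LNAME, 4 = O.
def augment(first_name, last_name, rng=random):
    f_tokens = first_name.split()
    l_tokens = last_name.split()
    f_tags = [0] + [1] * (len(f_tokens) - 1)   # B-FNAME, then I-FNAME
    l_tags = [2] + [3] * (len(l_tokens) - 1)   # B-LNAME, then I-LNAME

    r = rng.random()
    if r < 0.5:    # 50%: "FirstName LastName"
        return f_tokens + l_tokens, f_tags + l_tags
    if r < 0.75:   # 25%: "LastName, FirstName" (comma tagged O)
        return l_tokens + [","] + f_tokens, l_tags + [4] + f_tags
    # 25%: ambiguous "LastName FirstName", no comma
    return l_tokens + f_tokens, l_tags + f_tags
```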

Who are the source data producers?

The original data was created by users of the social media platform Facebook. The data was later made public through a data leak.

Annotations

Annotation Process

The annotation for this dataset was a two-stage process:

  1. Primary Human Annotation: The ground-truth labels were originally provided by Facebook users themselves when they entered their names into separate, structured first_name and last_name fields in a registration form.

  2. Secondary Programmatic Transformation: A script was then used to process this structured data. It converted the first_name and last_name fields into a sequential format suitable for token classification, generating the tokens list and the corresponding ner_tags (using the IOB scheme) for each of the three augmented formats: FirstName LastName; LastName, FirstName; and LastName FirstName.

Who are the annotators?

The primary, ground-truth annotation was performed by the original data producers (the users of the social media platform).

The secondary annotation format (the ner_tags) was generated programmatically by the data curation scripts developed by the dataset author (ele-sage).

Bias, Risks, and Limitations

  • Geographic & Cultural Bias: The dataset is heavily biased towards North American (specifically Canadian) and European name structures. It will not perform well for training models on names from other cultural contexts (e.g., East Asian, Icelandic) where naming conventions differ.
  • Source Data Noise: As the data originates from a data leak of user-provided information, it contains inherent noise, typos, and non-name entries. While a rigorous cleaning process was applied, some noise may persist.
  • Limited Scope: The ner_tags are strictly limited to first and last names. The dataset provides no information on titles (Dr., Mr.), suffixes (Jr., III), or middle names.

Dataset Card Authors

ele-sage

Dataset Card Contact

ele-sage on Hugging Face
