# Patient Cohort — 1,000,000 Rows

Inspired by the published Harvard-Merck study: Kartoun U, Aggarwal R, Beam AL, Pai JK, Chatterjee AK, Fitzgerald TP, Kohane IS, Shaw SY. Development of an Algorithm to Identify Patients with Physician-Documented Insomnia. Sci Rep. 2018 May 18;8(1):7862. doi: 10.1038/s41598-018-25312-z. PMID: 29777125; PMCID: PMC5959894.
## Overview
This package contains a large, practice-ready patient table with 1,000,000 rows designed for education, demonstrations, and method development in machine learning and analytics.
- Grain: one row = one patient.
- Target column: `insomnia_class` (0/1) for training and evaluation in binary classification tasks.
- Scope: demographics, utilization-style counts, comorbidity indicators, and the outcome label.
- Important: This dataset is not for clinical use.
This document focuses on how to use the dataset effectively. It does not describe its provenance or internal construction.
## Files
- `patients.csv` — the main table with 1,000,000 patients.
You may also produce your own analysis outputs (e.g., metrics CSVs, plots) alongside the data.
## Table schema
Each row is one patient. Columns cover demographics, counts, comorbidities, and the label.
| Column | Type | Description | Typical Values / Range | Missing? |
|---|---|---|---|---|
| `patient_id` | string | Unique ID | P000001 … | No |
| `age` | float | Age in years | ≥ 18 | Rare |
| `sex` | category | Administrative sex | Female, Male | No |
| `bmi` | float | Body mass index | ~10–70 | Yes |
| `ethnicity` | category | Race/ethnicity bucket | Caucasian, African American, Hispanic, Asian, Other, Unknown | Yes |
| `smoking_status` | category | Tobacco status | Current, Past, Never, Unknown | Yes |
| `emr_fact_count` | int | Total EMR facts (activity proxy) | 0–several thousand | Yes |
| `sleep_disorder_note_count` | int | Sleep-related note count | 0+ | No |
| `insomnia_billing_code_count` | int | Insomnia diagnosis code count | 0+ | No |
| `anx_depr_billing_code_count` | int | Anxiety/depression code count | 0+ | No |
| `psych_note_count` | int | Psychiatry-related note count | 0+ | No |
| `insomnia_rx_count` | int | Insomnia-related prescription count | 0+ | No |
| `joint_disorder_billing_code_count` | int | Musculoskeletal/joint code count | 0+ | No |
| Comorbidity flags | int (0/1) | Indicator variables (see list below) | {0, 1} | No |
| `insomnia_probability` | float | Scoring column in (0,1), useful for ranking | [0, 1] | No |
| `insomnia_class` | int (0/1) | Label for supervised learning | {0, 1} | No |
Comorbidity flag columns (0/1): `hypertension`, `lipid_metabolism_disorder`, `diabetes`, `gastrointestinal_disorder`, `anxiety_or_depression`, `psychiatric_disorder`, `pneumonia`, `obesity`, `congestive_heart_failure`, `coronary_artery_disease`, `asthma`, `copd`, `cerebrovascular_disease`, `afib_or_flutter`, `cancer`, `peripheral_vascular_disease`, `osteoporosis`, `ckd_or_esrd`, `renal_failure`.
Some columns intentionally include missing values to mirror common data challenges.
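To confirm that the flags behave as documented (strictly 0/1) and to see how common each condition is, the flag columns can be read on their own. A minimal sketch; the column list simply repeats the schema above:

```python
import pandas as pd

flag_cols = [
    "hypertension", "lipid_metabolism_disorder", "diabetes", "gastrointestinal_disorder",
    "anxiety_or_depression", "psychiatric_disorder", "pneumonia", "obesity",
    "congestive_heart_failure", "coronary_artery_disease", "asthma", "copd",
    "cerebrovascular_disease", "afib_or_flutter", "cancer", "peripheral_vascular_disease",
    "osteoporosis", "ckd_or_esrd", "renal_failure",
]

# Load only the flag columns to keep memory low.
flags = pd.read_csv("patients.csv", usecols=flag_cols, dtype="uint8")

assert flags.isin([0, 1]).all().all()             # every flag is strictly 0/1
print(flags.mean().sort_values(ascending=False))  # prevalence of each comorbidity
```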
## Loading at scale (Python / pandas)
```python
import pandas as pd

dtypes = {
    "patient_id": "string",
    "age": "float32",
    "bmi": "float32",
    "emr_fact_count": "Int32",  # nullable integer: this column can contain missing values
    "sleep_disorder_note_count": "int16",
    "insomnia_billing_code_count": "int16",
    "anx_depr_billing_code_count": "int16",
    "psych_note_count": "int16",
    "insomnia_rx_count": "int16",
    "joint_disorder_billing_code_count": "int16",
    # binary flags as compact integers
    "hypertension": "uint8", "lipid_metabolism_disorder": "uint8", "diabetes": "uint8",
    "gastrointestinal_disorder": "uint8", "anxiety_or_depression": "uint8",
    "psychiatric_disorder": "uint8", "pneumonia": "uint8", "obesity": "uint8",
    "congestive_heart_failure": "uint8", "coronary_artery_disease": "uint8",
    "asthma": "uint8", "copd": "uint8", "cerebrovascular_disease": "uint8",
    "afib_or_flutter": "uint8", "cancer": "uint8", "peripheral_vascular_disease": "uint8",
    "osteoporosis": "uint8", "ckd_or_esrd": "uint8", "renal_failure": "uint8",
    "insomnia_probability": "float32",
    "insomnia_class": "uint8",
}
cat_cols = ["sex", "ethnicity", "smoking_status"]

# Option A: read all at once
df = pd.read_csv("patients.csv", dtype=dtypes)
for c in cat_cols:
    df[c] = df[c].astype("category")

# Option B: chunked reading to limit peak memory during parsing
chunks = pd.read_csv("patients.csv", dtype=dtypes, chunksize=100_000)
df = pd.concat(chunks, ignore_index=True)
# Convert to category after concatenation so each column ends up with one consistent
# category set (per-chunk conversion can fall back to object dtype on concat).
for c in cat_cols:
    df[c] = df[c].astype("category")
```
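If the table will be reloaded many times, caching it in a columnar format avoids re-parsing the CSV. A minimal sketch, assuming `pyarrow` (or `fastparquet`) is installed and using an illustrative `patients.parquet` path:

```python
# One-time conversion; dtypes (including categories) are preserved on reload.
df.to_parquet("patients.parquet", index=False)

# Later sessions: full reload, or only the columns needed for a given analysis.
df = pd.read_parquet("patients.parquet")
subset = pd.read_parquet("patients.parquet", columns=["age", "sex", "insomnia_class"])
```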
## Quick EDA (Exploratory Data Analysis)
```python
print(df.shape)
print("Label prevalence:", df["insomnia_class"].mean())

# Missingness overview
missing = df.isna().mean().sort_values(ascending=False).head(20)
print(missing)

# Example correlation matrix for numeric counts
key = ["sleep_disorder_note_count", "insomnia_billing_code_count", "anx_depr_billing_code_count",
       "psych_note_count", "insomnia_rx_count", "joint_disorder_billing_code_count", "emr_fact_count"]
print(df[key].corr(method="spearman"))
```
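Two further cuts that are often informative before modeling; a sketch using columns from the schema above (`observed=True` just skips empty category levels):

```python
# Label prevalence by smoking status (one of the categorical slices)
print(df.groupby("smoking_status", observed=True)["insomnia_class"].mean())

# How skewed is the activity proxy? Useful before choosing clipping/scaling.
print(df["emr_fact_count"].describe(percentiles=[0.5, 0.9, 0.99]))
```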
## Recommended preprocessing (for ML)
- Imputation: median (numeric), most-frequent (categorical).
- Encoding: one-hot for `sex`, `ethnicity`, `smoking_status`.
- Scaling: standardize numeric features for linear models (trees are scale-agnostic).
- Outliers: consider clipping or robust scalers for skewed counts (see the sketch after this list).
- Splitting: stratify by `insomnia_class` for train/test.
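For the outlier point, one simple option is clipping skewed counts at a high quantile before scaling. A sketch; the chosen columns and the 99th-percentile cutoff are illustrative, not baked into the dataset:

```python
# Cap heavy right tails so a handful of extreme utilizers don't dominate standardization.
count_cols = ["sleep_disorder_note_count", "insomnia_rx_count", "psych_note_count"]
for c in count_cols:
    df[c] = df[c].clip(upper=int(df[c].quantile(0.99)))
```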
Example (scikit-learn pipeline):
```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

y = df["insomnia_class"].astype(int)
# Drop the identifier and the precomputed score so the model learns from raw features only.
X = df.drop(columns=["insomnia_class", "insomnia_probability", "patient_id"])

cat_cols = X.select_dtypes(include=["category", "object"]).columns.tolist()
num_cols = X.select_dtypes(include=[np.number]).columns.tolist()

preprocess = ColumnTransformer([
    ("num", Pipeline([("imp", SimpleImputer(strategy="median")),
                      ("scaler", StandardScaler())]), num_cols),
    ("cat", Pipeline([("imp", SimpleImputer(strategy="most_frequent")),
                      ("ohe", OneHotEncoder(handle_unknown="ignore"))]), cat_cols),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, stratify=y, random_state=7)
clf = Pipeline([("prep", preprocess), ("lr", LogisticRegression(max_iter=500))]).fit(X_train, y_train)
print("ROC-AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```
Slices & reporting
When reporting results, consider per-group summaries for fairness diagnostics:
```python
for col in ["sex", "ethnicity"]:
    print("\nSlice:", col)
    print(df.groupby(col)["insomnia_class"].mean())
```
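Beyond prevalence, per-group discrimination can be checked with the fitted pipeline from the previous section. A sketch; groups with very few positives will have noisy AUC estimates:

```python
from sklearn.metrics import roc_auc_score

for col in ["sex", "ethnicity"]:
    for group, idx in X_test.groupby(col, observed=True).groups.items():
        y_g = y_test.loc[idx]
        if y_g.nunique() == 2:  # AUC is undefined when a group has only one class
            auc = roc_auc_score(y_g, clf.predict_proba(X_test.loc[idx])[:, 1])
            print(f"{col}={group}: AUC={auc:.3f} (n={len(idx)})")
```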
## Caveats
- This dataset is designed for education and reproducible demonstrations.
- It must not be used for clinical decision-making or any activity involving real individuals.
- Demographic proportions and feature distributions are provided as-is and should not be interpreted as representing any specific population or institution.
## Citation
Kartoun, U. (2025). Patient Cohort — 1,000,000 Rows (Synthetic Dataset). DBbun LLC. Inspired by: Kartoun et al., Sci Rep (2018).