# Balanced Accuracy Metrics for 🤗 Evaluate
A minimal, production-ready set of balanced accuracy metrics for imbalanced vision/NLP tasks, implemented as plain Python scripts that you can load with `evaluate` from a dataset-type repo on the Hugging Face Hub.
## What this is
Three drop‑in metrics that focus on fair evaluation under class imbalance:
- `balanced_accuracy.py` — binary & multiclass balanced accuracy with options for `sample_weight`, `threshold="auto"` (Youden’s J), `ignore_index`, `adjusted`, `class_mask`, `return_per_class`, and `support_per_class`.
- `balanced_accuracy_multilabel.py` — multilabel balanced accuracy with `average={"macro","weighted","micro"}`, `threshold="auto"` (per label), `sample_weight`, `class_mask`, `ignore_index`, and `support_per_label`.
- `balanced_topk_accuracy.py` — balanced top‑k accuracy (macro top‑k recall across classes) with `sample_weight`, multiple `k` values, and class masking.

## Why it’s useful

- Works without packaging: just download the script and load via `evaluate`.
- Designed for long‑tail / imbalanced setups; supports masking, weighting, and chance‑adjustment.
- Clear error messages and `reason` fields for edge cases.
## Requirements & Installation
Install the minimal dependencies (Python ≥3.9 recommended):
```bash
pip install --upgrade pip
pip install evaluate datasets huggingface_hub numpy
```
> Windows note: You may see a symlink warning from `huggingface_hub`. It only affects caching and can be ignored. To silence it, set `HF_HUB_DISABLE_SYMLINKS_WARNING=1` or enable Windows Developer Mode.
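If you prefer to set the variable from Python rather than in your shell, it has to happen before `huggingface_hub` is imported; a minimal sketch:

```python
import os

# Must be set before huggingface_hub is imported to take effect.
os.environ["HF_HUB_DISABLE_SYMLINKS_WARNING"] = "1"

from huggingface_hub import hf_hub_download
```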
## Repository Layout
This project is intentionally lightweight—each metric is a single Python file living in a dataset‑type Hub repo:
```
balanced_accuracy.py
balanced_accuracy_multilabel.py
balanced_topk_accuracy.py
README.md
```
All three metrics are loadable from the Hub via `hf_hub_download(...)` + `evaluate.load(local_path, module_type="metric")` — no installation step required.
## Quickstart
### 0) Common helper
```python
from huggingface_hub import hf_hub_download
import evaluate

REPO = "OliverOnHF/balanced-accuracy"  # dataset-type repo
REV = "main"                           # or a specific commit hash for reproducibility

def load_metric_from_hf(filename):
    path = hf_hub_download(REPO, filename, repo_type="dataset", revision=REV)
    return evaluate.load(path, module_type="metric")
```
### 1) Binary & Multiclass — balanced_accuracy.py
```python
m = load_metric_from_hf("balanced_accuracy.py")

# Binary (labels)
print(m.compute(references=[0,1,1,0], predictions=[0,1,0,0], task="binary"))
# → {'balanced_accuracy': 0.75}

# Binary (probabilities) + automatic threshold search (Youden’s J)
print(m.compute(references=[0,1,1,0],
                predictions=[0.2, 0.9, 0.1, 0.3],
                task="binary", threshold="auto"))
# → {'balanced_accuracy': 0.75, 'optimal_threshold': 0.6}

# Multiclass (macro BA) with per-class recall & sample_weight
print(m.compute(references=[0,1,2,1],
                predictions=[0,2,2,1],
                task="multiclass", num_classes=3,
                return_per_class=True,
                sample_weight=[1, 0.5, 1, 1]))
# → {'balanced_accuracy': 0.888888..., 'per_class_recall': [1.0, 0.6666..., 1.0], 'support_per_class': [1.0, 1.5, 1.0]}

# Class masking (e.g., tail classes only)
print(m.compute(references=[0,1,2,1], predictions=[0,2,2,1],
                task="multiclass", num_classes=3,
                class_mask=[1,2], return_per_class=True))
```
**Key arguments**
task:"binary"or"multiclass"(default"binary")threshold: float in (0,1) or"auto"(binary probabilities only). If predictions are 0/1 labels, threshold is ignored.num_classes: for multiclass; inferred if not set (when labels are 0..K‑1).sample_weight: per‑sample weights; confusion counts become weighted sums.ignore_index: skip samples wherereference == ignore_index.adjusted: chance‑corrected BA (2*BA-1for binary;(BA-1/K)/(1-1/K)for multiclass).class_mask: compute macro‑BA over a subset of classes.return_per_class: also return per‑class recalls;support_per_classis count or weighted sum (ifsample_weightis provided).
### 2) Multilabel — balanced_accuracy_multilabel.py
```python
m = load_metric_from_hf("balanced_accuracy_multilabel.py")

y_true = [[1,0,1],
          [0,1,0]]
y_pred = [[1,0,0],
          [0,1,1]]

# Labels (0/1)
print(m.compute(references=y_true, predictions=y_pred, return_per_label=True))
# → {'balanced_accuracy': 0.6666..., 'per_label_ba': [1.0, 1.0, 0.0], 'support_per_label': [1, 1, 1]}

# Probabilities + per-label automatic threshold
probs = [[0.9,0.2,0.1],
         [0.1,0.8,0.7]]
print(m.compute(references=y_true, predictions=probs,
                from_probas=True, threshold="auto"))
# → {'balanced_accuracy': 0.8333..., 'per_label_thresholds': [0.5, 0.5, ~0.7]}

# Weighted / micro / class_mask
print(m.compute(references=y_true, predictions=y_pred,
                average="micro",
                sample_weight=[1.0, 0.5],
                class_mask=[0,2]))
```
**Key arguments**
- `from_probas`: if `True`, `predictions` are probabilities in `[0,1]`; else must be 0/1 labels.
- `threshold`: float in (0,1) or `"auto"` (when `from_probas=True`; `"auto"` selects a threshold per label).
- `average`: `"macro" | "weighted" | "micro"` (compared in the sketch after this list)
  - macro: average BA across labels;
  - weighted: weighted by each label’s positive support;
  - micro: pool TP/TN/FP/FN across all labels then compute BA.
- `class_mask`: evaluate only the specified label indices.
- `return_per_label`: additionally return `per_label_ba` and `support_per_label`.
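A minimal sketch comparing the three averaging modes on the quickstart example, assuming `average` accepts exactly the three strings listed above:

```python
m = load_metric_from_hf("balanced_accuracy_multilabel.py")

y_true = [[1,0,1], [0,1,0]]
y_pred = [[1,0,0], [0,1,1]]

# Per the definitions above:
#   macro    → mean of the per-label BAs
#   weighted → per-label BAs weighted by each label's positive support
#   micro    → TP/TN/FP/FN pooled over all labels, then BA of the pooled counts
for avg in ("macro", "weighted", "micro"):
    print(avg, m.compute(references=y_true, predictions=y_pred, average=avg))
```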
### 3) Balanced Top‑K Accuracy — balanced_topk_accuracy.py
```python
import numpy as np

m = load_metric_from_hf("balanced_topk_accuracy.py")

scores = np.array([[0.7,  0.2,  0.1 ],
                   [0.1,  0.3,  0.6 ],
                   [0.05, 0.05, 0.9 ],
                   [0.05, 0.9,  0.05]])
y_true = [0,1,2,1]

# top-1 (macro recall across classes)
print(m.compute(references=y_true, predictions=scores, k=1, return_per_class=True))
# → {'balanced_topk_accuracy': 0.8333..., 'per_class_recall': [1.0, 0.5, 1.0]}

# multiple k at once
print(m.compute(references=y_true, predictions=scores, k_list=[1,2], return_per_class=True))
# → {'balanced_topk_accuracy': {1: 0.8333..., 2: 1.0}, 'per_class_recall': {1: [...], 2: [...]}}

# with sample_weight and class_mask
print(m.compute(references=y_true, predictions=scores, k=1,
                sample_weight=[1,0.5,1,1], class_mask=[0,1,2]))
```
Intuition: For each class `c`, compute recall@k among samples of class `c`, then macro‑average across classes (optionally over a masked subset).
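The same computation in plain NumPy, as a rough reference for the intuition above (a sketch, not the metric script itself; it ignores `sample_weight` and `class_mask`):

```python
import numpy as np

def balanced_topk(references, scores, k):
    """Macro-averaged recall@k: per class, the fraction of its samples whose
    true label appears among the k highest-scored classes, then the class mean."""
    references = np.asarray(references)
    scores = np.asarray(scores)
    # Indices of the k highest-scoring classes for every sample.
    topk = np.argsort(scores, axis=1)[:, -k:]
    hit = np.any(topk == references[:, None], axis=1)
    # Recall@k restricted to the samples of each class, then macro-average.
    per_class = [hit[references == c].mean() for c in np.unique(references)]
    return float(np.mean(per_class))

print(balanced_topk([0, 1, 2, 1], [[0.7, 0.2, 0.1],
                                   [0.1, 0.3, 0.6],
                                   [0.05, 0.05, 0.9],
                                   [0.05, 0.9, 0.05]], k=1))  # 0.8333...
```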
## Expected Outputs (Sanity Check)
These should match what you get locally:
```
# Binary BA
{'balanced_accuracy': 0.75}

# Binary BA with auto threshold (probs: [0.2, 0.9, 0.1, 0.3])
{'balanced_accuracy': 0.75, 'optimal_threshold': 0.6}

# Multiclass BA with weights
{'balanced_accuracy': 0.888888..., 'per_class_recall': [1.0, 0.6666..., 1.0], 'support_per_class': [1.0, 1.5, 1.0]}

# Multilabel BA (labels)
{'balanced_accuracy': 0.6666..., 'per_label_ba': [1.0, 1.0, 0.0], 'support_per_label': [1, 1, 1]}

# Multilabel BA (probs + auto thresholds)
{'balanced_accuracy': 0.8333..., 'per_label_thresholds': [0.5, 0.5, ~0.7]}

# Balanced top-1 and top-2
{'balanced_topk_accuracy': 0.8333..., 'per_class_recall': [1.0, 0.5, 1.0]}
{'balanced_topk_accuracy': {1: 0.8333..., 2: 1.0}, 'per_class_recall': {1: [...], 2: [...]}}
```
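As a quick hand check of the first value: in the binary quickstart, both class‑0 samples are predicted correctly (recall 1.0) while only one of the two class‑1 samples is (recall 0.5), so BA = (1.0 + 0.5) / 2 = 0.75. The same arithmetic in plain NumPy, independent of the metric scripts:

```python
import numpy as np

refs = np.array([0, 1, 1, 0])
preds = np.array([0, 1, 0, 0])

# Recall of each class, then the unweighted mean = balanced accuracy.
recalls = [float((preds[refs == c] == c).mean()) for c in np.unique(refs)]
print(recalls, sum(recalls) / len(recalls))  # [1.0, 0.5] 0.75
```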
## API Reference (TL;DR)
### balanced_accuracy.py (binary/multiclass)

- Args: `predictions`, `references`, `task={"binary","multiclass"}`, `num_classes=None`, `adjusted=False`, `zero_division=0.0`, `threshold=None|"auto"` (binary prob), `ignore_index=None`, `return_per_class=False`, `class_mask=None`, `sample_weight=None`
- Returns: `{"balanced_accuracy": float}` + optional `{"optimal_threshold": float}` (binary, auto) + optional `{"per_class_recall": list[float], "support_per_class": list[int|float]}` (multiclass).
### balanced_accuracy_multilabel.py

- Args: `predictions`, `references`, `from_probas=False`, `threshold=0.5|"auto"`, `zero_division=0.0`, `average="macro"|"weighted"|"micro"`, `class_mask=None`, `ignore_index=None`, `return_per_label=False`, `sample_weight=None`
- Returns: `{"balanced_accuracy": float}` + optional `{"per_label_thresholds": list[float]}` (auto) + optional `{"per_label_ba": list[float], "support_per_label": list[int]}`.
### balanced_topk_accuracy.py

- Args: `predictions (N,K)`, `references (N)`, `k=1` or `k_list=[...]`, `class_mask=None`, `sample_weight=None`, `zero_division=0.0`, `return_per_class=False`
- Returns: `{"balanced_topk_accuracy": float | dict[int,float]}` + optional `{"per_class_recall": ...}`.
## Error Messages & Special Reasons
Friendly messages you may encounter by design:
- Length/shape: “Mismatch in the number of predictions …” / “Multilabel expects 2D arrays …”
- NaN/Inf: “`predictions` contains NaN/Inf.”
- Binary:
  - labels not in {0,1} → “For binary with label predictions, values must be 0/1.”
  - probs not in [0,1] → “For binary with probabilities, `predictions` must be in [0,1].”
- Multiclass: label out of range → “`predictions`/`references` must be in [0,K‑1] …”
- Multilabel: average invalid / prob or label value invalid / shape mismatch
- Top‑k: invalid `k` / label out of range
- Reasoned NaN (triggered in the example after this list):
  - `{"reason": "empty_after_ignore_index"}` — all samples were ignored
  - `{"reason": "empty_class_mask_after_filtering"}` — class/label mask removed everything
## Reproducible Smoke Test
Copy into `test_all.py` and run:
```python
from huggingface_hub import hf_hub_download
import evaluate, numpy as np

REPO, REV = "OliverOnHF/balanced-accuracy", "main"

def load(fname):
    return evaluate.load(hf_hub_download(REPO, fname, repo_type="dataset", revision=REV), module_type="metric")

# 1) binary & multiclass
mba = load("balanced_accuracy.py")
print(mba.compute(references=[0,1,1,0], predictions=[0,1,0,0], task="binary"))
print(mba.compute(references=[0,1,1,0], predictions=[0.2,0.9,0.1,0.3], task="binary", threshold="auto"))
print(mba.compute(references=[0,1,2,1], predictions=[0,2,2,1], task="multiclass", num_classes=3, return_per_class=True, sample_weight=[1,0.5,1,1]))

# 2) multilabel
mml = load("balanced_accuracy_multilabel.py")
y_true = [[1,0,1],[0,1,0]]; y_pred = [[1,0,0],[0,1,1]]; probs = [[0.9,0.2,0.1],[0.1,0.8,0.7]]
print(mml.compute(references=y_true, predictions=y_pred, return_per_label=True))
print(mml.compute(references=y_true, predictions=probs, from_probas=True, threshold="auto"))

# 3) top-k
mtk = load("balanced_topk_accuracy.py")
scores = np.array([[0.7,0.2,0.1],[0.1,0.3,0.6],[0.05,0.05,0.9],[0.05,0.9,0.05]]); y_true = [0,1,2,1]
print(mtk.compute(references=y_true, predictions=scores, k=1, return_per_class=True))
print(mtk.compute(references=y_true, predictions=scores, k_list=[1,2], return_per_class=True))
```
## Tips
- Pin `revision` to a commit hash for exact reproducibility (see the snippet below).
- `support_per_class` / `support_per_label` are counts when unweighted; if `sample_weight` is provided they become effective weight sums (floats).
- For extreme long‑tail distributions, combine `class_mask` with per‑class analysis for stable reporting.
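For example (the commit hash below is a placeholder, not a real revision of this repo; substitute a SHA from the repo’s commit history):

```python
from huggingface_hub import hf_hub_download
import evaluate

REPO = "OliverOnHF/balanced-accuracy"
REV = "0123456789abcdef0123456789abcdef01234567"  # placeholder: use a real commit SHA

path = hf_hub_download(REPO, "balanced_accuracy.py", repo_type="dataset", revision=REV)
metric = evaluate.load(path, module_type="metric")
```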
## License
MIT (suggested). If you need a specific license, add a root LICENSE file.