SAEBench: A Comprehensive Benchmark for Sparse Autoencoders in Language Model Interpretability
Paper: arXiv:2503.09532
This repository contains models described in the paper SAEBench: A Comprehensive Benchmark for Sparse Autoencoders in Language Model Interpretability. SAEBench is a comprehensive evaluation suite that measures SAE performance across seven diverse metrics, spanning interpretability, feature disentanglement, and practical applications such as unlearning.
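
For orientation, the sketch below shows the standard ReLU sparse autoencoder design that benchmarks like SAEBench evaluate: an overcomplete encoder with a non-negative activation, and a linear decoder that reconstructs the model activations. This is a minimal illustration only; the class name, dimensions, initialization, and method names are assumptions and do not reflect this repository's checkpoint format or loading code.

```python
# Minimal sketch of a ReLU sparse autoencoder (illustrative, not this repo's API).
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        # Encoder maps model activations into an overcomplete latent space.
        self.W_enc = nn.Parameter(torch.empty(d_model, d_sae))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        # Decoder reconstructs the original activations from the sparse code.
        self.W_dec = nn.Parameter(torch.empty(d_sae, d_model))
        self.b_dec = nn.Parameter(torch.zeros(d_model))
        nn.init.kaiming_uniform_(self.W_enc)
        nn.init.kaiming_uniform_(self.W_dec)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # ReLU keeps only non-negative feature activations, inducing sparsity.
        return torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)

    def decode(self, f: torch.Tensor) -> torch.Tensor:
        return f @ self.W_dec + self.b_dec

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decode(self.encode(x))


# Example: reconstruct a batch of residual-stream activations (shapes are made up).
sae = SparseAutoencoder(d_model=768, d_sae=16384)
acts = torch.randn(4, 768)
recon = sae(acts)
print(recon.shape)  # torch.Size([4, 768])
```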