---
title: Attention Atlas
emoji: 🌍
colorFrom: pink
colorTo: blue
sdk: docker
pinned: false
license: mit
short_description: Explore Transformer attention patterns and detect bias
app_port: 8000
---


# Attention Atlas 🌍

An interactive application for visualizing and exploring **Transformer architectures** (BERT, GPT-2) in detail, with a focus on **multi-head attention patterns**, **head specialization**, **bias detection**, and **inter-sentence attention analysis**.

## Overview

Attention Atlas is an educational and analytical tool that allows you to visually explore every component of BERT and GPT-2 architectures:

- **Token Embeddings & Positional Encodings**
- **Q/K/V Projections** & **Scaled Dot-Product Attention** (see the sketch after this list)
- **Multi-Head Attention** (Interactive Maps & Flow)
- **Head Specialization Radar** (Syntax, Semantics, etc.)
- **Bias Detection** (Token-level & Attention interaction)
- **Token Influence Tree** (Hierarchical dependencies)
- **Inter-Sentence Attention (ISA)**
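
Most of these views derive from a single computation, scaled dot-product attention: `softmax(QK^T / sqrt(d_k)) V`. As a reference point, here is a minimal PyTorch sketch of one attention head; this is an illustration, not the app's internal code:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V for a single attention head.

    Q, K, V: (seq_len, d_k) tensors. A minimal reference sketch,
    not the implementation used inside Attention Atlas.
    """
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5  # (seq_len, seq_len)
    weights = F.softmax(scores, dim=-1)            # each row sums to 1
    return weights @ V, weights                    # output and attention map

# Example: random Q/K/V for a 5-token sequence with head dimension 64
Q, K, V = torch.randn(5, 64), torch.randn(5, 64), torch.randn(5, 64)
out, attn = scaled_dot_product_attention(Q, K, V)
print(attn.shape)  # torch.Size([5, 5]) -- the map the app visualizes
```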

## Features

- **Interactive Visualizations**: Powered by Plotly and D3.js.
- **Real-Time Inference**: Uses a PyTorch backend to run BERT/GPT-2 models on the fly (see the sketch after this list).
- **Bias Analysis**: Detects generalizations, stereotypes, and unfair language, and shows how the attention mechanism processes them.
- **Full Architecture Explorer**: Inspect every layer, head, and residual connection.
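
The real-time path follows standard Hugging Face usage: request attention weights at inference time, then visualize them. A minimal sketch, assuming `bert-base-uncased` as the checkpoint (the app may load others):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# bert-base-uncased is an assumption; the app may ship other checkpoints.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The doctor asked the nurse a question.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one (batch, num_heads, seq_len, seq_len) tensor per
# layer -- the raw material for every attention map in the app.
attn = torch.stack(outputs.attentions)  # (layers, batch, heads, seq, seq)
print(attn.shape)  # e.g. torch.Size([12, 1, 12, 10, 10]) for BERT-base
```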

## Technologies

- **Shiny for Python**
- **Transformers (Hugging Face)**
- **PyTorch**
- **Plotly**

## Usage

Enter a sentence in the input box, select a model (BERT or GPT-2), and click **Generate** or **Analyze Bias**.
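
Under the hood, views like Inter-Sentence Attention reduce to aggregating the attention tensor across the sentence boundary. A rough sketch of that idea, again assuming `bert-base-uncased`; the app's actual aggregation may differ:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

# Encode a sentence pair; token_type_ids mark sentence A (0) vs. B (1).
enc = tokenizer("The trophy didn't fit.", "It was too big.", return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

attn = torch.stack(out.attentions).squeeze(1)  # (layers, heads, seq, seq)
is_b = enc["token_type_ids"][0].bool()         # True for sentence-B tokens

# Rows of each map sum to 1, so this is the fraction of sentence A's
# attention mass that crosses into sentence B, per layer and head.
a_to_b = attn[:, :, ~is_b][:, :, :, is_b].sum(-1).mean(-1)  # (layers, heads)
print(a_to_b.shape, a_to_b.mean().item())
```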

---

*Part of a Master's thesis on Interpretable Large Language Models.*