---
license: mit
language:
  - en
tags:
  - computer-vision
  - face-recognition
  - face-verification
  - facial-expression
  - image-classification
  - biometrics
  - deep-learning
  - pytorch
  - CNN
pipeline_tag: image-classification
---

# πŸ™‚ CNN β€” Facial Expression Recognition

> CNN-based facial expression classifier trained to recognize **7 emotion categories**
> from face images with a clean, reproducible pipeline.

---

## πŸ“Œ Model Summary

| Property | Details |
|---|---|
| πŸ—οΈ Architecture | CNN (custom) |
| 🎯 Task | Image Classification |
| πŸ˜€ Classes | 7 emotions |
| βš™οΈ Framework | PyTorch |
| πŸ“ Input size | 48 Γ— 48 px (grayscale) |
| πŸ“œ License | MIT |
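
The architecture is described only as "CNN (custom)". As an illustration of a network with this input/output shape (48 × 48 grayscale in, 7 logits out), a minimal PyTorch sketch could look like the following — the layer sizes and depth here are assumptions, not the released weights:

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Hypothetical small CNN: 1x48x48 grayscale input -> 7 emotion logits."""

    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

logits = EmotionCNN()(torch.randn(1, 1, 48, 48))
print(logits.shape)  # torch.Size([1, 7])
```
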

---

## 🧠 Emotion Classes

| # | Emotion |
|---|---|
| 0 | 😠 Angry |
| 1 | 🀒 Disgust |
| 2 | 😨 Fear |
| 3 | πŸ˜„ Happy |
| 4 | 😐 Neutral |
| 5 | 😒 Sad |
| 6 | 😲 Surprise |
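
The table above as Python mappings, in both directions (the names `IDX_TO_EMOTION` and `EMOTION_TO_IDX` are illustrative, not part of the released code):

```python
# Index order matches the class table above.
IDX_TO_EMOTION = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]
EMOTION_TO_IDX = {name: i for i, name in enumerate(IDX_TO_EMOTION)}

print(EMOTION_TO_IDX["Happy"])  # 3
print(IDX_TO_EMOTION[6])        # Surprise
```
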

---

## πŸš€ How to Use

```python
import torch
import torch.nn.functional as F
from torchvision import transforms
from PIL import Image

# Load model
model = torch.jit.load("model.pt", map_location="cpu")
model.eval()

# Preprocessing
transform = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((48, 48)),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

# Inference
image = Image.open("face.jpg")
tensor = transform(image).unsqueeze(0)

with torch.no_grad():
    probs = F.softmax(model(tensor), dim=1)[0]

predicted = EMOTIONS[probs.argmax().item()]
confidence = probs.max().item()

print(f"Prediction: {predicted} ({confidence:.0%})")
```
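
If you want a ranked list rather than a single argmax, the same softmax output can be passed to `torch.topk`. A sketch using stand-in logits — in practice, replace them with `model(tensor)` from the snippet above:

```python
import torch
import torch.nn.functional as F

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

# Stand-in logits for illustration; use the real model output instead.
logits = torch.tensor([[1.2, -0.3, 0.1, 2.5, 0.8, -1.0, 0.4]])
probs = F.softmax(logits, dim=1)[0]

# Top-3 classes by probability.
top = torch.topk(probs, k=3)
for p, i in zip(top.values, top.indices):
    print(f"{EMOTIONS[int(i)]}: {p.item():.1%}")
```
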

---

## πŸ—‚οΈ Training Data

- **Base dataset:** FER-2013 (Facial Expression Recognition)
- **Input format:** 48Γ—48 grayscale face images
- **Classes:** 7 universal emotion categories
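
The common Kaggle distribution of FER-2013 ships as a CSV whose `pixels` column holds 2,304 space-separated grayscale values per face. Assuming that layout, a minimal parser for one such field might look like:

```python
import numpy as np

def parse_fer_pixels(pixels: str) -> np.ndarray:
    """Convert one space-separated `pixels` field into a 48x48 uint8 image."""
    values = np.array(pixels.split(), dtype=np.uint8)
    if values.size != 48 * 48:
        raise ValueError(f"expected {48 * 48} pixel values, got {values.size}")
    return values.reshape(48, 48)

# Example: a uniformly gray face encoded the way the CSV encodes it.
row = " ".join(["128"] * (48 * 48))
img = parse_fer_pixels(row)
print(img.shape, img.dtype)  # (48, 48) uint8
```
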

---

## ⚠️ Limitations

- Optimized for **frontal face** images
- Performance may degrade with partial occlusion, extreme lighting, or non-frontal poses
- Not intended for surveillance or identity recognition β€” expression classification only

---

## πŸ”— Related Resources

- πŸ€— [Live Demo Space](https://huggingface.co/spaces/martinbadrous/Facial-Recognition-Verification)
- πŸ’» [GitHub Repository](https://github.com/martinbadrous/Facial-Recognition)

---

## πŸ‘€ Author

**Martin Badrous** β€” Computer Vision & Deep Learning Engineer

[![LinkedIn](https://img.shields.io/badge/LinkedIn-%230077B5.svg?style=flat&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/martinbadrous)
[![GitHub](https://img.shields.io/badge/GitHub-181717.svg?style=flat&logo=github&logoColor=white)](https://github.com/martinbadrous)
[![HuggingFace](https://img.shields.io/badge/HuggingFace-FFD21E.svg?style=flat&logo=huggingface&logoColor=000)](https://huggingface.co/martinbadrous)