---
license: mit
language:
- en
tags:
- computer-vision
- face-recognition
- face-verification
- facial-expression
- image-classification
- biometrics
- deep-learning
- pytorch
- cnn
pipeline_tag: image-classification
---
# CNN for Facial Expression Recognition
> A CNN-based facial expression classifier trained to recognize **7 emotion categories**
> from face images with a clean, reproducible pipeline.
---
## Model Summary
| Property | Details |
|---|---|
| Architecture | CNN (custom) |
| Task | Image Classification |
| Classes | 7 emotions |
| Framework | PyTorch |
| Input size | 48 × 48 px (grayscale) |
| License | MIT |
---
## Emotion Classes
| # | Emotion |
|---|---|
| 0 | Angry |
| 1 | Disgust |
| 2 | Fear |
| 3 | Happy |
| 4 | Neutral |
| 5 | Sad |
| 6 | Surprise |
---
## How to Use
```python
import torch
import torch.nn.functional as F
from torchvision import transforms
from PIL import Image
# Load the TorchScript model
model = torch.jit.load("model.pt", map_location="cpu")
model.eval()

# Preprocessing: grayscale, resize to 48x48, normalize to [-1, 1]
transform = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((48, 48)),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

# Inference
image = Image.open("face.jpg")
tensor = transform(image).unsqueeze(0)  # add batch dimension
with torch.no_grad():
    probs = F.softmax(model(tensor), dim=1)[0]

predicted = EMOTIONS[probs.argmax().item()]
confidence = probs.max().item()
print(f"Prediction: {predicted} ({confidence:.0%})")
```
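The snippet above always reports the top class, even when the model is unsure. A minimal sketch of rejecting low-confidence predictions with a probability threshold (the `classify_with_threshold` helper and the 0.5 cutoff are illustrative assumptions, not part of this model's API):

```python
import torch
import torch.nn.functional as F

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

def classify_with_threshold(logits: torch.Tensor, threshold: float = 0.5):
    """Return (label, confidence); label is "Uncertain" below the threshold."""
    probs = F.softmax(logits, dim=1)[0]
    confidence = probs.max().item()
    label = EMOTIONS[probs.argmax().item()]
    if confidence < threshold:
        return "Uncertain", confidence
    return label, confidence

# Dummy logits standing in for model(tensor); a real call would replace this
logits = torch.tensor([[0.1, 0.0, 0.2, 3.0, 0.4, 0.1, 0.2]])
label, conf = classify_with_threshold(logits)
print(f"{label} ({conf:.0%})")
```

Tuning the threshold trades coverage for precision; a stricter cutoff suits downstream systems that act on the predicted emotion.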
---
## Training Data
- **Base dataset:** FER-2013 (Facial Expression Recognition)
- **Input format:** 48×48 grayscale face images
- **Classes:** 7 universal emotion categories
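FER-2013 is commonly distributed as a CSV where each row holds an emotion index (0–6) and 2,304 space-separated pixel intensities (48 × 48). A minimal parsing sketch (the `parse_fer2013` helper is a hypothetical example, not part of this repository):

```python
import csv
import io

import numpy as np

def parse_fer2013(csv_text: str):
    """Parse FER-2013-style CSV rows into (images, labels) arrays."""
    images, labels = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # 2304 space-separated intensities -> one 48x48 grayscale image
        pixels = np.array(row["pixels"].split(), dtype=np.uint8).reshape(48, 48)
        images.append(pixels)
        labels.append(int(row["emotion"]))
    return np.stack(images), np.array(labels)

# Tiny synthetic example: one all-zero image labeled Happy (class 3)
sample = "emotion,pixels,Usage\n3," + " ".join(["0"] * 2304) + ",Training\n"
X, y = parse_fer2013(sample)
print(X.shape, y)  # (1, 48, 48) [3]
```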
---
## Limitations
- Optimized for **frontal face** images
- Performance may degrade with partial occlusion, extreme lighting, or non-frontal poses
- Not intended for surveillance or identity recognition; expression classification only
---
## Related Resources
- [Live Demo Space](https://huggingface.co/spaces/martinbadrous/Facial-Recognition-Verification)
- [GitHub Repository](https://github.com/martinbadrous/Facial-Recognition)
---
## Author
**Martin Badrous**, Computer Vision & Deep Learning Engineer

[LinkedIn](https://www.linkedin.com/in/martinbadrous)
[GitHub](https://github.com/martinbadrous)
[Hugging Face](https://huggingface.co/martinbadrous)