SAG-ViT: A Scale-Aware, High-Fidelity Patching Approach with Graph Attention for Vision Transformers • arXiv:2411.09420 • Published Nov 14, 2024