Update README.md
# RAE: Diffusion Transformers with Representation Autoencoders

This repository contains the official PyTorch checkpoints for Representation Autoencoders.

Representation Autoencoders (RAE) are a class of autoencoders that pair pretrained, frozen representation encoders such as DINOv2 and SigLIP2 with trained ViT decoders. RAE can be used in a two-stage training pipeline for high-fidelity image synthesis, where a Stage 2 diffusion model is trained on the latent space of a pretrained RAE to generate images.
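As a rough illustration of that two-stage structure, here is a minimal, self-contained PyTorch sketch. The module names, dimensions, and toy patchify/projection standing in for a real DINOv2 or SigLIP2 backbone are all hypothetical and only mirror the description above; the actual implementation is in the linked code repository.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a frozen representation encoder (e.g. DINOv2 / SigLIP2).
# A real RAE would load a pretrained ViT here; this toy module only mimics the
# interface: images in, patch-level latent tokens out, with no trainable updates.
class FrozenRepresentationEncoder(nn.Module):
    def __init__(self, patch=16, dim=768):
        super().__init__()
        self.patch = patch
        self.proj = nn.Linear(3 * patch * patch, dim)
        for p in self.parameters():
            p.requires_grad = False  # encoder stays frozen throughout training

    def forward(self, images):
        # images: (B, 3, H, W) -> latent tokens: (B, N, dim)
        b, c, h, w = images.shape
        p = self.patch
        tokens = images.unfold(2, p, p).unfold(3, p, p)       # (B, 3, H/p, W/p, p, p)
        tokens = tokens.contiguous().view(b, c, -1, p * p)    # (B, 3, N, p*p)
        tokens = tokens.permute(0, 2, 1, 3).reshape(b, -1, c * p * p)
        return self.proj(tokens)

# Hypothetical trained ViT decoder that maps encoder latents back to pixels.
class ViTDecoder(nn.Module):
    def __init__(self, patch=16, dim=768, depth=4, heads=12):
        super().__init__()
        self.patch = patch
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.to_pixels = nn.Linear(dim, 3 * patch * patch)

    def forward(self, latents):
        b, n, _ = latents.shape
        p, side = self.patch, int(n ** 0.5)
        pixels = self.to_pixels(self.blocks(latents))          # (B, N, 3*p*p)
        pixels = pixels.view(b, side, side, 3, p, p)
        return pixels.permute(0, 3, 1, 4, 2, 5).reshape(b, 3, side * p, side * p)

# Stage 1 (autoencoding): only the decoder is optimized; the encoder stays frozen.
encoder, decoder = FrozenRepresentationEncoder(), ViTDecoder()
optimizer = torch.optim.AdamW(decoder.parameters(), lr=1e-4)

images = torch.randn(2, 3, 256, 256)                           # dummy batch
latents = encoder(images)                                      # RAE latent space
recon = decoder(latents)
loss = nn.functional.mse_loss(recon, images)                   # real training adds further loss terms
loss.backward()
optimizer.step()

# Stage 2 (generation): a diffusion transformer is trained on `latents` rather
# than on pixels, and samples drawn from it are mapped to images via `decoder`.
```
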
Website: https://rae-dit.github.io/

Code: https://github.com/bytetriper/RAE

Paper: https://huggingface.co/papers/2510.11690