---
pipeline_tag: image-to-video
license: mit
datasets:
- openai/MMMLU
language:
- am
metrics:
- accuracy
base_model:
- black-forest-labs/FLUX.1-dev
new_version: black-forest-labs/FLUX.1-dev
library_name: adapter-transformers
tags:
- chemistry
---

# AnimateLCM-I2V for Fast Image-Conditioned Video Generation in 4 Steps

AnimateLCM-I2V is a latent image-to-video consistency model fine-tuned with [AnimateLCM](https://huggingface.co/wangfuyun/AnimateLCM), following the strategy proposed in the [AnimateLCM paper](https://arxiv.org/abs/2402.00769), without requiring teacher models.

[AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data](https://arxiv.org/abs/2402.00769) by Fu-Yun Wang et al.
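
The image-conditioned sampling path for this checkpoint is provided by the project code linked below; diffusers does not ship a dedicated AnimateLCM-I2V pipeline. As a rough illustration of the few-step LCM sampling this model targets, the sketch below runs the companion AnimateLCM text-to-video motion adapter and LCM LoRA with a 4-step schedule. The repository and weight-file names are taken from the sibling AnimateLCM card and are assumptions, not guarantees about this checkpoint's layout.

```python
import torch
from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Assumed layout: motion adapter and LCM LoRA hosted in the sibling
# wangfuyun/AnimateLCM repo; the SD1.5 base checkpoint is interchangeable.
adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=torch.float16)
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16
)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")
pipe.load_lora_weights(
    "wangfuyun/AnimateLCM",
    weight_name="AnimateLCM_sd15_t2v_lora.safetensors",  # assumed file name
    adapter_name="lcm-lora",
)
pipe.set_adapters(["lcm-lora"], [0.8])
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

# Few-step consistency sampling: 4 steps with a low guidance scale.
output = pipe(
    prompt="a boat sailing on a calm lake, best quality",
    negative_prompt="bad quality, worst quality",
    num_frames=16,
    guidance_scale=2.0,
    num_inference_steps=4,
    generator=torch.Generator("cpu").manual_seed(0),
)
export_to_gif(output.frames[0], "animatelcm_sample.gif")
```

For image conditioning itself (feeding a reference frame in addition to the prompt), follow the project code in the GitHub repository linked below.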

## Example Video

<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63e9e92f20c109718713f5eb/SMZ4DAinSnrxKsVEW8dio.mp4"></video>

For more details, please refer to our [[paper](https://arxiv.org/abs/2402.00769)] | [[code](https://github.com/G-U-N/AnimateLCM)] | [[proj-page](https://animatelcm.github.io/)] | [[civitai](https://civitai.com/models/310920/animatelcm-i2v-fast-image-to-video-generation)].

<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63e9e92f20c109718713f5eb/KCwSoZCdxkkmtDg1LuXsP.mp4"></video>