runtime error

Exit code: 1. Reason:

██| 662/662 [00:00<00:00, 4.78MB/s]
tokenizer.json: 100%|██████████| 33.4M/33.4M [00:00<00:00, 135MB/s]
tokenizer.model: 100%|██████████| 4.69M/4.69M [00:00<00:00, 16.1MB/s]
tokenizer_config.json: 100%|██████████| 1.16M/1.16M [00:00<00:00, 39.0MB/s]
Gemma model downloaded to /root/.cache/huggingface/hub/models--google--gemma-3-1b-it/snapshots/dcc83ea841ab6100d6b47a070329e1ba4cf78752
Initializing text encoder with:
  checkpoint_path=/root/.cache/huggingface/hub/models--MihaiPopa-1--LTX-2-Lite-2.4B/snapshots/6f94d24083bbe9619697931c9f59dfe9258a23e8/ltx-2-2.4b-pruned.safetensors
  gemma_root=/root/.cache/huggingface/hub/models--google--gemma-3-1b-it/snapshots/dcc83ea841ab6100d6b47a070329e1ba4cf78752
  device=cuda
Traceback (most recent call last):
  File "/app/app.py", line 63, in <module>
    text_encoder = model_ledger.text_encoder()
  File "/usr/local/lib/python3.13/site-packages/ltx_pipelines/utils/model_ledger.py", line 223, in text_encoder
    return self.text_encoder_builder.build(device=self._target_device(), dtype=self.dtype).to(self.device).eval()
  File "/usr/local/lib/python3.13/site-packages/ltx_core/loader/single_gpu_model_builder.py", line 74, in build
    config = self.model_config()
  File "/usr/local/lib/python3.13/site-packages/ltx_core/loader/single_gpu_model_builder.py", line 44, in model_config
    return self.model_loader.metadata(first_shard_path)
  File "/usr/local/lib/python3.13/site-packages/ltx_core/loader/sft_loader.py", line 60, in metadata
    return json.loads(f.metadata()["config"])
TypeError: 'NoneType' object is not subscriptable
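The failing line, `json.loads(f.metadata()["config"])`, assumes the safetensors file carries an embedded `config` entry in its header metadata; here `f.metadata()` returned `None`, which suggests the pruned checkpoint was re-saved without its `__metadata__` block. A minimal stdlib-only sketch of how to check this, assuming the standard safetensors on-disk layout (8-byte little-endian header length, then a JSON header with an optional `__metadata__` key); the helper names `read_safetensors_metadata` and `load_embedded_config` are hypothetical, not part of ltx_core:

```python
import json
import struct

def read_safetensors_metadata(path):
    # The safetensors format begins with an 8-byte little-endian header
    # length, followed by that many bytes of JSON. User metadata, if
    # present, sits under the "__metadata__" key; this returns None when
    # it is absent, which matches what f.metadata() returned above.
    with open(path, "rb") as fh:
        (header_len,) = struct.unpack("<Q", fh.read(8))
        header = json.loads(fh.read(header_len))
    return header.get("__metadata__")

def load_embedded_config(metadata):
    # Defensive version of the failing call in sft_loader.py, which
    # subscripts the metadata without checking it exists.
    if metadata is None or "config" not in metadata:
        raise ValueError(
            "checkpoint has no embedded 'config' metadata; the file was "
            "likely re-saved (e.g. pruned) without its __metadata__ block"
        )
    return json.loads(metadata["config"])
```

Running `read_safetensors_metadata` against `ltx-2-2.4b-pruned.safetensors` would confirm whether the metadata is missing; if it returns `None`, the checkpoint needs to be re-exported with the `config` metadata intact (or loaded through a path that supplies the config separately).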
