runtime error

Exit code: 1. Reason:
…esizing=False`
The new lm_head weights will be initialized from a multivariate normal distribution that has old embeddings' mean and covariance. As described in this article: https://nlp.stanford.edu/~johnhew/vocab-expansion.html. To disable this, use `mean_resizing=False`
Resized tokenizer and embedding from 320 to 392 tokens.
adapter_config.json: 100%|██████████| 753/753 [00:00<00:00, 6.72MB/s]
/usr/local/lib/python3.10/site-packages/peft/tuners/tuners_utils.py:1222: UserWarning: Model has `tie_word_embeddings=True` and a tied layer is part of the adapter, but `ensure_weight_tying` is not set to True. This can lead to complications, for example when merging the adapter or converting your model to formats other than safetensors. Check the discussion here: https://github.com/huggingface/peft/issues/2777
  warnings.warn(msg)
adapter_model.safetensors: 100%|██████████| 430M/430M [00:02<00:00, 210MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 49, in <module>
    model = ReactionPredictionModel(candidate_models)
  File "/home/user/app/utils.py", line 120, in __init__
    self.load_retro_model(candidate_models[model])
  File "/home/user/app/utils.py", line 173, in load_retro_model
    self.retro_model = PeftModel.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/peft/peft_model.py", line 568, in from_pretrained
    load_result = model.load_adapter(
  File "/usr/local/lib/python3.10/site-packages/peft/peft_model.py", line 1368, in load_adapter
    load_result = set_peft_model_state_dict(
  File "/usr/local/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 455, in set_peft_model_state_dict
    state_dict[store_key] = peft_model_state_dict[lookup_key]
KeyError: 'base_model.model.lm_head.weight'
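The KeyError at the end is what actually kills the container: set_peft_model_state_dict expects the adapter file to contain 'base_model.model.lm_head.weight', but with tie_word_embeddings=True the lm_head tensor is a view of the input embeddings, so safetensors drops it as a duplicate at save time. That is exactly the situation the peft UserWarning about `ensure_weight_tying` describes. Below is a minimal load-side patch that re-materializes the missing tensor before loading the adapter. It is a sketch, not the app's actual code: the repo id "user/retro-adapter" and the embed_tokens key name are assumptions, so compare them against the keys in the real file.

import os

from huggingface_hub import snapshot_download
from safetensors.torch import load_file, save_file

# Download the adapter into a plain local directory (patching files inside
# the shared HF cache is best avoided).
adapter_dir = snapshot_download("user/retro-adapter",  # hypothetical repo id
                                local_dir="adapter_local")
path = os.path.join(adapter_dir, "adapter_model.safetensors")

sd = load_file(path)
missing_key = "base_model.model.lm_head.weight"          # key PEFT fails on
tied_key = "base_model.model.model.embed_tokens.weight"  # assumed key for the tied embedding

if missing_key not in sd and tied_key in sd:
    # Re-materialize the tied duplicate so set_peft_model_state_dict finds it.
    sd[missing_key] = sd[tied_key].clone()
    save_file(sd, path)

# PeftModel.from_pretrained(base_model, adapter_dir) should now resolve the key.

The durable fix, per the warning's pointer to https://github.com/huggingface/peft/issues/2777, is to re-save the adapter with `ensure_weight_tying` enabled (or with the weights untied) so the lm_head entry is written to adapter_model.safetensors in the first place.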
