runtime error

Exit code: 3. Reason:

The reported GGUF Arch is: llama
Arch Category: 0
---
Identified as GGUF model. Attempting to Load...
---
Using automatic RoPE scaling for GGUF. If the model has custom RoPE settings, they'll be used directly instead!
System Info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | AMX_INT8 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
Automatic RoPE Scaling: Using (scale:1.000, base:500000.0).
Threadpool set to 7 threads and 7 blasthreads...
llama_context: graph nodes = 1014
llama_context: graph splits = 1
attach_threadpool: call
Attempting to apply Multimodal Projector: /opt/koboldcpp/mmproj.gguf
clip_model_loader: model name: openai/clip-vit-large-patch14-336
clip_model_loader: description: image encoder for LLaVA
clip_model_loader: GGUF version: 3
clip_model_loader: alignment: 32
clip_model_loader: n_tensors: 377
clip_model_loader: n_kv: 20
clip_model_loader: has vision encoder
clip_ctx: CLIP using CPU backend
load_hparams: projector: mlp
load_hparams: n_embd: 1024
load_hparams: n_head: 16
load_hparams: n_ff: 4096
load_hparams: n_layer: 23
load_hparams: ffn_op: gelu_quick
load_hparams: projection_dim: 768
--- vision hparams ---
load_hparams: image_size: 336
load_hparams: patch_size: 14
load_hparams: has_llava_proj: 1
load_hparams: minicpmv_version: 0
load_hparams: proj_scale_factor: 0
load_hparams: n_wa_pattern: 0
load_hparams: model size: 187.71 MiB
load_hparams: metadata size: 0.16 MiB
load_tensors: loaded 377 tensors from /opt/koboldcpp/mmproj.gguf
alloc_compute_meta: CPU compute buffer size = 32.88 MiB
gpttype_load_model: mmproj embedding mismatch (4096 and 3072)! Make sure you use the correct mmproj file!
Load Text Model OK: False
Could not load text model: /opt/koboldcpp/model.gguf
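The failure is the line "gpttype_load_model: mmproj embedding mismatch (4096 and 3072)!": the multimodal projector in mmproj.gguf and the text model at /opt/koboldcpp/model.gguf disagree on the embedding width (4096 vs. 3072), so koboldcpp refuses to attach the projector and aborts the load. One way to check which mmproj matches which model before deploying is to inspect the GGUF metadata with the gguf Python package (pip install gguf). Below is a minimal sketch under that assumption; the kv helper and the "mm.*" tensor-name convention for LLaVA-style projectors are illustrative, not a koboldcpp API:

```python
from gguf import GGUFReader, GGUFValueType  # pip install gguf


def kv(reader: GGUFReader, key: str):
    """Read a scalar or string metadata value from a GGUF file, or None."""
    field = reader.fields.get(key)
    if field is None:
        return None
    part = field.parts[field.data[0]]
    if field.types[0] == GGUFValueType.STRING:
        return bytes(part).decode("utf-8")
    return part[0]  # scalars are stored as one-element arrays


# Paths taken from the log above; adjust to your files.
model = GGUFReader("/opt/koboldcpp/model.gguf")
arch = kv(model, "general.architecture")  # "llama" per the log
print("text model n_embd:", kv(model, f"{arch}.embedding_length"))

mmproj = GGUFReader("/opt/koboldcpp/mmproj.gguf")
# In LLaVA-style mmproj files, the tensors feeding the language model are
# named "mm.*"; one dimension of the final projector tensor must equal the
# text model's n_embd (assumption: mlp projector, as load_hparams reports).
for t in mmproj.tensors:
    if t.name.startswith("mm."):
        print(t.name, list(t.shape))
```

If the projector's output dimension does not match the text model's n_embd, the fix is to use the mmproj published alongside the exact base model (same family and parameter size) rather than a projector converted for a different model.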
