Firworks committed on
Commit eba07f9 · verified · 1 Parent(s): 97fa1e2

Update README.md

README.md CHANGED
@@ -16,6 +16,12 @@ license: apache-2.0
 Check the original model card for information about this model.
 
 # Running the model with vLLM in Docker
+
+Requires xlstm (`pip install xlstm==2.0.4`).
+
+As of vLLM 0.13.0rc2.dev118, vLLM does not support `BolmoForCausalLM` yet, so use Transformers for now.
+
+Some day, this command will probably work:
 ```sh
 sudo docker run --runtime nvidia --gpus all -p 8000:8000 --ipc=host vllm/vllm-openai:nightly --model Firworks/Bolmo-1B-nvfp4 --dtype auto --max-model-len 32768 --trust-remote-code
 ```
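
Since the diff points readers at Transformers until vLLM gains `BolmoForCausalLM` support, here is a minimal loading sketch. It assumes `transformers` and `xlstm==2.0.4` are installed; the default prompt and generation settings are illustrative, not taken from the model card.

```python
def load_bolmo(model_id: str = "Firworks/Bolmo-1B-nvfp4"):
    """Load the tokenizer and model with Transformers.

    trust_remote_code=True is required because BolmoForCausalLM is a
    custom architecture shipped with the checkpoint, not built into
    the transformers library.
    """
    # Lazy import so the helper can be defined without transformers present.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        trust_remote_code=True,
        device_map="auto",       # place weights on available GPU(s)
        torch_dtype="auto",      # use the dtype stored in the checkpoint
    )
    return tokenizer, model


# Example usage (downloads ~1B parameters; a GPU is strongly recommended):
# tokenizer, model = load_bolmo()
# inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
# output = model.generate(**inputs, max_new_tokens=32)
# print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The lazy import keeps the helper importable on machines that have not yet installed the dependencies, and `device_map="auto"` lets Accelerate decide placement across whatever hardware is available.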