Instructions to use google/madlad400-8b-lm with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use google/madlad400-8b-lm with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="google/madlad400-8b-lm", trust_remote_code=True)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/madlad400-8b-lm", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("google/madlad400-8b-lm", trust_remote_code=True)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use google/madlad400-8b-lm with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "google/madlad400-8b-lm"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "google/madlad400-8b-lm",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/google/madlad400-8b-lm
```
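The same completion request shown in the curl example can be built from Python with only the standard library. This is a minimal sketch assuming a vLLM server is already running on localhost:8000; the actual network call is left commented out so the snippet is safe to run standalone.

```python
import json
import urllib.request

# Same payload as the curl example above
payload = {
    "model": "google/madlad400-8b-lm",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is up:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```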
- SGLang
How to use google/madlad400-8b-lm with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "google/madlad400-8b-lm" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "google/madlad400-8b-lm",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "google/madlad400-8b-lm" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "google/madlad400-8b-lm",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use google/madlad400-8b-lm with Docker Model Runner:
```shell
docker model run hf.co/google/madlad400-8b-lm
```
This repository holds the safetensors weights for the MADLAD-400 8B-parameter language model.
The HF transformers code to run inference is not ready yet. The original implementation is in JAX/Flaxformer.
The model architecture is the same as PaLM 8B: a decoder-only T5 with 32 layers, 16 query heads, 1 KV head, and an embedding size of 4096.
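To illustrate what the 16-query-head / 1-KV-head layout means for tensor shapes, here is a NumPy sketch of multi-query attention at the stated sizes. The random projection weights and token count are placeholders for illustration, not the real parameters.

```python
import numpy as np

# Stated config: d_model=4096, 16 query heads, 1 shared KV head
d_model, n_q_heads = 4096, 16
head_dim = d_model // n_q_heads  # 256

rng = np.random.default_rng(0)
x = rng.standard_normal((8, d_model))  # 8 tokens (placeholder input)

W_q = rng.standard_normal((d_model, n_q_heads * head_dim)) / np.sqrt(d_model)
W_k = rng.standard_normal((d_model, head_dim)) / np.sqrt(d_model)  # 1 KV head
W_v = rng.standard_normal((d_model, head_dim)) / np.sqrt(d_model)

q = (x @ W_q).reshape(8, n_q_heads, head_dim)
k = x @ W_k  # a single K/V head is shared by all 16 query heads
v = x @ W_v

scores = np.einsum("qhd,kd->hqk", q, k) / np.sqrt(head_dim)
weights = np.exp(scores - scores.max(-1, keepdims=True))
weights /= weights.sum(-1, keepdims=True)
out = np.einsum("hqk,kd->qhd", weights, v).reshape(8, n_q_heads * head_dim)
```

The point of multi-query attention is visible in the shapes: the KV cache stores one `head_dim`-wide head per token instead of sixteen, shrinking it by the number of query heads.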
These are the main differences relative to the original T5 architecture:
- SwiGLU Activation
- Parallel Layers
- Multi-Query Attention
- RoPE Embeddings
- Shared Input-Output Embeddings
- No biases
- Bidirectional attention
- Layer Norm with `center_scale_at_zero` and final layer with `use_scale=False`
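To make the SwiGLU, parallel-layers, and no-biases bullets concrete, here is a hedged NumPy sketch of a PaLM-style "parallel" decoder block. The toy sizes, the attention stub, and the bias-free layer norm are illustrative assumptions, not the exact Flaxformer configuration.

```python
import numpy as np

d_model, d_ff = 64, 256  # toy sizes; the real model uses d_model=4096
rng = np.random.default_rng(0)

def silu(x):
    return x / (1.0 + np.exp(-x))

def layer_norm(x, eps=1e-6):
    # no learned bias, matching the "no biases" bullet (scale omitted here)
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

W_gate = rng.standard_normal((d_model, d_ff)) / np.sqrt(d_model)
W_up = rng.standard_normal((d_model, d_ff)) / np.sqrt(d_model)
W_down = rng.standard_normal((d_ff, d_model)) / np.sqrt(d_ff)

def swiglu_mlp(h):
    # SwiGLU: silu-gated linear unit followed by a down-projection
    return (silu(h @ W_gate) * (h @ W_up)) @ W_down

def attn_stub(h):
    # stand-in for self-attention; a real block would mix across tokens
    return np.zeros_like(h)

def parallel_block(x):
    # "parallel layers": attention and MLP both read the same normed input,
    # and both outputs join the residual in one step:
    #   y = x + attn(ln(x)) + mlp(ln(x))
    h = layer_norm(x)
    return x + attn_stub(h) + swiglu_mlp(h)

x = rng.standard_normal((4, d_model))
y = parallel_block(x)
```

Compared with the sequential T5 block (attention first, then MLP on its output), the parallel form lets the two matmul paths run concurrently, which is the main reason PaLM adopted it.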
If you are looking for the language models, here are the available versions:
Article: MADLAD-400: A Multilingual And Document-Level Large Audited Dataset
Abstract:
We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train an 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models available to the research community.