gemma_3_4b_it-lora-r8-eng-lug

This is a LoRA adapter for the AfriScience-MT project, enabling efficient scientific machine translation for African languages.

Adapter Description

| Property | Value |
|---|---|
| Base Model | google/gemma-3-4b-it |
| Translation Direction | English → Luganda |
| LoRA Rank (r) | 8 |
| LoRA Alpha | 16 |
| Training Method | QLoRA (4-bit quantization) |
| Domain | Scientific/Academic texts |

Why LoRA?

LoRA (Low-Rank Adaptation) enables efficient fine-tuning by training only a small number of additional parameters. This adapter adds only ~4.0M parameters to the base model while achieving strong translation performance.
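A quick way to sanity-check this figure once the adapter is loaded with PEFT (see Usage below) is to count the injected LoRA weights. This is only a sketch and assumes peft's standard lora_A / lora_B parameter naming:

# "model" is the PeftModel built in the Quick Start section below.
lora_params = sum(p.numel() for name, p in model.named_parameters() if "lora_" in name)
print(f"LoRA parameters: {lora_params / 1e6:.1f}M")  # expected to be roughly 4M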

Evaluation Results

Performance on the AfriScience-MT validation and test sets:

| Split | BLEU | chrF | SSA-COMET |
|---|---|---|---|
| Validation | 20.83 | 47.90 | 64.62 |
| Test | 18.43 | 46.49 | 63.75 |

Metrics explanation:

  • BLEU: Measures n-gram overlap with reference translations (0-100, higher is better)
  • chrF: Character-level F-score, robust for morphologically rich languages (0-100, higher is better)
  • SSA-COMET: Neural metric trained for Sub-Saharan African languages, shown as percentage (0-100, higher is better) (McGill-NLP/ssa-comet-stl)
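For reference, BLEU and chrF can be computed with the sacrebleu library. This is only a minimal sketch, not necessarily the project's exact evaluation pipeline (SSA-COMET additionally requires the unbabel-comet package and the McGill-NLP/ssa-comet-stl checkpoint):

import sacrebleu

# hypotheses: model outputs; references: gold Luganda translations (placeholders here)
hypotheses = ["model output 1", "model output 2"]
references = ["reference translation 1", "reference translation 2"]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}  chrF: {chrf.score:.2f}")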

Usage

Quick Start

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

# Configure 4-bit quantization (recommended for memory efficiency)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-4b-it",
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-it")

# Load LoRA adapter
adapter_name = "AfriScience-MT/gemma_3_4b_it-lora-r8-eng-lug"
model = PeftModel.from_pretrained(base_model, adapter_name)
model.eval()

# Prepare translation prompt
source_text = "Climate change significantly impacts agricultural productivity in sub-Saharan Africa."
instruction = "Translate the following English scientific text to Luganda."

# Format for Gemma chat template
messages = [{"role": "user", "content": f"{instruction}\n\n{source_text}"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate translation
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        num_beams=5,
        early_stopping=True,
        pad_token_id=tokenizer.pad_token_id,
    )

# Decode only the generated part
generated = outputs[0][inputs["input_ids"].shape[1]:]
translation = tokenizer.decode(generated, skip_special_tokens=True)
print(translation)

Without Quantization (Full Precision)

# For GPUs with sufficient memory (see Hardware Requirements below)
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-4b-it",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
model = PeftModel.from_pretrained(base_model, "AfriScience-MT/gemma_3_4b_it-lora-r8-eng-lug")
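If the PEFT wrapper is not needed at inference time, the adapter can optionally be merged into the unquantized base weights. This uses the standard peft merge_and_unload API and is only advisable in full precision, not on 4-bit quantized weights; the output path is illustrative:

# Merge LoRA weights into the base model and drop the PEFT wrapper.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("./gemma-3-4b-it-eng-lug-merged")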

Training Details

Hyperparameters

| Parameter | Value |
|---|---|
| LoRA Rank (r) | 8 |
| LoRA Alpha | 16 |
| LoRA Dropout | 0.05 |
| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Epochs | 3 |
| Batch Size | 2 |
| Learning Rate | 2e-04 |
| Max Sequence Length | 512 |
| Gradient Accumulation | 4 |
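These values correspond to a PEFT configuration along the following lines; this is a sketch assuming the standard peft LoraConfig API, with the training script as the authoritative source:

from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)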

Hardware Requirements

| Configuration | VRAM Required |
|---|---|
| 4-bit (QLoRA) | ~8-12 GB |
| 8-bit | ~16-20 GB |
| Full precision | ~24-40 GB |
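For the 8-bit configuration, the only change from the Quick Start is the quantization config passed when loading the base model; everything else stays the same:

from transformers import BitsAndBytesConfig

# 8-bit quantization: a middle ground between QLoRA (4-bit) and full bf16.
bnb_config_8bit = BitsAndBytesConfig(load_in_8bit=True)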

Reproducibility

To reproduce this adapter:

# Clone the AfriScience-MT repository
git clone https://github.com/afriscience-mt/afriscience-mt.git
cd afriscience-mt

# Install dependencies
pip install -r requirements.txt

# Run LoRA training
python -m afriscience_mt.scripts.run_lora_training \
    --data_dir ./data \
    --source_lang eng \
    --target_lang lug \
    --model_name google/gemma-3-4b-it \
    --model_type gemma \
    --lora_rank 8 \
    --output_dir ./output \
    --num_epochs 3 \
    --batch_size 4 \
    --load_in_4bit

Limitations

  • Domain Specificity: Optimized for scientific/academic texts; may underperform on casual or colloquial language.
  • Language Direction: Only supports English → Luganda translation.
  • Base Model Required: Must be used with the google/gemma-3-4b-it base model.
  • Context Length: Maximum context is model-dependent; longer texts should be chunked (see the sketch after this list).
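A minimal chunking approach splits the source into sentences and translates them one at a time; translate_text below is a hypothetical helper that wraps the generation code from the Quick Start:

import re

def translate_long_text(text: str, translate_text) -> str:
    # Naive sentence split; a dedicated sentence segmenter would be more robust.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(translate_text(s) for s in sentences if s)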

Citation

If you use this adapter, please cite the AfriScience-MT project:

@inproceedings{afriscience-mt-2025,
  title={AfriScience-MT: Machine Translation for African Scientific Literature},
  author={AfriScience-MT Team},
  year={2025},
  url={https://github.com/afriscience-mt/afriscience-mt}
}

License

This adapter is released under the Apache 2.0 License.
