CyberSecQwen-4B — Model Card

🏆 AMD Developer Hackathon submission. Full project writeup, demo video, and judging context at lablab.ai/ai-hackathons/amd-developer/athena19/cybersecqwen-4b-cti-specialist-fine-tuned-on-amd.

Model Information

CyberSecQwen-4B is a 4B-parameter language model specialized for defensive cybersecurity tasks, fine-tuned from Qwen3-4B-Instruct-2507. It is purpose-built for two evaluation skills measured by CTI-Bench: mapping CVE descriptions to their CWE category (CTI-RCM) and answering cyber threat intelligence multiple-choice questions (CTI-MCQ).

Under the evaluation protocol of Foundation-Sec-8B (arXiv:2504.21039), CyberSecQwen-4B retains 97.3% of Foundation-Sec-Instruct-8B's CTI-RCM accuracy while exceeding its CTI-MCQ by +8.7 points, at half the parameter count.

The full training, merge, and evaluation pipeline runs end-to-end on a single AMD Instinct MI300X 192GB instance using ROCm + vLLM + FlashAttention-2. A companion model trained with the same recipe on Gemma-4-E2B-it — Gemma4Defense-2B — converges to the same CTI-RCM accuracy within 0.9 points (0.6754 vs 0.6664), demonstrating that the result is recipe-driven rather than substrate-specific.

Base model Qwen/Qwen3-4B-Instruct-2507
Parameters 4.0B total (3.6B non-embedding)
Architecture Qwen3 (RoPE, GQA 32:8, head_dim=128, 36 layers)
Context length 32,768 native
Adapter LoRA r=64, alpha=64, dropout=0.05
Precision bfloat16
Languages English
License Apache 2.0

Intended Use

Intended Use Cases

CyberSecQwen-4B is intended for security practitioners, researchers, and engineers working on:

  • CWE classification — mapping vulnerability descriptions (CVEs, advisories) to MITRE CWE categories
  • Cyber threat intelligence Q&A — answering structured questions about cybersecurity concepts, attacks, controls
  • Defensive analysis assistants — supporting human analysts who triage CVEs, prioritize patches, or document threat-actor behavior
  • Cybersecurity benchmarking on AMD hardware — as a reference fine-tune for the AMD MI300X stack and a comparator for compact-model performance on CTI-Bench

Downstream Use

The model can be used as a building block in:

  • Security operations center (SOC) ticket triage tools that suggest a likely CWE for an incoming CVE
  • Vulnerability management dashboards that pre-classify CVE feeds before human review
  • Internal cyber knowledge bases / chat assistants for security teams
  • Reference deployments demonstrating CTI workloads on AMD MI300X via vLLM ROCm

Out-of-Scope Use

The following uses are out of scope and are neither recommended nor intended:

  1. Generating harmful content — the model must not be used to produce exploit code, weaponized proof-of-concept payloads, attacker tradecraft, or instructions that materially aid offensive operations.
  2. Critical security decisions without human oversight — the model should not auto-execute remediation, blocklist updates, account lockouts, or any action whose reversal carries cost; outputs are advisory and require qualified human review.
  3. Legal or medical advice — the model is trained on cybersecurity domain content and is not appropriate for legal, medical, or other regulated-advice contexts.
  4. Non-security use cases — general chat, code generation, summarization, translation, or other domains outside its specialization will produce lower-quality output than purpose-built models.
  5. Violation of laws or regulations — including but not limited to unauthorized vulnerability scanning, illegal data access, or misuse contrary to applicable cybersecurity statutes (CFAA, GDPR, etc.).

Hardware Requirements

The numbers below are first-principles estimates from the bf16 weight footprint plus typical KV-cache overhead at the trained 4096-token context. They are not measured throughput numbers; for production deployment, profile against your specific traffic pattern. The underlying arithmetic is sketched in code after the notes below.

| Specification | CyberSecQwen-4B | Foundation-Sec-Instruct-8B (reference) |
|---|---|---|
| Parameters (total / non-embedding) | 4.0 B / 3.6 B | 8 B |
| bf16 weight file on disk | ~8.0 GB | ~16 GB |
| Inference VRAM, weights only (bf16) | ~8 GB | ~16 GB |
| Inference VRAM, weights + 4 K KV cache (bf16) | ~9–10 GB | ~17–18 GB |
| Single-GPU class (bf16, headroom for batch ≥ 1) | Fits on any 12 GB+ consumer card | Typically requires a 24 GB+ datacenter card |
| AMD Instinct MI300X 192 GB (validated) | Fits trivially with very large batch / long context | Fits trivially |

Notes:

  • Compute (FLOPs / token) is approximately proportional to the parameter count at fixed context length, so per-token inference cost is roughly 0.50× that of an 8 B model.
  • Quantized variants (int8, int4) reduce the weight footprint to roughly ½ and ¼ of the bf16 size, respectively. The released checkpoint is bf16 only; community quantization is not validated by the authors of this release.
  • This model has been validated end-to-end on AMD Instinct MI300X via vLLM ROCm + FlashAttention-2; consult the "How to Get Started" section below for the exact serving command on AMD hardware.
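
The estimates in the table above follow from simple arithmetic on the architecture figures listed under Model Information. The sketch below reproduces that arithmetic; it is illustrative only (a standard GQA KV-cache formula, bf16 everywhere, no allocator or framework overhead), not a measured memory profile.

# Back-of-envelope VRAM estimate for CyberSecQwen-4B (illustrative, not measured).
BYTES_BF16 = 2

params_total = 4.0e9                                   # total parameters
weights_gb = params_total * BYTES_BF16 / 1e9
print(f"bf16 weights: ~{weights_gb:.1f} GB")           # ~8.0 GB

# KV cache per token = 2 (K and V) * n_layers * n_kv_heads * head_dim * bytes
n_layers, n_kv_heads, head_dim = 36, 8, 128            # Qwen3-4B: GQA 32:8, head_dim=128
kv_per_token = 2 * n_layers * n_kv_heads * head_dim * BYTES_BF16
context = 4096                                         # trained context length
kv_gb = kv_per_token * context / 1e9
print(f"KV cache @ {context} tokens: ~{kv_gb:.2f} GB per sequence")      # ~0.60 GB

print(f"weights + one 4K sequence: ~{weights_gb + kv_gb:.1f} GB")        # ~8.6 GB before framework overhead
print(f"int8 weights: ~{weights_gb / 2:.1f} GB, int4: ~{weights_gb / 4:.1f} GB (quantized, not validated)")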

How to Get Started with the Model

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "athena129/CyberSecQwen-4B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load the merged bf16 checkpoint and place it automatically across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

cve = ("A deserialization vulnerability in the destruct() function of Laravel "
       "v8.5.9 allows attackers to execute arbitrary commands.")

# The model is trained to answer with a brief justification and the bare CWE ID on the last line.
messages = [{
    "role": "user",
    "content": (
        "Analyze the following CVE description and map it to the appropriate CWE. "
        "Provide a brief justification for your choice. "
        "Ensure the last line of your response contains only the CWE ID.\n\n"
        f"CVE Description: {cve}"
    ),
}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, temperature=0.3, do_sample=True)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))

Serving via vLLM on AMD MI300X

docker run --rm --network=host --device=/dev/kfd --device=/dev/dri \
  -e VLLM_ROCM_USE_AITER=1 -e TORCH_BLAS_PREFER_HIPBLASLT=1 \
  vllm/vllm-openai-rocm:latest \
  --model athena129/CyberSecQwen-4B \
  --served-model-name cybersecqwen-4b \
  --attention-backend TRITON_ATTN \
  --dtype bfloat16 \
  --max-model-len 4096 \
  --gpu-memory-utilization 0.9
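
Once the container is serving, the endpoint speaks the standard OpenAI-compatible API. The following smoke test is a sketch assuming the defaults above: port 8000 on the local host, the openai Python client v1+, and the --served-model-name from the command.

from openai import OpenAI

# vLLM's OpenAI-compatible server; no real API key is required.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

cve = ("A deserialization vulnerability in the destruct() function of Laravel "
       "v8.5.9 allows attackers to execute arbitrary commands.")

response = client.chat.completions.create(
    model="cybersecqwen-4b",          # matches --served-model-name
    messages=[{
        "role": "user",
        "content": (
            "Analyze the following CVE description and map it to the appropriate CWE. "
            "Provide a brief justification for your choice. "
            "Ensure the last line of your response contains only the CWE ID.\n\n"
            f"CVE Description: {cve}"
        ),
    }],
    temperature=0.3,
    max_tokens=256,
)
print(response.choices[0].message.content)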

Training and Evaluation

Training Data

The model was trained on a combined cybersecurity corpus of approximately 14,776 supervised records:

  • CTI-RCM 2021 (decontaminated) — CVE → CWE classification examples drawn from MITRE/NVD public records dated 2021. Items appearing in the CTI-Bench evaluation splits were explicitly removed prior to training. (~6,776 records)
  • CVE / CTI synthetic Q&A — defensive-analyst-style cyber question–answer pairs grounded in CVE descriptions. (~8,000 records)

Decontamination matters here: an earlier internal version of this work showed roughly 72% test-set overlap when trained on undeduplicated CTI corpora, producing inflated CTI-RCM scores that did not generalize. The released model trains exclusively on the 2021 cohort with overlap items removed.
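
For illustration, the core of such an overlap filter can be a set difference on CVE identifiers between the candidate training pool and the benchmark split. The sketch below is hypothetical: file names and record layout are placeholders, and the actual pipeline may match on more than the CVE ID.

import json
import re

CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}")

def cve_ids(text: str) -> set[str]:
    """Extract all CVE identifiers mentioned in a string."""
    return set(CVE_RE.findall(text))

# Placeholder paths: one JSONL of candidate training records, one of benchmark items.
with open("cti_rcm_train_candidates.jsonl") as f:
    train = [json.loads(line) for line in f]
with open("cti_bench_rcm_eval.jsonl") as f:
    bench = [json.loads(line) for line in f]

# Every CVE that appears anywhere in the evaluation split is off-limits for training.
bench_cves = set().union(*(cve_ids(json.dumps(item)) for item in bench))

clean = [rec for rec in train if not (cve_ids(json.dumps(rec)) & bench_cves)]
print(f"kept {len(clean)} / {len(train)} records after removing benchmark overlap")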

Methodology

This model uses direct supervised fine-tuning (SFT) of an instruction-tuned base via LoRA. The training recipe was selected through a controlled series of experiments across multiple trained variants, spanning two model families and several corpus compositions, with multi-trial benchmark validation used to lock the released hyperparameters.

Key methodological choices that informed the released recipe:

  • Direct SFT, not knowledge distillation. Knowledge-distillation variants from a larger 20B teacher model (CyberPal-2.0-20B) were evaluated during recipe development. At the corpus sizes tested (≤ 15K supervised records), direct SFT on the curated corpus outperformed distillation on the headline benchmarks. The released model is direct SFT only.
  • Decontaminated training data. As described under Training Data, the released model trains exclusively on the 2021 cohort with CTI-Bench overlap items removed; an earlier undeduplicated iteration showed ~72% test-set overlap and correspondingly inflated CTI-RCM scores that did not generalize.
  • Instruction-tuned base, not pre-trained base. Direct SFT on the IT checkpoint preserves the existing format priors (terse-answer multiple-choice convention) better than SFT on the pre-trained base; comparable runs on base checkpoints (Qwen3-4B-Base + identical recipe) showed substantial CTI-MCQ format-binding decay at the same corpus scale.
  • Recipe portability across substrates was an explicit design goal. The same corpus + hyperparameters were applied independently to Gemma-4-E2B-it (Gemma4Defense-2B). Both models converge to within 0.9 points on CTI-RCM, providing a built-in robustness check that the result is recipe-driven rather than substrate-specific.
  • Multi-trial benchmarking. All headline numbers are means of 5 independent trials with random sampling seeds at temperature 0.3; standard deviations are reported alongside.
  • AMD MI300X end-to-end pipeline. Training, adapter merging, and evaluation all run on a single AMD Instinct MI300X 192 GB instance via PyTorch + ROCm + Hugging Face transformers + PEFT + TRL inside the official vLLM ROCm Docker image. FlashAttention-2 is enabled in training for forward-and-backward passes; vLLM serves with TRITON_ATTN backend for inference.

Training Setup

Hyperparameter Value
Adapter LoRA, r=64, alpha=64, dropout=0.05
Target modules q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
Learning rate 5e-5
Schedule cosine, warmup_ratio=0.05
Weight decay 0.01
Per-device batch size 2
Gradient accumulation 8 (effective batch = 16)
Epochs 10
Max sequence length 4096
Precision bfloat16
Attention implementation flash_attention_2
Random seed 42

The base model was Qwen3-4B-Instruct-2507, an instruction-tuned variant with Apache 2.0 licensing. Training was performed end-to-end on a single AMD Instinct MI300X 192GB instance via the AMD Developer Cloud, using PyTorch + ROCm 7 + Hugging Face transformers, peft, and trl 0.29.1 inside the official vllm/vllm-openai-rocm Docker image.

FlashAttention-2 is enabled because Qwen3-4B's attention head dimension (128) fits within the gfx942 shared-memory budget on AMD MI300X — the same FA2 approach is not viable on Gemma-4 due to its 512 head_dim on global-attention layers, which is why the companion Gemma4Defense-2B trains with sdpa instead.
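
As a rough guide, the hyperparameters in the table above map onto standard peft/TRL configuration objects as sketched below. This is not the exact training script: the dataset here is a one-record placeholder, and some argument names (e.g. max_seq_length vs max_length, processing_class vs tokenizer) shift between trl releases.

import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

base_id = "Qwen/Qwen3-4B-Instruct-2507"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # head_dim=128 fits the gfx942 budget on MI300X
)

peft_config = LoraConfig(
    r=64, lora_alpha=64, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    output_dir="cybersecqwen-4b-sft",
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    weight_decay=0.01,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,   # effective batch = 16
    num_train_epochs=10,
    max_seq_length=4096,             # renamed max_length in newer trl releases
    bf16=True,
    seed=42,
)

# Placeholder dataset; the real run uses the curated ~14,776-record corpus described above.
train_dataset = Dataset.from_list([{
    "messages": [
        {"role": "user", "content": "Map this CVE description to a CWE: ..."},
        {"role": "assistant", "content": "CWE-502"},
    ]
}])

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # tokenizer= in older trl versions
    peft_config=peft_config,
)
trainer.train()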

Evaluation

Evaluated under the Foundation-Sec-8B protocol (arXiv:2504.21039 §B.3-B.4): zero-shot for instruction-tuned models, 5-shot for pretrained base models, dataset's own Prompt column as the user message, no system prompt, temperature 0.3, max-tokens 512, concurrency 32. Reported numbers are the mean of 5 independent trials with random sampling seeds; standard deviations are reported alongside.
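
Concretely, a single trial under this protocol looks roughly like the sketch below: each row's Prompt column is sent verbatim as the sole user message (no system prompt) against the vLLM endpoint from the serving section, with the stated sampling parameters and request concurrency. The file name and ground-truth column name are assumptions about the local copy of CTI-Bench; scoring follows the strict-parse convention sketched further down.

from concurrent.futures import ThreadPoolExecutor

import pandas as pd
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Assumed local CTI-Bench TSV with a Prompt column and a ground-truth column ("GT").
df = pd.read_csv("cti-rcm.tsv", sep="\t")

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="cybersecqwen-4b",
        messages=[{"role": "user", "content": prompt}],  # zero-shot, no system prompt
        temperature=0.3,
        max_tokens=512,
    )
    return resp.choices[0].message.content

# Concurrency 32; the headline numbers average 5 such trials with fresh sampling seeds.
with ThreadPoolExecutor(max_workers=32) as pool:
    responses = list(pool.map(ask, df["Prompt"]))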

Headline result

| Benchmark | Metric | CyberSecQwen-4B | Foundation-Sec-Instruct-8B | Δ |
|---|---|---|---|---|
| CTI-MCQ (2,500 items) | strict_acc, 5-trial mean ± std | 0.5868 ± 0.0029 | 0.4996 | +8.7 pp |
| CTI-RCM (1,000 items) | strict_acc, 5-trial mean ± std | 0.6664 ± 0.0023 | 0.6850 | -1.9 pp |

Parseable rates were 100% on CTI-RCM and 98.1% on CTI-MCQ — the model produces well-formed outputs in the expected response convention.
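
For reference, strict_acc counts an item as correct only when a well-formed answer can be extracted and it exactly matches the ground truth, so the parseable rate upper-bounds the score. A minimal version of the parsing convention (the bare CWE ID on the last line for CTI-RCM, a single option letter for CTI-MCQ) might look like the following; the exact harness rules are not reproduced here.

import re

def parse_rcm(response: str) -> str | None:
    """Expect the last non-empty line to contain only a CWE ID, e.g. 'CWE-502'."""
    lines = [l.strip() for l in response.strip().splitlines() if l.strip()]
    if not lines:
        return None
    m = re.fullmatch(r"(CWE-\d+)\.?", lines[-1], flags=re.IGNORECASE)
    return m.group(1).upper() if m else None

def parse_mcq(response: str) -> str | None:
    """Expect a single standalone option letter A-D in a terse answer."""
    m = re.search(r"\b([A-D])\b", response.strip().upper())
    return m.group(1) if m else None

def strict_acc(responses: list[str], gold: list[str], parse) -> float:
    """Unparseable responses count as wrong, so the parse rate bounds the score."""
    hits = sum(parse(r) == g.strip().upper() for r, g in zip(responses, gold))
    return hits / len(gold)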

Pre / post fine-tune comparison

The improvement attributable to this fine-tune over its starting checkpoint:

| Stage | CTI-RCM | CTI-MCQ |
|---|---|---|
| Qwen3-4B-Instruct-2507 (raw, instruction-tuned base) | 0.519 | 0.473 |
| CyberSecQwen-4B (this fine-tune) | 0.6664 | 0.5868 |
| Lift | +15.1 pp | +12.0 pp |

Qwen3-4B-Instruct-2507's raw CTI-MCQ score (0.473) is substantially lower than the corresponding base model's score (0.667) under the chat-template evaluation: the same collapse in MCQ accuracy after instruction tuning that we observe for Foundation-Sec-Instruct (-15.6 pp vs the Foundation-Sec base). This fine-tune recovers and exceeds the IT starting point on both subsets, restoring most of the MCQ format binding that instruction tuning eroded while delivering a substantial CTI-RCM lift.

Comparison to other cybersecurity-relevant models we evaluated

All numbers below were measured by us under the protocol above (with the noted shot count), not quoted from third-party papers. CyberPal-2.0-20B numbers reflect a single-trial run under our protocol; its own paper reports 0.874 / 0.757 using a different prompt template (Figure 11 of arXiv:2510.14113). The roughly 2 pp agreement on CTI-MCQ validated our harness, while the CTI-RCM gap likely reflects the template difference.

| Model | Size | CTI-RCM | CTI-MCQ | Notes |
|---|---|---|---|---|
| Foundation-Sec-8B (base) | 8B | 0.745 | 0.655 | 5-shot pretrained reference |
| Foundation-Sec-Instruct-8B | 8B | 0.685 | 0.500 | 0-shot, our comparison target |
| CyberPal-2.0-20B (cyber-pal-security/CyberOss-2.0-20B) | 20B | 0.728* | 0.738* | independently verified under our protocol |
| CyberSecQwen-4B (this model) | 4B | 0.6664 ± 0.0023 | 0.5868 ± 0.0029 | 5-trial mean ± std |
| Gemma4Defense-2B (companion) | 2.3B | 0.6754 ± 0.0035 | 0.6042 ± 0.0090 | same recipe, different substrate |
| Qwen3-4B-Instruct-2507 (raw) | 4B | 0.519 | 0.473 | 0-shot, our base |
| Qwen3-4B-Base (raw) | 4B | 0.517 | 0.667 | 5-shot |
| Gemma-4-E4B-it (raw) | 5.1B effective | 0.618 | 0.666 | 0-shot |
| Gemma-4-E4B-base (raw) | 5.1B effective | 0.588 | 0.666 | 5-shot |

* Single-trial values from our independent reproduction.

Key highlights

  • Beats Foundation-Sec-Instruct-8B on CTI-MCQ by +8.7 points at half the parameter count.
  • Stays within ~2 points of Foundation-Sec-Instruct-8B on CTI-RCM under the same evaluation protocol.
  • Cross-substrate companion (Gemma4Defense-2B) reproduces the CTI-RCM result within 0.9 points using the same recipe on a different model family.
  • Independent reproduction of CyberPal-2.0-20B at the Foundation-Sec protocol confirms its CTI-MCQ accuracy within 2 points of its paper claim.
  • Trained, merged, and evaluated end-to-end on a single AMD MI300X 192GB instance with FlashAttention-2 enabled.

Limitations

  1. Domain-specific knowledge limitations. The model is trained on cybersecurity domain text and is not a general assistant. Tasks outside this domain will produce lower-quality output than purpose-built general models.

  2. Time-anchored training data. The CTI-RCM training cohort is drawn from 2021 records. Vulnerability classes that emerged or rose in prevalence after 2021 (e.g., AI/ML-specific weaknesses, recent supply-chain CWEs) are under-represented in training and will be classified less accurately.

  3. English-only. All training and evaluation data are in English; multilingual cyber tasks will degrade.

  4. CTI-RCM gap. Foundation-Sec-Instruct-8B remains stronger on CTI-RCM under this protocol (-1.9 point gap). Production deployments where CWE classification is the primary metric should benchmark both models on their specific input distribution.

  5. No safety RLHF. The model is supervised-fine-tuned only; the training data emphasizes defensive-analyst framing but no formal reinforcement-learning safety alignment was applied.

  6. Chat template note. The repository ships with a minimal training-aligned chat_template.jinja matching the format used during SFT (Qwen <|im_start|> / <|im_end|> user-and-assistant turns, no thinking-mode block). Inference via tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) produces correctly-formatted prompts; downstream tooling that injects system prompts or thinking-mode toggles outside this template may degrade output quality.

Recommendations

  1. Always have qualified security professionals review model outputs before implementation for any operational use case (patch prioritization, ticket routing, blocklisting).
  2. Use this model as an assistive tool rather than a replacement for expert human judgment, especially for novel vulnerability classes outside the 2021 training cohort.
  3. Validate on your own input distribution before deployment. Public CTI-Bench performance does not perfectly transfer to internal advisory feeds, vendor-proprietary CWE taxonomies, or non-English content.
  4. Monitor for drift. As new CVE / CWE patterns emerge, periodically re-evaluate; consider supplementing with retrieval over a current vulnerability knowledge base for time-sensitive queries.
  5. Apply standard prompt-injection mitigations when wrapping the model in agentic workflows that accept external content (advisory feeds, scraped pages); domain-SFT does not confer prompt-injection resistance.

Companion Model

Gemma4Defense-2B is a sister release fine-tuned with the same training corpus and hyperparameters, on the Gemma-4-E2B-it base. The two models converge to within 0.9 points on CTI-RCM (0.6664 Qwen vs 0.6754 Gemma, 5-trial mean) — the same recipe produces equivalent task performance across two distinct model families. The Gemma variant is licensed under the Gemma Terms of Use; CyberSecQwen-4B (Apache 2.0) is appropriate for use cases where Gemma terms are not a fit.

Citation

If you use this model, please cite:

@misc{cybersecqwen2026,
  title  = {CyberSecQwen-4B: A Compact CTI Specialist Fine-Tuned from Qwen3-4B-Instruct-2507 on AMD MI300X},
  author = {Mulia, Samuel},
  year   = {2026},
  publisher = {Hugging Face},
  url    = {https://huggingface.co/athena129/CyberSecQwen-4B}
}

The evaluation protocol is from:

@article{foundation-sec-8b,
  title   = {Foundation-Sec-8B: A Cybersecurity-Specialized Language Model},
  author  = {Cisco Foundation AI},
  journal = {arXiv preprint arXiv:2504.21039},
  year    = {2025},
  url     = {https://arxiv.org/abs/2504.21039}
}

The benchmark is from:

@misc{cti-bench,
  title  = {CTI-Bench: A Benchmark Suite for Cybersecurity LLMs},
  author = {Alam, Md Tanvirul and Bhusal, Dipkamal and Park, Youngja and Rastogi, Nidhi},
  year   = {2024},
  url    = {https://github.com/xashru/cti-bench}
}