How it works: MicroGuard fine-tunes small language models with LoRA on 127K+ faithfulness-labeled examples from RAGBench, RAGTruth, and HaluBench. At inference, constrained decoding compares the logits of the FAITHFUL and UNFAITHFUL label tokens, yielding a deterministic binary classification: the model can only ever emit one of the two labels, so there are no malformed or free-form outputs.
Models: Qwen-0.5B | SmolLM-135M | TinyLlama-1.1B | Gemma-270M | Gemma-1B
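The constrained-decoding step can be sketched as follows. This is a minimal illustration, not MicroGuard's actual implementation: the function name, token ids, and toy logits below are all assumptions. The core idea is that only the two label-token logits are ever read, so classification is deterministic and no other output is possible.

```python
def classify_faithfulness(next_token_logits, faithful_id, unfaithful_id):
    """Pick a label by comparing the logits of the two label tokens.

    Because only these two logits are read, the classifier can emit
    nothing but FAITHFUL or UNFAITHFUL -- no free-form generation,
    hence no garbage outputs.
    """
    score_f = next_token_logits[faithful_id]
    score_u = next_token_logits[unfaithful_id]
    return "FAITHFUL" if score_f >= score_u else "UNFAITHFUL"

# Toy demonstration with made-up logits over a tiny vocabulary
# (illustrative values; a real run would take the model's next-token
# logits and the tokenizer's ids for the two label tokens).
logits = [0.1, 2.7, -1.3, 0.9]
FAITHFUL_ID, UNFAITHFUL_ID = 1, 3
print(classify_faithfulness(logits, FAITHFUL_ID, UNFAITHFUL_ID))  # FAITHFUL
```

In a real pipeline the same comparison would run on the logits from a forward pass over the prompt plus retrieved context, with the label-token ids looked up once from the tokenizer.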