# BuildwellAI Qwen3-14B
Fine-tuned Qwen3-14B model for UK building regulations and construction calculations.
## Model Description
This model has been fine-tuned on 50,885 examples covering:
- UK Building Regulations (Parts A-O)
- Thermal bridging and PSI values (SAP 10.2)
- U-value calculations
- Fire safety (Part B, BS 9991)
- Water efficiency (Part G)
- BREEAM, Passivhaus, WELL standards
- Structural calculations (Eurocodes)
## Tool Calling
The model is trained to call MCP (Model Context Protocol) tools for calculations:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Choukrijer/buildwellai-qwen3-14b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Choukrijer/buildwellai-qwen3-14b")

messages = [
    {"role": "system", "content": "You are BuildwellAI, expert in UK building regulations."},
    {"role": "user", "content": "What is the PSI value for thermal bridge junction E1?"},
]

# JSON-schema tool definition passed to the chat template
tools = [
    {"type": "function", "function": {
        "name": "get_psi_value",
        "description": "Get PSI value",
        "parameters": {"type": "object",
                       "properties": {"junction_code": {"type": "string"}},
                       "required": ["junction_code"]}}}
]

# Render the prompt with the tool schema, then generate
text = tokenizer.apply_chat_template(messages, tools=tools, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(output[0]))
```
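Once the model emits a tool call, your code is responsible for executing it and feeding the result back. The Qwen3 chat template wraps tool calls in `<tool_call>` tags containing a JSON payload; the helper below is a minimal sketch of extracting them (the tag format is an assumption based on the Qwen3 template, so verify it against your tokenizer's actual output):

```python
import json
import re

def extract_tool_calls(generated_text: str) -> list:
    """Pull JSON tool-call payloads out of <tool_call>...</tool_call> tags."""
    pattern = r"<tool_call>\s*(\{.*?\})\s*</tool_call>"
    return [json.loads(m) for m in re.findall(pattern, generated_text, re.DOTALL)]

sample = '<tool_call>\n{"name": "get_psi_value", "arguments": {"junction_code": "E1"}}\n</tool_call>'
calls = extract_tool_calls(sample)
# calls == [{"name": "get_psi_value", "arguments": {"junction_code": "E1"}}]
```

After executing the call, append the result as a `{"role": "tool", "content": ...}` message and run generation again so the model can produce its final answer.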
## Available MCP Tools
The model can call tools exposed by 42 specialized MCP servers (477 tools in total):
| Category | MCPs |
|---|---|
| Thermal Performance | psi-thermal-bridge, thermal-break, condensation-glaser, wufi-hygrothermal |
| Energy Assessment | sap10, sbem, air-permeability, passivhaus |
| Building Regulations | structural-part-a, water-efficiency-part-g, overheating-part-o, ventilation-part-f |
| Fire Safety | fire-safety, smoke-ventilation, evacuation, cfd-fire-smoke |
| Sustainability | breeam, well, embodied-carbon, lca, biodiversity-net-gain |
| Daylighting | daylight-factor, adf-modelling, sunlight-overshadowing |
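To make the tool-calling loop concrete, the sketch below dispatches a parsed tool call to a local handler. This is hypothetical: the real MCP servers run as separate processes, and the PSI value shown is illustrative only, not a real SAP 10.2 figure.

```python
# Hypothetical local handler standing in for the psi-thermal-bridge MCP
# server. The PSI value is illustrative only, not a real SAP 10.2 figure.
PSI_TABLE = {"E1": 0.05}

def get_psi_value(junction_code: str) -> dict:
    return {"junction_code": junction_code, "psi_w_per_mk": PSI_TABLE.get(junction_code)}

HANDLERS = {"get_psi_value": get_psi_value}

def dispatch(tool_call: dict) -> dict:
    """Route a parsed tool call to its registered handler."""
    return HANDLERS[tool_call["name"]](**tool_call["arguments"])

result = dispatch({"name": "get_psi_value", "arguments": {"junction_code": "E1"}})
# result == {"junction_code": "E1", "psi_w_per_mk": 0.05}
```

In a production setup the dispatch step would instead forward the call to the appropriate MCP server over the Model Context Protocol.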
## Training Details
- Base model: Qwen/Qwen3-14B
- Training examples: 50,885
- Method: LoRA fine-tuning (rank 64, alpha 128)
- Hardware: NVIDIA H200 SXM
- Precision: BF16
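The hyperparameters above can be sketched as a `peft` `LoraConfig`. Rank and alpha come from the card; the target modules and dropout are assumptions (typical choices for Qwen-family models), not confirmed training settings.

```python
from peft import LoraConfig

# Sketch of a LoRA config matching the stated hyperparameters (rank 64,
# alpha 128). target_modules and lora_dropout are assumptions, not values
# confirmed by this card.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```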
## Quantization
For deployment on smaller GPUs, use 4-bit quantization:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "Choukrijer/buildwellai-qwen3-14b",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
```
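For better quality at the same memory footprint, a more fully specified 4-bit config is a common choice; the settings below (NF4 with double quantization and a BF16 compute dtype, matching the training precision) are a suggested sketch, not a configuration validated for this model.

```python
import torch
from transformers import BitsAndBytesConfig

# Sketch of a tuned 4-bit config: NF4 quantization, double quantization,
# and BF16 compute to match the model's BF16 training precision. These
# values are suggestions, not settings validated for this model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```

Pass `bnb_config` as `quantization_config` in `from_pretrained` exactly as in the snippet above.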
## License
Apache 2.0 (same as base model)