tier,model,f1,precision,recall
Overall,GPT4o,67.41,80.59,57.93
FactBench,GPT4o,80.93,85.11,77.13
Reddit,GPT4o,42.76,74.04,30.06
Overall,Claude 3.5-Sonnet,63.65,78.37,53.58
FactBench,Claude 3.5-Sonnet,75.68,83.28,69.35
Reddit,Claude 3.5-Sonnet,42.90,71.25,30.69
Overall,Gemini 1.5-Flash,64.10,80.72,53.16
FactBench,Gemini 1.5-Flash,77.38,85.45,70.71
Reddit,Gemini 1.5-Flash,40.26,73.87,27.67
Overall,Llama3.1-8B,48.62,60.91,40.46
FactBench,Llama3.1-8B,60.71,68.87,54.28
Reddit,Llama3.1-8B,28.86,49.36,20.39
Overall,Llama3.1-70B,55.12,68.09,46.30
FactBench,Llama3.1-70B,65.83,76.05,58.00
Reddit,Llama3.1-70B,38.61,56.54,29.31
Overall,Llama3.1-405B,60.61,72.80,51.92
FactBench,Llama3.1-405B,73.23,78.80,68.40
Reddit,Llama3.1-405B,38.98,64.10,28.00
Overall,Qwen2.5-8B,55.78,72.45,45.34
FactBench,Qwen2.5-8B,69.23,77.18,58.66
Reddit,Qwen2.5-8B,37.25,65.58,26.01
Overall,Qwen2.5-32B,60.00,77.79,47.52
FactBench,Qwen2.5-32B,71.31,82.74,62.77
Reddit,Qwen2.5-32B,37.34,70.60,25.38