Update README.md
README.md CHANGED

@@ -121,16 +121,6 @@ You can automatically apply it using the dedicated [`.apply_chat_template()`](ht

RAG systems enable AI solutions to include new, up-to-date, and potentially proprietary information in LLM responses that was not present in the training data. When a user asks a question, the retrieval component locates and delivers related documents from a knowledge base, and then the RAG generator model answers the question based on facts from those contextual documents.

-## 📈 Performance
-
-We evaluated the model across 3 metrics using LLMs as a judge, comparing against 4 similarly sized open-source models:
-
-- **Groundedness**: Do the model’s responses consist entirely of information from the provided contextual documents and avoid hallucinations?
-- **Relevance**: Does the model answer the user’s question concisely? Does all of the response content contribute to the final answer, without unnecessary fluff?
-- **Helpfulness**: Overall, how well did the model assist with the user’s query?
-
-LFM2-1.2B-RAG achieves competitive performance across all 3 metrics compared to Qwen3-1.7B, Gemma3-1B-IT, Llama-3.2-1B-Instruct, and Pleias-1B-RAG.
-
## 🏃 How to run

- Hugging Face: [LFM2-1.2B](https://huggingface.co/LiquidAI/LFM2-1.2B)
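
The retained paragraph above describes the usual RAG flow: retrieve documents for a query, then have the generator answer from them. A minimal sketch of that flow is below; `retrieve()`, the placeholder corpus, and the message layout are illustrative assumptions, not part of the LFM2-1.2B-RAG prompt template.

```python
# Illustrative sketch of the RAG flow described above: fetch documents for a
# query, then ask the generator to answer using only those documents.
# `retrieve()` and the message layout are placeholders, not the model's template.

def retrieve(query: str, k: int = 3) -> list[str]:
    # Stand-in for a real knowledge-base lookup (vector store, BM25, ...).
    corpus = [
        "LFM2-1.2B-RAG is a generator model for retrieval-augmented generation.",
        "RAG systems answer questions from documents fetched at query time.",
    ]
    return corpus[:k]

def build_rag_messages(query: str) -> list[dict]:
    docs = retrieve(query)
    context = "\n\n".join(f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(docs))
    # Ask the generator to ground its answer in the retrieved documents only.
    return [
        {"role": "system", "content": "Answer using only the provided documents."},
        {"role": "user", "content": f"{context}\n\nQuestion: {query}"},
    ]

messages = build_rag_messages("What is LFM2-1.2B-RAG used for?")
```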
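
The "🏃 How to run" section kept by this change points to the Hugging Face checkpoint, and the hunk header references `.apply_chat_template()`. A minimal sketch of loading that checkpoint with `transformers` is shown below; the generation arguments and example prompt are arbitrary choices, not values from the README.

```python
# Minimal sketch: load the checkpoint linked above and generate a reply.
# Swap in the RAG variant's model ID if that is the model you want to run;
# a recent transformers release may be required for LFM2 support.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"  # checkpoint linked in the README
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "What does RAG stand for?"}]
# apply_chat_template() formats the conversation the way the model expects.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```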