---
tags:
- finance
---

# Adapting Large Language Models to Domains (ICLR 2024)

This repo contains the domain-specific base model developed from **LLaMA-1-13B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).

We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
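
To make the transformation concrete, here is a minimal, hypothetical sketch of the idea; the released pipeline (see the code link in the updates below) instead mines several task types, such as summarization and word-to-text, from each passage with pattern-based templates:

```python
# Illustrative sketch only -- NOT the released AdaptLLM pipeline.
def to_reading_comprehension(passage: str) -> str:
    """Append simple comprehension tasks so a raw passage reads like an exercise."""
    tasks = [
        "Question: What is a one-sentence summary of the above text?",
        "Question: Which domain-specific terms appear in the above text?",
    ]
    # The transformed text is then used directly for continued pre-training.
    return passage + "\n\n" + "\n\n".join(tasks)

print(to_reading_comprehension(
    "Basel III raises the minimum capital requirements that banks must hold."
))
```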
### [2024/6/21] 🤗 We release the 2nd version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain), effective for both pre-training from scratch and continual pre-training 🤗

**************************** **Updates** ****************************

* 2024/8/29: Updated [guidelines](https://huggingface.co/datasets/AdaptLLM/finance-tasks) on evaluating any 🤗Huggingface models on the domain-specific tasks
* 2024/6/22: Released the [benchmarking code](https://github.com/microsoft/LMOps/tree/main/adaptllm)
* 2024/6/21: Released the 2nd version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain)
* 2024/4/2: Released the [raw data splits (train and test)](https://huggingface.co/datasets/AdaptLLM/ConvFinQA) of all the evaluation datasets
* 2024/1/16: Our [research paper](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024
* 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B
* 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B
* 2023/9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B
## 1. Domain-Specific Models

### LLaMA-1-7B
In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available on Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM). The performances of AdaptLLM compared to other domain-specific LLMs are:
### LLaMA-1-13B
Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B).
### LLaMA-2-Chat

Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension texts can perfectly fit this data format** when transformed into multi-turn conversations. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat).

For example, to chat with the finance model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The loading and prompt-construction lines are collapsed in this diff;
# the setup below is a minimal assumed version of them.
model = AutoModelForCausalLM.from_pretrained("AdaptLLM/finance-chat")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/finance-chat")

# Placeholder question, wrapped in the LLaMA-2-Chat format linked above.
prompt = "[INST] What is the P/E ratio of a company? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

outputs = model.generate(input_ids=inputs, max_length=2048)[0]

answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)

print(pred)
```
### LLaMA-3-8B (💡New!)

In our recent research on [Instruction-Pretrain](https://huggingface.co/papers/2406.14491), we developed a context-based instruction synthesizer to augment the raw corpora with instruction-response pairs, **enabling Llama3-8B to be comparable to or even outperform Llama3-70B**: [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B), [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B).
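
The snippet below is a minimal sketch of loading one of these models with the standard 🤗 Transformers API; the question and generation settings are illustrative placeholders, and each model card documents the recommended prompting setup:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Standard Transformers loading; the prompt is a placeholder question.
tokenizer = AutoTokenizer.from_pretrained("instruction-pretrain/finance-Llama3-8B")
model = AutoModelForCausalLM.from_pretrained("instruction-pretrain/finance-Llama3-8B")

inputs = tokenizer("What does an inverted yield curve typically signal?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```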
## 2. Domain-Specific Tasks

### Pre-templatized Testing Splits

To easily reproduce our prompting results, we have uploaded the filled-in zero/few-shot input instructions and output completions for the test set of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).

**Note:** these filled-in instructions are specifically tailored for models before alignment and do NOT fit the data format required for chat models.
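
For example, a testing split can be loaded with 🤗 Datasets as sketched below; the config name is an assumption for illustration, so check each dataset page for the exact subset and split names:

```python
from datasets import load_dataset

# "ConvFinQA" is an assumed config name for illustration; see the
# finance-tasks dataset page for the actual available configs.
ds = load_dataset("AdaptLLM/finance-tasks", "ConvFinQA", split="test")
print(ds[0])
```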
### Evaluating Any Huggingface LMs on Domain-Specific Tasks (💡New!)

You can use the following scripts to reproduce our results and evaluate any other Huggingface models on the testing splits:

1). **Set Up Dependencies**
```bash
git clone https://github.com/microsoft/LMOps
cd LMOps/adaptllm
pip install -r requirements.txt
```
2). **Evaluate the Model**
```bash
# Select the domain from ['biomedicine', 'finance', 'law']
DOMAIN='finance'

# Specify any Huggingface model name (not applicable to chat models)
MODEL='AdaptLLM/finance-LLM-13B'

# Model parallelization:
# - Set MODEL_PARALLEL=False if the model fits on a single GPU.
#   We observe that LMs smaller than 10B always meet this requirement.
# - Set MODEL_PARALLEL=True if the model is too large and encounters OOM on a single GPU.
MODEL_PARALLEL=True

# Choose the number of GPUs from [1, 2, 4, 8]
N_GPU=2

# Whether to add a BOS token at the beginning of the prompt input:
# - Set to False for AdaptLLM.
# - Set to True for instruction-pretrain models.
# If unsure, we recommend setting it to False, as this is suitable for most LMs.
add_bos_token=False

# Run the evaluation script
bash scripts/inference.sh ${DOMAIN} ${MODEL} ${add_bos_token} ${MODEL_PARALLEL} ${N_GPU}
```
### Raw Datasets

We have also uploaded the raw training and testing splits to facilitate fine-tuning or other uses: [ChemProt](https://huggingface.co/datasets/AdaptLLM/ChemProt), [RCT](https://huggingface.co/datasets/AdaptLLM/RCT), [ConvFinQA](https://huggingface.co/datasets/AdaptLLM/ConvFinQA), [FiQA_SA](https://huggingface.co/datasets/AdaptLLM/FiQA_SA), [Headline](https://huggingface.co/datasets/AdaptLLM/Headline), [NER](https://huggingface.co/datasets/AdaptLLM/NER), [FPB](https://huggingface.co/datasets/AdaptLLM/FPB).
### Domain Knowledge Probing

Our pre-processed knowledge probing datasets are available at [med_knowledge_prob](https://huggingface.co/datasets/AdaptLLM/med_knowledge_prob) and [law_knowledge_prob](https://huggingface.co/datasets/AdaptLLM/law_knowledge_prob).

## Citation

If you find our work helpful, please cite us: