Model Card for Qwen2.5-7B-Instruct-eventimpact
Model Description
Starting from the Qwen2.5-7B-Instruct model, the Qwen2.5-7B-Instruct-eventimpact language model is fine-tuned on a 6k-example training set to detect whether a text indicates that a firm reported an impact from a specific extreme weather event (see the prompt template below).
We follow the unsloth fine-tuning setup in this work. The model is fine-tuned with the prompt template given below.
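For orientation, an unsloth LoRA fine-tuning setup can be sketched as follows. This is an illustrative configuration only: the LoRA rank, learning rate, and other hyperparameters are assumptions for the sketch, not the values used to train this model, and the training data is represented by a placeholder.

```python
# Illustrative unsloth LoRA fine-tuning sketch; all hyperparameters are
# assumptions for this example, not the settings used for the released model.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-7B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # LoRA rank (assumed)
    lora_alpha=16,   # (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,  # ~6k prompt/answer pairs in chat format (not shown)
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```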
How to Get Started With the Model
You can use the model in the following way:
from vllm import LLM, SamplingParams
from unsloth.chat_templates import get_chat_template
from transformers import AutoTokenizer
# load model
model_name = "extreme-weather-impacts/Qwen2.5-7B-Instruct-eventimpact"
llm = LLM(model=model_name)
# load tokenizer with the correct chat template
tokenizer = AutoTokenizer.from_pretrained(model_name)  # tokenizer inherited from the Qwen2.5 base model
tokenizer = get_chat_template(tokenizer, chat_template="qwen-2.5")
# prompt template
prompt_template_eventimpact = """You are given a TEXT of a company disclosure, along with an EVENT and its EVENT DESCRIPTION. Your task is to determine whether the company was exposed to the specific EVENT.
EVENT: "{event}"
EVENT DESCRIPTION: "{event_description}"
Here is the TEXT from the company’s disclosure:
[begin of TEXT]
{text}
[end of TEXT]
Answer the following questions strictly with "Yes" or "No":
- Based on the TEXT, was the company exposed to the specific EVENT "{event}"?
Decision Guidelines:
- A company is considered "exposed" only if:
1. It was directly impacted by the specific EVENT. Thus, the event is mentioned or directly implied.
2. The impact happened in the past and is explicitly linked to the company.
3. The impact was caused by a clear extreme weather event, not ordinary weather conditions.
- Forward-looking statements, potential future impacts, or potential risks do NOT count as "exposed".
- Merely stating a geographic location does NOT count as "exposed".
- Merely stating a generic or specific list of extreme weather events does NOT count as "exposed".
- TEXTs that are not full sentences do NOT count as "exposed".
Output Format:
Only respond by strictly giving a "Yes" or "No".
Your Output:
"""
# some example texts
event_name = "North Dakota South Dakota and Montana Drought (Spring-Fall 2017)"
event_description = "Extreme drought causes extensive impacts to agriculture in North Dakota, South Dakota and Montana. Field crops including wheat were severely damaged and the lack of feed for cattle forced ranchers to sell off livestock. This drought has also contributed to the increased potential for severe wildfires."
text_1 = "The most severe forward-looking risks for our firm are hurricanes and wildfires."
text_2 = "Last year, a large freeze in Texas resulted in the closure of our production facilities."
text_3 = "In the summer of 2017, our company was affected by the extreme drought conditions in Montana that resulted in lower than usual water supply."
texts = [text_1, text_2, text_3]
prompt_1 = prompt_template_eventimpact.format(event=event_name, event_description=event_description, text=text_1)
prompt_2 = prompt_template_eventimpact.format(event=event_name, event_description=event_description, text=text_2)
prompt_3 = prompt_template_eventimpact.format(event=event_name, event_description=event_description, text=text_3)
# demo prompts
raw_prompts = [
    [{'role': 'user', 'content': prompt_1}],
    [{'role': 'user', 'content': prompt_2}],
    [{'role': 'user', 'content': prompt_3}],
]
# apply the correct chat template formatting
formatted_prompts = [
    tokenizer.apply_chat_template(convo, tokenize=False, add_generation_prompt=True)
    for convo in raw_prompts
]
# set sampling parameters
sampling_params = SamplingParams(temperature=0.01, min_p=0.1)
# run inference
outputs = llm.generate(formatted_prompts, sampling_params)
# print outputs
answers = []
print(f"Event under investigation: {event_name}")
for i, output in enumerate(outputs):
    generated_text = output.outputs[0].text
    answers.append(generated_text)
    print(f"Text under investigation: {texts[i]!r}\nGenerated Answer (Event Impact?): {generated_text!r}\n")
More details can be found in the paper:
@article{Schimanski25extremeweatherimpacts,
title={{What Firms Actually Lose (and Gain) from Extreme Weather Event Impacts}},
author={Tobias Schimanski and Glen Gostlow and Malte Toetzke and Markus Leippold},
year={2025},
journal={Soon available on SSRN},
}