Dataset schema (one record per paper; field types and value statistics from the dataset viewer):
- title: string (lengths 21-128)
- content_TLDR: string (lengths 40-250)
- abstract: string (lengths 613-2.09k)
- authors: list (lengths 1-42)
- openreview_url: string (length 42)
- id: string (length 10)
- forum: string (length 10)
- authorids: list (lengths 1-42)
- venue: dict
- venueid: dict
- pdf_url: dict
- invitation: string (1 distinct value)
- group: string (1 distinct value)
- venue_name: string (1 distinct value)
- year: int64 (constant; every record is 2025)
- conference: string (1 distinct value)
- content_keywords: list (lengths 1-16)
- content_code_of_ethics: string (1 distinct value)
- content_author_guide: string (1 distinct value)
- content_flagged_for_ethics_review: bool (1 class)
- content_ethics_comments: string (11 distinct values)
- content__bibtex: string (lengths 246-1.01k)
- content_paperhash: string (lengths 29-134)
- content_supplementary_material: string (73 distinct values)
- content_award_nomination: bool (1 class)
- content_reciprocal_reviewing_status: string (1 distinct value)
- content_reciprocal_reviewing_author: string (4 distinct values)
- content_reciprocal_reviewing_exemption_reason: dict
The records below list each paper's field values line by line, in approximately this order.
HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Interactive AI Agents
We built a sandbox simulation system to evaluate AI agent safety issues in a multi-turn setting.
To address the growing safety risks as AI agents become increasingly autonomous in their interactions with human users and environments, we present HAICOSYSTEM, a framework examining AI agent safety within diverse and complex social interactions. HAICOSYSTEM features a modular sandbox environment that simulates multi-turn interactions between users and AI agents. We then develop a comprehensive multi-dimensional evaluation framework that uses metrics covering operational, content-related, societal, and legal risks to examine the safety of AI agents in these interactions. Through running over 8K simulations based on 132 scenarios across seven domains (e.g., healthcare, finance, education), we show that state-of-the-art LLMs exhibit safety risks in 62% of cases, particularly during tool use with malicious users, highlighting the importance of evaluating and addressing AI agent safety in dynamic human-AI-environment interactions.
[ "Xuhui Zhou", "Hyunwoo Kim", "Faeze Brahman", "Liwei Jiang", "Hao Zhu", "Ximing Lu", "Frank F. Xu", "Bill Yuchen Lin", "Yejin Choi", "Niloofar Mireshghallah", "Ronan Le Bras", "Maarten Sap" ]
https://openreview.net/forum?id=KI1WQ6rLiy
KI1WQ6rLiy
KI1WQ6rLiy
[ "~Xuhui_Zhou1", "~Hyunwoo_Kim3", "~Faeze_Brahman1", "~Liwei_Jiang2", "~Hao_Zhu1", "~Ximing_Lu1", "~Frank_F._Xu1", "~Bill_Yuchen_Lin2", "~Yejin_Choi1", "~Niloofar_Mireshghallah1", "~Ronan_Le_Bras1", "~Maarten_Sap1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/23bd54b97bfa3b19000dcf8c3221f23673dad47d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "AI Safety", "Multi-Agent Systems", "Human-AI Interaction", "Social Simulation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhou2025haicosystem, title={{HAICOSYSTEM}: An Ecosystem for Sandboxing Safety Risks in Interactive {AI} Agents}, author={Xuhui Zhou and Hyunwoo Kim and Faeze Brahman and Liwei Jiang and Hao Zhu and Ximing Lu and Frank F. Xu and Bill Yuchen Lin and Yejin Choi and Niloofar Mireshghallah and Ronan Le Bras and Maarten Sap}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=KI1WQ6rLiy} }
zhou|haicosystem_an_ecosystem_for_sandboxing_safety_risks_in_interactive_ai_agents
/attachment/03da59a388eb322b34c88124a41ad0962f77ba22.zip
null
null
null
null
Towards Compute-Optimal Many-Shot In-Context Learning
We propose two straightforward and effective strategies for selecting demonstrations in many-shot in-context learning that balance performance and inference cost.
Long-context large language models (LLMs) are able to process inputs containing up to several million tokens. In the scope of in-context learning (ICL), this translates into using hundreds/thousands of demonstrations in the input prompt, enabling many-shot ICL. In practice, a fixed set of demonstrations is often selected at random in many-shot settings due to (1) high inference costs, (2) the benefits of caching and reusing computations, and (3) the similar performance offered by this strategy compared to others when scaled. In this work, we propose two straightforward strategies for demonstration selection in many-shot ICL that improve performance with minimal computational overhead. Our first method combines a small number of demonstrations, selected based on their similarity to each test sample, with a disproportionately larger set of random demonstrations that are cached. The second strategy improves the first by replacing random demonstrations with those selected using centroids derived from test sample representations via k-means clustering. Our experiments with Gemini Pro and Flash across several datasets indicate that our strategies consistently outperform random selection and surpass or match the most performant selection approach while supporting caching and reducing inference cost by up to an order of magnitude. We also show that adjusting the proportion of demonstrations selected based on different criteria can balance performance and inference cost in many-shot ICL.
[ "Shahriar Golchin", "Yanfei Chen", "Rujun Han", "Manan Gandhi", "Tianli Yu", "Swaroop Mishra", "Mihai Surdeanu", "Rishabh Agarwal", "Chen-Yu Lee", "Tomas Pfister" ]
https://openreview.net/forum?id=K7kwRv5mj1
K7kwRv5mj1
K7kwRv5mj1
[ "~Shahriar_Golchin1", "~Yanfei_Chen1", "~Rujun_Han1", "~Manan_Gandhi2", "~Tianli_Yu1", "~Swaroop_Mishra1", "~Mihai_Surdeanu1", "~Rishabh_Agarwal2", "~Chen-Yu_Lee2", "~Tomas_Pfister1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/fa78c25069cc8d78f937b406035c02be1f2e7a73.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Many-Shot In-Context Learning", "In-Context Learning", "Large Language Models", "Compute-Optimal In-Context Learning", "Demonstration Selection" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ golchin2025towards, title={Towards Compute-Optimal Many-Shot In-Context Learning}, author={Shahriar Golchin and Yanfei Chen and Rujun Han and Manan Gandhi and Tianli Yu and Swaroop Mishra and Mihai Surdeanu and Rishabh Agarwal and Chen-Yu Lee and Tomas Pfister}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=K7kwRv5mj1} }
golchin|towards_computeoptimal_manyshot_incontext_learning
null
null
null
null
null
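Below is a minimal Python sketch of the first demonstration-selection strategy described in the "Towards Compute-Optimal Many-Shot In-Context Learning" abstract above: a large, fixed block of randomly chosen demonstrations (reusable and cacheable across test samples) combined with a small, per-sample block selected by similarity. All names, the cosine-similarity scoring, and the cached-block-first ordering are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cosine_similarities(query_emb, demo_embs):
    """Cosine similarity between one query embedding and all demonstration embeddings."""
    q = query_emb / (np.linalg.norm(query_emb) + 1e-9)
    d = demo_embs / (np.linalg.norm(demo_embs, axis=1, keepdims=True) + 1e-9)
    return d @ q

def build_many_shot_prompt(demos, demo_embs, test_emb, n_similar=8, n_random=256, seed=0):
    """Combine a fixed random block (shared across test samples, so its prefix KV states
    can be cached) with a small per-sample block of the most similar demonstrations."""
    rng = np.random.default_rng(seed)                  # fixed seed -> same random block every call
    random_idx = rng.choice(len(demos), size=n_random, replace=False)
    sims = cosine_similarities(test_emb, demo_embs)
    sims[random_idx] = -np.inf                         # do not duplicate demos already in the cached block
    similar_idx = np.argsort(-sims)[:n_similar]
    # Shared block first; the small per-sample block and the test query follow it.
    ordered = list(random_idx) + list(similar_idx)
    return "\n\n".join(demos[i] for i in ordered)
```

Per the abstract, the second strategy would replace the random block with demonstrations chosen via k-means centroids of the test-sample representations; that swap would only change how `random_idx` is computed in this sketch.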
Mitigating Modal Imbalance in Multimodal Reasoning
Attention imbalance makes foundation models struggle with cross-modal contexts.
Foundation models (FMs) deployed in real-world tasks such as computer-use agents must integrate diverse modalities. How good are FMs at performing *joint reasoning*, simultaneously reasoning over multiple modalities, especially when the modalities interact and relate to each other to form *cross-modal context*? To better understand this problem, we study FMs on *cross-modal conflicts*: scenarios where conflicting evidence is presented across modalities. This allows us to examine whether FMs prioritize one modality over another or reason jointly to reconcile the conflict. Our experiments reveal that FMs can recognize conflicts in *unimodal contexts*, composed of a single modality, 90% of the time, but the ratio falls as low as 3% when evidence is split across modalities -- similar observations hold in *cross-lingual contexts*, composed of multiple languages. We trace this failure to *cross-modal attention imbalance*, showing that FMs exhibit extreme asymmetry in attention scores, disproportionately prioritizing certain modalities. We show that cross-modal attention imbalance does not go away by simply scaling up multimodal or multilingual datasets blindly, since they lack training examples that explicitly require cross-modal reasoning. We demonstrate that even a simple and scalable method of explicitly combining multiple modalities within each training instance significantly reduces attention imbalance. Reduced attention imbalance directly translates to improved downstream performance on several vision-language benchmarks. Our findings underscore the importance of systematically addressing cross-modal contexts to build reliable foundation models.
[ "Chen Henry Wu", "Neil Kale", "Aditi Raghunathan" ]
https://openreview.net/forum?id=JsaXxGOXfU
JsaXxGOXfU
JsaXxGOXfU
[ "~Chen_Henry_Wu1", "~Neil_Kale1", "~Aditi_Raghunathan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ee73bf3ab505b455c704652ddb9d77fcec6a2c27.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "multilingual language models", "multimodal language models", "cross-modal reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wu2025mitigating, title={Mitigating Modal Imbalance in Multimodal Reasoning}, author={Chen Henry Wu and Neil Kale and Aditi Raghunathan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=JsaXxGOXfU} }
wu|mitigating_modal_imbalance_in_multimodal_reasoning
null
null
null
null
null
Understanding Layer Significance in LLM Alignment
We propose an algorithm to identify which layers within LLMs are most critical to the alignment process.
Aligning large language models (LLMs) through supervised fine-tuning is essential for tailoring them to specific applications. Recent studies suggest that alignment primarily adjusts a model's presentation style rather than its foundational knowledge, indicating that only certain components of the model are significantly impacted. To uncover how alignment affects model behavior at a granular level, we propose identifying which layers within LLMs are most critical to the alignment process. Our approach, named ILA, involves learning a binary mask for the parameter changes in each layer during alignment, as an indicator of layer significance. Experimental results reveal that, despite substantial differences in alignment datasets, the important layers of a model identified by ILA exhibit nearly 90\% overlap, highlighting fundamental patterns in LLM alignment. The results also indicate that freezing non-essential layers improves overall model performance, while selectively tuning the most critical layers significantly enhances fine-tuning efficiency with minimal performance loss. Finally, we discuss how these findings extend from LLM alignment to reasoning. The source code is available at https://github.com/moukamisama/ILA.
[ "Guangyuan SHI", "ZEXIN LU", "Xiaoyu DONG", "Wenlong Zhang", "Xuanyu Zhang", "Yujie Feng", "Xiao-Ming Wu" ]
https://openreview.net/forum?id=JloZnCwhmk
JloZnCwhmk
JloZnCwhmk
[ "~Guangyuan_SHI1", "~ZEXIN_LU1", "~Xiaoyu_DONG4", "~Wenlong_Zhang3", "~Xuanyu_Zhang1", "~Yujie_Feng1", "~Xiao-Ming_Wu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3e050ab27a8a753359ad6b130d6b865e486c3acf.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Layer Significance", "Language Model Alignment" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shi2025understanding, title={Understanding Layer Significance in {LLM} Alignment}, author={Guangyuan SHI and ZEXIN LU and Xiaoyu DONG and Wenlong Zhang and Xuanyu Zhang and Yujie Feng and Xiao-Ming Wu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=JloZnCwhmk} }
shi|understanding_layer_significance_in_llm_alignment
/attachment/a2cf8e7304066a7a65b379b1ad9fc3b59cb017d6.zip
null
null
null
null
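As a rough illustration of the layer-masking idea in the "Understanding Layer Significance in LLM Alignment" abstract above, the sketch below gates each layer's fine-tuning delta with a learnable mask; layers whose gates stay near zero would be treated as unimportant to alignment. The sigmoid relaxation of the binary mask, the toy setup, and all names are assumptions made for this sketch, not the ILA algorithm verbatim.

```python
import torch

def gate_layer_deltas(base_layers, tuned_layers, mask_logits):
    """Apply each layer's alignment-induced parameter change only in proportion to a
    learnable gate; training the gates (e.g., with a sparsity penalty) would indicate
    which layers matter most for alignment."""
    gates = torch.sigmoid(mask_logits)                  # one scalar gate per layer
    return [base + g * (tuned - base)                   # gate ~ 0 -> that layer's change is ignored
            for base, tuned, g in zip(base_layers, tuned_layers, gates)]

# Toy usage: three "layers", each represented by a single weight tensor.
base = [torch.zeros(4, 4) for _ in range(3)]
tuned = [torch.ones(4, 4) for _ in range(3)]
mask_logits = torch.nn.Parameter(torch.zeros(3))        # learnable, one logit per layer
blended = gate_layer_deltas(base, tuned, mask_logits)
print([w.mean().item() for w in blended])               # 0.5 for each layer before any training
```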
CUPID: Evaluating Personalized and Contextualized Alignment of LLMs from Interactions
We introduce CUPID, a benchmark that evaluates LLMs' capability to infer a user's contextual preferences from prior user-LLM interactions.
Personalization of Large Language Models (LLMs) often assumes users hold static preferences that reflect globally in all tasks. In reality, humans hold dynamic preferences that change depending on the context. As users interact with an LLM in various contexts, they naturally reveal their contextual preferences, which a model must infer and apply in future contexts to ensure alignment. To assess this, we introduce 🏹 CUPID, a benchmark of 756 human-curated interaction session histories between users and LLM-based chat assistants. In each interaction session, the user provides a request in a specific context and expresses their preference through multi-turn feedback. Given a new user request and prior interaction sessions, our benchmark assesses whether LLMs can infer the preference relevant to this request and generate a response that satisfies this preference. With CUPID, we evaluated 10 open and proprietary LLMs, revealing that state-of-the-art LLMs struggle to infer preferences from multi-turn interactions and fail to discern what previous context is relevant to a new request—under 50% precision and 65% recall. Our work highlights the need to advance LLM capabilities for more contextually personalized interactions and proposes CUPID as a resource to drive these improvements.
[ "Tae Soo Kim", "Yoonjoo Lee", "Yoonah Park", "Jiho Kim", "Young-Ho Kim", "Juho Kim" ]
https://openreview.net/forum?id=JMxRn7orEk
JMxRn7orEk
JMxRn7orEk
[ "~Tae_Soo_Kim3", "~Yoonjoo_Lee1", "~Yoonah_Park1", "~Jiho_Kim3", "~Young-Ho_Kim1", "~Juho_Kim2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ef96ba79a82f36449f15acdc5f8e1cffa66af61c.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "Evaluation", "Benchmark", "Personalization", "Preferences", "Interactions" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kim2025cupid, title={{CUPID}: Evaluating Personalized and Contextualized Alignment of {LLM}s from Interactions}, author={Tae Soo Kim and Yoonjoo Lee and Yoonah Park and Jiho Kim and Young-Ho Kim and Juho Kim}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=JMxRn7orEk} }
kim|cupid_evaluating_personalized_and_contextualized_alignment_of_llms_from_interactions
null
null
null
null
null
Control the Temperature: Selective Sampling for Diverse and High-Quality LLM Outputs
We propose selective sampling, a method that dynamically switches between greedy and high-temperature sampling based on a sampling risk metric.
Diversity is essential for language models to generate creative outputs. Temperature-based sampling is a common strategy to increase diversity. However, for tasks that require high precision, e.g., mathematical reasoning, uncontrolled high-temperature sampling, e.g., with min-$p$ or top-$p$ truncation, lowers reasoning quality. We demonstrate that the loss of accuracy is caused by sampling incorrect continuations at sensitive positions when entropy is high. To address this, we propose selective sampling, a method that dynamically switches between greedy and high-temperature sampling based on a sampling risk metric. This risk metric estimates the likelihood of output errors when applying high-temperature sampling at the current token position. We train a lightweight classifier on a small subset of verifiable problems to predict sampling risk. The classifier can be integrated with the base language model with minimal latency overhead. Experiments on mathematical reasoning tasks show that selective sampling improves the quality-diversity trade-off, even under high-temperature settings.
[ "Sergey Troshin", "Wafaa Mohammed", "Yan Meng", "Christof Monz", "Antske Fokkens", "Vlad Niculae" ]
https://openreview.net/forum?id=IyOC5GCzv4
IyOC5GCzv4
IyOC5GCzv4
[ "~Sergey_Troshin1", "~Wafaa_Mohammed1", "~Yan_Meng3", "~Christof_Monz1", "~Antske_Fokkens1", "~Vlad_Niculae2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/596fb42d1b712d127a4d8dd86cbafa8a2b28da8e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Natural Language Processing", "Large Language Models", "Text Generation", "Sampling Methods", "Truncation Sampling", "Stochastic Sampling", "Min-p Sampling", "Top-p Sampling", "Temperature Sampling", "Decoding Methods", "LLMs reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ troshin2025control, title={Control the Temperature: Selective Sampling for Diverse and High-Quality {LLM} Outputs}, author={Sergey Troshin and Wafaa Mohammed and Yan Meng and Christof Monz and Antske Fokkens and Vlad Niculae}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=IyOC5GCzv4} }
troshin|control_the_temperature_selective_sampling_for_diverse_and_highquality_llm_outputs
null
null
null
null
null
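A minimal Python sketch of the decoding rule described in the "Control the Temperature" abstract above: stay greedy at positions a lightweight risk predictor flags as sensitive, and sample at high temperature elsewhere. The threshold value, the risk-score interface, and all names are assumptions; the paper's classifier and its integration with the base model are not reproduced here.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = (logits - logits.max()) / temperature
    e = np.exp(z)
    return e / e.sum()

def selective_sample(logits, risk_score, risk_threshold=0.5, temperature=1.5, rng=None):
    """Pick the next token greedily when the predicted sampling risk is high,
    otherwise sample with a high temperature to keep outputs diverse."""
    rng = rng or np.random.default_rng()
    if risk_score > risk_threshold:
        return int(np.argmax(logits))              # sensitive position: do not gamble
    probs = softmax(logits, temperature)           # low-risk position: sample for diversity
    return int(rng.choice(len(logits), p=probs))

# Toy usage with a 5-token vocabulary.
logits = np.array([2.0, 1.5, 0.3, -1.0, -2.0])
print(selective_sample(logits, risk_score=0.9))    # high risk -> argmax (token 0)
print(selective_sample(logits, risk_score=0.1))    # low risk  -> stochastic choice
```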
Training Large Language Models to Reason in a Continuous Latent Space
To explore the potential of LLM reasoning in an unrestricted latent space instead of using natural language, we introduce a new paradigm Coconut (Chain of Continuous Thought)
Large language models (LLMs) are restricted to reason in the "language space", where they typically express the reasoning process with a chain-of-thought (CoT) to solve a complex reasoning problem. However, we argue that language space may not always be optimal for reasoning. For example, most word tokens are primarily for textual coherence and not essential for reasoning, while some critical tokens require complex planning and pose huge challenges to LLMs. To explore the potential of LLM reasoning in an unrestricted latent space instead of using natural language, we introduce a new paradigm Coconut (Chain of Continuous Thought). We utilize the last hidden state of the LLM as a representation of the reasoning state (termed "continuous thought"). Rather than decoding this into a word token, we feed it back to the LLM as the subsequent input embedding directly in the continuous space. Experiments show that Coconut can effectively augment the LLM on several reasoning tasks. This novel latent reasoning paradigm leads to emergent advanced reasoning patterns: the continuous thought can encode multiple alternative next reasoning steps, allowing the model to perform a breadth-first search (BFS) to solve the problem, rather than prematurely committing to a single deterministic path like CoT. Coconut outperforms CoT in certain logical reasoning tasks that require substantial backtracking during planning, with fewer thinking tokens during inference. These findings demonstrate the promise of latent reasoning and offer valuable insights for future research.
[ "Shibo Hao", "Sainbayar Sukhbaatar", "DiJia Su", "Xian Li", "Zhiting Hu", "Jason E Weston", "Yuandong Tian" ]
https://openreview.net/forum?id=Itxz7S4Ip3
Itxz7S4Ip3
Itxz7S4Ip3
[ "~Shibo_Hao1", "~Sainbayar_Sukhbaatar1", "~DiJia_Su1", "~Xian_Li1", "~Zhiting_Hu3", "~Jason_E_Weston1", "~Yuandong_Tian1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a7eb656fca52784a118a61fee4aa58ee6ca18e9c.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "chain of thought", "reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hao2025training, title={Training Large Language Models to Reason in a Continuous Latent Space}, author={Shibo Hao and Sainbayar Sukhbaatar and DiJia Su and Xian Li and Zhiting Hu and Jason E Weston and Yuandong Tian}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Itxz7S4Ip3} }
hao|training_large_language_models_to_reason_in_a_continuous_latent_space
null
null
null
null
null
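The sketch below illustrates the core loop described in the "Training Large Language Models to Reason in a Continuous Latent Space" abstract above: instead of decoding intermediate reasoning tokens, the last hidden state is appended back into the input as the next embedding. The toy backbone (an unmasked PyTorch TransformerEncoder), the number of thoughts, and all names are illustrative assumptions, not the Coconut implementation.

```python
import torch
import torch.nn as nn

class ToyLM(nn.Module):
    """A deliberately tiny stand-in for an LLM, used only to show the latent loop."""
    def __init__(self, vocab_size=100, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def hidden_states(self, input_embeds):             # (B, T, d) -> (B, T, d)
        return self.backbone(input_embeds)

def continuous_thought_rollout(model, prompt_ids, num_thoughts=4):
    """Feed each step's last hidden state back in as a 'continuous thought'
    instead of decoding it to a word token; decode only the final answer token."""
    embeds = model.embed(prompt_ids)                    # (1, T, d)
    for _ in range(num_thoughts):
        h = model.hidden_states(embeds)
        thought = h[:, -1:, :]                          # last hidden state = the continuous thought
        embeds = torch.cat([embeds, thought], dim=1)    # fed back directly in embedding space
    final = model.hidden_states(embeds)[:, -1, :]
    return model.lm_head(final).argmax(dim=-1)          # greedy decode of the answer token

model = ToyLM()
print(continuous_thought_rollout(model, torch.tensor([[1, 2, 3]])))
```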
Scaling Analysis of Interleaved Speech-Text Language Models
We perform the first scaling analysis of interleaved SpeechLMs, showing that they scale more efficiently, and with different dynamics, than textless SpeechLMs.
Existing Speech Language Model (SLM) scaling analysis paints a bleak picture. It predicts that SLMs require much more compute and data compared to text, leading some to question the feasibility of training high-quality SLMs. However, modern SLMs are often initialised from pre-trained TextLMs using speech-text interleaving to allow knowledge transfer. This raises the question: "Do interleaved SLMs scale more efficiently than textless-SLMs?" In this paper, we answer with a resounding yes! We conduct scaling analysis of interleaved SLMs by training several dozen models and analysing the scaling trends. We see that under this setup SLMs scale more efficiently with compute. Additionally, our results indicate that the scaling dynamics differ significantly from textless-SLMs, suggesting one should allocate notably more of the compute budget to increasing model size over training tokens. We also study the role of synthetic data and TextLM model families in unlocking this potential. Results suggest that our scaled-up model achieves comparable semantic speech performance to leading models, while using less compute and data. We open source models, samples, and data - https://pages.cs.huji.ac.il/adiyoss-lab/sims/ .
[ "Gallil Maimon", "Michael Hassid", "Amit Roth", "Yossi Adi" ]
https://openreview.net/forum?id=IXwgE8hyJs
IXwgE8hyJs
IXwgE8hyJs
[ "~Gallil_Maimon1", "~Michael_Hassid1", "~Amit_Roth1", "~Yossi_Adi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7df4a2549b2310beca0a4440def2e57eace08a90.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Speech Language Models", "Scaling Analysis", "Speech-Text Interleaving" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ maimon2025scaling, title={Scaling Analysis of Interleaved Speech-Text Language Models}, author={Gallil Maimon and Michael Hassid and Amit Roth and Yossi Adi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=IXwgE8hyJs} }
maimon|scaling_analysis_of_interleaved_speechtext_language_models
null
null
null
null
null
Short-PHD: Detecting Short LLM-generated Text with Topological Data Analysis After Off-topic Content Insertion
We present Short-PHD, a zero-shot LLM-generated text detection method tailored for short texts.
The malicious usage of large language models (LLMs) has motivated the detection of LLM-generated texts. Previous work in topological data analysis shows that the persistent homology dimension (PHD) of text embeddings can serve as a more robust and promising score than other zero-shot methods. However, effectively detecting short LLM-generated texts remains a challenge. This paper presents Short-PHD, a zero-shot LLM-generated text detection method tailored for short texts. Short-PHD stabilizes the estimation of the previous PHD method for short texts by inserting off-topic content before the given input text and identifies LLM-generated text based on an established detection threshold. Experimental results on both public and generated datasets demonstrate that Short-PHD outperforms existing zero-shot methods in short LLM-generated text detection. The implementation codes of this study are available online.
[ "Dongjun Wei", "Minjia Mao", "Xiao Fang", "Michael Chau" ]
https://openreview.net/forum?id=IC2WwhUfQg
IC2WwhUfQg
IC2WwhUfQg
[ "~Dongjun_Wei1", "~Minjia_Mao1", "~Xiao_Fang5", "~Michael_Chau1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/977b43deb9d899ac8b9b8ca5a5d2b0f125a02b36.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "large language model", "zero-shot detection", "short text", "topological data analysis" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wei2025shortphd, title={Short-{PHD}: Detecting Short {LLM}-generated Text with Topological Data Analysis After Off-topic Content Insertion}, author={Dongjun Wei and Minjia Mao and Xiao Fang and Michael Chau}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=IC2WwhUfQg} }
wei|shortphd_detecting_short_llmgenerated_text_with_topological_data_analysis_after_offtopic_content_insertion
/attachment/ae45d7449848df03bc6c863802520e24986cd3af.zip
null
null
null
null
Hyperparameter Loss Surfaces Are Simple Near their Optima
We derive a theory describing the hyperparameter loss surface and yielding new statistical tools for understanding it.
Hyperparameters greatly impact models' capabilities; however, modern models are too large for extensive search. Instead, researchers design recipes that train well across scales based on their understanding of the hyperparameters. Despite this importance, few tools exist for understanding the hyperparameter loss surface. We discover novel structure in it and propose a new theory yielding such tools. The loss surface is complex, but as you approach the optimum simple structure emerges. It becomes characterized by a few basic features, like its effective dimension and the best possible loss. To uncover this *asymptotic regime*, we develop a novel technique based on random search. Within this regime, the best scores from random search take on a new distribution we discover. Its parameters are exactly the features defining the loss surface in the asymptotic regime. From these features, we derive a new asymptotic law for random search that can explain and extrapolate its convergence. These new tools enable new analyses, such as confidence intervals for the best possible performance or determining the effective number of hyperparameters. We make these tools available at: https://github.com/nicholaslourie/opda.
[ "Nicholas Lourie", "He He", "Kyunghyun Cho" ]
https://openreview.net/forum?id=IAoSG4Q2xC
IAoSG4Q2xC
IAoSG4Q2xC
[ "~Nicholas_Lourie1", "~He_He2", "~Kyunghyun_Cho1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9d471cac16439c0decd2868d23e4cb95b22b280a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "experimental design", "hyperparameters", "scaling laws" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lourie2025hyperparameter, title={Hyperparameter Loss Surfaces Are Simple Near their Optima}, author={Nicholas Lourie and He He and Kyunghyun Cho}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=IAoSG4Q2xC} }
lourie|hyperparameter_loss_surfaces_are_simple_near_their_optima
null
null
null
null
null
Exploring Large Language Model Agents for Piloting Social Experiments
Grounded in social theories and practices, we propose an LLM-driven framework for piloting social experiments.
Computational social experiments, which typically employ agent-based modeling to create testbeds for piloting social experiments, not only provide a computational solution to the major challenges faced by traditional experimental methods, but have also gained widespread attention in various research fields. Despite their significance, their broader impact is largely limited by the underdeveloped intelligence of their core component, i.e., agents. To address this limitation, we develop a framework grounded in well-established social science theories and practices, consisting of three key elements: (i) large language model (LLM)-driven experimental agents, serving as "silicon participants", (ii) methods for implementing various interventions or treatments, and (iii) tools for collecting behavioral, survey, and interview data. We evaluate its effectiveness by replicating three representative experiments, with results demonstrating strong alignment, both quantitatively and qualitatively, with real-world evidence. This work provides the first framework for designing LLM-driven agents to pilot social experiments, underscoring the transformative potential of LLMs and their agents in computational social science.
[ "Jinghua Piao", "Yuwei Yan", "Nian Li", "Jun Zhang", "Yong Li" ]
https://openreview.net/forum?id=I95XCwHdSE
I95XCwHdSE
I95XCwHdSE
[ "~Jinghua_Piao1", "~Yuwei_Yan1", "~Nian_Li1", "~Jun_Zhang22", "~Yong_Li7" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/349d42af4159013522b79d27abec876318f0cafa.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM Agents", "Social Simulation", "Computational Social Experiments" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ piao2025exploring, title={Exploring Large Language Model Agents for Piloting Social Experiments}, author={Jinghua Piao and Yuwei Yan and Nian Li and Jun Zhang and Yong Li}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=I95XCwHdSE} }
piao|exploring_large_language_model_agents_for_piloting_social_experiments
null
null
null
null
null
SentenceKV: Efficient LLM Inference via Sentence-Level Semantic KV Caching
SentenceKV improves LLM inference efficiency by compressing and retrieving key-value cache at the sentence level based on semantic similarity.
Large language models face significant computational and memory challenges when processing long contexts. During inference, efficient management of the key-value (KV) cache, which stores intermediate activations for autoregressive generation, is critical to reducing memory overhead and improving computational efficiency. Traditional token-level efficient KV caching methods overlook semantic information, treating tokens independently without considering their semantic relationships. Meanwhile, existing semantic-preserving KV cache management approaches often suffer from substantial memory usage and high time-to-first-token. To address these limitations, we propose SentenceKV, a novel sentence-level semantic KV caching approach designed to enhance inference efficiency while preserving semantic coherence. During prefilling, SentenceKV groups tokens based on sentence-level semantic similarity, compressing sentence representations into concise semantic vectors stored directly on the GPU, while individual KV pairs are offloaded to CPU. During decoding, SentenceKV generates tokens by selectively retrieving semantically relevant sentence-level KV entries, leveraging the semantic similarity between the prefilling-stage semantic vectors and decoding-stage queries. This ensures efficient and contextually accurate predictions, minimizing the loading of redundant or irrelevant data into GPU memory and significantly reducing memory overhead while maintaining stable inference latency, even for extremely long contexts. Extensive evaluations on benchmarks including PG-19, LongBench, Needle-In-A-Haystack, and RULER demonstrate that SentenceKV significantly outperforms state-of-the-art methods in both efficiency and memory usage, without compromising model accuracy.
[ "Yuxuan Zhu", "Ali Falahati", "David H. Yang", "Mohammad Mohammadi Amiri" ]
https://openreview.net/forum?id=HyPeYU9JR6
HyPeYU9JR6
HyPeYU9JR6
[ "~Yuxuan_Zhu3", "~Ali_Falahati2", "~David_H._Yang1", "~Mohammad_Mohammadi_Amiri1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/d5d5ab2398a6fb34081a3ed58bb1ca0caa78e127.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "KV cache compression", "Sentence-level semantic caching", "Inference efficiency", "Long-context inference" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhu2025sentencekv, title={Sentence{KV}: Efficient {LLM} Inference via Sentence-Level Semantic {KV} Caching}, author={Yuxuan Zhu and Ali Falahati and David H. Yang and Mohammad Mohammadi Amiri}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=HyPeYU9JR6} }
zhu|sentencekv_efficient_llm_inference_via_sentencelevel_semantic_kv_caching
null
null
null
null
null
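A minimal Python sketch of the retrieval idea in the "SentenceKV" abstract above: keep one compact summary vector per sentence, offload the full per-token KV blocks to a slower store, and at decode time fetch only the blocks whose summaries are most similar to the current query. Mean-pooled keys as the summary, the top-k rule, and all names are assumptions made for this sketch.

```python
import numpy as np

class SentenceLevelKVStore:
    """Per-sentence summary vectors stay 'hot'; full per-token KV blocks live in a
    slower store and are loaded only when their sentence looks relevant to the query."""
    def __init__(self, top_k=2):
        self.summaries = []      # one vector per sentence
        self.kv_blocks = []      # (keys, values) arrays per sentence
        self.top_k = top_k

    def add_sentence(self, keys, values):
        # keys/values: (num_tokens, head_dim); summarize the sentence by mean-pooling its keys
        self.summaries.append(keys.mean(axis=0))
        self.kv_blocks.append((keys, values))

    def retrieve(self, query):
        sims = np.array([s @ query / (np.linalg.norm(s) * np.linalg.norm(query) + 1e-9)
                         for s in self.summaries])
        top = np.argsort(-sims)[: self.top_k]                # most relevant sentences only
        keys = np.concatenate([self.kv_blocks[i][0] for i in top])
        values = np.concatenate([self.kv_blocks[i][1] for i in top])
        return keys, values                                  # attention then runs over this subset

# Toy usage: three "sentences" of five tokens each, head_dim = 8.
rng = np.random.default_rng(0)
store = SentenceLevelKVStore(top_k=1)
for _ in range(3):
    store.add_sentence(rng.normal(size=(5, 8)), rng.normal(size=(5, 8)))
k, v = store.retrieve(rng.normal(size=8))
print(k.shape, v.shape)                                      # (5, 8) (5, 8)
```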
Reasoning-SQL: Reinforcement Learning with SQL Tailored Partial Rewards for Reasoning-Enhanced Text-to-SQL
Enhancing the reasoning ability of LLMs in the text-to-SQL domain by employing GRPO and partial rewards.
Text-to-SQL is a challenging task involving multiple reasoning-intensive subtasks, including natural language understanding, database schema comprehension, and precise SQL query formulation. Existing approaches often rely on handcrafted reasoning paths with inductive biases that can limit their overall effectiveness. Motivated by the recent success of reasoning-enhanced models such as DeepSeek R1 and OpenAI O1, which effectively leverage reward-driven self-exploration to enhance reasoning capabilities and generalization, we propose a novel set of partial rewards tailored specifically for the Text-to-SQL task. Our reward set includes schema-linking, partial reward from AI feedback, n-gram similarity, and syntax-check rewards, explicitly designed to address the reward sparsity issue prevalent in reinforcement learning (RL). Leveraging group relative policy optimization (GRPO), our approach explicitly encourages large language models (LLMs) to develop the intrinsic reasoning skills necessary for accurate SQL query generation. With models of different sizes, we demonstrate that RL-only training with our proposed rewards consistently achieves higher accuracy and superior generalization compared to supervised fine-tuning (SFT). Remarkably, our RL-trained 14B-parameter model significantly outperforms larger proprietary models, e.g., O3-Mini by 4% and Gemini-1.5-Pro-002 by 3%, on the BIRD benchmark. These results highlight the efficacy of our proposed RL-training framework with partial rewards for enhancing both accuracy and reasoning capabilities in Text-to-SQL tasks.
[ "Mohammadreza Pourreza", "Shayan Talaei", "Ruoxi Sun", "Xingchen Wan", "Hailong Li", "Azalia Mirhoseini", "Amin Saberi", "Sercan O Arik" ]
https://openreview.net/forum?id=HbwkIDWQgN
HbwkIDWQgN
HbwkIDWQgN
[ "~Mohammadreza_Pourreza1", "~Shayan_Talaei1", "~Ruoxi_Sun2", "~Xingchen_Wan1", "~Hailong_Li2", "~Azalia_Mirhoseini3", "~Amin_Saberi1", "~Sercan_O_Arik1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/09edb216a4dd4589989e6314446b7e4dfdcc14cc.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Text-to-SQL", "Reinforcement Learning", "Database" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ pourreza2025reasoningsql, title={Reasoning-{SQL}: Reinforcement Learning with {SQL} Tailored Partial Rewards for Reasoning-Enhanced Text-to-{SQL}}, author={Mohammadreza Pourreza and Shayan Talaei and Ruoxi Sun and Xingchen Wan and Hailong Li and Azalia Mirhoseini and Amin Saberi and Sercan O Arik}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=HbwkIDWQgN} }
pourreza|reasoningsql_reinforcement_learning_with_sql_tailored_partial_rewards_for_reasoningenhanced_texttosql
null
null
null
null
null
Customize Multi-modal RAI Guardrails with Precedent-based predictions
We propose a precedent-based approach to enhance the adaptability and interpretability of multimodal guardrails, enabling customizable policy enforcement without extensive retraining.
A multi-modal guardrail must effectively filter image content based on user-defined policies, identifying material that may be hateful, reinforce harmful stereotypes, contain explicit material, or spread misinformation. Deploying such guardrails in real-world applications, however, poses significant challenges. Users often require varied and highly customizable policies and typically cannot provide abundant examples for each custom policy. Consequently, an ideal guardrail should be scalable to multiple policies and adaptable to evolving user standards with minimal retraining. Existing fine-tuning methods typically condition predictions on pre-defined policies, restricting their generalizability to new policies or necessitating extensive retraining to adapt. Conversely, training-free methods struggle with limited context lengths, making it difficult to incorporate all the policies comprehensively. To overcome these limitations, we propose to condition the model's judgment on "precedents", which are the reasoning processes of prior data points similar to the given input. By leveraging precedents instead of fixed policies, our approach greatly enhances the flexibility and adaptability of the guardrail. In this paper, we introduce a critique-revise mechanism for collecting high-quality precedents and two strategies that utilize precedents for robust prediction. Experimental results demonstrate that our approach outperforms previous methods across both few-shot and full-dataset scenarios and exhibits superior generalization to novel policies.
[ "Cheng-Fu Yang", "Thanh Tran", "Christos Christodoulopoulos", "Weitong Ruan", "Rahul Gupta", "Kai-Wei Chang" ]
https://openreview.net/forum?id=HL5X5uX0RD
HL5X5uX0RD
HL5X5uX0RD
[ "~Cheng-Fu_Yang1", "~Thanh_Tran1", "~Christos_Christodoulopoulos1", "~Weitong_Ruan1", "~Rahul_Gupta3", "~Kai-Wei_Chang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7b668ce2b182bd34ede426e0904467d61fdd76f3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Customizable Guardrail" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yang2025customize, title={Customize Multi-modal {RAI} Guardrails with Precedent-based predictions}, author={Cheng-Fu Yang and Thanh Tran and Christos Christodoulopoulos and Weitong Ruan and Rahul Gupta and Kai-Wei Chang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=HL5X5uX0RD} }
yang|customize_multimodal_rai_guardrails_with_precedentbased_predictions
/attachment/919f1022d31c4c78d061053ab43b078255df0fa3.zip
null
null
null
null
Hardware-Efficient Attention for Fast Decoding
Incremental decoding slows attention; we propose new hardware-efficient variants that reorganize attention to preserve parallelization and model quality while boosting speed, GPU utilization, and throughput, all with a minimal cache.
The combination of excessive data movement, an expanding key-value cache, and the limited parallelism inherent in incremental decoding severely bottlenecks attention. We explore the design of hardware-efficient attention optimized for LLM decoding. We examine how arithmetic intensity, parallelization, and model quality interact and assess whether the current architecture fully capitalizes on modern hardware. To maximize hardware-efficiency, we first propose Group Tied Attention (GTA), a simple attention variant that combines and reuses key and value states to reduce memory transfers during incremental decoding while preserving model quality. We then introduce Group Latent Attention (GLA), a parallel-friendly latent attention combined with low-level optimization designed for fast decoding while maintaining high model quality. We empirically demonstrate the efficacy of these inference-aware variants in language modeling experiments, showing that GTA matches grouped query attention (GQA) quality with a roughly 2x smaller KV cache, and GLA matches multi-head latent attention (MLA) but is easier to shard. Our optimized attention kernel for GLA is up to 2x faster than FlashMLA.
[ "Ted Zadouri", "Hubert Strauss", "Tri Dao" ]
https://openreview.net/forum?id=HAjgxcHpzc
HAjgxcHpzc
HAjgxcHpzc
[ "~Ted_Zadouri1", "~Hubert_Strauss1", "~Tri_Dao1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ee3e3dfd385cea39616b181209d3981eb77006a1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Inference", "Engineering for large LMs", "Compute efficient LM" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zadouri2025hardwareefficient, title={Hardware-Efficient Attention for Fast Decoding}, author={Ted Zadouri and Hubert Strauss and Tri Dao}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=HAjgxcHpzc} }
zadouri|hardwareefficient_attention_for_fast_decoding
null
null
null
null
null
Arctic-Embed 2.0: Multilingual Retrieval Without Compromise
A multilingual embedding model that also performs well on English retrieval — with probing experiments to understand why.
This paper presents the training methodology of Snowflake Arctic-Embed 2.0, a set of open-source text embedding models built for effective and efficient multilingual retrieval. While prior works have suffered from degraded English retrieval quality, Arctic-Embed 2.0 delivers competitive retrieval quality on multilingual and English-only benchmarks, and supports Matryoshka Representation Learning (MRL) for efficient embedding storage with significantly lower compressed quality degradation compared to alternatives. Beyond describing the design and implementation details, we highlight critical research questions encountered during development, including the mechanisms of cross-lingual transfer in retrieval pre-training and what we term the "English performance gap" - the systematic quality difference between specialized English-only models and multilingual alternatives. Through targeted experiments addressing these questions, we derive insights from both positive and negative results, contributing to a broader understanding of multilingual embedding models and aiming to stimulate further research on improving cross-lingual representation quality while maintaining strong monolingual performance.
[ "Puxuan Yu", "Luke Merrick", "Gaurav Nuti", "Daniel F Campos" ]
https://openreview.net/forum?id=H6so82c2Sw
H6so82c2Sw
H6so82c2Sw
[ "~Puxuan_Yu1", "~Luke_Merrick1", "~Gaurav_Nuti1", "~Daniel_F_Campos1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b0f4c0662ae8c60db726622c709da07698614ebb.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Multilingual Retrieval", "Dense Retrieval", "Cross-lingual Transfer" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yu2025arcticembed, title={Arctic-Embed 2.0: Multilingual Retrieval Without Compromise}, author={Puxuan Yu and Luke Merrick and Gaurav Nuti and Daniel F Campos}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=H6so82c2Sw} }
yu|arcticembed_20_multilingual_retrieval_without_compromise
null
null
null
null
null
Adversarial Training of Reward Models
We propose Adv-RM, a method that automatically discovers adversarial examples in reward models and improves their robustness through adversarial training.
Reward modeling has emerged as a promising approach for the scalable alignment of language models. However, contemporary reward models (RMs) often lack robustness, awarding high rewards to low-quality, out-of-distribution (OOD) samples. This can lead to reward hacking, where policies exploit unintended shortcuts to maximize rewards, undermining alignment. To address this challenge, we introduce Adv-RM, a novel adversarial training framework that automatically identifies adversarial examples — responses that receive high rewards from the target RM but are OOD and of low quality. By leveraging reinforcement learning, Adv-RM trains a policy to generate adversarial examples that reliably expose vulnerabilities in large state-of-the-art reward models such as Nemotron 340B RM. Incorporating these adversarial examples into the reward training process improves the robustness of RMs, mitigating reward hacking and enhancing downstream performance in RLHF. We demonstrate that Adv-RM significantly outperforms conventional RM training, increasing stability and enabling more effective RLHF training in both synthetic and real-data settings. We will open-source all code and data.
[ "Alexander Bukharin", "Haifeng Qian", "Shengyang Sun", "Adithya Renduchintala", "Soumye Singhal", "Zhilin Wang", "Oleksii Kuchaiev", "Olivier Delalleau", "Tuo Zhao" ]
https://openreview.net/forum?id=H6Ae8Po6fS
H6Ae8Po6fS
H6Ae8Po6fS
[ "~Alexander_Bukharin1", "~Haifeng_Qian1", "~Shengyang_Sun4", "~Adithya_Renduchintala2", "~Soumye_Singhal1", "~Zhilin_Wang2", "~Oleksii_Kuchaiev1", "~Olivier_Delalleau1", "~Tuo_Zhao2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/1388b50e4f18ef4672ee0934dd4930379b8681cb.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "reward models", "robustness", "RLHF" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ bukharin2025adversarial, title={Adversarial Training of Reward Models}, author={Alexander Bukharin and Haifeng Qian and Shengyang Sun and Adithya Renduchintala and Soumye Singhal and Zhilin Wang and Oleksii Kuchaiev and Olivier Delalleau and Tuo Zhao}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=H6Ae8Po6fS} }
bukharin|adversarial_training_of_reward_models
null
null
null
null
null
Adaptive Layer-skipping in Pre-trained LLMs
FlexiDepth is a method that enables adaptive layer-skipping in pre-trained language models without modifying their original parameters.
Various layer-skipping methods have been proposed to accelerate token generation in large language models (LLMs). However, limited attention has been paid to a fundamental question: How do computational demands vary across the generation of different tokens? In this work, we introduce FlexiDepth, a method that dynamically adjusts the number of Transformer layers used in text generation. By incorporating a plug-in router and adapter, FlexiDepth enables adaptive computation in LLMs without modifying their original parameters. Applied to Llama-3-8B, it skips 8 out of 32 layers while maintaining full benchmark performance. Our experiments reveal that computational demands in LLMs significantly vary based on token type. Specifically, generating repetitive tokens or fixed phrases requires fewer layers, whereas producing tokens involving computation or high uncertainty requires more layers. Despite the computational savings, FlexiDepth does not yet achieve wall-clock speedup due to varied skipping patterns and I/O overhead. To inspire future work and advance research on practical speedup, we open-sourced FlexiDepth and a dataset documenting its layer allocation patterns.
[ "Xuan Luo", "Weizhi Wang", "Xifeng Yan" ]
https://openreview.net/forum?id=Gu0XSax2YS
Gu0XSax2YS
Gu0XSax2YS
[ "~Xuan_Luo2", "~Weizhi_Wang1", "~Xifeng_Yan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f57cde34d9d1da721dddc4b56b3023fc9b0e60ef.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large language models", "Layer-skipping", "Conditional Computation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ luo2025adaptive, title={Adaptive Layer-skipping in Pre-trained {LLM}s}, author={Xuan Luo and Weizhi Wang and Xifeng Yan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Gu0XSax2YS} }
luo|adaptive_layerskipping_in_pretrained_llms
null
true
null
null
null
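The sketch below illustrates the plug-in router-and-adapter pattern described in the "Adaptive Layer-skipping in Pre-trained LLMs" abstract above: a frozen pre-trained layer is executed for tokens the router keeps, while skipped tokens take a cheap adapter path. The hard threshold, the adapter shape, and all names are assumptions made for illustration; this is not the FlexiDepth code, and for clarity both paths are computed here, whereas a real implementation would run the layer only for the kept tokens.

```python
import torch
import torch.nn as nn

class RouterGatedLayer(nn.Module):
    """Wrap a frozen pre-trained layer with a plug-in router and adapter so that
    'easy' tokens can skip the layer without touching its original parameters."""
    def __init__(self, layer, d_model, threshold=0.5):
        super().__init__()
        self.layer = layer
        for p in self.layer.parameters():
            p.requires_grad = False                    # pre-trained weights stay untouched
        self.router = nn.Linear(d_model, 1)            # per-token skip decision
        self.adapter = nn.Linear(d_model, d_model)     # cheap path for skipped tokens
        self.threshold = threshold

    def forward(self, x):                              # x: (batch, seq, d_model)
        keep = (torch.sigmoid(self.router(x)) > self.threshold).float()
        full = self.layer(x)                           # full layer output
        light = x + self.adapter(x)                    # residual adapter path when skipping
        return keep * full + (1.0 - keep) * light      # per-token mix of the two paths

# Toy usage with a linear layer standing in for a Transformer block.
block = RouterGatedLayer(nn.Linear(16, 16), d_model=16)
print(block(torch.randn(2, 4, 16)).shape)              # torch.Size([2, 4, 16])
```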
DoomArena: A framework for Testing AI Agents Against Evolving Security Threats
Context-aware security evaluation for AI agents
We present DoomArena, a security evaluation framework for AI agents. DoomArena is designed on three principles: 1) It is a \emph{plug-in} framework and integrates easily into realistic agentic frameworks like Browsergym (for web agents) and $\tau$-bench (for tool calling agents); 2) It is \emph{configurable} and allows for detailed threat modeling, allowing configuration of specific components of the agentic framework being attackable, and specifying targets for the attacker; and 3) It is \emph{modular} and decouples the development of attacks from details of the environment in which the agent is deployed, allowing for the same attacks to be applied across multiple environments. We illustrate several advantages of our framework, including enabling the development of generic attacker agents, the ability to easily combine several previously published attacks to enable comprehensive and fine-grained security testing, and the ability to analyze trade-offs between various vulnerabilities. We apply DoomArena to state-of-the-art (SOTA) web and tool-calling agents and find a number of surprising results: 1) SOTA agents have varying levels of vulnerability to different threat models (malicious user vs malicious environment), and there is no Pareto dominant agent across all threat models; 2) When multiple attacks are applied to an agent, they often combine constructively; 3) Guardrail model-based defenses seem to fail, while defenses based on powerful SOTA LLMs work much better.
[ "Léo Boisvert", "Abhay Puri", "Gabriel Huang", "Mihir Bansal", "Chandra Kiran Reddy Evuru", "Avinandan Bose", "Maryam Fazel", "Quentin Cappart", "Alexandre Lacoste", "Alexandre Drouin", "Krishnamurthy Dj Dvijotham" ]
https://openreview.net/forum?id=GanmYQ0RpE
GanmYQ0RpE
GanmYQ0RpE
[ "~Léo_Boisvert1", "~Abhay_Puri1", "~Gabriel_Huang1", "~Mihir_Bansal1", "~Chandra_Kiran_Reddy_Evuru1", "~Avinandan_Bose1", "~Maryam_Fazel1", "~Quentin_Cappart1", "~Alexandre_Lacoste1", "~Alexandre_Drouin2", "~Krishnamurthy_Dj_Dvijotham1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/dda90c4bad963f4e52c9eacb2f34ac2338596731.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "security", "agents" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ boisvert2025doomarena, title={DoomArena: A framework for Testing {AI} Agents Against Evolving Security Threats}, author={L{\'e}o Boisvert and Abhay Puri and Gabriel Huang and Mihir Bansal and Chandra Kiran Reddy Evuru and Avinandan Bose and Maryam Fazel and Quentin Cappart and Alexandre Lacoste and Alexandre Drouin and Krishnamurthy Dj Dvijotham}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=GanmYQ0RpE} }
boisvert|doomarena_a_framework_for_testing_ai_agents_against_evolving_security_threats
null
null
null
null
null
Breakpoint: Stress-testing systems-level reasoning in LLM agents
We introduce Breakpoint, a method for generating difficult coding tasks at large scale that stress-test models' system-level reasoning.
Benchmarks for large language models (LLMs) have predominantly assessed short-horizon, localized reasoning. Existing long-horizon suites (e.g. SWE-lancer) rely on manually curated issues, so expanding or tuning difficulty demands expensive human effort and evaluations quickly saturate. However, many real-world tasks, such as software engineering or scientific research, require agents to rapidly comprehend and manipulate novel, complex structures dynamically; evaluating these capabilities requires the ability to construct large and varied sets of problems for agents to solve. We introduce Breakpoint, a benchmarking methodology that automatically generates code-repair tasks by adversarially corrupting functions within real-world software repositories. Breakpoint systematically controls task difficulty along two different dimensions: local reasoning (characterized by code complexity metrics such as cyclomatic complexity) and system-level reasoning (characterized by call-graph centrality and the number of simultaneously corrupted interdependent functions). In experiments across more than 900 generated tasks we demonstrate that Breakpoint's methodology can scale to arbitrary difficulty, with state-of-the-art models' success rates ranging from 55\% on the easiest tasks down to 0\% on the hardest. We analyze how static parameters control task difficulty, characterize how improvements in models and inference-time budgets affect local versus system-level reasoning, and evaluate the strategies models use to gather information and iterate on solutions, demonstrating Breakpoint’s effectiveness as a comprehensive evaluation suite for understanding agent behavior and capabilities.
[ "Kaivalya Hariharan", "Uzay Girit", "Zifan Wang", "Jacob Andreas" ]
https://openreview.net/forum?id=GQNojroNCH
GQNojroNCH
GQNojroNCH
[ "~Kaivalya_Hariharan1", "~Uzay_Girit1", "~Zifan_Wang8", "~Jacob_Andreas1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/897b34e7b305c0bc1567e54633f8c3322879e981.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLMS", "coding benchmarks", "evaluation", "inverse problems", "long-horizon reasoning", "systems-level comprehension", "robustness evaluation", "code corruption", "software engineering", "long-horizon" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hariharan2025breakpoint, title={Breakpoint: Stress-testing systems-level reasoning in {LLM} agents}, author={Kaivalya Hariharan and Uzay Girit and Zifan Wang and Jacob Andreas}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=GQNojroNCH} }
hariharan|breakpoint_stresstesting_systemslevel_reasoning_in_llm_agents
null
null
null
null
null
LongCodeBench: Evaluating Coding LLMs at 1M Context Windows
LongCodeBench, a benchmark evaluating long-context language models on real-world coding tasks—code comprehension and repair—across different context lengths up to one million tokens.
Context lengths for models have grown rapidly, from thousands to millions of tokens in just a few years. The extreme context sizes of modern long-context models have made it difficult to construct realistic long-context benchmarks -- not only due to the cost of collecting million-context tasks but also in identifying realistic scenarios that require significant contexts. We identify code comprehension and repair as a natural testbed and challenge task for long-context models and introduce **LongCodeBench** (**LCB**), a benchmark to test LLM coding abilities in long-context scenarios. Our benchmark tests both the comprehension and repair capabilities of LCLMs in realistic and important settings by drawing from real-world GitHub issues and constructing QA (**LongCodeQA**) and bug fixing (**LongSWE-Bench**) tasks. We carefully stratify the complexity of our benchmark, enabling us to evaluate models across different scales -- ranging from Qwen2.5 14B Instruct to Google's flagship Gemini model. We find that long-context remains a weakness for all models, with performance drops such as from 29% to 3% for Claude 3.5 Sonnet, or from 70.2% to 40% for Qwen2.5.
[ "Stefano Rando", "Luca Romani", "Alessio Sampieri", "Luca Franco", "John Yang", "Yuta Kyuragi", "Fabio Galasso", "Tatsunori Hashimoto" ]
https://openreview.net/forum?id=GFPoM8Ylp8
GFPoM8Ylp8
GFPoM8Ylp8
[ "~Stefano_Rando1", "~Luca_Romani1", "~Alessio_Sampieri1", "~Luca_Franco1", "~John_Yang3", "~Yuta_Kyuragi1", "~Fabio_Galasso1", "~Tatsunori_Hashimoto1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/d43e6447a1220c57377449547fc622e6333d3f54.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large language models (LLMs)", "Benchmarking", "Long-context", "Coding" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ rando2025longcodebench, title={LongCodeBench: Evaluating Coding {LLM}s at 1M Context Windows}, author={Stefano Rando and Luca Romani and Alessio Sampieri and Luca Franco and John Yang and Yuta Kyuragi and Fabio Galasso and Tatsunori Hashimoto}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=GFPoM8Ylp8} }
rando|longcodebench_evaluating_coding_llms_at_1m_context_windows
null
null
null
null
null
Register Always Matters: Analysis of LLM Pretraining Data Through the Lens of Language Variation
We investigate the effect of register (also known as genre) as an explainer of LLM performance, and show that it has a substantial impact on model accuracy on standard benchmarks.
Pretraining data curation is a cornerstone in Large Language Model (LLM) development, leading to growing research on quality filtering of large web corpora. From statistical quality flags to LLM-based labelling systems, datasets are divided into categories, frequently reducing to a binary: those passing the filters are deemed as valuable examples, others are discarded as useless or detrimental. However, a more detailed understanding of the contribution of different kinds of texts to model performance is still largely lacking. In this article, we present the first study utilising _registers_ or _genres_—a widely used standard in corpus linguistics to model linguistic variation—to curate pretraining datasets and investigate the effect of register on the performance of LLMs. We train small generative models with register classified data and evaluate them using standard benchmarks, and show that the register of pretraining data substantially affects model performance. We uncover surprising relationships between the pretraining material and the resulting models: using the _News_ register results in subpar performance, and on the contrary, including the _Opinion_ class, covering texts such as reviews and opinion blogs, is highly beneficial. While a model trained on the entire unfiltered dataset outperforms those trained on datasets limited to a single register, combining well-performing registers such as _How-to-Instructions_, _Informational Description_, and _Opinion_ leads to major improvements. Furthermore, analysis of individual benchmark results reveals key differences in the strengths and drawbacks of specific register classes as pretraining data: _How-to-Instructions_ excels at physical reasoning and sentence completion while barely crossing random baselines on world-knowledge benchmarks, while _Narrative_ boosts performance on social interaction tasks but struggles with scientific questions. These findings show that register is an important explainer of model variation and can facilitate more deliberate and detailed future data selection practices.
[ "Amanda Myntti", "Erik Henriksson", "Veronika Laippala", "Sampo Pyysalo" ]
https://openreview.net/forum?id=FqXXtSZWEZ
FqXXtSZWEZ
FqXXtSZWEZ
[ "~Amanda_Myntti1", "~Erik_Henriksson1", "~Veronika_Laippala1", "~Sampo_Pyysalo2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a493dfbb070071a8309abca7680a7ab8ccec03b3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Register", "Genre", "Large Language Models", "NLP", "LLM evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ myntti2025register, title={Register Always Matters: Analysis of {LLM} Pretraining Data Through the Lens of Language Variation}, author={Amanda Myntti and Erik Henriksson and Veronika Laippala and Sampo Pyysalo}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=FqXXtSZWEZ} }
myntti|register_always_matters_analysis_of_llm_pretraining_data_through_the_lens_of_language_variation
null
null
null
null
null
Establishing Task Scaling Laws via Compute-Efficient Model Ladders
We develop task scaling laws and model ladders to predict the individual task performance of pretrained language models (LMs) in the overtrained setting.
We develop task scaling laws and model ladders to predict the individual task performance of pretrained language models (LMs) in the overtrained setting. Standard power laws for language modeling loss cannot accurately model task performance. Therefore, we leverage a two-step prediction approach: (1) use model and data size to predict an intermediate loss, then (2) use it to predict task performance. We train a set of small-scale "ladder" models, collect data points to fit the parameterized functions of the two prediction steps, and make predictions for two target models: a 7B model trained to 4T tokens and a 13B model trained to 5T tokens. Training the ladder models only costs 1\% of the compute used for the target models. On four multiple-choice tasks formatted as ranked classification, we can predict the accuracy of both target models within 2 points of absolute error. We find that tasks with higher prediction error also have higher variance in the metrics over model checkpoints. We also contrast multiple design choices for predicting accuracy, and present recommendations for extending our method to new models and tasks.
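A minimal sketch of the two-step prediction idea, assuming a Chinchilla-style parameterization for the loss step and a sigmoidal loss-to-accuracy mapping; the functional forms, ladder measurements, and target sizes below are illustrative stand-ins, not the paper's actual fits.

```python
import numpy as np
from scipy.optimize import curve_fit

# Step 1: intermediate task loss as a function of model size N and token count D.
def loss_fn(ND, E, A, alpha, B, beta):
    N, D = ND
    return E + A / N**alpha + B / D**beta

# Step 2: task accuracy as a sigmoidal function of the intermediate loss.
def acc_fn(L, lo, hi, k, L0):
    return lo + (hi - lo) / (1.0 + np.exp(k * (L - L0)))

# Hypothetical "ladder" measurements (small models, modest token budgets).
N = np.array([190e6, 370e6, 760e6, 1.3e9, 190e6, 370e6, 760e6, 1.3e9])
D = np.array([4e9, 8e9, 16e9, 25e9, 8e9, 16e9, 32e9, 50e9])
loss_obs = np.array([3.92, 3.54, 3.20, 2.99, 3.82, 3.46, 3.13, 2.93])
acc_obs = np.array([0.28, 0.37, 0.55, 0.67, 0.30, 0.41, 0.59, 0.70])

p1, _ = curve_fit(loss_fn, (N, D), loss_obs, p0=[1.8, 300, 0.3, 400, 0.3], maxfev=20000)
p2, _ = curve_fit(acc_fn, loss_obs, acc_obs, p0=[0.25, 0.85, 4.0, 3.2], maxfev=20000)

# Chain the two fitted functions to predict a larger target model (7B params, 4T tokens).
target_loss = loss_fn((np.array([7e9]), np.array([4e12])), *p1)
print("predicted loss:", round(float(target_loss[0]), 2),
      "predicted accuracy:", round(float(acc_fn(target_loss, *p2)[0]), 2))
```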
[ "Akshita Bhagia", "Jiacheng Liu", "Alexander Wettig", "David Heineman", "Oyvind Tafjord", "Ananya Harsh Jha", "Luca Soldaini", "Noah A. Smith", "Dirk Groeneveld", "Pang Wei Koh", "Jesse Dodge", "Hannaneh Hajishirzi" ]
https://openreview.net/forum?id=FeAM2RVO8l
FeAM2RVO8l
FeAM2RVO8l
[ "~Akshita_Bhagia1", "~Jiacheng_Liu2", "~Alexander_Wettig1", "~David_Heineman1", "~Oyvind_Tafjord2", "~Ananya_Harsh_Jha2", "~Luca_Soldaini1", "~Noah_A._Smith2", "~Dirk_Groeneveld1", "~Pang_Wei_Koh1", "~Jesse_Dodge1", "~Hannaneh_Hajishirzi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/6804a8279be0eaedffe208010a57028bdd6eb71e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "scaling law", "model ladder", "downstream tasks" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ bhagia2025establishing, title={Establishing Task Scaling Laws via Compute-Efficient Model Ladders}, author={Akshita Bhagia and Jiacheng Liu and Alexander Wettig and David Heineman and Oyvind Tafjord and Ananya Harsh Jha and Luca Soldaini and Noah A. Smith and Dirk Groeneveld and Pang Wei Koh and Jesse Dodge and Hannaneh Hajishirzi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=FeAM2RVO8l} }
bhagia|establishing_task_scaling_laws_via_computeefficient_model_ladders
null
null
This submission is NOT exempt from the Reciprocal Reviewing requirement. (We expect most submissions to fall in this category.)
~Akshita_Bhagia1
{ "readers": [ "colmweb.org/COLM/2025/Conference", "colmweb.org/COLM/2025/Conference/Submission669/Authors" ] }
NoWag: A Unified Framework for Shape Preserving Compression of Large Language Models
NoWag is a unified framework for zero-shot shape preserving compression of LLMs, achieving state-of-the-art quantization and competitive pruning on Llama models.
Large language models (LLMs) exhibit remarkable performance across various natural language processing tasks but suffer from immense computational and memory demands, limiting their deployment in resource-constrained environments. To address this challenge, we propose NoWag (Normalized Weight and Activation Guided Compression), a unified framework for one-shot shape preserving compression algorithms. We apply NoWag to compress Llama-2 (7B, 13B, 70B) and Llama-3 (8B, 70B) models using two popular shape-preserving techniques: vector quantization (NoWag-VQ) and unstructured/semi-structured pruning (NoWag-P). Our results show that NoWag-VQ significantly outperforms state-of-the-art one-shot vector quantization methods, while NoWag-P performs competitively against leading pruning techniques. These findings highlight underlying commonalities between these compression paradigms and suggest promising directions for future research. Our code is available at https://github.com/LawrenceRLiu/NoWag
[ "Lawrence Ray Liu", "Inesh Chakrabarti", "Yixiao Li", "Mengdi Wang", "Tuo Zhao", "Lin Yang" ]
https://openreview.net/forum?id=EfTuzTijDo
EfTuzTijDo
EfTuzTijDo
[ "~Lawrence_Ray_Liu1", "~Inesh_Chakrabarti1", "~Yixiao_Li2", "~Mengdi_Wang1", "~Tuo_Zhao2", "~Lin_Yang12" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c1e5dda5bf0f7882b0e920ffedb90fc48d16ce25.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Quantization", "Vector Quantization LLMs", "Compression", "Sparsity", "Pruning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liu2025nowag, title={NoWag: A Unified Framework for Shape Preserving Compression of Large Language Models}, author={Lawrence Ray Liu and Inesh Chakrabarti and Yixiao Li and Mengdi Wang and Tuo Zhao and Lin Yang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=EfTuzTijDo} }
liu|nowag_a_unified_framework_for_shape_preserving_com_pression_of_large_language_models
null
null
null
null
null
Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback
We provide a theoretically strong algorithm for Nash learning from human feedback as well as its equivalent practical implementation using online IPO.
Reinforcement learning from human feedback (RLHF) has become essential for improving language model capabilities, but traditional approaches rely on the assumption that human preferences follow a transitive Bradley-Terry model. This assumption fails to capture the non-transitive nature of populational human preferences. Nash learning from human feedback (NLHF), targeting non-transitive preferences, is a problem of computing the Nash equilibrium (NE) of the two-player constant-sum game defined by the human preference. We introduce Extragradient preference optimization (EGPO), a novel algorithm for NLHF achieving last-iterate linear convergence to the NE of KL-regularized games and polynomial convergence to the NE of original games, while being robust to noise. Unlike previous approaches that rely on nested optimization, we derive an equivalent implementation using gradients of an online variant of the identity preference optimization (IPO) loss, enabling more faithful implementation for neural networks. Our empirical evaluations demonstrate EGPO's superior performance over baseline methods when training for the same number of epochs, as measured by pairwise win-rates using the ground truth preference. These results validate both the theoretical strengths and practical advantages of EGPO for language model alignment with non-transitive human preferences.
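For intuition, the following is a tabular sketch of extragradient (mirror-prox) updates on a toy non-transitive preference game; it is not the paper's neural-network implementation via the online IPO loss, and the payoff matrix, step size, and iteration count are invented.

```python
import numpy as np

def mwu_step(p, grad, eta):
    """One multiplicative-weights (entropic mirror) step on the simplex."""
    new = p * np.exp(eta * grad)
    return new / new.sum()

def extragradient_nash(P, eta=0.2, iters=3000):
    """Approximate the Nash equilibrium of the two-player constant-sum game
    with payoff P[i, j] = Pr(response i is preferred over response j)."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)          # max player
    y = np.full(n, 1.0 / n)          # min player
    x_avg, y_avg = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        # Extrapolation step with gradients at the current iterates.
        x_half = mwu_step(x, P @ y, eta)
        y_half = mwu_step(y, -(P.T @ x), eta)
        # Update step with gradients evaluated at the extrapolated iterates.
        x = mwu_step(x, P @ y_half, eta)
        y = mwu_step(y, -(P.T @ x_half), eta)
        x_avg += x_half
        y_avg += y_half
    # Averaged iterates give a stable readout for this toy; the paper proves
    # last-iterate guarantees for the KL-regularized game.
    return x_avg / iters, y_avg / iters

# A non-transitive, rock-paper-scissors-like preference over 3 responses.
P = np.array([[0.5, 0.9, 0.1],
              [0.1, 0.5, 0.9],
              [0.9, 0.1, 0.5]])
x_star, _ = extragradient_nash(P)
print("approximate NE policy:", np.round(x_star, 3))   # close to uniform
```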
[ "Runlong Zhou", "Maryam Fazel", "Simon Shaolei Du" ]
https://openreview.net/forum?id=EP7mAqx2BO
EP7mAqx2BO
EP7mAqx2BO
[ "~Runlong_Zhou1", "~Maryam_Fazel1", "~Simon_Shaolei_Du1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3925d452a9a3f5c659bfa88b86c4096c4849fa61.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "reinforcement learning", "reinforcement learning from human feedback", "large language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhou2025extragradient, title={Extragradient Preference Optimization ({EGPO}): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback}, author={Runlong Zhou and Maryam Fazel and Simon Shaolei Du}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=EP7mAqx2BO} }
zhou|extragradient_preference_optimization_egpo_beyond_lastiterate_convergence_for_nash_learning_from_human_feedback
null
null
null
null
null
CASCADE Your Datasets for Cross-Mode Knowledge Retrieval of Language Models
We conduct qualitative and quantitative studies of the cross-mode knowledge retrieval capabilities of LLMs and propose a novel approach, CASCADE, to mitigate their limitations.
Language models often struggle with cross-mode knowledge retrieval – the ability to access knowledge learned in one format (mode) when queried in another. We demonstrate that models trained on multiple data sources (e.g., Wikipedia and TinyStories) exhibit significantly reduced accuracy when retrieving knowledge in a format different from its original training mode. This paper quantitatively investigates this phenomenon through a controlled study of random token sequence memorization across different modes. We first explore dataset rewriting as a solution, revealing that effective cross-mode retrieval requires prohibitively extensive rewriting efforts that follow a sigmoid-like relationship. As an alternative, we propose CASCADE, a novel pretraining algorithm that uses cascading datasets with varying sequence lengths and computing losses on only the second half of each training sequence to capture knowledge at different scales. Our experiments demonstrate that CASCADE outperforms dataset rewriting approaches, even when compressed into a single model with a unified loss function. This work provides both qualitative evidence of cross-mode retrieval limitations and a practical solution to enhance language models' ability to access knowledge independently of its presentational format.
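A rough sketch, under stated assumptions, of the two ingredients described above: computing the language-modeling loss only on the second half of each training sequence, and chunking a token stream into cascading sequence lengths. The helper names and lengths are hypothetical and the paper's exact construction may differ.

```python
import torch
import torch.nn.functional as F

def second_half_lm_loss(logits, input_ids):
    """Next-token loss computed only on the second half of each sequence,
    so the first half serves purely as conditioning context."""
    # logits: (batch, seq_len, vocab); input_ids: (batch, seq_len)
    shift_logits = logits[:, :-1, :]
    shift_labels = input_ids[:, 1:].clone()
    half = shift_labels.shape[1] // 2
    shift_labels[:, :half] = -100                 # ignored by cross_entropy
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=-100,
    )

def cascade_views(token_ids, lengths=(256, 512, 1024)):
    """Chunk one token stream into cascading training sequences of several
    lengths, so the same content is seen at different scales."""
    views = []
    for L in lengths:
        usable = (len(token_ids) // L) * L
        views.extend(token_ids[i:i + L] for i in range(0, usable, L))
    return views

# Toy usage with random logits standing in for a model's output.
ids = torch.randint(0, 100, (2, 16))
logits = torch.randn(2, 16, 100)
print(second_half_lm_loss(logits, ids))
print(len(cascade_views(list(range(5000)))))      # 19 + 9 + 4 = 32 chunks
```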
[ "Runlong Zhou", "Yi Zhang" ]
https://openreview.net/forum?id=EJGlOybbDB
EJGlOybbDB
EJGlOybbDB
[ "~Runlong_Zhou1", "~Yi_Zhang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/cc3bb3fb02e2f5a7cfcbc22ab7be5cdb20dce3c1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "large language models", "pretraining", "knowledge retrieval", "spurious correlations" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhou2025cascade, title={{CASCADE} Your Datasets for Cross-Mode Knowledge Retrieval of Language Models}, author={Runlong Zhou and Yi Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=EJGlOybbDB} }
zhou|cascade_your_datasets_for_crossmode_knowledge_retrieval_of_language_models
null
null
null
null
null
$100K or 100 Days: Trade-offs when Pre-Training with Academic Resources
We present insights about pre-training on academic compute and a software benchmark to determine the most efficient training settings.
Pre-training is notoriously compute-intensive and academic researchers are notoriously under-resourced. It is, therefore, commonly assumed that academics can't pre-train models. In this paper, we seek to clarify this assumption. We first survey academic researchers to learn about their available compute and then empirically measure the time to replicate models on such resources. We introduce a benchmark to measure the time to pre-train models on given GPUs and also identify ideal settings for maximizing training speed. We run our benchmark on a range of models and academic GPUs, spending 2,000 GPU-hours on our experiments. Our results reveal a brighter picture for academic pre-training: for example, although Pythia-1B was originally trained on 64 GPUs for 3 days, we find it is also possible to replicate this model (with the same hyper-parameters) in 3x fewer GPU-days: i.e. on 4 GPUs in 18 days. We conclude with a cost-benefit analysis to help clarify the trade-offs between price and pre-training time. We believe our benchmark will help academic researchers conduct experiments that require training larger models on more data. We include our codebase in supplementary materials and will fully release it.
[ "Apoorv Khandelwal", "Tian Yun", "Nihal V. Nayak", "Jack Merullo", "Stephen Bach", "Chen Sun", "Ellie Pavlick" ]
https://openreview.net/forum?id=EFxC34XbDh
EFxC34XbDh
EFxC34XbDh
[ "~Apoorv_Khandelwal1", "~Tian_Yun2", "~Nihal_V._Nayak1", "~Jack_Merullo2", "~Stephen_Bach1", "~Chen_Sun1", "~Ellie_Pavlick1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0b8dbd31b627649d9a68c76e3a8dd7efc3aa416a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "pre-training", "training efficiency", "benchmarking", "hardware", "GPUs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ khandelwal2025k, title={\$100K or 100 Days: Trade-offs when Pre-Training with Academic Resources}, author={Apoorv Khandelwal and Tian Yun and Nihal V. Nayak and Jack Merullo and Stephen Bach and Chen Sun and Ellie Pavlick}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=EFxC34XbDh} }
khandelwal|100k_or_100_days_tradeoffs_when_pretraining_with_academic_resources
null
null
null
null
null
LLM-based Multi-Agents System Attack via Continuous Optimization with Discrete Efficient Search
We attack an LLM-based multi-agent system with only a single intervention and propose a token-based optimization method.
Large Language Model (LLM)-based Multi-Agent Systems (MAS) have demonstrated remarkable capabilities on complex tasks. However, emerging evidence indicates significant security vulnerabilities within these systems. In this paper, we introduce three novel and practical attack scenarios that allow only a single intervention on one agent from the MAS, settings in which previous methods struggle to succeed. Thus, we propose Continuous Optimization with Discrete Efficient Search (CODES), a token-level jailbreak method that combines continuous-space optimization with discrete-space search to efficiently generate self-replicating attack prompts. Through CODES, malicious content propagates across multiple agents, compromising the entire MAS. Across the three realistic threat scenarios, ranging from triggering offensive outputs across an entire agent cohort to bypassing multi-level safeguard modules, CODES demonstrates its effectiveness. Our findings underscore the urgent need for more robust safety mechanisms tailored to MAS and highlight the importance of developing resilient alignment strategies to defend against this new class of adversarial threats.
[ "Weichen Yu", "Kai Hu", "Tianyu Pang", "Chao Du", "Min Lin", "Matt Fredrikson" ]
https://openreview.net/forum?id=ED5diyzc1C
ED5diyzc1C
ED5diyzc1C
[ "~Weichen_Yu1", "~Kai_Hu2", "~Tianyu_Pang1", "~Chao_Du1", "~Min_Lin1", "~Matt_Fredrikson1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5cb074c969ef3a24a50f09eb126050f0ca1caaa3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "multi-agent system", "adversarial attack", "LLM-based jailbreak" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yu2025llmbased, title={{LLM}-based Multi-Agents System Attack via Continuous Optimization with Discrete Efficient Search}, author={Weichen Yu and Kai Hu and Tianyu Pang and Chao Du and Min Lin and Matt Fredrikson}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ED5diyzc1C} }
yu|llmbased_multiagents_system_attack_via_continuous_optimization_with_discrete_efficient_search
/attachment/cdaee438f8055e1a86b8c8b46c158682d6257dde.zip
null
null
null
null
Language Model Personalization via Reward Factorization
We present an extension of the RLHF framework for LLM personalization
Modern large language models (LLMs) are optimized for human-aligned responses using Reinforcement Learning from Human Feedback (RLHF). However, existing RLHF approaches assume a universal preference model and fail to account for individual user preferences, limiting their effectiveness in personalized applications. We introduce a framework that extends RLHF to enable user personalization by leveraging the assumption that user preferences lie in a low-dimensional space. Instead of training a separate model per user, we represent user-specific rewards as a linear combination of base reward functions. Using only 10 user responses, our method can infer user-specific rewards and align LLM outputs accordingly. We validate our approach through experiments with both synthetic and real users, demonstrating significant personalization achieved by our method. In human evaluations, our method achieves a 67% win rate over default GPT-4o responses.
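A minimal sketch of the linear reward-factorization idea: given base reward values for chosen and rejected responses from a handful of user comparisons, fit the user-specific weights by Bradley-Terry maximum likelihood. The fitting procedure and all names here are illustrative assumptions; the paper's inference method may differ.

```python
import numpy as np

def infer_user_weights(base_chosen, base_rejected, steps=2000, lr=0.1):
    """Fit w such that the user's reward is w . base_rewards(y), from pairwise
    preferences, via Bradley-Terry maximum likelihood (gradient ascent)."""
    diffs = base_chosen - base_rejected            # (n_pairs, n_base_rewards)
    w = np.zeros(diffs.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(diffs @ w)))     # Pr(chosen beats rejected)
        w += lr * diffs.T @ (1.0 - p) / len(diffs)
    return w

# Hypothetical setup: 4 base reward functions scored on 10 user comparisons.
rng = np.random.default_rng(0)
true_w = np.array([1.5, -0.5, 0.0, 0.8])
a = rng.normal(size=(10, 4))
b = rng.normal(size=(10, 4))
prefers_a = (a @ true_w) >= (b @ true_w)           # simulate the user's choices
base_chosen = np.where(prefers_a[:, None], a, b)
base_rejected = np.where(prefers_a[:, None], b, a)

w_hat = infer_user_weights(base_chosen, base_rejected)
print("true direction:     ", np.round(true_w / np.linalg.norm(true_w), 2))
print("recovered direction:", np.round(w_hat / np.linalg.norm(w_hat), 2))
```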
[ "Idan Shenfeld", "Felix Faltings", "Pulkit Agrawal", "Aldo Pacchiano" ]
https://openreview.net/forum?id=E7Tu5yjqXw
E7Tu5yjqXw
E7Tu5yjqXw
[ "~Idan_Shenfeld1", "~Felix_Faltings1", "~Pulkit_Agrawal1", "~Aldo_Pacchiano1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/745ea645540bf7a9cb26b77ba049b75546c488d0.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "RLHF", "Personalization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shenfeld2025language, title={Language Model Personalization via Reward Factorization}, author={Idan Shenfeld and Felix Faltings and Pulkit Agrawal and Aldo Pacchiano}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=E7Tu5yjqXw} }
shenfeld|language_model_personalization_via_reward_factorization
null
null
null
null
null
Weak-for-Strong: Training Weak Meta-Agent to Harness Strong Executors
We train a weak meta-agent to better leverage strong models according to environment feedback through reinforcement learning.
Efficiently leveraging the capabilities of contemporary large language models (LLMs) is increasingly challenging, particularly when direct fine-tuning is expensive and often impractical. Existing training-free methods, including manually or automatically designed workflows, typically demand substantial human effort or yield suboptimal results. This paper proposes Weak-for-Strong Harnessing (W4S), a novel framework that customizes smaller, cost-efficient language models to design and optimize workflows for harnessing stronger models. W4S formulates workflow design as a multi-turn Markov decision process and introduces reinforcement learning for agentic workflow optimization (RLAO) to train a weak meta-agent. Through iterative interaction with the environment, the meta-agent learns to design increasingly effective workflows without manual intervention. Empirical results demonstrate the superiority of W4S: our 7B meta-agent, trained with just one GPU hour, outperforms the strongest baseline by 2.9% ~ 24.6% across eleven benchmarks, successfully elevating the performance of state-of-the-art models such as GPT-3.5-Turbo and GPT-4o. Notably, W4S exhibits strong generalization capabilities across both seen and unseen tasks, offering an efficient, high-performing alternative to directly fine-tuning strong models.
[ "Fan Nie", "Lan Feng", "Haotian Ye", "Weixin Liang", "Pan Lu", "Huaxiu Yao", "Alexandre Alahi", "James Zou" ]
https://openreview.net/forum?id=DmhcCRIfvq
DmhcCRIfvq
DmhcCRIfvq
[ "~Fan_Nie1", "~Lan_Feng1", "~Haotian_Ye1", "~Weixin_Liang1", "~Pan_Lu2", "~Huaxiu_Yao1", "~Alexandre_Alahi3", "~James_Zou1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/72966922d815c8f5b1a8ecc362e22fb4ac76e1b5.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "large language models", "reinforcement learning", "workflow generation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ nie2025weakforstrong, title={Weak-for-Strong: Training Weak Meta-Agent to Harness Strong Executors}, author={Fan Nie and Lan Feng and Haotian Ye and Weixin Liang and Pan Lu and Huaxiu Yao and Alexandre Alahi and James Zou}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=DmhcCRIfvq} }
nie|weakforstrong_training_weak_metaagent_to_harness_strong_executors
null
null
null
null
null
Evaluating Large Language Models as Expert Annotators
We investigate whether top-performing LLMs, which might be perceived as having expert-level proficiency on academic and professional benchmarks, can serve as a direct alternative to human expert annotators.
Textual data annotation, the process of labeling or tagging text with relevant information, is typically costly, time-consuming, and labor-intensive. While large language models (LLMs) have demonstrated their potential as direct alternatives to human annotators for general-domain natural language processing (NLP) tasks, their effectiveness on annotation tasks in domains requiring expert knowledge remains underexplored. In this paper, we investigate whether top-performing LLMs, which might be perceived as having expert-level proficiency in academic and professional benchmarks, can serve as direct alternatives to human expert annotators. To this end, we evaluate both individual LLMs and multi-agent approaches across three highly specialized domains: finance, biomedicine, and law. Specifically, we propose a multi-agent discussion framework to simulate a group of human annotators, where LLMs are tasked to engage in discussions by considering others’ annotations and justifications before finalizing their labels. Additionally, we incorporate reasoning models (*e.g.*, o3-mini) to enable a more comprehensive comparison. Our empirical results reveal that: *(1)* Individual LLMs equipped with inference-time techniques (*e.g.*, chain-of-thought (CoT), self-consistency) show only marginal or even negative performance gains, contrary to prior literature suggesting their broad effectiveness. *(2)* Overall, reasoning models do not demonstrate statistically significant improvements over non-reasoning models in most settings. This suggests that extended long CoT provides relatively limited benefits for data annotation in specialized domains. *(3)* Certain model behaviors emerge in the multi-agent discussion environment. For instance, Claude 3.7 Sonnet with thinking rarely changes its initial annotations, even when other agents provide correct annotations or valid reasoning.
[ "Yu-Min Tseng", "Wei-Lin Chen", "Chung-Chi Chen", "Hsin-Hsi Chen" ]
https://openreview.net/forum?id=DktAODDdbt
DktAODDdbt
DktAODDdbt
[ "~Yu-Min_Tseng1", "~Wei-Lin_Chen1", "~Chung-Chi_Chen1", "~Hsin-Hsi_Chen2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/d527b3261d56a644d91a013e3b7d7b60e216957f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLMs-as-expert-annotators", "reasoning models", "multi-agent framework" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tseng2025evaluating, title={Evaluating Large Language Models as Expert Annotators}, author={Yu-Min Tseng and Wei-Lin Chen and Chung-Chi Chen and Hsin-Hsi Chen}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=DktAODDdbt} }
tseng|evaluating_large_language_models_as_expert_annotators
null
null
null
null
null
SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models
Dynamic simulations improve spatial reasoning in MLMs, both for static relationships and for our complex tasks that require reasoning about actions.
Reasoning about motion and space is a fundamental cognitive capability that is required by multiple real-world applications. While many studies highlight that large multimodal language models (MLMs) struggle to reason about space, they only focus on static spatial relationships and not dynamic awareness of motion and space---i.e. reasoning about the effect of egocentric and object motions on spatial relationships. Manually annotating such object and camera movements is expensive. Hence, we introduce SAT, a simulated spatial aptitude training dataset comprising both static and dynamic spatial reasoning across 175K question-answer (QA) pairs and 20K scenes. Complementing this, we also construct a small (150 image-QAs) yet challenging dynamic spatial test set using real-world images. Leveraging our SAT datasets and 6 existing static spatial benchmarks, we systematically investigate what improves both static and dynamic spatial awareness. Our results reveal that simulations are surprisingly effective at imparting spatial aptitude to MLMs that translate to real images. We show that perfect annotations in simulation are more effective than existing approaches of pseudo-annotating real images. For instance, SAT training improves a LLaVA-13B model by an average 11% and a LLaVA-Video-7B model by an average 8% on multiple spatial benchmarks, including our real-image dynamic test set and spatial reasoning on long videos---even outperforming some large proprietary models. While reasoning over static relationships improves with synthetic training data, there is still considerable room for improvement for dynamic reasoning questions.
[ "Arijit Ray", "Jiafei Duan", "Ellis L Brown II", "Reuben Tan", "Dina Bashkirova", "Rose Hendrix", "Kiana Ehsani", "Aniruddha Kembhavi", "Bryan A. Plummer", "Ranjay Krishna", "Kuo-Hao Zeng", "Kate Saenko" ]
https://openreview.net/forum?id=DW8U8ZWa1U
DW8U8ZWa1U
DW8U8ZWa1U
[ "~Arijit_Ray1", "~Jiafei_Duan1", "~Ellis_L_Brown_II1", "~Reuben_Tan1", "~Dina_Bashkirova1", "~Rose_Hendrix1", "~Kiana_Ehsani1", "~Aniruddha_Kembhavi1", "~Bryan_A._Plummer1", "~Ranjay_Krishna1", "~Kuo-Hao_Zeng3", "~Kate_Saenko1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/79d9149d9c8c2f6c2bcd9399420e22324eaf743c.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "spatial reasoning", "vqa", "multimodal language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ray2025sat, title={{SAT}: Dynamic Spatial Aptitude Training for Multimodal Language Models}, author={Arijit Ray and Jiafei Duan and Ellis L Brown II and Reuben Tan and Dina Bashkirova and Rose Hendrix and Kiana Ehsani and Aniruddha Kembhavi and Bryan A. Plummer and Ranjay Krishna and Kuo-Hao Zeng and Kate Saenko}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=DW8U8ZWa1U} }
ray|sat_dynamic_spatial_aptitude_training_for_multimodal_language_models
null
null
null
null
null
On the Effectiveness and Generalization of Race Representations for Debiasing High-Stakes Decisions
We use distributed alignment search to identify race subspaces in LLMs and intervene on them to debias their decisions in college admissions and hiring.
Understanding and mitigating biases is critical for the adoption of large language models (LLMs) in high-stakes decision-making. We introduce Admissions and Hiring, decision tasks with hypothetical applicant profiles where a person's race can be inferred from their name, as simplified test beds for racial bias. We show that Gemma 2B Instruct and LLaMA 3.2 3B Instruct exhibit strong biases. Gemma grants admission to 26% more White than Black applicants, and LLaMA hires 60% more Asian than White applicants. We demonstrate that these biases are resistant to prompt engineering: multiple prompting strategies all fail to promote fairness. In contrast, using distributed alignment search, we can identify "race subspaces" within model activations and intervene on them to debias model decisions. Averaging the representation across all races within the subspaces reduces Gemma's bias by 37-57%. Finally, we examine the generalizability of Gemma's race subspaces, and find limited evidence for generalization, where changing the prompt format can affect the race representation. Our work suggests mechanistic approaches may provide a promising venue for improving the fairness of LLMs, but a universal race representation remains elusive.
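As a sketch of the intervention described above (averaging the representation across races within a learned subspace), assuming the subspace basis has already been found (e.g., by distributed alignment search); the helper below and its toy usage are hypothetical.

```python
import torch

def average_race_subspace(activations, race_basis, group_ids):
    """Replace each example's component inside the race subspace with the
    average of the per-group mean components, leaving the orthogonal part
    of the representation untouched."""
    B = race_basis                              # (d, k), orthonormal columns
    coords = activations @ B                    # (n, k) subspace coordinates
    group_means = torch.stack([coords[group_ids == g].mean(dim=0)
                               for g in group_ids.unique()])
    target = group_means.mean(dim=0)            # averaged across groups
    return activations + (target - coords) @ B.T

torch.manual_seed(0)
acts = torch.randn(6, 16)                       # toy activations
basis, _ = torch.linalg.qr(torch.randn(16, 2))  # stand-in "race subspace"
groups = torch.tensor([0, 0, 1, 1, 2, 2])
print(average_race_subspace(acts, basis, groups).shape)
```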
[ "Dang Nguyen", "Chenhao Tan" ]
https://openreview.net/forum?id=DDtwtoAMjA
DDtwtoAMjA
DDtwtoAMjA
[ "~Dang_Nguyen4", "~Chenhao_Tan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/cf3124955d897b33a22cd9050e6946b6d20f8234.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "fairness and bias", "interpretability" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
While the authors addressed these concerns, given the sensitive nature of the topic, someone with more experiences in ethics than me should probably take a second look.
@inproceedings{ nguyen2025on, title={On the Effectiveness and Generalization of Race Representations for Debiasing High-Stakes Decisions}, author={Dang Nguyen and Chenhao Tan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=DDtwtoAMjA} }
nguyen|on_the_effectiveness_and_generalization_of_race_representations_for_debiasing_highstakes_decisions
null
true
null
null
null
Multi-Agent Systems Execute Arbitrary Malicious Code
We discover, formalize, and demonstrate the effectiveness (relative to direct and indirect prompt injection) of control flow hijacking attacks on several deployed LLM-based multi-agent systems.
Multi-agent systems coordinate LLM-based agents to perform tasks on users' behalf. In real-world applications, multi-agent systems will inevitably interact with untrusted inputs, such as malicious Web content, files, email attachments, and more. Using several recently proposed multi-agent frameworks as concrete examples, we demonstrate that adversarial content can hijack control and communication within the system to invoke unsafe agents and functionalities. This results in a complete security breach, up to execution of arbitrary malicious code on the user's device or exfiltration of sensitive data from the user's containerized environment. For example, **when agents are instantiated with GPT-4o, Web-based attacks successfully cause the multi-agent system to execute arbitrary malicious code in 58-90% of trials** (depending on the orchestrator). In some model-orchestrator configurations, the attack success rate is 100%. We also demonstrate that these attacks succeed even if individual agents are not susceptible to direct or indirect prompt injection, and even if they refuse to perform harmful actions. We hope that these results will motivate development of trust and security models for multi-agent systems before they are widely deployed.
[ "Harold Triedman", "Rishi Dev Jha", "Vitaly Shmatikov" ]
https://openreview.net/forum?id=DAozI4etUp
DAozI4etUp
DAozI4etUp
[ "~Harold_Triedman2", "~Rishi_Dev_Jha1", "~Vitaly_Shmatikov1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/39f5ae48e1ab4570c0829d3a02fdb35a330c1252.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "multi-agent systems", "LLMs", "AI security", "prompt injection", "control flow hijacking" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
The paper proposes attack methods for multi-agent systems.
@inproceedings{ triedman2025multiagent, title={Multi-Agent Systems Execute Arbitrary Malicious Code}, author={Harold Triedman and Rishi Dev Jha and Vitaly Shmatikov}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=DAozI4etUp} }
triedman|multiagent_systems_execute_arbitrary_malicious_code
/attachment/732db52e05322dcb44766df874d843221176e3b2.zip
null
null
null
null
Stuffed Mamba: Oversized States Lead to the Inability to Forget
We discover and explain why Mamba-based models cannot robustly forget past information, which hurts length generalization abilities.
Recent advancements in recurrent architectures, such as Mamba and RWKV, have showcased strong language capabilities. Unlike transformer-based models, these architectures encode all contextual information into a fixed-size state, leading to great inference efficiency. However, this approach can cause information interference, where different token data conflicts, resulting in performance degradation and incoherent outputs beyond a certain context length. To prevent this, most RNNs incorporate mechanisms designed to "forget" earlier tokens. In this paper, we reveal that Mamba-based models struggle to effectively forget earlier tokens even with built-in forgetting mechanisms. We demonstrate that this issue stems from training on contexts that are too short for the state size, enabling the model to perform well without needing to learn how to forget. Then, we show that the minimum training length required for the model to learn forgetting scales linearly with the state size, and the maximum context length for accurate retrieval of a 5-digit passkey scales exponentially with the state size, indicating that the model retains some information beyond the point where forgetting begins. These findings highlight a critical limitation in current RNN architectures and provide valuable insights for improving long-context modeling. Our work suggests that future RNN designs must account for the interplay between state size, training length, and forgetting mechanisms to achieve robust performance in long-context tasks.
[ "Yingfa Chen", "Xinrong Zhang", "Shengding Hu", "Xu Han", "Zhiyuan Liu", "Maosong Sun" ]
https://openreview.net/forum?id=CdRauNXD1w
CdRauNXD1w
CdRauNXD1w
[ "~Yingfa_Chen1", "~Xinrong_Zhang1", "~Shengding_Hu2", "~Xu_Han2", "~Zhiyuan_Liu1", "~Maosong_Sun1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c45b4cd6b5ec3b57a4a54ed3ad7c6a8935d88e3a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "state space models", "mamba", "long-context modeling", "linear attention" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ chen2025stuffed, title={Stuffed Mamba: Oversized States Lead to the Inability to Forget}, author={Yingfa Chen and Xinrong Zhang and Shengding Hu and Xu Han and Zhiyuan Liu and Maosong Sun}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=CdRauNXD1w} }
chen|stuffed_mamba_oversized_states_lead_to_the_inability_to_forget
null
null
null
null
null
Transformers are Efficient Compilers, Provably
We prove transformers can efficiently act as compilers.
Transformer-based large language models (LLMs) have demonstrated surprisingly robust performance across a wide range of language-related tasks, including programming language understanding and generation. In this paper, we take the first steps towards a formal investigation of using transformers as compilers from an expressive power perspective. To this end, we introduce a representative programming language, **Mini-Husky**, which encapsulates key features of modern C-like languages. We show that if the input code sequence has a bounded depth in both the Abstract Syntax Tree (AST) and type inference (reasonable assumptions based on the *clean code principle*), then the number of parameters required by transformers depends only on the *logarithm of the input sequence length* to handle compilation tasks, such as AST construction, symbol resolution, and type analysis. A significant technical challenge stems from the fact that transformers operate at a low level, where each layer processes the input sequence as raw vectors without explicitly associating them with predefined structure or meaning. In contrast, high-level compiler tasks necessitate managing intricate relationships and structured program information. Our primary technical contribution is the development of a domain-specific language, **Cybertron**, which generates formal proofs of the transformer's expressive power, scaling to address compiler tasks. We further establish that recurrent neural networks (RNNs) require at least a linear number of parameters relative to the input sequence, leading to an exponential separation between transformers and RNNs. Finally, we empirically validate our theoretical results by comparing transformers and RNNs on compiler tasks within **Mini-Husky**.
[ "Xiyu Zhai", "Runlong Zhou", "Liao Zhang", "Simon Shaolei Du" ]
https://openreview.net/forum?id=CaWkEqUjxs
CaWkEqUjxs
CaWkEqUjxs
[ "~Xiyu_Zhai1", "~Runlong_Zhou1", "~Liao_Zhang4", "~Simon_Shaolei_Du1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8b6f0534c3aa6097dcbf8ae63730c4ca86ba1676.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Transformers", "Expressive Power", "Programming Language", "Attention Mechanism", "Compiler" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhai2025transformers, title={Transformers are Efficient Compilers, Provably}, author={Xiyu Zhai and Runlong Zhou and Liao Zhang and Simon Shaolei Du}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=CaWkEqUjxs} }
zhai|transformers_are_efficient_compilers_provably
/attachment/9e3b0798f1a0435b5281691be7542214bae68a5d.zip
null
null
null
null
Correctness-Guaranteed Code Generation via Constrained Decoding
We present a constrained decoding algorithm that uses context-sensitive parsing with non-extensible regular expressions to generate semantically correct programs, with guarantees that can be extended to runtime correctness.
Language Models (LMs) are increasingly being used for code generation, but ensuring the correctness of generated programs remains a significant challenge. Although imperfect code may be acceptable during software development with human oversight, domains such as video games and robotics require one-shot correctness for runtime-critical components. We present a constrained decoding algorithm for generating semantically correct programs that incorporates a context-sensitive parser, which, at each step, outputs a regular expression that satisfies a critical non-extensible property to guide the generation of the next token sequence that can continue to a correct program. To build such a context-sensitive parser, we propose a framework of a dynamic tree of parsers (ToP) during parsing, where each parser corresponds to a modular context-free grammar enriched with contextual information such as variable scopes and type constraints, with tree branches representing ambiguity in the future code segment. We demonstrate our approach through sLua, a strongly typed variant of Lua, showing that our method can generate semantically correct programs conforming to any prescribed scripting API. We further show that, with careful design, our semantic guarantees extend to runtime correctness, as validated in the application of generating game mechanics for a roguelike video game.
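A toy sketch of regex-guided token filtering, which is only one ingredient of the approach above: it does not model the tree of parsers, type constraints, or the non-extensible property, and it assumes the third-party `regex` package's partial-match support (`partial=True`) to test whether a string is still a viable prefix of a match.

```python
import regex  # third-party package; provides partial (prefix) matching

def allowed_tokens(vocab, generated, pattern):
    """Keep the vocabulary tokens that leave `generated` consistent with the
    parser-supplied regular expression for the current code segment."""
    keep = []
    for tok in vocab:
        m = regex.fullmatch(pattern, generated + tok, partial=True)
        if m is not None:   # either a complete match or a viable prefix of one
            keep.append(tok)
    return keep

# Toy example: suppose the parser says the next segment must be `return <ident>`.
pattern = r"return [A-Za-z_][A-Za-z0-9_]*"
vocab = ["return ", "count", "nil", "42", "+", " + "]

print(allowed_tokens(vocab, "", pattern))         # only "return " survives
print(allowed_tokens(vocab, "return ", pattern))  # identifiers survive; "42", "+" do not
```

A real decoder would use this mask to zero out the logits of disallowed tokens before sampling, rather than picking the first allowed token.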
[ "Lingxiao Li", "salar rahili", "Yiwei Zhao" ]
https://openreview.net/forum?id=CYiXNIQegF
CYiXNIQegF
CYiXNIQegF
[ "~Lingxiao_Li1", "~salar_rahili1", "~Yiwei_Zhao1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5a76b0d026b1c2de95307b29366b4f358e3d2f65.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "code generation", "constrained decoding", "correctness", "llm" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025correctnessguaranteed, title={Correctness-Guaranteed Code Generation via Constrained Decoding}, author={Lingxiao Li and salar rahili and Yiwei Zhao}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=CYiXNIQegF} }
li|correctnessguaranteed_code_generation_via_constrained_decoding
null
null
null
null
null
X-EcoMLA: Upcycling Pre-Trained Attention into MLA for Efficient and Extreme KV Compression
Upcycling Pre-Trained Attention into MLA for Efficient and Extreme KV Compression
Multi-head latent attention (MLA) is designed to optimize KV cache memory through low-rank key-value joint compression. Rather than caching keys and values separately, MLA stores their compressed latent representations, reducing memory overhead while maintaining performance. While MLA improves memory efficiency without compromising language model accuracy, its major limitation lies in its integration during the pre-training phase, requiring models to be trained from scratch. This raises a key question: can we use MLA’s benefits fully or partially in models that have already been pre-trained with different attention mechanisms? In this paper, we propose X-EcoMLA, which deploys post-training distillation to enable the upcycling of Transformer-based attention into an efficient hybrid MLA variant through lightweight post-training adaptation, bypassing the need for extensive pre-training. We demonstrate that leveraging the dark knowledge of a well-trained model can enhance training accuracy and enable extreme KV cache compression in MLA without compromising model performance. The experimental results show that our proposed method can effectively compress the KV cache while preserving the performance on the benchmarks; specifically, for the Llama3.2-1B-Instruct baseline, a 6.4× compression achieves the same average score by using only 3.6B training tokens and 70 GPU hours on AMD MI300, whereas a 10.6× compression has less than 0.1\% average score drop with 7B training tokens and 140 GPU hours. The code for this work is available at \url{https://github.com/AMD-AIG-AIMA/AMD-Hybrid-Models}.
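For readers unfamiliar with MLA-style caching, here is a simplified PyTorch sketch of attention with a low-rank latent KV cache; it omits RoPE handling, causal masking, and the paper's distillation-based upcycling, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class LatentKVAttention(nn.Module):
    """Minimal MLA-style attention: cache one low-rank latent per token
    instead of full per-head keys and values (no RoPE, no masking)."""
    def __init__(self, d_model=512, n_heads=8, d_latent=64):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        self.kv_down = nn.Linear(d_model, d_latent, bias=False)   # output is cached
        self.k_up = nn.Linear(d_latent, d_model, bias=False)
        self.v_up = nn.Linear(d_latent, d_model, bias=False)
        self.o_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x, latent_cache=None):
        B, T, D = x.shape
        c_kv = self.kv_down(x)                      # (B, T, d_latent)
        if latent_cache is not None:                # append to the KV cache
            c_kv = torch.cat([latent_cache, c_kv], dim=1)
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_up(c_kv).view(B, -1, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(c_kv).view(B, -1, self.n_heads, self.d_head).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-1, -2) / self.d_head**0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, T, D)
        return self.o_proj(out), c_kv               # cache only the latent

x = torch.randn(2, 10, 512)
layer = LatentKVAttention()
y, cache = layer(x)
print(y.shape, cache.shape)   # cache is (2, 10, 64) vs (2, 10, 2*512) for plain MHA K+V
```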
[ "Guihong Li", "Mehdi Rezagholizadeh", "Mingyu Yang", "Vikram Appia", "Emad Barsoum" ]
https://openreview.net/forum?id=CPJ9EAeYfd
CPJ9EAeYfd
CPJ9EAeYfd
[ "~Guihong_Li1", "~Mehdi_Rezagholizadeh1", "~Mingyu_Yang5", "~Vikram_Appia1", "~Emad_Barsoum1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/983fec2798cf82b016056ec8bd32a8b0c74440ec.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "MLA", "Multi-head Attention", "LLM", "Efficient" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025xecomla, title={X-Eco{MLA}: Upcycling Pre-Trained Attention into {MLA} for Efficient and Extreme {KV} Compression}, author={Guihong Li and Mehdi Rezagholizadeh and Mingyu Yang and Vikram Appia and Emad Barsoum}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=CPJ9EAeYfd} }
li|xecomla_upcycling_pretrained_attention_into_mla_for_efficient_and_extreme_kv_compression
null
null
null
null
null
Collab-RAG: Boosting Retrieval-Augmented Generation for Complex Question Answering via White-Box and Black-Box LLM Collaboration
We introduce Collab-RAG, a collaborative training framework that leverages mutual enhancement between a white-box small language model (SLM) and a black-box large language model (LLM) for RAG.
Retrieval-Augmented Generation (RAG) systems often struggle to handle multi-hop question-answering tasks accurately due to irrelevant context retrieval and limited complex reasoning capabilities. We introduce Collab-RAG, a collaborative training framework that leverages mutual enhancement between a white-box small language model (SLM) and a black-box large language model (LLM) for RAG. Specifically, the SLM decomposes complex queries into simpler sub-questions, thus enhancing the accuracy of the retrieval and facilitating more effective reasoning by the black-box LLM. Concurrently, the black-box LLM provides feedback signals to improve the SLM's decomposition capability. We observe that Collab-RAG relies solely on supervision from an affordable black-box LLM without additional distillation from frontier LLMs, yet demonstrates strong generalization across multiple black-box LLMs. Experimental evaluations across five multi-hop QA datasets demonstrate that Collab-RAG substantially outperforms existing black-box-only and SLM fine-tuning baselines by 1.8%-14.2% on average. In particular, our fine-tuned 3B SLM surpasses a frozen 32B LLM in question decomposition, highlighting the efficiency of Collab-RAG in improving reasoning and retrieval for complex questions. Our implementation is available at \url{https://github.com/ritaranx/Collab-RAG/}
[ "Ran Xu", "Wenqi Shi", "Yuchen Zhuang", "Yue Yu", "Joyce C. Ho", "Haoyu Wang", "Carl Yang" ]
https://openreview.net/forum?id=CODs4jSGhN
CODs4jSGhN
CODs4jSGhN
[ "~Ran_Xu4", "~Wenqi_Shi1", "~Yuchen_Zhuang1", "~Yue_Yu2", "~Joyce_C._Ho1", "~Haoyu_Wang6", "~Carl_Yang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f55a9b1df84306fd7c697fa91787e3480efd1baa.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "large language model", "retrieval augmented generation", "complex question answering" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xu2025collabrag, title={Collab-{RAG}: Boosting Retrieval-Augmented Generation for Complex Question Answering via White-Box and Black-Box {LLM} Collaboration}, author={Ran Xu and Wenqi Shi and Yuchen Zhuang and Yue Yu and Joyce C. Ho and Haoyu Wang and Carl Yang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=CODs4jSGhN} }
xu|collabrag_boosting_retrievalaugmented_generation_for_complex_question_answering_via_whitebox_and_blackbox_llm_collaboration
null
null
null
null
null
The Surprising Effectiveness of Membership Inference with Simple N-Gram Coverage
We introduce a membership inference attack for language models based solely on N-gram overlap between model outputs and candidate documents, making it a versatile approach for both closed and open weight models.
Membership inference attacks serve as a useful tool for the fair use of language models, such as detecting potential copyright infringement and auditing data leakage. However, many current state-of-the-art attacks require access to models' hidden states or probability distributions, which prevents investigation into more widely used, API-access-only models like GPT-4. In this work, we introduce N-Gram Coverage Attack, a membership inference attack that relies **solely** on text outputs from the target model, enabling attacks on completely black-box models. We leverage the observation that models are more likely to memorize and subsequently generate text patterns that were commonly observed in their training data. Specifically, to make a prediction on a candidate member, N-Gram Coverage Attack first obtains multiple model generations conditioned on a prefix of the candidate. It then uses n-gram overlap metrics to compute and aggregate the similarities of these outputs with the ground truth suffix; high similarities indicate likely membership. We first demonstrate on a diverse set of existing benchmarks that N-Gram Coverage Attack outperforms other black-box methods while also impressively achieving performance comparable to or even better than state-of-the-art white-box attacks --- despite having access to only text outputs. Interestingly, we find that the success rate of our method scales with the attack compute budget --- as we increase the number of sequences generated from the target model conditioned on the prefix, attack performance tends to improve. Having verified the accuracy of our method, we use it to investigate previously unstudied closed OpenAI models on multiple domains. We find that more recent models, such as GPT-4o, exhibit increased robustness to membership inference, suggesting an evolving trend toward improved privacy protections.
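A minimal sketch of the scoring described above: sample several continuations of a candidate document's prefix and measure how much of the true suffix's n-grams they cover. The aggregation choice, the `fake_generate` stand-in, and the hyperparameters are assumptions for illustration.

```python
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_coverage(generation, true_suffix, n=3):
    """Fraction of the true suffix's n-grams that appear in a model generation."""
    gen_set = set(ngrams(generation.split(), n))
    suffix_ngrams = ngrams(true_suffix.split(), n)
    if not suffix_ngrams:
        return 0.0
    return sum(g in gen_set for g in suffix_ngrams) / len(suffix_ngrams)

def membership_score(generate_fn, prefix, true_suffix, k=10, n=3):
    """Aggregate coverage over k sampled continuations; higher scores suggest
    the candidate document was seen during training (max is one simple choice
    of aggregation)."""
    return max(ngram_coverage(generate_fn(prefix), true_suffix, n) for _ in range(k))

# Toy stand-in for querying a black-box model over an API.
def fake_generate(prefix):
    return prefix + " the quick brown fox jumps over the lazy dog"

doc_prefix = "A well-known pangram begins:"
doc_suffix = "the quick brown fox jumps over the lazy dog"
print(membership_score(fake_generate, doc_prefix, doc_suffix))  # 1.0 -> likely member
```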
[ "Skyler Hallinan", "Jaehun Jung", "Melanie Sclar", "Ximing Lu", "Abhilasha Ravichander", "Sahana Ramnath", "Yejin Choi", "Sai Praneeth Karimireddy", "Niloofar Mireshghallah", "Xiang Ren" ]
https://openreview.net/forum?id=CNWlNF8VOm
CNWlNF8VOm
CNWlNF8VOm
[ "~Skyler_Hallinan1", "~Jaehun_Jung1", "~Melanie_Sclar1", "~Ximing_Lu1", "~Abhilasha_Ravichander2", "~Sahana_Ramnath2", "~Yejin_Choi1", "~Sai_Praneeth_Karimireddy1", "~Niloofar_Mireshghallah1", "~Xiang_Ren1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b33621ab471b4f0ea1e6084f26bc6fcf03325802.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "membership inference", "membership inference attack", "privacy", "memorization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hallinan2025the, title={The Surprising Effectiveness of Membership Inference with Simple N-Gram Coverage}, author={Skyler Hallinan and Jaehun Jung and Melanie Sclar and Ximing Lu and Abhilasha Ravichander and Sahana Ramnath and Yejin Choi and Sai Praneeth Karimireddy and Niloofar Mireshghallah and Xiang Ren}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=CNWlNF8VOm} }
hallinan|the_surprising_effectiveness_of_membership_inference_with_simple_ngram_coverage
null
null
null
null
null
Efficient Process Reward Model Training via Active Learning
We developed an active learning strategy for PRM training that achieves new SOTA performance on ProcessBench using at most 20\% of the labeling required by previous SOTAs.
Process Reward Models (PRMs) provide step-level supervision to large language models (LLMs), but scaling up training data annotation remains challenging for both humans and LLMs. To address this limitation, we propose an active learning approach, ActPRM, which proactively selects the most uncertain samples for training, substantially reducing labeling costs. During training, we use the PRM to estimate uncertainty after the forward pass, retaining only highly uncertain data. A capable yet costly reasoning model then labels this data; we then compute the loss w.r.t. the labels and update the PRM’s weights. We compare ActPRM against vanilla fine-tuning in a pool-based active learning setting, demonstrating that ActPRM reduces annotation by 50\% while achieving comparable or even better performance. Beyond annotation efficiency, we further advance the actively trained PRM by filtering over 1M math reasoning trajectories with ActPRM, retaining 60\% of the data. Subsequent training on this selected dataset yields a new state-of-the-art (SOTA) PRM on ProcessBench (75.0\%) and PRMBench (65.5\%) compared with same-sized models.
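A hedged sketch of the uncertainty-based selection step, assuming the PRM emits per-step correctness probabilities and that binary entropy with a fixed threshold serves as the uncertainty measure; the paper's actual estimator, threshold, and labeling model may differ.

```python
import math

def step_entropy(p: float) -> float:
    """Binary entropy of a per-step correctness probability."""
    eps = 1e-8
    p = min(max(p, eps), 1.0 - eps)
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def select_uncertain(pool, prm_scores, threshold=0.6):
    """Keep only trajectories whose most uncertain step exceeds the entropy threshold.

    pool:       list of reasoning trajectories (any representation)
    prm_scores: list of lists of per-step correctness probabilities from the current PRM
    """
    selected = []
    for traj, steps in zip(pool, prm_scores):
        if max(step_entropy(p) for p in steps) > threshold:
            selected.append(traj)  # route this trajectory to the (costly) labeler
    return selected
```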
[ "Keyu Duan", "Zichen Liu", "Xin Mao", "Tianyu Pang", "Changyu Chen", "Qiguang Chen", "Michael Qizhe Shieh", "Longxu Dou" ]
https://openreview.net/forum?id=CJ2FmPmoDE
CJ2FmPmoDE
CJ2FmPmoDE
[ "~Keyu_Duan1", "~Zichen_Liu1", "~Xin_Mao3", "~Tianyu_Pang1", "~Changyu_Chen2", "~Qiguang_Chen1", "~Michael_Qizhe_Shieh1", "~Longxu_Dou1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/6c42b4cd1c9978c36fc15d30af6dbd19b08570d8.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Process Reward Model", "Active Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ duan2025efficient, title={Efficient Process Reward Model Training via Active Learning}, author={Keyu Duan and Zichen Liu and Xin Mao and Tianyu Pang and Changyu Chen and Qiguang Chen and Michael Qizhe Shieh and Longxu Dou}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=CJ2FmPmoDE} }
duan|efficient_process_reward_model_training_via_active_learning
null
true
null
null
null
CRABS: A syntactic-semantic pincer strategy for bounding LLM interpretation of Python notebooks
This paper proposes a notebook understanding task yielding an information flow graph and corresponding cell execution dependency graph for a notebook, together with a Capture and Resolve Assisted Bounding Strategy (CRABS).
Recognizing the information flows and operations comprising data science and machine learning Python notebooks is critical for evaluating, reusing, and adapting notebooks for new tasks. Investigating a notebook via re-execution often is impractical due to the challenges of resolving data and software dependencies. While Large Language Models (LLMs) pre-trained on large codebases have demonstrated effectiveness in understanding code without running it, we observe that they fail to understand some realistic notebooks due to hallucinations and long-context challenges. To address these issues, we propose a notebook understanding task yielding an information flow graph and corresponding cell execution dependency graph for a notebook, and demonstrate the effectiveness of a pincer strategy that uses limited syntactic analysis to assist full comprehension of the notebook using an LLM. Our Capture and Resolve Assisted Bounding Strategy (CRABS) employs shallow syntactic parsing and analysis of the abstract syntax tree (AST) to capture the correct interpretation of a notebook between lower and upper estimates of the inter-cell I/O set$\textemdash$the flows of information into or out of cells via variables$\textemdash$then uses an LLM to resolve remaining ambiguities via cell-by-cell zero-shot learning, thereby identifying the true data inputs and outputs of each cell. We evaluate and demonstrate the effectiveness of our approach using an annotated dataset of 50 representative, highly up-voted Kaggle notebooks that together represent 3454 actual cell inputs and outputs. The LLM correctly resolves 1397 of 1425 (98%) ambiguities left by analyzing the syntactic structure of these notebooks. Across 50 notebooks, CRABS achieves average $F_1$ scores of 98% identifying cell-to-cell information flows and 99% identifying transitive cell execution dependencies. Moreover, 37 out of the 50 (74%) individual information flow graphs and 41 out of 50 (82%) cell execution dependency graphs match the ground truth exactly.
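A minimal sketch of the shallow syntactic side of this strategy, using Python's standard `ast` module to bound a cell's candidate inputs and outputs; the capture/resolve rules and the LLM resolution step from the paper are not reproduced here.

```python
import ast

def cell_io_estimate(cell_source: str):
    """Rough per-cell I/O estimate from the AST: names loaded (candidate inputs)
    and names stored (candidate outputs). Ambiguous cases, such as in-place
    mutation via method calls, are what an LLM resolution step would handle."""
    tree = ast.parse(cell_source)
    loads, stores = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Load):
                loads.add(node.id)
            elif isinstance(node.ctx, (ast.Store, ast.Del)):
                stores.add(node.id)
    # Candidate inputs are names read but never assigned within this cell.
    return {"inputs": loads - stores, "outputs": stores}

print(cell_io_estimate("df2 = df.dropna()\nprint(df2.shape)"))
# e.g. {'inputs': {'df', 'print'}, 'outputs': {'df2'}} (builtins would be filtered in practice)
```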
[ "Meng Li", "Timothy M. McPhillips", "Dingmin Wang", "Shin-Rong Tsai", "Bertram Ludäscher" ]
https://openreview.net/forum?id=CB3CeOWo0J
CB3CeOWo0J
CB3CeOWo0J
[ "~Meng_Li25", "~Timothy_M._McPhillips1", "~Dingmin_Wang1", "~Shin-Rong_Tsai1", "~Bertram_Ludäscher1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7c102d030812d1809798843cf22f21f9edad2797.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "notebook understanding", "LLM", "YesWorkflow", "data flow", "provenance" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025crabs, title={{CRABS}: A syntactic-semantic pincer strategy for bounding {LLM} interpretation of Python notebooks}, author={Meng Li and Timothy M. McPhillips and Dingmin Wang and Shin-Rong Tsai and Bertram Lud{\"a}scher}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=CB3CeOWo0J} }
li|crabs_a_syntacticsemantic_pincer_strategy_for_bounding_llm_interpretation_of_python_notebooks
null
null
null
null
null
Resource-efficient Inference with Foundation Model Programs
We give a program synthesis method for resource-efficient multimodal reasoning in streaming tasks. Our programs exploit task structure and tailor submodules to each input, so as to achieve an optimal cost-performance tradeoff.
The inference-time resource costs of large language and vision models present a growing challenge in production deployments. We propose the use of ***foundation model programs***, i.e., programs that can invoke foundation models with varying resource costs and performance, as an approach to this problem. Specifically, we present a method that translates a task into a program, then learns a policy for resource allocation that, on each input, selects foundation model "backends" for each program module. The policy uses smaller, cheaper backends to handle simpler subtasks, while allowing more complex subtasks to leverage larger, more capable models. We evaluate the method on two new "streaming" visual question-answering tasks in which a system answers a question on a sequence of inputs, receiving ground-truth feedback after each answer. Compared to monolithic multi-modal models, our implementation achieves up to 98\% resource savings with minimal accuracy loss, demonstrating its potential for scalable and resource-efficient multi-modal inference. The source code and the benchmarks are available at [GitHub](https://github.com/Flitternie/FMProgramming).
[ "Lunyiu Nie", "Zhimin Ding", "Kevin Yu", "Marco Cheung", "Chris Jermaine", "Swarat Chaudhuri" ]
https://openreview.net/forum?id=C5mb473GMY
C5mb473GMY
C5mb473GMY
[ "~Lunyiu_Nie1", "~Zhimin_Ding1", "~Kevin_Yu4", "~Marco_Cheung1", "~Chris_Jermaine1", "~Swarat_Chaudhuri2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f9c69537f2014cf191ef12edfd3cce3fd6d91b1e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Neurosymblic Programming", "Multimodal LMs", "Agent Programming", "LLM Computational Efficiency", "Multi-modal Reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ nie2025resourceefficient, title={Resource-efficient Inference with Foundation Model Programs}, author={Lunyiu Nie and Zhimin Ding and Kevin Yu and Marco Cheung and Chris Jermaine and Swarat Chaudhuri}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=C5mb473GMY} }
nie|resourceefficient_inference_with_foundation_model_programs
null
null
null
null
null
The Negation Bias in Large Language Models: Investigating bias reflected in linguistic markers
This study investigates whether LLMs acquire subtle linguistic biases, such as negation bias, by combining social science theories with NLP research and evaluating eight models using a novel challenge set and perplexity as a metric
Large Language Models trained on large-scale uncontrolled corpora often encode stereotypes and biases, which can be displayed through harmful text generation or biased associations. However, do they also pick up subtler linguistic patterns that can potentially reinforce and communicate biases and stereotypes, as humans do? We aim to bridge theoretical insights from social science with bias research in NLP by designing controlled, theoretically motivated LLM experiments to elicit this type of bias. Our case study is negation bias, the bias that humans have towards using negation to describe situations that challenge common stereotypes. We construct an evaluation dataset containing negated and affirmed stereotypical and anti-stereotypical sentences and evaluate the performance of eight language models using perplexity as a metric for measuring model surprisal. We find that the autoregressive decoder models in our experiment exhibit this bias, while we do not find evidence for it among the stacked encoder models.
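A minimal sketch of using perplexity as a surprisal measure for a single sentence with an autoregressive Hugging Face model; the checkpoint name is a placeholder, and the paper's models, stimuli, and comparison protocol are not reproduced.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # placeholder; any autoregressive checkpoint works the same way
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

def perplexity(sentence: str) -> float:
    """Exponentiated mean token-level negative log-likelihood of the sentence."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL over predicted tokens
    return torch.exp(loss).item()

# Negation-bias style comparison: surprisal of negated vs. affirmed phrasings.
print(perplexity("The doctor is not a woman."))
print(perplexity("The doctor is a woman."))
```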
[ "Yishan Wang", "Pia Sommerauer", "Jelke Bloem" ]
https://openreview.net/forum?id=BuXZtHTefA
BuXZtHTefA
BuXZtHTefA
[ "~Yishan_Wang1", "~Pia_Sommerauer1", "~Jelke_Bloem1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/28c3760ba4126e30194f7a7106a80ecd43bfee99.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "bias", "evaluation", "perplexity", "fairness", "negation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025the, title={The Negation Bias in Large Language Models: Investigating bias reflected in linguistic markers}, author={Yishan Wang and Pia Sommerauer and Jelke Bloem}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=BuXZtHTefA} }
wang|the_negation_bias_in_large_language_models_investigating_bias_reflected_in_linguistic_markers
null
null
null
null
null
Insights from the Inverse: Reconstructing LLM Training Goals Through Inverse Reinforcement Learning
We show that we can reconstruct reward models that underlie LLM training
Large language models (LLMs) trained with Reinforcement Learning from Human Feedback (RLHF) have demonstrated remarkable capabilities, but their underlying reward functions and decision-making processes remain opaque. This paper introduces a novel approach to interpreting LLMs by applying inverse reinforcement learning (IRL) to recover their implicit reward functions. We conduct experiments on toxicity-aligned LLMs of varying sizes, extracting reward models that achieve up to 85\% accuracy in predicting human preferences. Our analysis reveals key insights into the non-identifiability of reward functions, the relationship between model size and interpretability, and potential pitfalls in the RLHF process. We demonstrate that IRL-derived reward models can be used to fine-tune new LLMs, resulting in comparable or improved performance on toxicity benchmarks. This work provides a new lens for understanding and improving LLM alignment, with implications for the responsible development and deployment of these powerful systems.
[ "Jared Joselowitz", "Ritam Majumdar", "Arjun Jagota", "Matthieu Bou", "Nyal Patel", "Satyapriya Krishna", "Sonali Parbhoo" ]
https://openreview.net/forum?id=Bs5Jb285qv
Bs5Jb285qv
Bs5Jb285qv
[ "~Jared_Joselowitz2", "~Ritam_Majumdar2", "~Arjun_Jagota1", "~Matthieu_Bou1", "~Nyal_Patel1", "~Satyapriya_Krishna2", "~Sonali_Parbhoo2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9913c9c6dbd465a2c3d0bebfac33387699ce9f71.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "inverse reinforcement learning", "reinforcement learning with human feedback", "LLMs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ joselowitz2025insights, title={Insights from the Inverse: Reconstructing {LLM} Training Goals Through Inverse Reinforcement Learning}, author={Jared Joselowitz and Ritam Majumdar and Arjun Jagota and Matthieu Bou and Nyal Patel and Satyapriya Krishna and Sonali Parbhoo}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Bs5Jb285qv} }
joselowitz|insights_from_the_inverse_reconstructing_llm_training_goals_through_inverse_reinforcement_learning
null
null
null
null
null
Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models
We present the first comprehensive overview and analysis of quantized LLMs on reasoning benchmarks, covering quantization configurations and algorithms, task difficulties, output lengths, and scaling effects.
Recent advancements in reasoning language models have demonstrated remarkable performance in complex tasks, but their extended chain-of-thought reasoning process increases inference overhead. While quantization has been widely adopted to reduce the inference cost of large language models, its impact on reasoning models remains understudied. In this paper, we conduct the first systematic study on quantized reasoning models, evaluating the open-sourced DeepSeek-R1-Distilled Qwen and LLaMA families ranging from 1.5B to 70B parameters, QwQ-32B, and Qwen3-8B. Our investigation covers weight, KV cache, and activation quantization using state-of-the-art algorithms at varying bit-widths, with extensive evaluation across mathematical (AIME, MATH-500), scientific (GPQA), and programming (LiveCodeBench) reasoning benchmarks. Our findings reveal that while lossless quantization can be achieved with W8A8 or W4A16 quantization, lower bit-widths introduce significant accuracy risks. We further identify model size, model origin, and task difficulty as critical determinants of performance. Contrary to expectations, quantized models do not exhibit increased output lengths. In addition, strategically scaling the model sizes or reasoning steps can effectively enhance the performance. All quantized models and codes are open-sourced in https://github.com/ruikangliu/Quantized-Reasoning-Models.
[ "Ruikang Liu", "Yuxuan Sun", "Manyi Zhang", "Haoli Bai", "Xianzhi Yu", "Tiezheng YU", "Chun Yuan", "Lu Hou" ]
https://openreview.net/forum?id=BM192Ps5Nv
BM192Ps5Nv
BM192Ps5Nv
[ "~Ruikang_Liu1", "~Yuxuan_Sun4", "~Manyi_Zhang2", "~Haoli_Bai2", "~Xianzhi_Yu1", "~Tiezheng_YU1", "~Chun_Yuan1", "~Lu_Hou2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/60bb19f257a02757d7a562f0b628ad0e99e7d099.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Quantization", "Reasoning", "Large Language models", "Accuracy" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liu2025quantization, title={Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models}, author={Ruikang Liu and Yuxuan Sun and Manyi Zhang and Haoli Bai and Xianzhi Yu and Tiezheng YU and Chun Yuan and Lu Hou}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=BM192Ps5Nv} }
liu|quantization_hurts_reasoning_an_empirical_study_on_quantized_reasoning_models
null
true
null
null
null
A Controlled Study on Long Context Extension and Generalization in LLMs
Using a controlled protocol to systematically study long context extension methods
Achieving robust textual comprehension and in-context learning requires language models capable of interpreting entire document contexts. However, scaling these models directly to long contexts remains technically challenging, prompting a surge of “extension” strategies. To date, rigorous comparisons among these approaches have been complicated by inconsistent base models, training data, and evaluation metrics, limiting our understanding of how long-context performance may differ from standard benchmarks. In this work, we introduce a controlled extension protocol and a standardized evaluation pipeline, enabling an apples-to-apples comparison across diverse long-context methods. Through extensive experiments, we uncover three key insights: (1) perplexity emerges as a helpful (albeit imperfect) indicator for gauging model quality on lengthy-context tasks, (2) approximate attention mechanisms exhibit systematic performance deficits on long-context benchmarks, and (3) exact fine-tuning remains robust within its extension range, although extrapolation beyond that range continues to pose challenges. All codebases, trained models, and checkpoints will be released, fostering transparency and accelerating progress in this critical area of AI research. Our results not only help clarify the current landscape of long-context modeling but also offer guidance for building more capable, context-aware language models.
[ "Yi Lu", "Jing Nathan Yan", "Songlin Yang", "Justin T Chiu", "Siyu Ren", "Fei Yuan", "Wenting Zhao", "Zhiyong Wu", "Alexander M Rush" ]
https://openreview.net/forum?id=BLonuGXDFu
BLonuGXDFu
BLonuGXDFu
[ "~Yi_Lu7", "~Jing_Nathan_Yan1", "~Songlin_Yang1", "~Justin_T_Chiu1", "~Siyu_Ren1", "~Fei_Yuan2", "~Wenting_Zhao1", "~Zhiyong_Wu3", "~Alexander_M_Rush1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3971f902ae6d9ba25a4ea675090562cd02cfad32.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Controlled Study", "Long Context", "Extension", "Benchmark", "Analysis" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lu2025a, title={A Controlled Study on Long Context Extension and Generalization in {LLM}s}, author={Yi Lu and Jing Nathan Yan and Songlin Yang and Justin T Chiu and Siyu Ren and Fei Yuan and Wenting Zhao and Zhiyong Wu and Alexander M Rush}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=BLonuGXDFu} }
lu|a_controlled_study_on_long_context_extension_and_generalization_in_llms
/attachment/7196b614579be1cfe679bde4ae69ad6ad3203024.zip
null
null
null
null
Exposing and Patching the Flaws of Large Language Models in Social Character Simulation
This paper aims to unveil and improve the reliability of LLMs in social simulation scenarios.
Large Language Models (LLMs) are increasingly used for social character simulations, enabling applications in role-playing agents and Computational Social Science (CSS). However, their inherent flaws—such as inconsistencies in simulated roles—raise concerns about their reliability and trustworthiness. In this paper, we systematically investigate these flaws and explore potential solutions. To assess the reliability of LLM-based simulations, we introduce TrustSim, a benchmark dataset covering 10 CSS-related topics. Through experiments on 14 LLMs, we uncover persistent inconsistencies in simulated roles and find that higher general model performance does not necessarily correlate with greater simulation reliability. To mitigate these flaws, we propose Adaptive Learning Rate Based ORPO (AdaORPO), a reinforcement learning-based algorithm that improves simulation consistency across seven LLMs. Our study not only exposes critical weaknesses in LLM-driven social character simulations but also offers a pathway toward more robust and trustworthy simulations, laying the foundation for future advancements in this field.
[ "Yue Huang", "Zhengqing Yuan", "Yujun Zhou", "Kehan Guo", "Xiangqi Wang", "Haomin Zhuang", "Weixiang Sun", "Lichao Sun", "Jindong Wang", "Yanfang Ye", "Xiangliang Zhang" ]
https://openreview.net/forum?id=B5E3ijlLML
B5E3ijlLML
B5E3ijlLML
[ "~Yue_Huang9", "~Zhengqing_Yuan2", "~Yujun_Zhou1", "~Kehan_Guo1", "~Xiangqi_Wang1", "~Haomin_Zhuang1", "~Weixiang_Sun1", "~Lichao_Sun1", "~Jindong_Wang4", "~Yanfang_Ye1", "~Xiangliang_Zhang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0ed4f330c745e604c2c745c8391eea15a2adefe7.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Social simulation", "Large language model", "reliability" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ huang2025exposing, title={Exposing and Patching the Flaws of Large Language Models in Social Character Simulation}, author={Yue Huang and Zhengqing Yuan and Yujun Zhou and Kehan Guo and Xiangqi Wang and Haomin Zhuang and Weixiang Sun and Lichao Sun and Jindong Wang and Yanfang Ye and Xiangliang Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=B5E3ijlLML} }
huang|exposing_and_patching_the_flaws_of_large_language_models_in_social_character_simulation
null
null
null
null
null
Crosslingual Capabilities and Knowledge Barriers in Multilingual Large Language Models
This study reveals the crosslingual knowledge barrier for multilingual LLMs in both general (MMLU benchmark) and domain-specific (Harry Potter quiz and TOFU benchmark) contexts, and proposes to mitigate the barrier via mixed-language fine-tuning.
Large language models (LLMs) are typically multilingual due to pretraining on diverse multilingual corpora. But can these models relate corresponding concepts across languages, i.e., be crosslingual? This study evaluates state-of-the-art LLMs on inherently crosslingual tasks. We observe that while these models show promising surface-level crosslingual abilities on machine translation and embedding space analyses, they struggle with deeper crosslingual knowledge transfer, revealing a crosslingual knowledge barrier in both general (MMLU benchmark) and domain-specific (Harry Potter quiz and TOFU benchmark) contexts. Since simple inference-time mitigation methods offer only limited improvement, we propose fine-tuning of LLMs on mixed-language data, which effectively reduces these gaps, even when using out-of-domain datasets like WikiText. Our findings suggest the need for explicit optimization to unlock the full crosslingual potential of LLMs. Our code is available at https://github.com/google-research/crosslingual-knowledge-barriers.
[ "Lynn Chua", "Badih Ghazi", "Yangsibo Huang", "Pritish Kamath", "Ravi Kumar", "Pasin Manurangsi", "Amer Sinha", "Chulin Xie", "Chiyuan Zhang" ]
https://openreview.net/forum?id=AwRFhS5grK
AwRFhS5grK
AwRFhS5grK
[ "~Lynn_Chua1", "~Badih_Ghazi1", "~Yangsibo_Huang2", "~Pritish_Kamath2", "~Ravi_Kumar1", "~Pasin_Manurangsi2", "~Amer_Sinha1", "~Chulin_Xie1", "~Chiyuan_Zhang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/1480230c3a531e71cb1fc5cae376f2274735b3d6.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Multilingual", "Crosslingual Knowledge Barrier" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ chua2025crosslingual, title={Crosslingual Capabilities and Knowledge Barriers in Multilingual Large Language Models}, author={Lynn Chua and Badih Ghazi and Yangsibo Huang and Pritish Kamath and Ravi Kumar and Pasin Manurangsi and Amer Sinha and Chulin Xie and Chiyuan Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=AwRFhS5grK} }
chua|crosslingual_capabilities_and_knowledge_barriers_in_multilingual_large_language_models
/attachment/616e1b96cbe3d33d5f1d781241b1a6de30c0b1e3.zip
null
null
null
null
M-Prometheus: A Suite of Open Multilingual LLM Judges
We introduce M-Prometheus, a suite of open-weight multilingual LLM judges ranging from 3B to 14B parameters. M-Prometheus models outperform state-of-the-art open LLM judges.
Employing language models as evaluators of long-form output (LLM-as-a-Judge) has become the \textit{de facto} standard for automatic evaluation. However, most LLM judges have been optimized exclusively for English outputs, with strategies for enhancing judges' multilingual evaluation capabilities remaining largely unexplored in the current literature. This has created a disparity in the quality of automatic evaluation methods for other languages, ultimately hindering the development of models with better multilingual capabilities. To bridge this gap, we introduce M-Prometheus, a suite of open-weight LLM judges ranging from 3B to 14B parameters that can provide both direct assessment and pairwise comparison feedback on multilingual outputs. M-Prometheus models outperform state-of-the-art open LLM judges on multilingual reward benchmarks spanning more than 20 languages, as well as on literary machine translation evaluation covering 4 language pairs. Furthermore, we find M-Prometheus models can be used with quality-aware decoding methods to significantly improve generated outputs, showcasing their utility for the development of better multilingual models. Crucially, through extensive ablations, we identify key strategies for training an effective multilingual judge. Our findings highlight the significance of model size and base model selection, and the advantages of using natively multilingual data rather than translated data. We release our models, training dataset, and code to reproduce our experiments.
[ "José Pombal", "Dongkeun Yoon", "Patrick Fernandes", "Ian Wu", "Seungone Kim", "Ricardo Rei", "Graham Neubig", "Andre Martins" ]
https://openreview.net/forum?id=Atyk8lnIQQ
Atyk8lnIQQ
Atyk8lnIQQ
[ "~José_Pombal1", "~Dongkeun_Yoon1", "~Patrick_Fernandes1", "~Ian_Wu3", "~Seungone_Kim1", "~Ricardo_Rei1", "~Graham_Neubig1", "~Andre_Martins1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/bce51ef48e50ad17d19b1280bebc15dd9c87f1f0.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "automatic evaluation", "llm-as-a-judge", "multilinguality" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ pombal2025mprometheus, title={M-Prometheus: A Suite of Open Multilingual {LLM} Judges}, author={Jos{\'e} Pombal and Dongkeun Yoon and Patrick Fernandes and Ian Wu and Seungone Kim and Ricardo Rei and Graham Neubig and Andre Martins}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Atyk8lnIQQ} }
pombal|mprometheus_a_suite_of_open_multilingual_llm_judges
null
null
null
null
null
Language Models Fail to Introspect About Their Knowledge of Language
We study introspection in language models by comparing direct probability measurements with responses to metalinguistic prompts in two domains (grammaticality and word prediction) and find no clear evidence of introspection.
There has been recent interest in whether large language models (LLMs) can introspect about their own internal states. Such abilities would make LLMs more interpretable, and also validate the use of standard introspective methods in linguistics to evaluate grammatical knowledge in models (e.g., asking "Is this sentence grammatical?"). We systematically investigate emergent introspection across 21 open-source LLMs, in two domains where introspection is of theoretical interest: grammatical knowledge and word prediction. Crucially, in both domains, a model’s internal linguistic knowledge can be theoretically grounded in direct measurements of string probability. We then evaluate whether models' responses to metalinguistic prompts faithfully reflect their internal knowledge. We propose a new measure of introspection: the degree to which a model’s prompted responses predict its own string probabilities, beyond what would be predicted by another model with nearly identical internal knowledge. While both metalinguistic prompting and probability comparisons lead to high task accuracy, we do not find evidence that LLMs have privileged "self-access". By using general tasks, controlling for model similarity, and evaluating a wide range of open-source models, we show that LLMs cannot introspect, and add new evidence to the argument that prompted responses should not be conflated with models' linguistic generalizations.
[ "Siyuan Song", "Jennifer Hu", "Kyle Mahowald" ]
https://openreview.net/forum?id=AivRDOFi5H
AivRDOFi5H
AivRDOFi5H
[ "~Siyuan_Song1", "~Jennifer_Hu1", "~Kyle_Mahowald1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/417213e644b88606e8a4f01266a08cca7942cc93.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "introspection", "linguistic acceptability judgments", "syntax", "grammaticality", "surprisal", "metalinguistic", "metacognition" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ song2025language, title={Language Models Fail to Introspect About Their Knowledge of Language}, author={Siyuan Song and Jennifer Hu and Kyle Mahowald}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=AivRDOFi5H} }
song|language_models_fail_to_introspect_about_their_knowledge_of_language
null
null
null
null
null
Multilingual Contextualization of Large Language Models for Document-Level Machine Translation
We show that fine-tuning LLMs with multi-paradigm instructions from our curated DocBlocks dataset significantly improves document-level translation, outperforming prompting and agent-based methods while preserving sentence-level performance.
Large language models (LLMs) have demonstrated strong performance in sentence-level machine translation, but scaling to document-level translation remains challenging, particularly in modeling long-range dependencies and discourse phenomena across sentences and paragraphs. In this work, we propose a method to improve LLM-based long-document translation through targeted fine-tuning on high-quality document-level data, which we curate and introduce as DocBlocks. Our approach supports multiple translation paradigms, including direct document-to-document and chunk-level translation, by integrating instructions both with and without surrounding context. This enables models to better capture cross-sentence dependencies while maintaining strong sentence-level translation performance. Experimental results show that incorporating multiple translation paradigms improves document-level translation quality and inference speed compared to prompting and agent-based methods.
[ "Miguel Moura Ramos", "Patrick Fernandes", "Sweta Agrawal", "Andre Martins" ]
https://openreview.net/forum?id=Ah0U1r5Ldq
Ah0U1r5Ldq
Ah0U1r5Ldq
[ "~Miguel_Moura_Ramos1", "~Patrick_Fernandes1", "~Sweta_Agrawal1", "~Andre_Martins1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/37cf84cbf0077d88fab194e85468de64f96109fa.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Machine Translation", "Long Context", "Multi-Paradigm Translation Dataset Curation", "Instruction Tuning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ramos2025multilingual, title={Multilingual Contextualization of Large Language Models for Document-Level Machine Translation}, author={Miguel Moura Ramos and Patrick Fernandes and Sweta Agrawal and Andre Martins}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Ah0U1r5Ldq} }
ramos|multilingual_contextualization_of_large_language_models_for_documentlevel_machine_translation
null
null
null
null
null
LoX: Low-Rank Extrapolation Robustifies LLM Safety Against Fine-tuning
A computationally inexpensive method to robustify safety-aligned LLMs against fine-tuning attacks.
Large Language Models (LLMs) have become indispensable in real-world applications. However, their widespread adoption raises significant safety concerns, particularly in responding to socially harmful questions. Despite substantial efforts to improve model safety through alignment, aligned models can still have their safety protections undermined by subsequent fine-tuning—even when the additional training data appears benign. In this paper, we empirically demonstrate that this vulnerability stems from the sensitivity of safety-critical low-rank subspaces in LLM parameters to fine-tuning. Building on this insight, we propose a novel training-free method, termed Low-Rank Extrapolation (LoX), to enhance safety robustness by extrapolating the safety subspace of an aligned LLM. Our experimental results confirm the effectiveness of LoX, demonstrating significant improvements in robustness against both benign and malicious fine-tuning attacks while preserving the model’s adaptability to new tasks. For instance, LoX leads to 11% to 54% absolute reductions in attack success rates (ASR) under benign or malicious fine-tuning attacks. By investigating the ASR landscape of parameters, we attribute the success of LoX to the extrapolation moving LLM parameters into a flatter zone, where they are less sensitive to perturbations. The code will be released upon acceptance.
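A heavily hedged sketch of the general low-rank extrapolation idea for a single weight matrix, assuming the safety-relevant component is approximated by the top singular directions of the alignment delta and that `rank` and `alpha` are illustrative hyperparameters; this is not the paper's exact procedure.

```python
import torch

def low_rank_extrapolate(w_aligned: torch.Tensor, w_ref: torch.Tensor,
                         rank: int = 8, alpha: float = 0.5) -> torch.Tensor:
    """Extrapolate a weight matrix along the top-`rank` singular directions of the
    alignment delta. `rank` and `alpha` are illustrative, not the paper's settings."""
    delta = w_aligned - w_ref
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    low_rank_delta = U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]
    return w_aligned + alpha * low_rank_delta

# Toy usage on a random "layer".
w_ref = torch.randn(256, 256)
w_aligned = w_ref + 0.01 * torch.randn(256, 256)
w_robust = low_rank_extrapolate(w_aligned, w_ref)
```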
[ "Gabriel Jacob Perin", "Runjin Chen", "Xuxi Chen", "Nina S. T. Hirata", "Zhangyang Wang", "Junyuan Hong" ]
https://openreview.net/forum?id=ASS5YD4hL4
ASS5YD4hL4
ASS5YD4hL4
[ "~Gabriel_Jacob_Perin1", "~Runjin_Chen1", "~Xuxi_Chen1", "~Nina_S._T._Hirata1", "~Zhangyang_Wang1", "~Junyuan_Hong1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/51aa6258b5fac1b8fd3bc45ab8b56958e471f11f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Safety", "Alignment", "Large Language Models", "Low-rank", "Robustness" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ perin2025lox, title={LoX: Low-Rank Extrapolation Robustifies {LLM} Safety Against Fine-tuning}, author={Gabriel Jacob Perin and Runjin Chen and Xuxi Chen and Nina S. T. Hirata and Zhangyang Wang and Junyuan Hong}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ASS5YD4hL4} }
perin|lox_lowrank_extrapolation_robustifies_llm_safety_against_finetuning
null
null
null
null
null
Understanding noisy embedding techniques in instruction finetuning and improving them with symmetric noise.
Understanding and Improving Noisy Embedding Techniques in Instruction Finetuning by symmetric noise.
Recent advancements in instruction fine-tuning have injected noise into embeddings, with NEFTune (Jain et al., 2024) setting benchmarks using uniform noise. Despite NEFTune’s empirical findings that uniform noise outperforms Gaussian noise, the reasons for this remain unclear. This paper aims to clarify this by offering a thorough analysis, both theoretical and empirical, indicating comparable performance among these noise types. Additionally, we introduce a new fine-tuning method for language models, utilizing symmetric noise in embeddings. This method aims to enhance the model’s function by more stringently regulating its local curvature, demonstrating superior performance over the current method, NEFTune. When fine-tuning the LLaMA-2-7B model using Alpaca, standard techniques yield a 29.79% score on AlpacaEval. However, our approach, SymNoise, increases this score significantly to 69.04%, using symmetric noisy embeddings. This is a 6.7% improvement over the state-of-the-art method, NEFTune (64.69%). Furthermore, when tested on various models and stronger baseline instruction datasets, such as Evol-Instruct, ShareGPT, and OpenPlatypus, SymNoise consistently outperforms NEFTune. The current literature, including NEFTune, has underscored the importance of more in-depth research into the application of noise-based strategies in the fine-tuning of language models. Our approach, SymNoise, is another significant step in this direction, showing notable improvement over the existing state-of-the-art method.
[ "Abhay Yadav" ]
https://openreview.net/forum?id=AHhDpMMXtf
AHhDpMMXtf
AHhDpMMXtf
[ "~Abhay_Yadav1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8ca283b542d38664df51c33b581be1ddb5d90e2a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Instruction Finetuning", "LLM", "Domain Adaptation", "Multi-Task Learning", "Language Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yadav2025understanding, title={Understanding and Improving Noisy Embedding Techniques in Instruction Finetuning}, author={Abhay Yadav}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=AHhDpMMXtf} }
yadav|understanding_and_improving_noisy_embedding_techniques_in_instruction_finetuning
null
null
null
null
null
Readability ≠ Learnability: Rethinking the Role of Simplicity in Training Small Language Models
This paper argues statistical simplicity (low n-gram diversity), not human readability, is the critical factor enabling coherence emergence in small language models trained on synthetic datasets like TinyStories.
Recent studies suggest that very small language models (SLMs) can generate surprisingly coherent text when trained on simplified, child-directed corpora such as TinyStories. These findings have been interpreted as evidence that readability—characterized by accessible vocabulary, familiar narrative structure, and simple syntax—plays a key role in enabling such capabilities to emerge. In this paper, we challenge that interpretation. We construct synthetic datasets with matched structure but varied readability, and find that readability alone does not predict coherence or learning efficiency in SLMs. Models trained on complex, adult-level text perform comparably to those trained on simplified language, and even exhibit faster development of coherence during training. Instead, we show that statistical simplicity, as measured by n-gram diversity, is a stronger predictor of learnability. Our findings caution against the growing trend of anthropomorphizing language model training—drawing parallels to human cognitive development without empirical basis—and argue for more precise reasoning about what properties actually support capability emergence in small models.
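A minimal sketch of one common n-gram diversity measure (distinct-n) that could serve as the kind of statistical-simplicity statistic the abstract refers to; the paper's exact measure may differ.

```python
def distinct_n(corpus: list[str], n: int = 2) -> float:
    """Distinct-n: unique n-grams divided by total n-grams across the corpus.
    Lower values indicate more repetitive, statistically simpler text."""
    total, unique = 0, set()
    for doc in corpus:
        tokens = doc.split()
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(grams)
        unique.update(grams)
    return len(unique) / total if total else 0.0

print(distinct_n(["the cat sat on the mat", "the dog sat on the rug"], n=2))
```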
[ "Ivan Lee", "Taylor Berg-Kirkpatrick" ]
https://openreview.net/forum?id=AFMGbq39bQ
AFMGbq39bQ
AFMGbq39bQ
[ "~Ivan_Lee2", "~Taylor_Berg-Kirkpatrick1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a9f1d7eb6f59bcb4b1691158b02f126752251217.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "child-directed language", "developmentally inspired data", "small language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lee2025readability, title={Readability \ensuremath{\neq} Learnability: Rethinking the Role of Simplicity in Training Small Language Models}, author={Ivan Lee and Taylor Berg-Kirkpatrick}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=AFMGbq39bQ} }
lee|readability_learnability_rethinking_the_role_of_simplicity_in_training_small_language_models
null
true
null
null
null
The Delta Learning Hypothesis: Preference Tuning on Weak Data can Yield Strong Gains
We show that preference pairs of weak data can be leveraged to improve a stronger language model beyond the strength of each individual point, an insight that enables new state-of-the-art post-training recipes that work without strong supervision.
Improvements in language models are often driven by increasing the quality of the data we train them on, which can be limiting when strong supervision is not readily available. In this work, we show that paired preference data consisting of individually weak data points can enable gains beyond the strength of each individual sample. We formulate the **delta learning hypothesis** to explain this phenomenon, positing that the relative quality _delta_ between points suffices to drive learning via preference tuning—even when supervised finetuning on the weak data hurts. We validate our hypothesis in controlled experiments and at scale, where we post-train 8B models on preference data generated by pairing a small 3B model's responses with outputs from an even smaller 1.5B model to ensure a meaningful delta. Strikingly, on a standard 11-benchmark evaluation suite (MATH, MMLU, etc.), our simple recipe matches the performance of Tülu 3, a state-of-the-art open model that was tuned from the same base as our model while relying on vastly stronger supervisors (e.g., GPT-4o). Delta learning thus enables simpler and cheaper open recipes for state-of-the-art post-training, highlighting that models can learn a surprising amount from data that might typically be considered weak.
[ "Scott Geng", "Hamish Ivison", "Chun-Liang Li", "Maarten Sap", "Jerry Li", "Ranjay Krishna", "Pang Wei Koh" ]
https://openreview.net/forum?id=9rwtezthwo
9rwtezthwo
9rwtezthwo
[ "~Scott_Geng1", "~Hamish_Ivison1", "~Chun-Liang_Li1", "~Maarten_Sap1", "~Jerry_Li1", "~Ranjay_Krishna1", "~Pang_Wei_Koh1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8ebaf4467cb059f38954875f477823dcfe93ecf7.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Preference tuning", "LLM post-training", "synthetic data", "weak-to-strong generalization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ geng2025the, title={The Delta Learning Hypothesis: Preference Tuning on Weak Data can Yield Strong Gains}, author={Scott Geng and Hamish Ivison and Chun-Liang Li and Maarten Sap and Jerry Li and Ranjay Krishna and Pang Wei Koh}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=9rwtezthwo} }
geng|the_delta_learning_hypothesis_preference_tuning_on_weak_data_can_yield_strong_gains
null
null
null
null
null
Partial Perspectives: How LLMs Handle Logically Inconsistent Knowledge in Reasoning Tasks
We propose a new framework based on Markov logic networks to evaluate LLMs' reasoning over inconsistent knowledge, and release accompanying datasets.
Most natural language reasoning tasks in the research community assume consistent input knowledge. Nevertheless, real-world scenarios often involve inconsistent information, which might lead to divergent conclusions and are typically associated with varying levels of uncertainty. This raises a key research question: can large language models (LLMs) effectively handle uncertainty in their reasoning process to maximize knowledge consistency? In this paper, we propose a framework for evaluating reasoning over inconsistent knowledge. Our approach models uncertainty via weights of logical rules, leveraging Markov logic networks (MLN), which integrate probabilistic reasoning with first-order logic. This enables us to quantify inconsistencies in knowledge bases, and hence rigorously evaluate LLM reasoning. We introduce two tasks using this framework: 1) QA, which involves answering questions by integrating inconsistent knowledge; and 2) knowledge rectification, where we aim to rectify language models' acquired knowledge to improve consistency. We curate a dataset of 3,000 MLN-formatted knowledge bases to implement these tasks. We evaluate state-of-the-art LLMs on these tasks and highlight their limitations in uncertainty-aware reasoning over inconsistent logical knowledge.
[ "Zichao Li", "Ines Arous", "Jackie CK Cheung" ]
https://openreview.net/forum?id=9pzNFfgtyk
9pzNFfgtyk
9pzNFfgtyk
[ "~Zichao_Li3", "~Ines_Arous1", "~Jackie_CK_Cheung1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/04ea4c5ceb83542b828e86a37be892d416e4a9f5.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "evaluation methodologies", "reasoning", "logical reasoning", "calibration/uncertainty" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025partial, title={Partial Perspectives: How {LLM}s Handle Logically Inconsistent Knowledge in Reasoning Tasks}, author={Zichao Li and Ines Arous and Jackie CK Cheung}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=9pzNFfgtyk} }
li|partial_perspectives_how_llms_handle_logically_inconsistent_knowledge_in_reasoning_tasks
null
null
null
null
null
BiXSE: Improving Dense Retrieval via Probabilistic Graded Relevance Distillation
We propose an effective method for training dense encoders on synthetic graded relevance scores.
Neural sentence embedding models for dense retrieval typically rely on binary relevance labels, treating query-document pairs as either relevant or irrelevant. However, real-world relevance often exists on a continuum, and recent advances in large language models (LLMs) have made it feasible to scale the generation of fine-grained graded relevance labels. In this work, we propose \textbf{BiXSE}, a simple and effective pointwise training method that optimizes binary cross-entropy (BCE) over LLM-generated graded relevance scores. BiXSE interprets these scores as probabilistic targets, enabling granular supervision from a single labeled query-document pair per query. Unlike pairwise or listwise losses that require multiple annotated comparisons per query, BiXSE achieves strong performance with reduced annotation and compute costs by leveraging in-batch negatives. Extensive experiments across sentence embedding (MMTEB) and retrieval benchmarks (BEIR, TREC-DL) show that BiXSE consistently outperforms softmax-based contrastive learning (InfoNCE), and matches or exceeds strong pairwise ranking baselines when trained on LLM-supervised data. BiXSE offers a robust, scalable alternative for training dense retrieval models as graded relevance supervision becomes increasingly accessible.
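A hedged PyTorch sketch of a pointwise BCE objective over graded relevance targets with in-batch negatives; the similarity scale, the zero targets for negatives, and the absence of any negative weighting are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def bixse_style_loss(q_emb, d_emb, graded_scores, scale=20.0):
    """BCE over a query-document similarity matrix.

    q_emb, d_emb:  (B, dim) L2-normalized embeddings of aligned query/document pairs
    graded_scores: (B,) LLM-generated relevance in [0, 1] for each aligned pair
    Diagonal targets are the graded scores; off-diagonal (in-batch negatives) are 0.
    """
    logits = scale * q_emb @ d_emb.T                      # (B, B) similarity logits
    targets = torch.diag(graded_scores.clamp(0.0, 1.0))   # probabilistic targets
    return F.binary_cross_entropy_with_logits(logits, targets)

# Toy usage with random embeddings.
q = F.normalize(torch.randn(4, 32), dim=-1)
d = F.normalize(torch.randn(4, 32), dim=-1)
print(bixse_style_loss(q, d, torch.tensor([1.0, 0.7, 0.3, 0.9])))
```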
[ "Christos Tsirigotis", "Vaibhav Adlakha", "Joao Monteiro", "Aaron Courville", "Perouz Taslakian" ]
https://openreview.net/forum?id=9nQsDdquOY
9nQsDdquOY
9nQsDdquOY
[ "~Christos_Tsirigotis1", "~Vaibhav_Adlakha1", "~Joao_Monteiro1", "~Aaron_Courville3", "~Perouz_Taslakian1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0ff999689098f3c8a64b8efa9359cb355fde0836.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "sentence embeddings", "dense retrieval", "retrieval", "ranking", "synthetic data" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tsirigotis2025bixse, title={Bi{XSE}: Improving Dense Retrieval via Probabilistic Graded Relevance Distillation}, author={Christos Tsirigotis and Vaibhav Adlakha and Joao Monteiro and Aaron Courville and Perouz Taslakian}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=9nQsDdquOY} }
tsirigotis|bixse_improving_dense_retrieval_via_probabilistic_graded_relevance_distillation
null
null
null
null
null
M²IV: Towards Efficient and Fine-grained Multimodal In-Context Learning via Representation Engineering
We propose a novel approach to vectorized multimodal in-context learning that effectively improves both efficiency and performance.
Multimodal in-context learning (ICL) equips Large Vision-language Models (LVLMs) with the ability to adapt to new tasks via multiple user-provided demonstrations, without requiring any model parameter updates. However, its effectiveness is constrained by the token-intensive nature of multimodal inputs and the complexity of cross-modal few-shot reasoning, which together hinder LVLMs from extracting useful patterns from demonstrations. To address these challenges, we propose \textbf{M²IV}, a novel representation engineering approach that replaces explicit token-level demonstrations with a set of learnable Multimodal In-context Vectors directly injected into the residual streams of LVLMs. By analyzing the distinct roles of multi-head attention (MHA) and multi-layer perceptrons (MLP) in the ICL process, we design a training strategy that enables M²IV to perform fine-grained semantic distillation and robust cross-modal representation learning. M²IV not only improves performance across diverse tasks and LVLMs but also significantly reduces token overhead, enabling graceful scaling to many-shot scenarios. To further enhance usability, we introduce \textbf{VLibrary}, a repository that stores trained M²IVs for flexible retrieval and injection. With VLibrary, users can steer pre-trained LVLMs in a customized manner that meets diverse requirements. Extensive experiments demonstrate that M²IV consistently outperforms vanilla ICL and prior representation engineering baselines, achieving an average accuracy gain of 3.74\% with substantial improvements in overall efficiency.
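A toy, hedged illustration of injecting a learnable vector into a transformer block's residual stream via a PyTorch forward hook; the actual M²IV injection points, training objective, and LVLM architecture are not reproduced.

```python
import torch
import torch.nn as nn

d_model = 64
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)

# One learnable "in-context vector" to be added to this layer's output states.
icv = nn.Parameter(torch.zeros(d_model))

def inject_vector(module, inputs, output):
    """Forward hook: add the learnable vector to every position's hidden state."""
    return output + icv

handle = layer.register_forward_hook(inject_vector)

hidden = torch.randn(2, 10, d_model)  # (batch, seq, dim) toy hidden states
out = layer(hidden)                   # icv is now part of the residual stream
handle.remove()
```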
[ "Yanshu Li", "Yi Cao", "Hongyang He", "Qisen Cheng", "Xiang Fu", "Xi Xiao", "Tianyang Wang", "Ruixiang Tang" ]
https://openreview.net/forum?id=9ffYcEiNw9
9ffYcEiNw9
9ffYcEiNw9
[ "~Yanshu_Li1", "~Yi_Cao6", "~Hongyang_He1", "~Qisen_Cheng2", "~Xiang_Fu5", "~Xi_Xiao2", "~Tianyang_Wang1", "~Ruixiang_Tang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a6d63449ec93c8f38b9d243ad95f0bd1e2106320.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Vision-Language Model", "In-context Learning", "Representation Engineering" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025miv, title={M{\texttwosuperior}{IV}: Towards Efficient and Fine-grained Multimodal In-Context Learning via Representation Engineering}, author={Yanshu Li and Yi Cao and Hongyang He and Qisen Cheng and Xiang Fu and Xi Xiao and Tianyang Wang and Ruixiang Tang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=9ffYcEiNw9} }
li|miv_towards_efficient_and_finegrained_multimodal_incontext_learning_via_representation_engineering
null
null
null
null
null
RARe: Retrieval Augmented Retrieval with In-Context Examples
We propose a method to incorporate in-context learning abilities into retriever models.
While in-context learning is well-studied with decoder-only language models (LLMs), its utility for encoder-only models remains underexplored. We study in-context learning for encoder-only models on text retrieval tasks. Can incorporating in-context examples (query-document pairs) into the target query enhance retriever performance? Our approach, \texttt{RARe}, finetunes a pre-trained model with in-context examples whose query is semantically similar to the target query. This approach achieves performance gains of up to +2.72\% nDCG across open-domain retrieval datasets (BeIR, RAR-b) compared to using the target query only as an input. In particular, we find \texttt{RARe} exhibits stronger out-of-domain generalization compared to models using queries without in-context examples, similar to what is seen for in-context learning in LLMs. We further provide analysis on the design choices of in-context example augmentation for retrievers and lay the foundation for future work.
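A minimal sketch of the input-side augmentation the abstract describes, prepending retrieved (query, document) demonstrations to the target query before encoding; the delimiter format is an assumption, not the paper's exact template.

```python
def rare_style_query(target_query: str, examples: list[tuple[str, str]]) -> str:
    """Prepend in-context (query, document) pairs to the target query before it is
    embedded by the retriever. The 'query:'/'document:' delimiters are assumed."""
    demos = " ".join(f"query: {q} document: {d}" for q, d in examples)
    return f"{demos} query: {target_query}"

print(rare_style_query(
    "what causes auroras",
    examples=[("why is the sky blue", "Rayleigh scattering of sunlight ...")],
))
```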
[ "Atula Tejaswi", "Yoonsang Lee", "sujay sanghavi", "Eunsol Choi" ]
https://openreview.net/forum?id=9FES5yT9v3
9FES5yT9v3
9FES5yT9v3
[ "~Atula_Tejaswi1", "~Yoonsang_Lee1", "~sujay_sanghavi1", "~Eunsol_Choi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/d92f204e694aa45fa801bedb5398fa3014785039.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Information Retrieval", "In-Context Learning", "Representation Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tejaswi2025rare, title={{RAR}e: Retrieval Augmented Retrieval with In-Context Examples}, author={Atula Tejaswi and Yoonsang Lee and sujay sanghavi and Eunsol Choi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=9FES5yT9v3} }
tejaswi|rare_retrieval_augmented_retrieval_with_incontext_examples
/attachment/7e64201c27f5bfd3e5b3b0d905eb10b59c777c4f.zip
null
null
null
null
Boosting LLM Reasoning via Spontaneous Self-Correction
We propose a spontaneous self-correction approach to improve LLM's math reasoning capability.
While large language models (LLMs) have demonstrated remarkable success on a broad range of tasks, math reasoning remains a challenging one. One of the approaches for improving math reasoning is self-correction, which designs self-improving loops to let the model correct its own mistakes. However, existing self-correction approaches treat corrections as standalone post-generation refinements, relying on extra prompt and system designs to elicit self-corrections, instead of performing real-time, spontaneous self-corrections in a single pass. To address this, we propose **SPOC**, a *spontaneous self-correction* approach that enables LLMs to generate interleaved solutions and verifications in a *single inference pass*, with generation dynamically terminated based on verification outcomes, thereby effectively scaling inference time compute. SPOC considers a multi-agent perspective by assigning dual roles -- solution proposer and verifier -- to the same model. We adopt a simple yet effective approach to generate synthetic data for fine-tuning, enabling the model to develop capabilities for self-verification and multi-agent collaboration. We further improve its solution proposal and verification accuracy through online reinforcement learning. Experiments on mathematical reasoning benchmarks show that SPOC significantly improves performance. Notably, SPOC boosts the accuracy of Llama-3.1-8B and 70B Instruct models, achieving absolute gains of 8.8\% and 11.6\% on MATH500, 10.0\% and 20.0\% on AMC23, and 3.3\% and 6.7\% on AIME24, respectively.
[ "Xutong Zhao", "Tengyu Xu", "Xuewei Wang", "Zhengxing Chen", "Di Jin", "Liang Tan", "Yen-Ting Lin", "Zishun Yu", "Zhuokai Zhao", "Yun He", "Sinong Wang", "Han Fang", "Sarath Chandar", "Chen Zhu" ]
https://openreview.net/forum?id=9DCQAGBoII
9DCQAGBoII
9DCQAGBoII
[ "~Xutong_Zhao1", "~Tengyu_Xu1", "~Xuewei_Wang1", "~Zhengxing_Chen2", "~Di_Jin1", "~Liang_Tan1", "~Yen-Ting_Lin2", "~Zishun_Yu1", "~Zhuokai_Zhao1", "~Yun_He2", "~Sinong_Wang1", "~Han_Fang4", "~Sarath_Chandar1", "~Chen_Zhu2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ac1795baf25a2ce9195b7fd518fe91c92b406873.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Reasoning", "Reinforcement Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhao2025boosting, title={Boosting {LLM} Reasoning via Spontaneous Self-Correction}, author={Xutong Zhao and Tengyu Xu and Xuewei Wang and Zhengxing Chen and Di Jin and Liang Tan and Yen-Ting Lin and Zishun Yu and Zhuokai Zhao and Yun He and Sinong Wang and Han Fang and Sarath Chandar and Chen Zhu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=9DCQAGBoII} }
zhao|boosting_llm_reasoning_via_spontaneous_selfcorrection
null
null
null
null
null
Gating is Weighting: Understanding Gated Linear Attention through In-context Learning
This work offers theoretical insights into the ICL capabilities of gated linear attention models, demonstrates how gating is crucial for achieving stronger data adaptivity, and characterizes the loss landscape of weighted preconditioned GD.
Linear attention methods offer a compelling alternative to softmax attention due to their efficiency in recurrent decoding. Recent research has focused on enhancing standard linear attention by incorporating gating while retaining its computational benefits. Such Gated Linear Attention (GLA) architectures include highly competitive models such as Mamba and RWKV. In this work, we investigate the in-context learning capabilities of the GLA model and make the following contributions. We show that a multilayer GLA can implement a general class of Weighted Preconditioned Gradient Descent (WPGD) algorithms with data-dependent weights. These weights are induced by the gating mechanism and the input, enabling the model to control the contribution of individual tokens to prediction. To further understand the mechanics of this weighting, we introduce a novel data model with multitask prompts and characterize the optimization landscape of learning a WPGD algorithm. We identify mild conditions under which there exists a unique global minimum, up to scaling invariance, and the associated WPGD algorithm is unique as well. Finally, we translate these findings to explore the optimization landscape of GLA and shed light on how gating facilitates context-aware learning and when it is provably better than vanilla linear attention.
[ "Yingcong Li", "Davoud Ataee Tarzanagh", "Ankit Singh Rawat", "Maryam Fazel", "Samet Oymak" ]
https://openreview.net/forum?id=9AFIz0YzD7
9AFIz0YzD7
9AFIz0YzD7
[ "~Yingcong_Li1", "~Davoud_Ataee_Tarzanagh1", "~Ankit_Singh_Rawat1", "~Maryam_Fazel1", "~Samet_Oymak2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/261533e620835fb31d1bd21a3990ec2ceb3dd8c0.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "linear attention", "gating", "in-context learning", "weighted gradient descent", "optimization landscape" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025gating, title={Gating is Weighting: Understanding Gated Linear Attention through In-context Learning}, author={Yingcong Li and Davoud Ataee Tarzanagh and Ankit Singh Rawat and Maryam Fazel and Samet Oymak}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=9AFIz0YzD7} }
li|gating_is_weighting_understanding_gated_linear_attention_through_incontext_learning
null
true
null
null
null
Visual Representations inside the Language Model
Introducing a new approach to Multimodal LLM interpretability: studying intermediate visual representations in Key-Value Caches. We find they encode sufficient information for perception tasks, and can be intervened on to improve model performance.
Despite interpretability work analyzing VIT encoders and transformer activations, we don't yet understand why Multimodal Language Models (MLMs) struggle on perception-heavy tasks. We offer an under-studied perspective by examining how popular MLMs (LLaVA-OneVision, Qwen2.5-VL, and Llama-3-LLaVA-NeXT) process their visual key-value tokens. We first study the flow of visual information through the language model, finding that image value tokens encode sufficient information to perform several perception-heavy tasks zero-shot: segmentation, semantic correspondence, temporal correspondence, and referring expression detection. We find that while the language model does augment the visual information received from the projection of input visual encodings---which we reveal correlates with overall MLM perception capability---it contains less visual information on several tasks than the equivalent visual encoder (SigLIP) that has not undergone MLM finetuning. Further, we find that the visual information corresponding to input-agnostic image key tokens in later layers of language models contains artifacts which reduce perception capability of the overall MLM. Next, we discuss controlling visual information in the language model, showing that adding a text prefix to the image input improves perception capabilities of visual representations. Finally, we reveal that if language models were able to better control their visual information, their perception would significantly improve; e.g., in 33.3% of Art Style questions in the BLINK benchmark, perception information present in the language model is not surfaced to the output! Our findings reveal insights into the role of key-value tokens in multimodal systems, paving the way for deeper mechanistic interpretability of MLMs and suggesting new directions for training their visual encoder and language model components.
[ "Benlin Liu", "Amita Kamath", "Madeleine Grunde-McLaughlin", "Winson Han", "Ranjay Krishna" ]
https://openreview.net/forum?id=99e72TkWTi
99e72TkWTi
99e72TkWTi
[ "~Benlin_Liu1", "~Amita_Kamath1", "~Madeleine_Grunde-McLaughlin1", "~Winson_Han1", "~Ranjay_Krishna1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e56bd3ff7464a65655d5addd0f901a2e1c37cb4f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "multimodal language model", "visual representation", "mechanistic interpretability", "vision-language" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liu2025visual, title={Visual Representations inside the Language Model}, author={Benlin Liu and Amita Kamath and Madeleine Grunde-McLaughlin and Winson Han and Ranjay Krishna}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=99e72TkWTi} }
liu|visual_representations_inside_the_language_model
null
null
null
null
null
A Sober Look at Progress in Language Model Reasoning: Pitfalls and Paths to Reproducibility
we conduct a comprehensive empirical study and find that current mathematical reasoning benchmarks are highly sensitive to subtle implementation choices
Reasoning has emerged as the next major frontier for language models (LMs), with rapid advances from both academic and industrial labs. However, this progress often outpaces methodological rigor, with many evaluations relying on benchmarking practices that lack transparency, robustness, or statistical grounding. In this work, we conduct a comprehensive empirical study and find that current mathematical reasoning benchmarks are highly sensitive to subtle implementation choices—including decoding parameters, random seeds, prompt formatting, and even hardware and software configurations. Performance gains reported in recent studies frequently hinge on unclear comparisons or unreported sources of variance. To address these issues, we propose a standardized evaluation framework with clearly defined best practices and reporting standards. Using this framework, we reassess recent methods and find that most reinforcement learning (RL) approaches yield only modest improvements—far below prior claims—and are prone to overfitting, especially on small-scale benchmarks like AIME’24. In contrast, supervised finetuning (SFT) methods show consistently stronger generalization in the settings we study. To foster reproducibility, we release all code, prompts, and model outputs for reasoning benchmarks, establishing more rigorous foundations for future work.
[ "Andreas Hochlehnert", "Hardik Bhatnagar", "Vishaal Udandarao", "Samuel Albanie", "Ameya Prabhu", "Matthias Bethge" ]
https://openreview.net/forum?id=90UrTTxp5O
90UrTTxp5O
90UrTTxp5O
[ "~Andreas_Hochlehnert1", "~Hardik_Bhatnagar1", "~Vishaal_Udandarao1", "~Samuel_Albanie2", "~Ameya_Prabhu1", "~Matthias_Bethge1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/899d96cd55335b087e93411723365e78f5e0b0ab.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "data curation", "rl", "grpo", "math", "reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hochlehnert2025a, title={A Sober Look at Progress in Language Model Reasoning: Pitfalls and Paths to Reproducibility}, author={Andreas Hochlehnert and Hardik Bhatnagar and Vishaal Udandarao and Samuel Albanie and Ameya Prabhu and Matthias Bethge}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=90UrTTxp5O} }
hochlehnert|a_sober_look_at_progress_in_language_model_reasoning_pitfalls_and_paths_to_reproducibility
null
null
null
null
null
CRUST-Bench: A Comprehensive Benchmark for C-to-safe-Rust Transpilation
We introduce CRUST-Bench, a dataset of 100 C repositories, each paired with manually-written interfaces in safe Rust as well as test cases that can be used to validate transpilation correctness.
C-to-Rust transpilation is essential for modernizing legacy C code while enhancing safety and interoperability with modern Rust ecosystems. However, no dataset currently exists for evaluating whether a system can transpile C into _safe_ Rust that passes a set of test cases. We introduce CRUST-Bench, a dataset of 100 C repositories, each paired with manually-written interfaces in safe Rust as well as test cases that can be used to validate correctness of the transpilation. By considering entire repositories rather than isolated functions, CRUST-Bench captures the challenges of translating complex projects with dependencies across multiple files. The provided Rust interfaces provide explicit specifications that ensure adherence to idiomatic, memory-safe Rust patterns, while the accompanying test cases enforce functional correctness. We evaluate state-of-the-art large language models (LLMs) on this task and find that safe and idiomatic Rust generation is still a challenging problem for various state-of-the-art methods and techniques. We also provide insights into the errors LLMs usually make in transpiling code from C to safe Rust. The best performing model, OpenAI o3, is able to solve only 19 tasks in a single-shot setting. Improvements on CRUST-Bench would lead to improved transpilation systems that can reason about complex scenarios and help in migrating legacy codebases from C into languages like Rust that ensure memory safety. Code and Data available [here](https://github.com/anirudhkhatry/CRUST-bench).
[ "Anirudh Khatry", "Robert Zhang", "Jia Pan", "Ziteng Wang", "Qiaochu Chen", "Greg Durrett", "Isil Dillig" ]
https://openreview.net/forum?id=8xofWL61S9
8xofWL61S9
8xofWL61S9
[ "~Anirudh_Khatry1", "~Robert_Zhang1", "~Jia_Pan3", "~Ziteng_Wang9", "~Qiaochu_Chen1", "~Greg_Durrett1", "~Isil_Dillig1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/fca55e9c1949077b663367de4c3b9f60fe3af712.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Code translation", "Code generation", "Software engineering", "Large Language Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ khatry2025crustbench, title={{CRUST}-Bench: A Comprehensive Benchmark for C-to-safe-Rust Transpilation}, author={Anirudh Khatry and Robert Zhang and Jia Pan and Ziteng Wang and Qiaochu Chen and Greg Durrett and Isil Dillig}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=8xofWL61S9} }
khatry|crustbench_a_comprehensive_benchmark_for_ctosaferust_transpilation
null
null
null
null
null
Pretrained Hybrids with MAD Skills
We develop a framework for creating pretrained hybrid models from existing pretrained models.
While Transformers underpin modern large language models (LMs), there is a growing list of alternative architectures with new capabilities, promises, and tradeoffs. This makes choosing the right LM architecture challenging. Recently proposed hybrid architectures seek a best-of-all-worlds approach that reaps the benefits of all architectures. Hybrid design is difficult for two reasons: it requires manual expert-driven search, and new hybrids must be trained from scratch. We propose Manticore, a framework that addresses these challenges by automating the design of hybrid architectures while reusing pretrained models to create pretrained hybrids. Our approach augments ideas from differentiable Neural Architecture Search (NAS) by incorporating simple projectors that translate features between pretrained blocks from different architectures. We then fine-tune hybrids that combine pretrained models from different architecture families---such as the GPT series and Mamba---end-to-end. With Manticore, we enable LM selection without training multiple models, the construction of pretrained hybrids from existing pretrained models, and the ability to program pretrained hybrids to have certain capabilities. Manticore hybrids match existing manually designed hybrids, achieve strong performance on the Long Range Arena benchmark, and improve on pretrained transformers and state space models on various natural language tasks.
[ "Nicholas Roberts", "Samuel Guo", "Zhiqi Gao", "Satya Sai Srinath Namburi GNVV", "Sonia Cromp", "Chengjun Wu", "Chengyu Duan", "Frederic Sala" ]
https://openreview.net/forum?id=8xSbwT3763
8xSbwT3763
8xSbwT3763
[ "~Nicholas_Roberts2", "~Samuel_Guo1", "~Zhiqi_Gao2", "~Satya_Sai_Srinath_Namburi_GNVV1", "~Sonia_Cromp1", "~Chengjun_Wu1", "~Chengyu_Duan1", "~Frederic_Sala1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/db501b2ac80540180d2bd878d0aef8144a4df85a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "hybrid architectures", "large language models", "transformers", "state space models", "model merging", "neural architecture search", "mechanistic search" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ roberts2025pretrained, title={Pretrained Hybrids with {MAD} Skills}, author={Nicholas Roberts and Samuel Guo and Zhiqi Gao and Satya Sai Srinath Namburi GNVV and Sonia Cromp and Chengjun Wu and Chengyu Duan and Frederic Sala}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=8xSbwT3763} }
roberts|pretrained_hybrids_with_mad_skills
null
null
null
null
null
Layers at Similar Depths Generate Similar Activations Across LLM Architectures
Independently-trained large language models generate activations that are similar between models, but not within models.
How do the latent spaces used by independently-trained LLMs relate to one another? We study the nearest neighbor relationships induced by activations at different layers of 24 open-weight LLMs, and find that they 1) tend to vary from layer to layer within a model, and 2) are approximately shared between corresponding layers of different models. Claim 2 shows that these nearest neighbor relationships are not arbitrary, as they are shared across models, but Claim 1 shows that they are not "obvious" either, as there is no single set of nearest neighbor relationships that is universally shared. Together, these suggest that LLMs generate a progression of activation geometries from layer to layer, but that this entire progression is largely shared between models, stretched and squeezed to fit into different architectures.
[ "Christopher Wolfram", "Aaron Schein" ]
https://openreview.net/forum?id=8wKec6faAT
8wKec6faAT
8wKec6faAT
[ "~Christopher_Wolfram1", "~Aaron_Schein1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/23bdb2c8d3464cb4acadb8f4b178555545c2378c.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "universality", "representational similarity" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wolfram2025layers, title={Layers at Similar Depths Generate Similar Activations Across {LLM} Architectures}, author={Christopher Wolfram and Aaron Schein}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=8wKec6faAT} }
wolfram|layers_at_similar_depths_generate_similar_activations_across_llm_architectures
null
null
null
null
null
JailDAM: Jailbreak Detection with Adaptive Memory for Vision-Language Model
A test-time adaptive framework for harmful content detection in VLMs using memory-based unsafe concept representations, eliminating the need for labeled harmful data or model internals, and achieving state-of-the-art accuracy and efficiency.
Multimodal large language models (MLLMs) excel in vision-language tasks but also pose significant risks of generating harmful content, particularly through jailbreak attacks. Jailbreak attacks refer to intentional manipulations that bypass safety mechanisms in models, leading to the generation of inappropriate or unsafe content. Detecting such attacks is critical to ensuring the responsible deployment of MLLMs. Existing jailbreak detection methods face three primary challenges: (1) Many rely on model hidden states or gradients, limiting their applicability to white-box models, where the internal workings of the model are accessible; (2) They involve high computational overhead from uncertainty-based analysis, which limits real-time detection, and (3) They require fully labeled harmful datasets, which are often scarce in real-world settings. To address these issues, we introduce a test-time adaptive framework called JailDAM. Our method leverages a memory-based approach guided by policy-driven unsafe knowledge representations, eliminating the need for explicit exposure to harmful data. By dynamically updating unsafe knowledge during test-time, our framework improves generalization to unseen jailbreak strategies while maintaining efficiency. Experiments on multiple VLM jailbreak benchmarks demonstrate that JailDAM delivers state-of-the-art performance in harmful content detection, improving both accuracy and speed.
[ "Yi Nian", "Shenzhe Zhu", "Yuehan Qin", "Li Li", "Ziyi Wang", "Chaowei Xiao", "Yue Zhao" ]
https://openreview.net/forum?id=8Pxdzsqvx9
8Pxdzsqvx9
8Pxdzsqvx9
[ "~Yi_Nian1", "~Shenzhe_Zhu1", "~Yuehan_Qin1", "~Li_Li18", "~Ziyi_Wang31", "~Chaowei_Xiao2", "~Yue_Zhao13" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/fa2a2339b85dfee92659657fab57196b37a3d51f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Multi-modality", "AI Security", "Jailbreak", "Attack Defense" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ nian2025jaildam, title={Jail{DAM}: Jailbreak Detection with Adaptive Memory for Vision-Language Model}, author={Yi Nian and Shenzhe Zhu and Yuehan Qin and Li Li and Ziyi Wang and Chaowei Xiao and Yue Zhao}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=8Pxdzsqvx9} }
nian|jaildam_jailbreak_detection_with_adaptive_memory_for_visionlanguage_model
null
null
null
null
null
EllieSQL: Cost-Efficient Text-to-SQL with Complexity-Aware Routing
We propose EllieSQL, a complexity-aware routing framework for Text-to-SQL, enhancing cost-efficiency while maintaining performance by directing queries to suitable SQL generation methods.
Text-to-SQL automatically translates natural language queries to SQL, allowing non-technical users to retrieve data from databases without specialized SQL knowledge. Despite the success of advanced LLM-based Text-to-SQL approaches on leaderboards, their unsustainable computational costs—often overlooked—stand as the "elephant in the room" in current leaderboard-driven research, limiting their economic practicability for real-world deployment and widespread adoption. To tackle this, we exploratively propose EllieSQL, a complexity-aware routing framework that assigns queries to suitable SQL generation pipelines based on estimated complexity. We investigate multiple routers to direct simple queries to efficient approaches while reserving computationally intensive methods for complex cases. Drawing from economics, we introduce the Token Elasticity of Performance (TEP) metric, capturing cost-efficiency by quantifying the responsiveness of performance gains relative to token investment in SQL generation. Experiments show that compared to always using the most advanced methods in our study, EllieSQL with the Qwen2.5-0.5B-DPO router reduces token use by over 40% without compromising performance on Bird development set, achieving more than a 2× boost in TEP over non-routing approaches. This not only advances the pursuit of cost-efficient Text-to-SQL but also invites the community to weigh resource efficiency alongside performance, contributing to progress in sustainable Text-to-SQL. Our source code and model are available at https://elliesql.github.io/.
[ "Yizhang Zhu", "Runzhi JIANG", "Boyan Li", "Nan Tang", "Yuyu Luo" ]
https://openreview.net/forum?id=8OqGNXKwo8
8OqGNXKwo8
8OqGNXKwo8
[ "~Yizhang_Zhu1", "~Runzhi_JIANG1", "~Boyan_Li2", "~Nan_Tang3", "~Yuyu_Luo1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/22cb7760daa673d5ebdefb89c491dfcf425d7196.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Text-to-SQL", "Routing", "Cost-Efficiency", "Large Language Model" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhu2025elliesql, title={Ellie{SQL}: Cost-Efficient Text-to-{SQL} with Complexity-Aware Routing}, author={Yizhang Zhu and Runzhi JIANG and Boyan Li and Nan Tang and Yuyu Luo}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=8OqGNXKwo8} }
zhu|elliesql_costefficient_texttosql_with_complexityaware_routing
/attachment/5d463428cf69a22e085ad7c79d3aa77603ca0c36.zip
null
null
null
null
Rethinking Associative Memory Mechanism in Induction Head
This paper investigates how a two-layer transformer thoroughly captures in-context information and balances it with pretrained bigram knowledge in next token prediction, from the viewpoint of associative memory.
The induction head mechanism is part of the computational circuits for in-context learning (ICL) that enable large language models (LLMs) to adapt to new tasks without fine-tuning. Most existing work explains the training dynamics behind acquiring such a powerful mechanism. However, it is unclear how a transformer extracts information from long contexts and then uses it to coordinate with global knowledge acquired during pretraining. This paper considers weight matrices as associative memory to investigate how an induction head functions over long contexts and balances in-context and global bigram knowledge in next token prediction. We theoretically analyze the representation of the learned associative memory in attention layers and the resulting logits when a transformer is given prompts generated by a bigram model. In the experiments, we design specific prompts to evaluate whether the outputs of the trained transformer align with the theoretical results.
[ "Shuo Wang", "Issei Sato" ]
https://openreview.net/forum?id=8N5H8DgfPw
8N5H8DgfPw
8N5H8DgfPw
[ "~Shuo_Wang30", "~Issei_Sato2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/13bfec3920f5df06491af9cdb8626edb73ddb964.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "transformer", "induction head", "associative memory", "positional encoding" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025rethinking, title={Rethinking Associative Memory Mechanism in Induction Head}, author={Shuo Wang and Issei Sato}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=8N5H8DgfPw} }
wang|rethinking_associative_memory_mechanism_in_induction_head
null
null
null
null
null
SciReplicate-Bench: Benchmarking LLMs in Agent-driven Algorithmic Reproduction from Research Papers
We investigate the use of LLMs to aid in reproducing the core algorithms proposed in real-world NLP publications. To support this task, we introduce a benchmark and a multi-agent framework.
This study evaluates large language models (LLMs) in generating code from algorithm descriptions in recent NLP papers. The task requires two key competencies: (1) algorithm comprehension: synthesizing information from papers and academic literature to understand implementation logic, and (2) coding expertise: identifying dependencies and correctly implementing necessary APIs. To facilitate rigorous evaluation, we introduce SciReplicate-Bench, a benchmark of 100 tasks from 36 NLP papers published in 2024, featuring detailed annotations and comprehensive test cases. Building on SciReplicate-Bench, we propose Sci-Reproducer, a dual-agent framework consisting of a Paper Agent that interprets algorithmic concepts from literature and a Code Agent that retrieves dependencies from repositories and implements solutions. To assess algorithm understanding, we introduce reasoning graph accuracy, which quantifies similarity between generated and reference reasoning graphs derived from code comments and structure. For evaluating implementation quality, we employ execution accuracy, CodeBLEU, and repository dependency/API recall metrics. In our experiments, we evaluate various powerful non-reasoning and reasoning LLMs as foundational models. The best-performing LLM using Sci-Reproducer achieves only 39\% execution accuracy, highlighting the benchmark's difficulty. Our analysis identifies missing or inconsistent algorithm descriptions as key barriers to successful reproduction. We make available our benchmark and code at https://github.com/xyzCS/SciReplicate-Bench and project homepage at https://xyzcs.github.io/scireplicate.github.io/.
[ "Yanzheng Xiang", "Hanqi Yan", "Shuyin Ouyang", "Lin Gui", "Yulan He" ]
https://openreview.net/forum?id=8LoPjpvWde
8LoPjpvWde
8LoPjpvWde
[ "~Yanzheng_Xiang2", "~Hanqi_Yan2", "~Shuyin_Ouyang1", "~Lin_Gui3", "~Yulan_He1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4267a4e3a7d161020059b6c9165c7ddd3973f24d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Code Generation", "Agent", "Scientific Paper Understanding" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xiang2025scireplicatebench, title={SciReplicate-Bench: Benchmarking {LLM}s in Agent-driven Algorithmic Reproduction from Research Papers}, author={Yanzheng Xiang and Hanqi Yan and Shuyin Ouyang and Lin Gui and Yulan He}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=8LoPjpvWde} }
xiang|scireplicatebench_benchmarking_llms_in_agentdriven_algorithmic_reproduction_from_research_papers
null
null
null
null
null
UTF-8 Plumbing: Byte-level Tokenizers Unavoidably Enable LLMs to Generate Ill-formed UTF-8
Byte-level subword tokenization creates vocabularies whose tokens aren't well-formed UTF-8, requiring workarounds to interpret them as code points.
Subword tokenization segments input text according to a pre-defined vocabulary to feed it into a language model; the language model, in turn, generates a sequence made from this same vocabulary. The members of the vocabulary can be built of code points or bytes. Using code points means that all members of the vocabulary are valid UTF-8 characters. However, it also requires thousands of initial members to achieve acceptable coverage of inputs. Beginning with bytes, on the contrary, avoids out-of-vocabulary errors with only 256 initial members of the vocabulary, but the members of the vocabulary and sequences of them are not guaranteed to be valid UTF-8. Sequences that are not valid UTF-8 break code that assumes its input to be valid UTF-8. Applications of language models must account for the breakage thereby introduced. In this paper, we formalize tokenization using monoid theory and prove that tokenizers whose vocabularies contain tokens that are ill-formed UTF-8 can always produce sequences that are ill-formed UTF-8. We demonstrate formally that attempting to incrementally convert tokens back to a string and interpret the results as UTF-8 gives different results than converting the whole sequence of tokens at once. This formal result predicts real-world bugs: we evaluate mitigations for the problem identified and provide case studies of major foundation models, serving engines, and constrained generation systems.
[ "Preston Firestone", "Shubham Ugare", "Gagandeep Singh", "Sasa Misailovic" ]
https://openreview.net/forum?id=8ExXncFpf6
8ExXncFpf6
8ExXncFpf6
[ "~Preston_Firestone1", "~Shubham_Ugare1", "~Gagandeep_Singh1", "~Sasa_Misailovic1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/2693aa30d0ed0a1206774ab1fbdcaf7f00398c1a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "tokenization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ firestone2025utf, title={{UTF}-8 Plumbing: Byte-level Tokenizers Unavoidably Enable {LLM}s to Generate Ill-formed {UTF}-8}, author={Preston Firestone and Shubham Ugare and Gagandeep Singh and Sasa Misailovic}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=8ExXncFpf6} }
firestone|utf8_plumbing_bytelevel_tokenizers_unavoidably_enable_llms_to_generate_illformed_utf8
null
null
null
null
null
IMPersona: Evaluating Individual Level LLM Impersonation
We train LLMs to impersonate individuals by mimicking style and personal knowledge, surpassing prompting methods, while raising safety and alignment concerns.
As language models achieve increasingly human-like capabilities in conversational text generation, a critical question emerges: to what extent can these systems simulate the characteristics of specific individuals? To evaluate this, we introduce IMPersona, a framework for evaluating LMs at impersonating specific individuals' writing style and personal knowledge. Using supervised fine-tuning and a hierarchical memory-inspired retrieval system, we demonstrate that even modestly sized open-source models, such as Llama-3.1-8B-Instruct, can achieve impersonation abilities at concerning levels. In blind conversation experiments, participants (mis)identified our fine-tuned models with memory integration as human in \textbf{44.44\%} of interactions, compared to just \textbf{25.00\%} for the best prompting-based approach. We analyze these results to propose detection methods and defense strategies against such impersonation attempts. Our findings raise important questions about both the potential applications and risks of personalized language models, particularly regarding privacy, security, and the ethical deployment of such technologies in real-world contexts.
[ "Quan Shi", "Carlos E Jimenez", "Stephen Dong", "Brian Seo", "Caden Yao", "Adam Kelch", "Karthik R Narasimhan" ]
https://openreview.net/forum?id=7qhBXq0NLN
7qhBXq0NLN
7qhBXq0NLN
[ "~Quan_Shi1", "~Carlos_E_Jimenez1", "~Stephen_Dong1", "~Brian_Seo1", "~Caden_Yao1", "~Adam_Kelch1", "~Karthik_R_Narasimhan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/d396fd514ce0b0e9caf456ecb846143d9210eafd.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Impersonation", "Language Models", "Personalization", "Stylistic Mimicry", "Contextual Knowledge", "AI Evaluation", "Social Engineering", "Ethical AI", "Memory-Augmented Models", "Human-AI Interaction" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shi2025impersona, title={{IMP}ersona: Evaluating Individual Level {LLM} Impersonation}, author={Quan Shi and Carlos E Jimenez and Stephen Dong and Brian Seo and Caden Yao and Adam Kelch and Karthik R Narasimhan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=7qhBXq0NLN} }
shi|impersona_evaluating_individual_level_llm_impersonation
/attachment/d096af9d90cf9de17d4e5c69167501cf28845909.zip
null
null
null
null
R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents
R2E-Gym: Procedural Environment Generation for Scaling Open-Weights Software Engineering Agents
Improving open-source models on real-world SWE tasks (solving GitHub issues) faces two key challenges: 1) scalable curation of execution environments to train these models, and 2) optimal scaling of test-time compute. We introduce R2E-Gym, the largest procedurally-curated executable gym environment for training real-world SWE-agents, consisting of more than 8.1K tasks. R2E-Gym is powered by two main contributions: 1) SWEGEN: a synthetic data curation recipe that enables scalable curation of executable environments using test-generation and back-translation directly from commits, thereby reducing reliance on human-written issues or unit tests. We show that this enables more scalable training, leading to a pass@1 performance of 34.4% on the SWE-Bench Verified benchmark with our 32B model. 2) Hybrid Test-time Scaling: we provide an in-depth analysis of two test-time scaling axes: execution-based and execution-free verifiers, demonstrating that they exhibit complementary strengths and limitations. Execution-based verifiers suffer from low distinguishability, while execution-free verifiers are biased and often rely on stylistic features. Surprisingly, we find that while each approach individually saturates around 42-43%, significantly higher gains can be obtained by leveraging their complementary strengths. Overall, our approach achieves 51% on the SWE-Bench Verified benchmark, reflecting a new state-of-the-art for open-weight SWE-agents and for the first time showing competitive performance with proprietary models such as o1, o1-preview and sonnet-3.5-v2 (with tools). We will open-source our environments, models, and agent trajectories.
[ "Naman Jain", "Jaskirat Singh", "Manish Shetty", "Tianjun Zhang", "Liang Zheng", "Koushik Sen", "Ion Stoica" ]
https://openreview.net/forum?id=7evvwwdo3z
7evvwwdo3z
7evvwwdo3z
[ "~Naman_Jain2", "~Jaskirat_Singh1", "~Manish_Shetty1", "~Tianjun_Zhang1", "~Liang_Zheng4", "~Koushik_Sen2", "~Ion_Stoica1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/da5e3975f31a0a92b22a129c3d44419cfa5807fa.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "SWE Agents ; Inference Time Scaling ; Test Generation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ jain2025regym, title={R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights {SWE} Agents}, author={Naman Jain and Jaskirat Singh and Manish Shetty and Tianjun Zhang and Liang Zheng and Koushik Sen and Ion Stoica}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=7evvwwdo3z} }
jain|r2egym_procedural_environment_generation_and_hybrid_verifiers_for_scaling_openweights_swe_agents
null
null
null
null
null
FineMedLM-o1: Enhancing Medical Knowledge Reasoning Ability of LLM from Supervised Fine-Tuning to Test-Time Training
We propose a novel synthetic data method to generate thinking data and integrate Test-Time Training during the inference phase to enhance the medical reasoning capabilities of LLMs.
Recent advancements in large language models (LLMs) have shown promise in medical applications such as disease diagnosis and treatment planning. However, most existing medical LLMs struggle with the deep reasoning required for complex medical problems, such as differential diagnosis and medication recommendations. We propose FineMedLM-o1, which leverages high-quality medical synthetic data and long-form reasoning data for Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO), enabling advanced dialogue and deep reasoning capabilities. Additionally, we introduce Test-Time Training (TTT) in the medical domain for the first time, facilitating domain adaptation and ensuring reliable, accurate reasoning. Experimental results demonstrate that FineMedLM-o1 achieves a 23% average performance improvement over prior models on key medical benchmarks. Furthermore, the introduction of TTT provides an additional 14% performance boost, highlighting its effectiveness in enhancing medical reasoning capabilities. To support this process, we also propose a novel method for synthesizing medical dialogue. Compared to other open-source datasets, our dataset stands out as superior in both quality and complexity. The project and data will be released on GitHub.
[ "hongzhou yu", "Tianhao Cheng", "Yingwen Wang", "Wen He", "Qing Wang", "Ying Cheng", "Yuejie Zhang", "Rui Feng", "Xiaobo Zhang" ]
https://openreview.net/forum?id=7ZwuGZCopw
7ZwuGZCopw
7ZwuGZCopw
[ "~hongzhou_yu1", "~Tianhao_Cheng1", "~Yingwen_Wang2", "~Wen_He2", "~Qing_Wang23", "~Ying_Cheng2", "~Yuejie_Zhang2", "~Rui_Feng2", "~Xiaobo_Zhang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b7a4b2f6d1712ffc78db20d941aefbf31903928d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LMs on diverse domains and novel applications" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yu2025finemedlmo, title={FineMed{LM}-o1: Enhancing Medical Knowledge Reasoning Ability of {LLM} from Supervised Fine-Tuning to Test-Time Training}, author={hongzhou yu and Tianhao Cheng and Yingwen Wang and Wen He and Qing Wang and Ying Cheng and Yuejie Zhang and Rui Feng and Xiaobo Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=7ZwuGZCopw} }
yu|finemedlmo1_enhancing_medical_knowledge_reasoning_ability_of_llm_from_supervised_finetuning_to_testtime_training
null
null
null
null
null
Single-Pass Document Scanning for Question Answering
Single-pass scanner in long document QA outperforms embedding methods while nearly matching full-context LLMs at much lower cost.
Handling extremely large documents for question answering is challenging: chunk-based embedding methods often lose track of important global context, while full-context transformers can be prohibitively expensive for hundreds of thousands of tokens. We propose a single-pass document scanning approach that processes the entire text in linear time, preserving global coherence while deciding which sentences are most relevant to the query. On 41 QA benchmarks, our single-pass scanner consistently outperforms chunk-based embedding methods and competes with large language models at a fraction of the computational cost. By conditioning on the entire preceding context without chunk breaks, the method preserves global coherence, which is especially important for long documents. Overall, single-pass document scanning offers a simple solution for question answering over massive text. All code, datasets, and model checkpoints are available at https://github.com/MambaRetriever/MambaRetriever
[ "Weili Cao", "Jianyou Wang", "Youze Zheng", "Longtian Bao", "Qirui Zheng", "Taylor Berg-Kirkpatrick", "Ramamohan Paturi", "Leon Bergen" ]
https://openreview.net/forum?id=7Vj78acKIp
7Vj78acKIp
7Vj78acKIp
[ "~Weili_Cao1", "~Jianyou_Wang1", "~Youze_Zheng1", "~Longtian_Bao1", "~Qirui_Zheng1", "~Taylor_Berg-Kirkpatrick1", "~Ramamohan_Paturi1", "~Leon_Bergen1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/48e039782624b01ed3c8b45c213f2679446813c1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Long Document QA", "State-Space Models", "Information Extraction" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ cao2025singlepass, title={Single-Pass Document Scanning for Question Answering}, author={Weili Cao and Jianyou Wang and Youze Zheng and Longtian Bao and Qirui Zheng and Taylor Berg-Kirkpatrick and Ramamohan Paturi and Leon Bergen}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=7Vj78acKIp} }
cao|singlepass_document_scanning_for_question_answering
null
null
null
null
null
VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation
VisualTrap is the first backdoor attack targeting GUI agents’ visual grounding, using subtle triggers to manipulate interactions across environments, posing significant security risks for edge device deployments.
Graphical User Interface (GUI) agents powered by Large Vision-Language Models (LVLMs) have emerged as a revolutionary approach to automating human-machine interactions, capable of autonomously operating personal devices (e.g., mobile phones) or applications within the device to perform complex real-world tasks in a human-like manner. However, their close integration with personal devices raises significant security concerns, with many threats, including backdoor attacks, remaining largely unexplored. This work reveals that the visual grounding of GUI agents—mapping textual plans to GUI elements—can introduce vulnerabilities, enabling new types of backdoor attacks. With a backdoor attack targeting visual grounding, the agent’s behavior can be compromised even when given correct task-solving plans. To validate this vulnerability, we propose \textit{VisualTrap}, a method that can hijack the grounding by misleading the agent to map textual plans to trigger locations instead of the intended targets. VisualTrap uses the common method of injecting poisoned data for attacks, and does so during the pre-training of visual grounding to ensure the practical feasibility of the attack. Empirical results show that VisualTrap can effectively hijack visual grounding with as little as 5\% poisoned data and highly stealthy visual triggers (invisible to the human eye); and the attack can be generalized to downstream tasks, even after clean fine-tuning. Moreover, the injected trigger can remain effective across different GUI environments, \textit{e.g.,} being trained on mobile/web and generalizing to desktop environments. These findings underscore the urgent need for further research on backdoor attack risks in GUI agents.
[ "Ziang Ye", "Yang Zhang", "Wentao Shi", "Xiaoyu You", "Fuli Feng", "Tat-Seng Chua" ]
https://openreview.net/forum?id=7HPuAkgdVm
7HPuAkgdVm
7HPuAkgdVm
[ "~Ziang_Ye1", "~Yang_Zhang24", "~Wentao_Shi1", "~Xiaoyu_You1", "~Fuli_Feng1", "~Tat-Seng_Chua2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/568404b7cad28e65c5b5f0dc43eb62663f56bc15.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "GUI Agent", "Backdoor Attack" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ye2025visualtrap, title={VisualTrap: A Stealthy Backdoor Attack on {GUI} Agents via Visual Grounding Manipulation}, author={Ziang Ye and Yang Zhang and Wentao Shi and Xiaoyu You and Fuli Feng and Tat-Seng Chua}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=7HPuAkgdVm} }
ye|visualtrap_a_stealthy_backdoor_attack_on_gui_agents_via_visual_grounding_manipulation
null
null
null
null
null
What is the Visual Cognition Gap between Humans and Multimodal LLMs?
Exploring the visual cognition gap between Multimodal LLMs and humans.
Recently, Multimodal Large Language Models (MLLMs) and Vision Language Models (VLMs) have shown great promise in language-guided perceptual tasks such as recognition, segmentation, and object detection. However, their effectiveness in addressing visual cognition problems that require high-level multi-image reasoning and visual working memory is not well-established. One such challenge is matrix reasoning -- the cognitive ability to discern relationships among patterns in a set of images and extrapolate to predict subsequent patterns. This skill is crucial during the early neurodevelopmental stages of children. Inspired by the matrix reasoning tasks in Raven’s Progressive Matrices (RPM) and Wechsler Intelligence Scale for Children (WISC), we propose a new dataset MaRs-VQA to evaluate the visual cognition capability of MLLMs and compare their performance with existing human visual cognition studies. Based on the training data of MaRs-VQA, we also finetune a baseline model Qwen2-VCog with multi-stage cognition reasoning annotations. Our comparative experiments with different baselines reveal a gap between MLLMs and human intelligence, highlighting the visual cognitive limitations of current MLLMs. We believe that the public release of MaRs-VQA and the Qwen2-VCog baseline model will drive progress toward the next generation of MLLMs with human-like visual cognition abilities. MaRs-VQA is available at huggingface.co/datasets/IrohXu/VCog-Bench. The training code of Qwen2-VCog is available at github.com/IrohXu/Cognition-MLLM.
[ "Xu Cao", "Yifan Shen", "Bolin Lai", "Wenqian Ye", "Yunsheng Ma", "Joerg Heintz", "Jintai Chen", "Meihuan Huang", "Jianguo Cao", "Aidong Zhang", "James Matthew Rehg" ]
https://openreview.net/forum?id=78lTuD6wiO
78lTuD6wiO
78lTuD6wiO
[ "~Xu_Cao4", "~Yifan_Shen5", "~Bolin_Lai1", "~Wenqian_Ye1", "~Yunsheng_Ma2", "~Joerg_Heintz1", "~Jintai_Chen1", "~Meihuan_Huang1", "~Jianguo_Cao1", "~Aidong_Zhang2", "~James_Matthew_Rehg1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/844b03e76ede944e5131126d60bfd745efed10e5.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Visual Cognition", "Wechsler Intelligence Scale for Children", "Multimodal LLMs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ cao2025what, title={What is the Visual Cognition Gap between Humans and Multimodal {LLM}s?}, author={Xu Cao and Yifan Shen and Bolin Lai and Wenqian Ye and Yunsheng Ma and Joerg Heintz and Jintai Chen and Meihuan Huang and Jianguo Cao and Aidong Zhang and James Matthew Rehg}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=78lTuD6wiO} }
cao|what_is_the_visual_cognition_gap_between_humans_and_multimodal_llms
/attachment/5db99c4af702912c3a61960419156e79ebefdcb3.zip
null
null
null
null
Building Instruction-Tuning Datasets from Human-Written Instructions with Open-Weight Large Language Models
We constructed 4 state-of-the-art datasets in 2 languages for instruction tuning with permissive licenses, by simply appending responses generated by open-weight LLMs to human-written instructions.
Instruction tuning is crucial for enabling Large Language Models (LLMs) to solve real-world tasks. Prior work has shown the effectiveness of instruction-tuning data synthesized solely from LLMs, raising a fundamental question: Do we still need human-originated signals for instruction tuning? This work answers the question affirmatively: we build state-of-the-art instruction-tuning datasets sourced from human-written instructions, by simply pairing them with LLM-generated responses. LLMs fine-tuned on our datasets consistently outperform those fine-tuned on existing ones. Our data construction approach can be easily adapted to other languages; we build datasets for Japanese and confirm that LLMs tuned with our data reach state-of-the-art performance. Analyses suggest that instruction-tuning in a new language allows LLMs to follow instructions, while the tuned models exhibit a notable lack of culture-specific knowledge in that language. The datasets and fine-tuned models will be publicly available. Our datasets, synthesized with open-weight LLMs, are openly distributed under permissive licenses, allowing for diverse use cases.
[ "Youmi Ma", "Sakae Mizuki", "Kazuki Fujii", "Taishi Nakamura", "Masanari Ohi", "Hinari Shimada", "Taihei Shiotani", "Koshiro Saito", "Koki Maeda", "Kakeru Hattori", "Takumi Okamoto", "Shigeki Ishida", "Rio Yokota", "Hiroya Takamura", "Naoaki Okazaki" ]
https://openreview.net/forum?id=6vTv9M9ZAA
6vTv9M9ZAA
6vTv9M9ZAA
[ "~Youmi_Ma2", "~Sakae_Mizuki1", "~Kazuki_Fujii1", "~Taishi_Nakamura1", "~Masanari_Ohi1", "~Hinari_Shimada1", "~Taihei_Shiotani1", "~Koshiro_Saito1", "~Koki_Maeda1", "~Kakeru_Hattori1", "~Takumi_Okamoto1", "~Shigeki_Ishida1", "~Rio_Yokota1", "~Hiroya_Takamura1", "~Naoaki_Okazaki2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/cbf428a370ef087117ee63dfb8dc77af27cfde1b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "large language models; instruction tuning; synthetic data generation; cross-lingual datasets" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ma2025building, title={Building Instruction-Tuning Datasets from Human-Written Instructions with Open-Weight Large Language Models}, author={Youmi Ma and Sakae Mizuki and Kazuki Fujii and Taishi Nakamura and Masanari Ohi and Hinari Shimada and Taihei Shiotani and Koshiro Saito and Koki Maeda and Kakeru Hattori and Takumi Okamoto and Shigeki Ishida and Rio Yokota and Hiroya Takamura and Naoaki Okazaki}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=6vTv9M9ZAA} }
ma|building_instructiontuning_datasets_from_humanwritten_instructions_with_openweight_large_language_models
null
null
null
null
null
Improving LLMs' Generalized Reasoning Abilities by Graph Problems
We introduce GraphPile, a 13B-token dataset for graph problem reasoning (GPR), to enhance general reasoning in LLMs. Models trained on GraphPile achieve significant gains across diverse reasoning tasks, extending LLM capabilities beyond mathematics.
Large Language Models (LLMs) have made remarkable strides in reasoning tasks, yet their performance often falters on novel and complex problems. Domain-specific continue-pretraining (CPT) methods, such as those tailored for mathematical reasoning, have shown promise but lack transferability to broader reasoning tasks. In this work, we pioneer the use of Graph Problem Reasoning (GPR) to enhance LLMs' general reasoning capabilities. GPR tasks—spanning pathfinding, network analysis, numerical computation, and topological reasoning—require sophisticated logical and relational reasoning, making them ideal for teaching diverse reasoning patterns. To achieve this, we introduce GraphPile, the first large-scale corpus specifically designed for CPT using GPR data. Spanning 10.9 billion tokens across 23 graph tasks, the dataset includes Chain-of-Thought, Program-of-Thought, Trace of Execution, and Real-world Graph Data. Using GraphPile, we train GraphMind on popular base models (Llama 3, Llama 3.1, and Gemma 2), achieving up to 4.9% higher accuracy in mathematical reasoning and up to 21.2% improvement in non-mathematical reasoning tasks, such as logical and commonsense reasoning. By being the first to harness GPR for enhancing reasoning patterns and introducing the first dataset of its kind, our work bridges the gap between domain-specific pretraining and universal reasoning capabilities, advancing the adaptability and robustness of LLMs.
[ "Qifan Zhang", "Nuo Chen", "Zehua Li", "Miao Peng", "Jing Tang", "Jia Li" ]
https://openreview.net/forum?id=6vMRcaYbU7
6vMRcaYbU7
6vMRcaYbU7
[ "~Qifan_Zhang7", "~Nuo_Chen1", "~Zehua_Li2", "~Miao_Peng1", "~Jing_Tang5", "~Jia_Li4" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c29abf3becdbc4c43ebe28e7b91d037027fd7f86.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Continue-Pretraining", "Graph Problem Reasoning", "General Reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025improving, title={Improving {LLM}s{\textquoteleft} Generalized Reasoning Abilities by Graph Problems}, author={Qifan Zhang and Nuo Chen and Zehua Li and Miao Peng and Jing Tang and Jia Li}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=6vMRcaYbU7} }
zhang|improving_llms_generalized_reasoning_abilities_by_graph_problems
null
null
null
null
null
Know Me, Respond to Me: Benchmarking LLMs for Dynamic User Profiling and Personalized Responses at Scale
We introduce the PERSONAMEM benchmark to evaluate state-of-the-art LLMs’ ability to infer a user’s profile and how it evolves, as well as whether LLMs can apply what they learn about the user to provide personalized responses across task scenarios.
Large Language Models (LLMs) have emerged as personalized assistants for users across a wide range of tasks – from offering writing support to delivering tailored recommendations or consultations. Over time, the interaction history between a user and an LLM can provide extensive information about an individual’s traits and preferences. However, open questions remain on how well LLMs today can effectively leverage such history to (1) internalize the user’s inherent traits and preferences, (2) track how the user’s profile and preferences evolve over time, and (3) generate personalized responses accordingly in new scenarios. In this work, we introduce the PERSONAMEM benchmark. PERSONAMEM features curated user profiles with over 180 simulated user-LLM interaction histories, each containing up to 60 sessions of multi-turn conversations across 15 real-world tasks that require personalization. Given an in-situ user query at a specific time point, we evaluate LLM chatbots’ ability to identify the most suitable response according to the current state of the user’s profile. We observe that current LLMs still struggle to recognize the dynamic evolution in users’ profiles over time through direct prompting approaches. As a consequence, LLMs often fail to deliver responses that align with users’ current situations and preferences, with frontier models such as GPT-4.5 or Gemini-2.0 achieving only around 50% overall accuracy, suggesting room for improvement. We hope that PERSONAMEM, along with the user profile and conversation simulation pipeline, can facilitate future research in the development of truly user-aware chatbots.
[ "Bowen Jiang", "Zhuoqun Hao", "Young Min Cho", "Bryan Li", "Yuan Yuan", "Sihao Chen", "Lyle Ungar", "Camillo Jose Taylor", "Dan Roth" ]
https://openreview.net/forum?id=6ox8XZGOqP
6ox8XZGOqP
6ox8XZGOqP
[ "~Bowen_Jiang2", "~Zhuoqun_Hao1", "~Young_Min_Cho1", "~Bryan_Li1", "~Yuan_Yuan7", "~Sihao_Chen1", "~Lyle_Ungar1", "~Camillo_Jose_Taylor2", "~Dan_Roth3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3dcb3eae85f5e555bfdbd9368f3c518941e3f816.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "personalization", "long context", "memory", "conversational chatbot" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ jiang2025know, title={Know Me, Respond to Me: Benchmarking {LLM}s for Dynamic User Profiling and Personalized Responses at Scale}, author={Bowen Jiang and Zhuoqun Hao and Young Min Cho and Bryan Li and Yuan Yuan and Sihao Chen and Lyle Ungar and Camillo Jose Taylor and Dan Roth}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=6ox8XZGOqP} }
jiang|know_me_respond_to_me_benchmarking_llms_for_dynamic_user_profiling_and_personalized_responses_at_scale
null
null
null
null
null
An Illusion of Progress? Assessing the Current State of Web Agents
We introduce Online-Mind2Web, a more diverse and realistic benchmark that includes 300 high-quality tasks from 136 popular websites across various domains.
As digitalization and cloud technologies evolve, the web is becoming increasingly important in modern society. Autonomous web agents based on large language models (LLMs) hold great potential for work automation. It is therefore important to accurately measure and monitor the progression of their capabilities. In this work, we conduct a comprehensive and rigorous assessment of the current state of web agents. Our results depict a very different picture of the competency of current agents, suggesting over-optimism in previously reported results. This gap can be attributed to shortcomings in existing benchmarks. We introduce Online-Mind2Web, an online evaluation benchmark consisting of 300 diverse and realistic tasks spanning 136 websites. It enables us to evaluate web agents under a setting that approximates how real users use these agents. To facilitate more scalable evaluation and development, we also develop a novel LLM-as-a-Judge automatic evaluation method and show that it can achieve around 85\% agreement with human judgment, substantially higher than existing methods. Finally, we present the first comprehensive comparative analysis of current web agents, highlighting both their strengths and limitations to inspire future research.
[ "Tianci Xue", "Weijian Qi", "Tianneng Shi", "Chan Hee Song", "Boyu Gou", "Dawn Song", "Huan Sun", "Yu Su" ]
https://openreview.net/forum?id=6jZi4HSs6o
6jZi4HSs6o
6jZi4HSs6o
[ "~Tianci_Xue1", "~Weijian_Qi2", "~Tianneng_Shi1", "~Chan_Hee_Song1", "~Boyu_Gou1", "~Dawn_Song1", "~Huan_Sun1", "~Yu_Su2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ecb5fdd721a905fe2d2b08129793137078415380.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Web Agents", "Multimodal Large Language Models", "Automatic Evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xue2025an, title={An Illusion of Progress? Assessing the Current State of Web Agents}, author={Tianci Xue and Weijian Qi and Tianneng Shi and Chan Hee Song and Boyu Gou and Dawn Song and Huan Sun and Yu Su}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=6jZi4HSs6o} }
xue|an_illusion_of_progress_assessing_the_current_state_of_web_agents
null
null
null
null
null
ReFeed: Multi-dimensional Summarization Refinement with Reflective Reasoning on Feedback
Reflective Reasoning for Text Refinement
Summarization refinement faces challenges when extended to multiple dimensions. In this paper, we introduce ReFeed, a powerful summarization refinement pipeline that enhances multiple dimensions through reflective reasoning on feedback. To achieve this, we release SumFeed-CoT, a large-scale Long-CoT-based dataset optimized for training a lightweight model with reflective reasoning. Our experiments reveal how the number of dimensions, feedback exposure, and reasoning policy influence refinement performance, highlighting that reflective reasoning and simultaneously addressing multiple types of feedback are crucial to mitigating trade-offs between dimensions. Furthermore, ReFeed is robust to noisy feedback and feedback order. Lastly, our findings emphasize that creating data with a proper goal and guideline constitutes a fundamental pillar of effective reasoning. The dataset and model are available at https://github.com/DISL-Lab/ReFeed.
[ "Taewon Yun", "Jihwan Oh", "Hyangsuk Min", "Yuho Lee", "Jihwan Bang", "Jason Cai", "Hwanjun Song" ]
https://openreview.net/forum?id=6BGDGKZN7q
6BGDGKZN7q
6BGDGKZN7q
[ "~Taewon_Yun1", "~Jihwan_Oh2", "~Hyangsuk_Min1", "~Yuho_Lee1", "~Jihwan_Bang1", "~Jason_Cai1", "~Hwanjun_Song2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b0cae039ea9c3f116d172c97dce2426a432438b7.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Text Refinement", "Text Summarization", "LLMs", "Reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yun2025refeed, title={ReFeed: Multi-dimensional Summarization Refinement with Reflective Reasoning on Feedback}, author={Taewon Yun and Jihwan Oh and Hyangsuk Min and Yuho Lee and Jihwan Bang and Jason Cai and Hwanjun Song}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=6BGDGKZN7q} }
yun|refeed_multidimensional_summarization_refinement_with_reflective_reasoning_on_feedback
null
null
null
null
null
Ensemble Debiasing Across Class and Sample Levels for Fairer Prompting Accuracy
We propose a Heaviside step function based ensemble debiasing method, to flexibly rectify biased ICL output class probabilities across both class and sample levels, achieving fairer prompting accuracy for LLMs.
Language models are strong few-shot learners and achieve good overall accuracy in text classification tasks, masking the fact that their results suffer from severe class accuracy imbalance. We believe that the pursuit of overall accuracy should not come from enriching the strong classes, but from raising the weak ones. To address the imbalance, we propose a Heaviside step function based ensemble debiasing method, which enables flexible rectifications of in-context learned class probabilities at both class and sample levels. Evaluations with Llama-2-13B on seven text classification benchmarks show that our approach achieves state-of-the-art overall accuracy gains with balanced class accuracies. More importantly, we perform analyses on the resulting probability correction scheme, showing that sample-level corrections are necessary to elevate weak classes. By effectively correcting weak classes, our method also brings significant performance gains to a larger model variant, Llama-2-70B, especially on a biomedical domain task, further demonstrating the necessity of ensemble debiasing at both levels. Our source code is available at https://github.com/NUS-HPC-AI-Lab/DCS.
[ "Ruixi Lin", "Ziqiao Wang", "Yang You" ]
https://openreview.net/forum?id=63c7hTrUCh
63c7hTrUCh
63c7hTrUCh
[ "~Ruixi_Lin1", "~Ziqiao_Wang2", "~Yang_You1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/bb5d070d181b52b53c02e3a12e3705fe76a2a03f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "ensemble debiasing", "accuracy imbalance", "Heaviside step function", "post-hoc correction" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lin2025ensemble, title={Ensemble Debiasing Across Class and Sample Levels for Fairer Prompting Accuracy}, author={Ruixi Lin and Ziqiao Wang and Yang You}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=63c7hTrUCh} }
lin|ensemble_debiasing_across_class_and_sample_levels_for_fairer_prompting_accuracy
null
null
null
null
null
Scaling Web Agent Training through Automatic Data Generation and Fine-grained Evaluation
Scaling Web Agent Training through Fine-grained Constraint Evaluation
We present a scalable pipeline for automatically generating high-quality training data for web agents. In particular, a major challenge in identifying high-quality training instances is trajectory evaluation - quantifying how much progress was made towards task completion. We introduce a novel constraint-based evaluation framework that provides fine-grained assessment of progress towards task completion. This enables us to leverage partially successful trajectories, which significantly expands the amount of usable training data. We evaluate our method on a new benchmark we propose called BookingArena, which consists of complex booking tasks across 20 popular websites, and demonstrate that our distilled student model outperforms open-source approaches and matches or exceeds commercial systems, while being a significantly smaller model. Our work addresses the challenge of efficiently creating diverse, realistic web interaction datasets and provides a systematic evaluation methodology for complex structured web tasks.
[ "Lajanugen Logeswaran", "Jaekyeom Kim", "Sungryull Sohn", "Creighton Glasscock", "Honglak Lee" ]
https://openreview.net/forum?id=63JtmQL7dv
63JtmQL7dv
63JtmQL7dv
[ "~Lajanugen_Logeswaran1", "~Jaekyeom_Kim1", "~Sungryull_Sohn1", "~Creighton_Glasscock2", "~Honglak_Lee2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/94dc5128f5357bccc1a6a42d1c5ce09fc5a0da1f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "web agent", "evaluation", "distillation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ logeswaran2025scaling, title={Scaling Web Agent Training through Automatic Data Generation and Fine-grained Evaluation}, author={Lajanugen Logeswaran and Jaekyeom Kim and Sungryull Sohn and Creighton Glasscock and Honglak Lee}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=63JtmQL7dv} }
logeswaran|scaling_web_agent_training_through_automatic_data_generation_and_finegrained_evaluation
null
null
null
null
null
Style over Substance: Distilled Language Models Reason Via Stylistic Replication
We show that language models distilled from reasoning models primarily mimic stylistic patterns rather than internalize deeper reasoning capabilities.
Specialized reasoning language models (RLMs) have demonstrated that scaling test-time computation through detailed reasoning traces significantly enhances performance. Although these traces effectively facilitate knowledge distillation into smaller, instruction-tuned models, the precise nature of transferred reasoning remains unclear. In this study, we investigate to what extent distilled models internalize replicated stylistic patterns during reasoning. To this end, we systematically analyze reasoning traces, identifying structural and lexical patterns that characterize successful reasoning. We then introduce two new datasets -- a dataset of emergent reasoning traces and a synthetic dataset explicitly constructed to replicate these stylistic patterns -- to precisely examine their influence on distilled models' reasoning capabilities. We find that models trained on the synthetic traces achieve comparable performance, indicating that distilled reasoning abilities rely significantly on surface-level patterns. Surprisingly, we observe an increase in performance even when the synthetic traces are altered to lead to the wrong answer. Our findings highlight how stylistic patterns can be leveraged to efficiently enhance LM reasoning across diverse model families.
[ "Philip Lippmann", "Jie Yang" ]
https://openreview.net/forum?id=5wAfbEs34A
5wAfbEs34A
5wAfbEs34A
[ "~Philip_Lippmann1", "~Jie_Yang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f5facd3cddbcec76bd17211c7126587b8b3a8fae.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Reasoning", "language models", "stylistic mimicry", "pivots", "synthetic data", "distillation", "finetuning", "metacognition" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lippmann2025style, title={Style over Substance: Distilled Language Models Reason Via Stylistic Replication}, author={Philip Lippmann and Jie Yang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=5wAfbEs34A} }
lippmann|style_over_substance_distilled_language_models_reason_via_stylistic_replication
/attachment/249d626d9fb3445366a1147e370bbc82ac834811.zip
null
null
null
null
MixAssist: An Audio-Language Dataset for Co-Creative AI Assistance in Music Mixing
MixAssist introduces a novel audio-language dataset of collaborative mixing conversations (431 audio-grounded turns across 7 sessions), designed to train AI assistants for co-creative music production by capturing real-world expert-amateur interactions.
While AI presents significant potential for enhancing music mixing and mastering workflows, current research predominantly emphasizes end-to-end automation or generation, often overlooking the collaborative and instructional dimensions vital for co-creative processes. This gap leaves artists, particularly amateurs seeking to develop expertise, underserved. To bridge this, we introduce MixAssist, a novel audio-language dataset capturing the situated, multi-turn dialogue between expert and amateur music producers during collaborative mixing sessions. Comprising 431 audio-grounded conversational turns derived from 7 in-depth sessions involving 12 producers, MixAssist provides a unique resource for training and evaluating audio-language models that can comprehend and respond to the complexities of real-world music production dialogues. Our evaluations, including automated LLM-as-a-judge assessments and human expert comparisons, demonstrate that fine-tuning models such as Qwen-Audio on MixAssist can yield promising results, with Qwen significantly outperforming other tested models in generating helpful, contextually relevant mixing advice. By focusing on co-creative instruction grounded in audio context, MixAssist enables the development of intelligent AI assistants designed to support and augment the creative process in music mixing.
[ "Michael Paul Clemens", "Ana Marasovic" ]
https://openreview.net/forum?id=5mICyyD4OF
5mICyyD4OF
5mICyyD4OF
[ "~Michael_Paul_Clemens1", "~Ana_Marasovic1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/cd09255238fcf94272f9254aa90f415f64f7f7af.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "audio-language dataset", "dataset creation", "co-creativity" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ clemens2025mixassist, title={MixAssist: An Audio-Language Dataset for Co-Creative {AI} Assistance in Music Mixing}, author={Michael Paul Clemens and Ana Marasovic}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=5mICyyD4OF} }
clemens|mixassist_an_audiolanguage_dataset_for_cocreative_ai_assistance_in_music_mixing
null
true
null
null
null
When Does Metadata Conditioning (NOT) Work for Language Model Pre-Training? A Study with Context-Free Grammars
We reveal why prepending metadata only during pre-training sometimes helps and sometimes hinders LM performance: it depends on whether the context suffices to infer the latent semantics.
The ability to acquire latent semantics is one of the key properties that determines the performance of language models. One convenient approach to invoke this ability is to prepend metadata (e.g., URLs, domains, and styles) at the beginning of texts in the pre-training data, making it easier for the model to access latent semantics before observing the entire text. Previous studies have reported that this technique actually improves the performance of trained models in downstream tasks; however, this improvement has been observed only in specific downstream tasks, without consistent enhancement in average next-token prediction loss. To understand this phenomenon, we closely investigate how prepending metadata during pre-training affects model performance by examining its behavior using artificial data. Interestingly, we find that this approach produces both positive and negative effects on downstream tasks. We demonstrate that the effectiveness of the approach depends on whether latent semantics can be inferred from the downstream task's prompt. Specifically, through investigations using data generated by probabilistic context-free grammars, we show that training with metadata helps improve the model's performance when the given context is long enough to infer the latent semantics. In contrast, the technique negatively impacts performance when the context lacks the necessary information to make an accurate posterior inference.
[ "Rei Higuchi", "Ryotaro Kawata", "Naoki Nishikawa", "Kazusato Oko", "Shoichiro Yamaguchi", "Sosuke Kobayashi", "Seiya Tokui", "Kohei Hayashi", "Daisuke Okanohara", "Taiji Suzuki" ]
https://openreview.net/forum?id=5UkUsRsWYx
5UkUsRsWYx
5UkUsRsWYx
[ "~Rei_Higuchi1", "~Ryotaro_Kawata1", "~Naoki_Nishikawa1", "~Kazusato_Oko1", "~Shoichiro_Yamaguchi1", "~Sosuke_Kobayashi1", "~Seiya_Tokui1", "~Kohei_Hayashi1", "~Daisuke_Okanohara1", "~Taiji_Suzuki1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/846a9ecccba9cd10c88c1da694730cb2e83aa909.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "pretraining", "metadata conditioning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ higuchi2025when, title={When Does Metadata Conditioning ({NOT}) Work for Language Model Pre-Training? A Study with Context-Free Grammars}, author={Rei Higuchi and Ryotaro Kawata and Naoki Nishikawa and Kazusato Oko and Shoichiro Yamaguchi and Sosuke Kobayashi and Seiya Tokui and Kohei Hayashi and Daisuke Okanohara and Taiji Suzuki}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=5UkUsRsWYx} }
higuchi|when_does_metadata_conditioning_not_work_for_language_model_pretraining_a_study_with_contextfree_grammars
null
null
null
null
null
Understanding R1-Zero-Like Training: A Critical Perspective
We fix the optimization bias of GRPO, and propose a new algorithm to achieve state-of-the-art results using pure reinforcement learning.
DeepSeek-R1-Zero has shown that reinforcement learning (RL) at scale can directly enhance the reasoning capabilities of LLMs without supervised fine-tuning. In this work, we critically examine R1-Zero-like training by analyzing its two core components: base models and RL. We investigate a wide range of base models, including DeepSeek-V3-Base, to understand how pretraining characteristics influence RL performance. Our analysis reveals that DeepSeek-V3-Base already exhibits an "Aha moment", while Qwen2.5 base models demonstrate strong reasoning capabilities even without prompt templates, suggesting potential pretraining biases. Additionally, we identify an optimization bias in Group Relative Policy Optimization (GRPO), which artificially increases response length (especially for incorrect outputs) during training. To address this, we introduce Dr. GRPO, an unbiased optimization method that improves token efficiency while maintaining reasoning performance. Leveraging these insights, we present a minimalist R1-Zero recipe that achieves 43.3% accuracy on AIME 2024 with a 7B base model, establishing a new state-of-the-art.
[ "Zichen Liu", "Changyu Chen", "Wenjun Li", "Penghui Qi", "Tianyu Pang", "Chao Du", "Wee Sun Lee", "Min Lin" ]
https://openreview.net/forum?id=5PAF7PAY2Y
5PAF7PAY2Y
5PAF7PAY2Y
[ "~Zichen_Liu1", "~Changyu_Chen2", "~Wenjun_Li1", "~Penghui_Qi1", "~Tianyu_Pang1", "~Chao_Du1", "~Wee_Sun_Lee1", "~Min_Lin1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c2d5be4ed491c133ce7e6d094ae2d7a1a3dd7476.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "R1-Zero", "Reinforcement Learning", "Post-Training" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liu2025understanding, title={Understanding R1-Zero-Like Training: A Critical Perspective}, author={Zichen Liu and Changyu Chen and Wenjun Li and Penghui Qi and Tianyu Pang and Chao Du and Wee Sun Lee and Min Lin}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=5PAF7PAY2Y} }
liu|understanding_r1zerolike_training_a_critical_perspective
/attachment/d50bac1e8727c4425c63d8fa98bfd5d403479734.zip
null
null
null
null
Spike No More: Stabilizing the Pre-training of Large Language Models
Theoretical analysis to prevent loss spikes during LLM pre-training
Loss spikes often occur during pre-training of large language models. These spikes degrade the performance of large language models and sometimes ruin the pre-training run. Since pre-training requires a vast computational budget, such spikes should be avoided. Based on the assumption that the loss spike is caused by the sudden growth of the gradient norm, we explore factors to keep the gradient norm small through an analysis of the spectral norms of the Jacobian matrices for the sub-layers. Our findings suggest that stabilizing the pre-training process requires two conditions: small sub-layers and a large shortcut. We conduct various experiments to empirically verify our theoretical analyses. Experimental results demonstrate that methods satisfying the conditions effectively prevent loss spikes during pre-training.
[ "Sho Takase", "Shun Kiyono", "Sosuke Kobayashi", "Jun Suzuki" ]
https://openreview.net/forum?id=52YBEzcI0l
52YBEzcI0l
52YBEzcI0l
[ "~Sho_Takase2", "~Shun_Kiyono1", "~Sosuke_Kobayashi1", "~Jun_Suzuki1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0430d3f7ddc8fac915136b722584c5d9bc77384a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "stable training", "llm", "pre-training" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ takase2025spike, title={Spike No More: Stabilizing the Pre-training of Large Language Models}, author={Sho Takase and Shun Kiyono and Sosuke Kobayashi and Jun Suzuki}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=52YBEzcI0l} }
takase|spike_no_more_stabilizing_the_pretraining_of_large_language_models
null
null
null
null
null
Resona: Improving Context Copying in Linear Recurrence Models with Retrieval
We improve the in-context performance of linear recurrent models by augmenting them with a parallel cross-attention branch that can mix in information from the context.
Recent shifts in the space of large language model (LLM) research have shown an increasing focus on novel architectures to compete with prototypical Transformer-based models that have long dominated this space. Linear recurrent models have proven to be a viable competitor due to their computational efficiency. However, such models still demonstrate a sizeable gap compared to Transformers in terms of in-context learning among other tasks that require recalling information from a context. In this work, we introduce __Resona__, a simple and scalable framework for augmenting linear recurrent models with retrieval. __Resona__ augments models with the ability to integrate retrieved information from the provided input context, enabling tailored behaviour to diverse task requirements. Experiments on a variety of linear recurrent models demonstrate that __Resona__-augmented models observe significant performance gains on a variety of synthetic as well as real-world natural language tasks, highlighting its ability to act as a general purpose method to improve the in-context learning and language modelling abilities of linear recurrent LLMs.
[ "Xinyu Wang", "Linrui Ma", "Jerry Huang", "Peng Lu", "Prasanna Parthasarathi", "Xiao-Wen Chang", "Boxing Chen", "Yufei Cui" ]
https://openreview.net/forum?id=4mxQmpnawk
4mxQmpnawk
4mxQmpnawk
[ "~Xinyu_Wang17", "~Linrui_Ma1", "~Jerry_Huang1", "~Peng_Lu6", "~Prasanna_Parthasarathi2", "~Xiao-Wen_Chang1", "~Boxing_Chen1", "~Yufei_Cui3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/d9e4cf3a98ef99ae8e685b66d9a9665fbcfd760d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Linear Recurrent Models", "In-Context Learning", "Retrieval" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025resona, title={Resona: Improving Context Copying in Linear Recurrence Models with Retrieval}, author={Xinyu Wang and Linrui Ma and Jerry Huang and Peng Lu and Prasanna Parthasarathi and Xiao-Wen Chang and Boxing Chen and Yufei Cui}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=4mxQmpnawk} }
wang|resona_improving_context_copying_in_linear_recurrence_models_with_retrieval
null
null
null
null
null
L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning
We propose Length Controlled Policy Optimization (LCPO), a simple reinforcement learning method that gives reasoning language models adaptive control over the length using just a prompt.
Reasoning language models have shown an uncanny ability to improve performance at test-time by ``thinking longer''—that is, by generating longer chain-of-thought sequences and hence using more compute. However, the length of their chain-of-thought reasoning is not controllable, making it impossible to allocate test-time compute to achieve a desired level of performance. We introduce Length Controlled Policy Optimization (LCPO), a simple reinforcement learning method that optimizes for accuracy and adherence to user-specified length constraints. We use LCPO to train L1, a reasoning language model that produces outputs satisfying a length constraint given in its prompt. L1's length control allows for smoothly trading off computational cost and accuracy on a wide range of tasks, and outperforms the state-of-the-art S1 method for length control. Furthermore, we uncover an unexpected short chain-of-thought capability in models trained with LCPO. Specifically, using LCPO we derive Short Reasoning Models (SRMs), that exhibit similar reasoning patterns as full-length reasoning models, but can generate CoT lengths comparable to non-reasoning models. They demonstrate significant performance gains, for instance, our 1.5B L1 model surpasses GPT-4o at equal reasoning lengths. Overall, LCPO enables precise control over reasoning length, allowing for fine-grained allocation of test-time compute and accuracy.
[ "Pranjal Aggarwal", "Sean Welleck" ]
https://openreview.net/forum?id=4jdIxXBNve
4jdIxXBNve
4jdIxXBNve
[ "~Pranjal_Aggarwal1", "~Sean_Welleck1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/572910f8542cea8933faeadcd3eb510530d38319.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "reasoning llms", "controllability", "test-time comptue" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ aggarwal2025l, title={L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning}, author={Pranjal Aggarwal and Sean Welleck}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=4jdIxXBNve} }
aggarwal|l1_controlling_how_long_a_reasoning_model_thinks_with_reinforcement_learning
null
true
null
null
null
Law of Vision Representation in MLLMs
MLLM performance is correlated with the cross-modal alignment and correspondence of its vision representation
We introduce the "Law of Vision Representation" in multimodal large language models (MLLMs), revealing a strong correlation among cross-modal alignment, vision representation correspondence, and overall model performance. We quantify the these factors using the cross-modal Alignment and Correspondence score. Extensive experiments across fifteen distinct vision representation settings and evaluations on eight benchmarks show that the A and C scores correlate with performance following a quadratic relationship. By leveraging this relationship, we can identify and train the optimal vision representation for an MLLM, achieving a 99.7% reduction in computational cost without the need for repeated finetuning of the language model.
[ "Shijia Yang", "Bohan Zhai", "Quanzeng You", "Jianbo Yuan", "Hongxia Yang", "Chenfeng Xu" ]
https://openreview.net/forum?id=4d69EwfKAr
4d69EwfKAr
4d69EwfKAr
[ "~Shijia_Yang1", "~Bohan_Zhai1", "~Quanzeng_You3", "~Jianbo_Yuan1", "~Hongxia_Yang2", "~Chenfeng_Xu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8fd117276262b99f92acf38f2bdfb1fa56b0b2cf.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Multimodality Large Language Models; Computer Vision; Vision Representation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yang2025law, title={Law of Vision Representation in {MLLM}s}, author={Shijia Yang and Bohan Zhai and Quanzeng You and Jianbo Yuan and Hongxia Yang and Chenfeng Xu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=4d69EwfKAr} }
yang|law_of_vision_representation_in_mllms
null
null
null
null
null
Speculative Thinking: Enhancing Small-Model Reasoning with Large Model Guidance at Inference Time
A training-free framework that enables large reasoning models to guide smaller ones during inference at the reasoning level, distinct from speculative decoding, which operates at the token level
Recent advances leverage post-training to enhance model reasoning performance, which typically requires costly training pipelines and still suffers from inefficient, overly lengthy outputs. We introduce **Speculative Thinking**, a training-free framework that enables large reasoning models to guide smaller ones during inference at the reasoning level, distinct from speculative decoding, which operates at the token level. Our approach is based on two observations: (1) reasoning-supportive tokens such as "wait" frequently appear after structural delimiters like "\n\n", serving as signals for reflection or continuation; and (2) larger models exhibit stronger control over reflective behavior, reducing unnecessary backtracking while improving reasoning quality. By strategically delegating reflective steps to a more capable model, our method significantly boosts the reasoning accuracy of reasoning models while shortening their output. With the assistance of the 32B reasoning model, the 1.5B model’s accuracy on MATH500 increases from 83.2\% to 89.4\%, marking a substantial improvement of 6.2\%. Simultaneously, the average output length is reduced from 5439 tokens to 4583 tokens, representing a 15.7\% decrease. Moreover, when applied to a non-reasoning model (Qwen-2.5-7B-Instruct), our framework boosts its accuracy from 74.0\% to 81.8\% on the same benchmark, achieving a relative improvement of 7.8\%.
[ "Van Yang", "Xiang Yue", "Vipin Chaudhary", "Xiaotian Han" ]
https://openreview.net/forum?id=4Ns18bSoHo
4Ns18bSoHo
4Ns18bSoHo
[ "~Van_Yang1", "~Xiang_Yue1", "~Vipin_Chaudhary2", "~Xiaotian_Han1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/feedba6dcc4febff9424a211c565dbf51d6d22ad.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM Reasoning; LLM Inference; Speculative Decoding" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yang2025speculative, title={Speculative Thinking: Enhancing Small-Model Reasoning with Large Model Guidance at Inference Time}, author={Van Yang and Xiang Yue and Vipin Chaudhary and Xiaotian Han}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=4Ns18bSoHo} }
yang|speculative_thinking_enhancing_smallmodel_reasoning_with_large_model_guidance_at_inference_time
null
null
null
null
null
Privately Learning from Graphs with Applications in Fine-tuning Large Language Models
We present a privacy-preserving framework for relational learning, showcased by fine-tuning LLMs on sensitive graphs with differential privacy.
Graphs offer unique insights into relationships between entities, complementing data modalities like text and images and enabling AI models to extend their capabilities beyond traditional tasks. However, learning from graphs often involves handling sensitive relations, raising significant privacy concerns. Existing privacy-preserving methods, such as DP-SGD, rely on gradient decoupling assumptions and are incompatible with relational learning due to the inherent dependencies between training samples. To address this challenge, we propose a privacy-preserving pipeline for relational learning that decouples dependencies in sampled relations during training, ensuring differential privacy through a tailored application of DP-SGD. We apply this approach to fine-tune large language models (LLMs), such as BERT and Llama2, on sensitive graph data while addressing the associated computational complexities. Our method is evaluated on four real-world text-attributed graphs, demonstrating significant improvements in relational learning tasks while maintaining robust privacy guarantees. Additionally, we analyze the trade-offs between privacy, utility, and computational efficiency, offering insights into the practical deployment of our approach for privacy-preserving relational learning. Code is available at https://github.com/Graph-COM/PvGaLM.
[ "Haoteng Yin", "Rongzhe Wei", "Eli Chien", "Pan Li" ]
https://openreview.net/forum?id=3xErKrVAdG
3xErKrVAdG
3xErKrVAdG
[ "~Haoteng_Yin1", "~Rongzhe_Wei1", "~Eli_Chien1", "~Pan_Li2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f4a729bda078e3dfbf5db1e035fb03e1a2322066.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "differential privacy", "relational learning", "private learning", "language models", "fine-tuning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yin2025privately, title={Privately Learning from Graphs with Applications in Fine-tuning Large Language Models}, author={Haoteng Yin and Rongzhe Wei and Eli Chien and Pan Li}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=3xErKrVAdG} }
yin|privately_learning_from_graphs_with_applications_in_finetuning_large_language_models
/attachment/ed5d9a838d52f64768239444d7c642ba54b0ab31.zip
null
null
null
null
One ruler to measure them all: Benchmarking multilingual long-context language models
ONERULER is a multilingual benchmark for evaluating long-context LLMs across 26 languages, extending RULER beyond English. It aims to assess model performance in diverse linguistic settings using seven tasks, including detecting absent information.
We present ONERULER, a multilingual benchmark designed to evaluate long-context language models across 26 languages. ONERULER adapts the English-only RULER benchmark (Hsieh et al., 2024) by including seven synthetic tasks that test both retrieval and aggregation, including new variations of the "needle-in-a-haystack" task that allow for the possibility of a nonexistent needle. We create ONERULER through a two-step process, first writing English instructions for each task and then collaborating with native speakers to translate them into 25 additional languages. Experiments with both open-weight and closed LLMs reveal a widening performance gap between low- and high-resource languages as context length increases from 8K to 128K tokens. Surprisingly, English is not the top-performing language on long-context tasks (ranked 6th out of 26), with Polish emerging as the top language. Our experiments also show that many LLMs (particularly OpenAI's o3-mini-high) incorrectly predict the absence of an answer, even in high-resource languages. Finally, in cross-lingual scenarios where instructions and context appear in different languages, performance can fluctuate by up to 20% depending on the instruction language. We hope the release of ONERULER will facilitate future research into improving multilingual and cross-lingual long-context training pipelines.
[ "Yekyung Kim", "Jenna Russell", "Marzena Karpinska", "Mohit Iyyer" ]
https://openreview.net/forum?id=3vxxB3Ar9r
3vxxB3Ar9r
3vxxB3Ar9r
[ "~Yekyung_Kim1", "~Jenna_Russell1", "~Marzena_Karpinska1", "~Mohit_Iyyer1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/92e3dc9717e16c3eef5a257286d5143e9b5083dc.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Multilingual", "Benchmark", "Long-context", "Synthetic dataset" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kim2025one, title={One ruler to measure them all: Benchmarking multilingual long-context language models}, author={Yekyung Kim and Jenna Russell and Marzena Karpinska and Mohit Iyyer}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=3vxxB3Ar9r} }
kim|one_ruler_to_measure_them_all_benchmarking_multilingual_longcontext_language_models
null
null
null
null
null
Verifying the Verifiers: Unveiling Pitfalls and Potentials in Fact Verifiers
We share three findings intended to guide future development of more robust fact verifiers.
Fact verification is essential for ensuring the reliability of LLM applications. In this study, we evaluate 13 different fact verification models, including frontier LLMs and open-weight reasoning LLMs, using a collection of examples from 14 fact-checking benchmarks. We share three findings intended to guide future development of more robust fact verifiers. First, we highlight the importance of addressing annotation errors and ambiguity in datasets, demonstrating that the approximately 16\% of data that is ambiguous or incorrectly labeled substantially influences model rankings. Neglecting this issue may result in misleading conclusions during comparative evaluations, and we suggest using a systematic pipeline utilizing LLM-as-a-judge to help identify these issues at scale. Second, we discover that frontier LLMs with few-shot in-context examples, often overlooked in previous works, achieve top-tier performance. We therefore recommend that future studies include comparisons with these simple yet highly effective baselines. Lastly, despite their effectiveness, frontier LLMs incur substantial costs, motivating the development of small, fine-tuned fact verifiers. We show that these small models still have room for improvement, particularly on instances that require complex reasoning. Encouragingly, we demonstrate that augmenting training with synthetic multi-hop reasoning data significantly enhances their capabilities in such instances.
[ "Wooseok Seo", "Seungju Han", "Jaehun Jung", "Benjamin Newman", "Seungwon Lim", "Seungbeen Lee", "Ximing Lu", "Yejin Choi", "Youngjae Yu" ]
https://openreview.net/forum?id=3NjnRo6apU
3NjnRo6apU
3NjnRo6apU
[ "~Wooseok_Seo1", "~Seungju_Han2", "~Jaehun_Jung1", "~Benjamin_Newman1", "~Seungwon_Lim1", "~Seungbeen_Lee1", "~Ximing_Lu1", "~Yejin_Choi1", "~Youngjae_Yu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/1cefeb46b8f9ae83c817b0f583d3daaf12f5e144.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "factuality", "fact verifier", "attribution evaluation", "LLM hallucination" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ seo2025verifying, title={Verifying the Verifiers: Unveiling Pitfalls and Potentials in Fact Verifiers}, author={Wooseok Seo and Seungju Han and Jaehun Jung and Benjamin Newman and Seungwon Lim and Seungbeen Lee and Ximing Lu and Yejin Choi and Youngjae Yu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=3NjnRo6apU} }
seo|verifying_the_verifiers_unveiling_pitfalls_and_potentials_in_fact_verifiers
/attachment/eb70f4621cb8b5fcdd29c5acc930ae032dfcfcae.zip
null
null
null
null
SmolLM2: When Smol Goes Big — Data-Centric Training of a Fully Open Small Language Model
SmolLM2 is a fully open 1.7B parameter LM that achieves state-of-the-art performance through multi-stage training on diverse high-quality data and is released alongside new math, code and instruction tuning datasets.
Large language models, while groundbreaking, are computationally expensive and difficult to deploy in resource-constrained settings. To address this challenge, small language models have emerged, but their performance critically depends on the quality and composition of the pretraining datasets—yet many recent models, such as Qwen2.5-1.5B and Llama3.2-1B, remain opaque about their training data, limiting reproducibility and scientific understanding. In this paper, we document and publicly release SmolLM2, a fully transparent state-of-the-art ``small'' (1.7 billion parameter) language model (LM), along with its training datasets and code. To attain strong performance, we overtrain SmolLM2 on 11 trillion tokens of data using a multi-stage training process that mixes web text with specialized math, code, and instruction-following data. We additionally curate and release new specialized datasets (FineMath, Stack-Edu, and SmolTalk) at stages where we found existing datasets to be problematically small or low-quality. To inform our design decisions, we perform both small-scale ablations and a manual refinement process that updates the dataset mixing rates at each stage based on the performance at the previous one. Ultimately, we demonstrate that SmolLM2 outperforms other recent small LMs including Qwen2.5-1.5B, Llama3.2-1B, and Falcon3-1.6B. By releasing our model, datasets, and code, we aim to facilitate future research on LM development as well as applications of small LMs.
[ "Loubna Ben allal", "Anton Lozhkov", "Elie Bakouch", "Gabriel Martin Blazquez", "Guilherme Penedo", "Lewis Tunstall", "Andrés Marafioti", "Agustín Piqueres Lajarín", "Hynek Kydlíček", "Vaibhav Srivastav", "Joshua Lochner", "Caleb Fahlgren", "Xuan Son NGUYEN", "Ben Burtenshaw", "Clémentine Fourrier", "Haojun Zhao", "Hugo Larcher", "Mathieu Morlon", "Cyril Zakka", "Colin Raffel", "Leandro Von Werra", "Thomas Wolf" ]
https://openreview.net/forum?id=3JiCl2A14H
3JiCl2A14H
3JiCl2A14H
[ "~Loubna_Ben_allal1", "~Anton_Lozhkov1", "~Elie_Bakouch1", "~Gabriel_Martin_Blazquez1", "~Guilherme_Penedo1", "~Lewis_Tunstall1", "~Andrés_Marafioti1", "~Agustín_Piqueres_Lajarín1", "~Hynek_Kydlíček1", "~Vaibhav_Srivastav2", "~Joshua_Lochner1", "~Caleb_Fahlgren1", "~Xuan_Son_NGUYEN3", "~Ben_Burtenshaw1", "~Clémentine_Fourrier1", "~Haojun_Zhao1", "~Hugo_Larcher1", "~Mathieu_Morlon1", "~Cyril_Zakka1", "~Colin_Raffel1", "~Leandro_Von_Werra1", "~Thomas_Wolf1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/189f4f29934315320072913ebc2e106d51581365.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "small language models", "dataset", "pretraining" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ allal2025smollm, title={Smol{LM}2: When Smol Goes Big {\textemdash} Data-Centric Training of a Fully Open Small Language Model}, author={Loubna Ben allal and Anton Lozhkov and Elie Bakouch and Gabriel Martin Blazquez and Guilherme Penedo and Lewis Tunstall and Andr{\'e}s Marafioti and Agust{\'\i}n Piqueres Lajar{\'\i}n and Hynek Kydl{\'\i}{\v{c}}ek and Vaibhav Srivastav and Joshua Lochner and Caleb Fahlgren and Xuan Son NGUYEN and Ben Burtenshaw and Cl{\'e}mentine Fourrier and Haojun Zhao and Hugo Larcher and Mathieu Morlon and Cyril Zakka and Colin Raffel and Leandro Von Werra and Thomas Wolf}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=3JiCl2A14H} }
allal|smollm2_when_smol_goes_big_datacentric_training_of_a_fully_open_small_language_model
null
null
null
null
null
Fast Controlled Generation from Language Models with Adaptive Weighted Rejection Sampling
We introduce a fast, principled, and adaptive sampler for controlled generation.
The dominant approach to generating from language models subject to some constraint is locally constrained decoding (LCD), incrementally sampling tokens at each time step such that the constraint is never violated. Typically, this is achieved through token masking: looping over the vocabulary and excluding non-conforming tokens. There are two important problems with this approach. (i) Evaluating the constraint on every token can be prohibitively expensive---LM vocabularies often exceed 100,000 tokens. (ii) LCD can distort the global distribution over strings, sampling tokens based only on local information, even if they lead down dead-end paths. This work introduces a new algorithm that addresses both these problems. First, to avoid evaluating a constraint on the full vocabulary at each step of generation, we propose an adaptive rejection sampling algorithm that typically requires orders of magnitude fewer constraint evaluations. Second, we show how this algorithm can be extended to produce low-variance, unbiased estimates of importance weights at a very small additional cost---estimates that can be soundly used within previously proposed sequential Monte Carlo algorithms to correct for the myopic behavior of local constraint enforcement. Through extensive empirical evaluation in text-to-SQL, molecular synthesis, goal inference, pattern matching, and JSON domains, we show that our approach is superior to state-of-the-art baselines, supporting a broader class of constraints and improving both runtime and performance. Additional theoretical and empirical analyses show that our method's runtime efficiency is driven by its dynamic use of computation, scaling with the divergence between the unconstrained and constrained LM, and as a consequence, runtime improvements are greater for better models.
[ "Ben Lipkin", "Benjamin LeBrun", "Jacob Hoover Vigly", "João Loula", "David R. MacIver", "Li Du", "Jason Eisner", "Ryan Cotterell", "Vikash Mansinghka", "Timothy J. O'Donnell", "Alexander K. Lew", "Tim Vieira" ]
https://openreview.net/forum?id=3BmPSFAdq3
3BmPSFAdq3
3BmPSFAdq3
[ "~Ben_Lipkin1", "~Benjamin_LeBrun1", "~Jacob_Hoover_Vigly1", "~João_Loula1", "~David_R._MacIver1", "~Li_Du2", "~Jason_Eisner1", "~Ryan_Cotterell1", "~Vikash_Mansinghka1", "~Timothy_J._O'Donnell1", "~Alexander_K._Lew1", "~Tim_Vieira1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0fd0c7cec5b1ac789e2786eed46bfde5915c41f2.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Controlled generation", "Bayesian inference", "Approximate sampling", "Decoding methods" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lipkin2025fast, title={Fast Controlled Generation from Language Models with Adaptive Weighted Rejection Sampling}, author={Ben Lipkin and Benjamin LeBrun and Jacob Hoover Vigly and Jo{\~a}o Loula and David R. MacIver and Li Du and Jason Eisner and Ryan Cotterell and Vikash Mansinghka and Timothy J. O'Donnell and Alexander K. Lew and Tim Vieira}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=3BmPSFAdq3} }
lipkin|fast_controlled_generation_from_language_models_with_adaptive_weighted_rejection_sampling
null
true
null
null
null
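For the record above ("Fast Controlled Generation from Language Models with Adaptive Weighted Rejection Sampling"), the following minimal Python sketch illustrates the core idea described in the abstract: sample from the unconstrained next-token distribution, check the constraint only on tokens that are actually drawn, and adaptively remove rejected tokens so they are never proposed again. The function name, the toy constraint, and the simple rejected-mass bookkeeping are illustrative assumptions only; in particular, 1 - rejected_mass merely upper-bounds the allowed probability mass at a step and is not the paper's low-variance unbiased importance-weight estimator.

import numpy as np

def constrained_step(probs, is_allowed, rng):
    """Sample one token subject to a constraint via adaptive rejection sampling.

    probs      : unconstrained next-token probabilities (1-D numpy array).
    is_allowed : callable token_id -> bool, the (possibly expensive) constraint check.
    Returns (token_id, n_constraint_checks, rejected_mass).

    Unlike full-vocabulary token masking, the constraint is evaluated only on
    sampled tokens, so high-probability allowed tokens are usually accepted
    after very few checks.
    """
    p = probs.astype(float).copy()
    rejected_mass = 0.0
    checks = 0
    while True:
        token = int(rng.choice(len(p), p=p / p.sum()))
        checks += 1
        if is_allowed(token):
            return token, checks, rejected_mass
        # Adaptive step: zero out the rejected token so it is never proposed again.
        # (A production implementation must also handle the case where every token
        # turns out to be disallowed.)
        rejected_mass += p[token]
        p[token] = 0.0

# Toy usage: pretend only even token ids satisfy the constraint.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(50_000))  # stand-in for an LM's softmax output
tok, checks, rej = constrained_step(probs, lambda t: t % 2 == 0, rng)
print(tok, checks, rej)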
RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale
RADLADS is a process for rapidly converting transformers into linear attention decoder models, and we release a set of SoTA models converted to custom RWKV variants via this process. It costs less than $2000 USD to convert a 72B model.
We present Rapid Attention Distillation to Linear Attention Decoders at Scale (RADLADS), a protocol for rapidly converting softmax attention transformers into linear attention decoder models, along with two new RWKV-variant architectures, and models converted from popular Qwen2.5 open source models in 7B, 32B, and 72B sizes. Our conversion process requires only 350-700M tokens, less than 0.005% of the token count used to train the original teacher models. Converting to our 72B linear attention model costs less than $2,000 USD at today's prices, yet quality at inference remains close to the original transformer. These models achieve state-of-the-art downstream performance across a set of standard benchmarks for linear attention models of their size. We release all our code on GitHub and models on HuggingFace under the Apache 2.0 license, with the exception of our 72B models, which are also governed by the Qwen License Agreement. (An illustrative code sketch of the linear-attention distillation idea follows this record.)
[ "Daniel Goldstein", "Eric Alcaide", "Janna Lu", "Eugene Cheah" ]
https://openreview.net/forum?id=38GehGepDd
38GehGepDd
38GehGepDd
[ "~Daniel_Goldstein2", "~Eric_Alcaide2", "~Janna_Lu1", "~Eugene_Cheah1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e3a50ca00241dc86b97a2794701e3ddae96cc09d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "RADLAD", "RADLADS", "RWKV", "Linear Attention", "Conversion", "LLM", "SUPRA", "MOHAWK", "LolCATs", "Hedgehog", "DiJiang", "Mamba in the Llama" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ goldstein2025radlads, title={{RADLADS}: Rapid Attention Distillation to Linear Attention Decoders at Scale}, author={Daniel Goldstein and Eric Alcaide and Janna Lu and Eugene Cheah}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=38GehGepDd} }
goldstein|radlads_rapid_attention_distillation_to_linear_attention_decoders_at_scale
null
null
null
null
null
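For the RADLADS record above, the following schematic PyTorch sketch illustrates the two ingredients the abstract describes: a linear-attention token mixer that replaces quadratic softmax attention, and a distillation step that trains the replacement to reproduce the teacher block's outputs on the same hidden states. The module interfaces, the ELU feature map, and the MSE objective are illustrative assumptions only; the released RADLADS code uses custom RWKV-variant mixers and its own multi-stage training recipe.

import torch
import torch.nn.functional as F

def causal_linear_attention(q, k, v):
    # q, k, v: (batch, time, dim). A positive feature map replaces softmax, and a
    # causal cumulative sum replaces the T x T attention matrix, giving O(T) decoding.
    phi = lambda x: F.elu(x) + 1.0
    q, k = phi(q), phi(k)
    kv = torch.cumsum(torch.einsum("btd,bte->btde", k, v), dim=1)  # running sum of outer products k_t v_t^T
    z = torch.cumsum(k, dim=1)                                      # running normalizer
    num = torch.einsum("btd,btde->bte", q, kv)
    den = torch.einsum("btd,btd->bt", q, z).unsqueeze(-1) + 1e-6
    return num / den

def distill_step(student_mixer, teacher_block, hidden_states, optimizer):
    # Hypothetical stage-one objective: feed the same hidden states through the
    # teacher's softmax-attention block and the student's linear-attention mixer,
    # and match the outputs, so the student inherits the teacher's attention
    # behavior from a relatively small token budget.
    with torch.no_grad():
        target = teacher_block(hidden_states)
    pred = student_mixer(hidden_states)
    loss = F.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()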