VoladorLuYu's Collection: Symbolic LLM Reasoning
• CRUXEval: A Benchmark for Code Reasoning, Understanding and Execution (arXiv:2401.03065, 11 upvotes)
• DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence (arXiv:2401.14196, 70 upvotes)
• WaveCoder: Widespread And Versatile Enhanced Instruction Tuning with Refined Data Generation (arXiv:2312.14187, 49 upvotes)
• On the Effectiveness of Large Language Models in Domain-Specific Code Generation (arXiv:2312.01639, 2 upvotes)
• AST-T5: Structure-Aware Pretraining for Code Generation and Understanding (arXiv:2401.03003, 14 upvotes)
• Magicoder: Source Code Is All You Need (arXiv:2312.02120, 82 upvotes)
• InstructCoder: Empowering Language Models for Code Editing (arXiv:2310.20329, 2 upvotes)
• Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions (arXiv:2312.12450, 1 upvote)
• LLM-Assisted Code Cleaning For Training Accurate Code Generators (arXiv:2311.14904, 5 upvotes)
• The Program Testing Ability of Large Language Models for Code (arXiv:2310.05727, 2 upvotes)
• Binding Language Models in Symbolic Languages (arXiv:2210.02875, 1 upvote)
• Small LLMs Are Weak Tool Learners: A Multi-LLM Agent (arXiv:2401.07324, 3 upvotes)
• From Good to Great: Improving Math Reasoning with Tool-Augmented Interleaf Prompting (arXiv:2401.05384)
• T-Eval: Evaluating the Tool Utilization Capability Step by Step (arXiv:2312.14033, 2 upvotes)
• Chain-of-Thought Reasoning Without Prompting (arXiv:2402.10200, 109 upvotes)
• Deductive Beam Search: Decoding Deducible Rationale for Chain-of-Thought Reasoning (arXiv:2401.17686, 1 upvote)
• Large Language Models Are Neurosymbolic Reasoners (arXiv:2401.09334, 3 upvotes)
• PathFinder: Guided Search over Multi-Step Reasoning Paths (arXiv:2312.05180, 10 upvotes)
• Interpreting Pretrained Language Models via Concept Bottlenecks (arXiv:2311.05014, 1 upvote)
• Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping (arXiv:2402.14083, 47 upvotes)
• ReWOO: Decoupling Reasoning from Observations for Efficient Augmented Language Models (arXiv:2305.18323, 1 upvote)
• Why think step by step? Reasoning emerges from the locality of experience (arXiv:2304.03843)
• Chain-of-Instructions: Compositional Instruction Tuning on Large Language Models (arXiv:2402.11532, 1 upvote)
• Do Large Language Models Latently Perform Multi-Hop Reasoning? (arXiv:2402.16837, 29 upvotes)
• CodeS: Towards Building Open-source Language Models for Text-to-SQL (arXiv:2402.16347, 1 upvote)
• StarCoder 2 and The Stack v2: The Next Generation (arXiv:2402.19173, 152 upvotes)
• Common 7B Language Models Already Possess Strong Math Capabilities (arXiv:2403.04706, 18 upvotes)
• Inference via Interpolation: Contrastive Representations Provably Enable Planning and Inference (arXiv:2403.04082)
• Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking (arXiv:2403.09629, 79 upvotes)
• Boosting of Thoughts: Trial-and-Error Problem Solving with Large Language Models (arXiv:2402.11140)
• Structured Chain-of-Thought Prompting for Code Generation (arXiv:2305.06599, 1 upvote)
• STaR-GATE: Teaching Language Models to Ask Clarifying Questions (arXiv:2403.19154)
• LLM-R2: A Large Language Model Enhanced Rule-based Rewrite System for Boosting Query Efficiency (arXiv:2404.12872, 11 upvotes)
• Iterative Reasoning Preference Optimization (arXiv:2404.19733, 49 upvotes)
• Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models (arXiv:2406.04271, 29 upvotes)
• On Memorization of Large Language Models in Logical Reasoning (arXiv:2410.23123, 18 upvotes)