The Scaling Properties of Implicit Deductive Reasoning in Transformers

arXiv cs.AI · May 7, 2026
transformers · deductive-reasoning · machine-learning · deep-learning

This study examines the scaling properties of implicit deductive reasoning in depth-bounded Transformers, focusing on Horn-clause inference. The authors show that sufficiently deep models equipped with a bidirectional prefix mask can match explicit Chain-of-Thought (CoT) reasoning on in-distribution problems, while CoT remains essential for extrapolating to greater reasoning depths across varied graph topologies and problem widths.
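To illustrate the bidirectional prefix mask mentioned above: prefix tokens (e.g., the premise set) attend to each other in both directions, while subsequent tokens attend causally. A minimal sketch in NumPy, with names and shapes chosen for illustration rather than taken from the paper:

```python
import numpy as np

def prefix_mask(seq_len: int, prefix_len: int) -> np.ndarray:
    """Boolean attention mask: True means attention is allowed.

    Positions [0, prefix_len) attend bidirectionally among themselves;
    all later positions attend causally (to themselves and earlier tokens).
    """
    # Standard causal (lower-triangular) mask.
    mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))
    # Make the prefix block fully bidirectional.
    mask[:prefix_len, :prefix_len] = True
    return mask
```

For example, `prefix_mask(5, 2)` lets position 0 attend to position 1 (within the prefix) but not to position 3, while position 3 still attends causally to everything at or before it.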
