arxiv:2601.20614

Harder Is Better: Boosting Mathematical Reasoning via Difficulty-Aware GRPO and Multi-Aspect Question Reformulation

Published on Jan 28 · Submitted by Yanqi Dai on Jan 29
#1 Paper of the day

AI-generated summary

MathForge enhances mathematical reasoning in large models through a dual framework combining difficulty-aware policy optimization and multi-aspect question reformulation to address limitations in existing reinforcement learning methods.

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) offers a robust mechanism for enhancing mathematical reasoning in large models. However, we identify a systematic lack of emphasis on more challenging questions in existing methods from both algorithmic and data perspectives, despite their importance for refining underdeveloped capabilities. Algorithmically, the widely used Group Relative Policy Optimization (GRPO) suffers from an implicit imbalance where the magnitude of policy updates is lower for harder questions. Data-wise, augmentation approaches primarily rephrase questions to enhance diversity without systematically increasing intrinsic difficulty. To address these issues, we propose the dual MathForge framework, which improves mathematical reasoning by targeting harder questions from both perspectives and comprises a Difficulty-Aware Group Policy Optimization (DGPO) algorithm and a Multi-Aspect Question Reformulation (MQR) strategy. Specifically, DGPO first rectifies the implicit imbalance in GRPO via difficulty-balanced group advantage estimation, and further prioritizes harder questions by difficulty-aware question-level weighting. Meanwhile, MQR reformulates questions across multiple aspects to increase difficulty while maintaining the original gold answer. Overall, MathForge forms a synergistic loop: MQR expands the data frontier, and DGPO effectively learns from the augmented data. Extensive experiments show that MathForge significantly outperforms existing methods on various mathematical reasoning tasks. The code and augmented data are all available at https://github.com/AMAP-ML/MathForge.
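To make the algorithmic imbalance concrete, here is a minimal NumPy sketch. It illustrates standard GRPO group normalization under the usual binary-reward assumption; it is not the authors' released code, and `difficulty_balanced_advantages` is a hypothetical rescaling that conveys the idea of difficulty balancing rather than the exact DGPO estimator from the paper.

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-6):
    """Standard GRPO: z-score the rewards within one rollout group."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

def difficulty_balanced_advantages(rewards, eps=1e-6):
    """Hypothetical illustration of difficulty balancing: rescale so the
    summed advantage magnitude no longer depends on the pass rate p.
    (Not the exact DGPO estimator; see the paper for the real method.)"""
    a = grpo_advantages(rewards, eps)
    return a * len(a) / (np.abs(a).sum() + eps)

# With G rollouts, binary rewards, and pass rate p, the summed advantage
# magnitude under vanilla GRPO is ~ 2G*sqrt(p(1-p)): largest at p = 0.5
# and vanishing as p -> 0, so hard questions drive smaller policy updates.
G = 20
for p in [0.1, 0.3, 0.5, 0.7, 0.9]:
    rewards = [1.0] * round(p * G) + [0.0] * (G - round(p * G))
    grpo_mag = np.abs(grpo_advantages(rewards)).sum()
    dgpo_mag = np.abs(difficulty_balanced_advantages(rewards)).sum()
    print(f"p={p:.1f}  GRPO sum|A|={grpo_mag:5.2f}  "
          f"2G*sqrt(p(1-p))={2 * G * np.sqrt(p * (1 - p)):5.2f}  "
          f"balanced sum|A|={dgpo_mag:5.2f}")
```

Running the loop shows the GRPO column tracking 2G√(p(1−p)) and peaking at p = 0.5, while the balanced column stays constant across difficulties.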

Community

Paper submitter

Accepted for ICLR 2026

The theoretical proof in the appendix does not establish the paper's main claim that GRPO focuses on problems of medium difficulty.


Hi! Thank you for your careful reading and insightful comment.
In Appendices B.2 and B.3, we show that the total policy update magnitude in GRPO can be well approximated by $2G\sqrt{p(1-p)}$, where $G$ is the group size and $p$ is the question's pass rate; this reaches its maximum at $p = 0.5$ (i.e., questions of medium difficulty).
We would be happy to discuss this further. Please feel free to contact us via email at [email protected].
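For readers following the thread, here is a compact version of that argument under the standard assumptions (binary rewards, group size $G$, pass rate $p$); this is a sketch of the intermediate step, not a substitute for the full analysis in Appendix B.2. The group mean is $p$ and the standard deviation is $\sqrt{p(1-p)}$, so the normalized advantages are

\[
A_{+} = \frac{1-p}{\sqrt{p(1-p)}}, \qquad A_{-} = \frac{-p}{\sqrt{p(1-p)}},
\]

and summing magnitudes over the roughly $Gp$ correct and $G(1-p)$ incorrect samples gives

\[
Gp\,\lvert A_{+}\rvert + G(1-p)\,\lvert A_{-}\rvert = \frac{2Gp(1-p)}{\sqrt{p(1-p)}} = 2G\sqrt{p(1-p)},
\]

which is maximized at $p = \tfrac{1}{2}$ because $\tfrac{d}{dp}\,p(1-p) = 1 - 2p$ vanishes there, and shrinks toward $0$ as $p \to 0$ or $p \to 1$.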

May I ask when you plan to open-source the algorithm? Thank you.


Hi, the algorithm code will be open-sourced within this week. Thank you.

This only gives an upper bound on the gradient updates; it cannot estimate the magnitude of each individual update.


You can refer to the last two paragraphs of the analysis in Appendix B.2; although the quantity is not a strictly accurate measure of the update magnitude, we believe it serves as a suitable approximation.

