Model Card for ProRAG

This model is a fine-tuned version of Qwen/Qwen3-8B, trained following the methodology described in the associated paper (arXiv:2601.21912).

Model Details

  • Base Model: Qwen/Qwen3-8B
  • Model size: 8B parameters (BF16, Safetensors)
  • Language: English, Chinese (and others supported by Qwen3)
  • Paper: arXiv:2601.21912 (https://arxiv.org/abs/2601.21912)
  • Library: Transformers

💻 Code & Inference

For inference code, usage examples, and reproduction scripts, please refer to our GitHub repository:

👉 Click here to view the GitHub Repository

(Please verify the details and instructions on the GitHub page.)
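Until you consult the repository, the sketch below shows one way to load the checkpoint. It assumes the model loads with the standard Transformers causal-LM API like its Qwen3-8B base, and that the model id is bmbgsj/ProRAG; the retrieved-passage prompt format used here is a placeholder, not the format defined in the paper, so adapt it to the instructions in the GitHub repository.

```python
# Minimal inference sketch: load ProRAG with Transformers and run one
# RAG-style query. The prompt layout for retrieved context below is a
# hypothetical example; check the GitHub repo for the exact format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bmbgsj/ProRAG"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Hypothetical retrieved passage: in practice this comes from your retriever.
context = "Passage 1: The Eiffel Tower was completed in 1889."
question = "When was the Eiffel Tower completed?"
messages = [
    {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}
]

# Build the prompt with the model's chat template, then generate an answer.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```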

Citation

If you use this model or the associated paper in your research, please cite:

```bibtex
@misc{wang2026proragprocesssupervisedreinforcementlearning,
      title={ProRAG: Process-Supervised Reinforcement Learning for Retrieval-Augmented Generation},
      author={Zhao Wang and Ziliang Zhao and Zhicheng Dou},
      year={2026},
      eprint={2601.21912},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2601.21912},
}
```