Verifying Chain-of-Thought Reasoning via Its Computational Graph
This repository provides the TopK transcoder checkpoints used in the paper “Verifying Chain-of-Thought Reasoning via Its Computational Graph”.
The transcoders are trained on Llama 3.1 8B Instruct using the TopK transcoder method described in the paper.
To run the model, you need the Circuit Tracer library, available from the project page:
https://github.com/zsquaredz/circuit-tracer
Note that this is a fork of the original library, since the original does not yet support TopK transcoders.
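Assuming the fork can be installed directly from GitHub with pip (the exact packaging is not stated in this card, so check the repository's own instructions), installation might look like:

```shell
# Install the circuit-tracer fork with TopK transcoder support
pip install git+https://github.com/zsquaredz/circuit-tracer.git
```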
After installing the library, you can load and run the transcoder as shown below.
```python
from circuit_tracer import ReplacementModel
import torch

# Load the TopK transcoders into a ReplacementModel
model = ReplacementModel.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    "facebook/crv-8b-instruct-transcoders",
    dtype=torch.bfloat16,
)
```
Once the model is loaded, you can perform attribution or intervention as shown in this demo.
If you use this model, please cite our paper:
```bibtex
@article{zhao2025verifying,
  title={Verifying Chain-of-Thought Reasoning via Its Computational Graph},
  author={Zheng Zhao and Yeskendir Koishekenov and Xianjun Yang and Naila Murray and Nicola Cancedda},
  year={2025},
  eprint={2510.09312},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2510.09312},
}
```
Base model
meta-llama/Llama-3.1-8B