HELM-BERT: A Transformer for Medium-sized Peptide Property Prediction
Paper: [arXiv:2512.23175](https://arxiv.org/abs/2512.23175)
A peptide language model using HELM (Hierarchical Editing Language for Macromolecules) notation, compatible with Hugging Face Transformers.
HELM-BERT is built on the DeBERTa architecture and pre-trained on ~75k peptides from four databases (ChEMBL, CREMP, CycPeptMPDB, Propedia) using masked language modeling (MLM) with a Warmup-Stable-Decay (WSD) learning rate schedule.

| Parameter | Value |
|---|---|
| Parameters | 54.8M |
| Hidden size | 768 |
| Layers | 6 |
| Attention heads | 12 |
| Vocab size | 78 |
| Max token length | 512 |
| Pre-training data | ~75k peptides (ChEMBL, CREMP, CycPeptMPDB, Propedia) |
| Pre-training objective | MLM (span masking, p=0.15) |
| LR schedule | Warmup-Stable-Decay (WSD) |
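The WSD schedule holds the learning rate constant between a linear warmup and a final decay. A minimal sketch of the shape (the step counts and linear-decay form here are illustrative assumptions, not the paper's exact hyperparameters):

```python
def wsd_lr(step: int, peak_lr: float, warmup_steps: int,
           stable_steps: int, decay_steps: int) -> float:
    """Warmup-Stable-Decay: linear warmup, constant plateau, linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    if step < warmup_steps + stable_steps:
        return peak_lr
    decay_progress = (step - warmup_steps - stable_steps) / decay_steps
    return peak_lr * max(0.0, 1.0 - decay_progress)
```

The long stable phase is the distinguishing feature versus cosine schedules: training can be extended or stopped flexibly, with the decay applied only at the end.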
```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("Flansma/helm-bert", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Flansma/helm-bert", trust_remote_code=True)

# Cyclosporine A in HELM notation
inputs = tokenizer(
    "PEPTIDE1{[Abu].[Sar].[meL].V.[meL].A.[dA].[meL].[meL].[meV].[Me_Bmt(E)]}$PEPTIDE1,PEPTIDE1,1:R1-11:R2$$$",
    return_tensors="pt",
)
outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # (batch, seq_len, hidden_size)
```
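To reduce `last_hidden_state` to one fixed-size vector per peptide, a common choice is attention-mask-aware mean pooling (a sketch; the model card does not specify the pooling used downstream, so this is an assumption):

```python
import torch

def mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings over the sequence, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
    summed = (last_hidden_state * mask).sum(dim=1)   # (batch, hidden_size)
    counts = mask.sum(dim=1).clamp(min=1.0)          # avoid division by zero
    return summed / counts

# e.g. peptide_embedding = mean_pool(outputs.last_hidden_state, inputs["attention_mask"])
```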
Pre-trained on deduplicated peptide sequences from ChEMBL, CREMP, CycPeptMPDB, and Propedia.
Single-Assay (mixed PAMPA/Caco-2 target):
| Split | R² | Pearson | RMSE | MAE |
|---|---|---|---|---|
| Random | 0.769 | 0.878 | 0.388 | 0.269 |
| Scaffold | 0.643 | 0.812 | 0.380 | 0.284 |
Multi-Assay (separate PAMPA and Caco-2 heads):
| Split | Assay | R² | Pearson | RMSE | MAE |
|---|---|---|---|---|---|
| Random | PAMPA | 0.711 | 0.844 | 0.426 | 0.298 |
| Random | Caco-2 | 0.772 | 0.878 | 0.402 | 0.305 |
| Scaffold | PAMPA | 0.584 | 0.788 | 0.393 | 0.299 |
| Scaffold | Caco-2 | 0.701 | 0.846 | 0.381 | 0.287 |
Train/test split 9:1, with 10% of the training set held out for validation. Scaffold splits use Bemis–Murcko scaffolds.
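The multi-assay setup above could be realized as a shared encoder feeding two separate regression heads, one per assay. A hypothetical sketch (not the paper's released fine-tuning code; hidden size 768 matches the table above):

```python
import torch
import torch.nn as nn

class MultiAssayHead(nn.Module):
    """Separate linear regression heads for PAMPA and Caco-2 on a shared pooled embedding."""
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.pampa_head = nn.Linear(hidden_size, 1)
        self.caco2_head = nn.Linear(hidden_size, 1)

    def forward(self, pooled: torch.Tensor) -> dict[str, torch.Tensor]:
        return {
            "pampa": self.pampa_head(pooled).squeeze(-1),
            "caco2": self.caco2_head(pooled).squeeze(-1),
        }
```

Separate heads let each assay keep its own output scale while the encoder is shared, which is one plausible reading of "separate PAMPA and Caco-2 heads".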

| Split | ROC-AUC | PR-AUC | F1 | MCC | Balanced Acc |
|---|---|---|---|---|---|
| Random | 0.972 | 0.912 | 0.859 | 0.824 | 0.911 |
| aCSM | 0.868 | 0.702 | 0.613 | 0.559 | 0.735 |
Train/test split 8:2, with 10% of the training set held out for validation; the dataset has a 1:4 positive-to-negative ratio.
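With a 1:4 class imbalance, MCC and balanced accuracy are more informative than raw accuracy. Both follow directly from the confusion matrix (a plain-NumPy sketch of the standard definitions):

```python
import numpy as np

def mcc_and_balanced_acc(y_true: np.ndarray, y_pred: np.ndarray) -> tuple[float, float]:
    """Matthews correlation coefficient and balanced accuracy for binary labels in {0, 1}."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    # balanced accuracy = mean of sensitivity and specificity
    balanced_acc = 0.5 * (tp / max(tp + fn, 1) + tn / max(tn + fp, 1))
    return float(mcc), float(balanced_acc)
```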

| Split | R² | Pearson | RMSE | MAE |
|---|---|---|---|---|
| Random | 0.312 | 0.600 | 0.742 | 0.499 |
| Scaffold | 0.006 | 0.236 | 0.632 | 0.551 |
Train/test split 8:2, with 10% of the training set held out for validation. Scaffold splits use Bemis–Murcko scaffolds.
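Scaffold splitting keeps every peptide sharing a scaffold in the same partition, so the test set probes generalization to unseen scaffolds rather than memorization of near-duplicates. A minimal group-aware split (a sketch with an illustrative greedy heuristic; in practice the scaffold keys would come from RDKit's Bemis–Murcko scaffold extraction, and the paper's exact procedure may differ):

```python
from collections import defaultdict

def scaffold_split(scaffolds: list[str], test_frac: float = 0.2) -> tuple[list[int], list[int]]:
    """Assign whole scaffold groups to test (smallest groups first) until test_frac is filled."""
    groups = defaultdict(list)
    for idx, scaffold in enumerate(scaffolds):
        groups[scaffold].append(idx)
    # visit largest groups first so they land in train; small, rare scaffolds fill test
    ordered = sorted(groups.values(), key=len, reverse=True)
    n_test_target = int(test_frac * len(scaffolds))
    train_idx, test_idx = [], []
    for group in ordered:
        if len(test_idx) + len(group) <= n_test_target:
            test_idx.extend(group)
        else:
            train_idx.extend(group)
    return train_idx, test_idx
```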

```bibtex
@article{lee2025helmbert,
  title={HELM-BERT: A Transformer for Medium-sized Peptide Property Prediction},
  author={Seungeon Lee and Takuto Koyama and Itsuki Maeda and Shigeyuki Matsumoto and Yasushi Okuno},
  journal={arXiv preprint arXiv:2512.23175},
  year={2025},
  url={https://arxiv.org/abs/2512.23175}
}
```
MIT License