HELM-BERT

A peptide language model using HELM (Hierarchical Editing Language for Macromolecules) notation, compatible with Hugging Face Transformers.


Model Description

HELM-BERT builds on the DeBERTa architecture and was pre-trained on ~75k peptides from four databases (ChEMBL, CREMP, CycPeptMPDB, Propedia) using masked language modeling (MLM) with a Warmup-Stable-Decay (WSD) learning-rate schedule. Key architectural features:

  • Disentangled Attention: Decomposes attention into content-content and content-position terms
  • Enhanced Mask Decoder (EMD): Injects absolute position embeddings at the decoder stage
  • Span Masking: Contiguous token masking with geometric distribution
  • nGiE: n-gram Induced Encoding layer (1D convolution, kernel size 3)
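The span-masking objective above can be sketched in a few lines. This is an illustrative reconstruction, not the exact pre-training recipe: the geometric parameter, the span-length cap, and the helper names (sample_span_length, span_mask) are assumptions.

```python
import math
import random

def sample_span_length(p=0.2, max_len=10):
    # Geometric(p) span length via inverse-CDF sampling, capped at max_len.
    u = 1.0 - random.random()  # u in (0, 1], so log(u) is defined
    length = int(math.log(u) / math.log(1.0 - p)) + 1
    return min(length, max_len)

def span_mask(tokens, mask_prob=0.15, mask_token="[MASK]"):
    # Mask contiguous spans at random starts until ~mask_prob of tokens
    # are covered; returns the corrupted sequence and masked positions.
    n = len(tokens)
    budget = max(1, round(n * mask_prob))
    out, masked = list(tokens), set()
    while len(masked) < budget:
        start = random.randrange(n)
        for i in range(start, min(start + sample_span_length(), n)):
            if len(masked) >= budget:
                break
            masked.add(i)
            out[i] = mask_token
    return out, sorted(masked)
```

Masking contiguous spans (rather than independent tokens) forces the model to reconstruct whole monomer stretches from context, which is harder than filling single gaps.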

Model Specifications

Parameter               Value
Parameters              54.8M
Hidden size             768
Layers                  6
Attention heads         12
Vocab size              78
Max token length        512
Pre-training data       ~75k peptides (ChEMBL, CREMP, CycPeptMPDB, Propedia)
Pre-training objective  MLM (span masking, p=0.15)
LR schedule             Warmup-Stable-Decay (WSD)
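The WSD schedule holds the peak learning rate constant between a linear warmup and a final decay phase. A minimal sketch of the shape; all hyperparameter values below (peak LR, phase lengths) are illustrative, not the ones used for pre-training:

```python
def wsd_lr(step, max_lr=3e-4, warmup=1000, stable=8000, decay=1000, min_lr=0.0):
    """Warmup-Stable-Decay: linear warmup to max_lr, constant hold,
    then linear decay down to min_lr."""
    if step < warmup:
        return max_lr * step / warmup
    if step < warmup + stable:
        return max_lr
    if step < warmup + stable + decay:
        frac = (step - warmup - stable) / decay
        return max_lr + frac * (min_lr - max_lr)
    return min_lr
```

Unlike cosine decay, the stable phase means training can be extended (or stopped for an early decay) without re-planning the whole schedule.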

How to Use

from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("Flansma/helm-bert", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Flansma/helm-bert", trust_remote_code=True)

# Cyclosporine A
inputs = tokenizer("PEPTIDE1{[Abu].[Sar].[meL].V.[meL].A.[dA].[meL].[meL].[meV].[Me_Bmt(E)]}$PEPTIDE1,PEPTIDE1,1:R1-11:R2$$$", return_tensors="pt")
outputs = model(**inputs)
embeddings = outputs.last_hidden_state
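last_hidden_state holds one embedding per token; a single sequence-level embedding is commonly obtained by averaging token embeddings under the attention mask. A pure-Python sketch of that pooling (the helper name mean_pool is illustrative; in practice you would apply the same operation to the torch tensors directly):

```python
def mean_pool(hidden, attention_mask):
    # hidden: [seq_len][dim] token embeddings; attention_mask: [seq_len] of 0/1.
    # Averages only over non-padding (mask == 1) positions.
    dim = len(hidden[0])
    total = [0.0] * dim
    count = 0
    for vec, m in zip(hidden, attention_mask):
        if m:
            count += 1
            for j in range(dim):
                total[j] += vec[j]
    return [t / max(count, 1) for t in total]
```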

Training Data

Pre-trained on deduplicated peptide sequences from:

  • ChEMBL: Bioactive molecules database
  • CREMP: Cyclic peptide conformational ensemble database
  • CycPeptMPDB: Cyclic peptide membrane permeability database
  • Propedia: Protein-peptide interaction database

Downstream Performance

Permeability Regression (CycPeptMPDB)

Single-Assay (mixed PAMPA/Caco-2 target):

Split     Pearson  Spearman  RMSE   MAE
Random    0.769    0.878     0.388  0.269
Scaffold  0.643    0.812     0.380  0.284

Multi-Assay (separate PAMPA and Caco-2 heads):

Split     Assay   Pearson  Spearman  RMSE   MAE
Random    PAMPA   0.711    0.844     0.426  0.298
Random    Caco-2  0.772    0.878     0.402  0.305
Scaffold  PAMPA   0.584    0.788     0.393  0.299
Scaffold  Caco-2  0.701    0.846     0.381  0.287

Train/test 9:1, val 10% from train. Scaffold split by Murcko scaffolds.
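A scaffold split assigns whole scaffold groups to one side, so test peptides share no Murcko scaffold with training data. A sketch of the grouping logic, assuming scaffold strings have already been computed (e.g. with RDKit's MurckoScaffold on the peptide structures); the function name and the largest-to-train heuristic are illustrative:

```python
from collections import defaultdict

def scaffold_split(ids, scaffolds, test_frac=0.1):
    # Group sample ids by scaffold string, then assign whole groups:
    # largest groups fill train first, so test holds rarer scaffolds.
    groups = defaultdict(list)
    for i, s in zip(ids, scaffolds):
        groups[s].append(i)
    ordered = sorted(groups.values(), key=len, reverse=True)
    n_train = int(len(ids) * (1 - test_frac))
    train, test = [], []
    for g in ordered:
        (train if len(train) < n_train else test).extend(g)
    return train, test
```

Because groups are kept intact, the realized test fraction only approximates test_frac; the guarantee traded for that is zero scaffold overlap between splits.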

PPI Classification (Propedia v2)

Split   ROC-AUC  PR-AUC  F1     MCC    Balanced Acc
Random  0.972    0.912   0.859  0.824  0.911
aCSM    0.868    0.702   0.613  0.559  0.735

Train/test 8:2, val 10% from train, 1:4 positive:negative ratio.

  • Random: random split
  • aCSM: clustering-based split on aCSM-ALL complex signatures with protein overlap pruning

SST2 Binding Affinity (pChEMBL)

Split     Pearson  Spearman  RMSE   MAE
Random    0.312    0.600     0.742  0.499
Scaffold  0.006    0.236     0.632  0.551

Train/test 8:2, val 10% from train. Scaffold split by Murcko scaffolds.
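For reference, the regression metrics reported in the tables above (Pearson r, RMSE, MAE) can be computed as follows; the helper names are illustrative, and in practice scipy/sklearn equivalents would be used:

```python
import math

def pearson(y, p):
    # Pearson correlation between targets y and predictions p.
    n = len(y)
    my, mp = sum(y) / n, sum(p) / n
    cov = sum((a - my) * (b - mp) for a, b in zip(y, p))
    sy = math.sqrt(sum((a - my) ** 2 for a in y))
    sp = math.sqrt(sum((b - mp) ** 2 for b in p))
    return cov / (sy * sp)

def rmse(y, p):
    # Root-mean-squared error.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, p)) / len(y))

def mae(y, p):
    # Mean absolute error.
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)
```

Pearson measures linear correlation (scale-invariant), while RMSE and MAE are in the units of the target, so the pairs together separate ranking quality from calibration.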

Citation

@article{lee2025helmbert,
  title={HELM-BERT: A Transformer for Medium-sized Peptide Property Prediction},
  author={Seungeon Lee and Takuto Koyama and Itsuki Maeda and Shigeyuki Matsumoto and Yasushi Okuno},
  journal={arXiv preprint arXiv:2512.23175},
  year={2025},
  url={https://arxiv.org/abs/2512.23175}
}

License

MIT License
