MIST: Molecular Insight SMILES Transformers
MIST is a family of molecular foundation models for property prediction. The models were pre-trained on SMILES strings from the Enamine REAL Space dataset using the Masked Language Modeling (MLM) objective, then fine-tuned for downstream prediction tasks. Further information is available in our pre-print on arXiv.
Model Details
Model Description
This fine-tuned MIST variant consists of the MIST-28M encoder fine-tuned on the Lipophilicity dataset from the MoleculeNet benchmark. Fine-tuned MIST models consist of the pretrained MIST model (the encoder) followed by a task network. The task network is a two-layer MLP with Gaussian Error Linear Unit (GELU) activations and dropout. The final hidden state vectors for all tokens in the sequence are pooled to produce a single embedding vector; consistent with prior work, pooling is done by taking the hidden state of the first token.
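As an illustration of this design, a minimal task-head sketch in PyTorch is shown below. The hidden size matches the encoder (512), but the dropout rate, layer names, and class name are assumptions for illustration, not the released implementation.
import torch
import torch.nn as nn

class TaskHead(nn.Module):
    """Illustrative two-layer MLP head with GELU and dropout (not the released code)."""
    def __init__(self, hidden_size: int = 512, dropout: float = 0.1):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_size, 1),  # single regression output (e.g. logD)
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Pool by taking the hidden state of the first token, as described above
        pooled = hidden_states[:, 0, :]      # (batch, hidden_size)
        return self.mlp(pooled).squeeze(-1)  # (batch,)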
- Developed by: Electrochemical Energy Group, University of Michigan, Ann Arbor.
- Model type: Self-supervised pre-trained MIST encoder with supervised finetuning.
- License: Apache v2.0
- Finetuned from model: mist-28M-ti624ev1
Model Sources
- Repository: Full MIST Code
- Paper: arXiv Preprint
- Demo: Finetuning and Inference Demo
Getting Started
Setting Up Your Environment
Create a virtual environment and install dependencies:
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
Note: SMIRK tokenizers require Rust to be installed. See the Rust installation guide for details.
Property Prediction
from transformers import AutoModel
from smirk import SmirkTokenizerFast  # SMIRK tokenizer used by MIST; not called directly in this snippet
# Load the model
model = AutoModel.from_pretrained(
"path/to/model",
trust_remote_code=True
)
# Make predictions for lipophilicity
smiles_batch = [
"CCO", # Ethanol
"CC(=O)O", # Acetic acid
"C1=CC=CC=C1" # Benzene
]
# Returns predictions for octanol/water distribution coefficient (logD)
results = model.predict(smiles_batch)
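Assuming predict returns one value per input SMILES (the exact return type is defined by the model's custom code), the predictions can be paired back to the inputs:
for smiles, logd in zip(smiles_batch, results):
    # Each prediction is the model's estimated logD at pH 7.4
    print(f"{smiles}: predicted logD = {float(logd):.2f}")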
Use and Restrictions
Model weights are provided as-is for research purposes only, without guarantees of correctness, fitness for purpose, or warranties of any kind.
- Research use only
- No redistribution without permission
- No commercial use without licensing agreement
Training Details
Training Data
Pretraining We use the Enamine REAL Space dataset to pretrain MIST models. At the time of writing, Enamine REAL Space is the largest database of commercially available compounds. The dataset was constructed using forward synthetic analysis: experimentally validated building blocks were converted into synthons annotated with reactivity features. Enamine REAL Space was selected as the pretraining dataset because it was the largest database of molecular SMILES at the time of training, it is easily accessible for academic use, and molecules relevant to downstream tasks (such as drug candidates, electrolytes, and fragrances) lie in synthetically accessible regions of chemical space.
Finetuning The Lipophilicity dataset from MoleculeNet was used. It provides measured octanol–water distribution coefficients (log D) for 4,200 small molecules.
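For reference, the raw MoleculeNet Lipophilicity CSV can be read with pandas roughly as below. The column names ("smiles" for the molecule, "exp" for the measured logD) follow the MoleculeNet distribution, and the file path is a placeholder for a local copy.
import pandas as pd

# Placeholder path to a local copy of the MoleculeNet Lipophilicity CSV
df = pd.read_csv("Lipophilicity.csv")
smiles_list = df["smiles"].tolist()   # SMILES strings
labels = df["exp"].tolist()           # measured logD at pH 7.4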
Training Procedure
Inputs
- Inputs: The inputs to MIST models are SMILES strings for molecules. Unless specified otherwise, models were pretrained and fine-tuned on kekulized SMILES strings (see the RDKit example after this list).
- Outputs: Regression prediction for lipophilicity (logD at pH 7.4)
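As an example of the kekulization step, RDKit can convert an aromatic SMILES to its kekulized form. RDKit is shown for illustration only and is not necessarily the exact preprocessing used in the released pipeline.
from rdkit import Chem

mol = Chem.MolFromSmiles("c1ccccc1")                  # benzene, aromatic SMILES
Chem.Kekulize(mol, clearAromaticFlags=True)           # assign explicit single/double bonds
kekulized = Chem.MolToSmiles(mol, kekuleSmiles=True)  # "C1=CC=CC=C1"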
Evaluation
Testing
Testing Data
The dataset was split 80/10/10 (train/validation/test) using a random split.
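A minimal sketch of such a split using scikit-learn; the seed and tooling below are illustrative, not the settings behind the reported results.
from sklearn.model_selection import train_test_split

indices = list(range(4200))  # one index per molecule in the Lipophilicity dataset
train_idx, rest_idx = train_test_split(indices, test_size=0.2, random_state=0)   # 80% train
valid_idx, test_idx = train_test_split(rest_idx, test_size=0.5, random_state=0)  # 10% / 10%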
Metrics
Root Mean Squared Error (RMSE) in logD units
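For reference, RMSE in logD units can be computed as follows (a generic sketch, not the project's evaluation script):
import torch

def rmse(predictions: torch.Tensor, targets: torch.Tensor) -> float:
    # Root mean squared error, in the same units as the targets (logD)
    return torch.sqrt(torch.mean((predictions - targets) ** 2)).item()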
Technical Specifications
Model Architecture and Objective
- Encoder: RoBERTa-PreLayerNorm encoder with 8 layers, a hidden size of 512, an intermediate size of 2048, 8 attention heads, and a maximum sequence length of 2048 (see the configuration sketch after this list).
- Task Network: Two-layer MLP (multi-layer perceptron)
- Objective:
  - Pretraining: MLM (Masked Language Modeling)
  - Fine-tuning: Regression
- Loss:
  - Pretraining: Cross-Entropy Loss
  - Fine-tuning: Mean Squared Error (MSE) Loss
- Optimizer:
  - Pretraining: deepspeed.ops.lamb.FusedLAMB
  - Fine-tuning: torch.optim.AdamW
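For illustration, an encoder with the dimensions listed above could be instantiated through the Hugging Face transformers RoBERTa-PreLayerNorm classes roughly as follows. The vocabulary size (set by the SMIRK tokenizer) is omitted, and released checkpoints should be loaded with from_pretrained rather than rebuilt from a config.
from transformers import RobertaPreLayerNormConfig, RobertaPreLayerNormForMaskedLM

config = RobertaPreLayerNormConfig(
    num_hidden_layers=8,
    hidden_size=512,
    intermediate_size=2048,
    num_attention_heads=8,
    max_position_embeddings=2048,
)
# MLM head used during pretraining; fine-tuned checkpoints replace it with the task network
model = RobertaPreLayerNormForMaskedLM(config)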
Compute Infrastructure
Hardware
This model was pre-trained on 2 NVIDIA A100-SXM4-80GB GPUs in 12 hours and 15 minutes. It was fine-tuned on 1 NVIDIA A100 GPU.
Software
This model was trained with PyTorch Lightning using the DeepSpeed strategy for distributed data parallelism. Models are exported in the Safetensors format.
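A hedged sketch of how such a run is typically configured in PyTorch Lightning; the strategy alias and device count mirror the hardware described above, while everything else is an assumption rather than the project's actual training script.
import pytorch_lightning as pl  # or `lightning.pytorch` for Lightning >= 2.0

trainer = pl.Trainer(
    accelerator="gpu",
    devices=2,             # pretraining used 2x NVIDIA A100-SXM4-80GB
    strategy="deepspeed",  # DeepSpeed strategy for distributed data parallelism
    max_epochs=1,          # illustrative only
)
# trainer.fit(lightning_module, datamodule=datamodule)  # hypothetical module names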
Citation
If you use this model in your research, please cite:
@online{MIST,
title = {Foundation Models for Discovery and Exploration in Chemical Space},
author = {Wadell, Alexius and Bhutani, Anoushka and Azumah, Victor and Ellis-Mohr, Austin R. and Kelly, Celia and Zhao, Hancheng and Nayak, Anuj K. and Hegazy, Kareem and Brace, Alexander and Lin, Hongyi and Emani, Murali and Vishwanath, Venkatram and Gering, Kevin and Alkan, Melisa and Gibbs, Tom and Wells, Jack and Varshney, Lav R. and Ramsundar, Bharath and Duraisamy, Karthik and Mahoney, Michael W. and Ramanathan, Arvind and Viswanathan, Venkatasubramanian},
date = {2025-10-20},
eprint = {2510.18900},
eprinttype = {arXiv},
eprintclass = {physics},
doi = {10.48550/arXiv.2510.18900},
url = {http://arxiv.org/abs/2510.18900},
}
Model Card Authors
Anoushka Bhutani, Alexius Wadell
Model Card Contact
For questions, issues, or licensing inquiries, please contact Venkat Viswanathan (venkvis@umich.edu).