TorchSim is a next-generation open-source atomistic simulation engine for the MLIP era. By rewriting the core primitives of atomistic simulation in PyTorch, it accelerates popular machine learning potentials by orders of magnitude.
- Automatic batching and GPU memory management allowing significant simulation speedup
- Support for MACE, Fairchem, SevenNet, ORB, MatterSim, graph-pes, and metatomic MLIP models
- Support for classical Lennard-Jones, Morse, and soft-sphere potentials
- Molecular dynamics integration schemes like NVE, NVT Langevin, and NPT Langevin
- Relaxation of atomic positions and cell with gradient descent and FIRE
- Swap Monte Carlo and hybrid swap Monte Carlo algorithms
- An extensible binary trajectory writing format with support for arbitrary properties
- A simple and intuitive high-level API for new users
- Integration with ASE, Pymatgen, and Phonopy
- and more: differentiable simulation, elastic properties, custom workflows...
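To make the classical potentials and integrators above concrete, here is a minimal, self-contained sketch in plain Python (not the TorchSim API; all function names here are hypothetical, and TorchSim's real implementations operate on batched PyTorch tensors) of a Lennard-Jones pair evolved with one velocity-Verlet (NVE) step:

```python
# Lennard-Jones pair interaction and one NVE velocity-Verlet step.
# Illustrative sketch only, not TorchSim code.

def lj_energy_force(r, epsilon=1.0, sigma=1.0):
    """LJ pair energy U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6) and force -dU/dr."""
    sr6 = (sigma / r) ** 6
    energy = 4 * epsilon * (sr6**2 - sr6)
    force = 24 * epsilon * (2 * sr6**2 - sr6) / r  # -dU/dr; positive = repulsive
    return energy, force

def velocity_verlet_step(r, v, dt, mass=1.0):
    """One NVE integration step for the pair-separation coordinate."""
    _, f = lj_energy_force(r)
    v_half = v + 0.5 * dt * f / mass          # half-kick
    r_new = r + dt * v_half                   # drift
    _, f_new = lj_energy_force(r_new)
    v_new = v_half + 0.5 * dt * f_new / mass  # half-kick with new force
    return r_new, v_new

# At the LJ minimum r = 2**(1/6) * sigma, the energy is -epsilon and the force vanishes.
e_min, f_min = lj_energy_force(2 ** (1 / 6))
print(e_min, f_min)  # approximately -1 and 0
```

The same energy expression written with PyTorch tensors becomes differentiable, which is how forces are obtained by autograd in a PyTorch-based engine.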
## Quick Start
Here is a quick demonstration of many of the core features of TorchSim: native support for GPUs, MLIP models, ASE integration, a simple API, autobatching, and trajectory reporting, all in under 40 lines of code.
### Running batched MD
```python
import torch

import torch_sim as ts

# run natively on gpus
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# easily load the model from mace-mp
from mace.calculators.foundations_models import mace_mp
from torch_sim.models.mace import MaceModel

mace = mace_mp(model="small", return_raw_model=True)
mace_model = MaceModel(model=mace, device=device)

from ase.build import bulk

cu_atoms = bulk("Cu", "fcc", a=3.58, cubic=True).repeat((2, 2, 2))
many_cu_atoms = [cu_atoms] * 50
trajectory_files = [f"Cu_traj_{i}.h5md" for i in range(len(many_cu_atoms))]

# run them all simultaneously with batching
final_state = ts.integrate(
    system=many_cu_atoms,
    model=mace_model,
    n_steps=50,
    timestep=0.002,
    temperature=1000,
    integrator=ts.Integrator.nvt_langevin,
    trajectory_reporter=dict(filenames=trajectory_files, state_frequency=10),
)
final_atoms_list = final_state.to_atoms()

# extract the final energy from each trajectory file
final_energies = []
for filename in trajectory_files:
    with ts.TorchSimTrajectory(filename) as traj:
        final_energies.append(traj.get_array("potential_energy")[-1])

print(final_energies)
```
### Running batched relaxation
To then relax those structures with FIRE is just a few more lines.
```python
# relax all of the high temperature states
relaxed_state = ts.optimize(
    system=final_state,
    model=mace_model,
    optimizer=ts.Optimizer.fire,
    autobatcher=True,
    init_kwargs=dict(cell_filter=ts.CellFilter.frechet),
)
print(relaxed_state.energy)
```
## Speedup
TorchSim achieves up to 100x speedup compared to ASE with popular MLIPs.
This figure compares the time per atom of ASE and TorchSim. Time per atom is defined
as the total simulation time divided by the number of atoms simulated. While ASE can only run a single system of n_atoms
(on the $x$ axis), TorchSim can run as many systems as will fit in memory. On an H100 80 GB card,
the maximum number of atoms that fit in memory was ~8,000 for EGIP,
~10,000 for MACE-MPA-0, ~22,000 for MatterSim V1 1M,
~2,500 for SevenNet, and ~9,000 for PET-MAD.
This metric describes model performance by capturing speed and memory usage simultaneously.
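The metric above can be sketched in a few lines (the numbers below are purely illustrative, not benchmark data, and the helper function is hypothetical):

```python
def time_per_atom(total_time_s, n_atoms, n_systems=1):
    """Wall time divided by the total number of atoms simulated.

    Batching amortizes cost: the same wall time is spread over
    n_systems * n_atoms atoms.
    """
    return total_time_s / (n_systems * n_atoms)

# hypothetical numbers: one system of 1,000 atoms in 2 s,
# vs. 50 batched copies of the same system in 10 s
single = time_per_atom(2.0, 1000)                   # 2e-3 s/atom
batched = time_per_atom(10.0, 1000, n_systems=50)   # 2e-4 s/atom
print(single / batched)  # batching is 10x cheaper per atom in this example
```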
## Installation
### PyPI installation
```bash
pip install torch-sim-atomistic
```
### Installing from source
```bash
git clone https://github.com/TorchSim/torch-sim
cd torch-sim
pip install .
```
## Examples
To understand how TorchSim works, start with the comprehensive tutorials in the documentation.
## Core Modules
TorchSim's package structure is summarized in the API reference documentation and drawn as a treemap below.
## Contributing
If you are interested in contributing, please join our Slack and check out contributing.md.
## License
TorchSim is released under an MIT license.
## Citation
If you use TorchSim in your research, please cite our publication.
```bibtex
@article{cohen2025torchsim,
  title = {TorchSim: An efficient atomistic simulation engine in PyTorch},
  author = {Cohen, Orion and Riebesell, Janosh and Goodall, Rhys and Kolluru, Adeesh and Falletta, Stefano and Krause, Joseph and Colindres, Jorge and Ceder, Gerbrand and Gangan, Abhijeet S},
  journal = {AI for Science},
  volume = {1},
  number = {2},
  pages = {025003},
  year = {2025},
  publisher = {IOP Publishing},
  doi = {10.1088/3050-287X/ae1799}
}
```
## Due Credit
We aim to give due credit for the decades of work that TorchSim builds on. An automated list of references for the package can be obtained by running:

```bash
DUECREDIT_ENABLE=yes uv run --with . --extra docs --extra test python -m duecredit <(printf 'import pytest\nraise SystemExit(pytest.main(["-q"]))\n')
```

This list is incomplete, and we welcome PRs that help improve our citation coverage.
To collect citations for a specific tutorial run, for example autobatching, use:
```bash
DUECREDIT_ENABLE=yes uv run --with . --extra docs --extra test python -m duecredit examples/tutorials/autobatching_tutorial.py
```