pytorCH OPtimize (CHOP): a library for continuous and constrained optimization built on PyTorch
...with applications to adversarially attacking and training neural networks.
⚠️ This library is not actively maintained anymore, and I won't be handling new issues in a timely manner. Contact me if you'd like to contribute. ⚠️
Stochastic Algorithms
We define stochastic optimizers in the chop.stochastic module. These follow PyTorch Optimizer conventions, similar to the torch.optim module.
These can be used to
- train structured models;
- compute universal adversarial perturbations over a dataset.
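For illustration, here is a minimal sketch of constrained training in the torch.optim style. The optimizer name (chop.stochastic.PGD), the constraint class (chop.constraints.LinfBall), and the constructor signature shown are assumptions, not the documented API; see the examples directory for working scripts.

```python
# Hypothetical sketch: constrained training with a chop.stochastic optimizer.
# Class names and signatures below are assumptions, not the documented API.
import torch
import chop

model = torch.nn.Linear(10, 2)
criterion = torch.nn.CrossEntropyLoss()
data = torch.randn(32, 10)
target = torch.randint(0, 2, (32,))

# One L-infinity ball constraint per parameter (assumed constraint API).
constraints = [chop.constraints.LinfBall(1.) for _ in model.parameters()]
optimizer = chop.stochastic.PGD(model.parameters(), constraints, lr=0.1)

for _ in range(10):
    optimizer.zero_grad()
    loss = criterion(model(data), target)
    loss.backward()
    optimizer.step()  # gradient step, then projection onto each constraint
```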
Full Gradient Algorithms
We also define full-gradient algorithms which operate on a batch of optimization problems in the chop.optim module. These are used for adversarial attacks, using the chop.Adversary wrapper.
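As a rough sketch of this pattern: the solver name (chop.optim.minimize_pgd), the perturb call signature, and the constraint class below are assumptions; the attack scripts in the examples directory show the actual usage.

```python
# Hypothetical sketch: batched adversarial attack through the chop.Adversary
# wrapper. Names and signatures are assumptions, not the documented API.
import torch
import chop

model = torch.nn.Linear(784, 10)
criterion = torch.nn.CrossEntropyLoss(reduction='none')  # one loss per sample
data = torch.rand(8, 784)
target = torch.randint(0, 10, (8,))

# Perturbations constrained to an L-infinity ball of radius 8/255 (assumed API).
constraint = chop.constraints.LinfBall(8. / 255)

adversary = chop.Adversary(chop.optim.minimize_pgd)
_, delta = adversary.perturb(data, target, model, criterion,
                             prox=constraint.prox, max_iter=20)
adversarial_examples = data + delta
```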
Installing
Run the following:
pip install chop-pytorch
or
pip install git+https://github.com/openopt/chop.git
for the latest development version.
Examples
See the examples directory and our webpage.
Tests
Run the tests with pytest tests.
Citing
If this software is useful to your research, please consider citing it as
@misc{chop,
  author = {Geoffrey Negiar and Fabian Pedregosa},
  title = {CHOP: continuous optimization built on PyTorch},
  year = 2020,
  url = {https://github.com/openopt/chop}
}
Affiliations
Geoffrey Négiar was in the Mahoney lab and the El Ghaoui lab at UC Berkeley at the time this package was developed.
Fabian Pedregosa is at Google Research.