Niv Haim


Reconstructing Training Data From Real-World Models Trained with Transfer Learning
Yakir Oz, Gilad Yehudai, Gal Vardi, Itai Antebi, Michal Irani, Niv Haim

SaTML 2026

BibTeX / ArXiv / Code
Reconstructing training images from models trained on encodings of popular image backbones (e.g., DINO, CLIP).

Deconstructing Data Reconstruction: Multiclass, Weight Decay and General Losses
Gon Buzaglo*, Niv Haim*, Gilad Yehudai, Gal Vardi, Yakir Oz, Yaniv Nikankin, Michal Irani

NeurIPS 2023

BibTeX / ArXiv / Code / Video
Training-set reconstruction from multiclass classifiers and from models trained with regression losses, with intriguing observations on how weight decay affects reconstructability.
(Earlier version appeared in ICLR Workshop on Trustworthy ML, 2023)

SinFusion: Training Diffusion Models on a Single Image or Video
Yaniv Nikankin*, Niv Haim*, Michal Irani

ICML 2023

BibTeX / ArXiv / Code / Project Page
Diffusion models can be trained on a single image or video, enabling diverse video generation and extrapolation.

Reconstructing Training Data from Trained Neural Networks
Niv Haim*, Gal Vardi*, Gilad Yehudai*, Ohad Shamir, Michal Irani

NeurIPS 2022

ORAL

BibTeX / ArXiv / Code / Project Page / Video
We show that a large portion of the training data can be reconstructed from the parameters of trained MLP binary classifiers. Our method stems from theoretical results on the implicit bias of neural networks trained with gradient descent.
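
A minimal sketch of the core idea, assuming a frozen scalar-output PyTorch classifier `model` (function name, shapes, and variable names here are mine for illustration, not the paper's code): the implicit-bias results say the trained parameters satisfy a KKT stationarity condition of the max-margin problem, theta = sum_i lam_i * y_i * grad_theta f(theta; x_i) with lam_i >= 0, so one can search for candidate inputs and coefficients that make this residual small.

```python
import torch

def kkt_residual(model, xs, lams, ys):
    """xs: (m, d) candidate inputs; lams: (m,) coefficients; ys: (m,) labels in {-1, +1}."""
    params = list(model.parameters())
    acc = [torch.zeros_like(p) for p in params]
    for x, lam, y in zip(xs, lams, ys):
        out = model(x.unsqueeze(0)).squeeze()  # scalar logit f(theta; x_i)
        # create_graph=True so the residual stays differentiable w.r.t. xs and lams
        grads = torch.autograd.grad(out, params, create_graph=True)
        # accumulate sum_i lam_i * y_i * grad_theta f(x_i); relu keeps lam_i >= 0
        acc = [a + torch.relu(lam) * y * g for a, g in zip(acc, grads)]
    # stationarity residual: distance between theta and the weighted gradient sum
    return sum(((p - a) ** 2).sum() for p, a in zip(params, acc))
```

One would then minimize this residual over `xs` and `lams` with a standard optimizer (e.g., Adam) while keeping the model frozen; good minima tend to place the candidate inputs on actual training samples.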

Diverse Generation from a Single Video Made Possible
Niv Haim*, Ben Feinstein*, Niv Granot, Assaf Shocher, Shai Bagon, Tali Dekel, Michal Irani

ECCV 2022

BibTeX / ArXiv / Code / Video / Project Page
We generate diverse video samples from a single video using patch-based methods. Our results outperform single-video GANs in visual quality and are orders of magnitude faster to generate.
(Extended abstract appeared at AI For Content Creation Workshop @ CVPR, 2022)

From Discrete to Continuous Convolution Layers
Assaf Shocher*, Ben Feinstein*, Niv Haim*, Michal Irani

Preprint, 2020

BibTeX / ArXiv
Learning continuous convolution kernels improves translation equivariance and allows test-time scale augmentations.

Implicit Geometric Regularization for Learning Shapes
Amos Gropp, Lior Yariv, Niv Haim, Matan Atzmon, Yaron Lipman

ICML 2020

BibTeX / ArXiv / Code / Video
Adding an "Eikonal regularization" term to implicit neural representations works surprisingly well for modelling complex surfaces.
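
A minimal sketch of the Eikonal term, assuming a PyTorch network `f` mapping 3D points to scalar values (my own naming, not the paper's code): the regularizer pushes the gradient of f to have unit norm, E_x (||grad_x f(x)|| - 1)^2, so that f behaves like a signed distance function.

```python
import torch

def eikonal_loss(f, x):
    """f: network mapping (n, 3) points to (n, 1) values; x: (n, 3) sample points."""
    x = x.detach().requires_grad_(True)
    y = f(x)
    # gradient of f w.r.t. the input points, kept in the graph for backprop
    grad = torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y),
                               create_graph=True)[0]
    # penalize deviation of the gradient norm from 1
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()
```

In training this would be combined with a data term that drives f to zero on surface samples.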

Controlling Neural Level Sets
Matan Atzmon, Niv Haim, Lior Yariv, Ofer Israelov, Haggai Maron, Yaron Lipman

NeurIPS 2019

BibTeX / ArXiv / Code / Poster
We make input points differentiable w.r.t. model parameters, and use this for shape modelling, improved robustness to adversarial examples, and more.

Surface Networks via General Covers
Niv Haim*, Nimrod Segol*, Heli Ben-Hamu, Haggai Maron, Yaron Lipman

ICCV 2019

BibTeX / ArXiv / Code
We transform 3D shapes into image representations that can be fed to off-the-shelf CNNs for classification, human-part segmentation, and more.

Extreme close approaches in hierarchical triple systems with comparable masses
Niv Haim, Boaz Katz

MNRAS 2018

BibTeX / ArXiv / Code
Ever wondered whether your hierarchical three-body system will eventually collide? Find out by plugging your initial conditions into our analytical prediction formula (which works with high probability).

I play the violin [YouTube]

I sometimes write about my travels [blog]