zlou/DeepLearningAnimePapers: A list of papers and other resources on deep learning with anime-style images.
Context Encoders: Feature Learning by Inpainting [arXiv] (April 25 2016)
- First use of CNNs in image inpainting.
- Utilizes an adversarial loss.
- Completed regions are blurry.
- Missing content is inferred by searching for the closest encoding of the corrupted image in the latent image manifold.
- No end-to-end training.
- IMHO, generating images is harder than inpainting them, because with inpainting there is always ground truth present around the hole. So converting inpainting into the harder problem of image generation might not be the way to go.
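The latent-search idea above can be sketched as: given a pretrained generator, optimize the latent code so that the generated image matches the corrupted image on the known pixels, then paste the generator's output into the hole. A minimal numpy sketch with a toy linear "generator" standing in for a GAN (all names and dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "generator": maps a latent code z to an image vector.
# In the real method this would be a pretrained GAN generator.
W = rng.normal(size=(64, 8))               # image_dim x latent_dim
def generator(z):
    return W @ z

# Corrupted image: pixel values are trusted only where mask == 1.
z_true = rng.normal(size=8)
image = generator(z_true)
mask = (rng.random(64) > 0.3).astype(float)  # 1 = known pixel

# Gradient descent on z: minimize reconstruction loss on known pixels.
z = rng.normal(size=8)
lr = 0.01
for _ in range(500):
    residual = mask * (generator(z) - image)
    grad = W.T @ residual                  # gradient of 0.5 * ||residual||^2
    z -= lr * grad

# Keep the known pixels; fill the hole with the generator's prediction.
completed = mask * image + (1 - mask) * generator(z)
```

Because the corrupted image is never fed through an encoder, there is nothing to train end to end: inpainting is reduced to a per-image optimization in the generator's latent space.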
- Dilated convolutions
- 2 discriminators: one local discriminator for the completed region and one global discriminator for whole image
- Long training time.
- Poisson blending needed.
- Complex training process: the completion network is trained first, then it is fixed while the discriminators are trained, and finally both are trained jointly.
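The dilated convolutions mentioned above enlarge the receptive field without adding parameters, by spacing the kernel taps apart. A hedged 1-D numpy sketch (the real networks use 2-D dilated convolutions):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """1-D 'valid' convolution whose kernel taps are spaced
    `dilation` samples apart: same parameter count, larger
    receptive field."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field size
    out_len = len(x) - span + 1
    out = np.zeros(out_len)
    for i in range(out_len):
        taps = x[i : i + span : dilation]  # every `dilation`-th sample
        out[i] = np.dot(taps, kernel)
    return out

x = np.arange(10, dtype=float)
kernel = np.array([1.0, 1.0, 1.0])

y1 = dilated_conv1d(x, kernel, dilation=1)  # receptive field 3
y2 = dilated_conv1d(x, kernel, dilation=2)  # receptive field 5
```

With dilation 2, each output already sees a 5-sample window with only 3 weights, which is why stacking dilated layers lets a completion network take distant context into account.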
- 2 stages: coarse prediction and refinement through feature-based texture swapping.
- Framework can be adapted to multi-scale
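The texture-swapping refinement can be sketched as a nearest-neighbor replacement in feature space: each coarse feature inside the hole is swapped for the closest feature taken from the known region. A toy numpy sketch (dimensions and the perturbation used to fake a "coarse prediction" are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Feature map flattened to vectors: rows = spatial locations.
features = rng.normal(size=(20, 6))
hole = np.zeros(20, dtype=bool)
hole[:5] = True                        # first 5 locations are the hole

valid = features[~hole]                # textures available to copy from
# Stand-in for the coarse network's rough prediction inside the hole.
coarse = features[hole] + 0.05 * rng.normal(size=(5, 6))

# Swap each coarse hole feature for its nearest valid feature.
d2 = ((coarse[:, None, :] - valid[None, :, :]) ** 2).sum(axis=-1)
swapped = valid[d2.argmin(axis=1)]
```

Because every swapped feature is copied verbatim from the known region, the refined output inherits real texture statistics instead of the blurry averages a purely regressive network tends to produce.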
- Conditioned on facial attributes.
- Progressive growing of GANs.
- Three new losses: attribute, feature, and boundary.
- Fails to learn low-level skin features.
- Long training time.
- Introduces partial convolutions, which condition each convolution only on valid (unmasked) pixels, renormalize by the number of valid inputs, and update the mask layer by layer.
- No post processing.
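A partial convolution can be sketched in 1-D: at each position, only taps where the mask is 1 contribute, the result is rescaled by the fraction of valid taps, and the output mask marks positions that saw at least one valid input. A hedged numpy sketch:

```python
import numpy as np

def partial_conv1d(x, mask, kernel):
    """1-D partial convolution: each output uses only valid
    (mask == 1) inputs, rescaled so its magnitude does not
    depend on the hole size; the mask is updated so the hole
    shrinks layer by layer."""
    k = len(kernel)
    out_len = len(x) - k + 1
    out = np.zeros(out_len)
    new_mask = np.zeros(out_len)
    for i in range(out_len):
        m = mask[i : i + k]
        n_valid = m.sum()
        if n_valid > 0:
            # Renormalize by k / n_valid to compensate for zeroed taps.
            out[i] = np.dot(x[i : i + k] * m, kernel) * (k / n_valid)
            new_mask[i] = 1.0             # at least one valid input seen
    return out, new_mask

x = np.array([1.0, 2.0, 0.0, 0.0, 5.0, 6.0])  # hole values are arbitrary
mask = np.array([1.0, 1.0, 0.0, 0.0, 1.0, 1.0])
kernel = np.array([1.0, 1.0, 1.0]) / 3.0      # simple averaging kernel

out, new_mask = partial_conv1d(x, mask, kernel)
```

Because the hole pixels are excluded rather than averaged in, the network never conditions on placeholder values, which is what removes the need for post-processing.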
- Utilizes gated convolutions.
- State-of-the-art inpainting for irregular masks.
- No post processing.
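A gated convolution replaces the hard 0/1 mask of partial convolutions with a learned soft gate: a second convolution followed by a sigmoid scales the features at every position. A hedged 1-D numpy sketch with random stand-in weights:

```python
import numpy as np

rng = np.random.default_rng(2)

def conv1d(x, kernel):
    k = len(kernel)
    return np.array([np.dot(x[i : i + k], kernel)
                     for i in range(len(x) - k + 1)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_conv1d(x, feature_kernel, gating_kernel):
    """Gated convolution: a learned per-position gate in (0, 1)
    scales the features, instead of the hard 0/1 mask update
    used by partial convolutions."""
    features = conv1d(x, feature_kernel)
    gate = sigmoid(conv1d(x, gating_kernel))
    return features * gate

# Random weights stand in for trained kernels.
x = rng.normal(size=8)
feature_kernel = rng.normal(size=3)
gating_kernel = rng.normal(size=3)
y = gated_conv1d(x, feature_kernel, gating_kernel)
```

Because the gate is learned per layer and per position rather than rule-based, the network itself decides how much each region should contribute, which is one reason this family of models handles free-form masks without post-processing.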