MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing (SIGGRAPH 2020)

Installation

Clone this repo.

git clone https://github.com/tzt101/MichiGAN.git
cd MichiGAN/

This code requires PyTorch 1.0 and Python 3+. Please install dependencies with

pip install -r requirements.txt

If necessary, download the Synchronized-BatchNorm-PyTorch repo.

cd models/networks/
git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
cd ../../

Dataset Preparation

The FFHQ dataset can be downloaded from Baidu Netdisk with the extraction code ichc. Specify the dataset root through --data_dir.
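The exact folder layout expected under --data_dir is defined by the loaders in data/; as a rough pre-flight check before training or inference, a sketch like the following can verify the root exists and list what is missing. The subfolder names here are assumptions for illustration, not the repository's actual layout:

```python
from pathlib import Path

# Hypothetical subfolders -- check the loaders in data/ for the real names.
EXPECTED_SUBDIRS = ["images", "labels", "orients"]

def check_data_dir(data_dir: str) -> list:
    """Return the list of expected subfolders missing under data_dir."""
    root = Path(data_dir)
    if not root.is_dir():
        raise FileNotFoundError(f"--data_dir does not exist: {data_dir}")
    return [name for name in EXPECTED_SUBDIRS if not (root / name).is_dir()]
```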

Generating Images Using Pretrained Model

Once the dataset is ready, the result images can be generated using pretrained models.

  1. Download the pretrained models from the Google Drive Folder and save them in ./checkpoints/MichiGAN/.

  2. Generate a single image using the pretrained model.

    python inference.py --name MichiGAN --gpu_ids 0 --inference_ref_name 67172 --inference_tag_name 67172 --inference_orient_name 67172 --netG spadeb --which_epoch 50 --use_encoder --noise_background --expand_mask_be --expand_th 5 --use_ig --load_size 512 --crop_size 512 --add_feat_zeros --data_dir [path_to_dataset]
  3. The output images are stored in ./inference_samples/ by default.
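To run inference over many samples rather than one, the flags above can be assembled programmatically. This sketch builds the inference.py invocation for a given sample ID using only the options shown in step 2 (the dataset path is a placeholder):

```python
import subprocess

def build_inference_cmd(sample_id: str, data_dir: str) -> list:
    """Build the inference.py command for one sample, mirroring the flags above."""
    return [
        "python", "inference.py",
        "--name", "MichiGAN",
        "--gpu_ids", "0",
        "--inference_ref_name", sample_id,
        "--inference_tag_name", sample_id,
        "--inference_orient_name", sample_id,
        "--netG", "spadeb",
        "--which_epoch", "50",
        "--use_encoder", "--noise_background",
        "--expand_mask_be", "--expand_th", "5",
        "--use_ig",
        "--load_size", "512", "--crop_size", "512",
        "--add_feat_zeros",
        "--data_dir", data_dir,
    ]

# Example (run from the repo root):
# for sid in ["67172"]:
#     subprocess.run(build_inference_cmd(sid, "/path/to/dataset"), check=True)
```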

Training New Models

New models can be trained with the following command.

python train.py --name [name_experiment] --batchSize 8 --no_confidence_loss --gpu_ids 0,1,2,3,4,5,6,7 --no_style_loss --no_rgb_loss --no_content_loss --use_encoder --wide_edge 2 --no_background_loss --noise_background --random_expand_mask --use_ig --load_size 568 --crop_size 512 --data_dir [path_to_dataset] --checkpoints_dir ./checkpoints

[name_experiment] is the directory name under which the checkpoint files are saved. If you want to train the model with the orientation inpainting model (with the option --use_ig), please download the pretrained inpainting model from the Google Drive Folder and save it in ./checkpoints/[name_experiment]/ first.
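Since training with --use_ig silently depends on the pretrained inpainting weights being present, a small pre-flight check can fail fast before launching a multi-GPU run. This is a sketch; the exact weight filenames depend on the Google Drive release, so it only checks that the experiment folder is non-empty:

```python
import os

def check_inpainting_ckpt(checkpoints_dir: str, name: str) -> str:
    """Ensure ./checkpoints/[name_experiment]/ holds the pretrained
    inpainting model before training with --use_ig."""
    exp_dir = os.path.join(checkpoints_dir, name)
    os.makedirs(exp_dir, exist_ok=True)
    if not os.listdir(exp_dir):
        raise RuntimeError(
            f"{exp_dir} is empty: download the pretrained inpainting "
            "model into it before training with --use_ig"
        )
    return exp_dir
```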

Code Structure

  • train.py, inference.py: the entry points for training and inference.
  • trainers/pix2pix_trainer.py: harnesses and reports the progress of training.
  • models/pix2pix_model.py: creates the networks and computes the losses.
  • models/networks/: defines the architectures of all models.
  • options/: creates option lists using the argparse package. Additional model-specific options are added dynamically in other files as well.
  • data/: defines the classes for loading data.
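The dynamic-option mechanism mentioned above follows the two-pass pattern common in SPADE-derived codebases, from which this code borrows: a first parse discovers which model was requested, then that model extends the parser with its own flags. A minimal sketch (class and flag names here are illustrative, not the repository's actual API):

```python
import argparse

def pix2pix_options(parser):
    # Illustrative model-specific flags; the real ones live under models/.
    parser.add_argument("--netG", default="spadeb")
    parser.add_argument("--use_ig", action="store_true")
    return parser

class BaseOptions:
    """Gather core flags, then let the chosen model extend the parser."""

    def gather_options(self, model_registry, argv):
        parser = argparse.ArgumentParser()
        parser.add_argument("--name", required=True)
        parser.add_argument("--model", default="pix2pix")
        # First pass: find out which model was requested,
        # ignoring flags the base parser does not know yet.
        opt, _ = parser.parse_known_args(argv)
        # Second pass: the selected model contributes its own flags.
        parser = model_registry[opt.model](parser)
        return parser.parse_args(argv)
```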

Acknowledgments

This code borrows heavily from SPADE. We thank Jiayuan Mao for his Synchronized Batch Normalization code.
