
Learning Dynamic Generator Model by Alternating Back-Propagation Through Time

This repository contains a TensorFlow implementation of the paper "Learning Dynamic Generator Model by Alternating Back-Propagation Through Time".

Project Page: http://www.stat.ucla.edu/~jxie/DynamicGenerator/DynamicGenerator.html
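In the model, a latent state vector evolves over time through a transition network driven by i.i.d. Gaussian innovation vectors, and an emission network maps each state to a video frame. Learning alternates two steps, both back-propagating through time: an inference step that samples the innovation vectors by Langevin dynamics, and a learning step that updates the network parameters by gradient descent given the inferred innovations. Below is a minimal, self-contained sketch of this scheme; it is not the repository's code (which uses TF1-style graphs), it is written in TensorFlow 2 style for brevity, and all layer sizes, step sizes, and iteration counts are illustrative.

import tensorflow as tf

state_dim, noise_dim, frame_dim, T = 20, 10, 64 * 64 * 3, 30

# Transition s_t = F(s_{t-1}, xi_t) and emission I_t = G(s_t), both small MLPs here.
F = tf.keras.Sequential([tf.keras.layers.Dense(64, activation="tanh"),
                         tf.keras.layers.Dense(state_dim, activation="tanh")])
G = tf.keras.Sequential([tf.keras.layers.Dense(256, activation="relu"),
                         tf.keras.layers.Dense(frame_dim)])

def unroll(xi, s0):
    # Run the dynamics forward, emitting one frame per time step.
    frames, s = [], s0
    for t in range(xi.shape[0]):
        s = F(tf.concat([s, xi[t:t + 1]], axis=-1))
        frames.append(G(s))
    return tf.concat(frames, axis=0)                 # (T, frame_dim)

video = tf.random.normal([T, frame_dim])             # stand-in for an observed video
xi = tf.Variable(tf.random.normal([T, noise_dim]))   # latent innovation vectors
s0 = tf.zeros([1, state_dim])
opt = tf.keras.optimizers.Adam(1e-4)
sigma, delta = 0.1, 0.01                             # emission noise level, Langevin step size

for iteration in range(100):
    # Inference step: Langevin dynamics on xi, targeting the posterior
    # log p(xi | video) = -||video - unroll(xi)||^2 / (2 sigma^2) - ||xi||^2 / 2 + const.
    for _ in range(15):
        with tf.GradientTape() as tape:
            energy = (tf.reduce_sum((video - unroll(xi, s0)) ** 2) / (2 * sigma ** 2)
                      + tf.reduce_sum(xi ** 2) / 2)
        grad = tape.gradient(energy, xi)
        xi.assign_sub(0.5 * delta ** 2 * grad)
        xi.assign_add(delta * tf.random.normal(xi.shape))
    # Learning step: gradient update of the network parameters given the inferred xi.
    with tf.GradientTape() as tape:
        loss = tf.reduce_sum((video - unroll(xi, s0)) ** 2)
    params = F.trainable_variables + G.trainable_variables
    opt.apply_gradients(zip(tape.gradient(loss, params), params))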

Reference

@article{DG,
    author = {Xie, Jianwen and Gao, Ruiqi and Zheng, Zilong and Zhu, Song-Chun and Wu, Ying Nian},
    title = {Learning Dynamic Generator Model by Alternating Back-Propagation Through Time},
    journal = {The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI)},
    year = {2019}
}

Requirements

Python with TensorFlow.

Usage

(1) For dynamic texture synthesis

(i) Training

First, place your training data in a folder, for example ./trainingVideo/dynamicTexture/fire.
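As a purely illustrative example, the snippet below splits a source clip into per-frame images inside that folder. Whether the data loader expects frame images or a raw video file should be checked against the repository's loading code; the source file name here is hypothetical.

import os
import imageio

src = "fire.avi"                                  # hypothetical source clip
dst = "./trainingVideo/dynamicTexture/fire"
os.makedirs(dst, exist_ok=True)
for i, frame in enumerate(imageio.get_reader(src)):
    imageio.imwrite(os.path.join(dst, "frame_%03d.png" % i), frame)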

To train a model on the fire dynamic texture:

$ python main_dyn_G.py --category fire --isTraining True

The training results will be saved in ./output_synthesis/fire/final_result.

The learned models will be saved in ./output_synthesis/fire/model.

(ii) Testing for dynamic texture synthesis

$ python main_dyn_G.py --category fire --isTraining False --num_sections_in_test 4 --num_batches_in_test 2 --ckpt_name model.ckpt-2960

'num_sections_in_test' specifies the number of sections in each synthesized video; more sections give a longer video.

'num_batches_in_test' specifies the number of independently synthesized videos.

Testing results will be saved in ./output_synthesis/fire/final_result_testing.
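Because the learned transition model is stationary, a synthesized video can be longer than the training video: at test time fresh innovation vectors are sampled and the networks are simply unrolled for more steps. A plausible reading of the flags above is that each section is one such unrolled chunk, with the latent state carried across section boundaries, and each batch is an independently sampled video. A hedged sketch, reusing the hypothetical F and G networks from the sketch above:

import tensorflow as tf

def synthesize(F, G, s0, frames_per_section, num_sections, noise_dim):
    # Unroll the learned dynamics section by section, sampling fresh noise.
    s, sections = s0, []
    for _ in range(num_sections):                  # cf. --num_sections_in_test
        frames = []
        for _ in range(frames_per_section):
            xi_t = tf.random.normal([1, noise_dim])
            s = F(tf.concat([s, xi_t], axis=-1))   # state carried across sections
            frames.append(G(s))
        sections.append(tf.concat(frames, axis=0))
    return tf.concat(sections, axis=0)

Repeating this with independently sampled noise would give the multiple videos requested by --num_batches_in_test.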

(iii) Results

For each category, the first one is the observed video, and the others are synthesized videos generated by the learned model. The observed video is 60 frames long, while each synthesized video is 120 frames long.

(2) For action pattern synthesis

(i) Training

First, place your training data in a folder, for example ./trainingVideo/action_dataset/animal30_running.

To train a model on the animal30_running dataset:

$ python main_dyn_G_motion.py --category animal30_running --isTraining True

The training results will be saved in ./output_synthesis/animal30_running/final_result.

The learned models will be saved in ./output_synthesis/animal30_running/model.

(ii) Testing

$ python main_dyn_G_motion.py --category animal30_running --isTraining False --num_sections_in_test 2 --num_batches_in_test 2 --ckpt_name model.ckpt-6990

Testing results will be saved in ./output_synthesis/animal30_running/final_result_testing.

(iii) Results

Synthesizing animal actions (animal action dataset). The first row shows the observed videos, while the second and third rows display two corresponding synthesized videos for each observed video. In the human action synthesis experiment, the observed videos have fewer frames than the synthesized ones.

(3) For recovery

(i) Training and recovering

Using an external mask file (see the mask sketch at the end of this subsection)

Type 1: missing frames

$ python main_dyn_G_recovery.py --category ocean --isTraining True  --training_mode incomplete --mask_type external --mask_file missing_frame_type.mat

Type 2: single region masks

$ python main_dyn_G_recovery.py --category ocean --isTraining True  --training_mode incomplete --mask_type external --mask_file region_type.mat

Using a mask generated by the code

Type 1: missing frames

$ python main_dyn_G_recovery.py --category ocean --isTraining True  --training_mode incomplete --mask_type missingFrames

Type 2: single region masks

$ python main_dyn_G_recovery.py --category ocean --isTraining True  --training_mode incomplete --mask_type randomRegion

The results will be saved in ./output_recovery/ocean/final_result.
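For orientation, the sketch below builds the two mask types as binary arrays (1 = observed pixel, 0 = occluded) and saves them as .mat files of the kind passed via --mask_file. The (T, H, W) layout and the variable name 'mask' inside the .mat file are assumptions; match them to what the recovery code actually loads.

import numpy as np
from scipy.io import savemat

T, H, W = 60, 128, 128                            # illustrative video shape
rng = np.random.default_rng(0)

# Type 1: missing frames -- zero out entire frames chosen at random.
mask = np.ones((T, H, W), dtype=np.float32)
mask[rng.choice(T, size=T // 2, replace=False)] = 0.0
savemat("missing_frame_type.mat", {"mask": mask})

# Type 2: single region -- zero out one fixed spatial box in every frame.
mask = np.ones((T, H, W), dtype=np.float32)
mask[:, 32:96, 32:96] = 0.0
savemat("region_type.mat", {"mask": mask})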

(ii) Results


In each example, the first video is the occluded training video and the second is the recovered result.

(4) For background inpainting

(i) Inpainting

$ python main_dyn_G_background_inpainting.py --category boats --isTraining True  --training_mode incomplete --mask_type external --mask_file mask128.mat

The results will be saved in ./output_background_inpainting/boats/final_result.
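Conceptually, learning from an occluded video plausibly comes down to scoring the reconstruction only on the observed pixels and letting the model's output fill in the rest; inpainting then keeps the observed pixels and takes the generated ones elsewhere. A minimal sketch of that idea, not the repository's implementation:

import tensorflow as tf

def masked_loss(video, recon, mask):
    # Penalize reconstruction error only where pixels were actually observed.
    return tf.reduce_sum(mask * (video - recon) ** 2)

def inpaint(video, recon, mask):
    # Keep observed pixels; take the model's output where the mask is zero.
    return mask * video + (1.0 - mask) * recon

Here recon would come from unrolling the generator as in the sketches above.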

(ii) Results


In each example, the first video is the original, and the second is the result after our algorithm removes the target object. (Left) Removing a walking person in front of a fountain. (Right) Removing a moving boat on the lake.

Q & A

For any questions, please contact Jianwen Xie (jianwen@ucla.edu), Ruiqi Gao (ruiqigao@ucla.edu), or Zilong Zheng (zilongzheng0318@ucla.edu).
