Please use `conda env create -f environment.yml` or `conda env create -f environment_bare.yml`. `environment_bare.yml` carries no version pins for the dependencies, which allows easier version hunting if a version specified in `environment.yml` is no longer available. Then activate the Python environment with `conda activate prlenv`.
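For convenience, the full environment setup is:

```bash
# Create the environment from the pinned spec
# (or use environment_bare.yml if a pinned version is unavailable):
conda env create -f environment.yml
# conda env create -f environment_bare.yml

# Activate it:
conda activate prlenv
```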
Our VGG data for the LPIPS perceptual loss is required for training. Our LDR and HDR models must be downloaded for evaluation and for the master demo notebook.
- Download and untar `vgg_conv.pth` into the `model_data/` folder, so it sits at `model_data/vgg_conv.pth`.
- Download and untar `ldr_final_prl_model.pth` into the `model_data/` folder, so it sits at `model_data/ldr_final_prl_model.pth`.
- Download and untar `hdr_final_prl_model.pth` into the `model_data/` folder, so it sits at `model_data/hdr_final_prl_model.pth`.
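As a quick sanity check (not part of the repository), you can confirm from the repo root that the files are where the code expects them:

```bash
# Verify the three model files are in place:
for f in model_data/vgg_conv.pth \
         model_data/ldr_final_prl_model.pth \
         model_data/hdr_final_prl_model.pth; do
  [ -f "$f" ] && echo "OK: $f" || echo "MISSING: $f"
done
```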
Datasets are only required for training and evaluation, not for the demo notebook. Our training and eval datasets are sourced directly from MatFusion. Ensure you download all three training sets before training.
The INRIA dataset can be downloaded from https://team.inria.fr/graphdeco/projects/deep-materials/. Unzip it into the `data` directory (`data/DeepMaterialsData/`).
Then run `cd data && python convert_inria.py`. This creates a `data/inria_svbrdfs` folder formatted as needed for our training process.
These SVBRDFs are distributed under a CC BY-NC-ND 2.0 licence.
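A minimal sketch of these INRIA steps; the archive name below (`deep_materials.zip`) is a placeholder for whatever the INRIA page actually serves:

```bash
# Unzip the INRIA download so its contents end up in data/DeepMaterialsData/
# (archive name is hypothetical):
unzip deep_materials.zip -d data/DeepMaterialsData/

# Convert into the layout our training process expects:
cd data && python convert_inria.py   # creates data/inria_svbrdfs/
```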
Download and untar `cc0_svbrdfs.tar.lz4` into a `data/cc0_svbrdfs` folder.
These SVBRDFs are collected from PolyHaven and AmbientCG by Sam Sartor for MatFusion, and are distributed under the CC0 licence.
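The `.tar.lz4` archives in this and the following steps can be extracted with the `lz4` CLI, for example (a sketch assuming `lz4` is installed and the archive was downloaded to the current directory; the same pattern applies to `mixed_svbrdfs.tar.lz4` and `test_svbrdf.tar.lz4`):

```bash
# Decompress with lz4 and untar into the expected folder:
mkdir -p data/cc0_svbrdfs
lz4 -dc cc0_svbrdfs.tar.lz4 | tar -x -C data/cc0_svbrdfs
```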
Download and untar `mixed_svbrdfs.tar.lz4` into a `data/mixed_svbrdfs` folder.
These SVBRDFs are derived from the above INRIA and CC0 datasets by Sam Sartor for MatFusion.
Download and untar `test_svbrdf.tar.lz4` into a `data/test_svbrdfs` folder.

## Demo
See `prl_master_demo.ipynb` for a demonstration of our model (after downloading our LDR model).
See the `prl_main_train.py` script for a template of how we train our model.
See the `prl_main_eval.py` script for a template of how we evaluate our model.
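Typical invocations might look like the following sketch (the scripts may take additional arguments; check the files themselves, and the notebook requires Jupyter in the environment):

```bash
# Run the demo notebook (requires the LDR model in model_data/):
jupyter notebook prl_master_demo.ipynb

# Training and evaluation templates (require the datasets in data/):
python prl_main_train.py
python prl_main_eval.py
```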