This guide follows the online tutorial at https://www.deepfakevfx.com/guides/deepfacelab-2-0-guide/ to produce a deepfake video; the steps below are the ones we took. PLEASE TAKE NOTE: install the appropriate libraries first (see the Troubleshooting GPU Detection section). We ran into this problem ourselves and it caused a lot of issues. At one point the process also timed out, so refer to the Note on GPU Process Time for the changes needed to keep it running.
- OS: Linux Mint
- GPU: NVIDIA RTX 3090 (24GB)
- RAM: 16GB
- Follow the standard Anaconda installation steps.
Check for Compatible Versions:
- Check the latest cuDNN and CUDA Toolkit versions for your GPU device at TensorFlow Tested Build Configurations.
- Check your CUDA version and then search for compatible versions using:
conda search cudnn
conda search cudatoolkit
conda search tensorflow-gpu
Create Conda Environment:
- Create a Conda environment with the compatible versions you identified:
conda create -n deepfacelab -c main python=[version] cudnn=[version] cudatoolkit=[version] tensorflow-gpu=[version]
- Example:
conda create -n deepfacelab -c main python=3.7 cudnn=7.6.5 cudatoolkit=10.2.89 tensorflow-gpu=2.4.1
Activate Environment and Clone Repositories:
- Activate the Conda environment:
conda activate deepfacelab
- Clone the DeepFaceLab repositories:
git clone --depth 1 https://github.com/nagadit/DeepFaceLab_Linux.git
cd DeepFaceLab_Linux
git clone --depth 1 https://github.com/iperov/DeepFaceLab.git
- Install required Python packages:
python -m pip install -r ./DeepFaceLab/requirements-cuda.txt
Set Up cuDNN and Toolkit Paths:
- Add the cuDNN and CUDA Toolkit paths to your bash configuration file (~/.bashrc): open the file and paste in the export lines (examples are given in the Troubleshooting GPU Detection section).
- Verify that TensorFlow can see the GPU:
conda activate deepfacelab
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
Troubleshooting GPU Detection:
If you encounter issues with face detection in Step 4 (faces not being detected, high RAM usage), it may be because TensorFlow is not recognizing your GPU. To fix this, follow these steps:
- Locate your CUDA library path. Make sure CUDA is installed, otherwise the model will not run properly (see the environment setup above).
# The path printed here goes into export line 1 in .bashrc below
cd /
echo $PATH | sed "s/:/\n/g" | grep "cuda/bin" | sed "s/\/bin//g" | head -n 1
# The location found here goes into export line 2 in .bashrc below
cd ~/
find . -iname "libcudnn.so*"
- Add the following lines to your bash file (cd ~, vi .bashrc), adjusting the paths to match your system. For example:
1 export PATH=/lib/cuda/bin:$PATH
2 export LD_LIBRARY_PATH=/home/[user]/anaconda3/lib:$LD_LIBRARY_PATH
These lines ensure that the CUDA binaries are in your PATH and that the necessary libraries are accessible to TensorFlow.
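You can check the effect of these exports without restarting the shell. Below is a minimal POSIX-shell sketch (the helper name `path_contains` and the directories are illustrative, not part of the guide) that tests whether a directory appears as a component of a colon-separated search path such as PATH or LD_LIBRARY_PATH:

```shell
# path_contains: report whether directory $2 appears as a component of the
# colon-separated path string $1 (hypothetical helper, for illustration only).
path_contains() {
    case ":$1:" in
        *":$2:"*) echo yes ;;
        *)        echo no ;;
    esac
}

path_contains "/usr/bin:/lib/cuda/bin:/bin" "/lib/cuda/bin"   # prints: yes
path_contains "/usr/bin:/bin" "/lib/cuda/bin"                 # prints: no
```

After sourcing ~/.bashrc, running something like `path_contains "$PATH" /lib/cuda/bin` confirms the export took effect.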
[Download Pretrained Dataset](Add your compressed link here)
cd DeepFaceLab_Linux/DeepFaceLab
mkdir pretrain_CelebA # Keep folder name the same because the code directly references this folder
cd pretrain_CelebA
# Place the downloaded compressed dataset of your choice here
# If tar file
tar -xvf filename.tar.gz
# If zip file
unzip filename.zip
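The tar/unzip choice above can be automated by dispatching on the file extension. A small sketch (the filenames are placeholders; the helper only prints the command it would run):

```shell
# extract_cmd: print the extraction command matching an archive's extension
# (illustrative sketch; substitute your downloaded dataset's filename).
extract_cmd() {
    case "$1" in
        *.tar.gz|*.tgz) echo "tar -xvf $1" ;;
        *.zip)          echo "unzip $1" ;;
        *)              echo "unknown archive format: $1" ;;
    esac
}

extract_cmd "celeba.tar.gz"   # prints: tar -xvf celeba.tar.gz
extract_cmd "celeba.zip"      # prints: unzip celeba.zip
```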
cd DeepfaceLab_Test/scripts
Step 1: Clear Workspace & Import Data
./1_clear_workspace.sh
Place your source and destination videos in the workspace folder, named:
data_src.mp4
data_dst.mp4
Run the command below:
./2_extract_image_from_data_src.sh
Run the command below:
./3_extract_image_from_data_dst.sh
Run the command below:
./3.1_denoise_data_dst_images.sh
Run the command below:
./4_data_src_extract_faces_S3FD.sh
Note on GPU Process Time:
During the execution of Step 4, you may encounter issues where the GPU process takes a long time and gets killed. To resolve this, modify the following file:
DeepFaceLab_Linux/DeepFaceLab/core/joblib/SubprocessorBase.py
Change lines 103-105 as follows:
103 self.SubprocessorCli_class = SubprocessorCli_class
104 #self.no_response_time_sec = no_response_time_sec # Disabled
105 self.no_response_time_sec = 0 # Added
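If you prefer to apply this change non-interactively, a sed substitution along these lines works. This is only a sketch, demonstrated on a throwaway copy in /tmp; verify the real file's contents match the pattern before editing it in place, and keep the .bak backup sed creates:

```shell
# Demonstrate the substitution on a scratch copy first (illustrative only;
# the real target is DeepFaceLab_Linux/DeepFaceLab/core/joblib/SubprocessorBase.py).
printf '%s\n' 'self.no_response_time_sec = no_response_time_sec' > /tmp/demo_subproc.py
sed -i.bak 's/self.no_response_time_sec = no_response_time_sec/self.no_response_time_sec = 0  # timeout disabled/' /tmp/demo_subproc.py
cat /tmp/demo_subproc.py   # prints: self.no_response_time_sec = 0  # timeout disabled
```

Once satisfied, run the same sed command against the real SubprocessorBase.py path.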
Go to the DeepFaceLab_Linux/workspace/data_src/aligned folder and verify that faces were extracted accurately.
Run the command below:
./4.2_data_src_sort.sh
Go to the DeepFaceLab_Linux/workspace/data_src/aligned folder and delete all images that contain no face.
Run the command below:
./5_data_dst_extract_faces_S3FD.sh
Go to the DeepFaceLab_Linux/workspace/data_dst/aligned folder and verify that faces were extracted accurately.
Run the command below:
./5.2_data_dst_sort.sh
Go to the DeepFaceLab_Linux/workspace/data_dst/aligned folder and delete all images that contain no face.
Run the commands below:
./5_XSeg_generic_wf_data_dst_apply.sh
./5_XSeg_data_dst_mask_edit.sh
./5_XSeg_data_dst_mask_fetch.sh
./5_XSeg_data_dst_mask_remove.sh
./5_XSeg_data_dst_mask_apply.sh
./5_XSeg_data_dst_trained_mask_remove.sh
./5_XSeg_generic_wf_data_src_apply.sh
./5_XSeg_data_src_mask_edit.sh
./5_XSeg_data_src_mask_fetch.sh
./5_XSeg_data_src_mask_remove.sh
./5_XSeg_data_src_mask_apply.sh
./5_XSeg_data_src_trained_mask_remove.sh
./5_XSeg_train.sh
./5_XSeg_generic_wf_data_dst_apply.sh
./5_XSeg_generic_wf_data_src_apply.sh
We used SAEHD (Sparse Auto Encoder HD), the standard model and trainer for most deepfakes.
Run the command below and consult the guide to understand the model training settings:
./6_train_SAEHD.sh
Training should be done in four phases of a minimum of 100k iterations each. After reaching 100k, press Ctrl+C in the terminal or Enter in the GUI preview window to stop.
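The four-phase, 100k-per-phase minimum stated above implies a simple overall iteration budget (shell arithmetic; the assumption that exactly four phases count toward the minimum is ours, based on the guide's wording):

```shell
# Minimum iteration budget implied by the stated training schedule.
phases=4
iters_per_phase=100000
total=$((phases * iters_per_phase))
echo "$total"   # prints: 400000
```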
Step 1 – Pretrain the Model or Import a Pretrained Model
- Enter all the model settings.
- Enable Pretrain mode.
Step 2 – Random Warp
- Enable Random Warp of samples.
- Enable Masked training (WF/Head only).
- Disable True Face Power (DF models only).
- Disable GAN.
- Disable Pretrain mode.
- Optional:
- Flip SRC randomly.
- Flip DST randomly.
- Color Transfer Mode.
- Random HSL.
- Add or remove faceset images and alter masks during warp phase.
- Enable gradient clipping as needed.
Step 3 – Eyes and Mouth Priority (Optional)
- Enable Eyes and mouth priority.
Step 4 – Uniform Yaw (Optional)
- Disable Eyes and mouth priority.
- Enable Uniform yaw distribution of samples.
Step 5 – Learning Rate Dropout (Optional)
- Enable Use learning rate dropout.
- Optional: Disable Uniform yaw distribution of samples.
Step 6 – Regular Training
- Disable Random Warp.
- Disable Uniform Yaw.
- Disable Eyes and mouth priority.
- Disable Use learning rate dropout.
Step 7 – Style and Color (Optional)
- Enable Blur out mask.
- Enable ‘True Face’ power (DF only).
- Enable Face style power.
- Enable Background Style Power.
Step 8 – Eyes and Mouth Priority (Optional)
- Enable Eyes and mouth priority.
Step 9 – Uniform Yaw (Optional)
- Disable Eyes and mouth priority.
- Enable Uniform yaw distribution of samples.
Step 10 – LRD (Optional)
- Enable Use learning rate dropout.
- Disable Eyes and mouth priority.
- Optional: Disable Uniform yaw distribution of samples.
Step 11 – GAN (model settings):
- Disable Eyes and mouth priority.
- Disable Uniform yaw distribution of samples.
- Set GAN power.
- Set GAN patch size.
- Set GAN dimensions.
Run the command below; consult the guide to understand merging:
./7_merge_SAEHD.sh
Run the command below:
./8_merged_to_mp4.sh
Go to DeepFaceLab_Linux/workspace/ and play result.mp4.