Commit db0b087: DSDG/README
Authored by huyibo6 (1 parent: 44eda6b)

1 file changed: addition_module/DSDG/README.md (+30, -23 lines)
# DSDG

Official PyTorch code of the paper [Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth Uncertainty Learning](https://arxiv.org/pdf/2112.00568.pdf), published in IEEE Transactions on Circuits and Systems for Video Technology.

Face anti-spoofing (FAS) plays a vital role in protecting face recognition systems from presentation attacks. Existing face anti-spoofing datasets lack diversity due to their insufficient number of identities and insignificant intra-class variance, which limits the generalization ability of FAS models. In this paper, we propose the Dual Spoof Disentanglement Generation (DSDG) framework to tackle this challenge by "anti-spoofing via generation". Relying on the interpretable factorized latent disentanglement in a Variational Autoencoder (VAE), DSDG learns a joint distribution of the identity representation and the spoofing-pattern representation in the latent space. Large-scale paired live and spoofing images can then be generated from random noise to boost the diversity of the training set. However, some generated face images are partially distorted due to an inherent defect of VAEs. Precise depth values are hard to predict for such noisy samples, which may obstruct the widely used depth-supervised optimization. To tackle this issue, we further introduce a lightweight Depth Uncertainty Module (DUM), which alleviates the adverse effect of noisy samples through depth uncertainty learning. DUM is developed without extra dependencies, and thus can be flexibly integrated with any depth-supervised network for face anti-spoofing. We evaluate the effectiveness of the proposed method on five popular benchmarks and achieve state-of-the-art results under both intra-test and inter-test settings.

## Requirements
Our experiments are conducted under the following environment:

- Python == 3.8
- PyTorch == 1.6.0
- torchvision == 0.7.0

## Training

Before training, we need to extract frame images from some of the video datasets. Then, we use [MTCNN](https://github.com/ipazc/mtcnn) for face detection and [PRNet](https://github.com/YadiraF/PRNet) for face depth map prediction. We give an example on the OULU-NPU dataset:

* Configure the paths in [./data/extract_frame.py](https://github.com/JDAI-CV/FaceX-Zoo/blob/main/addition_module/DSDG/data/extract_frame.py) to extract frames from the videos.
* Configure the paths in [./data/bbox.py](https://github.com/JDAI-CV/FaceX-Zoo/blob/main/addition_module/DSDG/data/bbox.py) to get the location of the face in each frame with MTCNN.
* Utilize PRNet to get the depth map of the face in each frame.
* Save the processed data in `./oulu_images/`.
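The exact output layout under `./oulu_images/` depends on the paths you configure; as a minimal sketch of the bookkeeping involved (the `<video_id>/<frame>.jpg` layout and the `_depth` suffix here are illustrative assumptions, not the layout hard-coded in `extract_frame.py`), a helper could map one raw OULU-NPU video to per-frame image and depth-map paths:

```python
from pathlib import Path

def frame_output_paths(video_path, out_root="oulu_images", frame_idx=0):
    """Map a raw video file to hypothetical per-frame image/depth paths.

    Assumed layout: <out_root>/<video_id>/<idx>.jpg and <idx>_depth.jpg.
    This is an illustration, not the repo's actual naming scheme.
    """
    video_id = Path(video_path).stem              # e.g. "1_1_01_1"
    base = Path(out_root) / video_id
    return (base / f"{frame_idx:04d}.jpg",
            base / f"{frame_idx:04d}_depth.jpg")

img, depth = frame_output_paths("raw/Train_files/1_1_01_1.avi", frame_idx=7)
```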
#### DSDG

* Download the LightCNN-29 model from this [link](https://drive.google.com/file/d/1Jn6aXtQ84WY-7J3Tpr2_j6sX0ch9yucS/view) and put it in `./ip_checkpoint`.
* Run [./data/make_train_list.py](https://github.com/JDAI-CV/FaceX-Zoo/blob/main/addition_module/DSDG/data/make_train_list.py) to generate the training list.
* Run [./train_generator.sh](https://github.com/JDAI-CV/FaceX-Zoo/blob/main/addition_module/DSDG/train_generator.sh) to train the generator.
* Run [./generated.py](https://github.com/JDAI-CV/FaceX-Zoo/blob/main/addition_module/DSDG/generated.py) to generate paired face anti-spoofing data.
* Save the processed data in `./fake_images/` and utilize PRNet to get the depth maps.
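Conceptually, a live/spoof pair is generated by sharing one identity latent between two decodes while only the spoof-pattern latent differs. The sketch below illustrates just this latent-pairing idea in plain Python; the latent sizes, the zero "no-pattern" code, and the absence of a decoder are all placeholders, not the repo's model:

```python
import random

ID_DIM, SPOOF_DIM = 4, 2  # toy latent sizes, not the paper's dimensions

def sample_latent(dim):
    # Standard-normal noise, as in VAE prior sampling.
    return [random.gauss(0.0, 1.0) for _ in range(dim)]

def make_pair():
    """Share one identity code across a live latent and a spoof latent."""
    z_id = sample_latent(ID_DIM)               # identity representation
    z_live = z_id + [0.0] * SPOOF_DIM          # live: neutral pattern code
    z_spoof = z_id + sample_latent(SPOOF_DIM)  # spoof: sampled pattern code
    return z_live, z_spoof

z_live, z_spoof = make_pair()
# Both latents agree on the identity part and differ only in the
# spoof-pattern part, so a decoder would emit the same face identity twice.
```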
#### DUM

* Run [./DUM/make_dataset/crop_dataset.py](https://github.com/JDAI-CV/FaceX-Zoo/blob/main/addition_module/DSDG/DUM/make_dataset/crop_dataset.py) to crop the original data. Save the cropped data in `./oulu_images_crop/`.
* Move the generated data in `./fake_images/` to `./oulu_images_crop/` and update the protocol accordingly.
* Run [./DUM/train.py](https://github.com/JDAI-CV/FaceX-Zoo/blob/main/addition_module/DSDG/DUM/train.py) to train the model with both the original and the generated data.
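As a rough illustration of why depth uncertainty learning softens the effect of distorted generated samples, the snippet below uses a per-pixel Gaussian negative log-likelihood, a common uncertainty formulation; the actual loss in `./DUM/train.py` may differ. A sample with an unreliable depth label can be assigned a larger predicted variance, which down-weights its squared error:

```python
import math

def uncertainty_loss(pred_mu, log_var, target):
    """Gaussian NLL up to a constant: the squared residual is scaled by
    exp(-log_var), while 0.5 * log_var penalizes inflating the variance
    for free. Scalars here stand in for per-pixel depth values."""
    residual_sq = (pred_mu - target) ** 2
    return 0.5 * math.exp(-log_var) * residual_sq + 0.5 * log_var

# Clean sample: small residual, unit variance -> tiny loss.
clean = uncertainty_loss(0.9, 0.0, 1.0)
# Distorted sample: large residual. With fixed unit variance the loss is
# large, but predicting a higher variance reduces it, so the bad depth
# label contributes a weaker gradient.
noisy_fixed = uncertainty_loss(3.0, 0.0, 1.0)
noisy_adapted = uncertainty_loss(3.0, math.log(4.0), 1.0)
```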
## Testing

We provide a CDCN model with DUM trained on OULU-NPU Protocol-1, and the following shows how to test it.

* The trained model is released in `./DUM/checkpoint/`.
* Run [./DUM/test.py](https://github.com/JDAI-CV/FaceX-Zoo/blob/main/addition_module/DSDG/DUM/test.py) to test on OULU-NPU Protocol-1.
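OULU-NPU results are conventionally reported as APCER, BPCER, and ACER. The README does not spell these out, so the following is a generic sketch of those standard metrics under an assumed score convention (higher score = more likely live), not the exact computation in `./DUM/test.py`:

```python
def fas_metrics(scores, labels, threshold=0.5):
    """APCER: fraction of attacks accepted as live.
    BPCER: fraction of live faces rejected as attacks.
    ACER:  the mean of the two.
    `labels`: 1 = live, 0 = attack; `scores`: higher = more likely live."""
    attacks = [s for s, y in zip(scores, labels) if y == 0]
    lives = [s for s, y in zip(scores, labels) if y == 1]
    apcer = sum(s >= threshold for s in attacks) / len(attacks)
    bpcer = sum(s < threshold for s in lives) / len(lives)
    return apcer, bpcer, (apcer + bpcer) / 2

# Toy example: one attack accepted (0.6) and one live rejected (0.3).
apcer, bpcer, acer = fas_metrics(
    scores=[0.9, 0.8, 0.3, 0.6, 0.2, 0.1],
    labels=[1,   1,   1,   0,   0,   0],
)
```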
## Citation

Please consider citing our paper in your publications if the project helps your research.
## Acknowledgements

This repo is based on the following projects; many thanks to their authors.

- [BradyFU/DVG](https://github.com/BradyFU/DVG)
- [ZitongYu/CDCN](https://github.com/ZitongYu/CDCN)
- [YadiraF/PRNet](https://github.com/YadiraF/PRNet)
- [ipazc/mtcnn](https://github.com/ipazc/mtcnn)
