<Contribution>
Proposes the Shuffled Style Assembly Network (SSAN), which generalizes Face Anti-Spoofing (FAS) across domains.
Adopts adversarial learning so that domains become indistinguishable.
For style features, contrastive learning is used to emphasize liveness-related style information while suppressing domain-specific information.
Builds a large-scale FAS benchmark by aggregating existing datasets.
Abstract
• Shuffled Style Assembly Network (SSAN)
– Proposes the SSAN framework, which is effective for domain generalization
– SSAN: "we split the complete representation into content and style ones with various supervision. Then, a generalized feature space is obtained by resembling features under a contrastive learning strategy."
– Uses style transfer to obtain a stylized feature space
– Proposes applying contrastive learning to improve liveness classification while suppressing domain-specific learning
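The slides summarize the contrastive-learning strategy without giving its exact objective. As a reference point for how such an objective typically looks, below is a minimal InfoNCE-style loss for a single anchor; this is an illustrative sketch, not the paper's actual loss, and the function name and arguments are hypothetical.

```python
import math

def contrastive_loss(sim_pos, sims_all, temperature=0.1):
    """InfoNCE-style loss for one anchor: pull the positive pair together
    and push the negative pairs apart.

    sim_pos:  similarity between the anchor and its positive sample
    sims_all: similarities to the positive AND all negative samples
    """
    denom = sum(math.exp(s / temperature) for s in sims_all)
    return -math.log(math.exp(sim_pos / temperature) / denom)
```

The loss shrinks as the positive similarity grows relative to the negatives, which is the mechanism such a strategy relies on to group liveness-related style features while ignoring domain identity.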
Introduction
• Domain Generalization
– Existing methods show relatively poor detection performance on new domains
– Training on unlabeled target data can be ineffective
– Several domain generalization studies exist, but most of them rely on BN layers
– Because Batch Normalization (BN) focuses on global image statistics, it can ignore local image properties
– Instance Normalization (IN) can extract the liveness-related texture and the domain-specific information of a single image
– BN + IN normalization is applied to capture both kinds of information, global and local
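The BN-versus-IN contrast above comes down to which axes the statistics are taken over. A minimal NumPy sketch (illustrative only; the (batch, channel, height, width) layout and the epsilon are assumptions, not from the slides):

```python
import numpy as np

# A random feature map of shape (batch, channels, height, width).
x = np.random.default_rng(0).normal(size=(4, 3, 8, 8))
eps = 1e-5

# Batch Normalization: one mean/std per channel, shared across the whole
# batch -- global image statistics.
bn_mean = x.mean(axis=(0, 2, 3), keepdims=True)
bn_std = x.std(axis=(0, 2, 3), keepdims=True)
x_bn = (x - bn_mean) / (bn_std + eps)

# Instance Normalization: one mean/std per sample and channel -- the
# statistics of a single image, which carry its texture and style.
in_mean = x.mean(axis=(2, 3), keepdims=True)
in_std = x.std(axis=(2, 3), keepdims=True)
x_in = (x - in_mean) / (in_std + eps)
```

Because IN normalizes each image by itself, it removes per-image style while BN only removes batch-level statistics; combining the two is what lets the network keep both global and local cues.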
Related Work
• Normalization and Style Transfer
– Adaptive Instance Normalization (AdaIN): a normalization technique proposed to improve the speed and quality of earlier style transfer methods
– Using a content feature and a style feature, AdaIN can generate a variety of stylized images
• Protocols for Face Anti-Spoofing
– OCIM is used to evaluate domain generalization
– The train & test sets are composed to reflect real-world conditions (i.e., attack types such as print, replay, mask, makeup, waxworks)
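AdaIN itself is a short computation: normalize the content feature with its own channel statistics, then re-scale and re-shift with the style feature's statistics. A hedged NumPy sketch of the standard AdaIN formula (the (C, H, W) shape and the epsilon are assumptions; this is not code from the slides):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """AdaIN(x, y) = sigma(y) * (x - mu(x)) / sigma(x) + mu(y),
    computed per channel over the spatial dims; features have shape (C, H, W)."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # Strip the content image's style, then impose the style image's statistics.
    return s_std * (content - c_mean) / (c_std + eps) + s_mean
```

The output keeps the content feature's spatial structure but takes on the style feature's channel statistics, which is why swapping in different style features yields many stylized variants of one content image.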
Proposed Method
• Content and Style Information Aggregation
– Adversarial learning is applied so that the generated content features cannot be distinguished across different domains
– GRL (Gradient Reversal Layer): to train a model that generalizes even when the domain changes, the domain-classification performance must be driven down; during backpropagation the gradient is multiplied by a negative sign, reversing it so that the domain loss is maximized
– (Notation in the loss: the set of domain labels; the number of different data domains)
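Conceptually the GRL is just an identity map whose backward pass flips the sign of the gradient, scaled by a factor lambda. A framework-free sketch of that behavior (the class name and lambda handling are illustrative, not the authors' implementation):

```python
class GradientReversal:
    """Identity in the forward pass; multiplies the incoming gradient by
    -lambda in the backward pass, so the upstream feature extractor is
    trained to *maximize* the domain classifier's loss."""

    def __init__(self, lambd=1.0):
        self.lambd = lambd

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lambd * grad_output  # reversed gradient flows upstream
```

Placed between the content feature extractor and the domain classifier, this lets the classifier keep learning to separate domains while the extractor receives the reversed gradient and learns to make them indistinguishable.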
Experiments
• Data Evaluation Protocol & Metrics
– Protocol 1: intra-dataset evaluation; all datasets are used as training and testing sets simultaneously
– Protocol 2: cross-domain evaluation; P1: {D3, D4, D5, D10, D11, D12}, P2: {D1, D2, D6, D7, D8, D9}
– Metrics: HTER ((FRR + FAR) / 2), AUC
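HTER can be computed directly from scores at a fixed threshold. A small illustrative sketch (the convention that label 1 = live and a higher score means "more live" is an assumption of this example, not stated on the slides):

```python
def far_frr(scores, labels, threshold):
    """FAR: fraction of spoof samples (label 0) wrongly accepted as live;
    FRR: fraction of live samples (label 1) wrongly rejected."""
    n_spoof = sum(1 for y in labels if y == 0)
    n_live = sum(1 for y in labels if y == 1)
    false_accepts = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    false_rejects = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    return false_accepts / n_spoof, false_rejects / n_live

def hter(far, frr):
    """Half Total Error Rate: the average of FAR and FRR."""
    return (far + frr) / 2.0
```

Averaging the two error rates keeps a trivial always-accept or always-reject classifier from looking good, which is why HTER is the standard cross-domain FAS metric.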
Experiments
• Experiment with the Leave-One-Out (LOO) setting on OCIM
– SSAN-M: the mean value of the predicted depth map is used as the final score
– SSAN-R: the sigmoid output for the living class is used as the final score
– Datasets: OULU-NPU [3] (O), CASIA-MFSD [64] (C), Replay-Attack [6] (I), and MSU-MFSD [50] (M)
Conclusion
Proposes the Shuffled Style Assembly Network (SSAN), which generalizes Face Anti-Spoofing (FAS) across domains.
Adopts adversarial learning so that domains become indistinguishable.
For style features, contrastive learning is used to emphasize liveness-related style information while suppressing domain-specific information.
Builds a large-scale FAS benchmark by aggregating existing datasets.
Editor's Notes
Style transfer: starting from the feature x of the image that contains the desired content, subtract that image's own style and add the style you want to apply.
Overall architecture of the Shuffled Style Assembly Network (SSAN).
(Domain-Adversarial Neural Networks (DANN))
Style transfer: https://lifeignite.tistory.com/46
AdaIN: this method is widely used in generative tasks for texture synthesis and style transfer.
Half Total Error Rate (HTER): HTER = (FAR + FRR) / 2
ACER = (APCER + NPCER) / 2
False Positive Rate (FPR): FPR = FP / (FP + TN)
True Positive Rate (TPR): TPR = TP / (TP + FN)