Doctoral Student
Reported By: Zain Ul Abideen
Dec 21, 2023
Central South University, Changsha, Hunan, China
Self Introduction
Eagerly seeking a Ph.D., I come from a village in Pakistan and have been fortunate to achieve a number of successes. I am grateful for the opportunity to interview at top-tier institutions, and my passion lies in contributing to impactful research, particularly in Artificial Intelligence and Computer Vision.
Honors and Awards
Awarded a Fully Funded Master’s University Scholarship at Central South University
Sep 01, 2021 – June 2024
Date of Birth: 31-01-1997
Nationality: Pakistani
Name: Zain Ul Abideen
Educational Background
Sep. 2021 – Jun. 2024: School of Computer Science and Technology, Central South University, China, Master's (CS), "MuST-GAN MFAS: Multi-Semantic Spoof Tracer GAN with Transformer Layers for Multi-modal Face Anti-Spoofing", supervised by Professor Shu Liu.
Sep. 2016 – Jul. 2020: Department of Computer Science and Technology, Government College University, Pakistan, BS (CS), "AI-based GCUF Assistant Chatbot", supervised by Professor Rao Iqbal.
Publications
• Ul Abideen, Z. and Liu, S. (2023). MuST-GAN MFAS: Multi-Semantic Spoof Tracer GAN with Transformer Layers for Multi-modal Face Anti-Spoofing.
• Shahzad, I. and Ul Abideen, Z. (2023). Hybrid Learning to Classify Autistic and Non-autistic Face.
• Abbas, W. and Ul Abideen, Z. (2023). Classify Attire Detection using Optimized Hybrid Learning.
• Hussain, T. and Ul Abideen, Z. (2023). A Hybrid Deep Learning Approach for Improved E-Learning based on Automatic Learning Style.
Work Experience
• Nov 2020 – Feb 2022: Data Analyst at an e-commerce company running e-commerce stores worldwide. Location: Lahore, Pakistan.
• Sep 2018 – May 2020: Sales Agent at a UK-based company providing electricity and gas services. Location: Faisalabad, Pakistan.
Skills
• Programming Languages: Proficient in Python, working in Visual Studio Code, PyCharm, Google Colab, and Kaggle environments.
• LaTeX: Experienced in using LaTeX for academic paper writing.
• Graphic Design: Skilled in Visio for creating figures, diagrams, and graphics to enhance research presentations and publications.
Introduction
Fig. 1 Disentangled spoof traces on the CASIA-SURF CeFA dataset
MuST-GAN MFAS is a novel model in multi-modal Face Anti-Spoofing,
effectively addressing challenges in adaptability to unseen attacks. Utilizing
modality-specific encoders and Swin Transformer layers, the model
disentangles spoof traces through cross-modal attention mechanisms and a
StyleSwin transformer-based generator. Its bidirectional adversarial learning approach incorporates identity-consistency, intensity, center, and classification constraints. Rigorous evaluations demonstrate MuST-GAN MFAS's
superiority over existing frameworks, showcasing remarkable performance
across diverse modal samples. This model makes a substantial contribution to
face anti-spoofing by emphasizing the importance of learning multi-semantic
spoof traces for improved generalization and adaptability.
Related Work
• Traditional machine learning FAS methods
• CNN-based FAS methods
• Vision Transformer-based FAS methods
• GAN-based FAS methods (see the minimal example after Fig. 2)
Fig. 2 Typical GAN
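As a quick illustration of the "typical GAN" setup sketched in Fig. 2, below is a minimal, generic adversarial training step in PyTorch. It is only a hedged sketch: the module sizes, names, and data shapes are placeholders and are not part of the MuST-GAN MFAS design.

```python
# Minimal, generic GAN training step (illustrative only; not the MuST-GAN MFAS architecture).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())   # generator
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))       # discriminator
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real):                                   # real: (B, 784) flattened images
    z = torch.randn(real.size(0), 64)
    fake = G(z)

    # Discriminator step: push real samples toward 1, generated samples toward 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label generated samples as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```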
Academic Innovations
•Spoof Trace Generation:
• StyleSwin-based generator extracts complementary features for precise spoof detection.
• Modality-specific encoders and decoders separate live features from spoof traces.
•Multi-Modal Fusion:
• Novel method fuses RGB, depth, and infrared modalities.
• Cross-modal attention mechanisms and a style-based architecture enhance resilience.
• Extracts textural, geometric, and thermal patterns for efficient detection of emerging attacks.
•Comprehensive Training:
• Bidirectional adversarial learning, consistency loss, and identity, intensity, center, and classification losses.
• Ensures effective separation of spoof traces while preserving identity information and modality consistency.
MuST-GAN MFAS
MuST-GAN MFAS enhances robustness against spoofing attacks by integrating RGB, depth, and infrared modalities. Modality-specific encoders tailored to each image type disentangle multi-semantic spoof traces. The model employs cross-modal attention mechanisms to selectively emphasize relevant information and a style-based architecture for efficient fusion. A double attention mechanism in the transformer layers captures fine structures and coarse geometry simultaneously. Reintroducing absolute position information, which is lost in window-based transformers, improves generation quality. Training is guided by several losses, and a classification network combines the spoof traces from all modalities for the final decision.
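To make the modality-specific encoder/decoder idea concrete, here is a minimal, hedged sketch of a single branch that reconstructs the live face content and treats the residual as that modality's spoof trace. The class name `ModalityBranch`, the layer choices, and the channel counts are illustrative assumptions, not the actual MuST-GAN MFAS layers.

```python
# Hedged sketch: one encoder/decoder branch per modality; the residual is the spoof trace.
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    """Encode one modality, reconstruct its live content, keep the residual as the spoof trace."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(32, in_ch, 3, padding=1))

    def forward(self, x):
        feats = self.encoder(x)
        live = self.decoder(feats)      # reconstructed "live" content
        trace = x - live                # residual interpreted as the spoof trace
        return feats, live, trace

# One branch per modality (channel counts are assumptions: RGB=3, depth=1, IR=1).
rgb_branch, depth_branch, ir_branch = ModalityBranch(3), ModalityBranch(1), ModalityBranch(1)
feats, live, trace = rgb_branch(torch.randn(2, 3, 64, 64))   # trace feeds fusion / classification
```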
MuST-GAN MFAS
Fig. 3 Overall Architecture
Cross-modality and Feature Fusion
The proposed MuST-GAN MFAS employs a cross-modality feature fusion process for enhanced spoof trace generation in face anti-spoofing. This process involves Feature Separation (FS) and Feature Aggregation (FA) stages, utilizing cross-modality channel attention and spatial-wise gates. The fusion methodology integrates the RGB, depth, and infrared modalities, filtering influential channels with learned attention weights. The refined features then undergo re-computation and resizing. A spatial aggregation step, utilizing transformer layers to capture global information, focuses on the spoof-related regions induced by attacks. The final reinforced features represent a seamless integration of the RGB, depth, and infrared aspects, enhancing disentanglement performance across modalities in the MuST-GAN MFAS transformer.
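The following is a deliberately simplified sketch of the FS/FA idea described above: channel attention computed across all modalities re-weights each modality's channels, and a per-pixel spatial gate decides how the modalities are blended. The module `CrossModalFusion` and its layers are assumptions for illustration, not the exact fusion block (which also involves transformer layers and resizing steps).

```python
# Toy cross-modality fusion: channel attention (FS) followed by a spatial gate (FA).
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.channel_att = nn.Sequential(nn.Linear(ch * 3, ch * 3), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(nn.Conv2d(ch * 3, 3, kernel_size=1), nn.Softmax(dim=1))

    def forward(self, rgb, dep, ir):                      # each: (B, C, H, W)
        stacked = torch.cat([rgb, dep, ir], dim=1)        # (B, 3C, H, W)
        # Feature Separation: learned channel weights from globally pooled statistics.
        w = self.channel_att(stacked.mean(dim=(2, 3)))    # (B, 3C)
        filtered = stacked * w[:, :, None, None]
        # Feature Aggregation: per-pixel gate over the three modalities.
        gates = self.spatial_gate(filtered)               # (B, 3, H, W), sums to 1 per pixel
        r, d, i = filtered.chunk(3, dim=1)
        return gates[:, 0:1] * r + gates[:, 1:2] * d + gates[:, 2:3] * i

fusion = CrossModalFusion(64)
fused = fusion(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```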
Cross Modality and Feature Fusion
Fig. 4 Cross Modality and Feature Fusion
Training Process and Loss Functions
The proposed approach employs bidirectional adversarial learning for disentangling spoof traces, exploiting the similar data distribution of the live parts and using the decomposed spoof patterns as building blocks. Multi-scale discriminators, inspired by the PatchGAN structure and incorporating transformer layers, assess the features for adversarial learning. An identity consistency loss ensures that identity properties are preserved during reconstruction, while an intensity loss regularizes the intensity of the spoof traces. A center loss optimizes the feature distribution for improved generalization, and a classification loss ensures correct classification of live and spoof samples. Training involves three steps: generator, discriminator, and consistency supervision. The overall loss balances these components, enhancing the model's ability to generate and classify spoof traces.
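For reference, the overall objective described above can be summarized as a weighted sum of its components. The slides list eight weights α1–α8 (see Experimental Settings) but do not state which weight pairs with which term, so the grouping below is only a generic, assumed form:

```latex
\mathcal{L}_{\text{total}} \;=\; \sum_{i=1}^{8} \alpha_i \,\mathcal{L}_i,
\qquad
\mathcal{L}_i \in \left\{ \mathcal{L}^{G}_{\text{adv}},\; \mathcal{L}^{D}_{\text{adv}},\;
\mathcal{L}_{\text{consistency}},\; \mathcal{L}_{\text{identity}},\;
\mathcal{L}_{\text{intensity}},\; \mathcal{L}_{\text{center}},\; \mathcal{L}_{\text{cls}},\; \dots \right\}
```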
Training Process and Loss Functions
Fig. 5 Training Process and Loss Functions
Experimental Settings
•MuST-GAN MFAS multi-encoder, multi-decoder model training was conducted on an NVIDIA GeForce RTX 3090 Ti GPU.
•A batch size of 16 and the Adam optimizer were used for 300 iterations.
•The initial learning rate was set to 5 × 10^-5.
•Hyperparameters {α1, α2, α3, α4, α5, α6, α7, α8} were configured as {0.25, 100, 1, 100, 1, 10, 1, 1}.
•To address class imbalance, distinct learning rates were applied to the generator and discriminator: 5 × 10^-5 and 2 × 10^-4, respectively (see the configuration sketch after this list).
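A minimal configuration sketch of the setup listed above, assuming a PyTorch training loop; `generator` and `discriminator` are stand-in modules, not the actual MuST-GAN MFAS networks.

```python
# Hedged sketch of the stated training configuration (placeholder modules).
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(128, 128))       # placeholder for the spoof-trace generator
discriminator = nn.Sequential(nn.Linear(128, 1))     # placeholder for the multi-scale discriminator

batch_size = 16
num_iterations = 300
alphas = [0.25, 100, 1, 100, 1, 10, 1, 1]             # loss weights {α1..α8} from the slide

# Adam with distinct learning rates for generator (5e-5) and discriminator (2e-4).
opt_g = torch.optim.Adam(generator.parameters(), lr=5e-5)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
```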
Experimental Settings
Experimental Evaluations:
•Extensive experiments were conducted on three diverse multi-modal datasets.
•MuST-GAN MFAS was assessed using intra-testing experiments.
•Metrics include APCER, BPCER, ACER, and TPR@FPR=10^-4 for CASIA-SURF (see the metric sketch after this list).
•Cross-testing experiments are evaluated using HTER.
•The datasets were intentionally selected for a comprehensive examination across difficulties and scenarios.
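The error metrics named above are the standard FAS evaluation quantities. Below is a small, hedged sketch of how they can be computed from binary decisions (thresholding and the TPR@FPR=10^-4 sweep are omitted); the label coding is an assumption for illustration.

```python
# APCER / BPCER / ACER (and HTER as the average of the two error rates) from binary decisions.
import numpy as np

def fas_metrics(labels, preds):
    """labels: 1 = live (bona fide), 0 = attack; preds: model decisions in the same coding."""
    labels, preds = np.asarray(labels), np.asarray(preds)
    attacks, lives = labels == 0, labels == 1
    apcer = np.mean(preds[attacks] == 1)    # attack presentations accepted as live
    bpcer = np.mean(preds[lives] == 0)      # bona fide presentations rejected as attacks
    acer = (apcer + bpcer) / 2              # average classification error rate
    hter = (apcer + bpcer) / 2              # half total error rate (same average at a fixed threshold)
    return apcer, bpcer, acer, hter

print(fas_metrics([1, 1, 0, 0, 0], [1, 0, 0, 1, 0]))   # -> (0.333..., 0.5, 0.4166..., 0.4166...)
```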
Experimental Settings
Datasets Utilized:
•CASIA-SURF:
• Large-scale, multi-modal dataset for face anti-spoofing.
• Over 50,000 videos from 10,000 subjects.
• Includes RGB, depth, and infrared modalities.
•CASIA-SURF CeFA:
• Encompasses 2D and 3D attacks, including rigid and silicone masks.
• RGB, depth, and IR modalities.
• Provides five protocols for varied conditions.
•WMCA:
• Wide Multi-Channel Presentation Attack database.
• Developed at Idiap within the "IARPA BATL" and "H2020 TESLA" projects.
• 1,941 short video recordings across 72 identities.
• Channels include color, depth, infrared, and thermal.
• Used for cross-testing MuST-GAN MFAS's generalization capabilities in diverse tasks.
Fig. 6 CeFA dataset
Experimental Settings
•Testing Scenarios:
• Investigated two testing scenarios:
• Common practice aligning testing with training techniques.
• Flexible scenario allowing testers to use individual or combined techniques.
•Protocols and Test Scenarios:
• Implemented specific protocols aligned with training techniques for intra-dataset
evaluation.
• Explored adaptability across datasets, emphasizing effectiveness within CASIA-SURF and CeFA.
• Extended evaluation to cross-dataset testing, particularly with WMCA.
• WMCA introduces grandtest and unseen attack protocols, evaluating generalization in
various scenarios.
Experimental Settings
•MuST-GAN MFAS Evaluation:
• Perfected through extensive training on CASIA-SURF and CeFA datasets, leveraging
RGB, Depth, and Infrared modalities.
• Thorough evaluation in intra-dataset scenarios, focusing on CASIA-SURF and CeFA
intricacies.
• Transitioned seamlessly into cross-dataset testing, evaluating generalization on
WMCA with distinct protocols.
•Comparison with SOTA Frameworks:
• Contrasted MuST-GAN MFAS with state-of-the-art frameworks.
• Engaged in a comprehensive discussion that goes beyond raw performance metrics.
• Carefully evaluated the effectiveness of MuST-GAN MFAS, securing its position
among the state-of-the-art face anti-spoofing systems.
Results
•Ablation Study Summary:
•Baseline:
• No disentanglement; the classification network is trained and evaluated on unaltered RGB-D-IR images.
• The disentanglement process significantly improves performance compared to this baseline.
• Single-modality usage (RGB Only, DEP Only, IR Only) surpasses the baseline in terms of ACER.
•RGB Only, DEP Only, and IR Only:
• Each disentanglement network branch is modeled separately, without cross-modality integration.
• Noticeable performance gains when individual modalities are used for decision-making.
• DEP Only outperforms RGB Only and IR Only, emphasizing the benefit of using depth information alone.
Results
•Feature Concatenation (RGB & DEP & IR):
• Cross-modality feature fusion implemented through vanilla concatenation.
• The proposed model outperforms this concatenation variant, showcasing the effectiveness of cross-modality fusion in reinforcing correlations.
•SA-Gate:
• The feature fusion module is replaced with the original version of SA-Gate.
• The model exhibits further ACER improvements compared to SA-Gate, highlighting the superior efficacy of the proposed fusion strategy.
Conclusion
In conclusion, MuST-GAN MFAS emerges as a novel force in Face Anti-Spoofing (FAS).
Leveraging adversarial disentangled representation learning and cross-modality feature fusion, our
model excels in detecting a diverse range of fraudulent scenarios. The innovative spoof trace
generator, with double attention enhancement, achieves unprecedented levels of trace synthesis.
Extensive experiments showcase the model's effectiveness against both traditional and unseen
attacks. MuST-GAN MFAS not only advances FAS solutions but also paves the way for exploring
transformers in facial recognition research. Its versatile defense mechanisms establish it as a
steadfast guardian against evolving threats, symbolizing a creative advancement at the intersection
of innovation and security. As the curtain descends, MuST-GAN MFAS stands as a motivational
work for future endeavors in the dynamic landscape of face anti-spoofing.
Thank you, professors, for listening!
THANK YOU ALL

