Emerging Security and Privacy Threat Landscape in AI
Strategy and Synergy for Security
Dr. Reshmi T R
Scientist
15th March, 2024
The focus will be on the security and privacy threats of big data in AI systems.
Outline of the session
 Background
 Emerging Security & Privacy Threats of AI
 Countermeasures for the Security & Privacy Threats of AI
 Standards for dealing with the Security and Privacy of AI Systems
 Way Forward
 Works Progressing at SETS
Background
Machine Learning (ML) and Artificial Intelligence (AI)
‱ML consists of a set of algorithms and statistical models for computer systems to
efficiently perform a particular task without relying on rule-based programming or
human interaction.
‱The quality of the resulting mathematical model depends strongly on the Training Dataset
‱The program gradually improves through experience, learning from the training data to
predict, detect, or make decisions.
Background
‱ The workflow and the different phases of AI systems built on ML
algorithms are shown below:
AI models are used for
‱Automated Threat Detection
‱Enhanced Accuracy
‱Adaptive Defense
‱Behavioral Analysis
‱Threat Intelligence Integration
‱Scalability
‱Continuous Monitoring and Analysis
‱Streamlined Incident Response
AI4CS - AI in Cybersecurity Applications or Services
‱ Artificial Intelligence (AI) presents unprecedented
security and privacy challenges.
‱ Understanding and addressing these risks is crucial
for the responsible deployment of AI technologies.
‱ AI introduces privacy vulnerabilities through data
collection and processing.
‱ Security risks arise from AI's potential susceptibility
to adversarial attacks and exploitation of
vulnerabilities.
Source: NISTIR 8062
CS4AI - AI Security and Privacy
This content is used as input to standards: the EU AI Act, ISO/IEC 27090 (AI security), the OWASP ML Top 10,
the OWASP LLM Top 10, and OpenCRE
Security and Privacy Threats on AI Models
Data poisoning
‱Manipulating the training data to control the prediction
behavior of AI models
Adversarial attacks
‱Crafting inputs that avoid detection or alter the model’s outcomes
Evasion attacks
‱Tweaking the data the model processes at inference to mislead it; no
manipulation of the training data (a minimal sketch follows below)
Model inversion attacks
‱Revealing sensitive information from the model’s outputs, even
without direct access to the underlying data
Model stealing
‱Unauthorized duplication of trained models, bypassing the
investment made in building them
Model extraction
‱Probing the model’s input–output pairs to learn its structure
and functionality
Prompt injection attacks
‱A concern in LLMs: attackers craft inputs that override the model’s
instructions, e.g., tricking it into revealing confidential information
Malicious attacks targeting AI systems
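To make the evasion threat concrete, below is a minimal sketch (illustrative only, assuming scikit-learn and NumPy) of an adversarial perturbation against a linear classifier, in the spirit of DeepFool [6]: a small, targeted change to a correctly classified input flips the model's prediction without touching the training data.

```python
# Minimal evasion sketch against a linear classifier (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]                                    # a correctly classified input
w = clf.coef_[0]                            # the model's weight vector
d = clf.decision_function([x])[0]           # signed decision score

# Smallest uniform step along sign(w) that pushes the score past the
# boundary -- the linear-model special case exploited by DeepFool [6].
eps = 1.1 * abs(d) / np.abs(w).sum()
x_adv = x - np.sign(d) * eps * np.sign(w)

print("original prediction  :", clf.predict([x])[0])
print("perturbed prediction :", clf.predict([x_adv])[0])
print("per-feature change   :", eps)
```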
Emerging Security & Privacy Threats of AI
‱ Data Breach
o As a common privacy incident, a data breach is the disclosure of confidential
or sensitive data through unauthorized access.
‱ Bias in Data
o Bias targets the training phase and violates the integrity of an AI system.
o Bias can affect different attributes in decision making.
‱ Data Poisoning
o Data poisoning is one of the most widespread attacks, built on the idea of
learning with polluted data.
o The attack injects adversarial training data during learning to corrupt the
model or to force the system to produce false results (a minimal sketch follows below)
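A minimal sketch of label-flipping poisoning (illustrative only, assuming scikit-learn): corrupting the labels of a fraction of the training set measurably degrades the learned model.

```python
# Minimal label-flipping poisoning sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The adversary flips the labels of 20% of the training records.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
y_poi = y_tr.copy()
y_poi[idx] = 1 - y_poi[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poi)

print("clean model accuracy   :", clean.score(X_te, y_te))
print("poisoned model accuracy:", poisoned.score(X_te, y_te))
```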
Emerging Security & Privacy Threats of AI
‱ Model Extraction
o The adversary’s aim in model extraction is to infer the record(s) used to
train the model, thereby violating the confidentiality of the system (a
query-based sketch follows below)
o Depending on how sensitive the training data is (e.g., medical records), the
attack can cause a significant privacy breach by disclosing sensitive information
o Many ML techniques (e.g., logistic regression, linear classifiers, support
vector machines, and neural networks) have been shown to be vulnerable to
model extraction attacks
o This makes it difficult to protect the privacy and security of the data.
‱ Evasion
o Evasion is a popular attack in which the attacker aims to evade detection
by fooling the system into misclassification
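A minimal sketch of query-based model extraction (illustrative only, assuming scikit-learn): the adversary sees only the victim's query API, yet trains a surrogate that closely mimics its behavior.

```python
# Minimal query-based model extraction sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
victim = RandomForestClassifier(random_state=1).fit(X, y)   # deployed model

# The adversary only sends queries and records the API's answers.
rng = np.random.default_rng(1)
queries = rng.normal(size=(5000, 10))
answers = victim.predict(queries)

surrogate = LogisticRegression(max_iter=1000).fit(queries, answers)

# Agreement rate: how often the stolen copy mimics the victim.
probe = rng.normal(size=(1000, 10))
print("agreement:", (surrogate.predict(probe) == victim.predict(probe)).mean())
```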
Emerging Security & Privacy Threats of AI
                   AI Workflow Phase
Threat           | Training | Model | Apply | Inference
-----------------+----------+-------+-------+----------
Data Breach      | Yes      | Yes   | -     | Yes
Bias in Data     | Yes      | -     | -     | -
Data Poisoning   | Yes      | -     | -     | -
Model Extraction | -        | Yes   | -     | -
Evasion          | -        | -     | Yes   | -

Table: Attack phases that penetrate the AI system
Emerging Security & Privacy Threats of AI
Threat           | Challenged Security Goal (CIA) | Security Threat Areas
-----------------+--------------------------------+----------------------------------------------------------------
Data Breach      | Confidentiality                | Re-identification; risk of inference
Bias in Data     | Integrity, Availability        | Gender classification; face recognition; criminal legal system
Data Poisoning   | Availability, Integrity        | Self-driving cars; sentiment analysis; social media chatbots
Model Extraction | Confidentiality                | Image recognition; location data
Evasion          | Integrity                      | Image classification; spam emails; self-driving cars

CIA: Confidentiality, Integrity, Availability
Countermeasures for the Security & Privacy Threats of AI
‱ Data Breach
o The privacy-preserving techniques for big data fall into three classes:
Anonymization, De-identification, and Privacy-enhancing Technologies (PETs).
o k-anonymity, l-diversity, and t-closeness are popular techniques for masking
sensitive information, such as location-based data, so that individual records
are not distinguishable in a dataset (a minimal check is sketched below).
o PETs were developed for privacy-preserving data analysis in various domains
o Verifiable Data Audit
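A minimal sketch of a k-anonymity check (illustrative only, assuming pandas; the column names and data are hypothetical): a released table is k-anonymous if every combination of quasi-identifiers appears at least k times.

```python
# Minimal k-anonymity check (illustrative only; hypothetical columns).
import pandas as pd

df = pd.DataFrame({
    "age_band":   ["20-30", "20-30", "20-30", "30-40", "30-40"],
    "zip_prefix": ["600**", "600**", "600**", "641**", "641**"],
    "diagnosis":  ["flu", "cold", "flu", "flu", "cold"],   # sensitive value
})

def is_k_anonymous(table, quasi_identifiers, k):
    # Every quasi-identifier group must contain at least k records.
    return table.groupby(quasi_identifiers).size().min() >= k

print(is_k_anonymous(df, ["age_band", "zip_prefix"], k=2))  # True
print(is_k_anonymous(df, ["age_band", "zip_prefix"], k=3))  # False
```

If the check fails, the quasi-identifiers are generalized further (e.g., wider age bands) until every group reaches size k.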
Countermeasures for the Security & Privacy Threats of AI
‱ Metrics for Bias in Data
o Difference in means
o Difference in residuals
o Equal opportunity
o Disparate impact (sketched below)
o Normalized mutual information
 Building on these metrics, approaches have been developed to diagnose and
mitigate AI bias.
 Optimized preprocessing, reject option classification, learning fair
representations, and adversarial debiasing are such techniques for removing
AI bias
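A minimal sketch of the disparate-impact metric named above (illustrative only; the data is made up): it is the ratio of favorable-outcome rates between the unprivileged and privileged groups, with values below 0.8 commonly flagged under the "80% rule".

```python
# Minimal disparate-impact sketch (illustrative only; data is made up).
import numpy as np

y_pred = np.array([1, 0, 1, 1, 1, 1, 0, 0, 0, 1])   # model decisions
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # 0 = privileged group

rate_priv   = y_pred[group == 0].mean()              # favorable-outcome rates
rate_unpriv = y_pred[group == 1].mean()
disparate_impact = rate_unpriv / rate_priv

print(f"disparate impact = {disparate_impact:.2f}")  # < 0.8 suggests bias
```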
Countermeasures for the Security & Privacy Threats of AI
‱ Data Poisoning
o One common approach to detecting poisoned data is outlier (i.e., anomaly)
detection, since the injected data is expected to follow a different
distribution (a minimal sketch follows below).
o Another technique recognizes and removes poisoned data from the training
dataset by separating newly added inputs and measuring the model's accuracy
on them
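A minimal sketch of the outlier-based approach (illustrative only, assuming scikit-learn's IsolationForest): off-distribution injected points are flagged as anomalies and can be dropped before training.

```python
# Minimal poisoning-detection sketch via anomaly detection (illustrative).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
clean  = rng.normal(0.0, 1.0, size=(950, 5))   # legitimate training data
poison = rng.normal(6.0, 0.5, size=(50, 5))    # injected, off-distribution
X = np.vstack([clean, poison])

detector = IsolationForest(contamination=0.05, random_state=0).fit(X)
flags = detector.predict(X)                    # -1 marks an outlier

print("poison flagged:", (flags[950:] == -1).mean())   # detection rate
print("clean flagged :", (flags[:950] == -1).mean())   # false positives
X_filtered = X[flags == 1]                     # train on the retained rows
```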
Countermeasures for the Security & Privacy Threats of AI
‱ Model Extraction
o Analyze the distribution of consecutive API queries and compare it with
benign behavior.
o Train multiple models, giving each a different partition of the training
data.
o Another approach is to limit the information in the model's probability
scores, degrading the attack's success rate by misleading the adversary
(a minimal sketch follows below).
‱ Evasion
o Compute the perturbations that fool the classifier and thus quantify the
classifier's robustness.
o Detect adversarial samples among the original ones and remove them from
the dataset
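A minimal sketch of the probability-score-limiting defense named above (illustrative only, assuming scikit-learn): the serving API returns coarsened probabilities instead of full-precision scores, so each query leaks less about the model.

```python
# Minimal sketch: truncate served probability scores (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def api_predict(x, precision=1):
    """Serve rounded probabilities instead of full-precision scores."""
    proba = model.predict_proba(x)
    return np.round(proba, precision)

print(model.predict_proba(X[:1]))   # full precision (leaks more per query)
print(api_predict(X[:1]))           # coarsened output served to clients
```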
‱ Generative AI (GenAI)
‱ Predictive AI
‱ Robust AI
‱ Privacy-preserving AI
‱ Trustable AI
‱ Frontier AI
‱ Responsible AI
‱ Explainable AI
Source: Towards Data Science
Evolving AI Categories
‱ GenAI: creates new data or content resembling human-created data.
‱ Predictive AI: predicts future outcomes based on patterns and data analysis.
‱ Robust AI: withstands and adapts to challenges (adversarial attacks, changes in the
environment) while maintaining performance.
‱ Privacy-preserving AI: protects individuals' data with techniques such as encryption or
anonymization to prevent unauthorized access.
‱ Trustable AI: AI that behaves ethically, reliably, and transparently, ensuring fairness,
accountability, and reliability in its decisions and actions.
‱ Frontier AI: cutting-edge technologies, techniques, and applications that push the
boundaries of the field.
‱ Responsible AI: AI developed, deployed, and used responsibly, considering its potential
societal impact, ethical implications, and risks.
‱ Explainable AI: AI that provides human-understandable explanations or justifications for
its decisions and actions, enhancing transparency and trustworthiness
Source: Towards Data Science
Evolving AI Categories
Types of Prevalent Cyberattacks on Evolving AI Models
The threat landscape is HUGE, and it keeps evolving!
Standards for Development of AI Systems
ISO and IEC initiated a Joint Technical Committee (JTC) for Information
Technology, known as ISO/IEC JTC 1, which covers several domains
concerning smart ICT and information technology, including privacy,
data protection, and security of ICT technologies.
Threat           | Security Goal (CIA)     | Security Threat                                                 | Developed / Under-development Standards
-----------------+-------------------------+-----------------------------------------------------------------+----------------------------------------
Data Breach      | Confidentiality         | Re-identification; risk of inference                            | ISO/IEC CD 20547-4; ISO/IEC PD TR 24028
Bias in Data     | Integrity, Availability | Gender classification; face recognition; criminal legal system  | ISO/IEC NP TR 24027; ISO/IEC PD TR 24028
Data Poisoning   | Availability, Integrity | Self-driving cars; sentiment analysis; social media chatbots    | ISO/IEC PD TR 24028
Model Extraction | Confidentiality         | Image recognition; location data                                | ISO/IEC PD TR 24028
Evasion          | Integrity               | Image classification; spam emails; self-driving cars            | ISO/IEC PD TR 24028
Way Forward
Development of Trustworthy AI
References
1. Dilmaghani, Saharnaz, Matthias R. Brust, Grégoire Danoy, Natalia Cassagnes, Johnatan Pecero, and Pascal Bouvry. "Privacy and
security of big data in AI systems: A research and standards perspective." In 2019 IEEE International Conference on Big Data (Big
Data), pp. 5737-5743. IEEE, 2019.
2. K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song, “Robust physical-world attacks
on deep learning visual classification,” in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition.
3. “ISO/IEC PD TR 24028: Information technology – Artificial intelligence (AI) – Overview of trustworthiness in artificial intelligence,”
International Organization for Standardization, Geneva, CH, Standard.
4. T. Gu, K. Liu, B. Dolan-Gavitt, and S. Garg, “Badnets: Evaluating backdooring attacks on deep neural networks,” IEEE Access,
2019.
5. L. Sweeney, “Simple demographics often identify people uniquely,” Health (San Francisco), vol. 671, 2000.
6. S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, “Deepfool: a simple and accurate method to fool deep neural networks,” in
Proc. of the IEEE Conf. on computer vision and pattern recognition, 2016.
7. B. Biggio and F. Roli, “Wild patterns: Ten years after the rise of adversarial machine learning,” Pattern Recognition, vol. 84,
2018.
8. M. Al-Rubaie and J. M. Chang, “Privacy-preserving machine learning: Threats and solutions,” IEEE Security & Privacy, vol. 17,
2019.
References
9. “ISO/IEC CD 20547-4: Information technology – Big data reference architecture – Part 4: Security and privacy,”
International Organization for Standardization, Geneva, CH, Standard.
10. M. Wall. (2019, Jul.) Biased and wrong? Facial recognition tech in the dock. BBC. [Online]. Available:
https://www.bbc.com/news/business-48842750
11. S. X. Zhang, R. E. Roberts, and D. Farabee, “An analysis of prisoner reentry and parole risk using COMPAS and
traditional criminal history measures,” Crime & Delinquency, vol. 60, 2014.
12. “ISO/IEC NP TR 24027: Information technology – Artificial intelligence (AI) – Bias in AI systems and AI aided
decision making,” International Organization for Standardization, Geneva, CH, Standard.
13. A. Newell, R. Potharaju, L. Xiang, and C. Nita-Rotaru, “On the practicality of integrity attacks on document-level
sentiment analysis,” in Proc. of the Artificial Intelligence and Security Workshop, 2014.
14. J. Vincent. (2016) Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day. The Verge. [Online].
Available: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
15. A. Pyrgelis, C. Troncoso, and E. D. Cristofaro, “Knock knock, who’s there? Membership inference on aggregate
location data,” CoRR, 2017.
“Establishment of High-Quality Node”
Artificial Intelligence for Cyber Security &
Cyber Security for Artificial Intelligence
(HiQN-AICS)
Collaborators
Society for Electronic Transactions and Security (SETS), Chennai
Centre for Development of Advanced Computing (C-DAC), Bengaluru
Indian Institute of Technology Madras (IITM), Chennai
Indian Institute of Technology Jodhpur (IITJ), Jodhpur
Indian Institute of Technology Delhi (IITD), Delhi
A brainstorming session on Artificial Intelligence & Cyber Security (AI & CS) was held at SETS, Chennai on
17th November 2018 under the chairmanship of Prof. K. VijayRaghavan, Former PSA to GOI & Former
President, SETS. It resulted in the formation of a Task Force committee chaired by Prof. B. Ravindran,
Department of CSE and Head, Robert Bosch Centre for Data Science and Artificial Intelligence, IIT Madras,
with members from academic institutes, R&D labs, and industry.
Key Recommendations of the Task Force
Formation of a High Quality Node
Initiating an R&D programme under specific
schemes
Skilling and training
Research collaborations
Creation of a national-level test bed
Collection of a repository of datasets
Arriving at AI-CS and CS-AI standards
Launching of the CyberSec4AI-India portal
Background
Domain Expertise of SETS Chennai in AI4CS and CS4AI
Building core competencies and state-of-the-art infrastructure
‱ Deploy nodes with supercomputing capabilities
‱ Build advanced hardware infrastructure with high-end computing servers
‱ Utilize the existing hardware infrastructure of high-end servers
Define delivery models for the identified verticals
Work on the development and use of tools
‱ Misinformation detection and network analysis tool
‱ DARFA Lab Kit, a framework for studying adversarial attacks on AI models
‱ Toolkit for differentially private ML, with an interactive testbed for ransomware forensics
‱ Physical attack analysis on AI models, and AI models to improve side-channel analysis
‱ Proliferation of AI models to study and understand future hardware requirements for AI applications
‱ Formation of a start-up ecosystem for delivering end-user products
Formation of High Quality Node
Reach me @ reshmi@setsindia.net
Thank You..