7:00 PM โ€“ 11:00 PM (IST)
01โ€“02 November 2025
Masterclass
LLM Security &

Red T
eaming
2 Days Hands-on Sessions 15+ AI Tools Online
Why Attend This Masterclass?
Large Language Models (LLMs) are transforming industriesโ€”but with innovation
comes new vulnerabilities and attack surfaces. This intensive 2-day masterclass
blends foundational knowledge with practical red-teaming techniques, equipping
you to test, defend, and secure AI systems using real-world adversarial strategies.
Key Takeaways
8 CPE Credits Issued on
Completion
Guidance from Seasoned
Practitioners
Attack & Defense Playbooks Interactive Red Team Labs
Actionable Security
Techniques
15+ Cutting-Edge AI
Tools
Speakers Lineup
Avnish
7+ Years of Experience
AI Security Expert | Information Security | Cloud Security |
Data Security | Consultant & Trainer
Summary
Avnish is an experienced Information security & Cloud Security Consultant and
Trainer with over 7 years of expertise, specializing in cloud security, AI-assisted
threat detection and securing AI systems. He has delivered tailored training
globally across various sectors, equipping professionals with practical
cybersecurity, cloud security and AI security skills.
Ashish
10+ Years of Experience
AI Red Teaming Expert | Network+ | Security+ | Pentest+ |
CEH | CND | ECSA | CCNA | ECDE | CPENT | LPT | OSCP
Summary
Ashish is a cybersecurity and network security trainer who delivers 30+ programs
annually to professionals worldwide. He has expertise in vulnerability
assessments, penetration testing, OSINT, threat intelligence, digital forensics, and
AI-powered security training, equipping learners with skills in machine learningโ€“
based threat detection and anomaly analysis. Known for strong exam success
rates and customized content, he has trained government and corporate clients in
CCNA, CEH v11, Pentest+, Linux+, Microsoft Windows Server 2016, and more,
while guiding professionals in network administration, troubleshooting, traffic
analysis, and defense against evolving threats.
Masterclass Agenda
Day 1: Introduction to AI and LLM Security by Avnish (7 PM โ€“ 11 PM)
Demystifying the core concepts and components of an AI system

Types of AI Systems: Machine Learning, Deep Learning, Generative AI,
Agentic AI

Building and deploying AI โ€“ Model Development Lifecycle

Understanding LLMs: Transformer Architecture, Pre-training and Fine Tuning

LLM Applications: Chatbots, Code Generation, Cybersecurity Use Cases

AI and GenAI Frameworks: Scikit-learn, Tensorflow, AutoML, Hugging Face,
LangChain, Llamaindex, OpenAI API, Ollama, LMStudio

Security Considerations while Developing and Deploying AI Systems
Day 2: AI and LLM Red Teaming by Ashish (7 PM โ€“ 11 PM)
Introduction to AI Red Teaming โ€“ What is it and why it is needed?

Attack Families for AI Red Teaming: Poisoning, Injection, Evasion, Extraction,
Availability, Supply Chain

LLM01: Prompt Injection โ€“ Direct and Indirect

LLM02: Sensitive Information Disclosure โ€“ Data exfiltration

LLM03: Supply Chain โ€“ Malicious Packages and Models

LLM04: Data and Model Poisoning โ€“ Poisoning datasets and models during
training and fine-tuning

LLM05: Improper Output Handling โ€“ Injection via model outputs

LLM06: Excessive Agency โ€“ Agents with dangerous privileges

LLM07: System Prompt Leakage โ€“ Exposing hidden system instructions
through crafted queries

LLM08: Vector and Embedding Weaknesses

LLM09: Misinformation โ€“ Detecting Hallucinations

LLM10: Unbounded Consumption โ€“ Resource abuse and DOS Attacks

Tools and Frameworks for LLM Red Teaming: Cleverhans, Foolbox,
Adversarial Robustness Toolbox
Contact us
sales@infosectrain.com
www.infosectrain.com

LLM Security & Red Teaming Masterclass.pdf

  • 1.
    7:00 PM โ€“11:00 PM (IST) 01โ€“02 November 2025 Masterclass LLM Security & Red T eaming 2 Days Hands-on Sessions 15+ AI Tools Online
  • 2.
    Why Attend ThisMasterclass? Large Language Models (LLMs) are transforming industriesโ€”but with innovation comes new vulnerabilities and attack surfaces. This intensive 2-day masterclass blends foundational knowledge with practical red-teaming techniques, equipping you to test, defend, and secure AI systems using real-world adversarial strategies. Key Takeaways 8 CPE Credits Issued on Completion Guidance from Seasoned Practitioners Attack & Defense Playbooks Interactive Red Team Labs Actionable Security Techniques 15+ Cutting-Edge AI Tools
  • 3.
    Speakers Lineup Avnish 7+ Yearsof Experience AI Security Expert | Information Security | Cloud Security | Data Security | Consultant & Trainer Summary Avnish is an experienced Information security & Cloud Security Consultant and Trainer with over 7 years of expertise, specializing in cloud security, AI-assisted threat detection and securing AI systems. He has delivered tailored training globally across various sectors, equipping professionals with practical cybersecurity, cloud security and AI security skills. Ashish 10+ Years of Experience AI Red Teaming Expert | Network+ | Security+ | Pentest+ | CEH | CND | ECSA | CCNA | ECDE | CPENT | LPT | OSCP Summary Ashish is a cybersecurity and network security trainer who delivers 30+ programs annually to professionals worldwide. He has expertise in vulnerability assessments, penetration testing, OSINT, threat intelligence, digital forensics, and AI-powered security training, equipping learners with skills in machine learningโ€“ based threat detection and anomaly analysis. Known for strong exam success rates and customized content, he has trained government and corporate clients in CCNA, CEH v11, Pentest+, Linux+, Microsoft Windows Server 2016, and more, while guiding professionals in network administration, troubleshooting, traffic analysis, and defense against evolving threats.
  • 4.
    Masterclass Agenda Day 1:Introduction to AI and LLM Security by Avnish (7 PM โ€“ 11 PM) Demystifying the core concepts and components of an AI system Types of AI Systems: Machine Learning, Deep Learning, Generative AI, Agentic AI Building and deploying AI โ€“ Model Development Lifecycle Understanding LLMs: Transformer Architecture, Pre-training and Fine Tuning LLM Applications: Chatbots, Code Generation, Cybersecurity Use Cases AI and GenAI Frameworks: Scikit-learn, Tensorflow, AutoML, Hugging Face, LangChain, Llamaindex, OpenAI API, Ollama, LMStudio Security Considerations while Developing and Deploying AI Systems
  • 5.
    Day 2: AIand LLM Red Teaming by Ashish (7 PM โ€“ 11 PM) Introduction to AI Red Teaming โ€“ What is it and why it is needed? Attack Families for AI Red Teaming: Poisoning, Injection, Evasion, Extraction, Availability, Supply Chain LLM01: Prompt Injection โ€“ Direct and Indirect LLM02: Sensitive Information Disclosure โ€“ Data exfiltration LLM03: Supply Chain โ€“ Malicious Packages and Models LLM04: Data and Model Poisoning โ€“ Poisoning datasets and models during training and fine-tuning LLM05: Improper Output Handling โ€“ Injection via model outputs LLM06: Excessive Agency โ€“ Agents with dangerous privileges LLM07: System Prompt Leakage โ€“ Exposing hidden system instructions through crafted queries LLM08: Vector and Embedding Weaknesses LLM09: Misinformation โ€“ Detecting Hallucinations LLM10: Unbounded Consumption โ€“ Resource abuse and DOS Attacks Tools and Frameworks for LLM Red Teaming: Cleverhans, Foolbox, Adversarial Robustness Toolbox
  • 6.