“LLM Security: Know, Test, Mitigate” explores the emerging landscape of AI security risks tied to LLMs. It highlights the growing importance of prompt injection, resource exhaustion and information leakage as critical threats that can lead to fraud, data breaches and operational failures. OWASP Top 10 for LLMs categorizes key vulnerabilities and explains how attackers exploit instruction-data confusion in AI systems. The presentation also introduces practical testing and mitigation tools such as NVIDIA Garak, Azure OpenAI Content Filters, Guardrails and LangKit. Backed by real-world incidents and alarming statistics—over 56% of tested models were successfully compromised—it underscores the urgency for organizations to adopt continuous evaluation, red teaming, and defense-in-depth strategies to safeguard LLM-driven applications. The presentation is an up-to-date content of previous work, co-authored with Hristo Dabovski, Cyber Security Engineer at Kongsberg Maritime (Norway)