Secure, Deliver and Optimize GenAI Apps with F5 Source | Edit on
Class 6: LLM Red Teaming¶
Note
AI governance itself is a huge topic. Hence, we will only focus on the technical controls such as LLM Red Teaming.
Note
No hands on lab for this class partly due to extensive resources required for scanning an LLM. Demo and discussion will be conducted in class.
What is AI Governance?¶
AI governance refers to the frameworks, policies, and practices established to guide the ethical development, deployment, and use of artificial intelligence systems. Its goal is to ensure that AI technologies are aligned with societal values, legal requirements, and organizational objectives while mitigating risks and addressing challenges related to bias, transparency, respect human rights, accountability, and security.
AI Risk/Safety vs AI Security¶
AI Safety/Risk
Focus on preventing unintended harm caused by the AI services, ensuring it operates ethically and aligns with human values, considering broder societal impacts beyond just technical vulnerabiliites. - Protecting human using AI. (responsible use of AI and adherence to ethical standards). There are standard like NIST AI 600-1 that covered the Risk Management Framework for the AI Safety.
AI Security
Focus to secure AI services from external threats like cyberattack and data breach, safeguarding the confidentiality and integrity of the AI services. General framework that address this is OWASP Top 10 for LLM, MITRE ATLAS ( Adversarial Threat Landscape for Artificial Intelligence System) - Protecting AI system use by human.
What is LLM Red Teaming?¶
Red teaming for Large Language Models (LLMs) is the practice of systematically testing these AI systems to identify potential vulnerabilities, risks, and failure modes. It’s similar to cybersecurity red teaming - defensive (blue) vs offensive(red), but focused specifically on finding ways LLMs might breaks or compromised.
F5 LLM Red Teaming Solution¶
Here are some of the example reports shown on F5XC Web App Scanning on LLM Discovery.