Model Armor: Protecting Generative AI from Threats and Misuse

Author: Google Cloud Tech
Published At: 2025-05-29T00:00:00
Length: 07:28

Description

Protect your Generative AI applications from threats like prompt injection and data leaks with Model Armor, a new security guard for any LLM. This video dives into how Model Armor uses centralized policies and prompt/response filtering to address several of the OWASP Top 10 risks for LLM applications. We'll explore key features and benefits, then see a live demo of Model Armor in action against unsafe prompts and jailbreak attempts, malicious URLs, and attempts to leak sensitive data, in both user inputs and model outputs.
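To make the idea of prompt/response filtering concrete, here is a minimal, self-contained sketch of the kind of policy screening the video demonstrates. This is not the Model Armor API; every name and pattern below is hypothetical, and a real deployment would rely on Model Armor's managed detectors (for jailbreaks, URL reputation, and sensitive-data inspection) rather than hand-rolled regexes.

```python
import re

# Hypothetical policy patterns standing in for Model Armor's managed
# detectors. In the real product these checks are configured centrally
# as template filters, not written by hand.
JAILBREAK_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"pretend you have no restrictions",
]
URL_PATTERN = r"https?://\S+"          # a real filter would also check URL reputation
SSN_PATTERN = r"\b\d{3}-\d{2}-\d{4}\b"  # crude US-SSN shape, stand-in for DLP inspection


def screen(text: str) -> dict:
    """Screen a user prompt or model response against the policy.

    Returns which filters matched; the same check is applied to both
    inputs and outputs, mirroring prompt/response filtering.
    """
    findings = []
    if any(re.search(p, text, re.IGNORECASE) for p in JAILBREAK_PATTERNS):
        findings.append("jailbreak")
    if re.search(URL_PATTERN, text):
        findings.append("url")
    if re.search(SSN_PATTERN, text):
        findings.append("sensitive_data")
    return {"blocked": bool(findings), "findings": findings}
```

The key design point the sketch illustrates is symmetry: because the same policy screens both the user's prompt and the model's response, a leak is caught even if the unsafe content only appears in the output.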

Resources:

Read the Model Armor documentation → https://goo.gle/43fWaK6

Subscribe to Google Cloud Tech → https://goo.gle/GoogleCloudTech

Speakers: Aron Eidelman

Products Mentioned: AI Infrastructure

Translated At: 2025-05-31T03:50:11Z
