Securing generative AI

Accelerate AI innovation with built-in security and governance

Overview

Build, run, and scale your generative AI workloads with confidence on a cloud foundation uniquely designed for security. Leverage integrated AWS security, compliance, and governance tools and capabilities across your AI stack to help secure your generative AI applications without having to reinvent your security strategy.

Built-in security

Each layer of the AI stack - infrastructure, models, and applications - presents unique risk considerations and requires tailored security measures. AWS helps you navigate this complexity with infrastructure and services that are built with security at their foundation.
A secure infrastructure begins with a strong foundation that helps ensure data protection at every layer of the AI stack. Customers retain full control over their data, with built-in security measures that enable isolation from both the infrastructure operator and other workloads. This approach helps ensure data confidentiality, integrity, and availability, minimizing the risk of unauthorized access. With this foundation, even the most sensitive industries can innovate confidently while helping to meet their security and compliance goals.
Foundation models and large language models are central to generative AI applications and require robust security to prevent unauthorized access and other security events. Ensuring data integrity, confidentiality, and ownership is paramount. Implementing key security principles, such as encryption, zero-trust architecture, and stringent access controls, enables organizations to safeguard these models. Continuous monitoring, detection, and governance further help maintain the security and compliance of AI models throughout their lifecycle.
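To make these principles concrete, here is a minimal sketch using the AWS SDK for Python (boto3); the key alias, role name, and model ARN are hypothetical placeholders, not values from this page. It encrypts model-bound data under a customer-managed KMS key and attaches a least-privilege policy that allows invoking only one approved model.

```python
# Illustrative sketch only: encrypting data with a customer-managed KMS key
# and defining a least-privilege policy for model invocation.
# The key alias, role name, and model ARN below are hypothetical placeholders.
import json
import boto3

kms = boto3.client("kms")

# Encrypt sensitive prompt or training data under a customer-managed key
# before it leaves the application boundary.
ciphertext = kms.encrypt(
    KeyId="alias/genai-data-key",  # hypothetical key alias
    Plaintext=b"customer record to be summarized",
)["CiphertextBlob"]

# Least-privilege policy: the application role may invoke only one
# approved foundation model, and nothing else.
invoke_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "bedrock:InvokeModel",
        "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
    }],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="genai-app-role",  # hypothetical role
    PolicyName="InvokeApprovedModelOnly",
    PolicyDocument=json.dumps(invoke_only_policy),
)
```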
Generative AI applications interact with end users and often handle sensitive data, making comprehensive security essential across the entire application lifecycle. Strong encryption, access controls, and continuous monitoring protect data and models, helping ensure the integrity of inputs and outputs. By addressing threats and vulnerabilities at every stage, organizations can help secure their generative AI applications against evolving risks and maintain trust with users.
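As a minimal application-layer sketch (again assuming boto3; the model ID, size limit, and logger name are illustrative choices rather than prescribed values), the example below validates user input before it reaches the model, invokes the model, and writes an audit log entry to support monitoring.

```python
# Illustrative application-layer sketch: basic input validation, model
# invocation, and audit logging. The model ID and limits are example choices.
import json
import logging
import boto3

logger = logging.getLogger("genai-audit")
logging.basicConfig(level=logging.INFO)

bedrock_runtime = boto3.client("bedrock-runtime")

MAX_PROMPT_CHARS = 4000  # example application-level limit
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def answer(prompt: str, user_id: str) -> str:
    # Reject empty or oversized input before it reaches the model.
    if not prompt or len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt fails input validation")

    response = bedrock_runtime.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    output = json.loads(response["body"].read())["content"][0]["text"]

    # Record who called which model, for monitoring and later review.
    logger.info("user=%s model=%s prompt_chars=%d",
                user_id, MODEL_ID, len(prompt))
    return output
```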

Generative AI Security Scoping Matrix

To implement security measures effectively, it's essential to address the challenges at each layer of the AI technology stack - infrastructure, models, and applications. The Generative AI Security Scoping Matrix helps customers match their AI workloads with the right security, privacy, governance, and compliance controls, helping ensure protection of their data and assets.

Learn more about the Generative AI Security Scoping Matrix
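As a rough illustration, the snippet below summarizes the five scopes of the matrix as a simple lookup a team might use to triage a workload before choosing controls; the one-line descriptions are abbreviated paraphrases, not the matrix's full definitions.

```python
# Minimal sketch: the five scopes of the Generative AI Security Scoping
# Matrix as a simple lookup for triaging a workload before selecting
# controls. Descriptions are abbreviated summaries.
GENAI_SCOPES = {
    1: "Consumer app: using a public generative AI service as-is",
    2: "Enterprise app: using a third-party app with generative AI features",
    3: "Pre-trained models: building an app on an existing foundation model",
    4: "Fine-tuned models: customizing a foundation model with your data",
    5: "Self-trained models: training a model from scratch on your data",
}

def describe_scope(scope: int) -> str:
    # Return the short description for a given scope; raises KeyError
    # for anything outside 1-5.
    return GENAI_SCOPES[scope]

print(describe_scope(3))
```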
