Transform responsible AI from theory into practice

Promoting the safe and responsible development of AI as a force for good

Building AI responsibly at AWS

The rapid growth of generative AI brings promising new innovation, and at the same time raises new challenges. At AWS, we are committed to developing AI responsibly, taking a people-centric approach that prioritizes education, science, and our customers to integrate responsible AI across the end-to-end AI lifecycle.

Core dimensions of responsible AI

  • Fairness: Considering impacts on different groups of stakeholders
  • Explainability: Understanding and evaluating system outputs
  • Privacy and security: Appropriately obtaining, using, and protecting data and models
  • Safety: Preventing harmful system output and misuse
  • Controllability: Having mechanisms to monitor and steer AI system behavior
  • Veracity and robustness: Achieving correct system outputs, even with unexpected or adversarial inputs
  • Governance: Incorporating best practices into the AI supply chain, including providers and deployers
  • Transparency: Enabling stakeholders to make informed choices about their engagement with an AI system

Services and tools

AWS offers services and tools to help you design, build, and operate AI systems responsibly.

Implementing safeguards in generative AI

Amazon Bedrock Guardrails helps you implement safeguards tailored to your generative AI applications and aligned with your responsible AI policies. Guardrails provides additional customizable safeguards on top of the native protections of FMs, delivering safety protections that are among the best in the industry by:

  • Blocking as much as 85% more harmful content
  • Filtering over 75% of hallucinated responses for RAG and summarization workloads
  • Enabling customers to customize and apply safety, privacy, and truthfulness protections within a single solution
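
As an illustration, the sketch below attaches an existing guardrail to a model invocation through the Bedrock Converse API using boto3. The region, model ID, and guardrail identifier and version are placeholders to substitute with your own values, and the response handling is simplified rather than a complete integration.

    import boto3

    # Bedrock runtime client; assumes AWS credentials and a region are configured.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")  # placeholder region

    response = client.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
        messages=[{"role": "user", "content": [{"text": "Summarize our refund policy."}]}],
        guardrailConfig={
            "guardrailIdentifier": "my-guardrail-id",  # placeholder guardrail ID
            "guardrailVersion": "1",                   # placeholder guardrail version
        },
    )

    # When the guardrail intervenes, the returned text is the blocked-content
    # message configured on the guardrail instead of the model's raw output.
    print(response["output"]["message"]["content"][0]["text"])

Guardrails can also screen text that does not come from a model call at all, through the ApplyGuardrail API.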

Foundation model (FM) evaluations

Model Evaluation on Amazon Bedrock helps you evaluate, compare, and select the best FMs for your specific use case based on custom metrics, such as accuracy, robustness, and toxicity. You can also use Amazon SageMaker Clarify and fmeval for model evaluation.
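
For programmatic checks, the sketch below uses the open-source fmeval library mentioned above to score a single response for factual knowledge. The class names, config argument, and method signature follow my reading of the library's published examples and may differ between versions, so treat them as assumptions to verify against the fmeval documentation.

    # Score one prompt/response pair for factual knowledge with fmeval.
    from fmeval.eval_algorithms.factual_knowledge import (
        FactualKnowledge,
        FactualKnowledgeConfig,
    )

    # "<OR>" separates alternative acceptable answers in the target output.
    eval_algo = FactualKnowledge(FactualKnowledgeConfig("<OR>"))

    # evaluate_sample scores a single record; full-dataset runs pair evaluate()
    # with a data config and a model runner instead.
    score = eval_algo.evaluate_sample(
        target_output="London",
        model_output="The capital of England is London.",
    )
    print(score)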

Detecting bias and explaining predictions

Biases are imbalances in data or disparities in the performance of a model across different groups. Amazon SageMaker Clarify helps you mitigate bias by detecting it during data preparation, after model training, and in your deployed model, examining the specific attributes you choose.

Understanding a model’s behavior is important to develop more accurate models and make better decisions. Amazon SageMaker Clarify provides greater visibility into model behavior, so you can provide transparency to stakeholders, inform humans making decisions, and track whether a model is performing as intended.
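
As a sketch of how a pre-training bias check might be wired up with the SageMaker Python SDK, the example below runs Clarify over a CSV dataset and reports imbalance metrics for a chosen facet. The IAM role, bucket paths, and column names are hypothetical placeholders.

    import sagemaker
    from sagemaker import clarify

    session = sagemaker.Session()
    role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role

    # Processing job that runs the Clarify bias analysis.
    clarify_processor = clarify.SageMakerClarifyProcessor(
        role=role,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        sagemaker_session=session,
    )

    # Where the training data lives and which column holds the label.
    data_config = clarify.DataConfig(
        s3_data_input_path="s3://my-bucket/train.csv",    # hypothetical path
        s3_output_path="s3://my-bucket/clarify-output",   # hypothetical path
        label="approved",
        headers=["age", "income", "gender", "approved"],  # hypothetical columns
        dataset_type="text/csv",
    )

    # The sensitive attribute (facet) to examine for imbalance.
    bias_config = clarify.BiasConfig(
        label_values_or_threshold=[1],
        facet_name="gender",
    )

    # Compute pre-training bias metrics (e.g., class imbalance) on the data alone;
    # post-training checks additionally take a ModelConfig for the trained model.
    clarify_processor.run_pre_training_bias(
        data_config=data_config,
        data_bias_config=bias_config,
    )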

Explore Amazon SageMaker Clarify

Monitoring and human review

Monitoring is important to maintain high-quality machine learning (ML) models and help ensure accurate predictions. Amazon SageMaker Model Monitor automatically detects and alerts you to inaccurate predictions from deployed models. And with Amazon SageMaker Ground Truth you can apply human feedback across the ML lifecycle to improve the accuracy and relevancy of models.
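
As a sketch of what that monitoring setup can look like with the SageMaker Python SDK, the example below derives baseline statistics from training data and schedules hourly checks against a deployed endpoint. The IAM role, S3 paths, and endpoint name are hypothetical placeholders, and the endpoint is assumed to have data capture enabled.

    from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
    from sagemaker.model_monitor.dataset_format import DatasetFormat

    role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role

    monitor = DefaultModelMonitor(
        role=role,
        instance_count=1,
        instance_type="ml.m5.xlarge",
    )

    # Profile the training data to produce baseline statistics and constraints.
    monitor.suggest_baseline(
        baseline_dataset="s3://my-bucket/train.csv",    # hypothetical path
        dataset_format=DatasetFormat.csv(header=True),
        output_s3_uri="s3://my-bucket/baseline",        # hypothetical path
    )

    # Compare captured endpoint traffic against the baseline every hour and
    # surface violations when the live data drifts.
    monitor.create_monitoring_schedule(
        endpoint_input="my-endpoint",                   # hypothetical endpoint name
        output_s3_uri="s3://my-bucket/monitor-reports", # hypothetical path
        statistics=monitor.baseline_statistics(),
        constraints=monitor.suggested_constraints(),
        schedule_cron_expression=CronExpressionGenerator.hourly(),
    )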

Improving governance

ML Governance from Amazon SageMaker provides purpose-built tools for improving governance of your ML projects by giving you tighter control and visibility over your ML models. You can easily capture and share model information and stay informed on model behavior, like bias, all in one place.
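
One of those tools is Amazon SageMaker Model Cards; a minimal sketch of registering a card through boto3 is shown below. The card name and content fields are hypothetical, and the full model card JSON schema supports more detail (intended uses, training details, evaluation results), so treat the structure shown as an assumption to check against the service documentation.

    import json
    import boto3

    sm = boto3.client("sagemaker")

    # Minimal card content; field names follow my understanding of the model
    # card JSON schema and should be verified against the documentation.
    content = {
        "model_overview": {
            "model_description": "Credit-risk classifier used for loan pre-screening.",
        },
        "intended_uses": {
            "purpose_of_model": "Rank applications for human review.",
        },
    }

    sm.create_model_card(
        ModelCardName="credit-risk-classifier-card",  # hypothetical name
        Content=json.dumps(content),
        ModelCardStatus="Draft",
    )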

AWS AI Service Cards

AI Service Cards are a resource to enhance transparency by providing you with a single place to find information on the intended use cases and limitations, responsible AI design choices, and performance optimization best practices for our AI services and models.

Explore available service cards

Community contribution and collaboration

Through deep engagement with multi-stakeholder organizations such as the OECD AI working groups, the Partnership on AI, the Responsible AI Institute, and the National AI Advisory Committee, as well as strategic partnerships with universities around the world, we are committed to working alongside others to develop AI and ML technology responsibly and build trust.

We take a people-centric approach to educating the next generation of AI leaders with programs like the AI & ML Scholarship program and We Power Tech, increasing access to hands-on learning, scholarships, and mentorship for people who are underserved or underrepresented in tech.

Our investment in safe, transparent, and responsible generative AI includes collaboration with the global community and policymakers, including the White House Voluntary AI Commitments, the AI Safety Summit in the UK, and support for ISO 42001, a new foundational standard to advance responsible AI. We support the development of effective risk-based regulatory frameworks for AI that protect civil rights while allowing for continued innovation.

Responsible AI is an active area of research and development at Amazon. We have strategic partnerships with academic institutions such as the California Institute of Technology, and we work with Amazon Scholars, leading experts who apply their academic research to help shape responsible AI workstreams at Amazon.

We innovate alongside our customers, staying at the forefront of new trends and research to deliver value, with ongoing research grants via the Amazon Research Awards and scientific publications with Amazon Science. Learn more about the science of building generative AI responsibly in this Amazon Science blog post, which unpacks the top emerging challenges and solutions.