TwelveLabs in Amazon Bedrock (coming soon)

Unlock the full potential of enterprise video assets

Introducing TwelveLabs

TwelveLabs uses multimodal foundation models (FMs) to bring humanlike understanding to video data. The company's FMs understand what is happening in videos, including actions, objects, and background sounds, allowing developers to create applications that can search through videos, classify scenes, summarize content, and extract insights with precision and reliability.

State-of-the-art video understanding models

Marengo 2.7

Get fast, context-aware results that reveal exactly what you’re looking for in your videos—moving beyond basic tags into a whole new dimension of multimodal understanding.

Pegasus 1.2

Transform your videos through the power of language. Generate everything you need from video—from concise summaries and captivating highlights to effective hashtags and customized reports. Uncover deeper insights and unlock entirely new possibilities with your content.

Benefits

Pinpoint precise moments in your videos using simple language queries, removing the need for tedious scrubbing and manual review.

Extract deep insights from your content without predefining labels or categories, allowing for discovery of unexpected patterns and information.

Convert extensive video libraries into accessible, searchable knowledge resources with technology built for enterprise scale and security.

Simultaneously process visual elements, audio content, and text for comprehensive understanding that captures every dimension of your videos.

Connect related events across time, identifying patterns and relationships that can be challenging and time-consuming to manually track.

Use Cases

Transform production workflows by instantly finding, summarizing, and connecting video moments, helping storytellers focus on creativity instead of searching through footage.

Accelerate content analysis and workflow efficiency through AI-powered video understanding, helping brands more effectively connect with their audience while reducing production time and costs.

Enhance situational awareness by rapidly identifying critical events and patterns across video sources, allowing for proactive security monitoring and faster, more informed decision-making.

From detecting driver hazards to predicting pedestrian behavior, TwelveLabs AI analyzes video with humanlike comprehension, transforming transportation safety and efficiency.

TwelveLabs in Amazon Bedrock overview

With Marengo and Pegasus in Amazon Bedrock, you can use TwelveLabs models to build and scale generative AI applications without having to manage the underlying infrastructure. You also get access to a broad set of capabilities while maintaining complete control over your data, along with the enterprise-grade security and cost-control features that are essential for deploying AI responsibly at scale.
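In practice, that means calling the models through the standard Bedrock Runtime API rather than hosting anything yourself. Below is a minimal sketch in Python using boto3; the model ID, the request fields (inputPrompt, mediaSource), and the S3 URI are illustrative assumptions, since the final identifiers and request schema will be published once the models are available in Bedrock.

```python
# Minimal sketch of calling a TwelveLabs model through the Amazon Bedrock
# Runtime API. Model ID and request body below are placeholders, not the
# confirmed schema; check the Bedrock model catalog when the models launch.
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.invoke_model(
    modelId="twelvelabs.pegasus-1-2-v1:0",  # placeholder model ID
    body=json.dumps({
        "inputPrompt": "Summarize the key events in this video.",  # assumed field
        "mediaSource": {  # assumed field; a video stored in Amazon S3
            "s3Location": {"uri": "s3://my-bucket/videos/keynote.mp4"}
        },
    }),
)

# The response body is a stream containing the model's JSON output.
result = json.loads(response["body"].read())
print(result)
```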

Model versions

Marengo 2.7

Video embedding model for tasks such as search and classification, enabling enhanced video understanding.

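Because Marengo represents video clips as vectors, search and classification reduce to similarity comparisons in embedding space. A minimal sketch, assuming you have already obtained one embedding per clip and one for the text query (the random 1024-dimensional vectors below are stand-ins for real model output):

```python
# Sketch of text-to-video search over Marengo-style embeddings. The clip and
# query vectors here are random placeholders; in practice they would come
# from the embedding model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 1024-dimensional clip embeddings keyed by clip ID.
rng = np.random.default_rng(0)
clip_embeddings = {f"clip-{i}": rng.normal(size=1024) for i in range(5)}
query_embedding = rng.normal(size=1024)  # embedding of the search phrase

# Rank clips by similarity to the query; the top result is the best match.
ranked = sorted(
    clip_embeddings.items(),
    key=lambda item: cosine_similarity(query_embedding, item[1]),
    reverse=True,
)
for clip_id, vec in ranked:
    print(clip_id, round(cosine_similarity(query_embedding, vec), 3))
```

Classification works the same way: embed each label description once, then assign every clip to the label whose embedding it is closest to.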


Pegasus 1.2

Video language model that can generate text based on your video data.

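The same invocation pattern sketched earlier covers the range of outputs described here; only the prompt changes. The model ID and request fields remain the same illustrative assumptions as above.

```python
# Reusing the hypothetical request schema from the earlier sketch: one call
# pattern yields summaries, highlights, or hashtags depending on the prompt.
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

prompts = {
    "summary": "Summarize this video in three sentences.",
    "highlights": "List the five most engaging moments with timestamps.",
    "hashtags": "Suggest hashtags for promoting this video on social media.",
}

for task, prompt in prompts.items():
    response = bedrock_runtime.invoke_model(
        modelId="twelvelabs.pegasus-1-2-v1:0",  # placeholder model ID
        body=json.dumps({
            "inputPrompt": prompt,  # assumed field name
            "mediaSource": {  # assumed field; same video for every task
                "s3Location": {"uri": "s3://my-bucket/videos/keynote.mp4"}
            },
        }),
    )
    print(task, json.loads(response["body"].read()))
```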