Introducing TwelveLabs
TwelveLabs uses multimodal foundation models (FMs) to bring humanlike understanding to video data. The company's FMs understand what is happening in videos, including actions, objects, and background sounds, so developers can build applications that search through videos, classify scenes, summarize footage, and extract insights with precision and reliability.
TwelveLabs in Amazon Bedrock overview
With Marengo and Pegasus in Amazon Bedrock, you can use TwelveLabs' models to build and scale generative AI applications without having to manage underlying infrastructure. You also get access to a broad set of capabilities while maintaining complete control over your data, with the enterprise-grade security and cost-control features that are essential for deploying AI responsibly at scale.
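As a starting point, the minimal sketch below uses boto3 to confirm which TwelveLabs models are available to your account in a given Region. The Region and the byProvider filter value are assumptions; adjust them for your setup.

```python
"""List TwelveLabs foundation models available in Amazon Bedrock.

A minimal sketch assuming AWS credentials are configured and the
models are offered in the chosen Region.
"""
import boto3

# The "bedrock" client exposes control-plane operations such as
# list_foundation_models; "bedrock-runtime" handles inference calls.
bedrock = boto3.client("bedrock", region_name="us-east-1")  # assumed Region

response = bedrock.list_foundation_models(byProvider="TwelveLabs")  # assumed provider name
for model in response["modelSummaries"]:
    print(model["modelId"], "-", model["modelName"])
```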
Model versions
Marengo 2.7
Video embedding model proficient at tasks such as search and classification, enabling enhanced video understanding.
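For illustration, the sketch below requests a video embedding from Marengo through Bedrock's asynchronous invocation API, which writes results to Amazon S3 rather than returning them inline. The model ID, input field names, and S3 URIs are assumptions; consult the Bedrock model documentation for the authoritative request schema.

```python
"""Request a video embedding from Marengo via asynchronous invocation.

Model ID, payload fields, and S3 paths below are illustrative
assumptions, not confirmed values.
"""
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = runtime.start_async_invoke(
    modelId="twelvelabs.marengo-embed-2-7-v1:0",  # assumed model ID
    modelInput={
        "inputType": "video",  # assumed field names
        "mediaSource": {
            "s3Location": {"uri": "s3://my-bucket/videos/demo.mp4"}
        },
    },
    outputDataConfig={
        # Asynchronous jobs deliver their output to S3.
        "s3OutputDataConfig": {"s3Uri": "s3://my-bucket/embeddings/"}
    },
)
print("Started async job:", response["invocationArn"])
```

Once the job completes (for example, by polling get_async_invoke with the returned invocationArn), the embedding can be read from the S3 output location and stored in a vector index for search or classification.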
Pegasus 1.2
Video language model that can generate text based on your video data.
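As a sketch of text generation, the snippet below asks Pegasus a question about a video stored in S3 using Bedrock's synchronous invoke_model call. The model ID and request body fields are assumptions for illustration; check the Bedrock documentation for the exact schema.

```python
"""Generate text about a video with Pegasus via invoke_model.

The model ID and request fields are illustrative assumptions.
"""
import json

import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "inputPrompt": "Summarize the key events in this video.",  # assumed field
    "mediaSource": {  # assumed field
        "s3Location": {"uri": "s3://my-bucket/videos/demo.mp4"}
    },
}

response = runtime.invoke_model(
    modelId="us.twelvelabs.pegasus-1-2-v1:0",  # assumed inference profile ID
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
print(result)
```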