Pegasus 1.2: High-Performance Video Language Model
Pegasus 1.2 sets a new standard for long-form video AI, combining top-tier accuracy with low latency. Designed for commercial use, it supports efficient video querying at scale.
The video understanding company TwelveLabs and Amazon Web Services (AWS) recently announced that Amazon Bedrock will soon offer TwelveLabs’ multimodal foundation models, Marengo and Pegasus.
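As a rough illustration of what that integration could look like from a developer's side, the sketch below invokes a Pegasus-style model through the standard Amazon Bedrock runtime client in boto3. The model identifier, request payload shape, and response field names are assumptions for illustration only; only the bedrock-runtime client and its invoke_model call are standard AWS APIs.

```python
"""Minimal sketch of calling a Pegasus-style model through Amazon Bedrock.

The model ID, payload schema, and response fields below are placeholders;
check the Bedrock documentation/console for the real values once the
TwelveLabs models are available.
"""
import json

import boto3

# Hypothetical model identifier (placeholder).
MODEL_ID = "twelvelabs.pegasus-1-2-v1:0"

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Assumed payload shape: a reference to an already-indexed video plus a prompt.
payload = {
    "video_id": "my-indexed-video-id",  # placeholder video reference
    "prompt": "Summarize the key events in this video.",
}

response = client.invoke_model(
    modelId=MODEL_ID,
    body=json.dumps(payload),
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
print(result)  # field names depend on the actual Pegasus response schema
```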
In response to growing commercial demand, TwelveLabs is launching Pegasus 1.2, a major advancement in industry-grade video language models that achieves state-of-the-art results in understanding long-form video.
Pegasus 1.2 delivers commercial value through a purpose-built system design that performs well in production-grade video processing pipelines.
Thanks to its video-focused model architecture and optimised inference system, Pegasus 1.2 maintains consistently low time-to-first-token latency for videos up to 15 minutes long and continues to respond quickly on longer content.
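For readers who want to verify this kind of claim themselves, the sketch below shows one generic way to measure time-to-first-token and total latency over any streaming response. The token stream here is simulated; it is not the TwelveLabs or Bedrock API.

```python
"""Generic time-to-first-token (TTFT) measurement over a streaming response.
The token stream is simulated; swap in a real streaming client to benchmark it."""
import time
from typing import Iterable, Iterator, Tuple


def measure_stream(tokens: Iterable[str]) -> Tuple[float, float, str]:
    """Return (ttft_seconds, total_seconds, full_text) for a token stream."""
    start = time.perf_counter()
    ttft = None
    pieces = []
    for tok in tokens:
        if ttft is None:
            ttft = time.perf_counter() - start  # latency until the first token arrives
        pieces.append(tok)
    total = time.perf_counter() - start
    return ttft or 0.0, total, "".join(pieces)


def fake_video_answer_stream() -> Iterator[str]:
    """Stand-in for a streaming model response (placeholder, not a real API)."""
    time.sleep(0.05)  # simulated time-to-first-token
    for word in ["The", " video", " shows", " a", " product", " demo", "."]:
        time.sleep(0.01)  # simulated per-token delay
        yield word


if __name__ == "__main__":
    ttft, total, text = measure_stream(fake_video_answer_stream())
    print(f"TTFT: {ttft * 1000:.0f} ms, total: {total * 1000:.0f} ms")
    print(text)
```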
Cost
For commercial video processing, Pegasus 1.2 offers best-in-class performance without the high cost.
Pegasus 1.2 creates rich video embeddings when videos are indexed and stores them in a database for subsequent API requests, enabling customers to build on the same content continuously at very low cost.
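The index-once, query-many pattern described above can be sketched roughly as follows. The embedding function and the storage schema are placeholders (not the TwelveLabs API); the point is that the expensive model call happens once at indexing time, and later requests reuse the stored embedding instead of re-processing the video.

```python
"""Minimal sketch of the index-once, query-many pattern: compute a video
embedding at indexing time, persist it, and reuse it for later requests."""
import json
import sqlite3
from typing import List


def embed_video(video_path: str) -> List[float]:
    """Placeholder for the expensive model call that produces a video embedding."""
    return [0.1, 0.2, 0.3]  # a real embedding would be model-generated


db = sqlite3.connect("video_index.db")
db.execute("CREATE TABLE IF NOT EXISTS embeddings (video_id TEXT PRIMARY KEY, vec TEXT)")


def index_video(video_id: str, video_path: str) -> None:
    """Run the model once at indexing time and persist the embedding."""
    vec = embed_video(video_path)
    db.execute(
        "INSERT OR REPLACE INTO embeddings (video_id, vec) VALUES (?, ?)",
        (video_id, json.dumps(vec)),
    )
    db.commit()


def get_embedding(video_id: str) -> List[float]:
    """Later API requests read the stored embedding instead of re-indexing."""
    row = db.execute("SELECT vec FROM embeddings WHERE video_id = ?", (video_id,)).fetchone()
    if row is None:
        raise KeyError(f"{video_id} has not been indexed yet")
    return json.loads(row[0])


index_video("demo-video", "/path/to/demo.mp4")
print(get_embedding("demo-video"))  # reused on every subsequent query
```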
Pegasus 1.2 includes safety features, but like any AI model it risks producing content that could be deemed offensive or harmful if adequate oversight and safeguards are not in place.