Seaweed, short for "Seed-Video," is a research effort to build a foundation model for video generation. This webpage showcases diffusion transformers with approximately 7 billion (7B) parameters, trained using compute equivalent to 1,000 H100 GPUs. Seaweed learns world representations from massive amounts of multi-modal data, including video, images, and text. It can create videos of various resolutions, aspect ratios, and durations from text descriptions. In this article, we present its generated videos and highlight its hallmark capability as a foundation model supporting a wide range of downstream applications.
Our model is highly adept at generating lifelike human characters that exhibit a diverse array of actions, gestures, and emotions.
Seaweed is a research project from ByteDance Seed.
Paper: https://seaweed.video/seaweed.pdf
Project page: https://seaweed.video/