DEVHUB | Forget DeepSeek, Here's Another MAX Release from China!
Qwen2.5-Max: Exploring the Intelligence of Large-scale MoE Model
Qwen2.5-Max is a large-scale MoE model pretrained on over 20 trillion tokens and further post-trained with curated Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF).
Try Qwen Chat here - https://chat.qwenlm.ai/
❤️ If you want to support the channel ❤️
Support here:
Patreon - / 1littlecoder
Ko-Fi - https://ko-fi.com/1littlecoder
🧭 Follow me on 🧭
Twitter - / 1littlecoder