26B MoE
2 mentions across 2 people
All mentions
“26B MoE for low latency”
Gemma 4: Next-Generation Open Models Launched with Diverse Sizes and Licensing ↗ · Google DeepMind
Recommended · tweet · 2026-04-02
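The low-latency claim tracks how mixture-of-experts inference works in general: each token is routed to only a few experts, so per-token compute scales with the active parameters rather than the full 26B. A back-of-envelope sketch in Python; the shared-parameter share, expert count, and routing fan-out below are illustrative assumptions, not a published Gemma 4 configuration:

```python
# Why an MoE can answer faster than a dense model of the same total size:
# only the routed experts execute per token. All numbers below are
# illustrative assumptions, NOT a published Gemma 4 configuration.

def active_params(shared: float, expert_size: float, top_k: int) -> float:
    """Parameters actually executed per token in a sparse MoE stack."""
    return shared + top_k * expert_size  # inactive experts cost no FLOPs

total = 26e9                      # the "26B" in the model name
shared = 6e9                      # assumed attention/embedding share
n_experts, top_k = 16, 2          # assumed expert count and routing fan-out
expert_size = (total - shared) / n_experts

active = active_params(shared, expert_size, top_k)
print(f"active per token: {active / 1e9:.1f}B of {total / 1e9:.0f}B total")
# -> active per token: 8.5B of 26B total (with these placeholder numbers)
```

With these placeholder numbers, roughly a third of the weights run for any given token, which is where the latency advantage over a comparably sized dense model comes from.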
“31B Dense & 26B MoE: state-of-the-art performance for advanced local reasoning tasks – like custom coding assistants or analyzing scientific datasets.”
Gemma 4: Next-Gen Open Models for Advanced AI and Edge Applications ↗
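Both mentions pitch the 26B MoE at local workloads such as coding assistants. A minimal local-inference sketch with Hugging Face transformers follows; the model id is hypothetical (no checkpoint name appears in these mentions), so substitute the real one once published:

```python
# Minimal local-inference sketch with Hugging Face transformers.
# "google/gemma-4-26b-moe" is a HYPOTHETICAL model id used for illustration;
# replace it with the actual checkpoint name once it is published.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-4-26b-moe"  # hypothetical id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit local hardware
    device_map="auto",           # let accelerate place layers on available devices
)

# A "custom coding assistant" style prompt, echoing the use case quoted above.
prompt = "Write a Python function that summarizes a CSV of experiment results."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```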