Tony Bark@pawb.social to Technology@lemmy.world · English · 2 days ago
Google removes Gemma models from AI Studio after GOP senator’s complaint (arstechnica.com)
15 comments · cross-posted to: technology@beehaw.org
filister@lemmy.world · 1 day ago
The future is small models trained for a specific domain that can run on-device. Huge foundation models are nice and all, but they’re simply too heavy and expensive to run.
brucethemoose@lemmy.world · edited 1 day ago
Yeah, you’re preaching to the choir here.

…Still, I just meant there’s no reason to use Gemma 3 27B (or 12? Whatever they used) unaugmented in AI Studio. The smallest Flash seems better optimized for TPUs (hence it runs faster).