With the New York mayoral election just days away, independent candidate Andrew Cuomo has taken to social media on Halloween to try to warn voters against Democratic rival and frontrunner Zohran Mamdani and Republican candidate Curtis Sliwa – with the help of “embarrassing” AI-generated content. […]
(I watched the whole ad)
Yeah…
I mean, I’m a local ML tinkerer. I’m practically an AI-loving extremist on Lemmy, and this is just… weird? Even if I were a New York Republican or whatever, it feels like a clickbait supplement ad your boomer uncle’d send you from Facebook, complete with emojis.
I think this is why so many people like Trump. For all his antics and trolling, it’s often authentic, whereas this feels like an alien trying to figure out what a young adult would tweet.
I’d say Cuomo needs to be authentic, too, but that’d probably be even worse, heh.
ML tech isn’t bad, just like blockchain isn’t bad. It’s the gross capitalist opportunism happening around it that makes it overpromise (to the detriment of quality), overbuild (to the detriment of the environment), overuse (to the detriment of the economy), and overstimulate (to the detriment of mental health), all while stealing the hard work of basically all of humanity up to this point (like it or not, the material reality is that these companies should be compensating artists and authors alike for their work being used for training).
And with all that power, they do THIS fucking shit with it.
I’m all for local LLMs to be assistants, autocompletes, and reference librarians; but just like the web, they were better when you had to be a fucking turbo-nerd to get them working.
One unique thing I’ve observed is that big firms, especially the US ones, seem to miss all the cool innovations coming out of LLM research papers… unless it’s in-house, of course.
So it’s also insular corporate ‘don’t innovate, scale up’ culture kind of poisoning them too. They don’t have to be so expensive and big to be useful tools.
Also the fact that a lot of the big firms really seem to be just interested in it as a way to get more user data. People will share some pretty sensitive info with an LLM that they wouldn’t otherwise provide.
Running locally is definitely the way to go, if you’re going to use them.
My feelings on AI are also complicated, but I’m glad we all agree this is radioactive trash.
I love to play around with local AI; random GitHub releases, locally hosted LLMs, and free code seem genuinely interesting, even if I don’t plan to do anything public. And when it doesn’t feed back to a corporation, and the benefits go only to the user, I’m usually for it.
Like… my self-hosted Immich server has AI image recognition, fully local, no parasitic corporate training use, just the ability to use language to search. I can immediately find that one picture from 15 years ago that I know has a dog in it by searching for “dog” and scrolling a tiny bit.
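For the curious, that kind of search works roughly like this: an embedding model (Immich uses a CLIP-style model, as I understand it) maps both images and text queries into the same vector space, and a search is just a nearest-neighbor lookup by cosine similarity. A toy sketch with made-up 3-d vectors standing in for real embeddings (the filenames and numbers here are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, image_index, top_k=3):
    """Rank images by similarity to the query embedding, best first."""
    scored = sorted(
        image_index.items(),
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [name for name, _ in scored[:top_k]]

# Toy embeddings; a real system would get these from the image encoder.
index = {
    "dog_2009.jpg": [0.9, 0.1, 0.0],
    "beach.jpg":    [0.1, 0.8, 0.3],
    "receipt.png":  [0.0, 0.1, 0.9],
}

# A real query vector would come from the text encoder for "dog".
print(search([0.85, 0.2, 0.05], index, top_k=1))  # → ['dog_2009.jpg']
```

The actual heavy lifting is in the encoders, of course; the lookup itself is this simple, which is why it runs fine on local hardware.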
But AI use that seeks to reinforce power (corporations like Google and Meta, politicians like Trump and Cuomo, billionaires like Musk), or startups obviously staking claims to try to capture and monetize users (OpenAI, and the fifty thousand other techbro VC simp startups), is physically repulsive to me.