- cross-posted to:
- news@lemmy.world
“AI businesses are putting profits over sanity and safety”.
Remove “AI” from that sentence and you’ll see that’s just more normal business practices that have been going on for decades.
After $1T poured into global warming, why can’t AI replace them?
What profits?
Now what does that tell us about the sanity and safety?
This is many companies right now.
Mine is run from the top down (all executives) by people who use LLMs for fucking everything.
Everyone fucking hates them at this point. We all think they are fucking trash.
Maybe it’s time to grab some fellow employees and start an employee-owned business.
What industry is your company in?
They are leaving because they are getting much better offers elsewhere.

Just vibe code the AI. I’m sure it’ll work perfectly.
That’s just every company, isn’t it?
Not in my experience. Once their clients’ budgets get cut when reality catches up and they notice it doesn’t do anything of benefit (on the clients’ side), they’ll be all “client first”.
And it’s not even working. Not one of the AI companies is profitable. So they’re putting the hope for profits some time in the future over sanity and safety.
I don’t think they even care about profits anymore.
Billionaires live on the balance sheet, not the P&L statement.
Midjourney is profitable
Midjourney are the worst of the worst when it comes to training specifically on stolen work from artists who dedicated their lives to it. They opened the floodgates to what we see now with mass theft of content by not getting sued into oblivion. Fuck them and their creepy little fuck face of a CEO.
They’re also not providing a large language model, so they actually did have a path to profitability. It’s keeping LLMs updated and running that costs so much money that companies trying to do so are losing billions, and Midjourney doesn’t have that problem.
It’s just that their path to profitability was built on plagiarism on an astonishing scale. You’re spot on, they should have been utterly destroyed right at the start.
Steve Burke (of GN) described the absurdity pretty well, within the context of the currently uncertain Nvidia and OpenAI deal:
Nvidia offered OpenAI $100B in investment, money that it didn’t have, as long as OpenAI gave that money back to Nvidia to lease GPUs that haven’t been made, to then put in data centres that haven’t been constructed, which will be powered by electricity that hasn’t come online, to then rent to users who haven’t subscribed, to provide them features that haven’t come to fruition.
And hope you’ve propped up the economy enough by the end of it that the government has to bail you out… sorry, I meant provide a “backstop”.
What about all the user data they sell to third parties? I’d be interested in knowing how that contributes to this
So what’s the problem? This looks self-correcting to me, if none of the AI companies are profitable then they’re going to go away. Short their stock and make a fortune.
Shorting isn’t just a bet that a stock will fail, it’s also a bet on when it will fail.
Then invest in competitors, they’ve got a more flexible timeframe.
Imagine your entire life is viewed through the lens of actions you can take in the stock market. What a sad life.
The original comment that this subthread descends from was about the profitability of AI companies.
The problem is the cost of that correction is going to fall on us. Or did we forget that the flavor of capitalism we live under is the “privatize the profits, socialize the losses” kind? We’re not the ones in the casino, but we’re the ones who will lose our shirts when they lose.
So what’s the problem?
“What’s the problem” with the entire American economy being moored to a bunch of companies all acting as flaky as Enron and friends during the dot-com crash?
Edit: just realized FaceDeer is obsessed with AI stuff, so he’s probably here just to troll with questions he already knows the answers to.
It’s only self-correcting if the powers that be are losing money, which they aren’t, because they’re either liquidating important assets to pad their pockets or just using economic magic to make trillions appear out of nowhere. They’ll only feel it when their company or the economy collapses, and at that point they make off with their ill-gotten gains.
Where’s this infinite well of investment money coming from? “Economic magic” is pretty vague.
I wrote it intentionally vaguely because the reality is that we don’t know exactly how they’re doing it, but we don’t need to know to see that there’s a positive feedback loop between AI companies, cloud service providers, and compute hardware manufacturers. Likely they’re simply lying about their books, and since the regulators have already been bought, or are simply incompetent, there’s no one to say otherwise.
Have you not heard of inflation? It is literally the creation of new money that the government gives to banks to lend out.
That’s… not at all what inflation means; it’s one of many causes of inflation.
And the government doesn’t “give” it to banks.
The reality is far more complex in both cases, even if the simplified version sometimes looks like that.
I’m talking monetary inflation, not price inflation.
In the meantime they’re soaking up all the RAM, SSDs, and silicon processing capacity, which makes basically everything using any of those cost a lot more (like the RAM I bought for $99 four years ago that’s now $560). Not to mention the power requirements and costs being passed on to consumers who don’t want it anyway.
They’re also screwing up the environment in ways it won’t recover from.
Which is pouring money into the manufacturers of those things. If you’re convinced the AI companies are going to collapse then just wait a little and you’ll get all those things way cheaper than they were before.
At the moment, most of that “money” is just stock in the other company. And the type of RAM and GPUs being manufactured are not ones that normal consumers will use. They’re very specialized for AI at scale.
Another thing: the major manufacturers being leveraged for that gear have stated that they are not increasing production in the near future because of this. It seems they’re mostly in “wait and see, it might just be a bubble” mode, since scaling up takes a lot of time and only pays off with sustained demand over a long period.
I’d love it if this flooded the market with cheaper tech, but that’s not been shown to be the case. And it’s really not worth the environmental impact either way.
And the type of RAM and GPUs being manufactured are not ones that normal consumers will use.
They’re using the same foundries that would make those things. I’m not saying that there’ll be a flood of “used” equipment (though there would indeed be some of that too, other companies could set up data centers much more cheaply), I’m saying that the foundries will switch back to consumer products.
The stock is worth a lot because it can be sold for a lot. If the manufacturers don’t think the AI companies will stick around they should be selling the stock they’re receiving from them. It’s money either way. What do you think they’re doing with that money?
They’ll just declare bankruptcy and get away scot-free.
That’s not how bankruptcy works. The investors don’t get their money back.
thanks for pointing that out.
I’m not sure how this benefits the big corporations then. Surely they don’t have a deathwish? And aren’t stupid/incompetent enough to not see that? Else they’d not have gotten this big, right?
If anybody leaves an AI company with a fat paycheck, promises to “be honest about the real problems,” and then proceeds to regurgitate things the AI company CEOs say: be suspicious.
Exhibit A is Anthropic millionaire Mrinank Sharma, who only mentioned (future) peril from AI and AI-made bioweapons, two fictional scenarios on the short list that Anthropic officially endorses. It’s a list of things that please Anthropic investors.
Real-world stuff like AI psychosis, poisoning people’s air, or generating CSAM doesn’t get a mention from him. There’s no profit in acknowledging those things, so he won’t.
Maybe Claude Code learned this from its creators: I’ve noticed that when it says “the real issue is…”, that means it has no clue and is about to barf out a bunch of slop that I’m going to have to revert.
The only time it ever says “the real issue is” and is right is when I’ve just corrected it and told it the real issue, which it then wastes tokens regurgitating back to me. Gods, I miss not having to use this crap at work.
You HAVE to use it?
Yes, our usage is tracked. Managers have statistics broken down by daily usage per agent
Yup. Lots of managers are forcing people to use AI so they can report doing so in their metrics.
AI businesses are putting profits over sanity and safety.
All businesses are. That’s what a business is. A legal entity that seeks to extract as much wealth from people as possible. They put profits before people as a matter of policy.
Limited liability corporations should be outlawed.
That’s not what business is inherently; that’s what capitalism is.
It’s not even strictly what capitalism is about. It’s some stupid bullshit interpretation that came out of the University of Chicago economics department.
Seriously, go look at Adam Smith’s Wealth of Nations. The only mention of the “invisible hand” is so different from what is taught in economics now:
… by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention.
It’s not that we need no regulation. It’s just trying to say that if we set things up correctly, we don’t need to worry about people pursuing SOLELY their own personal gain. Because the market seeks out the “greatest value”, which is not just about money. It’s also the value to society as a whole.
Instead, we got the fucking bullshit from Chicago saying that the only / best way of measuring value is by profit.
As a legal requirement.
Interestingly, it’s not.
I believe that’s only if you’re publicly traded.
Yeah, but it might depend on the company bylaws even if it’s privately owned.
We’re so cooked
Every company does that. The difference is the AI companies will be broke soon and laying off all their staff so people are getting out before the collapse.
Exactly. And some of them are rats fleeing a sinking ship. Just look at Mrinank Sharma, who got paid a million bucks or so for a year at Anthropic, then ditched.
Oh thank heavens some people still have ethics.
If they had ethics they would have left a long time ago.
🌎 👨‍🚀 🔫 👨‍🚀