At first I thought it was the Beaverton. Had to check the URL.
AI is opening so many security HOLES. It's not solving shit. AI browsers and MCP connectors are wild west security nightmares. And that's before you even trust any code these things write.
One of the most idiotic takes I’ve read in a long time
Schrödinger’s AI: It’s so smart it can build perfect security, but it’s too dumb to figure out how to break it.
If there are actually no bugs, can’t that create a situation where it’s impossible to break it? Not to say this is actually a thing AI can achieve, but it doesn’t seem like bad logic.
Even if there’s such a thing as a program without bugs, you’d still be overlooking one crucial detail - no matter the method, the end point of cybersecurity has to interface with humans. Humans are SO much easier to hack than computers.
Let’s say you get a phone call from your boss - It’s their phone number and their voice, but they sound a bit panicked. “Hey, I’m just about to head into a meeting to close a major deal, but my laptop can’t access the server. I need you to set up a temporary password in the next two minutes or we risk losing this deal. No, I don’t remember my backup - it’s written down in my desk but the meeting is at the client’s office.”
You’d be surprised how many people would comply, and all of that can be done by AI right now. It’s all about managing risk - there’s never going to be a foolproof system.
Rice’s Theorem prevents this… mostly.
I'd guess hypothetical AI cybersecurity verification of code would be like that: probably no bugs, but not a totally sure thing. But even if you can't have mathematical certainty that there are no bugs, that doesn't mean every or even most programs verified this way can actually be exploited.
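To spell out the Rice's Theorem point for anyone who hasn't run into it: any non-trivial question about what a program *does* (as opposed to what its text looks like), including "does it have bugs," is undecidable in general. Here's a minimal sketch of the standard reduction, with a made-up `is_bug_free` oracle; nothing here is a real API, it's just the shape of the argument:

```python
# Sketch of the Rice's Theorem argument, in Python for concreteness.
# `is_bug_free` is an imaginary perfect verifier -- no such total,
# always-correct function can exist, which is the whole point.

def is_bug_free(source: str) -> bool:
    """Imaginary oracle: decides whether a program has no bugs."""
    raise NotImplementedError("Rice's Theorem: no such decider exists")

def halts(program_source: str) -> bool:
    """If is_bug_free were real, this would decide the halting problem."""
    # Append a guaranteed "bug" (division by zero) that is reached
    # if and only if the original program finishes running.
    wrapper = program_source + "\nboom = 1 / 0  # reached only if the above halts\n"
    # The wrapper is bug-free exactly when the program never halts
    # (glossing over bugs inside the program itself), so a perfect
    # verifier would be answering the undecidable halting question.
    return not is_bug_free(wrapper)
```

The "mostly" above is the escape hatch: you can verify specific properties of specific programs (that's what formal methods projects like seL4 and CompCert do), you just can't have one universal bug-decider for arbitrary code.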
All these brainwashed AI-obsessed people should be required to watch I, Robot on loop for a month or two.
Because then Security would be non-existent.
The S in AI stands for security.
ahahahaha
Oh, you’re serious. Let me laugh even harder.
AHAHAHAHA
Because it's doing so, so well now unattended…
Ron Howard narrator: Actually, they would need more.
The look on her face in the thumbnail matches the title perfectly.
People who say these things clearly have no experience. I spent an hour today trying to get one of the better programming models to parse a response. I gave it the inputs and expected outputs and it COULD not derive functional code until I told it what the implementation needed to be. If it isn't a cookie-cutter problem, it just can't predict its way through it.
Who is paying her?
AI might pull her head out of her ass… eventually.
At this point we need to pull their heads out of our asses
@cm0002 #nowplaying Absolutely Right - Five Man Electrical Band (Absolutely Right: The Best of Five Man Electrical Band)
Couldn't AI then also break code faster than we could fix it?
It’s like the “bla bla bla, blablabla… therefore God exists”
Except for CEOs it's "blablablabla, therefore we can fire all our workers"
Same shit different day
I mean, at a high level it is very much the concept of ICE from Gibson et al back in the day.
Intrusion Countermeasures Electronics. The idea that you have code that is constantly changing and updating based upon external stimuli. A particularly talented hacker, or AI, can potentially bypass it but it is a very system/mental intensive process and the stronger the ICE, the stronger the tools need to be.
Which then gets into the idea of Black ICE that is actively antagonistic towards those who are detected as attempting to bypass it. In the books it would fry brains. In the modern day it isn’t overly dissimilar from how so many VPN controlled IPs are just outright blocked from services and there is always the risk of getting banned because your wifi coffee maker is part of a botnet.
But it is also not hard to imagine a world where a counter-DDoS or hack is run. Or a message is sent to the guy in the basement of the datacenter to go unplug that rack and provide the contact information of whoever was using it.
In the context of AI on both sides? Higher quality models backed by big ass expensive rigs on one side should work for anything short of a state level actor… if your models are good (big ol’ “if” that).
Turns out Harlan Ellison was a goddamn prophet when he wrote I Have No Mouth And I Must Scream.
I have no clue how you think these two are related in any way, except for the word “AI” occurring in both.
Tbf, every day that goes by is starting to feel more and more like we're all being tortured by a psychotic omnipotent AI… with a really boring sense of humor.
AI WRITES broken code. Exploiting it is even easier.
How do you exploit that which is too broken to run?
Self-exploiting code.
They say it’s healthy to self-exploit several times per month.
AI should start breaking code much sooner than it can start fixing it.
Maybe breaking isn't even far off, because the AI can be wrong 90% of the time and still be successful.
A few years back someone made a virus that connected to an LLM server and kept finding new ways to infect computers in a simulated network. I think it was kind of successful. Not viable as a real virus, but an interesting idea nonetheless.