True! I’m an AI researcher and using an AI agent to check the work of another agent does improve accuracy! I could see things becoming more and more like this, with teams of agents creating, reviewing, and approving. If you use GitHub copilot agent mode though, it involves constant user interaction before anything is actually run. And I imagine (and can testify as someone that has installed different ML algorithms/tools on government hardware) that the operators/decision makers want to check the work, or understand the “thought process” before committing to an action.
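The "agents creating, reviewing, and approving" pattern above can be sketched as a simple generate-review loop. This is a toy illustration with stub functions standing in for real LLM calls; all the function names here (`generate`, `review`, `solve`) are hypothetical, not any actual agent framework's API:

```python
# Toy sketch of a generator/reviewer agent loop.
# In a real system, generate() and review() would each call an LLM;
# here they are deterministic stubs so the control flow is visible.

def generate(task, feedback=None):
    # Stand-in for the "worker" agent: produce a candidate answer,
    # incorporating reviewer feedback on later rounds.
    return f"answer to {task!r}" + (" (revised)" if feedback else "")

def review(candidate):
    # Stand-in for the "checker" agent: approve or return feedback.
    approved = candidate.endswith("(revised)")
    return approved, (None if approved else "please revise")

def solve(task, max_rounds=3):
    feedback = None
    candidate = None
    for _ in range(max_rounds):
        candidate = generate(task, feedback)
        approved, feedback = review(candidate)
        if approved:
            return candidate  # checker signed off
    # After max_rounds without approval, a human in the loop
    # would inspect the candidate before anything is run.
    return candidate

print(solve("sort a list"))
```

The point of the extra `review` pass is exactly the accuracy gain mentioned above: the checker catches errors the worker misses, and the unapproved-after-N-rounds branch is where the human decision maker would step in.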
Will this be true forever as people become more used to AI as a tool? Probably not.
Whoosh
Could you explain?
You either deliberately or accidentally misinterpreted the joke. I kinda connect the “woosh” to the adult animation show Archer, but I might be conflating the two since they emerged around the same time.
Oh no, I mean could you explain the joke? I believe I get the joke (shitty AI will replace experts). I was just leaving a comment about how systems that use LLMs to check the work of other LLMs do better than those that don’t. And that when I’ve introduced AI systems to stakeholders making consequential decisions, they tend to want a human in the loop. While also saying that this will probably change over time as AI systems get better and we get more used to using them. Is that a good thing? It will have to be decided on a case-by-case basis.
I’m kinda stoked by the tech as well and kinda understand how multiple LLMs can produce pretty novel ideas. I think it was in protein-mapping where I first heard of some breakthroughs.
While I’m happy to hear your experience shows you otherwise, it feels like you’re advocating for the devil. We don’t want to get lost in an angsty anti-capitalist echo chamber, but surely you can see how the comic is poking fun at our tendency to very cleverly cause everything to turn to shit.
I guess woosh means missing the point? You are right on an individual basis, but if you look at it in tendencies, you might see why your swings didn’t connect.
Oh I completely agree that we are turning everything to shit in about a million different ways. And as oligarchs take over more, while AI is a huge money-maker, I can totally see regulation around it being scarce or entirely non-existent. So as it’s introduced into areas like the DoD, health, transportation, crime, etc., it’s going to be sold to the government first and its ramifications considered second. This has also been my experience as someone working at the intersection of AI research and government application. I saw Elon’s companies, employees, and tech immediately get contracts without consultation by FFRDCs or competition from other for-profit entities. I’ve also seen people on the ground say “I’m not going to use this unless I can trust the output.”
I’m much more on the side of “technology isn’t inherently bad, but our application of it can be.” Of course that can also be argued against with technology like atom bombs or whatever but I lean much more on that side.
Anyway, I really didn’t miss the point. I just wanted to share an interesting research result that this comic reminded me of.