Much of what’s known as ‘AI’ has nothing to do with progress. It’s about lobbyists pushing shoddy digital replacements for human labour that increase billionaires’ profits and make workers’ lives worse.
I agree that current technology is extremely unlikely to achieve general intelligence, but my point was that we should never try to achieve AGI; it is not worth the risk until after we solve the alignment problem.
The “alignment problem” is what CEOs use as a distraction, deflecting responsibility from their grift and framing the issue as a technical problem. That’s another term that makes you lose any credibility.
I think we are talking past each other. Alignment with human values is important; otherwise we end up with a paperclip optimizer that wants humans only as a feedstock of atoms, or one that decides to pull a “With Folded Hands” situation.
None of the “AI” companies are even remotely interested in or working on this legitimate concern.
Unfortunately game theory says we’re gonna do it whenever it’s technologically possible.
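A minimal sketch of the race dynamic the parent comment is alluding to, assuming the common prisoner’s-dilemma framing: two labs each choose whether to pursue AGI, and the payoff numbers here are purely invented for illustration.

```python
# Hypothetical illustration of the "race dynamic" claim: two labs each choose
# to PURSUE or REFRAIN from building AGI. Payoff values are invented and only
# meant to show how mutual pursuit can be the stable outcome.
from itertools import product

PURSUE, REFRAIN = "pursue", "refrain"
STRATEGIES = (PURSUE, REFRAIN)

# payoff[(a, b)] = (payoff to Lab A, payoff to Lab B)
payoff = {
    (REFRAIN, REFRAIN): (3, 3),  # both hold off: shared safety benefit
    (PURSUE,  REFRAIN): (5, 0),  # unilateral pursuit: first-mover advantage
    (REFRAIN, PURSUE):  (0, 5),
    (PURSUE,  PURSUE):  (1, 1),  # both race: worse outcome for everyone
}

def is_nash(a, b):
    """True if neither lab can improve its own payoff by deviating alone."""
    pa, pb = payoff[(a, b)]
    best_a = max(payoff[(alt, b)][0] for alt in STRATEGIES)
    best_b = max(payoff[(a, alt)][1] for alt in STRATEGIES)
    return pa == best_a and pb == best_b

for a, b in product(STRATEGIES, repeat=2):
    tag = "<- Nash equilibrium" if is_nash(a, b) else ""
    print(f"A={a:7s} B={b:7s} payoffs={payoff[(a, b)]} {tag}")

# With these made-up numbers, (pursue, pursue) is the only Nash equilibrium,
# even though (refrain, refrain) would pay both labs more.
```

Under these assumed payoffs, each lab’s best response is to pursue regardless of what the other does, which is the structure the comment above is gesturing at.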
Only for zero-sum games.