Update: After this article was published, Bluesky restored Kabas’ post and told 404 Media the following: “This was a case of our moderators applying the policy for non-consensual AI content strictly. After re-evaluating the newsworthy context, the moderation team is reinstating those posts.”
Bluesky deleted a viral, AI-generated protest video in which Donald Trump is sucking on Elon Musk’s toes because its moderators said it was “non-consensual explicit material.” The video was broadcast on televisions inside the offices of the Department of Housing and Urban Development earlier this week and quickly went viral on Bluesky and Twitter.
Independent journalist Marisa Kabas obtained the video from a government employee and posted it on Bluesky, where it went viral. On Tuesday night, Bluesky moderators deleted the video because they said it was “non-consensual explicit material.”
Other Bluesky users said that versions of the video they uploaded were also deleted, though it is still possible to find the video on the platform.
Technically speaking, the AI video of Trump sucking Musk’s toes, which had the words “LONG LIVE THE REAL KING” superimposed on it, is a non-consensual AI-generated video, because Trump and Musk did not agree to it. But social media platforms’ content moderation policies have always had carve-outs that allow for criticism of powerful people, especially the world’s richest man and the literal president of the United States.
For example, we once obtained Facebook’s internal rules about sexual content for content moderators, which included broad carve-outs to allow for sexual content that criticized public figures and politicians. The First Amendment, which does not apply to social media companies but is relevant considering that Bluesky told Kabas she could not use the platform to “break the law,” offers essentially unlimited protection for criticizing public figures in the way this video does.
Content moderation has been one of Bluesky’s growing pains over the last few months. The platform has millions of users but only a few dozen employees, meaning that perfect content moderation is impossible, and a lot of it necessarily needs to be automated. This is going to lead to mistakes. But the video Kabas posted was one of the most popular posts on the platform earlier this week and resulted in a national conversation about the protest. Deleting it—whether accidentally or because its moderation rules are so strict as to not allow for this type of reporting on a protest against the President of the United States—is a problem.
You do remember snuff and goatse and CSAM of the early internet, I hope.
Even with that, of course, it was better, because that stuff still floats around, and small groups of enjoyers easily find ways to share it over mainstream platforms.
I’m not even talking about big groups of enjoyers: ISIS (sometimes rebranded), Turkey, Azerbaijan, Israel, Myanmar’s regime, cartels, and everyone else share what they want of the snuff genre, and it stays up long enough.
In text communication their points of view are also less likely to be banned or suppressed than mine.
So yes.
They don’t think so; they just use the opportunity to do this stuff in areas where immunity against it is not yet established.
There are very few stupid people in positions of power, competition is a bitch.
I’m weirded out when people say they want zero moderation. I really don’t want to see any more beheading or CSAM and moderation can prevent that.
Moderation should be optional.
Say a message may carry any number of “moderating authority” verdicts. A user might then set up whether they see only messages vetted by authority A, only by authority B, only by A logical-or B, or all messages not blacklisted by authority A, and plenty of other variants: say, we trust authority C unless authority F thinks otherwise, because we trust authority F to know things C is trying to reduce in visibility.
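To make that concrete, here is a minimal sketch of per-user composition of moderation verdicts. All names and types are hypothetical illustrations, not any real Bluesky or AT Protocol API:

```python
# Hypothetical sketch: each "authority" attaches a verdict to a message,
# and every user picks their own policy for combining those verdicts.
from enum import Enum
from typing import Callable, Dict

class Verdict(Enum):
    APPROVE = "approve"
    FLAG = "flag"

# authority name -> that authority's verdict for one message
Verdicts = Dict[str, Verdict]
# a user policy is just a predicate over the verdict set: show the message or not
Policy = Callable[[Verdicts], bool]

def vetted_by(authority: str) -> Policy:
    """Only messages this authority explicitly approved."""
    return lambda v: v.get(authority) == Verdict.APPROVE

def not_blacklisted_by(authority: str) -> Policy:
    """Everything this authority has not flagged."""
    return lambda v: v.get(authority) != Verdict.FLAG

def either(a: Policy, b: Policy) -> Policy:
    """A logical-or B."""
    return lambda v: a(v) or b(v)

def trust_unless(primary: str, overriding: str) -> Policy:
    """Hide what `primary` flags, unless `overriding` explicitly approves it."""
    return lambda v: v.get(primary) != Verdict.FLAG or v.get(overriding) == Verdict.APPROVE

# Example: a user who hides what authority C flags unless authority F approves it.
policy = trust_unless("C", "F")
print(policy({"C": Verdict.FLAG, "F": Verdict.APPROVE}))  # True: F overrides C
print(policy({"C": Verdict.FLAG}))                        # False: hidden per C
```

The point of the sketch is that the verdicts themselves are shared data, while the logic for combining them lives entirely on the reader’s side.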
Filtering and censorship are two different tasks. We don’t need censorship to avoid seeing CSAM. Filtering is enough.
This fallacy is very easy to encounter: people justify the need to censor something for everyone by their own unwillingness to encounter it, as if that were not solvable, and they refuse to see that it is technically solvable. Such a “verdict” from a moderation authority, by the way, is no harder to implement than an upvote or a downvote.
For a human, or even a group of humans, it’s hard to pre-moderate every post within a given period of time, but that’s solvable too: put, yes, an AI classifier in front of the humans and have humans check only the uncertain cases (or the certain ones someone complained about, or the certain ones another good moderation authority flagged the opposite way; you get the idea).
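A minimal sketch of that triage idea, with made-up function names and thresholds rather than any real moderation pipeline:

```python
# Hypothetical sketch: a classifier scores each post, confident cases are handled
# automatically, and only uncertain or disputed posts reach human moderators.
from typing import Callable

def triage(post_text: str,
           classify: Callable[[str], float],  # returns probability of a policy violation
           approve_below: float = 0.2,
           flag_above: float = 0.9) -> str:
    score = classify(post_text)
    if score >= flag_above:
        return "auto_flag"       # confident violation: act without waiting for a human
    if score <= approve_below:
        return "auto_approve"    # confidently fine: no human needed
    return "human_review"        # uncertain: queue for a moderator

def needs_human(route: str, user_reported: bool, other_authority_disagrees: bool) -> bool:
    # Complaints, or a conflicting verdict from another trusted authority,
    # push a post back to the human queue regardless of classifier confidence.
    return route == "human_review" or user_reported or other_authority_disagrees
```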
I like that subject, I think it’s very important for the Web to have a good future.
I can’t engage in good faith with someone who says this about CSAM.
No it is not. People are not tagging their shit properly when it is illegal.
Right, you can’t.
If someone posts CSAM, police should get their butts to that someone’s place.
What I described doesn’t have anything to do with people tagging what they post. It’s about users choosing the logic of interpreting moderation decisions. But I’ve described it very clearly in the previous comment, so please read it or leave the thread.