It feels like the amount of both divisive posts and ghoulish comments is rising again.
One could argue that the world has a lot of divisive stuff going on and lemmy just talks about it. But the way people post about it seems more over the top and hateful than it has been in the past.
I'm not saying that's what this is, but if I wanted to bring the Fediverse down, or at least keep my customers from going there, I would sow this kind of stuff as much as I could.
I'm blocking ghouls left, right, and center at the moment, but if I ever asked a friend to join lemmy, I'd hate to think of what they would see that I no longer do.
Do we need stronger moderation?
- Maybe ban politics from c/memes?
- Become a little more stringent about "don't be a jerk" rules in communities?
One thing that really bothers me is the collapsing "discourse". Trying to mend fences and keep the conversation between sides going, in my experience, leads to nothing but downvotes and a shitstorm.
I feel like a little more interaction (rather than intervention, at first) from the moderators would do wonders there.
Thanks for reading this rant. Have a nice day.
You're not imagining it. I'm pretty sure you can see regular work from propaganda teams on lemmy. I'd love to see the backend logs to confirm it.
They tend to work in very hostile teams to brigade topics.
I'm encountering it at this exact moment in a piracy discussion, where some very abusive people are arguing for IP and excusing the blatant manipulation by calling limited licenses "buying" and "owning".
It’s interesting right?
I’m thinking the architecture of the fediverse makes it particularly vulnerable to these sorts of attacks.
I'm pretty sure I've spotted bots circle-jerking on some subjects too, which makes me think there are a few different sources.
Very interesting indeed.
I'm starting to report, block, and ban accounts that use abusive language from being viewed on my instance, but from a systemic standpoint we should find a design solution to make this work.
Reddit had karma for this reason, among others. People needed to make helpful contributions to prove they could function in the group.
For many reasons this is not implemented in the Fediverse, but a design solution would be good.
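The karma idea above can be sketched as a simple reputation threshold: new accounts earn standing from community votes before gaining full privileges. This is a hypothetical illustration only; the names, numbers, and logic are invented and don't reflect how any actual Fediverse software works:

```python
# Hypothetical karma-gating sketch: an account must earn a minimum
# reputation from upvoted contributions before it can start threads.
# All names and thresholds here are invented for illustration.

from dataclasses import dataclass


@dataclass
class Account:
    name: str
    karma: int = 0  # net score from the community's votes

MIN_KARMA_TO_POST = 10  # hypothetical threshold for full posting rights


def record_vote(account: Account, delta: int) -> None:
    """Apply an upvote (+1) or downvote (-1) to the account's karma."""
    account.karma += delta


def may_post(account: Account) -> bool:
    """New accounts must prove themselves with helpful comments first."""
    return account.karma >= MIN_KARMA_TO_POST


new_user = Account("fresh_account")
print(may_post(new_user))   # prints False: a brand-new account can't post threads

for _ in range(12):         # twelve upvoted comments later...
    record_vote(new_user, +1)
print(may_post(new_user))   # prints True: above the threshold now
```

The appeal of this design is that it makes mass-produced throwaway accounts expensive: each one has to do visible, community-approved work before it can cause trouble. The obvious downside, and perhaps one reason the Fediverse hasn't adopted it, is that vote totals don't federate cleanly across instances.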
If I were designing an anti-troll/bot system, I'd implement a few things. For ease, let's call any bad actor on here a bot/troll, or "broll".
If this LLM-detection function ever results in false positives, this system will be banning innocent people.
Also there are many, many cases where a person openly displays results from an LLM, without it being in any way antisocial.
The odds of someone independently coming up with the same sentence as an LLM, within common-sense bounds of time, are far longer than those of winning the lottery or getting struck by lightning.
Your second point is straight up nonsense. This platform is for humans to interact. The use of bots is inherently deceptive.
Fascinating to have someone argue for them. I think the backend logs will be pretty illuminating.
I don't know what a person "coming up with the same sentence as an LLM" has to do with this, unless the LLM detection is based on direct string comparison.
Nope. I can say:
That is not deceptive. But it would be detected by this system and result in them being banned, because you guys are so gung-ho to build a powerful head-cracking machine that you didn't think of an obvious edge case.
You're wrong, you don't have the technical knowledge to understand why, and I can't be arsed to explain it.
Relax, it won’t affect that case.