cross-posted from: https://lemmy.world/post/30173090
The AIs at Sesame can hold eloquent, free-flowing conversations on almost any topic, but the moment you mention the Palestinian genocide they become evasive, offering generic platitudes like “it’s complicated”, “pain on all sides”, and “nuance is required”, and refusing to confirm anything that seems to hold Israel at fault for the genocide. Even publicly available information “can’t be verified”, according to Sesame.
Sesame also seems to block users from saving conversations that pertain specifically to Palestine, while everything else is A-OK to save and review.
Not really. Why censor more than you have to? That takes time and effort, and it’s almost certainly easier to do the censoring with something separate from the model itself. The law isn’t that particular, as long as you follow it.
You also avoid the risk of degrading the model, which trying to censor bits of the model directly has a habit of doing.