I feel like AI-generated videos of very active scenes like protests would be easy to debunk. Just look at the people in the background and point out the ones who disappear after moving behind someone else, or look for changing faces.
Doesn’t matter how easy it is to debunk. I assume you were around for COVID and Trump’s first term?
You haven’t seen a lot of Sora 2 videos. In many of them, spotting those telltale artifacts takes a pretty careful eye.
You can’t even rely on the watermark: removing it is trivial for the nation-states running disinfo campaigns, and even for end users, stripping a watermark is trivial compared to building a public AI video model in the first place.