German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his culture blog articles would be picked up by the chatbot, according to German public broadcaster SWR.
The answers shocked Bernklau. Copilot falsely claimed Bernklau had been charged with and convicted of child abuse and exploiting dependents. It also claimed that he had been involved in a dramatic escape from a psychiatric hospital and had exploited grieving women as an unethical mortician.
…
Bernklau believes the false claims may stem from his decades of court reporting in Tübingen on abuse, violence, and fraud cases. The AI apparently combined this online coverage of his reporting and conflated the journalist with the perpetrators he wrote about.
Microsoft attempted to remove the false entries but only succeeded temporarily. They reappeared after a few days, SWR reports. The company’s terms of service disclaim liability for generated responses.
…
I imagine the ones creating and distributing the model. Even if you only got sued when you hosted a model and not when you shared it, it still wouldn't make for a good ecosystem. Regular people should have the choice to use a model even if it spits out garbage for certain tasks; it might suit their own task perfectly.
There's no reason to gatekeep LLMs and lock them behind hardware requirements; it's up to people to understand their limitations and what they're for.
I mean, I'm not a lawyer, but this is what I think is relevant here:
I really don't think it matters whether what's behind the site is an LLM, an underpaid worker writing the text in real time, or just static pages the site owner wrote. The site owner is still responsible for it.
If you run it locally, none of it is public (until you publish what it generated, in which case you’re responsible for the content).
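To make the local/public distinction concrete, here is a minimal sketch of fully local inference using the Hugging Face transformers library. The model name is just an illustrative assumption; any small open-weight model you can fit on your hardware works the same way. The weights are downloaded once and cached, and generation happens entirely on your machine, so nothing is public until you choose to post the output somewhere.

```python
# Minimal sketch: fully local text generation with Hugging Face transformers.
# Assumption: "Qwen/Qwen2.5-0.5B-Instruct" is only an example of a small
# open-weight model; substitute whatever suits your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)   # downloaded once, then cached
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Summarize the limitations of language models in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt")

# Generation runs entirely on this machine; no text leaves it.
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A hosted chatbot like Copilot, by contrast, hands its output to anyone who asks, which is exactly where the liability question above kicks in.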