It’s no secret that much of social media has become profoundly dysfunctional. Rather than bringing us together into one utopian public square and fostering a healthy exchange of ideas, these platforms too often create filter bubbles or echo chambers. A small number of high-profile users garner the lion’s share of attention and influence, and the algorithms designed to maximize engagement end up merely amplifying outrage and conflict, ensuring the dominance of the loudest and most extreme users—thereby increasing polarization even more.

Numerous platform-level intervention strategies have been proposed to combat these issues, but according to a preprint posted to the physics arXiv, none of them are likely to be effective. And it’s not the fault of much-hated algorithms, non-chronological feeds, or our human proclivity for seeking out negativity. Rather, the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media. So we’re probably doomed to endless toxic feedback loops unless someone hits upon a brilliant fundamental redesign that manages to change those dynamics.

Co-authors Petter Törnberg and Maik Larooij of the University of Amsterdam wanted to learn more about the mechanisms that give rise to the worst aspects of social media: the partisan echo chambers, the concentration of influence among a small group of elite users (attention inequality), and the amplification of the most extreme divisive voices. So they combined standard agent-based modeling with large language models (LLMs), essentially creating little AI personas to simulate online social media behavior. “What we found is that we didn’t need to put any algorithms in, we didn’t need to massage the model,” Törnberg told Ars. “It just came out of the baseline model, all of these dynamics.”
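To make that setup a bit more concrete, here is a minimal sketch of what combining agent-based modeling with LLM personas might look like. The names here (`llm_generate_post`, `Agent`, `simulate`, the follow probability) are illustrative assumptions rather than the authors' actual code, and the LLM call is stubbed out so the example runs on its own.

```python
import random
from dataclasses import dataclass, field

# Hypothetical stand-in for an LLM call. In a real simulation this would prompt a
# language model with the agent's persona plus the posts in its feed; here it just
# returns a canned string so the sketch is self-contained.
def llm_generate_post(persona: str, feed: list[str]) -> str:
    return f"[{persona}] reacting to: {feed[-1] if feed else 'nothing yet'}"

@dataclass
class Agent:
    persona: str                              # short description given to the LLM as context
    followers: set[int] = field(default_factory=set)

def simulate(agents: list[Agent], steps: int = 100) -> list[str]:
    """Bare-bones agent-based loop: each step, a random agent reads a recent slice
    of the global feed and posts a reply generated from its persona."""
    feed: list[str] = []
    for _ in range(steps):
        author_id = random.randrange(len(agents))
        author = agents[author_id]
        post = llm_generate_post(author.persona, feed[-10:])  # last 10 posts act as the "timeline"
        feed.append(post)
        # Simple follow rule (an assumption for illustration): every other agent has a
        # small chance of following the author of a post it sees. Even a rule this crude
        # lets attention concentrate on a handful of accounts over time.
        for i in range(len(agents)):
            if i != author_id and random.random() < 0.05:
                author.followers.add(i)
    return feed

if __name__ == "__main__":
    personas = ["left-leaning teacher", "right-leaning retiree", "centrist engineer"]
    agents = [Agent(p) for p in personas]
    timeline = simulate(agents, steps=20)
    print("\n".join(timeline[:5]))
    print("follower counts:", [len(a.followers) for a in agents])
```

The point of a sketch like this is that nothing in it encodes an engagement-maximizing algorithm; posting, reading, and following are the only mechanics, which is roughly the kind of "baseline model" Törnberg describes as already producing the problematic dynamics.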

  • James R Kirk@startrek.website · 21 hours ago

    It worries me that there are scientists out there who are making studies based on the assumption that an LLM chat bot is a reasonable stand-in for a human in this context (in any context really but that’s another conversation). It’s just not what LLMs are, it’s not what they are designed to do. They’ve fallen for the marketing.