It’s no secret that much of social media has become profoundly dysfunctional. Rather than bringing us together into one utopian public square and fostering a healthy exchange of ideas, these platforms too often create filter bubbles or echo chambers. A small number of high-profile users garner the lion’s share of attention and influence, and the algorithms designed to maximize engagement end up merely amplifying outrage and conflict, ensuring the dominance of the loudest and most extreme users—thereby increasing polarization even more.

Numerous platform-level intervention strategies have been proposed to combat these issues, but according to a preprint posted to the physics arXiv, none of them are likely to be effective. And it’s not the fault of much-hated algorithms, non-chronological feeds, or our human proclivity for seeking out negativity. Rather, the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media. So we’re probably doomed to endless toxic feedback loops unless someone hits upon a brilliant fundamental redesign that manages to change those dynamics.

Co-authors Petter Törnberg and Maik Larooij of the University of Amsterdam wanted to learn more about the mechanisms that give rise to the worst aspects of social media: the partisan echo chambers, the concentration of influence among a small group of elite users (attention inequality), and the amplification of the most extreme divisive voices. So they combined standard agent-based modeling with large language models (LLMs), essentially creating little AI personas to simulate online social media behavior. “What we found is that we didn’t need to put any algorithms in, we didn’t need to massage the model,” Törnberg told Ars. “It just came out of the baseline model, all of these dynamics.”
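For readers wondering what "embedding LLMs in an agent-based model" looks like in practice, here is a minimal Python sketch of the general idea, not the authors' actual code: each agent is a persona prompt, each round it is shown a plain chronological feed of posts from accounts it follows, and an LLM call decides what it posts, with occasional reposts and follows. Every name here (the `Agent` and `Post` classes, the `call_llm` stub, the 30% repost chance) is illustrative and not taken from the paper.

```python
# Minimal sketch of generative social simulation: an agent-based model whose
# agents are LLM personas. Illustrative only; `call_llm` is a placeholder for
# whatever chat-completion API you would actually plug in.

import random
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    # A real implementation would send `prompt` to a model and return its reply.
    return f"[simulated reply to: {prompt[:40]}...]"

@dataclass
class Agent:
    name: str
    persona: str                       # e.g. political leaning, interests
    following: set = field(default_factory=set)

@dataclass
class Post:
    author: str
    text: str
    reposts: int = 0

def feed_for(agent: Agent, posts: list[Post], k: int = 5) -> list[Post]:
    """Plain chronological feed: the k most recent posts from followed accounts."""
    visible = [p for p in posts if p.author in agent.following]
    return visible[-k:]

def step(agents: list[Agent], posts: list[Post]) -> None:
    """One simulation round: each agent reads its feed and reacts."""
    for agent in agents:
        feed = feed_for(agent, posts)
        prompt = (
            f"You are {agent.persona}. Here is your feed:\n"
            + "\n".join(f"- {p.author}: {p.text}" for p in feed)
            + "\nWrite a short post reacting to it."
        )
        posts.append(Post(author=agent.name, text=call_llm(prompt)))
        # Occasionally repost something from the feed and follow its author.
        if feed and random.random() < 0.3:
            choice = random.choice(feed)
            choice.reposts += 1
            agent.following.add(choice.author)

# Tiny demo run with two toy personas.
agents = [
    Agent("alice", "a left-leaning news junkie", following={"bob"}),
    Agent("bob", "a right-leaning sports fan", following={"alice"}),
]
posts = [Post("alice", "Hello world"), Post("bob", "Big game tonight")]
for _ in range(3):
    step(agents, posts)
```

Note that the loop above contains no engagement-ranking algorithm at all, which is the point of Törnberg's remark that the dynamics "just came out of the baseline model"; the study's actual platform, network, and personas are of course far richer than this toy version.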

  • t3rmit3@beehaw.org · 7 points · 16 hours ago

    The problem with using AI bots is that they will model human behavior from the bad environment they were trained on, but not the human psychological reactions to a changed environment, so it’s not really going to tell you whether a different platform would make humans behave differently.

    • James R Kirk@startrek.website · 4 points · 10 hours ago

      It worries me that there are scientists out there running studies based on the assumption that an LLM chatbot is a reasonable stand-in for a human in this context (in any context, really, but that’s another conversation). That’s just not what LLMs are; it’s not what they are designed to do. They’ve fallen for the marketing.

    • TehPers@beehaw.org · 15 points · 1 day ago

      It seems a big limitation that the users in the study were bots.

      Seems like an accurate representation of social media to me.

  • James R Kirk@startrek.website · 17 points (1 downvote) · 1 day ago

    We address this question using a novel method - generative social simulation - that embeds Large Language Models within Agent-Based Models to create socially rich synthetic platforms. We create a minimal platform where agents can post, repost, and follow others. We find that the resulting following-networks reproduce three well-documented dysfunctions: (1) partisan echo chambers; (2) concentrated influence among a small elite; and (3) the amplification of polarized voices - creating a ‘social media prism’ that distorts political discourse.

    1. lmao “generative AI chatbots trained on social media behave like social media users”
    2. Shame on ArsTechnica, they should know better than to publish a study like this.