This incident sparked a sharp conflict between the developer and the US government. Anthropic’s rules officially prohibit the use of Claude to promote violence, develop weapons, or conduct surveillance.

Due to these ethical restrictions, the Trump administration is already considering canceling a $200 million contract with Anthropic. Defense Secretary Pete Hegseth previously stated that the department would not work with models that ‘prohibit warfare,’ hinting specifically at Claude’s strict safety filters.

    • supersquirrel@sopuli.xyz (OP) · edit-2 · 6 days ago

      They are a tiny little scrappy tech startup, give them a break while they find their feet.

      /s!

      • Crankenstein@lemmy.world · 6 days ago

        There is a difference between giving them a break and making excuses to justify blatant hypocrisy.

        They oppose violence and surveillance but then sell out to a fascist corporation that is actively creating a panopticon? Give me a fucking break.

  • jacksilver@lemmy.world · 6 days ago

    This seems like a lot of fluff to make Anthropic/Claude sound impressive. They don’t state what it was used for, but I’d imagine it was just general-purpose text processing.

  • chicken@lemmy.dbzer0.com · 5 days ago

    “just before the assault on the government quarter, all radars in the air defense units suddenly failed. After American drones and helicopters appeared in the sky, the soldiers felt the effect of a powerful energy wave that effectively suppressed any possibility of resistance.”

    What is this? Has it been used before? Is the US military giving up info on its secret weapons for the sake of stupid vanity missions?