The Pentagon has its eye on the leading AI company, which this week softened its ban on military use.

  • mechoman444@lemmy.world · 8 months ago

    If you guys think that AI hasn’t already been in use in various militaries, including America’s, y’all are living in la-la land.

  • kromem@lemmy.world · 8 months ago (edited)

    Literally no one is reading the article.

    The terms still prohibit use to cause harm.

    The change is that a general ban on military use has been removed in favor of a generalized ban on harm.

    So for example, the Army could use it to do their accounting, but not to generate a disinformation campaign against a hostile nation.

    If anyone actually really read the article, we could have a productive conversation around whether any military usage is truly harmless, the nuances of the usefulness of a military ban in a world where so much military labor is outsourced to private corporations which could ‘launder’ terms compliance, or the general inability of terms to preemptively prevent harmful use at all.

    Instead, we have people taking the headline only and discussing AI being put in charge of nukes.

    Lemmy seems to care a lot more about debating straw-man arguments about how terrible AI is than engaging with reality.

  • annehathway12@kbin.social · 5 months ago (edited)

    It’s interesting to note OpenAI’s decision regarding the ban on using ChatGPT for “Military and Warfare” applications.

  • funkforager@sh.itjust.works · 8 months ago

    Remember when OpenAI was a nonprofit first and foremost, and we were supposed to trust they would make AI for good and not evil? Feels like it was only Thanksgiving…

    • wooki@lemmynsfw.com · 8 months ago (edited)

      I wouldn’t be too worried; they’ve just made an overglorified word predictor and a blender of people’s art.

            • wooki@lemmynsfw.com · 8 months ago (edited)

              Again, not new; stop grandstanding it as a new effect. Media outlets have been doing this since the dawn of journalism. The scientific process was created to combat it, political standards help reduce it, and laws make it financially unattractive. The fact remains: it’s not new.

              The only thing that is new is the financial gain from the hype of abusing the word “AI”, and the media not calling it out. But hey, here we are, back at the start. It’s not new.

              • pinkdrunkenelephants@lemmy.world · 8 months ago

                And that totally makes it okay for you to use an LLM to do so far more effectively and far more efficiently, destroying humanity’s ability to discern reality.