• the_q@lemmy.zip · 1 day ago

An 80-year-old judge on their best day couldn’t be trusted to make an informed decision. This guy was either bought or confused into his decision. Old people gotta go.

        • FaceDeer@fedia.io · 1 day ago

          Is it this?

          First, Authors argue that using works to train Claude’s underlying LLMs was like using works to train any person to read and write, so Authors should be able to exclude Anthropic from this use (Opp. 16).

          That’s the judge addressing an argument that the Authors made. If anyone made a “false equivalence” here, it’s the plaintiffs; the judge is simply saying “okay, let’s assume their claim is true,” as is usual for a preliminary judgment like this.

          • MeaanBeaan@lemmy.world · 16 hours ago

            Wait, the authors argued that? Why? That’s literally the opposite of the thing they needed to argue.

          • ag10n@lemmy.world · 23 hours ago

            On page 6, the judge writes that the LLM “memorized” the content and could “recite” it.

            Neither is true of the training or use of LLMs.

            • FaceDeer@fedia.io · 22 hours ago

              The judge writes that the Authors told him that LLMs memorized the content and could recite it. He then said “for purposes of argument I’ll assume that’s true,” and even despite that he went ahead and ruled that LLM training does not violate copyright.

              It was perhaps a bit daring of Anthropic not to contest what the Authors claimed in that case, but as it turns out the result is an even stronger ruling. The judge gave the Authors every benefit of the doubt and still found that they had no case when it came to training.

            • Artisian@lemmy.world · 22 hours ago

              Depends on the content and the method. There are tons of ways to encode data, and under the relevant law the result may still count as a copy. There are certainly weaker NN models from which we can extract a lot of the training data from the model parameters, even if it’s not easy, and even if we can’t find a prompt that gets the model to regurgitate it.
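The point above, that training data can sometimes be read out of a model's parameters even without a prompt that triggers regurgitation, can be illustrated with a deliberately extreme toy. This is my own sketch, not anything from the ruling or from how LLMs actually work: a 1-nearest-neighbour "model" whose trained parameters literally are its stored training examples, so inspecting the parameters recovers the training data verbatim.

```python
# Toy illustration (NOT an LLM): a 1-nearest-neighbour classifier whose
# "parameters" are the stored training examples themselves. Here,
# "extraction" needs no clever prompt -- reading the parameters
# recovers exact copies of the training data.

class OneNearestNeighbour:
    def __init__(self, examples):
        # "Training" just stores the examples; they ARE the parameters.
        self.params = list(examples)

    def predict(self, x):
        # Normal use: return the label of the closest stored point.
        return min(self.params, key=lambda p: abs(p[0] - x))[1]

training_data = [(1.0, "cat"), (5.0, "dog"), (9.0, "bird")]
model = OneNearestNeighbour(training_data)

# Ordinary inference: 4.6 is closest to 5.0, so the label is "dog".
assert model.predict(4.6) == "dog"

# "Extraction": the training set sits verbatim in the parameters.
recovered = model.params
assert recovered == training_data
```

Real neural networks sit somewhere between this extreme and perfect generalisation: how much of the training data is recoverable from the weights depends on model size, training regime, and how often a given work appeared in the corpus.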