So we’ll all have our own familiar soon?

  • istanbullu@lemmy.ml · 5 months ago

    I understand the distinction, but it’s still waaay better than what OpenAI (“ClosedAI”) is doing.

    Also, people are really good at reverse engineering. Open-weights models can be fine-tuned or adapted. I trained a Llama 3 LoRA not that long ago.
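
    For anyone curious how little code that takes: here’s a minimal sketch with the Hugging Face peft library. The checkpoint name and hyperparameters are just illustrative, not a record of any particular run:

    ```python
    # Minimal LoRA fine-tuning setup with Hugging Face peft.
    # Model name and hyperparameters are illustrative only.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "meta-llama/Meta-Llama-3-8B"  # any open-weights causal LM works
    model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
    tokenizer = AutoTokenizer.from_pretrained(base)

    config = LoraConfig(
        r=8,                 # rank of the low-rank update matrices
        lora_alpha=16,       # scaling factor for the update
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically well under 1% of the base model
    # From here, train with any standard loop or transformers.Trainer;
    # only the small LoRA matrices receive gradients.
    ```

    The adapter weights are a tiny fraction of the base model, which is exactly why adapting an open-weights model costs so much less than pretraining one.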

    • MalReynolds@slrpnk.net · 5 months ago

      Agreed, and the chance of it backfiring on them is indeed pleasingly high. If the compute moat for initial training gets lower (e.g. ternary/binary models) or distributed training (Hivemind etc.) takes off, or both, or something new, all bets are off.
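
      To make the ternary idea concrete, here’s a rough sketch of BitNet b1.58-style absmean quantization, which maps weights to {-1, 0, +1} so matrix multiplies reduce to additions and subtractions (pure PyTorch; the function name is mine):

      ```python
      import torch

      def absmean_ternary(w: torch.Tensor):
          """BitNet b1.58-style absmean quantization (illustrative sketch).

          Scales weights by their mean absolute value, then rounds and
          clips to {-1, 0, +1}. Matmuls against the result need only
          additions/subtractions, which is what shrinks the compute moat.
          """
          gamma = w.abs().mean().clamp(min=1e-8)   # per-tensor scale
          q = (w / gamma).round().clamp_(-1, 1)    # ternary weights
          return q, gamma                          # dequantize as q * gamma

      w = torch.randn(4, 4)
      q, gamma = absmean_ternary(w)
      print(q)  # every entry is -1.0, 0.0, or 1.0
      ```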

      • istanbullu@lemmy.ml · 5 months ago

        The compute moat for initial training will never shrink. But as foundation models get better, from-scratch training will be needed less and less often.