• The_Decryptor@aussie.zone
    2 days ago

    I’m not convinced LLMs as they exist today don’t prioritize sources – if trained naively, sure, but these days they can, for instance, integrate search results, and can update on new information.

    Well, it includes the text from the search results in the prompt; it's not actually updating any internal state (the network weights). A new "conversation" starts from scratch.
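A minimal sketch of the point above: search results reach the model by being pasted into the prompt text, not by changing its weights. All names here (`build_prompt`, the snippet list) are hypothetical illustrations, not any real API.

```python
def build_prompt(question: str, search_snippets: list[str]) -> str:
    """Assemble a one-shot prompt; the model's weights never change."""
    context = "\n".join(f"- {s}" for s in search_snippets)
    return (
        "Use the following search results to answer.\n"
        f"Search results:\n{context}\n\n"
        f"Question: {question}\n"
    )

snippets = ["Result A: ...", "Result B: ..."]
prompt = build_prompt("What changed this week?", snippets)
# Every "conversation" is just a fresh prompt like this one;
# nothing persists inside the network between calls.
```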

    • jsomae@lemmy.ml
      2 days ago

      Yes, that’s right: LLMs are stateless. They don’t have internal state. When I say “update on new information” I really mean “when new information is available in its context window, its response takes that into account.”
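      A sketch of why a stateless model still appears to "remember": the client resends the whole transcript each turn, so state lives only in the context window. `fake_llm` is a stand-in for a real model, and `chat_turn` is a hypothetical client helper.

```python
def fake_llm(context_window: str) -> str:
    # A real model's reply is a function of its context window (plus sampling);
    # it carries nothing over between calls.
    return f"reply to [{context_window[-20:]}]"

history: list[str] = []

def chat_turn(user_msg: str) -> str:
    history.append(f"User: {user_msg}")
    reply = fake_llm("\n".join(history))  # the full transcript goes in every time
    history.append(f"Assistant: {reply}")
    return reply

chat_turn("hello")
chat_turn("what did I just say?")  # "memory" only because history was resent
```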