LOOK MAA I AM ON FRONT PAGE

  • kescusay@lemmy.world · 5 days ago

    I can envision a system where an LLM becomes one part of a reasoning AI, acting as a kind of fuzzy “dataset” that a proper neural network incorporates and reasons with, and the LLM could be kept real-time updated (sort of) with MCP servers that incorporate anything new it learns.

    But I don’t think we’re anywhere near there yet.
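    The idea above can be toy-modeled: an LLM-like component acting as a fuzzy knowledge store, a separate reasoning layer that queries it, and an MCP-style channel pushing in new facts. This is only an illustrative sketch of the commenter's idea — every class and function name here is hypothetical, and the fuzzy matching stands in for an LLM's approximate recall; it is not a real MCP implementation.

```python
# Toy sketch (all names hypothetical): an LLM-like fuzzy "dataset",
# a reasoner that queries it, and an update path playing the MCP role.
from difflib import get_close_matches

class FuzzyKnowledge:
    """Stand-in for the LLM: approximate lookup over stored facts."""
    def __init__(self):
        self.facts = {}

    def update(self, key, value):
        # Plays the role of an MCP server pushing newly learned info.
        self.facts[key] = value

    def query(self, question):
        # Fuzzy match, mimicking an LLM's approximate recall.
        match = get_close_matches(question, list(self.facts), n=1, cutoff=0.6)
        return self.facts[match[0]] if match else None

class Reasoner:
    """Stand-in for the separate network that reasons over the fuzzy store."""
    def __init__(self, knowledge):
        self.knowledge = knowledge

    def answer(self, question):
        fact = self.knowledge.query(question)
        return fact if fact is not None else "unknown"

kb = FuzzyKnowledge()
kb.update("capital of France", "Paris")   # an "MCP update"
r = Reasoner(kb)
print(r.answer("capital of Frannce"))     # typo still resolves via fuzzy match
```

    The point of the separation is that the reasoning layer never trusts the fuzzy store as ground truth; it treats it as a queryable, continuously refreshed dataset.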

    • Riskable@programming.dev · 5 days ago

      The only reason we’re not there yet is memory limitations.

      Eventually some company will come out with AI hardware that lets you link up a petabyte of ultra fast memory to chips that contain a million parallel matrix math processors. Then we’ll have an entirely new problem: AI that trains itself incorrectly too quickly.

      Just you watch: The next big breakthrough in AI tech will come around 2032-2035 (when the hardware is available) and everyone will be bitching that “chain reasoning” (or whatever the term turns out to be) isn’t as smart as everyone thinks it is.

      • kescusay@lemmy.world · 5 days ago

        Well, technically, yes, you’re right. But LLMs are a specific, narrow type of neural network, while I was thinking of the broader class and more traditional applications, like data analysis. I should have been more specific.