• sudneo@lemm.ee · 1 day ago

    There is a fair amount of research showing that model improvement is marginal relative to the growth in energy demand and/or training data. OpenAI itself mentioned about a month ago that it is seeing a smaller improvement from GPT-4 to Orion (I believe) than there was from GPT-3 to GPT-4. We are also running out of quality data to use for training.

    Essentially, what I mean is that the big improvements we saw in the past seem to be over: improving a little now costs a lot. Given that the costs are exorbitant and the gains small, it’s not impossible to imagine that companies will eventually give up if they can’t monetize this stuff.
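
    To make the diminishing-returns point concrete, here is a rough Python sketch using the Chinchilla scaling-law fit from Hoffmann et al. (2022). The constants are that paper’s fitted values, and the 20-tokens-per-parameter ratio is its compute-optimal rule of thumb; this is an illustration of the general pattern, not a claim about GPT-4 or Orion specifically.

    ```python
    # Rough sketch of diminishing returns under the Chinchilla scaling-law fit
    # (Hoffmann et al., 2022): loss = E + A/N^alpha + B/D^beta.
    # Constants are the paper's fitted values; 20 tokens/param is its
    # compute-optimal rule of thumb, used here purely for illustration.

    def predicted_loss(params: float, tokens: float) -> float:
        E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
        return E + A / params**alpha + B / tokens**beta

    prev = None
    for exp in range(9, 14):          # 1e9 .. 1e13 parameters
        n = 10.0**exp                 # parameter count
        d = 20 * n                    # training tokens (~100x compute per step)
        loss = predicted_loss(n, d)
        gain = "" if prev is None else f"   gain vs previous: {prev - loss:.3f}"
        print(f"{n:.0e} params -> predicted loss {loss:.3f}{gain}")
        prev = loss
    ```

    Each step is roughly 100x more training compute, yet the predicted loss improvement shrinks every time: that is the “improving a little costs a lot” pattern.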

    • theherk@lemmy.world · 6 hours ago

      Surely you can see there is a difference between marginal improvement relative to energy use and not improving at all.

    • icecreamtaco@lemmy.world · 7 hours ago

      Compare Llama 1 to the current state-of-the-art local AIs. They’re on a completely different level.