• NeilBrü@lemmy.world
    2 days ago

    Absolutely interested. Thank you for your time to share that.

    My career path in neural networks began as a researcher working on cancerous-tissue object detection in medical diagnostic imaging. It has since shifted to generative models for CAD (architecture, product design, game assets, etc.). I don’t really mess about with fine-tuning LLMs.

    However, I do self-host my own LLMs as code assistants. Thus, I’m only tangentially involved with the current LLM craze.

    But it does interest me nonetheless!

    • Takapapatapaka@lemmy.world
      22 hours ago

      Here is the main blog post I was remembering: it has a follow-up, a more scientific version, and builds on two other articles, so you might want to dig around what they mention in the introduction.

      It is indeed quite a technical discovery, and it still lacks complete and wider analysis, but it is very interesting because it kinda invalidates the common gut feeling that LLMs are pure random luck.