• GenderNeutralBro@lemmy.sdf.org · 5 months ago

    As a reminder, the same (closed-source) user-space components for OpenGL / OpenCL / Vulkan / CUDA are used regardless of the NVIDIA kernel driver option with their official driver stack.

    CUDA hell remains. :(

    • Possibly linux@lemmy.zip · 5 months ago

      AMD needs to get their ducks in a row. They already have the advantage of not being Nvidia.

      • john89@lemmy.ca · 5 months ago

        > They already have the advantage of not being Nvidia

        That’s just because they release worse products.

        If AMD had Nvidia’s marketshare, they would be just as scummy as the business climate allows.

        In fact, AMD piggybacks on Nvidia’s scumbaggery to charge more for their GPUs rather than engage in an actual price war.

      • ProdigalFrog@slrpnk.net · 5 months ago

        ROCm is its own hell (unless they finally put some resources into it in the past couple of years).

        • Cornelius@lemmy.ml · 5 months ago

          They put in the absolute minimum amount of resources for it.

          It’s also littered with bugs, as the ZLUDA project has noted.

    • filister@lemmy.world (OP) · 5 months ago

      Yes, CUDA is the only reason I consider NVIDIA. I really hate this company, but the AMD tech stack is really inferior.

    • Phoenixz@lemmy.ca · 5 months ago

      So is CUDA good or bad?

      I keep reading that it’s hell, but also the best. Apparently it’s the single biggest reason Nvidia is so big in AI, yet it sucks.

      What is it?

      • GenderNeutralBro@lemmy.sdf.org · 5 months ago (edited)

        Both.

        The good: CUDA is required for maximum performance and compatibility with machine learning (ML) frameworks and applications. It is a legitimate reason to choose Nvidia, and if you have an Nvidia card you will want to make sure you have CUDA acceleration working for any compatible ML workloads.

        The bad: Getting CUDA to actually install and run correctly is a giant pain in the ass for anything but the absolute most basic use case. You will likely need to maintain multiple framework versions, because new ones are not backwards-compatible. You’ll need to source custom versions of Python modules compiled against specific versions of CUDA, which opens a whole new circle of Dependency Hell. And you know how everyone and their dog publishes shit with Docker now? Yeah, have fun with that.

        That said, AMD’s equivalent (ROCm) is just as bad, and AMD is lagging about a full generation behind Nvidia in terms of ML performance.

        The easy way is to just use OpenCL. But that’s not going to give you the best performance, and it’s not going to be compatible with everything out there.
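        The version-matching pain described above can be sketched in a few lines. The framework/CUDA version pairs below are purely illustrative examples, not an authoritative compatibility matrix; the point is just that each prebuilt framework release targets one specific CUDA toolkit, so a single installed toolkit rarely satisfies every project.

```python
# Illustrative sketch of CUDA "dependency hell": hypothetical version
# pairs showing that each framework build targets ONE CUDA toolkit.
REQUIRED_CUDA = {
    ("torch", "2.1.0"): "12.1",
    ("torch", "1.13.1"): "11.7",
    ("tensorflow", "2.15.0"): "12.2",
}

def check_compat(framework: str, fw_version: str, installed_cuda: str) -> bool:
    """Return True if the installed CUDA toolkit matches the version
    this framework build was compiled against."""
    needed = REQUIRED_CUDA.get((framework, fw_version))
    if needed is None:
        raise KeyError(f"no known build of {framework} {fw_version}")
    return installed_cuda == needed

# A host with CUDA 12.1 can run torch 2.1.0 but not torch 1.13.1,
# which is why people end up juggling multiple toolkit installs
# (or per-project Docker images pinning their own CUDA).
print(check_compat("torch", "2.1.0", "12.1"))
print(check_compat("torch", "1.13.1", "12.1"))
```

        In practice this is why projects pin everything in a container or a dedicated virtualenv per CUDA version rather than rely on the system toolkit.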

    • magikmw@lemm.ee · 5 months ago

      The fact that cuda means ‘wonders’ in Polish has been living in my mind rent-free for several days after I read the Nvidia news.

          • leopold@lemmy.kde.social · 5 months ago (edited)

            Right, I’m well aware that that article is the reason why a bunch of people have been making the unsubstantiated claim that Nvidia has hired people to work on Nouveau.

            Nvidia hired the former lead Nouveau maintainer and he contributed a bunch of patches a couple of months ago after they hired him. That was his first contribution since stepping down and I’m fairly certain it was his last because there’s no way Phoronix would miss the opportunity to milk this some more if they could. He had said when stepping down that he was open to contributing every once in a while, so this wasn’t very surprising either way. To be clear, it is not evidence that he or anyone else was hired by Nvidia to work on Nouveau. Otherwise, I’d like to ask what he’s been doing since, because that was over three months ago.