• UraniumBlazer@lemm.ee · 9 months ago

      CUDA is required to interface with Nvidia GPUs, and AI workloads almost always need a GPU for the best performance.

    • MalReynolds@slrpnk.net · 9 months ago

      Yes: llama.cpp and its derivatives, and Stable Diffusion, also run on ROCm. LLM fine-tuning is still mostly CUDA; the ROCm implementations are thinner there, but coming along.

    • brianorca@lemmy.world · 9 months ago

      Nearly all such software supports CUDA (which until now was Nvidia-only), and some also supports AMD through ROCm, DirectML, ONNX, or other means, but CUDA is the most common. This will open more of those tools up to users with AMD hardware.
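      To make the "CUDA first, other backends as fallback" idea concrete, here is a minimal sketch of that selection pattern. The function name and backend strings are purely illustrative, not any real library's API:

      ```python
      def pick_backend(available, preference=("cuda", "rocm", "directml", "onnx", "cpu")):
          """Return the first backend from the preference order that is available.

          `available` is the set of backends detected on this machine;
          the preference order mirrors how most AI tools prioritize CUDA.
          """
          for backend in preference:
              if backend in available:
                  return backend
          raise RuntimeError("no supported compute backend found")

      # On an Nvidia box CUDA wins; on an AMD box with ROCm support, ROCm is used instead.
      print(pick_backend({"cuda", "cpu"}))  # -> cuda
      print(pick_backend({"rocm", "cpu"}))  # -> rocm
      ```

      Tools that only implement the first entry effectively lock out AMD users, which is why broader ROCm/DirectML support matters.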

    • redcalcium@lemmy.institute · 9 months ago

      Projects are usually released for CUDA first, and if one gets popular enough, someone will port it to other platforms, which can take a while, especially for ROCm. Apple M-series ports usually appear before ROCm ones, which shows how much the dev community dislikes working with ROCm; a famous example is geohot throwing in the towel after wrestling with it for a while.