• MalReynolds@slrpnk.net
    9 months ago

    Yes, llama.cpp and its derivatives, as well as Stable Diffusion, also run on ROCm. LLM fine-tuning, though, is still largely CUDA-only; ROCm implementations for that are less mature, but coming along.
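
    For reference, a rough sketch of building llama.cpp against ROCm. The exact CMake flag and GPU target below are assumptions that vary by version and hardware, so check the llama.cpp build docs for your release:

    ```shell
    # Build llama.cpp with the HIP/ROCm backend.
    # NOTE: the flag name has changed over time (older releases used
    # -DLLAMA_HIPBLAS=ON; newer ones use -DGGML_HIP=ON) -- verify for your version.
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    cmake -B build -DGGML_HIP=ON \
        -DAMDGPU_TARGETS=gfx1100   # set to your GPU's arch (rocminfo lists it)
    cmake --build build --config Release -j

    # Then run a model as usual, offloading layers to the GPU with -ngl:
    ./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello"
    ```
    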