• FishFace@lemmy.world
    18 hours ago

    The original developers of Stable Diffusion and similar models made absolutely no secret about the source data they used. Where are you getting this idea that they “intentionally obscure the original works… to make [them] difficult to backtrace.”? How would an image generation model even work in a way that made the original works obvious?

    Literally steal

    Copying digital art wasn’t “literally stealing” when the RIAA was suing Napster and it isn’t today.

    For cynical tech bros

    Stable Diffusion was originally developed by academics working at a University.

    Your whole reply is pretending to know intent where none exists, so if that’s the only difference you can find between collage and AI art, it’s not good enough.

    • pulsewidth@lemmy.world
      2 hours ago

      Stable Diffusion? The same Stable Diffusion being sued by Getty Images, which claims its makers used 12 million of Getty’s images without permission? Ah yes, very non-secretive, very moral. And what of industry titans DALL-E and Midjourney? Both have produced multiple examples of artists’ original art being spat out by their models simply by finessing the prompts - proving they used particular artists’ copyrighted art without those artists’ permission or knowledge.

      Stable Diffusion was also, from its inception, in the hands of tech bros: funded and built with the help of a $3 billion AI company (Runway AI), and owned by Stability AI, a for-profit company presently valued at $1 billion that now has James Cameron on its board. The students who worked on a prior model (Latent Diffusion) were hired for the Stable Diffusion project, that is all.

      I don’t care to drag the discussion into your opinion of whether artists have any ownership of their art the second after they post it on the internet. For me it’s enough that artists themselves assign licences to their work (CC, CC BY-SA, ©, etc). If a billion-dollar company takes that work without permission (as in the © example) to profit off it, that’s stealing according to the artist’s own stated intent.

      If they’re taking CC BY-SA work and failing to attribute it, then they are also breaking the licence and abusing content for profit. A VLM could easily attach attribution metadata identifying the source data used in an output - weird that none of them want to.

      In other words, I’ll continue to treat AI art as the amoral slop it is. You are of course welcome to have a different opinion, I don’t really care if mine is ‘good enough’ for you.