Stable Diffusion? The same Stable Diffusion being sued by Getty Images, which claims they used 12 million of its images without permission? Ah yes, very non-secretive, very moral. And what of industry titans DALL-E and Midjourney? Both have had multiple examples of artists' original art being spat out by their models simply by finessing the prompts, proving they used particular artists' copyrighted art without those artists' permission or knowledge.
Getting sued means Getty Images disagrees that the use of the images was legal, not that it was secret, nor that it was immoral. Getty Images' photos are included in the LAION-5b dataset that Stability AI publicly stated they used to create Stable Diffusion. So it's not "intentionally obscuring", as you claimed.
Copying is not theft, no matter how many words you want to write about it. You can steal a painting by taking it off the wall. You can’t steal a JPG by right-clicking it and selecting “Copy Image”. That’s fundamentally different.
A VLM could easily add attributes to images to assign the source data used in the output
Oh yeah? Easily? What attribution should a model trained purely on LAION-5b add to an output image if prompted with “photograph of a cat”?
In other words, I’ll continue to treat AI art as the amoral slop it is. You are of course welcome to have a different opinion, I don’t really care if mine is ‘good enough’ for you.
You can do whatever you want (within usual rules) in your personal life, but you chose to enter into a discussion.
From that discussion it's clear that your position is rooted in bias, not knowledge. That's why you can't point out substantial differences between AI-generated images and other techniques that reuse existing imagery, why you make up intentions you can't back up, and why you prefer to dismiss academics as "tech bros" instead of engaging on facts.