I think this makes a bit of sense though, doesn’t it? They wrote “guy”. Given that the training data is probably predominantly white, “guy” would give you a white guy nine times out of ten without clarification of what the word means to the AI, i.e. ethnically ambiguous. Because that’s what “guy” is: ethnically ambiguous. The spelling is because DALL-E suuuuucks at text, but it’s slowly getting better at least.
But they should 100% tweak it so that when a defined character is asked for, stuff like that gets dropped. I think the prompt structure is what makes this one slip through. Had they put quotes around “guy with swords pointed at him” to clearly mark that as its own thing, this wouldn’t have happened.
But I don’t think the software can differentiate between the ideas of defined and undefined characters. It’s all just association between words and aesthetics, right? It can’t know that “Homer Simpson” is a more specific subject than “construction worker” because there’s no actual conceptualization happening about what these words mean.
I can’t imagine a way to make the tweak you’re asking for that isn’t just a database of every word or phrase that refers to a specific known individual, which every user’s prompt gets checked against, and I can’t imagine that’d be worth the time it’d take to create.
If they’re inserting random race words in, presumably there’s some kind of preprocessing of the prompt going on. That preprocessor is what would need to know if the character is specific enough to not apply the race words.
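To make the idea concrete, here’s a minimal sketch of what that kind of preprocessor could look like. Everything in it is made up for illustration: the character list, the modifier phrases, and the matching logic are assumptions, not anything we know about DALL-E’s actual pipeline.

```python
# Hypothetical prompt preprocessor: inject a diversity modifier only when
# the prompt doesn't name a specific known character. Purely illustrative.
import random

# Stand-in for the "database of every specific known individual" mentioned
# above; a real system would need something far larger.
KNOWN_CHARACTERS = {"homer simpson", "mario", "batman"}

# Example modifier phrases of the kind users have reported seeing injected.
DIVERSITY_MODIFIERS = ["ethnically ambiguous", "diverse"]

def preprocess(prompt: str) -> str:
    lowered = prompt.lower()
    # If the prompt names a specific character, pass it through untouched.
    if any(name in lowered for name in KNOWN_CHARACTERS):
        return prompt
    # Otherwise prepend a randomly chosen modifier to the generic subject.
    return f"{random.choice(DIVERSITY_MODIFIERS)} {prompt}"

print(preprocess("Homer Simpson with swords pointed at him"))
print(preprocess("guy with swords pointed at him"))
```

The catch, as pointed out above, is exactly that `KNOWN_CHARACTERS` lookup: simple substring matching against a giant name list is brittle, and maintaining the list is the part that probably isn’t worth the effort.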