A student in America asked an artificial intelligence program to help with her homework. In response, the app told her, "Please Die." The eerie incident happened when 29-year-old Sumedha Reddy of Michigan sought help from Google's Gemini chatbot, a large language model (LLM), the New York Post reported.
The program verbally abused her, calling her a “stain on the universe.” Reddy told CBS News that she got scared and started panicking. “I wanted to throw all of my devices out the window. I hadn’t felt panic like that in a long time to be honest,” she said.
Must be Gemini-specific; couldn't replicate locally.
LLMs are inherently probabilistic. A response can't be reliably reproduced even with the exact same tokens, the exact same model, and the exact same params, because each output token is sampled from a probability distribution rather than chosen deterministically.
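For illustration, here's a minimal sketch of temperature sampling over next-token logits; the logits and token indices are made up, but it shows why two runs on the same input can diverge unless the seed is pinned:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from logits using temperature scaling."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    # Softmax: subtract the max for numerical stability.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical logits for four candidate next tokens.
logits = [2.0, 1.5, 0.3, -1.0]
for run in range(3):
    # Different runs can pick different tokens from the same distribution.
    print(f"run {run}: token {sample_next_token(logits, temperature=0.9)}")
```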
Maybe being 16 questions in had an effect on it? I don't know how much it keeps in its "memory" for one person/conversation.
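A rough sketch of how that "memory" typically works: chat clients replay prior turns into the prompt and trim to a context budget, so older turns eventually fall out. The names and the word-count heuristic below are assumptions for illustration, not Gemini's actual implementation:

```python
def build_prompt(history, max_tokens=2048):
    """Concatenate recent (role, text) turns, trimming newest-first.

    Token count is approximated as whitespace-split words; real systems
    use a proper tokenizer and far larger budgets.
    """
    kept, used = [], 0
    for role, text in reversed(history):  # walk from the newest turn back
        cost = len(text.split())
        if used + cost > max_tokens:
            break  # older turns fall out of "memory"
        kept.append(f"{role}: {text}")
        used += cost
    return "\n".join(reversed(kept))  # restore chronological order

# 16 questions in, only the turns that fit the budget survive.
history = [("user", f"question {i}") for i in range(16)]
print(build_prompt(history, max_tokens=20))
```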