Ideogram, make my post an image.
<Ideogram> Sure thing boss!
Claude, re-write my post.
<Claude> Sure thing boss!
Generative AI fascinates me, and I use it frequently for both work and leisure. This photo, however, is the first to make me feel a twinge of sadness, as it reminds me that I now have to question the authenticity of everything I see.
Then I recall that even traditional photographs often undergo extensive post-processing, which somewhat alleviates my concern. To distract myself, I create an AI-generated video of the river flowing, momentarily forgetting my worries.
However, I can’t help but notice that many of these AI generations are of poor quality. Generative AI is simultaneously wonderful and troubling - a double-edged sword of technology.
Despite these mixed feelings, I must admit this is a beautiful photo.
I love generative A.I. I use it all the time, professionally and recreationally. This photo is the first to make me sad that I have to question the veracity of everything now. Then I remember the amount of post-processing a photo like this usually goes through and feel less sad. Then I make an AI video of the river flowing and forget what I was worried about. But all these generations are kinda trash. Such wonderful, terrible things. Beautiful photo, though.
I am legion.
Such wonderful, terrible things.
/stares in smart glasses
At this point, you’re a towel.
WebP is a raster graphics file format developed by Google and intended as a replacement for the JPEG, PNG, and GIF file formats. It supports both lossy and lossless compression, as well as animation and alpha transparency. Google announced the WebP format in September 2010 and released the first stable version of its supporting library in April 2018.
The format has spotty support across applications, and some vulnerabilities discovered last year (notably in the libwebp library) required patching efforts. It’s not clear why you should do anything.
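The features described above (lossless mode, alpha transparency) can be exercised with a few lines of Pillow. A minimal sketch, assuming a Pillow build compiled with WebP support (the standard wheels include it):

```python
# Minimal sketch: round-tripping an RGBA image through WebP with Pillow.
# Assumes a Pillow build with WebP support (standard wheels include it).
from io import BytesIO

from PIL import Image

# A tiny RGBA image, so we can exercise alpha transparency
img = Image.new("RGBA", (8, 8), (255, 0, 0, 128))

# Lossless WebP preserves pixel data exactly; for lossy mode you would
# pass quality=... instead of lossless=True
buf = BytesIO()
img.save(buf, format="WEBP", lossless=True)

buf.seek(0)
roundtrip = Image.open(buf)
print(roundtrip.format)  # WEBP
```

Because the save was lossless, the decoded pixels match the originals exactly, semi-transparent alpha included.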
Violence against Nature.
Humans are really bad at determining whether a chat is with a human or a bot
ELIZA is not indistinguishable from a human at 22%.
Passing the Turing test stood largely out of reach for 70 years precisely because humans are pretty good at spotting counterfeit humans.
This is a monumental achievement.
I had no idea emo ducks admired humanity like that. Imma try and be better for y’all, bring that good bread. Wait, is bread bad for you now? I think I saw that in my feed while doom scrolling.
As long as no one messes with their open source contributions… (ditto for MS)
To the one person who upvoted this: We should be friends.
Aye, I’d wager Claude would be closer to 58-60%. And with the model-probing research Anthropic’s publishing, we could get to like ~63% on average in the next couple years? Those last few percent will be difficult for an indeterminate amount of time, I imagine. But who knows. We’ve already blown past a ton of “limitations” that I thought I might not live long enough to see.
Participants only said other humans were human 67% of the time.
On the other hand, the human participant scored 67 percent, while GPT-3.5 scored 50 percent, and ELIZA, which was pre-programmed with responses and didn’t have an LLM to power it, was judged to be human just 22 percent of the time.
The current gap is 54% to 67%, not 54% to 100%.
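The arithmetic behind that point, using the pass rates quoted above (taking the thread’s 54% figure as the top model’s rate and 67% as the human baseline):

```python
# Pass rates (percent of judges who called the subject human),
# as quoted in the thread
human_baseline = 67
top_model = 54  # the 54% figure the comment refers to
eliza = 22

# The meaningful gap is to the human baseline, not to a hypothetical 100%:
# even real humans only convinced judges 67% of the time.
gap = human_baseline - top_model
print(gap)  # 13

# The model's pass rate as a fraction of the human baseline
print(round(top_model / human_baseline, 2))  # 0.81
```

Measured against what humans themselves achieve, the model is already at roughly four-fifths of the baseline, which is the thread’s point about 54-to-67 versus 54-to-100.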
/looks around - That doesn’t seem to be the case. A.I. has a better chance of repositioning the social locus of control.
Sounds like he needs someone with training to help him through retraining his behavioral/thought patterns, something a functional social system would provide if those were as common as comment culture.
Mildreds out here, tryin ta take ‘Mildred’ back, and y’all throwin shade like people ain’t have to live with dumbass fuckin names like Mildred. Leave Mildreds alone! They’ve got enough to deal with. Wait, they chose Mildred, like as an adult? In the present? Pfft.