

In my few experiments with ChatGPT, I found it to be disgustingly sycophantic. I have no trouble believing that it could easily amplify delusions of grandeur.
the Right couldn’t ridicule compassion
Have you been paying attention to what’s going on with the right? They’re literally framing empathy as a sin now.
all three Star Wars movies
Lol. Yeah, that’s probably a good decision.
I think that’s a reference to Attack of the Clones.
Correction: These are concentration camps. They are the precursor to death camps.
Meta vs the combined DMCA lobby
One of those fights where I’m rooting for both sides to lose.
I didn’t even think about the wording, but you’re right. The framing is messed up.
A #MeToo movement in Japan is probably long overdue.
I find myself in an interesting situation because I want to abolish copyright and institute UBI. I don’t really think you can “steal” images on the internet, but seeing OpenAI whine about intellectual property now does bring some schadenfreude.
It’s amazing how quickly they were able to burn all of their credibility in the past few years.
I’m using Proton/WINE/GNU/Linux.
I use Proton for Steam and Bottles for everything else. I was using WINE as a catchall term, since all of these technologies are fundamentally built on top of it.
I can only speak from personal experience, but NVIDIA with Wayland has been an absolute mess. My system seems to be stable right now, but there are still weird graphical glitches and artifacts when running games through WINE. Every third or fourth driver update seems to break something.
Also, I’d generally be skeptical of claims that the drivers work well due to “benchmarks.” A benchmark isn’t going to tell you that, for example, certain window elements fail to render entirely until you drag the mouse over them, at which point they suddenly flicker in.
The reason you don’t really see animals that can photosynthesize (outside of microbes) is that you don’t actually get that much energy per unit of area. Think about how much area a cow has to graze vs the surface area of the cow itself. And much of the cow’s surface isn’t even facing the sun.
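A quick back-of-envelope sketch of the argument above, with every number a rough order-of-magnitude assumption (irradiance, efficiency, surface area, and metabolic rate are all plugged-in estimates, not measured values):

```python
# All figures below are rough assumed estimates for illustration only.
SOLAR_IRRADIANCE = 250      # W/m^2, rough daytime average at ground level
PHOTOSYNTHESIS_EFF = 0.03   # ~3%, generous for real-world photosynthesis
COW_SURFACE_AREA = 5.0      # m^2, rough total skin area of a cow
SUNLIT_FRACTION = 0.5       # only part of the body faces the sun at once
COW_METABOLIC_POWER = 800   # W, rough resting metabolic rate of a cow

# Power a hypothetical photosynthesizing cow could harvest through its skin.
harvested = (SOLAR_IRRADIANCE * PHOTOSYNTHESIS_EFF
             * COW_SURFACE_AREA * SUNLIT_FRACTION)

print(f"Photosynthesis could supply ~{harvested:.0f} W")
print(f"That covers ~{100 * harvested / COW_METABOLIC_POWER:.1f}% of its needs")
```

Even with generous assumptions, skin-area photosynthesis covers only a few percent of the animal's energy budget, which is why the grazing area has to be so much larger than the grazer.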
Don’t use screen, but I do use tmux pretty heavily.
I don’t think the people complaining about Firefox’s AI integration are using or paying attention to Chrome.
No one’s said it yet? I guess I’ll do it: Outer Wilds.
Eliminating vehicle deaths by making travel impossible
And here we see decades of automobile industry propaganda in action. There is only the car, or no mobility whatsoever. You remember how everybody was just trapped inside their houses for centuries until the Ford factories started cranking out Model Ts?
Cars will never be a sustainable solution to mass transit. The immense amount of waste in materials, energy, and land use will not be offset with AVs. I don’t think AVs are a bad idea in and of themselves. But, as the article points out, they’re not going to solve any major problems.
I had never really considered how induced demand would apply to AVs…
Lol, weaponizing toxic masculinity for climate justice, are we?
I find it rather disingenuous to summarize the previous poster’s comment as a “Roko’s basilisk” scenario: that’s intentionally picking a ridiculous argument to characterize the other side of the debate. I think they were pretty clear about actual threats (some more plausible than others, IMO).
I also find it interesting that you so confidently state that “AI doesn’t get better,” under the assumption that our current deep learning architectures are the only way to build AI systems.
I’m going to make a pretty bold statement: AGI is inevitable, assuming human technological advancement isn’t halted altogether. Why can I so confidently state this? Because we already have GI without the A. To say that it is impossible is, to me, equivalent to arguing that there is something magical about the human brain that technology could never replicate. But brains aren’t magic; they’re incredibly sophisticated electrochemical machines. It is only a matter of time before we find a way to replicate “general intelligence,” whether it’s through new algorithms, new computing architectures, or even synthetic biology.
My first instinct was also skepticism, but it did make some sense the more I thought about it.
An algorithm doesn’t need to be sentient to have “preferences.” In this case, the preferences are just the biases in the training set. The LLM prefers sentences that express certain attitudes based on the corpus of text processed during training. And now, the prompt is enforcing sequences of text that deviate wildly from that preference.
TL;DR: There’s a conflict between the prompt and the training material.
Now, I do think that framing this as the model “circumventing” instructions is a bit hyperbolic. It gives the strong impression of planned action and feeds into the idea that language models are making real decisions (which I personally do not buy into).
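The “preference = training bias” point can be shown with a toy model. This is a deliberately tiny, hypothetical sketch (the corpus and scoring function are made up for illustration; real LLMs are vastly more complex than bigram counts), but it captures the conflict: the model assigns high probability to sequences matching its training bias and near-zero probability to sequences a prompt might force:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for biased training data:
# nearly every sentence expresses the same positive attitude.
corpus = [
    "the movie was great",
    "the food was great",
    "the movie was wonderful",
    "the food was wonderful",
    "the service was great",
]

# Count bigram frequencies over the corpus.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def sequence_score(words):
    """Product of conditional bigram probabilities; 0 if a pair was never seen."""
    score = 1.0
    for a, b in zip(words, words[1:]):
        total = sum(bigrams[a].values())
        score *= bigrams[a][b] / total if total else 0.0
    return score

# The model "prefers" continuations that match the training bias...
print(sequence_score("the movie was great".split()))
# ...and assigns zero probability to an attitude the corpus never expressed.
print(sequence_score("the movie was terrible".split()))
```

No sentience required: the “preference” is just which sequences the counts make likely, and a prompt demanding the unlikely sequence is exactly the conflict described above.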