We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which token (word or word fragment) will come next in a sequence, based on the data it’s been trained on.
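
Concretely, that guessing loop can be sketched in a few lines. Below is a toy version in Python, with a hand-built probability table standing in for the trained network; a real LLM runs the same loop, but its probabilities come from a neural network scoring tens of thousands of candidate tokens against the whole preceding text:

```python
import random

# Toy next-token generator: a tiny hand-made probability table stands in
# for the trained model.
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
    ("sat", "on"): {"the": 1.0},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def generate(context, steps):
    out = list(context)
    for _ in range(steps):
        probs = next_token_probs.get(tuple(out[-2:]))
        if probs is None:            # no known continuation: stop
            break
        tokens, weights = zip(*probs.items())
        out.append(random.choices(tokens, weights=weights)[0])  # guess the next token
    return " ".join(out)

print(generate(("the", "cat"), 4))   # e.g. "the cat sat on the mat"
```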

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

    • MangoCats@feddit.it · 4 hours ago

      AI is not actual intelligence. However, it can produce results better than a significant number of professionally employed people…

      I am reminded of when word processors came out and “administrative assistant” dwindled as a role in mid-level professional organizations. Most people - increasingly even medical doctors these days - do their own typing. The whole “typing pool” concept has pretty well dried up.

      • tartarin@reddthat.com · 2 hours ago

        However, there is a huge energy cost for that speed: statistically processing information to mimic intelligence takes far more power than a human brain consumes. Also, AI will be fine with well-defined tasks where innovation isn’t a requirement. As it stands today, AI is incapable of innovating.

  • scarabic@lemmy.world · 9 hours ago

    My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”

    It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???

    • fishos@lemmy.world · 1 hour ago

      I’ve been thinking this for a while. When people say “AI isn’t really that smart, it’s just doing pattern recognition”, all I can think is “don’t you realize that is one of the most commonly brought up traits concerning the human mind?” Pareidolia is literally the tendency to see faces in things because the human mind is constantly looking for the “face pattern”. Humans are at least 90% regurgitating previous data. It’s literally why you’re supposed to read and interact with babies so much. It’s how you learn “red glowy thing is hot”. It’s why education and access to knowledge is so important. It’s every annoying person who has endless “did you know?” facts. Science is literally “look at previous data, iterate a little bit, look at new data”.

      None of what AI is doing is truly novel or different. But we’ve placed the human mind on this pedestal despite all the evidence to the contrary. Eyewitness testimony, optical illusions, the hundreds of common fallacies we fall prey to… our minds are incredibly fallible and are really just a hodgepodge of processes masquerading as “intelligence”. We’re a bunch of instincts in a trenchcoat. To think AI isn’t or can’t reach our level is just hubris. A trait that is probably more unique to humans.

    • MangoCats@feddit.it · 4 hours ago

      If an IQ of 100 is average, I’d rate AI at 80 and down for most tasks (and of course it’s more complex than that, but as a starting point…)

      So, if you’re dealing with a filing clerk with a functional IQ of 75 in their role - AI might be a better experience for you.

      Some of the crap that has been published on the internet in the past 20 years comes in at an IQ level below 70, IMO - not saying I want more AI because it’s better, just that - relatively speaking - AI is better than some of the pay-for-clickbait garbage that came before it.

    • AppleTea@lemmy.zip · 5 hours ago

      Self-driving is only safer than people in absolutely pristine road conditions with no inclement weather and no construction. As soon as anything disrupts “normal” road conditions, self-driving becomes significantly more dangerous than a human driver.

      • MangoCats@feddit.it · 4 hours ago

        Human drivers are only safe when they’re not distracted, emotionally disturbed, intoxicated, or physically impaired (vision, muscle control, etc.). 1% of the population has epilepsy, and a large number of them are in denial or simply don’t realize that they have periodic seizures - until they wake up after their crash.

        So, yeah, AI isn’t perfect either - and it’s not as good as an “ideal” human driver, but at what point will AI be better than a typical/average human driver? Not today, I’d say, but soon…

    • Puddinghelmet@lemmy.world · 5 hours ago

      Human brains are much more complex than a mirroring script xD. AI and supercomputers have only a fraction of the neurons in your brain. But you’re right, for you it’s probably not much different than AI.

      • TangledHyphae@lemmy.world · 5 hours ago

        The human brain contains roughly 86 billion neurons, while GPT-3, the large language model behind the original ChatGPT, has 175 billion parameters (often loosely called “artificial neurons”, though parameters are really connection weights, closer to synapses than to neurons). While the model has more “neurons” in this sense, they are not the same as biological neurons, and the comparison is not straightforward.

        86 billion neurons in the human brain isn’t that much compared to some of the larger neural networks rumored to have 1.7 trillion parameters, though.
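
        For rough scale, some back-of-envelope figures (all are common published estimates; since parameters correspond to connections, synapses are the fairer biological comparison):

        ```python
        # Back-of-envelope comparison; all figures are rough published estimates.
        brain_neurons  = 86e9    # ~86 billion neurons
        brain_synapses = 100e12  # ~100 trillion synaptic connections
        params_175b    = 175e9   # the 175-billion-parameter model cited above
        params_1_7t    = 1.7e12  # the 1.7-trillion-parameter scale mentioned below

        print(f"synapses per neuron: ~{brain_synapses / brain_neurons:.0f}")      # ~1163
        print(f"synapses vs 175B params: ~{brain_synapses / params_175b:.0f}x")   # ~571x
        print(f"synapses vs 1.7T params: ~{brain_synapses / params_1_7t:.0f}x")   # ~59x
        ```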

        • AppleTea@lemmy.zip · 5 hours ago

          It’s when you start including structures within cells that the complexity moves beyond anything we’re currently capable of computing.

        • MangoCats@feddit.it · 4 hours ago

          But, are these 1.7-trillion-parameter networks available to drive YOUR car? Or are they time-shared among thousands or millions of users?

            • MangoCats@feddit.it · 4 hours ago

              Nah, I went to public high school - I got to see “the average” citizen who is now voting. While it is distressing that my ex-classmates now seem to control the White House, Congress and Supreme Court, what they’re doing with it is not surprising at all - they’ve been talking this shit since the 1980s.

  • Knock_Knock_Lemmy_In@lemmy.world · 11 hours ago

    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure.

    This is not a good argument.

    • fodor@lemmy.zip · 3 hours ago

      Actually it’s a very very brief summary of some philosophical arguments that happened between the 1950s and the 1980s. If you’re interested in the topic, you could go read about them.

    • Simulation6@sopuli.xyz · 10 hours ago

      The book The Emperor’s New Mind is old (1989), but it gave a good argument for why machine-based AI was not possible. Our minds work on a fundamentally different principle than Turing machines.

      • MangoCats@feddit.it · 4 hours ago

        Our minds work on a fundamentally different principle than Turing machines.

        Is that an advantage, or a disadvantage? I’m sure the answer depends on the setting.

      • Knock_Knock_Lemmy_In@lemmy.world · 9 hours ago

        It’s hard to see that book’s argument from the Wikipedia entry, but I don’t see it arguing that intelligence needs to have senses, flesh, nerves, pain and pleasure.

        It’s just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn’t imply computers can’t gain consciousness, or that they need flesh and senses to do so.

        • Simulation6@sopuli.xyz · 7 hours ago

          I think what he is implying is that current computer design will never be able to gain consciousness. Maybe a fundamentally different type of computer can, but is anything like that even on the horizon?

          • jwmgregory@lemmy.dbzer0.com · 6 hours ago

            possibly.

            current machines aren’t really capable of what we would consider sentience because of the von neumann bottleneck.

            simply put, computers consider memory and computation separate tasks leading to an explosion in necessary system resources for tasks that would be relatively trivial for a brain-system to do, largely due to things like buffers and memory management code. lots of this is hidden from the engineer and end user these days so people aren’t really super aware of exactly how fucking complex most modern computational systems are.

            this is why if, for example, i threw a ball at you, you would reflexively catch it, dodge it, or parry it; and your brain would do so for an amount of energy similar to that required to power a simple LED. this is a highly complex physics calculation run in a very short amount of time for an incredibly low amount of energy relative to the amount of information in the system. the brain is capable of this because your brain doesn’t store information in a chest and later retrieve it like contemporary computers do. brains are turing machines, they just aren’t von neumann machines. in the brain, information is stored… within the actual system itself. the mechanical operation of the brain is so highly optimized that it likely isn’t physically possible to make a much more efficient computer without venturing into the realm of strange quantum mechanics. even then, the verdict is still out on whether or not natural brains do something like this to some degree as well. we know a whole lot about the brain, but it seems some damnable incompleteness-theorem-adjacent effect prevents us from easily comprehending the actual mechanics of our own brains from inside the brain itself in a holistic manner.
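
            to make the “one shared memory” point concrete, here’s a toy von neumann machine in python. every instruction fetch and every operand load goes through the same memory structure, which is exactly the serialization the bottleneck refers to (all names illustrative):

            ```python
            # toy von neumann machine: program and data share one memory, and every
            # step of the fetch-decode-execute cycle round-trips through it.
            memory = {
                0: ("LOAD", "x"), 1: ("ADD", "y"), 2: ("STORE", "z"), 3: ("HALT", None),
                "x": 2, "y": 3, "z": 0,
            }

            pc, acc = 0, 0                # program counter, accumulator
            while True:
                op, arg = memory[pc]      # instruction fetch: memory round-trip #1
                pc += 1
                if op == "LOAD":
                    acc = memory[arg]     # operand fetch: memory round-trip #2
                elif op == "ADD":
                    acc += memory[arg]
                elif op == "STORE":
                    memory[arg] = acc     # write-back: yet another round-trip
                elif op == "HALT":
                    break

            print(memory["z"])            # 5; a brain has no such central choke point
            ```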

            that’s actually one of the things AI and machine learning might be great for. if it is impossible to explain the human experience from inside of the human experience… then we must build a non-human experience and ask its perspective on the matter - again, simply put.

    • bitjunkie@lemmy.world · 9 hours ago

      philosopher

      Here’s why: it’s a quote from a pure academic attempting to describe something practical.

      • Knock_Knock_Lemmy_In@lemmy.world · 9 hours ago

        The philosopher has made an unproven assumption. An erroneous logical leap. Something an academic shouldn’t do.

        Just because everything we currently consider conscious has a physical presence does not imply that consciousness requires a physical body.

  • merc@sh.itjust.works · 11 hours ago

    The other thing that most people don’t focus on is how we train LLMs.

    We’re basically building something like a spider tailed viper. A spider tailed viper is a kind of snake that has a growth on its tail that looks a lot like a spider. It wiggles it around so it looks like a spider, convincing birds they’ve found a snack, and when the bird gets close enough the snake strikes and eats the bird.

    Now, I’m not saying we’re building something that is designed to kill us. But, I am saying that we’re putting enormous effort into building something that can fool us into thinking it’s intelligent. We’re not trying to build something that can do something intelligent. We’re instead trying to build something that mimics intelligence.

    What we’re effectively doing is looking at this thing that mimics a spider, and trying harder and harder to tweak its design so that it looks more and more realistic. What’s crazy about that is that we’re not building this to fool a predator so that we’re not in danger. We’re not doing it to fool prey, so we can catch and eat them more easily. We’re doing it so we can fool ourselves.

    It’s like if, instead of a spider-tailed snake, a snake evolved a bird-like tail, and evolution kept tweaking the design so that the tail was more and more likely to fool the snake so it would bite its own tail. Except, evolution doesn’t work like that because a snake that ignored actual prey and instead insisted on attacking its own tail would be an evolutionary dead end. Only a truly stupid species like humans would intentionally design something that wasn’t intelligent but mimicked intelligence well enough that other humans preferred it to actual information and knowledge.

  • El Barto@lemmy.world · 12 hours ago

    I agreed with most of what you said, except the part where you say that real AI is impossible because it’s bodiless or “does not experience hunger” and other stuff. That part does not compute.

    A general AI does not need to be conscious.

    • NιƙƙιDιɱҽʂ@lemmy.world · 9 hours ago

      That, and there is literally no way to prove something is or isn’t conscious. I can’t even prove to another human being that I’m a conscious entity; you just have to assume I am because, from your own experience, you are, so therefore I must be too, right?

      Not saying I consider AI in its current form to be conscious; more so that the whole idea is just silly and unfalsifiable.

  • benni@lemmy.world · 17 hours ago

    I think we should start by not following this marketing speak. The sentence “AI isn’t intelligent” makes no sense. What we mean is “LLMs aren’t intelligent”.

    • innermachine@lemmy.world · 14 hours ago

      So couldn’t we say LLMs aren’t really AI? Cuz that’s what I’ve come to terms with.

      • TheGrandNagus@lemmy.world · 13 hours ago

        To be fair, the term “AI” has always been used in an extremely vague way.

        NPCs in video games, chess computers, or other such tech are not sentient and do not have general intelligence, yet we’ve been referring to those as “AI” for decades without anybody taking an issue with it.

        • MajorasMaskForever@lemmy.world · 13 hours ago

          I don’t think the term AI has been used in a vague way; it’s that there’s a huge disconnect between how the technical fields use it vs how the general populace does, and marketing groups heavily abuse that disconnect.

          Artificial has two meanings/use cases. One is to indicate something is fake (video game NPC, chess bots, vegan cheese). The end product looks close enough to the real thing that for its intended use case it works well enough. Looks like a duck, quacks like a duck, treat it like a duck even though we all know it’s a bunny with a costume on. LLMs on a technical level fit this definition.

          The other definition is man-made. Artificial diamonds are a great example of this: they’re still diamonds at the end of the day, they have the same chemical makeup and the same chemical and physical properties. The only difference is they came from a laboratory and were made by adult workers, vs being mined with child slave labor.

          My pet theory is that science fiction got the general populace to think of artificial intelligence using the “man-made” definition instead of the “fake” definition that these companies are using. In the past the subtle nuance never caused a problem, so we all just kinda ignored it.

          • El Barto@lemmy.world · 12 hours ago

            Dafuq? Artificial always means man-made.

            Nature also makes fake stuff. For example, fish that have an appendage that looks like a worm, to attract prey. It’s a fake worm. Is it “artificial”? Nope. Not man-made.

              • atrielienz@lemmy.world · 6 hours ago

                Word roots say they have a point though. Artifice, artificial, etc. I think the main problem with the way both of the people above you are using this terminology is that they’re focusing on the wrong word and how that word is being conflated with something it’s not.

                LLMs are artificial. They are a man-made thing that is intended to fool man into believing they are something they aren’t. What we’re meant to be convinced they are is sapiently intelligent.

                Mimicry is not sapience, and that’s where the argument for LLMs being real, honest-to-God AI falls apart.

                Sapience is missing from generative LLMs. They don’t actually think. They don’t actually have motivation. When we anthropomorphize them, we are fooling ourselves into thinking they are a man-made reproduction of us without the meat-flavored skin suit. That’s not what’s happening. But some of us are convinced that it is, or that it’s near enough that it doesn’t matter.

      • herrvogel@lemmy.world · 13 hours ago

        LLMs are one of the approximately one metric crap ton of different technologies that fall under the rather broad umbrella of the field of study called AI. The definition of what is and isn’t AI can be pretty vague, but I would argue that LLMs are definitely AI, because they exist with the express purpose of imitating human behavior.

        • El Barto@lemmy.world · 12 hours ago

          Huh? Since when is an AI’s purpose to “imitate human behavior”? AI is about solving problems.

          • herrvogel@lemmy.world · 11 hours ago

            It is and it isn’t. Again, the whole thing is super vague. Machine vision or pattern seeking algorithms do not try to imitate any human behavior, but they fall under AI.

            Let me put it this way: Things that try to imitate human behavior or intelligence are AI, but not all AI is about trying to imitate human behavior or intelligence.

            • El Barto@lemmy.world · 10 hours ago

              I can agree with “things that try to imitate human intelligence” but not “human behavior”. An Elmo doll laughs when you tickle it. That doesn’t mean it exhibits artificial intelligence.

            • Buddahriffic@lemmy.world · 10 hours ago

              From a programming pov, a definition of AI could be an algorithm or construct that can solve problems or perform tasks without the programmer specifically solving that problem or programming the steps of the task but rather building something that can figure it out on its own.

              Though a lot of game AIs don’t fit that description.
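
              A toy illustration of that definition, as a sketch (standard library only; the example task is mine): the first function has the answer hard-coded by the programmer, while the second, a one-neuron perceptron, is only given examples and figures the rule out on its own:

              ```python
              # Hand-coded: the programmer solved the problem directly.
              def and_hardcoded(a, b):
                  return a & b

              # "AI" by the definition above: a tiny perceptron that learns AND
              # from labeled examples, with no AND logic written by the programmer.
              data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
              w0, w1, bias, lr = 0.0, 0.0, 0.0, 0.1

              for _ in range(20):                    # a few passes over the examples
                  for (a, b), target in data:
                      out = 1 if w0 * a + w1 * b + bias > 0 else 0
                      err = target - out             # learn from each mistake
                      w0 += lr * err * a
                      w1 += lr * err * b
                      bias += lr * err

              learned = [1 if w0 * a + w1 * b + bias > 0 else 0 for (a, b), _ in data]
              print(learned, "vs hand-coded:", [and_hardcoded(a, b) for (a, b), _ in data])
              # -> [0, 0, 0, 1] vs hand-coded: [0, 0, 0, 1]; it found AND itself
              ```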

      • Melvin_Ferd@lemmy.world · 9 hours ago

        We can say whatever the fuck we want. This isn’t any kind of real issue. Think about it: if you went the rest of your life calling LLMs turkey butt fuck sandwiches, what changes? This article is just shit, and people looking to be outraged over something that other articles told them to be outraged about. This is all pure fucking modern yellow journalism. I hope turkey butt sandwiches replace every journalist. I’m so done with their crap.

    • undeffeined@lemmy.ml · 15 hours ago

      I make a point of always referring to it as an LLM, exactly to make the point that it’s not an intelligence.

  • Basic Glitch@sh.itjust.works · 12 hours ago

    It’s only as intelligent as the people that control and regulate it.

    Given all the documented instances of Facebook and other social media using subliminal emotional manipulation, I honestly wonder if the recent cases of AI chat induced psychosis are related to something similar.

    Like we know they’re meant to get you to continue using them, which is itself a bit of psychological manipulation. How far does it go? Could there also be things like using subliminal messaging/lighting? This stuff is all so new and poorly understood, but that usually doesn’t stop these sacks of shit from moving full speed with implementing this kind of thing.

    It could be that certain individuals have unknown vulnerabilities that make them more susceptible to psychosis due to whatever manipulations are used to make people keep using the product. Maybe they’re doing some things to users that are harmful, but didn’t seem problematic during testing?

    Or equally as likely, they never even bothered to test it out, just started subliminally fucking with people’s brains, and now people are going haywire because a bunch of unethical shit heads believe they are the chosen elite who know what must be done to ensure society is able to achieve greatness. It just so happens that “what must be done,” also makes them a ton of money and harms people using their products.

    It’s so fucking absurd to watch the same people jamming unregulated AI and automation down our throats while simultaneously forcing traditionalism and a legal system inspired by Catholic integralist belief on society.

    If you criticize the lack of regulations in the wild west of technology policy, or even suggest just using a little bit of fucking caution, then you’re trying to hold back progress.

    However, all non-tech related policy should be based on ancient traditions and biblical text with arbitrary rules and restrictions that only make sense and benefit the people enforcing the law.

    What a stupid and convoluted way to express that you just don’t like evidence-based policy or using critical thinking skills, and instead prefer to navigate life by relying on the basic signals from your lizard brain. Feels good, so keep moving toward it; feels bad, so run away; feels scary, so attack!

    Such is the reality of the chosen elite, steering us towards greatness.

    What’s really “funny” (in a we’re all doomed sort of way) is that while writing this all out, I realized the “chosen elite” controlling tech and policy actually perfectly embody the current problem with AI and bias.

    Rather than relying on intelligence to analyze a situation in the present, and create the best and most appropriate response based on the information and evidence before them, they default to a set of preconceived rules written thousands of years ago with zero context for the current reality/environment and the problem at hand.

    • MangoCats@feddit.it · 4 hours ago

      A gun isn’t dangerous if you handle it correctly.

      Same for an automobile, or aircraft.

      If we build powerful AIs and put them “in charge” of important things without proper handling, they can crash into crowds of people - and already have - significantly injuring them, even killing some.

  • Bogasse@lemmy.ml · 22 hours ago

    The idea that RAGs “extend their memory” is also complete bullshit. We literally just finally built a working search engine, but instead of giving it a nice interface, we only let chatbots use it.
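
    That’s easy to see from how a RAG pipeline is wired. A minimal sketch (the toy retrieve() and stubbed llm() below are illustrative stand-ins, not any particular library): the model’s weights are untouched, and search hits are just pasted into the prompt as text:

    ```python
    # Minimal RAG sketch: a plain search step runs first, and the hits are
    # pasted into the prompt. Nothing about the model's "memory" changes.
    docs = [
        "The warranty covers parts and labor for 24 months.",
        "Returns are accepted within 30 days of purchase.",
        "Support is available on weekdays from 9 to 5.",
    ]

    def retrieve(query, k=2):
        """Toy search engine: rank documents by keyword overlap with the query."""
        words = set(query.lower().split())
        return sorted(docs, key=lambda d: -len(words & set(d.lower().split())))[:k]

    def llm(prompt):
        """Stub standing in for a chat-model call."""
        return f"(model answers from the {len(prompt)} chars of prompt it was handed)"

    def answer(query):
        context = "\n".join(retrieve(query))
        prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
        return llm(prompt)   # the only "memory" involved is this prompt string

    print(answer("how long is the warranty"))
    ```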

  • fodor@lemmy.zip · 19 hours ago

    Mind your pronouns, my dear. “We” don’t do that shit because we know better.

  • aceshigh@lemmy.world · 17 hours ago

    I’m neurodivergent. I’ve been working with AI to help me learn about myself and how I think, and it’s been exceptionally helpful. A human wouldn’t have been able to help me, because I don’t use my senses or emotions like everyone else, and I didn’t know it… AI excels at mirroring and support, which is exactly what was missing from my life. I can see how this could go very wrong with certain personalities…

    E: I use it to give me ideas that I then test out solo.

    • PushButton@lemmy.world · 20 hours ago

      That sounds fucking dangerous… You really should consult a HUMAN expert about your problem, not an algorithm made to please the interlocutor…

      • SkyeStarfall@lemmy.blahaj.zone · 15 hours ago

        I mean, sure, but that’s really easier said than done. Good luck getting good mental healthcare for cheap in the vast majority of places

    • Snapz@lemmy.world · 22 hours ago

      This is very interesting… because the general saying is that AI is convincing to non-experts in the field it’s speaking about. So in your specific case, you are actually saying that you aren’t an expert on yourself, therefore the AI’s assessment is convincing to you. Not trying to upset you; it’s genuinely fascinating how that theory holds true here as well.

      • aceshigh@lemmy.world · 17 hours ago

        I use it to give me ideas that I then test out. It’s fantastic at nudging me in the right direction, because all that it’s doing is mirroring me.

        • innermachine@lemmy.world · 14 hours ago

          If it’s just mirroring you, one could argue you don’t really need it? Not trying to be a prick; if it is a good tool for you, use it! It sounds to me as though you’re using it as a sounding board, and that’s just about the perfect use for an LLM if I could think of any.

      • Liberteez@lemm.ee · 21 hours ago

        I did this for a few months when it was new to me, and still go to it when I am stuck pondering something about myself. I usually move on from the conversation by the next day, though, so it’s just an inner dialogue enhancer

  • bbb@sh.itjust.works · 1 day ago

    This article is written in such a heavy ChatGPT style that it’s hard to read. Asking a question and then immediately answering it? That’s AI-speak.

    • JackbyDev@programming.dev · 22 hours ago

      Asking a question and then immediately answering it? That’s AI-speak.

      HA HA HA HA. I UNDERSTOOD THAT REFERENCE. GOOD ONE. 🤖

    • sobchak@programming.dev · 1 day ago

      And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.

      • bbb@sh.itjust.works · 1 day ago

        “…” (Unicode U+2026 Horizontal Ellipsis) instead of “...” (three full stops), and using them unnecessarily, is another thing I rarely see from humans.

        Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character. I might be wrong on this one.
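
        For anyone who wants to check which form a text actually contains (the two render almost identically, hence the confusion above), a quick Python check:

        ```python
        # U+2026 is one character; "..." is three. They look alike but differ in code.
        text = "Wait… hold on..."
        print("\u2026" in text)            # True -> single-character ellipsis present
        print("..." in text)               # True -> three ASCII full stops present
        print([hex(ord(c)) for c in "…"])  # ['0x2026']
        ```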

        • Mr. Satan@lemmy.zip · 22 hours ago

          Am I… AI? I do use ellipses and (what I now see is) en dashes for punctuation. Mainly because they are longer than hyphens and look better in a sentence. Em dash looks too long.

          However, that’s on my phone. On a normal keyboard I use 3 periods and 2 hyphens instead.

          • tmpod@lemmy.pt · 12 hours ago

            I’ve been getting into the habit of also using em/en dashes on the computer through the Compose key. Very convenient for typing arrows, inequality and other math signs, etc. I don’t use it for ellipses because they’re neither visually clearer nor shorter to type.

          • Sternhammer@aussie.zone · 20 hours ago

            I’ve long been an enthusiast of unpopular punctuation—the ellipsis, the em-dash, the interrobang‽

            The trick to using the em-dash is not to surround it with spaces, which tend to break up the text visually. So, this feels good—to me—whereas this — feels unpleasant. I learnt this approach from reading typographer Erik Spiekermann’s book, Stop Stealing Sheep & Find Out How Type Works.

            • Mr. Satan@lemmy.zip · 18 hours ago

              My language doesn’t really have hyphenated words or different dashes. It’s mostly punctuation within a sentence. As such there are almost no cases where one encounters a dash without spaces.

              • Sternhammer@aussie.zone · 6 hours ago

                Sounds wonderful. I recently had my writing—which is liberally sprinkled with em-dashes—edited to add spaces to conform to the house style and this made me sad.

                I also feel sad that I failed to (ironically) mention the under-appreciated semicolon; punctuation that is not as adamant as a full stop but more assertive than a comma. I should use it more often.

        • sqgl@sh.itjust.works · 1 day ago

          Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character.

          Not on my phone it didn’t. It looks as you intended it.

  • psycho_driver@lemmy.world · 1 day ago

    Hey, AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies’ websites pre-renewal, trying to get the best rate, and it spit up a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal for less than $700, and it now says I’m paid in full for the six-month period. It’s been days now with no follow-up… I’m pretty sure AI snuck that one through for me.
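
    For the curious, that NaN quote is plausibly a textbook floating-point bug. A quick sketch of how a missing rate upstream can sail through naive validation (illustrative only, obviously not the insurer’s actual code):

    ```python
    # NaN propagates through arithmetic without raising, and fails every
    # comparison, so naive "is the premium in range?" checks pass silently.
    rate = float("nan")        # e.g. a failed rate lookup upstream
    premium = rate * 6         # still NaN: no exception anywhere
    print(premium)             # nan
    print(premium > 700)       # False -> "charge more" branch never triggers
    print(premium < 0)         # False -> "invalid amount" check passes too
    print(premium == premium)  # False -> NaN isn't even equal to itself
    ```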

    • laranis@lemmy.zip · 1 day ago

      Be careful… If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you’d see some money but at that point half of it goes to the lawyer and you’re still screwed.

      • psycho_driver@lemmy.world · 11 hours ago

        Oh I’m aware of the potential pitfalls but it’s something I’m willing to risk to stick it to insurance. I wouldn’t even carry it if it wasn’t required by law. I have the funds to cover what they would cover.

        • JcbAzPx@lemmy.world · 11 hours ago

          If you have the funds, you could self-insure. You’d need to look up the details for your jurisdiction, but the gist of it is you keep the required coverage amount in an account that you never touch until you need to pay out.

          • psycho_driver@lemmy.world · 8 hours ago

            Hmm I have daydreamed about this scenario. I didn’t realize it was a thing. Thanks, I’ll check into it, though I wouldn’t doubt if it’s not a thing in my dystopian red flyover state.

            Edit: Yeah, you have to be the registered owner of 25 or more vehicles to qualify for self insurance in my state. So, dealers and rich people only, unfortunately.

      • Blue_Morpho@lemmy.world · 15 hours ago

        AI didn’t write the insurance policy. It only helped him search for the best deal. That’s like saying your insurance company will cancel you because you used a phone to comparison shop.

  • Imgonnatrythis@sh.itjust.works · 1 day ago

    Good luck. Even David Attenborough can’t help but anthropomorphize. People will feel sorry for a picture of a dot separated from a cluster of other dots. The play by AI companies is that it’s human nature for us to want to give just about every damn thing human qualities. I’d explain more, but as I write this my smoke alarm is beeping a low battery warning, and I need to go put the poor dear out of its misery.

    • audaxdreik@pawb.social · 1 day ago

      This is the current problem with “misalignment”. It’s a real issue, but it’s not “AI lying to prevent itself from being shut off” as a lot of articles tend to anthropomorphize it. The issue is (generally speaking) it’s trying to maximize a numerical reward by providing responses to people that they find satisfactory. A legion of tech CEOs are flogging the algorithm to do just that, and as we all know, most people don’t actually want to hear the truth. They want to hear what they want to hear.
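
      A toy version of that objective makes the point concrete (the scoring function below is a stand-in for a learned reward model, purely illustrative): notice that truthfulness appears nowhere in it, only proxies for what raters tend to like:

      ```python
      # Stand-in "reward model": trained on human ratings, so it scores how
      # satisfying a reply feels, never whether the reply is true.
      def reward(reply):
          flattery = reply.lower().count("great question")
          confident = 0 if "we don't know" in reply else 1
          return 2.0 * flattery + 3.0 * confident + 0.1 * len(reply.split())

      candidates = [
          "Great question! Here is the definitive answer you were hoping for.",
          "Honestly, the evidence is mixed and the real answer is: we don't know.",
      ]

      # The optimization pressure keeps whatever scores highest with raters.
      print(max(candidates, key=reward))   # the flattering, confident reply wins
      ```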

      LLMs are a poor stand-in for actual AI, but they are at least proficient at the actual thing they are doing. Which leads us to things like this: https://www.youtube.com/watch?v=zKCynxiV_8I

      • El Barto@lemmy.world · 11 hours ago

        The dot does not care. It can’t even care. It doesn’t even know it exists. It can’t know shit.

    • mienshao@lemm.ee · 1 day ago

      David Attenborough is also 99 years old, so we can just let him say things at this point. Doesn’t need to make sense, just smile and nod. Lol