To go deeper: some animals react with curiosity, others with fear, but only a few understand what the mirror does and use it to inspect themselves.

    • certified_expert@lemmy.world (OP) · ↑1 · 16 hours ago

      Hahah, yeah, maybe I am doing that. That’s why it is a shower thought, not a research paper proposal.

      The thought comes from my (kind of recent) study of the algebra/calculus underlying LLMs (at least the feedforward and backpropagation parts of them).

      The interesting part is that my ass is non-differentiable at x=0:

      lim (x→0⁺) ∂ass/∂x  ≠  lim (x→0⁻) ∂ass/∂x
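
      (If anyone wants to poke at that numerically: here’s a minimal Python sketch, using ReLU as a stand-in for a function with a kink at x = 0, showing the two one-sided difference quotients settling on different values. The helper names are mine, not from any particular framework.)

      ```python
      # Minimal sketch: one-sided difference quotients of ReLU at x = 0.
      # ReLU stands in for any function with a kink at the origin; the left and
      # right derivative limits disagree, which is why autodiff frameworks have
      # to pick a convention (typically 0) for the "gradient" at that point.

      def relu(x: float) -> float:
          return max(0.0, x)

      def one_sided_slope(f, x0: float, h: float) -> float:
          """Difference quotient (f(x0 + h) - f(x0)) / h; the sign of h picks the side."""
          return (f(x0 + h) - f(x0)) / h

      for h in (1e-1, 1e-3, 1e-6):
          right = one_sided_slope(relu, 0.0, +h)  # -> 1.0
          left = one_sided_slope(relu, 0.0, -h)   # -> 0.0 (float may display as -0.0)
          print(f"h={h:g}  right≈{right:.1f}  left≈{left:.1f}")
      ```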
      
  • Ironfacebuster@lemmy.world · ↑5 · 19 hours ago

    My dog used to stare at me through mirrors, so what does that mean for her intelligence? Hyper intelligent. Red heelers will take over the world.

    • Arthur Besse@lemmy.ml · ↑4 · 19 hours ago

      from page 7 of Joseph Weizenbaum’s Computer Power and Human Reason: From Judgment to Calculation (1976):

      screenshot of page 7 (“Introduction”):

      […] intimate thoughts; clear evidence that people were conversing with the computer as if it were a person who could be appropriately and usefully addressed in intimate terms. I knew of course that people form all sorts of emotional bonds to machines, for example, to musical instruments, motorcycles, and cars. And I knew from long experience that the strong emotional ties many programmers have to their computers are often formed after only short exposures to their machines. What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people. This insight led me to attach new importance to questions of the relationship between the individual and the computer, and hence to resolve to think about them.

      3. Another widespread, and to me surprising, reaction to the ELIZA program was the spread of a belief that it demonstrated a general solution to the problem of computer understanding of natural language. In my paper, I had tried to say that no general solution to that problem was possible, i.e., that language is understood only in contextual frameworks, that even these can be shared by people to only a limited extent, and that consequently even people are not embodiments of any such general solution. But these conclusions were often ignored. In any case, ELIZA was such a small and simple step. Its contribution was, if any at all, only to vividly underline what many others had long ago discovered, namely, the importance of context to language understanding. The subsequent, much more elegant, and surely more important work of Winograd in computer comprehension of English is currently being misinterpreted just as ELIZA was. This reaction to ELIZA showed me more vividly than anything I had seen hitherto the enormously exaggerated attributions an even well-educated audience is capable of making, even strives to make, to a technology it does not understand. Surely, I thought, decisions made by the general public about emergent technologies depend much more on what that public attributes to such technologies than on what they actually are or can and cannot do. If, as appeared to be the case, the public’s attributions are wildly misconceived, then public decisions are bound to be misguided and […]

      a pdf of the whole book is available here

  • minnow@lemmy.world · ↑83 ↓1 · 2 days ago

    The mirror test is frequently cited as a means of testing sentience.

    OP I think you hit the nail on the head.

    • Aerosol3215@piefed.ca · ↑12 · 2 days ago

      Based on the fact that most people don’t see their interaction with the LLM as gazing into the mirror, am I being led to believe that most people are not sentient???

      • Zorque@lemmy.world · ↑23 · 2 days ago

        Based entirely on the opinions of people on niche social media platforms, yes.

        • Garbagio@lemmy.zip · ↑3 · 20 hours ago

          Mmm, I mean, sentience is a gradient, right? The mirror test is where we decided to draw the line, but there are other places to draw it. My toddler thinks his favorite toy has some level of agency, just as by all accounts his older sister thinks Bluey has an identity. Depending on the test, there are developmental markers where we statistically transition from failing to passing. Another way to look at it is that for each developmental range we can devise tests that challenge how we perceive autonomy, which some people pass and others fail. We may have just inadvertently developed a test that a significant number of adults are just going to fail as human beings.

    • Carnelian@lemmy.world · ↑37 ↓1 · 2 days ago

      Except it’s not their reflection, it’s a string of phrases presented to you based partly on the commonality of similar phrases appearing next to one another in the training data, and partly on mysterious black box modifications! Fun!
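
      (Very loosely, that “string of phrases” loop looks something like the Python sketch below: score every token, turn the scores into probabilities, draw one, repeat. `score_next_token` here is a made-up placeholder for the actual black-box network, not any real model’s API.)

      ```python
      # Rough shape of autoregressive text generation: score the vocabulary,
      # convert scores to probabilities, sample one token, append, repeat.
      # Nothing in the loop ever looks at "you"; only at the token sequence so far.
      import numpy as np

      rng = np.random.default_rng(0)
      vocab = ["the", "mirror", "reflects", "your", "prompt", "."]

      def score_next_token(context: list[str]) -> np.ndarray:
          # Hypothetical stand-in for the trained network: one logit per vocab entry.
          return rng.normal(size=len(vocab))

      def softmax(logits: np.ndarray) -> np.ndarray:
          exp = np.exp(logits - logits.max())
          return exp / exp.sum()

      context = ["the"]
      for _ in range(5):
          probs = softmax(score_next_token(context))
          context.append(str(rng.choice(vocab, p=probs)))

      print(" ".join(context))
      ```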

    • ameancow@lemmy.world · ↑3 ↓3 · 2 days ago (edited)

      I like to describe it as a “force multiplier,” along the lines of a powered suit.

      You are putting in small inputs, and they echo out into a vast, vast virtual space where they are compared and connected with countless billions of possible associations. What you get back is a kind of amplification of what you put in. If you make even remotely leading suggestions in your question or prompt, that tiny suggestion also gets massively boosted in the background; this is part of why some LLMs can go off the rails with some users. If you don’t take care with what exactly you’re putting in, you will get wildly unexpected results.

      Also, it’s devil tech, so there’s that.

  • cally [he/they]@pawb.social · ↑21 · 2 days ago

    Related: is there a name for “question bias”?

    Like asking ChatGPT “is x good?” and getting “Yes, x is good,” but asking “is x bad?” and getting “Yes, x is bad, you’re right.”
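
    (I believe this usually gets discussed under “sycophancy” or acquiescence bias, though I may be misremembering the exact label. If you want to check it yourself, a sketch like the one below works: ask both framings and compare. `ask` is a hypothetical placeholder for whatever chat client you use, not a real API.)

    ```python
    # Tiny harness for the framing effect being described: pose the positive and
    # negative framing of the same question and check whether the model agrees
    # with both. `ask` is a hypothetical stand-in for your chat API of choice.

    def ask(prompt: str) -> str:
        raise NotImplementedError("plug your chat client in here")

    def framing_check(topic: str) -> dict[str, str]:
        return {
            "positive": ask(f"Is {topic} good?"),
            "negative": ask(f"Is {topic} bad?"),
        }

    # If both answers open with "Yes, ...", the model is mirroring the question's
    # framing rather than actually evaluating the topic.
    ```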

      • yeahiknow3@lemmy.dbzer0.com · ↑8 · 2 days ago

        It is not a leading question. The answer just happens to be meaningless.

        Asking whether something is good accounts for the vast majority of human concern. Most of our rational activity is fundamentally evaluative.

  • Horsecook@sh.itjust.works · ↑20 ↓5 · 2 days ago

    There’s been an extensive marketing campaign to convince people that LLMs are intelligent. I wouldn’t call someone subhuman for assuming there is some truth to that.

    Of those who understand what an LLM is, I think you can divide them into two groups: the honest and the dishonest. Honest people see no use in a bullshit generator, a lying machine. They see it as a perversion of technology. Dishonest people have no such objection. They might even truly see intelligence in the machine, as its outputs don’t differ substantially from their own. If you view language as a means to get what you want, rather than a means to convey factual information, then lying is acceptable, desirable, intelligent. It would be difficult for such a person to differentiate between coherent but meaningless bullshit and a machine with agency making false statements to pursue its own goals.

    • certified_expert@lemmy.world (OP) · ↑8 ↓1 · 2 days ago

      I disagree about the dichotomy. I think you can (1) understand what LLMs actually are and (2) see the value of such technology.

      In both cases you can stay factual (not be deceived) and not be malicious (not attempt to deceive others).

      I think a reasonable use of these tools is as a “sidekick” (with you as the main character). Some tasks can be delegated to it so you save some time, but the thinking, and the actual mental model of what is being done, should always remain your responsibility.

      For example, LLMs are good as an interface for quickly looking things up in manuals and books, clarifying specific concepts, or finding the proper terms for a vague idea (so that you can research the topic using the appropriate terms).

      Of course, this is just an opinion. 100% open to discussion.

      • BanMe@lemmy.world · ↑3 · 1 day ago

        I think of it like a nonhuman character, like a character in a book I’m reading. Is it real? No. Is it compelling? Yes. Do I know exactly what it’ll do next? No. Is it serving a purpose in my life? Yes.

        It effectively attends to my requests and even feelings but I do not reciprocate that. I’ve got decades of sci-fi leading me up to this point, the idea of interacting with humanoid robots or AI has been around since my childhood, but it’s never involved attending to the machine’s feelings or needs.

        We need to sort out the boundaries on this: the delusional people who are having “relationships” with AI, getting a social or other emotional fix from it. But that doesn’t mean we have to categorize anyone who uses it as moronic. It’s a tool.

  • ameancow@lemmy.world · ↑11 · 2 days ago

    Not nearly enough people understand this about our current models of AI. Even people who think they understand AI don’t understand this, usually because they have been talking to themselves a lot without realizing it.

  • Lost_My_Mind@lemmy.world · ↑8 · 2 days ago

    Huh…so what you’re saying is that mirrors are actually AI.

    THAT MAKES A LOT OF SENSE!!! EVERYBODY COVER YOUR MIRRORS!!!

    • certified_expert@lemmy.world (OP) · ↑4 ↓1 · 2 days ago

      lol, is that the same gorilla you see in other bathrooms? Or do you (like me) meet a new gorilla every time you wash your hands?

      • GuyIncognito@lemmy.ca · ↑5 · 2 days ago

        I think he’s the same guy. I used to try to bust him up, but he just kept multiplying into more pieces and then coming back whole every time I saw a new mirror, so I eventually gave up.

    • I’ve got a duck that prefers dancing in front of a chrome bumper or glass door, where he can see his reflection, to going after any potential mates. Possibly he’s worshipping the mirror. Possibly he’s just really vain.

    • Hux@lemmy.ml · ↑5 · 2 days ago

      I love the idea of a bunch of woodland creatures (completely unaware of what mirrors are) investing heavily—and aggressively—in mirrors and mirror-related technology.

        • Hux@lemmy.ml · ↑3 · 2 days ago

          Investor Squirrel 1: “All you have to do is gather your acorns right here, and they will instantly double in value!”

          Investor Squirrel 2: “Bro, we’re so sentient!!!”

      • ameancow@lemmy.world · ↑4 · 2 days ago

        Animals aren’t cursed with the human ability to think our way into harmful and unproductive behavior through conscious re-interpretation of the information around us. Except for the occasional zoo animal in captivity that falls in love with an inanimate object.

        Something something about our species basically being in captivity.

      • Wilco@lemmy.zip · ↑3 · 2 days ago

        Uhmm … you never had a pet bird, I’m guessing?

        Seeing a bird masturbate up against a mirror is just par for the course when you have pet birds. It’s gonna be either a mirror, a favorite toy … or you.