• FMT99@lemmy.world · 1 day ago

    Did the author think ChatGPT is in fact an AGI? It’s a chatbot. Why would it be good at chess? It’s like saying an Atari 2600 running a dedicated chess program can beat Google Maps at chess.

      • FMT99@lemmy.world · 13 hours ago

        Hey I didn’t say anywhere that corporations don’t lie to promote their product did I?

    • Empricorn@feddit.nl · 14 hours ago

      You’re not wrong, but keep in mind ChatGPT advocates, including the company itself, are referring to it as AI, including in marketing. They’re saying it’s a complete, self-learning, constantly-evolving Artificial Intelligence that has been improving itself since release… And it loses to a 4KB video game program from 1979 that can only “think” 2 moves ahead.

      • FMT99@lemmy.world · 13 hours ago

        That’s totally fair. The company is obviously lying, excuse me, “marketing”, to promote its product.

    • snooggums@lemmy.world · 1 day ago

      AI, including ChatGPT, is being marketed as super awesome at everything, which is why it and similar AI are being forced into absolutely everything and sold as a replacement for people.

      Something marketed as AGI should be treated as AGI when proving it isn’t AGI.

      • pelespirit@sh.itjust.works · 1 day ago

        Not to help the AI companies, but why don’t they program them to look up math programs and outsource chess to other programs when they’re asked for that stuff? It’s obvious they’re shit at it, so why do they answer anyway? It’s because they’re programmed by know-it-all programmers, isn’t it?

        • ImplyingImplications@lemmy.ca · 1 day ago

          why don’t they program them

          AI models aren’t programmed traditionally. They’re generated by machine learning. Essentially the model is given test prompts and then given a rating on its answer. The model’s calculations will be adjusted so that its answer to the test prompt will be closer to the expected answer. You repeat this a few billion times with a few billion prompts and you will have generated a model that scores very high on all test prompts.
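          A toy, one-parameter sketch of that adjust-toward-the-expected-answer loop (real models use gradient descent over billions of parameters, so everything here is purely illustrative):

```python
import random

def train_step(w, x, expected, lr=0.1):
    """Nudge the 'model' (a single weight) toward the expected answer."""
    prediction = w * x             # stand-in for a real forward pass
    error = expected - prediction  # the "rating" of the model's answer
    return w + lr * error * x      # gradient-style adjustment

# The real thing repeats this a few billion times; 200 passes suffice here.
w = random.random()
for _ in range(200):
    for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:  # toy task: learn y = 2x
        w = train_step(w, x, y)
print(round(w, 2))  # 2.0
```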

          Then someone asks it how many R’s are in strawberry and it gets the wrong answer. The only way to fix this is to add that as a test prompt and redo the machine learning process which takes an enormous amount of time and computational power each time it’s done, only for people to once again quickly find some kind of prompt it doesn’t answer well.

          There are already AI models that play chess incredibly well. Using machine learning to solve a complex problem isn’t the issue. It’s trying to get one model to be good at absolutely everything.

        • fmstrat@lemmy.nowsci.com · 18 hours ago

          This is where MCP comes in. It’s a protocol for LLMs to call standard tools. Basically the LLM would figure out the tool to use from the context, then figure out the order of parameters from those the MCP server says is available, send the JSON, and parse the response.
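          A rough sketch of the request/response shape involved, following the MCP `tools/call` convention; the chess tool name and its argument here are invented for the example:

```python
import json

# Hypothetical tool call: the client sends this on the LLM's behalf once the
# model has picked a tool from the server's advertised list and filled in the
# arguments it inferred from context. The tool name "best_move" is made up.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "best_move",
        "arguments": {"fen": "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"},
    },
}
payload = json.dumps(request)

# The server's JSON response is parsed and handed back to the model as context.
response = json.loads(
    '{"jsonrpc": "2.0", "id": 1,'
    ' "result": {"content": [{"type": "text", "text": "e2e4"}]}}'
)
best_move = response["result"]["content"][0]["text"]
print(best_move)  # e2e4
```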

        • CileTheSane@lemmy.ca · 14 hours ago

          why don’t they program them to look up math programs and outsource chess to other programs when they’re asked for that stuff?

          Because the AI doesn’t know what it’s being asked; it’s just an algorithm guessing what the next word in a reply is. It has no understanding of what the words mean.

          “Why doesn’t the man in the Chinese room just use a calculator for math questions?”

        • rebelsimile@sh.itjust.works · 1 day ago

          Because they’re fucking terrible at designing tools to solve problems. They are obviously less and less good at pretending this is an omnitool that can do everything with perfect coherency (and if it isn’t working right, it’s because you’re not believing or paying hard enough).

          • MrJgyFly@lemmy.world · 1 day ago

            Or they keep telling you that you just have to wait it out. It’s going to get better and better!

        • Pamasich@kbin.earth · 17 hours ago

          why don’t they program them to look up math programs and outsource chess to other programs when they’re asked for that stuff?

          They will, when it makes sense for what the AI is designed to do. For example, ChatGPT can outsource image generation to an AI dedicated to that. It also used to calculate math using python for me, but that doesn’t seem to happen anymore, probably due to security issues with letting the AI run arbitrary python code.

          ChatGPT however was not designed to play chess, so I don’t see why OpenAI should invest resources into connecting it to a chess API.

          I think especially since adding custom GPTs, adding this kind of stuff has become kind of unnecessary for base ChatGPT. If you want a chess engine, get a GPT which implements a Stockfish API (there seem to be several GPTs that do). For math, get the Wolfram GPT which uses Wolfram Alpha’s API, or a different powerful math GPT.

        • veroxii@aussie.zone · 1 day ago

          They are starting to do this. Most new models support function calling and can generate code to come up with math answers, etc.
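          A minimal sketch of what routing the math out of the model looks like: in a real system the expression would come from the model’s tool-call output (it’s hard-coded here), and a tiny AST walker does the actual arithmetic instead of next-token guessing:

```python
import ast
import operator

# Supported binary operators for the toy calculator.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a basic arithmetic expression without exec/eval."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

# Instead of the model guessing "12345 * 6789" token by token:
print(safe_eval("12345 * 6789"))  # 83810205
```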

          • Pamasich@kbin.earth · 17 hours ago

            I don’t pay for ChatGPT and just used the Wolfram GPT. They made the custom GPTs non-paid at some point.

        • NoiseColor@lemmy.world · 1 day ago

          …or a simple counter to count the R’s in “strawberry”, because that’s more difficult for an LLM than one might think. And they are starting to do this now.
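          For comparison, the deterministic counter the model struggles to emulate is a one-liner:

```python
word = "strawberry"
print(word.count("r"))  # 3
```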

        • four@lemmy.zip · 1 day ago

          I think they’re trying to do that. But AI can still fail at that lol

        • MajorasMaskForever@lemmy.world · 24 hours ago

          From a technology standpoint, nothing is stopping them. From a business standpoint: hubris.

          To put time and effort into creating traditional logic-based algorithms to compensate for this generic math model would be to admit what mathematicians and scientists have known for centuries: models are good at finding patterns, but they do not explain why a relationship exists (if it exists at all). The technology is fundamentally flawed for the use cases OpenAI is trying to claim it can be used in, and programming around it would be to acknowledge that.

      • NoiseColor@lemmy.world · 1 day ago

        I don’t think AI is being marketed as awesome at everything. It’s got obvious flaws. Right now it’s not good at stuff like chess, probably not even tic-tac-toe. It’s a language model; it’s hard for it to keep track of the playing field. But AI is in development; it might not need much to start playing chess.

        • vinnymac@lemmy.world · 1 day ago

          What the tech is being marketed as and what it’s capable of are not the same, and likely never will be. In fact all things are very rarely marketed how they truly behave, intentionally.

          Everyone is still trying to figure out what these Large Reasoning Models and Large Language Models are even capable of; Apple, one of the largest companies in the world, just released a white paper this past week describing the “illusion of reasoning”. If it takes a scientific paper to understand what these models are and are not capable of, I assure you they’ll be selling snake oil for years after we fully understand every nuance of their capabilities.

          TL;DR: Rich folks want them to be everything, so they’ll be sold as capable of everything until we repeatedly demonstrate that they aren’t.

          • NoiseColor@lemmy.world · 1 day ago

            I think in many cases people intentionally or unintentionally disregard the time component here. AI is in development. What is being marketed, just like in the stock market, is a piece of the future. I don’t expect the models I use to be perfect and never make mistakes, so I use them accordingly. They are useful for what I use them for, and I wouldn’t use them for chess. I don’t expect that laundry detergent to be as perfect as in the commercial either.

        • BassTurd@lemmy.world · 1 day ago

          Marketing does not mean functionality. AI is absolutely being sold to the public and enterprises as something that can solve everything. Obviously it can’t, but it’s being sold that way. I would bet the average person would be surprised by this headline solely on what they’ve heard about the capabilities of AI.

          • NoiseColor@lemmy.world · 1 day ago

            I don’t think anyone is so stupid as to believe current AI can solve everything.

            And honestly, I haven’t seen any marketing material that claims that.

            • BassTurd@lemmy.world · 1 day ago

              You are both completely overestimating the intelligence level of “anyone” and not living in the same AI-marketed universe as the rest of us. People are stupid. Really stupid.

              • NoiseColor@lemmy.world · 1 day ago

                I don’t understand why this is so important. Marketing is all about exaggerating; why expect something different here?

                • BassTurd@lemmy.world · 23 hours ago

                  It’s not important. You said AI isn’t being marketed to be able to do everything. I said yes it is. That’s it.

                  • NoiseColor@lemmy.world · 19 hours ago

                    My point is people aren’t expecting AGI. People have already tried these tools and understand what their general capabilities are; businesses even more so. I don’t think exaggerating the capabilities is such an overarching issue that anyone could call the whole thing a scam.

            • petrol_sniff_king@lemmy.blahaj.zone · 1 day ago

              The Zoom CEO, that is, the video calling software, wanted to train AIs on your work emails and chat messages to create AI personalities you could send to the meetings you’re paid to sit through, while you drink Corona on the beach and receive a “summary” later.

              The Zoom CEO, that is the video calling software, seems like a pretty stupid guy?

              Yeah. Yeah, he really does. Really… fuckin’… dumb.

              • jubilationtcornpone@sh.itjust.works · 1 day ago

                Same genius who forced all his own employees back into the office. An incomprehensibly stupid maneuver by an organization that literally owes its success to people working from home.

        • 4am@lemm.ee · 1 day ago

          Really then why are they cramming AI into every app and every device and replacing jobs with it and claiming they’re saving so much time and money and they’re the best now the hardest working most efficient company and this is the future and they have a director of AI vision that’s right a director of AI vision a true visionary to lead us into the promised land where we will make money automatically please bro just let this be the automatic money cheat oh god I’m about to

          • NoiseColor@lemmy.world · 1 day ago

            Those are two different things.

            1. They are cramming AI everywhere because nobody wants to miss the boat and because it plays well in the stock market.

            2. The people claiming it’s awesome, and that they’re doing I-don’t-know-what with it, replacing people, are mostly influencers and a few deluded people.

            AI can help people in many different roles today, so it makes sense to use it. Even in roles where it’s not particularly useful, it makes sense to prepare for when it is.

      • whaleross@lemmy.world · 22 hours ago

        A toddler can pretend to be good at chess but anybody with reasonable expectations knows that they are not.

        • MelodiousFunk@startrek.website · 21 hours ago

          Plot twist: the toddler has a multi-year marketing push worth tens if not hundreds of millions, which convinced a lot of people who don’t know the first thing about chess that it really is very impressive, and all those chess-types are just jealous.

          • xavier666@lemm.ee · 19 hours ago

            Have you tried feeding the toddler gallons of baby-food? Maybe then it can play chess

              • xavier666@lemm.ee · 17 hours ago

                “If we have to ask every time before stealing a little baby food, our morbidly obese toddler cannot survive”

    • iAvicenna@lemmy.world · 21 hours ago

      Well, so much hype has been generated around ChatGPT being close to AGI that it now makes sense to ask questions like “can ChatGPT prove the Riemann hypothesis?”

    • suburban_hillbilly@lemmy.ml · 1 day ago

      Most people do. It’s just called AI in the media everywhere and marketing works. I think online folks forget that something as simple as getting a Lemmy account by yourself puts you into the top quintile of tech literacy.

    • Broken@lemmy.ml · 1 day ago

      I agree with your general statement, but in theory, since all ChatGPT does is regurgitate information, and a lot of chess is memorization of historical games and openings, it might actually perform well. No, it can’t think, but it can remember everything, so at some point that might tip the results in its favor.

      • FMT99@lemmy.world · 18 hours ago

        I mean, it may be possible, but the complexity would be so many orders of magnitude greater. It’d be like learning chess by just memorizing all the moves great players made, but without any context or understanding of the underlying strategy.

      • Eagle0110@lemmy.world · 22 hours ago

        Regurgitating an impression of the data, not regurgitating it verbatim: that’s the problem here.

        Chess is 100% deterministic, so it falls flat.

        • Raltoid@lemmy.world · 22 hours ago

          I’m guessing it’s not even hard to get it to “confidently” violate the rules.

    • adhdplantdev@lemm.ee · 1 day ago

      Articles like this are good because they expose the flaws of the AI and show that it can’t be trusted with complex multi-step tasks.

      They help people who think AI is close to human-level see that it’s not, and that it’s missing critical functionality.

      • FMT99@lemmy.world · 18 hours ago

        The problem is though that this perpetuates the idea that ChatGPT is actually an AI.

        • adhdplantdev@lemm.ee · 14 hours ago

          People already think ChatGPT is a general AI. We need more articles like this showing its ineffectiveness at being intelligent. Besides, it helps find the limitations of this technology, so that we can hopefully use them to argue against shoving it into every single place.

    • TowardsTheFuture@lemmy.zip · 1 day ago

      I think that’s generally the point: most people think ChatGPT is this sentient thing that knows everything, and… no.

      • NoiseColor@lemmy.world · 1 day ago

        Do they though? No one I’ve talked to, not my coworkers who use it for work, not my friends, not my 72-year-old mother, thinks it’s sentient.

        • TowardsTheFuture@lemmy.zip · 15 hours ago

          Okay, I maybe exaggerated a bit, but a lot of people think it actually knows things, or is actually smart. Which… it’s not… at all. It’s just pattern recognition. Which was, I assume, the point of showing it can’t even beat the goddamn Atari: it cannot think or reason, it’s all just copypasta and pattern recognition.

    • x00z@lemmy.world · 1 day ago

      In all fairness, machine learning in chess engines is actually pretty strong.

      AlphaZero was developed by the artificial intelligence and research company DeepMind, which was acquired by Google. It is a computer program that reached a virtually unthinkable level of play using only reinforcement learning and self-play in order to train its neural networks. In other words, it was only given the rules of the game and then played against itself many millions of times (44 million games in the first nine hours, according to DeepMind).

      https://www.chess.com/terms/alphazero-chess-engine
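      The self-play idea scales down to a toy you can actually run: tabular value learning on a tiny Nim variant (take 1 or 2 stones from a pile; taking the last one wins), given only the rules plus games against itself. AlphaZero replaces the table with deep networks and tree search; everything below is an illustrative sketch, not DeepMind’s method:

```python
import random
from collections import defaultdict

Q = defaultdict(float)  # learned value of (pile_size, move) for the player to act

def pick(pile, eps):
    """Epsilon-greedy choice over the legal moves (take 1 or 2 stones)."""
    moves = [m for m in (1, 2) if m <= pile]
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(pile, m)])

def self_play(games=20000, eps=0.2, lr=0.1):
    for _ in range(games):
        pile, player, history = 10, 0, []
        while pile > 0:
            m = pick(pile, eps)
            history.append((player, pile, m))
            pile -= m
            player ^= 1
        winner = history[-1][0]  # whoever took the last stone wins
        for p, s, m in history:  # credit every visited move with the game outcome
            target = 1.0 if p == winner else -1.0
            Q[(s, m)] += lr * (target - Q[(s, m)])

random.seed(0)
self_play()
# The known-optimal move from a pile of 10 is to take 1 (leaving a multiple
# of 3); with enough games the greedy policy tends to settle on that.
print(pick(10, eps=0.0))
```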

      • jeeva@lemmy.world · 16 hours ago

        Sure, but machine learning like that is very different to how LLMs are trained and their output.

      • FMT99@lemmy.world · 18 hours ago

        Oh absolutely you can apply machine learning to game strategy. But you can’t expect a generalized chatbot to do well at strategic decision making for a specific game.

    • saltesc@lemmy.world · 1 day ago

      I like referring to LLMs as VI (Virtual Intelligence, from Mass Effect), since they merely give the impression of intelligence but are little more than search engines. In the end, all one is doing is displaying expected results based on a popularity algorithm. However, they do this inconsistently due to bad data in and limited caching.