• FMT99@lemmy.world · +129/−12 · 3 days ago

    Does the author think ChatGPT is in fact an AGI? It’s a chatbot. Why would it be good at chess? It’s like saying an Atari 2600 running a dedicated chess program can beat Google Maps at chess.

    • snooggums@lemmy.world · +111/−3 · 3 days ago

      AI, including ChatGPT, is being marketed as super awesome at everything, which is why it and similar AI are being forced into absolutely everything and sold as a replacement for people.

      Something marketed as AGI should be treated as AGI when proving it isn’t AGI.

      • NoiseColor @lemmy.world · +3/−20 · 3 days ago

        I don’t think AI is being marketed as awesome at everything. It’s got obvious flaws. Right now it’s not good for stuff like chess, probably not even tic-tac-toe. It’s a language model; it’s hard for it to calculate the playing field. But AI is in development, and it might not need much to start playing chess.

        • vinnymac@lemmy.world · +15/−1 · 3 days ago

          What the tech is being marketed as and what it’s capable of are not the same, and likely never will be. In fact, things are very rarely marketed the way they truly behave, and that’s intentional.

          Everyone is still trying to figure out what these Large Reasoning Models and Large Language Models are even capable of; Apple, one of the largest companies in the world, just released a white paper this past week describing the “illusion of thinking”. If it takes a scientific paper to understand what these models are and are not capable of, I assure you they’ll be selling snake oil for years after we fully understand every nuance of their capabilities.

          TL;DR: Rich folks want them to be everything, so they’ll be sold as capable of everything until we repeatedly prove they aren’t.

          • NoiseColor @lemmy.world · +1/−6 · 3 days ago

            I think in many cases people intentionally or unintentionally disregard the time component here. AI is in development. What’s being marketed, just like in the stock market, is a piece of the future. I don’t expect the models I use to be perfect and never make mistakes, so I use them accordingly. They’re useful for what I use them for, and I wouldn’t use them for chess. I don’t expect laundry detergent to be as perfect as it looks in the commercial either.

        • BassTurd@lemmy.world · +10/−1 · 3 days ago

          Marketing does not mean functionality. AI is absolutely being sold to the public and to enterprises as something that can solve everything. Obviously it can’t, but it’s being sold that way. I would bet the average person would be surprised by this headline based solely on what they’ve heard about the capabilities of AI.

          • NoiseColor @lemmy.world · +1/−10 · 3 days ago

            I don’t think anyone is stupid enough to believe current AI can solve everything.

            And honestly, I haven’t seen any marketing material that claims that.

            • BassTurd@lemmy.world · +8/−1 · 3 days ago

              You are both completely overestimating the intelligence level of “anyone” and not living in the same AI-marketed universe as the rest of us. People are stupid. Really stupid.

              • NoiseColor @lemmy.world · +1 · 3 days ago

                I don’t understand why this is so important. Marketing is all about exaggerating; why expect anything different here?

                • BassTurd@lemmy.world · +2 · 3 days ago

                  It’s not important. You said AI isn’t being marketed to be able to do everything. I said yes it is. That’s it.

                  • NoiseColor @lemmy.world · +1/−1 · 2 days ago

                    My point is people aren’t expecting AGI. People have already tried these models and understand their general capabilities; businesses even more so. I don’t think exaggerating the capabilities is such an overarching issue that anyone could call the whole thing a scam.

    • suburban_hillbilly@lemmy.ml · +16 · 3 days ago

      Most people do. It’s just called AI in the media everywhere, and marketing works. I think online folks forget that something as simple as creating a Lemmy account by yourself puts you into the top quintile of tech literacy.

    • iAvicenna@lemmy.world · +4 · 2 days ago

      Well, so much hype has been generated around ChatGPT being close to AGI that it now makes sense to ask questions like “can ChatGPT prove the Riemann hypothesis?”

    • Broken@lemmy.ml · +6/−1 · 3 days ago

      I agree with your general statement, but in theory, since all ChatGPT does is regurgitate information back, and a lot of chess is memorization of historical games and patterns, it might actually perform well. No, it can’t think, but it can remember everything, so at some point that might tip the results in its favor.

      • Eagle0110@lemmy.world · +3/−1 · edited 2 days ago

        It regurgitates an impression of what it has seen, not a verbatim copy; that’s the problem here.

        Chess is 100% deterministic, so it falls flat.

        • Raltoid@lemmy.world · +3 · edited 3 days ago

          I’m guessing it’s not even hard to get it to “confidently” violate the rules.
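
          And it wouldn’t take much to catch, either: legality in chess is pure arithmetic, which is exactly what deterministic engines exploit. A minimal sketch of the idea, covering only a lone knight on an otherwise empty board (not a full rules validator):

```python
def knight_move_is_legal(frm: str, to: str) -> bool:
    """Check whether a knight could hop from `frm` to `to` (e.g. 'g1' -> 'f3')
    on an otherwise empty board. Pure arithmetic, no guessing involved."""
    file_dist = abs(ord(frm[0]) - ord(to[0]))  # distance across columns a-h
    rank_dist = abs(int(frm[1]) - int(to[1]))  # distance across rows 1-8
    return sorted((file_dist, rank_dist)) == [1, 2]  # knights always jump (1,2)

print(knight_move_is_legal("g1", "f3"))  # True: the classic opening hop
print(knight_move_is_legal("g1", "g3"))  # False: confident-sounding but illegal
```

          A chatbot has no such check built in; it only predicts what a move tends to look like in text.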

      • FMT99@lemmy.world · +1 · 2 days ago

        I mean, it may be possible, but the complexity would be so many orders of magnitude greater. It’d be like learning chess by just memorizing all the moves great players made, but without any context or understanding of the underlying strategy.

    • x00z@lemmy.world · +3 · 3 days ago

      In all fairness, machine learning in chess engines is actually pretty strong.

      AlphaZero was developed by the artificial intelligence and research company DeepMind, which was acquired by Google. It is a computer program that reached a virtually unthinkable level of play using only reinforcement learning and self-play in order to train its neural networks. In other words, it was only given the rules of the game and then played against itself many millions of times (44 million games in the first nine hours, according to DeepMind).

      https://www.chess.com/terms/alphazero-chess-engine
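
      The self-play loop itself is simple enough to sketch. This is not AlphaZero (no neural network, no tree search, and tic-tac-toe instead of chess; all simplifications are mine), but it shows the same core idea: an agent given only the rules improves by playing itself and pushing game outcomes back into its value estimates.

```python
import random

# Toy self-play learner for tic-tac-toe: a tabular agent given only the
# rules plays itself and learns a value for each position it visits.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

# state -> value estimate from the perspective of the player who just moved
values = {}

def choose(board, player, eps):
    """Pick a move: random with probability eps, otherwise highest-valued."""
    moves = [i for i, c in enumerate(board) if c == " "]
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: values.get(board[:m] + player + board[m+1:], 0.0))

def self_play(games=20000, alpha=0.2, eps=0.1):
    for _ in range(games):
        board, player, history = " " * 9, "X", []
        while True:
            m = choose(board, player, eps)
            board = board[:m] + player + board[m+1:]
            history.append((board, player))
            win = winner(board)
            if win or " " not in board:
                break
            player = "O" if player == "X" else "X"
        # propagate the final outcome back through every position of the game
        for state, p in history:
            target = 0.0 if win is None else (1.0 if win == p else -1.0)
            values[state] = values.get(state, 0.0) + alpha * (target - values.get(state, 0.0))

random.seed(0)
self_play()
print(f"value estimates learned for {len(values)} positions")
```

      AlphaZero replaces the lookup table with a neural network and the greedy move choice with Monte Carlo tree search, which is what lets the same loop scale to chess and Go.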

      • jeeva@lemmy.world · +1 · 2 days ago

        Sure, but machine learning like that is very different to how LLMs are trained and their output.

      • FMT99@lemmy.world · +1 · 2 days ago

        Oh absolutely you can apply machine learning to game strategy. But you can’t expect a generalized chatbot to do well at strategic decision making for a specific game.

    • saltesc@lemmy.world · +2/−1 · 3 days ago

      I like referring to LLMs as VI (Virtual Intelligence from Mass Effect), since they merely give the impression of intelligence but are little more than search engines. In the end, all one is doing is displaying expected results based on a popularity algorithm. However, they do this inconsistently due to bad data in and limited caching.
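
      The “popularity algorithm” framing can be made concrete. A bigram model is roughly what you get when you strip an LLM of all its scale and context: just emit whichever next word was most frequent in the training text. A toy sketch (the mini-corpus is invented for illustration):

```python
from collections import Counter, defaultdict

# Toy "most popular continuation wins" model: count which word follows
# which, then always emit the most frequent follower.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_popular_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_popular_next("the"))  # "cat": it follows "the" twice, others once
print(most_popular_next("sat"))  # "on"
```

      Real LLMs condition on thousands of preceding tokens rather than one word, which is where the convincing impression of intelligence comes from, but the output is still a frequency-weighted guess, not a lookup of verified facts.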