Hacker News.

The author’s blog post about that.

AI-generated quotes in a story about an AI clanker writing a blog post about a human developer because they wouldn’t accept its code contributions.

How deep can someone go here?

      • thethunderwolf@lemmy.dbzer0.com · 2 days ago

        What?? AI is not conscious; marketing just says that, with no understanding of the maths and no legal obligation to tell the truth.

        Here’s how LLMs work:

        The basic premise is like autocomplete: the model builds a response one token at a time (tokens are mostly words, but sometimes other things, such as “begin/end code block” or “end of response” markers). The program is a guessing engine that guesses the next token, over and over. The autocomplete on your phone is different in that it only guesses which word follows the previous word; an LLM guesses which token follows the entire conversation so far (not always the entire conversation: the history may be truncated because the model can only process a limited amount of context).
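
        A toy sketch of that difference (plain Python, made-up data, nothing like a real LLM’s internals): the phone-style guesser conditions on just the previous word, while the LLM-style guesser looks at the whole history.

        ```python
        import random
        from collections import defaultdict

        corpus = "the cat sat on the mat and the cat slept on the mat".split()

        # Phone-style autocomplete: learn which word follows which from bigram counts.
        bigram = defaultdict(list)
        for prev, nxt in zip(corpus, corpus[1:]):
            bigram[prev].append(nxt)

        def autocomplete(prev_word):
            return random.choice(bigram[prev_word])  # one word of context, nothing more

        def llm_style(history, candidates):
            # Stand-in for the neural net: score each candidate against the ENTIRE history.
            # (A real LLM computes these probabilities with a trained network instead.)
            return max(candidates, key=lambda tok: history.count(tok))

        print(autocomplete("the"))                                      # e.g. 'cat' or 'mat'
        print(llm_style("the cat sat on the".split(), ["cat", "dog"]))  # 'cat'
        ```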

        The “training data” is used to build a model of how likely each token is to follow other tokens. But you can’t store, for every token, how likely it is to follow every single possible combination of 1 to <big number like 65536, depends on which LLM> previous tokens. That’s what “neural networks” are for.
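
        To see why that lookup table is impossible, here’s the back-of-envelope arithmetic (round numbers assumed purely for illustration):

        ```python
        vocab = 50_000  # assumed vocabulary size, a typical order of magnitude
        context = 10    # even a tiny ten-token context...
        print(f"{float(vocab ** context):.1e} possible contexts")  # ~9.8e+46 rows
        ```

        No storage on Earth holds that table, so the network has to compress it into a comparatively small set of learned numbers.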

        Neural networks are networks of mathematical “neurons”. A neuron takes one or more inputs from other neurons, applies a mathematical transformation to them, and passes the resulting number on to one or more further neurons. At the start of the network, non-neurons feed the raw data into the first neurons; at the end, non-neurons take the network’s output and use it. The network is “trained” by making small adjustments to the maths of individual neurons and keeping the arrangements that produce the best results. Neural networks are very difficult to inspect or debug, because the mathematical nature of the system makes it unclear what any given neuron does. In an LLM, the network is a way to guess the next-token probabilities on the fly (quite accurately) without having to obtain and store training data for every single possibility.
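
        A single mathematical “neuron” from that description, sketched in Python (the numbers are arbitrary; a real network has millions of these):

        ```python
        import math

        def neuron(inputs, weights, bias):
            total = sum(x * w for x, w in zip(inputs, weights)) + bias  # the "transformation"
            return 1 / (1 + math.exp(-total))  # squash into 0..1 (sigmoid activation)

        # One neuron's output, ready to be fed into further neurons.
        print(neuron([0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))
        ```

        “Training” is nothing more than nudging those weights and biases, over and over, towards whatever settings make the whole network’s guesses come out better.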

        I don’t know much more than this, I just happen to have read a good article about how LLMs work. (Will edit the link into this post soon, as it was texted to me and I’m on PC rn)

        • oce 🐆@jlai.lu · 2 days ago

          I was making a joke, because it seems AI has intervened against this person on several independent occasions, but thank you for your efforts.

          • JoshCodes@programming.dev · 24 hours ago

            Right, a question that literal neuroscientists couldn’t answer.

            I believe the technical term is “your brain is way more fucking complex”. We have something like 50 chemicals (I’m not a neuroscientist, I just studied AI) being transmitted around the brain, constantly. They’re used and passed on by cells that do biological and chemical things I don’t understand. Ever heard of dopamine, cortisol, serotonin? AI doesn’t have those. We have neurons that don’t connect to every other neuron; only tech bros would think a fully-connected layer is an acceptable approximation of that. Our brain forms literal physical pathways along which it transmits those chemicals. No, a physical connection is not the same as a higher average weight, and the people who came up with the AI maths in the 50s would back me up.

            AI uses floating-point maths to draw correlations and make inferences. More advanced AI just does more of it per second and has had more training. Its “neurons” are a programming abstraction used to describe a series of calculations on inputs; they’re not actual neurons, nor some advanced piece of tech. They’re not magic.
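
            To make the “programming abstraction” point concrete, a whole layer of those “neurons” is one line of floating-point maths (shapes and values are arbitrary examples):

            ```python
            import numpy as np

            x = np.random.rand(4)       # activations flowing in
            W = np.random.rand(8, 4)    # the layer's learned floating-point weights
            h = np.maximum(0.0, W @ x)  # ReLU(Wx): all eight "neurons" in one matrix multiply
            print(h)                    # eight numbers; no chemicals, no pathways, no biology
            ```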

            High schoolers could study AI for a single class, then neurobiology right after, and realise just how crude the AI model is as mimicry of a brain. It’s not even close. But I guess Sam Altman said we’re approaching general intelligence, so I’m probably just a hater.

            • stephen01king@piefed.zip · 21 hours ago

              Everything you said is right, but you’re only proving that LLM weights are a severely simplified model of neurons. That neither proves they lack consciousness, nor that being a mathematical model precludes consciousness at all.

              In my opinion, the current models don’t express any consciousness, but I’m against denying it on the grounds that they’re mathematical models rather than on the results we can measure. The fact that we can’t theoretically prove consciousness in the human brain also means we can’t theoretically disprove consciousness in an LLM. They aren’t conscious because they haven’t expressed enough to be considered conscious, and that’s the extent of what we should claim to know.

              • JoshCodes@programming.dev · 19 hours ago

                You can’t prove all ravens are black. The discovery of even one white raven would disprove the “fact” that all ravens are black, and we can by no means be sure that we gathered all ravens to test the theory.

                However, we can look around and comment that there don’t appear to be any white ravens anywhere…

                Do you know about the ‘bouba’ and ‘kiki’ study? (I can’t remember its formal name.) People made up words that don’t exist in English and asked test subjects whether a round object is more ‘bouba’ or more ‘kiki’. AI can’t answer this question, not without being fed the answer, while toddlers can. It comes down to how it consumes information, and what happens when there’s no pattern to lean on: when asked to define words it has rarely been fed, e.g. usernames people made up, the AI’s apparent consciousness breaks down. As soon as one token isn’t reliably followed by another, the machine breaks, and no one would pretend it has consciousness after that.

                Learning models are just pattern-recognition machines. LLMs are the kind that mix and match words really well. This makes them seem intelligent, but it just means they can express language and information in a way we understand (and they don’t always manage even that). Consciousness gets into “what is the soul” territory, so I’m staying away from it. The best I can say of AI is that it’s interesting that language appears to be a system constructed well enough that we can teach it to machines, and that we anthropomorphise the models all the more when they do it well.

                AI doesn’t have memory, it can’t think for itself (it only references what it has consumed), and it can’t teach itself new tricks. All of these are still experimental research areas for AI, and all of them are things consciousness would require. It’s just very good at sentence generation.

                • stephen01king@piefed.zip · 15 hours ago

                  I don’t know what you’re even arguing. Your analogy breaks down because in this case we can’t even see whether the raven is black or not: no one can theoretically prove consciousness. The rest of your comment seems to be arguing that current AI has no consciousness, which is exactly what I said, so I guess this is just an attempt at supporting my point?

                  • JoshCodes@programming.dev · 12 hours ago

                    Okay, fair enough. The first bit made sense to me at the time, but I was having a weird day. The point was that there are things you can’t prove; it doesn’t make a lot of sense rereading it.

                    The rest of it is saying that there’s no debate worth having (they are not conscious, nor sentient) and trying to quantify what exactly they are instead. I’m just talking into the void, arguing against opinions I’ve seen. Don’t mind me. I get passionate and am prone to long-winded rants.