• 0 Posts
  • 7 Comments
Joined 2 years ago
Cake day: July 3rd, 2023



  • Again, more gibberish.

    It seems like all you want to do is dream up fantastical doomsday scenarios with no basis in reality, rather than actually engaging with the real-world technology and science and how they work. It is impossible to infer what might happen with a technology without first understanding the technology and its capabilities.

    Do you know what training actually is? I don’t think you do. You seem to be under the impression that a model can somehow magically train itself. That is simply not how it works. Humans write programs to train models (Models, btw, are merely a set of numbers. They aren’t even code!).

    When you actually use a model, here’s what’s happening:

    1. The interface you are using takes your input and encodes it as a sequence of numbers (done by a program written by humans)
    2. This sequence of numbers (known in mathematics as a vector) is multiplied by the weights of the model (organized in a matrix, which is basically a collection of vectors), resulting in a new sequence of numbers (the output vector) (done by a program written by humans).
    3. This output vector is converted back into the representation you supplied (so if you gave a chatbot some text, it will turn the numbers into the equivalent textual representation of said numbers) (done by a program written by humans).

    So a “model” is nothing more than a matrix of numbers (again, no code whatsoever), and using a model is simply a matter of (a human-written program) doing matrix multiplication to compute some output to present the user.
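    To make this concrete, here’s a rough sketch of those three steps in Python. The vocabulary, the weight matrix, and the decoding rule are all made up purely for illustration; a real model’s matrices are vastly larger and its encode/decode steps are fancier, but the pipeline has the same shape:

    ```python
    import numpy as np

    # Step 1: a human-written program encodes the input as numbers
    vocab = {"hello": 0, "world": 1, "foo": 2}          # hypothetical token ids
    inverse_vocab = {v: k for k, v in vocab.items()}

    def encode(text):
        return np.array([vocab[w] for w in text.split()], dtype=float)

    # The "model" itself is just a matrix of weights: numbers, not code
    weights = np.array([[0.2, -1.0, 0.5],
                        [1.3,  0.4, -0.7],
                        [-0.1, 0.9,  2.0]])

    # Step 2: multiply the input vector by the weight matrix
    def apply_model(input_vector):
        return weights @ input_vector

    # Step 3: turn the output numbers back into text (here, crudely, by
    # snapping each output value to the nearest token id)
    def decode(output_vector):
        ids = np.clip(np.round(output_vector), 0, len(inverse_vocab) - 1).astype(int)
        return " ".join(inverse_vocab[i] for i in ids)

    print(decode(apply_model(encode("hello world foo"))))
    ```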

    To greatly simplify, if you have a mathematical function like f(x) = 2x + 3, you can supply said function with a number to get a new number, e.g, f(1) = 2 * 1 + 3 = 5.

    LLMs are the exact same concept. They are a mathematical function, and you apply said function to input to produce output. Training is the process of a human writing a program to compute how said mathematical function should be defined, or in other words, the exact coefficients (also known as weights) to assign to each and every variable in said function (and the number of variables can easily be in the millions).
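    Here’s a minimal sketch of what “training” means for that toy function: a human-written program that recovers the coefficients of f(x) = 2x + 3 from example data. The learning rate and step count are arbitrary illustration values:

    ```python
    import random

    # Example data generated from the "true" function f(x) = 2x + 3
    data = [(x, 2 * x + 3) for x in range(-5, 6)]

    a, b = random.random(), random.random()   # start with random coefficients
    learning_rate = 0.01

    for step in range(2000):
        x, target = random.choice(data)
        prediction = a * x + b
        error = prediction - target
        # Nudge the coefficients to shrink the error (gradient descent)
        a -= learning_rate * error * x
        b -= learning_rate * error

    print(a, b)   # ends up close to 2 and 3
    ```

    Scale that idea up to millions of coefficients and you have, in essence, what training an LLM is.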

    This is also, incidentally, why training is so resource intensive: repeatedly doing this multiplication for millions upon millions of variables is very expensive computationally and requires very specialized hardware to do efficiently. It happens to be the exact same kind of math used for computer graphics (matrix multiplication), which is why GPUs (or other even more specialized hardware) are so desired for training.

    It should be pretty evident that every step of the process is completely controlled by humans. Computers always do precisely what they are told to do and nothing more, and that has been the case since their inception and will always continue to be the case. A model is a math function. It has no feelings, thoughts, reasoning ability, agency, or anything like that. Can f(x) = x + 3 get a virus? Of course not, and the question is a completely absurd one to ask. It’s exactly the same thing for LLMs.


  • What does that even mean? It’s gibberish. You fundamentally misunderstand how this technology actually works.

    If you’re talking about the general concept of models trying to outcompete one another, the science already exists, and has existed since 2014. The technique is called a Generative Adversarial Network, and it is an incredibly common training approach.

    It’s incredibly important not to ascribe random science fiction notions to the actual science being done. LLMs are not some organism that scientists prod to coax it into doing what they want. They intentionally design a network topology for a task, initialize the weights of each node to random values, feed training data into the network (which, ultimately, is encoded into a series of numbers to be multiplied with the weights in the network), and measure the output numbers against some criteria to evaluate the model’s performance (or in other words, how close the output numbers are to a target set of numbers). Training then uses this measurement to adjust the weights, and the process repeats all over again until the numbers the model produces are “close enough”. Sometimes, the performance of a model is compared against that of another model being trained in order to determine how well it’s doing (the aforementioned Generative Adversarial Networks). But that is a far cry from models… I dunno, training themselves or something? It just doesn’t make any sense.
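    Spelled out as a bare-bones sketch (one made-up layer, made-up data, plain gradient descent; real pipelines are enormously bigger, but the loop has the same structure: random weights, forward pass, compare to target, adjust, repeat):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # 1. Pick a topology and initialize the weights to random values
    weights = rng.normal(size=(4, 4))

    # 2. Training data, already encoded as numbers (inputs and target outputs)
    inputs = rng.normal(size=(100, 4))
    targets = inputs @ np.diag([1.0, 2.0, 3.0, 4.0])   # the pattern we want learned

    learning_rate = 0.01
    for epoch in range(500):
        # 3. Feed the data through the network (multiply with the weights)
        outputs = inputs @ weights
        # 4. Measure how far the outputs are from the targets
        error = outputs - targets
        loss = np.mean(error ** 2)
        # 5. Adjust the weights to reduce that number, then repeat
        gradient = inputs.T @ error / len(inputs)
        weights -= learning_rate * gradient

    print(loss)   # keeps shrinking as the weights get "close enough"
    ```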

    The technology is not magic, and has been around for a long time. There has not been some recent incredible breakthrough, unlike what you may have been led to believe. The only difference in the modern era is the amount of raw computing power and sheer volume of (illegally obtained) training data being thrown at models by massive corporations. This has led to models that perform much better than previous ones (performance, in this case, meaning “how closely does it resemble text a human would write?”), but ultimately they are still doing the exact same thing they have been doing for years.



  • This is like, the opposite of old-fashioned. Calling your wife when you’re on the way home is old-fashioned.

    This article is the first time I’m actually hearing about this idea because it never even occurred to me as something people would actually want to do. I frankly don’t see the point of this nonsense. I would much rather talk to my wife on the phone and communicate with her about plans. It’s much more human and normal, and facilitates good communication habits. It takes 2 minutes to give my wife a call and, you know what, I get to talk to my wife! We don’t need technology invading absolutely every aspect of our lives. We don’t need to be constantly plugged in and attached to our phones at the hip.

    It also has other downsides, like making it hard to surprise your partner, battery drain from the constant stream of location updates, etc. In fact, it seems like all downside with no actual benefit (setting aside the trust stuff, because it’s pretty irrelevant either way).


  • I was trying to help onboard a new lead engineer and I was working through debugging his Caddy config on Slack. I’m clearly putting in effort to help him diagnose his issue and he posts “I asked chatgpt and it said these two lines need to be reversed”, which was completely false (Caddy has a system for reordering directives) and honestly just straight up insulting. Fucking pissed me off. People need to stop bringing AI slop into conversations. It isn’t welcome and can fuck right off.

    The actual issue? He forgot to restart his development server. 😡