LLMs performed best on questions related to legal systems and social complexity, but they struggled significantly with topics such as discrimination and social mobility.

“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”

Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.

  • QuarterSwede@lemmy.world · 2 months ago

    Ugh. No one in the mainstream understands WHAT LLMs are and do. They’re really just basic input-output mechanisms. They don’t understand anything. Garbage in, garbage out, as it were.

    • UnderpantsWeevil@lemmy.world · 2 months ago

      > They’re really just basic input-output mechanisms.

      I mean, I’d argue they’re highly complex I/O mechanisms, which is how you get weird hallucinations that developers can’t easily explain.

      But expecting cognition out of a graph is like demanding novelty out of a Plinko machine. Not only do you get out what you put in, but you get a very statistically well-determined output. That’s the whole point. The LLM isn’t supposed to be doing high-level cognitive extrapolations. It’s supposed to be doing statistical aggregates on word association using a natural language schema.
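
      For the curious, here’s a toy sketch of that “statistical aggregates on word association” idea in Python. It’s a bigram model, not an LLM — the corpus and scale are made up for illustration, and real models condition on far more context — but the basic trick (count associations, then sample the statistically likely next word) is the same family of mechanism:

      ```python
      import random
      from collections import Counter, defaultdict

      # Made-up toy corpus; a real model trains on billions of words.
      corpus = "the cat sat on the mat and the dog sat on the rug".split()

      # Count how often each word follows each other word (word association).
      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def next_word(word):
          # Sample the next word in proportion to how often it followed `word`.
          candidates = follows[word]
          if not candidates:  # dead end: word only ever appeared last
              return None
          words, counts = zip(*candidates.items())
          return random.choices(words, weights=counts)[0]

      word, out = "the", ["the"]
      for _ in range(8):
          word = next_word(word)
          if word is None:
              break
          out.append(word)
      print(" ".join(out))  # e.g. "the dog sat on the mat and the cat"
      ```

      No understanding anywhere in there — just counting and sampling, which is why the output is statistically well-determined rather than cognitive.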

    • snooggums@lemmy.world · 2 months ago

      That is accurate, but the people who design and distribute LLMs refer to the process as machine learning and use terms like “hallucinations,” which is the primary cause of the confusion.

      • SinningStromgald@lemmy.world · 2 months ago

        I think the problem is the use of the term AI. Regular Joe Schmo hears/sees AI and thinks Data from ST:TNG or Cylons from Battlestar Galactica, not glorified search-engine chatbots. But AI sounds cooler than LLM, so they use AI.

        • Grimy@lemmy.world · 2 months ago

          The term is fine. Your examples are very selective. I doubt Joe Schmo thought the aimbots in CoD were truly intelligent when he referred to them as AI.

    • Epzillon@lemmy.world · 1 month ago

      I just like the analogy of a dashboard with knobs. Input text on one side, output text on the other. “Training” AI is simply letting the knobs adjust themselves based on feedback on the output. AI never “learns”; it only produces output based on how the knobs are dialed in. It’s not a magic box, it’s just a lot of settings converting data to new data.
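
      To make the knob analogy concrete, here’s a minimal sketch in Python — plain gradient descent on a one-weight linear model. One knob (a single weight), turned by feedback on how wrong each output was. All the numbers are made up, and a real model has billions of knobs, but the mechanism is the same:

      ```python
      # One "knob" (weight w) being dialed in by feedback. Toy numbers only;
      # a real LLM has billions of these, adjusted the same basic way.
      data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x, desired outputs y

      w = 0.0    # the knob starts at an arbitrary setting
      lr = 0.05  # how far each piece of feedback turns the knob

      for _ in range(100):          # "training" = repeated feedback
          for x, y in data:
              pred = w * x          # output is purely a function of the knob
              error = pred - y      # feedback: how far off was the output?
              w -= lr * error * x   # turn the knob to shrink the error

      print(round(w, 3))  # ~2.0: the knob settled where the feedback pushed it
      ```

      Once training stops, the knob is frozen; everything the system does afterward is just input run through whatever setting the feedback left behind.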