ETH Zurich and EPFL will release a large language model (LLM) developed on public infrastructure. Trained on the “Alps” supercomputer at the Swiss National Supercomputing Centre (CSCS), the new LLM marks a milestone in open-source AI and multilingual excellence.

  • In late summer 2025, a publicly developed large language model (LLM) will be released — co-created by researchers at EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS).
  • The LLM will be fully open, an openness designed to support broad adoption and foster innovation across science, society, and industry.
  • A defining feature of the model is its multilingual fluency in over 1,000 languages.
  • danzania@infosec.pub · 28 days ago

    I’m sure the community will find something to hate about this as well, since this isn’t an article about an LLM failing at something.

    • cabbage@piefed.social · edited · 28 days ago

      Gigantic hater of all things LLM or “AI” here.

      The only genuine contribution I can think of that LLMs have made to society is their translation capabilities. So even I can see how a fully open source model with “multilingual fluency in over 1,000 languages” could be potentially useful.

      And even if it is all a scam, if this prevents people from sending money to China or the US as they are falling for the scam, I guess that’s also a good thing.

      Could I find something to hate about it? Oh yeah, most certainly! :)

        • cabbage@piefed.social · 28 days ago

          Usually when I see this, it's using machine learning approaches other than LLMs, and the researchers behind it are careful not to use the term AI, as they are fully aware that this is not what they are doing.

          There’s huge potential in machine learning, but LLMs are little more than bullshit generators, and generative AI is theft producing soulless garbage. LLMs are widely employed because they look impressive, but for anything that requires substance, machine learning methods that have been around for years tend to perform better.

          If you can identify cancer in x-rays using machine learning, that’s awesome, but that’s very separate from the AI hype machine that is currently running wild.