The chipmaker, now the most valuable public company in the world, said strong demand for its chips should continue this quarter.

  • invertedspear@lemmy.zip · +5 · 3 hours ago

    How can it slow down? To get investment, every company has to at least claim to use AI. My company measures and ranks people based on their use of Gemini. It hasn’t made it into our performance reviews yet, but there is shaming for not using it. “Come on, we’re paying for it, make use of it.” I haven’t even figured out how to make good use of it in my normal routine, which is legitimately 75% meetings. Maybe I need to send bots to meetings.

    • Doomsider@lemmy.world · +3 · 2 hours ago

      Bots to meetings, you say. *Strokes beard and adjusts monocle.* Tell me more about these meeting bots.

    • yeehaw@lemmy.ca · +1 · 2 hours ago

      Same where I work. Metrics are coming for how much we leverage AI. So annoying.

    • hobovision@mander.xyz · +1 · 1 hour ago

      Easiest/cheapest way would be to buy $NVD, which is a 2x inverse ETF. It’s designed so that if NVDA drops 5% in a day, NVD goes up 10%. But the reset happens daily and isn’t perfect, so over the long run it bleeds value; you’d still need to time it right to profit on it.
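
The daily-reset drag described above can be sketched in a few lines of Python (hypothetical returns for illustration, not financial advice): the fund applies −2x of each day's return, so even a round-trip in the underlying leaves the inverse ETF down.

```python
# Daily-rebalancing drag on a 2x inverse ETF, with hypothetical returns.
# The fund resets its -2x exposure every day, so it tracks -2x of each
# DAILY return, not -2x of the cumulative move.
underlying_returns = [-0.05, 0.05 / 0.95]  # underlying: -5%, then back to flat

nav_underlying = 1.0  # underlying (e.g. NVDA), normalized to 1.0
nav_inverse = 1.0     # 2x inverse ETF (e.g. NVD), normalized to 1.0
for r in underlying_returns:
    nav_underlying *= 1 + r
    nav_inverse *= 1 - 2 * r

print(round(nav_underlying, 4))  # 1.0    -- underlying round-trips to flat
print(round(nav_inverse, 4))     # 0.9842 -- inverse ETF bleeds ~1.6% anyway
```

Even with the underlying flat over the two days, the inverse ETF loses money; that compounding loss is the "bleed" that makes timing matter.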

    • Pringles@sopuli.xyz · +4 · 3 hours ago

      Oh, the market can stay irrational far longer than you can stay solvent. You are way better off betting on companies you believe have a bright future ahead.

  • nutsack@lemmy.dbzer0.com · +6/−1 · 5 hours ago

    the market valuation of the thing might be too high at the moment, but the thing itself is not slowing down and it’s probably not going away.

    when the bubble pops there will be a value adjustment, and some companies will go out of business. but AI isn’t slowing down any time soon.

    i think the only thing that would slow down nvidia is another company making chips.

    • CallMeAnAI@lemmy.world · +6/−2 · 10 hours ago

      The stock market is vibes based these days. Posting investors screeching about a bubble isn’t an argument.

      Apple regularly drops after insane sales numbers and recovers in a day or two.

  • MBech@feddit.dk · +28/−14 · 10 hours ago

    Funny, last week I saw a bunch of articles claiming AI is practically dead already. And now this?

    Y’all sound like the people who thought computers or the internet were just a fad. Shit like this is here to stay, whether you like it or not.

    Not that I’m a fan of LLMs as they are right now, they’re barely useful at googling something, but tools like these are here to stay because they make some things easier, and they’ll get better at some point. Just like the computer was a subpar tool in the beginning, but as innovation chugged along, they got way better, not just at what they were intended for in the beginning, but also at things you had no way of even imagining back then.

    • Sundray@lemmus.org · +28/−1 · 7 hours ago

      Shit like this is here to stay

      Of course. Just like VR, AR, the Metaverse, NFTs, cryptocurrency, and hundreds of other boom-and-bust hype-cycle remnants. “AI” is a bubble. “AI” will burst. The tech will continue, just without the hype and the cohort of wildly over-funded moonshot start-ups.

      I look forward to the day when ROI-focused tech executives aren’t trying to cram non-intelligent LLMs into roles where they do not excel. Let people find their own uses, on their own terms for these things. Perhaps someday people will train bespoke, subject-specific ML tools on their laptops in a matter of minutes with a single click, and it will be an unremarkable part of their day. I’d like to see that.

    • Frezik@lemmy.blahaj.zone · +41/−1 · 9 hours ago

      Really? Companies are going to keep building datacenters that need entire nuclear reactors to themselves without any of that converting into revenue? This is going to keep going forever in your mind?

      • Victor@lemmy.world · +2 · 8 hours ago

        Definitely a bubble to be burst at some point unless we are able to harness energy and reduce waste substantially better than now.

      • MBech@feddit.dk · +5/−8 · 8 hours ago

        A lot will fail, sure, but that happens in literally every single developing industry. There are plenty of industries out there that aren’t profitable but are still going. Tesla wasn’t profitable between 2003 and 2020, yet here we are, where they not only make a profit but have kickstarted the electric car industry. And that’s despite them selling shitty cars and their CEO being a nazi.

          • iopq@lemmy.world · +2/−1 · 5 hours ago

            They will be profitable in ten years after everything crashes and only a few are left

      • brucethemoose@lemmy.world · +6/−13 · 8 hours ago

        The power usage is massively overstated, and a meme perpetuated by Altman so he’ll get more money for ‘scaling’. And he’s lying through his teeth: there literally isn’t enough silicon capacity in the world for that stupid idea.

        GPT-5 is already proof scaling with no innovation doesn’t work. So are open source models trained/running on peanuts nipping at its heels.

        And tech in the pipe like bitnet is coming to disrupt that even more; the future is small, specialized, augmented models, mostly running locally on your phone/PC because it’s so cheap and low power.

        There’s tons of stuff to worry about over LLMs and other generative ML, but future power usage isn’t one.

        • Frezik@lemmy.blahaj.zone · +15/−2 · 8 hours ago

          Except none of these companies are making money. Like almost literally none. We’re about three years into the LLM craze, and nobody has figured out how to turn a profit. Hell, forget profit, not bleeding through prodigious piles of cash would be a big deal.

          • brucethemoose@lemmy.world · +6/−1 · 7 hours ago

            Nods vigorously.

            The future of LLMs is basically unprofitable for the actual AI companies. We are in a hell of a bubble, which I can’t wait to pop so I can pick up a liquidation GPU (or at least rent one for cheap).

            That doesn’t mean power usage is an existential issue. In fact, it seems like the sheer inefficiency of OpenAI/Grok and such is a nail in their coffins.

            • Frezik@lemmy.blahaj.zone · +3/−2 · 7 hours ago

              Power usage is what’s sucking the cash. What else could it be? Not all of these companies are building out lots of datacenters the way OpenAI is. They built what they have, and are now trying to make money on it.

              The companies that are charging for AI are charging about as much as buyers are willing to pay, but it’s orders of magnitude too small to cover their costs. The big cost is power usage.

              • brucethemoose@lemmy.world · +1/−2 · 6 hours ago

                On the training side, it’s mostly:

                • Paying devs to prepare the training runs with data, software architecture, frameworks, smaller scale experiments, things like that.

                • Paying other devs to get the training to scale across 800+ nodes.

                • Building the data centers, where the construction and GPU hardware costs kind of dwarf power usage in the short term.

                On the inference side:

                • Sometimes optimized deployment frameworks like Deepseek uses, though many seem to use something off the shelf like sglang

                • Renting or deploying GPU servers individually. They don’t need to be networked at scale like for training, with the highest end I’ve heard (Deepseek’s optimized framework) being like 18 servers or so. And again, the sticker price of the GPUs is the big cost here.

                • Developing tool use frameworks.

                On both sides, the biggest players burn billions on Tech Bro “superstar” developers who, frankly, seem to Tweet more than they develop interesting things.

                Microsoft talks up nuclear power and such just because they want to cut out the middleman from the grid, reduce power costs, reduce the risk of power outages and such, not because there’s physically not enough power from the grid. It’s just corporate cheapness, not an existential need.

    • Kekzkrieger@feddit.org · +7/−4 · 5 hours ago

      I disagree so hard that words won’t do it justice.

      The internet and computers have value; AI doesn’t. It’s already unprofitable to run, and newer models consume even more power, so it won’t suddenly turn profitable.

      It will crash if it doesn’t turn out to be useful for actually MAKING money.

      There are zero use cases other than it being a funny entertainment box that still summarizes shit wrong in almost 10% of cases.

      You may say agents might replace humans at some point, but again, nobody has done it profitably yet. It can’t do tasks because it can’t think. It can repeat what someone tells it to do, but guess what, that’s what programming was in the first place.

    • mctoasterson@reddthat.com · +6 · 8 hours ago

      Well, it’s “here to stay”, I agree. But there are some real economic indicators that it is also a bubble. First, the number of products and services that can be improved by ham-fisting AI into them is perhaps reaching saturation. We need to see what the “killer app” is for the subsequent generation of AI. More cool video segments and LLM chatbots aren’t going to cut it. Everyone is betting there will be a gen 2.0, but we don’t know what it is yet.

      Second, the valuations are all out of whack. Remember Lycos, AskJeeves, Pets.com, etc.? During the dotcom bubble, the concept of the internet was “here to stay”, but many of the original huge sites weren’t. They were massively overvalued based on general enthusiasm for the potential of the internet itself. It’s hard to argue that’s not where we are with AI companies now. Many observers have noted that price-to-earnings ratios are sky-high for the top AI-related companies, meaning investors are parking a ton of investment capital in them that hasn’t yet materialized into long-term earnings.

      Third, at least in the US, investment in general is lopsided towards tech companies and AI companies. Again look at the top growth companies and share price trends etc. This could be a “bubble” in itself as other sectors need to grow commensurate to the tech sector, otherwise that indicates its own economic problems. What if AI really does create a bunch of great new products and services, but no one can buy them because other areas of the economy stalled over the same time period?

    • Feyd@programming.dev · +6 · 8 hours ago

      “Nvidia had good sales in the last 3 months” doesn’t necessarily conflict with whatever drove those articles last week…

      “A technology got more useful in the past” isn’t a compelling reason to argue something else will get more useful…

      Use your critical thinking skills lol

      • MBech@feddit.dk · +2/−7 · 8 hours ago

        Are you seriously arguing that AI and LLMs won’t get better? For real? In that case I’m sorry, but you’re going to be left behind like 95% of the people older than 50 who didn’t bother to learn how to use a pc properly.

        • Frezik@lemmy.blahaj.zone · +11/−1 · 8 hours ago

          You know that technology doesn’t actually get better by default, right? It reaches plateaus. Some things are just dead ends.

          Have you bought any bubble memory recently?

        • Feyd@programming.dev · +9 · 8 hours ago

          AI is a very generic term and people should stop using it synonymously with LLM which is a very specific thing.

          LLMs have the hallucination problem, and that is a fundamental aspect of the technology that makes it unfit for many of the purposes that are driving investment.

          If I’m somehow wrong and LLM based tools actually become useful, I can learn them then. “Use shitty tools or you’ll get left behind!” Is a completely stupid argument - “skills” in using a shitty tool probably won’t transfer to using this hypothetical good tool anyway.

          And just to reiterate, the argument of “the internet/smart phones/whatever was revolutionary so this is too”, even though so far all we really have is “some ok sometimes coding tools”, “search that lies sometimes”, “summary that lies sometimes” is completely stupid. There are plenty of technologies that didn’t become part of our daily lives as well…

    • CallMeAnAI@lemmy.world · +4/−5 · 9 hours ago

      If you’re not out actively trying to fuck up, it’s already here for coders. It’s going to become impossible to be a “junior” coder.

      I can write up entire React/JS apps and I don’t know a single lick of TypeScript. Would I drop it in prod? No. But is it good enough for a PR to a senior who knows what’s up? Absolutely.

      • Passerby6497@lemmy.world · +3 · 5 hours ago

        Man, vibe coders really think highly of themselves and their AI outputs.

        As someone who actually understands the language used in AI generated scripts, AI is shit at writing code. It sometimes gets decent wins and helps me figure out something quicker than without, but I can count on my fingers the number of times I’ve gotten a good, and usable, bit of code from it. Vastly more often than not, I have to edit the code to make it run (because it hallucinates functions, parameters, and constantly uses reserved variables even after being corrected dozens of times) only to find out it doesn’t even give the right output, or more often, outputs nothing at all.

        You show the quality of your knowledge by the inverse of the trust you put in AI code. It’s decent at blocking out basic things, but anything past that is a crapshoot at best

      • Feyd@programming.dev · +13 · 8 hours ago

        I hate it when people submit PRs they can’t understand or explain. It is more work for me than just writing it myself. Also, this whole “AI can bootstrap an app!” line is fucking stupid. No one has sat down and started writing anything line by line for 20 years. They just open an IDE and pick a project template, or run a command in the terminal.

        • CallMeAnAI@lemmy.world · +2/−8 · 8 hours ago

          I understand what’s going on at the level I need. It’s easy, as someone who’s done this a billion times in other languages, to read the language and get the lay of the land. You’d be unable to tell it was AI-generated vs your junior team members’ work. And yes, better scaffolding is a feature. And it’s getting better.

          • Feyd@programming.dev · +9 · 8 hours ago

            You’re just completely wrong lol. You definitely don’t know jack about shit if you think you can AI generate code and people that actually know what they’re doing will find it indistinguishable from code by someone that is at least learning. And if I ask a question and someone throws up their hands and says AI wrote it, or more likely tries to wriggle out of it because they “understand what’s going on at the level I need”, I’m just going to call my boss and tell him to get them the fuck off my team.

          • HarkMahlberg@kbin.earth · +5 · 7 hours ago

            “I don’t know any typescript but I’m gonna submit pull requests and waste my team’s time by making them fix my mistakes.”

            This kind of rhetoric would make you unhirable in my industry, AI or not. What a disrespectful way to work with other people.

            • CallMeAnAI@lemmy.world · +1/−8 · 7 hours ago

              🤣 My AI vibe code is better than most juniors’ with less than 2 years. That number is only going up. The biggest challenge is making sure code is organized within the framework and the organization’s best practices and standards, i.e. senior dev shit.

              It’s literally your job as a senior to review code that’s not up to your standard. So many devs screeching about having to review code at that quality while simultaneously being paid in the top 5% 🤣

                • CallMeAnAI@lemmy.world · +1/−7 · 5 hours ago

                  I write tf, Nginx, and a bunch of other shit in the CNCF space. I’m a 2%er. I’m better than you. You use my code every day.

      • Ŝan@piefed.zip · +11/−2 · 8 hours ago

        It’s going to become impossible to be a “junior” coder.

        Which means it’ll become impossible to become a senior one. Which would be a problem, right?

        is it good enough for a pr to a senior who knows what’s up? Absolutely.

        K, þis is a weird take. You must have some really patient and forgiving seniors. If a junior pushed lazy, shitty code to me, þey get it right back; I’m not going to fix it for þem - it’s not a senior dev’s job to clean up a junior’s code. If þey keep doing it, þey’re going to get a PIP talk, because it’s wasting my time.

        • CallMeAnAI@lemmy.world · +2/−2 · 7 hours ago

          I agree it’s an issue.

          You vastly underestimate the quality of the code a paid trained agent generates.

          It’s not going to replace developers but it will drive down the need.

          • Ŝan@piefed.zip · +1/−1 · 4 hours ago

            Hmmm, possibly. I agree it’ll drive down demand, at least short term. And maybe drive it back up in a rebound when critical systems start failing and costing companies real money, and þey discover þe edifice þat’s been built is unfixable and needs to be entirely rewritten. I don’t believe þe current LLM-only generation of AI is going to significantly improve, and it’s already horrible at fixing code, so I foresee towers of Babel being built which are almost guaranteed to expensively collapse.

            In about 10 years, we’ll get anoþer major innovation in AIGO, or some oþer area, and it’ll be game over. I do believe we’re only one major level step from AGI. I don’t þink we’re þere yet, and won’t be for some years.

  • Damage@feddit.it · +9 · 11 hours ago

    I hate Nvidia, but if they are smart they’ll invest these profits into CPU development. Intel is in free fall, this is their best chance.

  • John Richard@lemmy.world · +8/−1 · 12 hours ago

    The bubble won’t pop. AI has become a national defense initiative. The only thing that will pop NVIDIA’s bubble is another chip designer coming out with something better. Maybe someone will use AI that ran on NVIDIA to build something else, therefore using NVIDIA to kill NVIDIA.

    • angstylittlecatboy@reddthat.com · +4 · 9 hours ago

      I think there is an AI bubble, but the aftermath will be like the dot com bubble: the internet didn’t go away, but a bunch of businesses that were only ever valuable because they were on the internet did.

      OpenAI won’t go away, but a bunch of companies whose products are pretty much wrappers for ChatGPT will.

      • Zos_Kia@lemmynsfw.com · +2 · 4 hours ago

        Funny, I would propose the exact opposite. OpenAI is doomed, they are committed to spending more money than they can ever hope to earn, and already have a hard time raising anything covering their operating expenses, let alone the training of new innovative models. Their life will only keep getting harder and they’ll never have it as good as they did in 2023.

        On the other hand, alternative models get better every day and have tokens that cost a fraction of those from large model makers. Some of what you call ChatGPT “wrappers” actually have solid and healthy business models and are burning reasonable amounts of cash (reasonable for VC backed businesses anyway). They’ll just switch to cheaper models when price pressure tells them to and they’ll be fine.

      • Sundray@lemmus.org · +2 · 7 hours ago

        companies whose products are pretty much wrappers for ChatGPT

        Yeah, once the rents go up for access to ChatGPT et al. (as they inevitably will), a lot of those companies are in for a rude awakening.

    • vacuumflower@lemmy.sdf.org · +1 · 7 hours ago

      I personally think it will, but as part of bigger events, like a power takeover. A few decades later, perhaps, not that soon. Imagine if someone or something respected and remembered as the ultimate good came out of the shadows to deliver our world (mostly the computer industry and the Internet) from evil (LLM bots, hate campaigns, all the Black Mirror stuff) by putting it all in hierarchical order and redoing the whole political system along the way. Democracy has failed; it’s the rule of the loudest, and thus of those with the best bots, all such things.

      Wouldn’t be the first time; such technological changes often drive political changes all over the world. Remember what coincided with automobiles, airships and airplanes, electric lighting, and radio becoming popular.

      Except in the ’00s everyone thought the changes would all be nice and positive.

    • Sundray@lemmus.org · +1 · 7 hours ago

      AI/ML is addicted to CUDA. AMD and even Intel have viable alternatives in terms of raw compute performance, but they’ll never natively support CUDA, and that’s what most of the software tooling is focused on right now.