Looks like this is an opt-in, supplemental crop mode for streamers whose content would work well in a vertical orientation. Talking-head stuff, live solo-artist music, etc.
The existing content experience is still there, but if you’re standing on the subway watching a streamer react to a press release about a game, you can turn your phone and watch them babble in portrait while you hold the pole.
Fucking lol. Just 6 weeks ago the brand team changed all the brand colors again. They went from purple back to black.
Somewhere in NY there is a sign company making a shitload of money swapping signs off the HBO building every few weeks.
Looking at the downvotes: remember, upvoting an article ≠ endorsing the shitty technology discussed in the article.
We shit on the technology in the comments, and upvote it so more of us can read about it and shit on it.
I think most of the “requirements” they’re referring to are the technical ones, not governmental.
North America’s residential HVAC landscape is pretty simple and dumb compared to a lot of what is happening in Europe. Dumb forced-air central systems dominate residential HVAC.
It sounds like they don’t like developing for all the weird hardware configurations that appear in Europe.
It’s not available to the public
Question is, do I downvote the crappy product because I hate it, or do I upvote it so other people can learn about it and hate it with me?
Fun fact: if you adjust for inflation, this machine is only $52 more than the original Switch was at launch.
This is basically the original pricing, adjusted for inflation + Trump’s 20% Chinese manufacturing tariff.
Looks like we’re seeing the impact of inflation + tariffs.
The OG Switch was $300 in 2017. This console would be about $350 if you adjusted for inflation.
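If anyone wants to sanity-check that, the math is just the launch price times an inflation factor, with the tariff on top per the comments above. Quick Python sketch; the inflation factor here is a rough placeholder implied by the ~$350 figure, not an official CPI ratio:

```python
# Rough sanity check of the pricing math in this thread.
# INFLATION_FACTOR is a placeholder implied by the ~$350 figure above;
# pull real CPI numbers from BLS if you want to be precise.
LAUNCH_PRICE_2017 = 300.00
INFLATION_FACTOR = 1.17   # placeholder, not an official CPI ratio
TARIFF = 0.20             # the 20% tariff mentioned above

inflation_adjusted = LAUNCH_PRICE_2017 * INFLATION_FACTOR
with_tariff = inflation_adjusted * (1 + TARIFF)
print(f"inflation-adjusted: ${inflation_adjusted:.2f}")  # ~$351
print(f"plus 20% tariff:    ${with_tariff:.2f}")         # ~$421
```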
In its suit, Samsung alleged that Oura had a history of filing patent suits against competitors like Ultrahuman, RingConn, and Circular for “features common to virtually all smart rings,” such as sensors, batteries, and common health metrics.
The problem isn’t the features, it’s that Samsung is copying the very concept of a smart ring. Oura was the first company to make and patent biometric smart rings. So, yeah, if you make a biometric smart ring without paying them, you’re getting sued. That’s how patents work.
For the past 30 years, Samsung’s consumer product development strategy has been 75% “copy the competitors, then pay lawyers to fight it out.”
This guy clamps
Or it’s just the classic Apple “launch some weird shit with a cool interaction model or form factor, but we don’t really know how people will -actually- use this.”
AppleTV, AppleWatch, Firewire iPod, HomePod, etc. They kick it out, people complain about it, Apple learns from the users who adopted it, then they focus the feature set when they better understand the market fit.
IMHO, it seems like that’s the play here. Heck, they even started with the “pro” during the initial launch, which gives them a very obvious off ramp for a cheaper / more focused non-pro product.
At least one of those guys is able to ship a product that does what it was advertised to do.
The problem with the Vision Pro is that no one wants to pay $4000 for what it does.
The Vision Pro is a cool solution in search of a user need.
Voice control is a user need that Apple struggles to deliver solutions for.
I think enterprise needs will ensure that people develop solutions to this.
Companies can’t have their data creeping out into the public, or even into other parts of the org. If your customer, roadmap, or HR data got into the wrong hands, that could be a disaster.
Apple, Google, and Microsoft will never get AI into the workplace if AI is sharing confidential enterprise data outside of an organization. And all of these tech companies desperately want their tools to be used in enterprises.
Yeah, a lot of those studies are about stupid stuff like an in-app LLM to check grammar, or a diffusion model to throw stupid clip art into things. No one gives a shit about that stuff. You can easily just cut and paste from OpenAI’s experience and get access to more tools there.
That said, being able to ask an OS to look at a local vectorized DB of texts, images, and documents, recognize context, then compose and complete tasks based on that context? That shit is fucking cool. (Toy sketch of the idea below.)
But a lot of people haven’t experienced that yet, so when they get asked about “AI,” their responses are framed by what they’ve experienced.
It’s the “faster horse” analogy. People who don’t know about cars, buses, and trains will ask for a faster horse when you ask them to envision a faster mode of transport.
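For anyone who hasn’t played with that kind of setup, here’s a toy sketch of the “local vector DB + context-aware task” idea. Everything in it is illustrative: the “embedding” is just word counts standing in for a real embedding model, and the compose step builds a prompt string instead of calling into apps, so treat it as a cartoon, not how any shipping assistant actually works.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Placeholder "embedding": bag-of-words counts instead of a real embedding model.
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Pretend this is the on-device index of texts, notes, and documents.
local_db = [
    "Flight to Denver on the 14th, confirmation ABC123",
    "Mom's birthday dinner is Saturday at 7pm",
    "Quarterly roadmap review moved to next Thursday",
]
index = [(doc, embed(doc)) for doc in local_db]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank local items by similarity to the query and keep the top k as context.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def compose_task(query: str) -> str:
    # In a real system this context would go to an on-device model that can
    # call into apps (calendar, mail, etc.) to actually complete the task.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nTask: {query}"

print(compose_task("when is my flight to denver?"))
```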
Why can’t it work?
I work on AI systems that integrate into other apps and make contextual requests. That’s the big feature that Apple hasn’t launched, and it’s very much a problem that others have solved before.
The new models are being fixed by “nut clamping”
There are a few of us here who are closer to Satya’s strategic roadmap than you might think.
My guess is that, given Lemmy’s software developer demographic, I’m not the only person here who is close to this space and these players.
From what I’m seeing in my day to day work, MS is still aggressively dedicated to AI internally.
Well, there’s your problem