A representative for Tesla sent Ars the following statement: “Today’s verdict is wrong and only works to set back automotive safety and jeopardize Tesla’s and the entire industry’s efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial. Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator—which overrode Autopilot—as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash. This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver—from day one—admitted and accepted responsibility.”

So, you admit that the company’s marketing has continued to lie for the past six years?

  • UnfortunateShort@lemmy.world · 22 hours ago

    I’m kinda torn on this - in principle, not this specific case. If your AI performs on par with an average human and there is no known flaw at fault, I don’t think you should be at fault either.

    • haloduder@thelemmy.club · 7 hours ago

      If the company is penalized for being at fault, then they will have reason to do better in the future.

      I don’t even give a flying fuck about how autopilot compares to the average driver. Tesla has the resources to make its technology better, so we as customers should all hold them to the highest possible standard. Anything less just outs you as a useful idiot; you’re willing to accept less so someone richer than you can have more.

    • Eranziel@lemmy.world · 21 hours ago

      I think that’s a bad idea, both legally and ethically. Vehicles cause tens of thousands of deaths - not to mention injuries - per year in North America. You’re proposing that a company that can meet that standard is absolved of liability? Meet, not improve.

      In that case, you’ve given these companies license to literally make money off of removing responsibility for those deaths. The driver’s not responsible, and neither is the company. That seems pretty terrible to me, and I’m sure to the loved ones of anyone who has been killed in a vehicle collision.

      • UnfortunateShort@lemmy.world · 4 hours ago

        Yeah, but you can just set targets and penalize companies for missing them. Number of accidents per year, for example. Even assuming autonomous vehicles only ever become as good as the average driver, that would already be a substantial improvement over where things are now. For me, that’s the point where I’d start to phase out manually operated vehicles. I believe they will get significantly better than that eventually.

    • Phoenixz@lemmy.ca · 22 hours ago

      And that is the point: Tesla’s “AI” performs nowhere near human level. Actual full self-driving sits at the top of the 5-level automation scale, while Tesla’s system is only around level 2 out of those 5.

      Tesla has claimed to have full self-driving for about a decade now, and it has been and continues to be a complete lie. Musk claimed long ago that he could drive a Tesla autonomously from LA to NY, while in reality it has trouble leaving the first parking lot.

      I’m unsure if and how much has changed there, but since Elmo Musk spends more time lying about everything than actually improving his products, I would not hold my breath.

      • OctopusNemeses@lemmy.world · 11 hours ago

        The original comment is perpetuating the lie, intentional or not. They rely on fundamentally flawed soundbites that are precisely crafted as propaganda, not to be informative or truthful at all.

        Right off the bat they’re saying “in principle”, which presumes the baseline lie that “full self driving” has been achieved. Then they strengthen their argument by reinforcing the idea that it’s functionally equivalent to humans (i.e. generalized intelligence). Then they cap it off with “no known flaw”. Pure lies.

        Of course, they’ve hedged by implying it’s an opinion, but they strongly suggest it’s the most correct one anyway.

        I’m unsure if and how much has changed

        This demonstrates exactly how effective the propaganda is. They set up scenarios where nobody honest will refute their bullshit with certainty, even though we know no existing system is on par with human drivers. Sure, they can massage data to say that under certain conditions an automated driving system performed similarly by some metric or whatever. But that’s fundamentally not what they are telling the lay audience. They’re lying in order to lead the average person to believe they can trust their car to drive them as if they were a passenger and another human were behind the wheel. This is not true. Period. There is no existing system that does this. There will not be in the foreseeable future.

        The fact of the matter is that technological discussion is more about this kind of propaganda than about the technology itself. If that weren’t the case, more people would be hearing about the actual technology and its real limitations, not all the spin-doctoring. That leads to uncertainty and confusion, which leads to preventable deaths.

    • w3dd1e@lemmy.zip · 22 hours ago

      I think the problem is that for a long time Tesla, and specifically Elon, went around telling everyone how great their autopilot was. Turns out that was all exaggeration and sometimes flat-out lying.

      They showed videos of the car driving on its own. Later, we found out it was actually being controlled remotely.

      Yeah, the driver wasn’t operating the vehicle safely, but Tesla told him that he didn’t have to.