

For now, BMW is defaulting to a more traditional approach. If it requires a data package of some sort, it will probably have a recurring fee—and BMW says its customers are already comfortable subscribing to such add-ons.
Sounds like a fairly reasonable position imo, and that they listen to the outrage about heated seats (which tbh was ridiculous). I get the feeling that everyone who commented on this didn’t actually read the article, lol.
Full disclosure: I own a fairly recent BMW and do like it a lot. Would I have bought it with subscription based heated seats? Maybe not, but I do appreciate other things like having a physical button to go into battery save mode and not having to dive 3 touch screen menus down… or that it’s one of the most powerful hybrids in electric only mode (though not anymore I think)… or being generally more dialed back when it comes to driver assist features.
That said I will admit that it has a physical button that tells me to pay up when pressed, to enable automatic high beam control… though it’s not like it was an advertised feature (got it used).

Maybe i misunderstand what you mean but yes, you kind of can. The problem in this case is that the user sends two requests in the same input, and the LLM isn’t able to deal with conflicting commands in the system prompt and the input.
The post you replied to kind of seems to imply that the LLM can leak info to other users, but that is not really a thing. As I understand when you call the LLM it’s given your input and a lot of context that can be a hidden system prompt, perhaps your chat history, and other data that might be relevant for the service. If everything is properly implemented any information you give it will only stay in your context. Assuming that someone doesn’t do anything stupid like sharing context data between users.
What you need to watch out for though, especially with free online AI services is that they may use anything you input to train and evolve the process. This is a separate process but if you give personal to an AI assistant it might end up in the training dataset and parts of it end up in the next version of the model. This shouldn’t be an issue if you have a paid subscription or an Enterprise contract that would likely state that no input data can be used for training.