Regulation does not mean communism. We regulate even water, so why shouldn't we have regulations on Autopilot driving? The main question is: who is going to be liable for the injured and the dead? Who is going to cover the damage? Is it the driver's fault? Should Tesla be liable? Is it the third-party software provider? Regulations are needed to answer these and many other questions. We do not need communist-style regulations that prohibit things. We need regulations that steer the technology's progress in a safe direction, that push companies to improve their products, and that also protect those companies from open-ended liability.
"from Peter Mertens, Volvo's head of Research & Development, in an article published earlier this month: Every time I drive (Autopilot), I'm convinced it's trying to kill me. … Anyone who says we have systems that are better than human drivers is lying to you. Anyone who moves too early is risking the entire autonomous industry. We're the safety brand, and we're taking things slowly. Until the systems are better than a good human driver, Volvo won't go there." This is how responsible carmakers operate. Oh yeah, it is actually an assisted cruise control and not an autopilot. Also helps if the costumer isn't an idiot: "the Model S driver was watching - or at least listening to - a Harry Potter movie at the time of the accident" "The AutoSteer component is a safety-critical feature being Beta-tested on the general public. One wonders how that is that even legal." ...and we came to full circle. How can such a life supporting feature being in beta only and already released to the public? Where is the regulation when we need it???
If all bikes, cars, and trucks are AI-operated under the same 'regulated' system, then yes, I will let AI run my vehicle, Jetsons style. But that is a long time into the future. In the meantime, floor that gas pedal while you can... that 'primitive' privilege most likely has an expiration date...
Clarkson already mentioned something along these lines: https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/ The problem with all this is that few people understand that the technology can take you 95% of the way there but can't fill in the missing piece, because that requires too much data. Then the question becomes clear: will you be the guinea pig who ends up in the wall in that specific scenario? That's a risk most people won't take. It's a fat-tail event: it works most of the time, but when it doesn't ...
We're still in the infancy of AI. Today, we're lucky if an AI can tell a pedestrian or a baby from a fire hydrant. Lucky, and the AI needs to get it right for every frame, or else it feeds wrong data into its decision-making. In an entirely new situation, e.g. where controls are reversed or broken down, the AI will probably be 180 degrees wrong, turning a potentially dangerous situation into manslaughter. We won't be getting details about this, though; it'll be kept strictly private. Accountability will be minimized by those who profit, so there's absolutely no basis for trust.

What is good about AI is that it will be held to mandated maximum speeds and prevented from doing the foolish things humans do. It will also often continue to function despite adversity, danger and fast-paced action. However, this can be a double-edged sword if the AI mistakes a normal situation for an emergency, in which case the AI creates an entirely new dangerous scenario.

There is no sane reason to trust today's car autopilots for anything. Responsibility still lies with the human driver. These companies would rather blame the human than their faulty technology, while continuing to deliver empty promises about AI.

As for society, one shouldn't underestimate the human element. If humans can make AIs, they can also destroy them, or humanity. WWII showed how far psychopathic systems can go. There's usually no salvation in any extreme. Blind belief in rationality and efficiency is not an end in itself; it's entirely void and empty. There's no meaning in a world consisting only of AIs, at least not any human meaning.

Anyway, I'd rather not be a beta-tester of this immature technology, no matter how shiny. I'm sure many will defend such technology as better than humans, but frankly, it's not, not today. And if it were, what would we do with it? Who would own it, and how could it support a healthy economy?
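To make the per-frame point concrete, here is a toy sketch (every name and number below is invented for illustration, not taken from any real system): even a 99%-accurate frame classifier leaks a surprising amount of error into decision-making unless the decision layer smooths across frames.

```python
# Toy sketch: per-frame perception errors propagating into decisions.
# Assumes a hypothetical detector that is right 99% of the time per frame.
import random

random.seed(42)

FRAME_RATE = 30           # frames per second (assumed)
PER_FRAME_ACCURACY = 0.99  # assumed per-frame detection accuracy

def classify_frame():
    """Hypothetical detector: True if it correctly sees the pedestrian this frame."""
    return random.random() < PER_FRAME_ACCURACY

def naive_policy(frames):
    """Brakes only if *every* frame in the window saw the pedestrian."""
    return all(frames)

def smoothed_policy(frames):
    """Brakes if a majority of frames in the window saw the pedestrian."""
    return sum(frames) > len(frames) / 2

window = [classify_frame() for _ in range(FRAME_RATE)]  # one second of frames
print("naive decision (one bad frame flips it):", naive_policy(window))
print("smoothed decision (majority vote):      ", smoothed_policy(window))

# Over one second at 30 fps, P(at least one bad frame) = 1 - 0.99**30 ~= 26%,
# so a decision layer that trusts individual frames inherits roughly that
# failure rate, while the majority vote is far more robust.
```

The point of the sketch is only the arithmetic: small per-frame error rates compound quickly at video frame rates, which is why "getting it right for every frame" is such a hard bar.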
Here is a whole article about the moral dilemma of autonomous cars: http://hothardware.com/news/self-dr...h-the-harsh-reality-of-who-lives-and-who-dies See, I didn't dream up the problem... From "The Social Dilemma of Autonomous Vehicles": "1,928 participants were surveyed on a number of scenarios in which a self-driving car is faced with a moral dilemma that would result in the death of one or more people." "So if there is just one passenger aboard a car, and the lives of 10 pedestrians are at stake, the survey participants were perfectly fine with a self-driving car 'killing' its passenger to save many more lives in return."
Good link! My opinion: at this point in time, such complex, anticipatory, holistic scenario-choosing is more of a philosophical debate than a real-world AI problem to be solved right now. Because the technology is immature, it remains largely a theoretical R&D field. The problem with self-driving cars today is more about how to obtain quality 3D/4D observation and understanding of the surroundings and of the car's and passengers' state and dynamics, good enough to handle diverse scenes through different paces of traffic flow and disruptions. The point is: if the 3D/4D "map" is not good enough, all the decision-making software in the world won't help much (garbage in, garbage out).

These complex scenarios are likely less problematic with AI cars anyway, since they may generally drive more safely (much more boringly) than humans once the AI has a good enough grasp of the ever-changing reality around it. So this specific problem is more of a bonus, high-hanging fruit.

Given humanity's habit of putting private profit before human and environmental life, I suspect we will see laws along the lines of "don't step in front of a self-driving vehicle, or be liable for damages" rather than laws that protect human lives first and hold corporations accountable. Also, decision rules that go against existing traffic laws, e.g. swerving into the wrong lane, will be strictly prohibited, and so relegated to philosophy as well, at least until AI systems become robust enough to compete with human drivers in diverse scenes. It'll be more important to prevent escalations and to keep the AI standardized, predictable and thus simple. AI should be treated as a tool, nothing more fantastic, because that's what it is.

Like trading engines, the people building AIs will have to solve the crude, simple foundational problems first and then iterate many times before the systems become stable enough. As with most investment in anything new and shiny, there is a hype curve. To be clear, I'm not saying there's no complex decision-making in current AI car tech, at least in the pioneering systems, only that there are more pressing problems to solve, and the solutions are tightly constrained by the laws and expectations that apply to all vehicles on public roads.
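A toy illustration of the garbage-in-garbage-out point (the grid, the sensor model, and the "planner" here are all invented for the example, not from any real stack): a flawless decision layer driving on a degraded perceived map still hits things.

```python
# Toy sketch: a "perfect" planner fed an imperfect map still crashes.
import random

random.seed(0)

SIZE = 20
# Assumed ground truth: a grid where each cell is an obstacle with prob 0.2.
true_world = [[random.random() < 0.2 for _ in range(SIZE)] for _ in range(SIZE)]

def perceive(world, dropout):
    """Hypothetical sensor: misses each real obstacle with probability `dropout`."""
    return [[cell and random.random() > dropout for cell in row] for row in world]

def row_looks_clear(grid, row):
    """'Perfect' planner on the *perceived* grid: drive across a row iff it looks clear."""
    return all(not cell for cell in grid[row])

def row_actually_crashes(world, row):
    return any(world[row])

for dropout in (0.0, 0.3, 0.6):
    perceived = perceive(true_world, dropout)
    crash_count = sum(
        1 for r in range(SIZE)
        if row_looks_clear(perceived, r) and row_actually_crashes(true_world, r)
    )
    print(f"sensor dropout {dropout:.0%}: rows planned as clear that actually crash = {crash_count}")

# With 0% dropout the planner never crashes; as perception degrades, the same
# flawless decision logic starts steering into obstacles it cannot see.
```

Nothing about the decision logic changes between runs; only the quality of the map does. That's the whole argument for fixing observation before fancy scenario-choosing.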
There is no right answer to these problems. Even if a human is driving, there is no right answer. I find it odd that people harp on this topic (I've seen it in multiple news sources over the past month) when it has very little to do with the ultimate goal: reducing car fatalities in the general population.
A car that decides to kill me. Hmmm. I think I'll pass. Paint me old school. You know, these things will make medical malpractice lawsuits look like small claims court. A friggin' lawyer smorgasbord. You know why small airplanes cost so much? They don't have to. Even with a small assembly line, forget economies of scale, you could build Cessna 182s for far, far less than a Toyota Camry. Aside from the avionics, they are very simple machines. Light airplanes cost so much because of liability insurance. I just don't see autonomous cars, aside from a few niche segments, becoming ubiquitous in our lifetime. Not because of the technology, but because of the lawyers and our litigious society.
Interestingly, some of Google's self-driving cars are designed to exceed the speed limit (see the BBC coverage). Geek.com raises some questions about who will be liable for speeding, and there's a longer article on Slate.com. Compromises will be made, for sure.