This past June, the United States National Highway Traffic Safety Administration announced a probe into Tesla's Autopilot software.


Data gathered from 16 crashes raised concerns that Tesla's artificial intelligence (AI) may be programmed to disengage when a crash is imminent. That way, the car's driver, not the manufacturer, would be legally liable at the moment of impact.
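To see why that pattern raises eyebrows, consider a deliberately simplified sketch. This is a hypothetical illustration, not Tesla's actual code or control logic, and every name in it (time_to_collision, DISENGAGE_THRESHOLD_S) is invented: a controller that hands control back to the human the instant a collision becomes unavoidable, so the system is technically "off" at impact.

```python
# Hypothetical sketch of the disputed behaviour: NOT Tesla's code.
# All names and thresholds here are invented for illustration.

DISENGAGE_THRESHOLD_S = 1.0  # hand back control within the last second

def control_step(time_to_collision: float, autopilot_engaged: bool) -> bool:
    """Return whether the autopilot remains engaged after this step.

    If a crash is imminent, the system disengages, so a later log
    would show the human driver 'in control' at the moment of impact.
    """
    if autopilot_engaged and time_to_collision < DISENGAGE_THRESHOLD_S:
        return False  # autopilot off; responsibility shifts to the driver
    return autopilot_engaged
```

Nothing in the public record shows code like this; the concern is simply that disengagement timing can shape who appears responsible in the crash log.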

It echoes the revelation that Uber’s self-driving car, which hit and killed a woman, detected her six seconds before impact. But the AI was not programmed to recognise pedestrians outside of designated crosswalks. Why? Because jaywalkers are not legally there.
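The failure mode is easy to state in code. The sketch below is a hypothetical reconstruction of the general pattern, not Uber's perception stack, and the names (Detection, in_any_crosswalk, classify) are invented: a classifier that only treats a detected object as a pedestrian when it appears inside a mapped crosswalk, so a jaywalker is demoted to an "unknown object" that never triggers pedestrian avoidance.

```python
# Hypothetical illustration of the reported failure mode: NOT Uber's code.
# Detection and all function names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    x: float
    y: float
    looks_like_person: bool

def in_any_crosswalk(det: Detection,
                     crosswalks: list[tuple[float, float, float, float]]) -> bool:
    """Axis-aligned bounding-box check against mapped crosswalk regions."""
    return any(x0 <= det.x <= x1 and y0 <= det.y <= y1
               for (x0, y0, x1, y1) in crosswalks)

def classify(det: Detection, crosswalks) -> str:
    # The flawed assumption: pedestrians only exist where they are
    # legally expected. Anyone outside a crosswalk is labelled
    # 'unknown' and never enters the pedestrian-avoidance logic.
    if det.looks_like_person and in_any_crosswalk(det, crosswalks):
        return "pedestrian"
    return "unknown"
```

Encoding the legal map into the perception layer means the system's model of the world, not the sensor, decides who is visible.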

Some believe these stories are proof that our concept of liability needs to change. To them, continuous, unimpeded innovation and the widespread adoption of AI are what our society needs most, which means shielding innovative corporations from lawsuits.

But what if, in fact, it’s our understanding of competition that needs to evolve instead?

If AI is central to our future, we need to pay careful attention to the assumptions about harm and benefit that are programmed into these products. As it stands, there is a perverse incentive to design AI that is artificially innocent.

A better approach would involve a more extensive harm-reduction strategy. Perhaps we should encourage industry-wide collaboration on certain classes of life-saving algorithms, designing them for optimal performance rather than proprietary advantage.