In the next few years, we will see cars fully capable of accelerating, braking and even overtaking other cars on highways, all by themselves. However, they will still have controls that the driver can use to override them if the need arises. A fully autonomous car is still at the research stage, and it will take some more time before anyone can buy one. The technology that enables it, however, is slowly creeping into production cars (read about the Smartphone-controlled Range Rover Sport).
The Google car we see today is ‘too safe’: it goes by the book and follows every rule, making decisions in ways a human being might consider absurd. The problem is the way the car thinks and makes decisions, which is unlike the way humans do. It has yet to integrate smoothly with pedestrians and with cars driven by humans, who at times break rules and communicate via gestures and eye contact at crossings. A future self-driving car will therefore need some amount of aggression, autonomy and a way of communicating, so that it can make the split-second decisions that solve these problems.
The main idea behind self-driving cars is to minimise casualties by preventing accidents. A fully self-driving car will certainly be safer and more fuel-efficient, but it can never be perfectly safe. This raises a serious problem for researchers, who need to resolve the ethical dilemma of algorithmic morality: what should a car do when it finds itself in a situation where an accident is unavoidable? Consider a car that, through some unfortunate turn of events, is speeding towards a group of ten people and cannot stop in time. It can, however, save the group by swerving in a different direction and colliding with a wall.
Should the car be programmed to sacrifice the driver and minimise the loss of life? In general, people might be comfortable with the idea of a car that tries to minimise the death toll, but may feel differently knowing that they are sitting in a car that can self-destruct in an event like the one above.
The situation becomes even more complex if the car is carrying infants, who, compared to adults, have longer lives ahead of them. What should the car do then? Another view is that the choice of algorithm could be taken out of the manufacturer’s hands and given to the general public, so that buyers could opt for one before purchasing a self-driving car. In that case, should the law hold the buyers or the manufacturers responsible for a crash and the resulting loss of others’ lives?
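To make the dilemma concrete, the utilitarian rule discussed above can be sketched in a few lines. This is purely an illustration, not how any real self-driving system works; the maneuver names, casualty estimates and function names are all hypothetical. The point it makes is that the engineering is trivial once an objective is chosen, and that the entire ethical debate lives in the choice of that objective.

```python
# Hypothetical sketch of a purely utilitarian crash-time decision rule:
# among the candidate maneuvers, pick the one with the fewest predicted
# casualties. All data below is invented for illustration.

def choose_maneuver(maneuvers):
    """Return the maneuver with the fewest predicted casualties."""
    return min(maneuvers, key=lambda m: m["predicted_casualties"])

options = [
    {"action": "continue straight", "predicted_casualties": 10},
    {"action": "swerve into wall",  "predicted_casualties": 1},
]

best = choose_maneuver(options)
print(best["action"])  # → swerve into wall (sacrificing the occupant)
```

Note that the rule above treats every life as interchangeable; weighting occupants differently from pedestrians, or infants differently from adults, would mean changing the objective function, which is exactly where the disagreement lies.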
Self-driving cars would save a great many lives; it would therefore be unethical to halt their development at any point or to keep them out of our future. The ethical implications of the technology are all-encompassing, and philosophers as well as engineers have come together to solve a problem that might prove to be the biggest bottleneck in the future development of self-driving cars. It is important that this debate also reaches the general public, to gauge their sentiments and find out whether people would buy a car that is capable of self-destructing and takes a utilitarian view of human life.
Source: MIT Technical Review
Watch a video of the Google self-driving car project below.