Self-driving cars have already begun their journey in the world.
Recently, Google announced that its self-driving cars had traveled a million miles.
There is no doubt that self-driving cars are a huge benefit to the world. They can make a big impact in emergency transport, in vehicles like ambulances and fire trucks.
They also help reduce road-accident casualties: there would be no speeding, lane-cutting, or rash driving.
They definitely make our travel smoother.
While this is a big advancement in the field of AI and Robotics, many philosophers and humanitarians are having second thoughts about it.
While there is a solid argument that we are making remarkable progress in robotics, making human lives easier by the day, we are also faced with a much deeper philosophical dilemma.
There is also the problem that these self-driving cars, which are programmed to follow fixed rules, have to deal with human drivers who more often than not are simply reckless. In a recent survey, many drivers said there is no point waiting at a signal when the road is clear. The same attitude applies to cutting lanes.
Since self-driving cars are rule-based, they do not exercise human judgment. They wait at signals, stay in their lanes, and act strictly according to their rules.
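To make the contrast concrete, here is a minimal sketch of what "rule-based" means in this context. Everything here is hypothetical (the function and its inputs are illustrative, not any real vehicle's control code): the controller consults fixed rules and never improvises, unlike a human driver.

```python
def rule_based_action(signal: str, lane_clear: bool) -> str:
    """Return the car's action from fixed traffic rules."""
    if signal == "red":
        return "wait"        # always waits, even if the road is empty
    if not lane_clear:
        return "slow_down"   # never cuts into an occupied lane
    return "proceed"

# A human driver might run the red light when the road is clear;
# the rule-based car waits regardless.
print(rule_based_action("red", lane_clear=True))  # → wait
```

The point of the sketch is that the rules are unconditional: there is no branch for "unless I judge it safe to break the rule," which is exactly the flexibility human drivers exploit.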
There are a lot of philosophical questions regarding this matter.
What if a person needs immediate medical attention and must be taken to the hospital right away, and there is only a self-driving car in the vicinity? Would it open its doors to let a dying man in and change its route to get him to the hospital?
There is also the danger of rash drivers who cut others off in traffic or change lanes without indicating. What if they get in the way of these oh-so-obedient rule-based cars?
“It’s hard to program in human stupidity or someone who really tries to game the technology,” says John Hanson, spokesman for Toyota’s autonomous car unit.
And in countries like India, where there is a huge amount of traffic and no lanes marked on most roads, it becomes hard for these cars to follow their rules and cope with the congestion. This raises a problem of universality.
While some scientists are hopeful that someday these AI-powered self-driving cars will be capable of making decisions based on their current conditions, many argue that this would be the start of a robotic takeover.
One such dilemma is the classic trolley problem.
What is the trolley problem?
It questions the driver's utilitarian decision.
Consider a train or car going along its specified route. The driver sees a family of five tied to the track; to avoid hitting them, he has to switch to another track, where a single man is tied. What would the driver do?
Almost 68% of the drivers surveyed said they would switch tracks, killing one man in order to keep the family of five safe.
But what if it is a self-driving car? Then the main question becomes: to pull, or not to pull?
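The unsettling part is how simple the utilitarian rule looks once written down. The sketch below is purely hypothetical (no real car is programmed this way): it encodes "choose the track with fewer people," which matches what most surveyed drivers said they would do, yet deciding whether a machine should ever apply such a rule is precisely the open question.

```python
def utilitarian_choice(people_on_main: int, people_on_side: int) -> str:
    """Pick the track that minimizes casualties; stay put on a tie."""
    if people_on_side < people_on_main:
        return "switch"
    return "stay"

# Five people on the main track, one on the side track:
# the purely utilitarian rule says to switch.
print(utilitarian_choice(5, 1))  # → switch
```

Encoding the rule is trivial; deciding whether the rule is the right one, and who is responsible when it is applied, is the real dilemma the article is pointing at.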
Conclusion
While there are many debates and discussions revolving around the benefits and limits of these self-driving cars, the main point to remember is that each one is a tool, albeit an expensive, AI-based tool.
A tool designed to help mankind live an easier life.
I think it can be either good or bad. It all depends on how we design it and how we use it.