Self-driving cars: The good, bad and unknown
The automobile is one of the most prominent and recognizable symbols of freedom, independence, and responsibility in the world. One major reason turning sixteen is so highly anticipated in the United States is that it marks the age at which most teenagers become eligible to legally drive a car. Today’s millennials are no exception to this cultural phenomenon. Bloomberg recently reported that new data from J.D. Power & Associates found that millennials now account for 27 percent of new car sales, up from 18 percent in 2010. They have surpassed Gen X to become the second-largest group of new car buyers after their boomer parents. These data suggest that millennial car buying, which declined during the recent economic recession, may be making a comeback: millennials are finding jobs and relocating to the suburbs and smaller cities, where public transport is spotty.
As recently as 2009, the concept of a self-driving car still seemed like something out of a Ray Bradbury short story. Today, the so-called “autonomous car” is far more than science fiction, and it is becoming more of a reality every day.
A Future State of Driving Nirvana
By now most people are aware of the futuristic prototypes that big names like Google have been testing. The front page of Google’s proclaimed “Self-Driving Car Project” reads, “What if it could be easier and safer for everyone to get around?” Below it is a link to a video that features Google’s prototype for a “vehicle that’s designed to take you where you want to go at the push of a button.” These cars have sensors designed to detect objects like pedestrians, cyclists, and other vehicles as far as two football fields away in all directions. Software processes that information to help the car navigate safely, without the risk of the “pilot” becoming tired or distracted the way human drivers do.
Google states the project has been up and running since 2009, but the vision of a self-driving car dates back to as early as the 1939 New York World’s Fair, where visitors were presented with the dream of automated highways. Google’s site states that in the mid-2000s, the Defense Advanced Research Projects Agency (DARPA) organized Grand Challenges in which teams competed with self-driving vehicles. Google continues, “We’ve self-driven over 1 million miles and are currently out on the streets of Mountain View, California and Austin, Texas.” The cars, like people, are programmed to answer several “key questions” when driving: “Where am I? What’s around me? What will happen next?” and “What should I do?”
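The four “key questions” above amount to a sense-predict-act loop. The sketch below is purely illustrative, not Google’s actual software: the `Detection` type, the 30-meter hazard threshold, and the two actions are all invented here to show how the questions map onto a decision function.

```python
# Illustrative sketch of the four "key questions" as a decision loop.
# All names and thresholds are hypothetical, not Google's real system.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # e.g. "pedestrian", "cyclist", "vehicle"
    distance_m: float  # distance from the car, in meters
    closing: bool      # is the object moving toward our path?

def decide(position: str, detections: list[Detection]) -> str:
    """Answer the four questions and return an action."""
    # 1. Where am I?        -> `position`, assumed supplied by localization
    # 2. What's around me?  -> `detections`, assumed supplied by the sensors
    # 3. What will happen next? -> a crude prediction: is anything on a
    #    closing course within a (hypothetical) 30 m danger radius?
    hazard = any(d.closing and d.distance_m < 30.0 for d in detections)
    # 4. What should I do?  -> choose the conservative action
    return "brake" if hazard else "proceed"
```

A real system answers each question with dedicated subsystems (localization, perception, prediction, planning); the point here is only that the questions compose into a repeated decision.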
A Business Insider article predicts 10 million self-driving cars will be on the road by 2020. Clearly, this technological phenomenon is quickly becoming reality. The article states that the biggest benefit of self-driving cars will be an increase in road safety. In the United Kingdom, the global consultancy KPMG estimates that self-driving cars will lead to 2,500 fewer deaths by 2030. If this is true, one can only imagine the impact self-driving cars would have on road safety in the much larger United States, which sees approximately 30,000 motor vehicle fatalities annually.
Can We Take Humans out of the Equation?
On the surface, the autonomous car sounds like a safe, innovative, and evolutionary idea. Impressive as it is, however, there are major concerns that cannot be ignored. Google proudly states that its self-driving cars drove their first 1 million miles with only one accident, which was caused by a human driver rear-ending one of the Google cars. (A scan of Google’s monthly self-driving car report, however, shows that in February a minor accident occurred while the Google car was in autonomous mode.)
Google’s experience shows that self-driving cars seem to do pretty well on their own, but they will still require human supervision. A New York Times article explains that the cars will be able to “hand over” the wheel to their human drivers when they encounter complex driving situations or emergencies. Automotive engineers have stated there is no easy solution to this “handoff problem.” How will someone distracted by a phone, or otherwise disengaged, snap to attention and retake control of the car in a fraction of a second during an emergency? John Leonard, a professor of mechanical engineering at the Massachusetts Institute of Technology, said, “The whole issue of interacting with people inside and outside the car exposes real issues in artificial intelligence.” He continued, “The ability to know if the driver is ready, and are you giving them enough notice to hand off, is a really tricky question.” The danger here is that by inducing human drivers to pay even less attention to the road, the safety technology may be creating new hazards.
The Times article states that the Tesla Motors, Inc. autonomous car was reported to have performed well in freeway driving. The company recently fixed an error that had previously caused the car to unexpectedly veer off onto freeway exits. However, on city streets and country roads, the car, which uses only a camera to track the roadway by identifying lane markers, did not follow curves smoothly. It also did not slow down when approaching turns. Tsuyoshi Yamaguchi, Renault-Nissan’s executive vice president for technology development, said, “There are certain limitations depending on the condition of the weather. For example, if you are in heavy snow or rain, it is impossible to have autonomous driving.” Obviously, this technology is far from perfected.
Another issue concerns the human and social impact. Even if these cars increase road safety and overall comfort behind the wheel, is that worth giving up our ability to make decisions for ourselves and take responsibility for the way we travel? The laziness that goes hand-in-hand with these autonomous cars cannot be ignored. Driving – a symbol of human freedom, ability, and control – is now being automated to make it even easier for people to disengage from the world around them. Instead of putting effort into doing something we do every day, like driving, correctly and responsibly, people now expect it to be done for them. Imagine someone who relies solely on an autonomous car finding himself in a situation where he needs to physically operate a car but is unable to do so.
Too Good to be True?
Many are asking: will the near-perfect record of Google’s autonomous cars last forever, especially in a future where people have less experience driving themselves than being driven from place to place? Software is notoriously error-prone, no matter how flawless it may seem. Moreover, when a failure occurs, who is accountable? For example, what if other technologies upon which motor vehicles rely, such as traffic signals, fail? In a situation where a police officer is directing traffic, will autonomous cars accurately and consistently interpret human signals and negotiate accordingly? What if they don’t? Who is responsible?
Google recently came out with a report on its cars’ “disengagements,” which the Department of Motor Vehicles (DMV) defines as deactivations of the autonomous mode in two situations: (1) “when a failure of the autonomous technology is detected,” or (2) “when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.”
Of the 341 total disengagements, 272 were due to a “failure of autonomous technology,” where the car’s computers detected a fault of some sort and handed control to the human, signaling that a takeover was needed with a “distinct audio and visual signal”; the rest were cases in which the test driver took manual control for the safe operation of the vehicle. Google points out, though, that the company’s objective is not necessarily to minimize the number of disengagements but to “gather as much data as possible to enable us to improve our self-driving system.”
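The split between the DMV’s two disengagement categories follows by simple subtraction from the totals cited above; the short calculation below makes the breakdown and the technology-failure share explicit. (The variable names are this article’s, not the DMV’s.)

```python
# Breakdown of Google's reported disengagements, from the figures above.
TOTAL = 341                 # all disengagements in the report
TECH_FAILURE = 272          # category (1): car detected a fault and handed off

# Category (2) follows by subtraction: the test driver took
# manual control for the safe operation of the vehicle.
DRIVER_INITIATED = TOTAL - TECH_FAILURE   # 69

# Roughly four out of five disengagements were technology-detected faults.
tech_share = TECH_FAILURE / TOTAL         # ~0.80
```

In other words, in most cases the car itself flagged the problem and requested the handoff, rather than the human preemptively taking over.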
To many observers this is a complicated situation. When an injury or even a death occurs and an automated vehicle is involved, who gets the blame? It is unclear whether insurance premiums will go down for a car that is not 100 percent controlled by the human being behind the wheel. It is understandable that many individuals would be nervous about handing all the power to a computer, which could malfunction and put the driver in a more dangerous situation. If a human driver caused a dangerous situation at their own or another’s expense, at least the responsibility would lie with a living, breathing person.
According to a 2014 RAND Corp. study on autonomous vehicles, there will still be a need for liability coverage, but that could change over time. RAND says product liability might incorporate the concept of cost-benefit analysis to mitigate the cost of claims to manufacturers. Since insurance in the United States is state-regulated, each jurisdiction has its own set of regulations for auto insurance. Some states follow a “no-fault” concept, where insurers pay the injured party regardless of fault, whereas other states follow a tort system. Important open questions include whether auto insurance policies will become more uniform for autonomous vehicles and whether the federal government will have to play a bigger role. Even though the number of accidents is expected to decrease with autonomous vehicles, the cost of replacing damaged parts will most likely increase due to the complexity of the components. As of now, these important questions regarding insurance remain widely unanswered and present obstacles to future adoption.
Who’s Watching?
Privacy issues are also coming to light, since automakers already collect and store location and driving data from millions of cars on the road today. In the approaching era of self-driving vehicles, privacy concerns may rise to new levels and include intrusive data collection on habits and patterns inferred from driving data. According to John Simpson, an advocate with California-based Consumer Watchdog, “Once this technology is widely adopted, they’ll have all sorts of information on where you’re driving, how fast you’re going, and there’s no control over what Google might do with it.” Personal information is already collected by automakers, according to a report issued by the Government Accountability Office last year, which found there were not adequate consumer protections in place. AAA, the nation’s largest motorist organization, has asked car companies to adopt a consumer bill of rights setting stricter privacy standards. Carmen Balber, Consumer Watchdog’s executive director, said, “There’s broader privacy implications … How often do you happen to drive your car to a liquor store, and will that information be provided to your insurance company? Will information on where you spend your Saturday nights be subpoenaed in a divorce proceeding?”
The recent and rapid advances regarding the autonomous vehicle are impressive. Every day, the possibility of a world operated by self-driving cars becomes more realistic. However, that very reality raises many other issues. Namely:
- What will happen if we cede our ability to operate a vehicle to computers? Will we undermine our own humanness?
- Will we encourage laziness? How will this impact our cognitive function and development?
- What will become of insurance policies and the insurance industry’s structure in an environment where we put our lives in the hands of self-sufficient technology?
- Is the possibility of reducing human error worth abandoning our power to operate a vehicle, and what about those who genuinely like driving?
These are only some of the questions that come to mind when contemplating the ramifications of the autonomous vehicle being introduced and potentially replacing our current cars. These autonomous riding machines are without a doubt a technological phenomenon. Despite the enormous amount of work still to be done on the functioning and efficiency of these vehicles, they have already begun to make an appearance in our everyday lives as features of modern cars.
The full monty is just around the corner.