This question implies that road safety can be solved, when in fact increasing safety means merely mitigating danger to a workable minimum. In a chaotic, open, largely unpredictable world, driverless cars will never be 100% safe, so they cannot 'solve' the road safety problem. Driverless cars will, however, make traversing roads either more or less safe, and the binary choice between approving or rejecting driverless cars on roads depends on that safety level – upon saying yes or no. Arizona, California, Singapore, and London have already said yes to a few driverless cars in the hope of improved safety and improved markets. The London Oxbotica cars have had zero accidents, whereas Uber and Tesla, which run more cars, have had numerous: more hazards over a longer time equals more risk, hence more accidents. Remarkably, on a per-mile comparison, driverless cars – in the case of Tesla – have fewer accidents than driven ones. But that figure counts Autopilot cars in conveniently mapped terrain, a small sample, and accidents minus unreported near-misses. These driverless cars are flawed, and the evidence around 'disengagements' – human interventions during malfunction – is too incomparable to embrace driverless today or tomorrow. The logic for trials in these cities is that short-term sacrifices are worth the long-term potential to save lives. The answer, then, is this: road safety is never solvable but it is improvable; safety has improved over the past century, and technical and social trial-and-error means driverless cars could further improve it, though they will generate new problems. This answer is a prediction: it extrapolates from the continually falling intervention rates for semi-driverless cars and acknowledges possibilities as positively long-tailed as well as negative.
My happily consequentialist argument holds that, regardless of nebulous risks, problematic byproducts, and icky novelty, the otherwise lost lives saved through positive long-tail automation outweigh the risky uncertainties of negative long-tails. Those positive possibilities provide the best justification for the processes and purposes of responsible innovation (what else could hold more weight?), especially when entrenched standard cars can only be innovated, never invented again or outlawed. First, I will explain why the precautionary principle common among tech policymakers flounders. Second, I will meet the common objections to driverless tech with answers.
The Mistaken Precautionary Principle
Driverless cars will potentially have bad unforeseen consequences that, however unlikely they seem now, are still possible. These need to be considered in light of public trust, of course. AI researchers call these unlikely but possible disasters 'negative long-tail effects'. The possible negative long-tails of new tech fit well into a pessimistic sensibility and our predisposed suspicion of the new and contrarian. But possibilities cut both ways, to positive long-tails as well as negative. AI researchers promote the probability that driverless cars will become safer precisely because ordinary cars are already so dangerous. Every year roughly twice as many people die in car crashes as in wars, terrorist massacres, and crimes counted together. At least 1 million die due to daydreaming, drinking, fatigue, or other human foibles while driving or dodging cars. In some ways driverless cars are uniquely risky, true – in cyber-terrorism, for example, or mechanical and electrical failure – but humans are uniquely risky in their own ways. Biological failure is as common as technological failure yet is irrationally more accepted, out of anthropocentric bias. Driverless just has to be, remember, better than driven.
Yet publics, journalists, and policymakers argue against driverless cars because: humans can drive better than algorithms, driverless cars are never autonomous but reliant on connection, accidents and lawsuits will endure, the trolley problem dubiously values some lives over others, networks have unique risks, and citizens will be coerced.
All true. However, these risks never outweigh the benefits of introducing driverless cars; to refuse is a sin of inaction. Even if, say, 990,000 died at the wheels of driverless cars, that is preferable to 1 million dying at the hands of human error behind the wheel. Can you name a better standard for justifying their introduction than saving more lives than otherwise? Upgrading cars is less novel than introducing nuclear power or aeroplanes, for example, which gave us efficient electric energy, world travel, and deaths. Both are very safe today because people learned from decades of dangerous real-world experiments; driverless car experiments have so far been safer than those because cars have already been around, and mitigated, for a long time. Academics, like my charming lecturer Jack Stilgoe, who warn against driverless cars rely on the precautionary principle: that being wise entails slowing down and reviewing before permitting new tech. However, for the principle to make sense it must be based on predictions or data. It makes sense to be agnostic about tech benefits, but choosing inaction is choosing, on faith, to avoid hypothetical benefits as well as hypothetical risks. If tangible benefits outweigh hypothetical disasters, it is logical to allow risk rather than to hold back. A good analogy is the discovery of fire. Join me in a thought experiment. Early humans have just discovered fire. One elder rightly says 'fire is scary' and 'it could be dangerous and damage equality in ways we don't yet foresee' (bushfires, carcinogenic smoke, misuse as a weapon, burnt land, the loss of communities without fire), 'so we should only use fire in ways we understand, like burning vegetation for convenient clearings. Or perhaps roll back, even. Better safe than sorry.' Unfortunately, an appeal to the precautionary principle here means cooking, heating, and the deterrence of predators would never have been discovered. And that would obviously have been a worse outcome.
The hypothetical benefits do outweigh the risks of driverless cars, despite the objections I have heard in conversations and read in public opinion surveys. Here are the typical objections, with answers, for you to decide for yourself.
Humans still drive better than driverless tech
Driverless cars just have to become better (safer, even more sustainable) than the average human driver to merit their introduction. Most people I speak with – and publics in surveys – are surprised by predictions that driverless cars will drive better than their human equivalents. First, because most people consider themselves above-average drivers and underestimate the dangers of driving. The algorithms, though, are not competing with Lewis Hamilton (who once got a speeding ticket in France) but with fallible everyone, whose incompetence we irrationally ignore. Second, sceptics muster evidence of crashes and of the roughly 1% of time that humans must intervene in driverless cars, correctly claiming that an intervention once every 200,000 miles is riskier than once every 5 miles because it cultivates naive complacency; driverless aficionados are, indeed, more likely to crash. Agreed, the complacency and the emotionally salient crashes matter, but they need not detract from the fact that intervention rates trend downward, so it becomes ever more likely that driverless cars will become safer than humans. Still, at present: a fleet of 100 present-spec driverless cars would need 8 billion miles of trial-and-error learning to pass the safety threshold humans set. That would take 400 years. Setting aside Tesla's dubious claim to have reached 1 billion miles already, 400 years is a misestimation: intervention rates have fallen from 1.8% (98.2% autonomous) for the NavLab 2 in 1996 to 1% (99% autonomous) for a Delphi car in 2015. (The NavLab 2 went from San Francisco to Pittsburgh rather than to New York, which actually makes the improvement between the two cars greater still.) Both are far better than an intervention every few miles in the very first autonomous vehicle prototypes, and in the first run from Copenhagen to Munich.
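The 400-year figure above is back-of-envelope arithmetic, and it can be made explicit. The sketch below uses illustrative assumptions of my own, not official data: a 100-car fleet, 8 billion miles of learning required, and roughly 200,000 miles per car per year (about 550 miles a day), which is the per-car rate implied by the essay's own numbers.

```python
# Back-of-envelope check of the fleet-mileage claim.
# All three figures are assumptions for illustration, not measured data.
FLEET_SIZE = 100                     # present-spec driverless cars
MILES_REQUIRED = 8_000_000_000       # trial-and-error miles to reach the threshold
MILES_PER_CAR_PER_YEAR = 200_000     # ~550 miles a day, implied by the 400-year claim

fleet_miles_per_year = FLEET_SIZE * MILES_PER_CAR_PER_YEAR  # 20 million miles/year
years_needed = MILES_REQUIRED / fleet_miles_per_year
print(years_needed)  # 400.0
```

The arithmetic also shows why the downward intervention trend matters: doubling the fleet or halving the required miles halves the wait, so any reduction in miles-needed compounds quickly.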
Prediction from the past, also, extrapolates linearly into a future more likely to fluctuate than to continue prior trends monotonously. Pardon the clunky causal terms, but known-unknowns and unknown-unknowns will provide breakthroughs from cousin domains like drones, facial recognition, and simulations like CARLA – which I have used – and the evidence points to accelerated progress more than regress or surprise catastrophe. Citing bad unknowns as precautionary reasons is negligible grounds merely because they are bad: bad unknown-unknowns are no more probable than beneficial ones, so siding with bad possibilities is counterfactual preference rather than caution, a prophecy rather than a prediction.
Human driving is of a different kind than computer driving, so the two can never be compared.
Actually, the differences are of degree. Critics of automation like Lisanne Bainbridge rightly assume that a 'real' driver must remain competent, forever alert, so as to keep any renegade computing by computers in check. But in the future, perhaps not, because humans are just as computational, as algorithmic, as driverless cars. (The first things named 'computers', remember, were clerks.) Renegade computing already happens in the brains behind driven accidents. Both human driver networks and computer driver networks rely on deep neural networks' pattern recognition, so the difference between them is exaggerated for A-to-B goals. The artificial neural networks behind the cars' calculation and motion were gleaned from the biological neural networks that comprise our mammalian motor system. Both, despite different embodiment, rely on data sets and pattern recognition; attributing different essences to them is unfounded, given that pairing goals with rules humans train themselves to follow automatically and implicitly – like 'if-hazard-then-avoid' or 'if-residential-then-20mph' – is hardly species-specific. Even our notion of humans as independent and individually responsible misrepresents enmeshed circumstances: our individuality and driving experiences are actually dividable, hence dividual, and interdependent, formed by our diverse brain states and social groups, akin to the domain states and networks of automated cars.
Driverless cars are never autonomous.
'Self-driving' and 'autonomous' are misleading names: the cars are driven by machine-enacted algorithms. Not being autonomous, i.e. being interdependent, is actually their advantage. The cars learn within an integrated network, communicating information about each other's learnt patterns and movements. No driverless car learns alone; the cars learn as a fleet, a group of cars owned by the same company running the same algorithm on the same hardware. Far from cars failing to compute what each other are doing, cars within the same ecosystem will know ahead of time where a car will swerve or which junction it will veer into. So the risk of accidents caused by mistaken intentions among drivers will shrink, so long as there are no messy human drivers on the road to interfere. The biggest challenge, a current impossibility, is removing those human drivers, with their far more limited access to information about other cars' and drivers' motions. Future algorithm-led cars would carry less risk precisely because they are never autonomous or self-driving but intelligently designed to be linked. Granted, introducing driverless cars into new environments entails strange new parts and actors – say, an unfactored earthquake response, or a response to others' responses – but human actors are more parts, in and between bodies, thus more complex, thus prone to malfunction time and again. Roboticised vehicles have an edge because they are narrow intelligences, thinking only about the domain at hand, say motorway or rural. Peculiar examples like blizzards or moths blocking sensors do not refute this, because the generalist human does not translate into better responses either: reacting to surprise earthquakes, runaway deer, or a heartbreak, humans seldom do what they ought.
Driverless cars are no miracle. Tragedy, accidents, and lawsuits will endure.
Remember, however, that new technologies draw salient attention to good and bad through what comes to mind rather than what abounds when baseline rates are considered. So videos of automated braking systems preventing crashes get ignored, while notable news outlets headline when a driverless car kills a person – never the 1 million boring deaths each year at the hands of accident-prone humans. As successful charity appeals show: a single death is a tragedy, a million deaths is a statistic. Headlines follow this logic, framing driverless cars as disproportionately more dangerous than sugar-related obesity, which kills more, or driven cars, which out of sheer number do kill more.
Ethical dilemmas will endure; driverless tech values some lives, and values, over others
This is commonly captured in 'the trolley problem', whereby one must choose between sacrificing one to save five, or a loved one to save five, or variants thereof. How can one fairly determine how a driverless car should be programmed; surely the indeterminacy undermines the ethics of driverless cars? Not at all. Making clear-cut ethical decisions is never what drivers do during accidents; they panic. Hence programming in an ethical parameter is at least making a responsible choice instead of abdicating responsibility to fortune-bound instinct. No choice is still a choice, and often the worst. Moreover, the indeterminacy of real-world ethics is dubious: the dilemma runs that the driving algorithm must value passengers above pedestrians or pedestrians above passengers, so perhaps policymakers must rule on which is better, or permit customers to choose. But I believe I have a workable solution. Human bodies are more fragile than car bodies, therefore cars should risk their passengers instead of pedestrians, because passengers are more likely to survive. The software dilemma can be resolved by hardware. If all driverless cars were built to Volvo XC90 standards – with 0 deaths across 50,000 Volvos over 16 years on UK roads – prioritising pedestrians becomes logical because overall survival is more probable. Fewer lives need be traded off: 0% versus the actual percentages. Customers and sellers of driven cars already trade off lives, prioritising some values over others. Few cars have adopted Volvo's higher standards because of the free market's licence to choose your own mistakes, but the disruption of driverless provides for a new, better entrenchment. If safety really does come first, then revamped car frames are as important as software. A small sample and likely more careful buyers may bias the Volvo statistics – true. But the Volvo XC90 performs better in crash-test dummy tests and – key – better than ultra-popular alternatives. Regardless of driverless software, the best move is to change cars for safety.
This is underway: proliferating electric cars, Tesla included, tend to be safer partly because they carry no combustion engine or fuel tank to ignite. It is changing pedestrians which is far more problematic.
Pedestrians and drivers will be coerced.
Indeed, though driverless cars require less coercion. Road safety already depends on compliant pedestrians; people tend to be disobedient, and city transit is a chaotic open system. This leads to banal stress and accident trauma on both sides of the windscreen. To curtail the safety risks, some countries ban crossing roads ('jaywalking'), and all drivers go through years of training, and children through normalisation, to hand authority to cars. The onset of driverless could disruptively empower pedestrians to walk where they want, trusting sensors to stop the cars. This raises concerns about transgressive behaviour: people might freely cross the road knowing full well that the cars must stop, instead of risking their lives, so antisocial behaviour between pedestrians and cars may increase. Yet if pedestrians also use car transport, the incentive to hypocritically disrupt is against their interest; it breaks the Golden Rule default to treat others as you would want to be treated. Aggression against cyclists, for example, is seldom committed by drivers who road-cycle themselves; drivers who cycle have literal empathy for cyclists' plights. The same would be true of pedestrians crossing at will who are also passengers and know how bothersome inconsiderate pedestrians are. Pedestrians are not itching to disrupt – it is not to their advantage, especially where, as in the US and Australia, jaywalking is illegal and profitably fined.
Local culture and conditions make self-driving cars problematic.
Encouraging obedient pedestrians in the UK would be harder than in other English-speaking lands, and the world encompasses varied driving and pedestrian cultures; the majority of the world fits no Western mould. But local driving culture will actually become less important precisely because of driverless cars. They can a. standardise, b. be connected, and c. update national road laws and customs along with satellite maps. A car of the same fleet, much like money, transcends borders: if one encounters errors, the others update; local variations can be accommodated. Granted, the world is messy, and the map is never the territory. But compared with travellers driving themselves in foreign lands in foreign cars, the problems are at parity. Moreover, globalised car norms – the elimination of local practices like road beggars or laissez-faire junctions – have always been with us: no one disputes 'Western' roads or 'Western' traffic lights anymore, so why dispute more efficient transport at the cost of custom? A globalised car culture that resembles Sweden more than, say, Mexico is just better problem-solving, fewer deaths, not cultural imposition – when Mexicans choose to have it or buy it themselves.
All the optimism accepted, then, what are the risks? Any driverless car may go wrong; bugs happen, killing people. This already happens, yet is branded as teething problems. That is deceptive. Driverless cars become safer, but will never be 100 per cent safe; the claim that driverless deaths are mere growing pains on the way to a pristine future is marketing, not truth-telling. Other risks unique to a driverless future include terrorist hacking, lost privacy, monopolised software, data, and cars, and the sheer uncertainty of maddeningly different road surfaces, vehicles, and behaviours. Still, if 90% of deaths really can be made to vanish by driverless cars, those problems are worth it. Arguably, the real trolley-problem-like dilemma today is between the sin of omission and the sin of commission. Omitting to bring in driverless cars that promise to be better may cause more deaths than otherwise; yet committing to driverless cars as they are today carries too many dangers – they still perform too badly. Indeed, hype producers like Elon Musk who celebrate driverless cars are paradoxically likely to undermine their cars' progression: scaled too early, deaths and salient trauma will undermine confidence and trust – they already do. Not that the issues are inevitable, for reasons addressed. Ethical hackers, anonymised and separately stored data, nationalised companies, and continual updates are ways forward. The best way to save lives is to release cars to cities only when they are ready. Think what impact a single terrorist hacker mowing down a street would have on progress: besmirching the initial conditions which entrench the norms, popularity, and uses of a technology.
The forecasted promise of the cars – which drives innovation even from Washington – could be ruined by malignant cases; in a self-fulfilling prophecy, cars may remain human-driven. That would not be a wholly bad thing if it instead encouraged automated public transport or a co-operative carpool algorithm to replace Uber and congestion. To conclude, then: the road safety problem is unsolvable, but things can get better, safer, more humane. In the short term, driverless cars will hamper safety between ordinary and roboticised drivers; in the long term, roboticised cars will contribute to safety. But time will tell whether closed management and inexplicable algorithms cause a backlash which in turn hinders the long-tail improvements in safety. The focus in America is on the product and the fetishised tech, seldom on the social system the tech ought to serve. The imaginary autonomy of cars mirrors the imaginary autonomy of people, and the competitive slant of proprietary data mirrors the competitive slant of entrepreneurs. As said, open forums, nationalised or pooled data, ethical hacking tests, modest precise names, anonymised data, collaborative design, and continual updates are good moves. If policymakers and politicians do not eagerly compel responsible cooperation, then the technical limits of machine learning might: shared car-fleet data would make driverless cars a safer, and profitable, norm sooner.
Anderson, James M., Kalra Nidhi, Karlyn D. Stanley, Paul Sorensen, Constantine Samaras, and Oluwatobi A. Oluwatola. Autonomous Vehicle Technology: A Guide for Policymakers. Rand Corporation, 2014. https://play.google.com/store/books/details?id=XEWrAgAAQBAJ.
Baumeister, Roy F., Ellen Bratslavsky, Catrin Finkenauer, and Kathleen D. Vohs. “Bad Is Stronger than Good.” Review of General Psychology: Journal of Division 1, of the American Psychological Association 5, no. 4 (2001): 323. http://doi.apa.org/journals/gpr/5/4/323.html.
Gates, Guilbert, Kevin Granville, John Markoff, Karl Russell, and Anjali Singhvi. “The Race for Self-Driving Cars.” The New York Times, December 14, 2016. https://www.nytimes.com/interactive/2016/12/14/technology/how-self-driving-cars-work.html.
Gerla, M., E. Lee, G. Pau, and U. Lee. “Internet of Vehicles: From Intelligent Grid to Autonomous Cars and Vehicular Clouds.” In 2014 IEEE World Forum on Internet of Things (WF-IoT), 241–46, 2014. https://doi.org/10.1109/WF-IoT.2014.6803166.
Kahneman, Daniel, Paul Slovic, and Amos Tversky. Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press, 1982. https://play.google.com/store/books/details?id=_0H8gwj4a1MC.
Lambert, Fred. “Transcript: Elon Musk’s Press Conference about Tesla Autopilot under v8.0 Update [Part 2] – Electrek.” Electrek, September 11, 2016. https://electrek.co/2016/09/11/transcript-elon-musks-press-conference-tesla-autopilot-under-v8-0-update-part-2/.
Penmetsa, Praveena, Emmanuel Kofi Adanu, Dustin Wood, Teng Wang, and Steven L. Jones. “Perceptions and Expectations of Autonomous Vehicles – A Snapshot of Vulnerable Road User Opinion.” Technological Forecasting and Social Change 143 (June 1, 2019): 9–13. https://doi.org/10.1016/j.techfore.2019.02.010.
Schoettle, Brandon, and Michael Sivak. “A Survey of Public Opinion about Autonomous and Self-Driving Vehicles in the U.S., the U.K., and Australia,” July 2014. https://deepblue.lib.umich.edu/handle/2027.42/108384.
Senate Commerce, Science and Transportation Committee Hearing. “Testimony of Glen W. De Vos, Senate Commerce, Science and Transportation Committee Hearing,” n.d. https://www.commerce.senate.gov/services/files/86053BB6-58D8-4072-A033-03F36766D0C3.
Small, Deborah A., and George Loewenstein. “Helping a Victim or Helping the Victim: Altruism and Identifiability.” Journal of Risk and Uncertainty 26, no. 1 (January 1, 2003): 5–16. https://doi.org/10.1023/A:1022299422219.
Te Morenga, Lisa, Simonette Mallard, and Jim Mann. “Dietary Sugars and Body Weight: Systematic Review and Meta-Analyses of Randomised Controlled Trials and Cohort Studies.” British Medical Journal, January 19, 2013. https://www.bmj.com/content/346/bmj.e7492.
Topham, Gwyn. “‘It’s Going to Be a Revolution’: Driverless Cars in New London Trial.” The Guardian, October 3, 2019. http://www.theguardian.com/technology/2019/oct/03/driverless-cars-in-new-london-trial-in-complex-urban-environment.
Trefis Team. “Just How Far Ahead Is Tesla In Self-Driving?” Forbes Magazine. Forbes, November 8, 2019. https://www.forbes.com/sites/greatspeculations/2019/11/08/just-how-far-ahead-is-tesla-in-self-driving/.
Wright, Robert. The Moral Animal: Why We Are, the Way We Are: The New Science of Evolutionary Psychology. Knopf Doubleday Publishing Group, 2010. https://play.google.com/store/books/details?id=MuI_DVZ1Xo8C.