Collectively Intelligent: Prioritising Safety in Autonomous Systems Design

David Hunt, 10/04/2018

Recent Uber and Tesla crashes (which led to the respective deaths of a pedestrian and a driver) continue a debate about the merits and challenges of robotics and autonomous systems. For novices, luddites, and alarmists, they have provided a moment of cynical Schadenfreude, much of it directed at Tesla and Uber. Critics of the current state of self-driving vehicles are even calling for a slow-down in development, excusing their hesitancy with the necessity to build “confidence among consumers and regulators alike.” Several have called users “guinea pigs”.

On the other side of the opinion spectrum, the American Council on Science and Health flatly notes in its biased position statement, “We use people as guinea pigs all the time.” It’s an age-old ethical debate: how many casualties are “acceptable” for the longer-term “greater good” of advancing safety and security? For better or for worse, military drones have harmed or killed over 1,000 civilians, all in the name of saving the lives of others.

While we can obviously deliberate on strategies, positions and perhaps the exact number of lives saved (by military drones, or potentially by fully autonomous vehicle systems), of one thing we can be quite certain: driver-assistance systems (ADAS, generally up to Level 2 autonomy) do already save lives, and higher levels of autonomy will save even more.

Plenty of discussions remain (although none that should slow development): the trolley problem and other moral issues, what kind of testing is safest, and how many miles of (virtual or real-life) testing are needed until autonomous vehicles are “safe” or accepted by society. Beyond these, we believe it is right to urgently pursue higher levels of autonomy, but also that there is an imperative to:

1. Provide for the best possible learning environment for our robotic peers, and

2. Facilitate collaboration and data exchange, in order to bring out the full life-saving potential that AI has to offer.

Learning from the best?

All autonomous systems still need to learn from – deeply flawed – humans, especially in extreme cases. For the time being, humans should still set the example in public road tests. To be effective, we need to ensure that as long as humans do take the wheel, they do so at the height of their abilities (even Alex Roy’s bold Human Driving Manifesto rightly states that it’s “a privilege, not a right [to drive]. Earn it, keep it. Abuse it, lose it.”). When humans have the additional responsibility of setting an example, it is as inexcusable to drive carelessly (drinking, texting, etc.) at a still-low level of autonomy as without it.

As AI eventually becomes better than humans* at driving (some argue it already is, to some chagrin), it will need to wean itself off human failings and past human judgment. To make the right decisions under all circumstances, it will eventually need to turn to the collective intelligence of all other vehicles.

Teaming up for Road-Safety

In his book, Sapiens, Yuval Noah Harari notes that we humans “rule the world because we alone can cooperate flexibly in large numbers.” Cooperation is at the heart of our success as a species. Yet our “success” is also limited by the scope of our verbal and written communication. At least in theory, systems based on collective, or collaborative artificial intelligence are not.

Imagine that every new team-member at your workplace immediately knew everything that past and present staff have ever known. Imagine they could make life-and-death decisions based not just on their own skills, but on a collective of mistakes and learnings – updated in real-time. Now transfer this idea onto road-safety. At the moment, new recruits into the team (fully connected cars with autonomous features) will indeed receive mapping and other data – but only if they come from the same university (i.e. the same brand).

As the Electronic Frontier Foundation passionately – and rightly – argues, accident data needs to be shared among the developer community, “so that no autonomous vehicle has to repeat the same mistake.” We might suggest that if the primary objective in the development of autonomous vehicles were to save lives, then developers would be mandated – and willing – to share not just crash data, but information on all travelled miles.

Just as crowdsourced – or swarm – data helps over 65 million active Waze-users individually – and collectively – become more efficient (although some residential areas are displeased by it), so would collective sharing of real-time data and decisions** between autonomous vehicles lead to leaps in safety.

The EFF notes, “Acting in isolation, [self-driving car companies] have few if any incentives to share data. But if sharing is the rule, their vehicles will be collectively safer, and the public will be much better off.” In the interest of road-safety, what will it take to ensure, first, sharing of accident data, and next, seamless real-time travel data exchange among any kind of autonomous vehicle (while still fostering competition)?

* Somewhat related: ex-Googler Mo Gawdat has launched his #onebillionhappy initiative to ensure that, when AI finally surpasses human intelligence, it has learned the “right” human traits, in order to create a happier world. A worthy effort.

** An explicit “Kudos” to developers working toward cross-platform #V2V communication standards.

For further news, opinions and the latest jobs in eMobility, follow Hyperion Executive Search.

This article is part of a series written by Lukas Neckermann for Hyperion Executive Search. Lukas Neckermann is a researcher, author and consultant on the mobility revolution. He is an advisor to Hyperion Executive Search, and also to NEXT Future Transportation, SPLYT, Flock, and Eliport. He is a passionate supporter of autonomous systems for both efficiency and road-safety.

  • Ruth King

    If lorry drivers had this tech, they wouldn't even have to be in the truck, just tell the truck to go to the location.

  • Mandy Black

    I'm still fearful of the possibility of someone hijacking autonomous vehicles remotely... or am I taking the last Fast and Furious movie too literally?

  • Georgia Seddon

    I think autonomous cars are great if you can still drive them manually; that way you can drive during the day and the car can bring you home when you are drunk.

  • Chris Richardson

    I am a bit sad now as this is taking away a lot of the fun from driving.

  • Steve Hudson

    Sadly.. this is the future.

  • Michelle Bird

    Informative. I'll drive myself no need for an autonomous car.

  • Alex McGuinn

    I’d rather walk than support this.

  • Caitlin Gooch

    I hope that the next generation of autonomous cars will have steering wheels so we can still have fun.

  • Rishay Patel

    Despite the recent events, I think autonomous cars are cool, as long as we can still have manually driven cars.

  • Tamara Carter

    I don't know why, but I don't like autonomous vehicles.

David Hunt

Energy Guru

David is Managing Partner of Hyperion Executive Search, a clean energy executive search specialist. He has been in the clean energy sector since 2007, has held posts on the Policy Board of the UK Renewable Energy Association (REA), chaired the Pan-European Energy Storage Alliance, and sits on the Low Carbon Economy Board for the Liverpool City LEP. He also spent seven years as director of an award-winning multi-technology renewable energy company before setting up Hyperion in 2014. He has spoken about renewable energy in various British and international media outlets, including the BBC and the FT. David holds prestigious awards in the energy sector and is completing a degree in Environmental Management and Technology at the Open University.
