Something important if we are to start trusting autonomous cars: that they give us feedback the way human drivers do.

Perhaps, before long, we will discover and collectively accept that autonomous cars are much safer than cars driven by humans. Until that time comes, however, we will have to take intermediate steps to ensure that people do not feel uncomfortable or tense around them.

When we are about to step onto a zebra crossing, for example, we tend to underestimate how important it is to form a theory of mind about the driver of the car that is about to pass over that same crossing.

That is, we can make eye contact with the driver, notice that he has seen us, or signal that we have seen him; we can tell whether he intends to stop or to keep going. All of this non-verbal language based on predicting intentions breaks down when the driver of the car is an algorithm: we cannot build a theory of mind of an algorithm.

First steps

A possible solution to this problem comes from Drive.ai, a company that operates autonomous vans in Texas. Its bright orange and blue vehicles carry LED panels on all four sides that respond to the context with messages, providing feedback not only to pedestrians but also to any occupants of the vehicle.

These panels can tell a pedestrian who wants to cross in front of the car something like “I’ll wait for you to pass”, or warn them: “Wait for me to pass”.
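To make the idea concrete, here is a minimal, hypothetical sketch in Python of how such a system might map the vehicle’s current intent to a message on its outward-facing panels. The intent states and message strings are illustrative assumptions of mine, not Drive.ai’s actual software.

```python
from enum import Enum, auto


class Intent(Enum):
    """Hypothetical high-level intents the driving system might report."""
    YIELDING_TO_PEDESTRIAN = auto()
    PROCEEDING = auto()
    WAITING_AT_SIGNAL = auto()


# Illustrative mapping from intent to the text shown on the external LED panels.
EXTERNAL_MESSAGES = {
    Intent.YIELDING_TO_PEDESTRIAN: "I'll wait for you to pass",
    Intent.PROCEEDING: "Wait for me to pass",
    Intent.WAITING_AT_SIGNAL: "Stopped at the light",
}


def display_message(intent: Intent) -> str:
    """Return the message the outward-facing panels would show for a given intent."""
    return EXTERNAL_MESSAGES[intent]


if __name__ == "__main__":
    # A pedestrian is detected at the crossing and the car decides to yield.
    print(display_message(Intent.YIELDING_TO_PEDESTRIAN))
```

The point of the sketch is simply that the message is driven by the same internal decision the car is about to act on, so what the panel says and what the car does stay consistent.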

A related strategy is aimed at passengers rather than pedestrians: screens inside Waymo vehicles show occupants a simple, animated version of what the autonomous vehicle is seeing. They can also show what the vehicle is doing, such as stopping to let a person cross.

All of which means that if vehicles are predictable and do what they say they will do, people are more likely to trust them. Little by little.