What “Hidden Figures” Can Teach Us about AI
This weekend, I finally watched Hidden Figures. I took my 9-year-old daughter with me to witness how instrumental women of color were to the success of several NASA missions — something that has historically been associated with white male achievement. If you have not seen it yet, I highly recommend it. The acting is superb and the story offers so much education, both on race relations and on women in the workplace. What I want to focus on is something the director and the cast possibly never imagined could matter. I do, not because it is the most important aspect of the film but simply because it is very relevant to the tech transition we are experiencing right now.
All the talk surrounding artificial intelligence is as much about the technology itself as it is about the impact its adoption will have on different aspects of our lives: business models in the automotive industry, insurance, public transportation, search and advertising, as well as more personal consequences for human-to-human interaction, sources of knowledge, and education. Change will not come overnight, but we had better be prepared because it will come.
New Tech Requires New Skills
Change came in 1962 for the segregated West Area Computing Division of Langley Research Center in Virginia, where the three women who are the main protagonists of the story worked. Mathematician Katherine Goble and de facto supervisor Dorothy Vaughan are both directly affected by new tech rolling into the facility in the form of the IBM 7090. If you are not familiar with the IBM 7090 (I was not before this weekend), it was the third member of the IBM 700/7000 series of computers, designed for large-scale scientific and technological applications. In layman's terms, the 7090 could perform in the blink of an eye all the calculations that took the computing division hours. Dorothy understood the threat and, armed with her wit and a book on programming languages, taught herself to program the IBM 7090, taught her team to do the same, shifted their skills, and saved their jobs.
I realize part of this story might be embellished for the benefit of the screenplay and that the real history is more complicated. However, I do think that what is at its core is very relevant — the creation of new skill sets.
Although AI has the potential to affect not only manual jobs that can be automated but also, theoretically, jobs that require learning and decision making, the immediate threat is certainly on the former.
We focus a lot, and rightly so, on the job loss AI will cause, but we have not yet started to focus on teaching new skills so such losses can be limited. As I said, AI will not magically appear overnight, but we would be fools to think we have plenty of time to create the skills our "augmented" world will require — from new programming languages to new branches of law and insurance, QA testing, and more. Empowering people with new skills will be key not only to having a job but also to keeping our incomes in step with the higher costs this new world will entail. Providing a framework for education is a political responsibility as well as a corporate one.
Who Will We Trust?
The IBM 7090 replaces Katherine when it comes to checking calculations but, just as Friendship 7 is ready to launch, some discrepancies arise in the electronic calculations for the capsule’s recovery coordinates. Astronaut John Glenn asks the director of the Space Task Group to have Katherine recheck the numbers. When Katherine confirms the coordinates, Glenn thanks the director saying: “You know, you cannot trust something you cannot look in the eyes.”
I don’t know if Glenn actually said that or if it is a screenplay liberty but, when I heard it, I immediately thought of AI. Who will consumers trust? Many think AI is not going to be any different from any prior technology, but I believe such thinking underestimates where AI could actually take us. Autonomous cars are the scenario we most often refer to. We might trust the car to park itself or to alert us if a car is in our blind spot. We might even try a semi-autonomous setting on an empty motorway. But are we ready to trust the car and take our eyes off the road and our hands off the wheel? How will brands earn our trust? Will it be the number of accidents they are involved in? The assurance that, in case of an accident, their computers are programmed to save whoever is in the car?
What if we changed scenarios and talked about a medical diagnosis? Today, we tend to pick our doctors and specialists based on our insurance’s recommendation, a friend’s recommendation, or even the comments on Yelp. Bedside manner, courteous receptionists, and short wait times all play a role. But, for anything more serious, what it all boils down to is the track record of correct diagnoses and lives saved. Will we trust a machine alone? Or will we still want a doctor, whom we can look in the eyes, coupled with the machine? A recent White House report mentioned by Fortune talks about the idea of linking human and machine. While it does so as part of the discussion of job losses, I think the formula also applies to our human nature of building trust with another human being.
The same issue of trust will also apply to scenarios where it is not our life but our privacy and security that could potentially be in danger. Who do we trust with our digital assistant, with our home automation? When life is not at risk, at least not directly, I feel consumers will show more flexibility, especially when the full implications are not grasped and convenience, and possibly price, are what matter most.
In both cases, though, I strongly believe AI will drive consumers to consider more than technology alone and to look for traits in brands that have traditionally been associated with humans: honesty, empathy, loyalty, and service.