Preparing for artificial intelligence, or the fourth wave of industrialisation
This article was originally published on Xero Blog on 5th September 2018
It’s Day 1 of Xerocon, and the Brisbane Convention and Exhibition Centre is buzzing with excited accountants and bookkeepers as they explore what’s on offer.
Anthropology professor and futurist Genevieve Bell, known for her work at the intersection of culture and technology development, took the stage today and shared with us her vision of how artificial intelligence, or AI, will shape our lives.
“AI is nothing more and nothing less than the steam engine of the 21st century,” said Professor Bell. “Steam engines were originally used to power mining operations. But over the next 100 years, they changed the way we live in ways far beyond the mines – enabling us to build railway systems.”
We are now on the cusp of a fourth wave of industrialisation, said Professor Bell. The first wave, in the 1800s, was about mechanisation using steam power. The second wave was about the electrification of mass production, which gave us the assembly line and electricity in the home in the 1900s. The third wave, starting in 1946, was about computers.
Now, we are entering an era of cyber-physical systems – drones, robots and self-driving cars powered by artificial intelligence. They will change how we live in ways we can’t anticipate today, just as no one envisioned the railways and automated factories that would spring from the first steam engines, said Professor Bell.
As we enter this age, we need to consider five issues about AI:
The question of autonomy
Are these machines really going to be autonomous? And if so, what do we mean by that? Are they really running around with complete free will?
To make a machine autonomous, you need AI technology.
The nature of agency
What are the rules? Someone will have to determine what they are – how many rules there should be, and which ones a system must follow.
The challenge of assurance
Trust, reliability, security, ethics, management. How do you know these systems are safe? Who gets to decide what the threshold is for risk? Who is determining whether the system works?
The necessity for metrics
How will we measure these systems? Is it safety? Is it about collecting better data? Some of these technologies need to be thought of in terms of how much energy they consume. Who gets to decide what the metric should be?
The possibility of relationships
You do not want a world where you have to learn how to engage with autonomous vehicles. Do we want to talk to those systems, or will they learn to talk to us?
“I didn’t want the future to be built just by engineers,” said Professor Bell. “So as an anthropologist, I’ve spent 20 years in Silicon Valley looking for answers. How do you put people first in the business of making technology – what people care about, what frustrates people – and build technology with that in mind?”
We will increasingly grapple with these questions over the next few years.
“Maybe not tomorrow, maybe not next week, but two years from now, three years from now — this will be our world,” said Professor Bell. “How do we manage these cyber-physical systems? These will be the questions that propel our businesses.”
You can follow the work of Professor Bell at the Autonomy, Agency and Assurance (3A) Institute, which is building a new applied science around the management of artificial intelligence, data, technology and their impact on humanity.