Artificial Intelligence (AI) is spreading fast, so fast that it may already be beyond our control. It will soon be integrated into almost everything we use, driving what is being called the ‘fourth industrial revolution’. The first was powered by steam, the second by oil and electricity, and the third by computers. Each of the first three had massive impacts on society, both good and bad. Businesses that did not adapt disappeared quickly, and the same is true today. The great Stephen Hawking said, “The rise of powerful AI will be either the best thing or the worst ever to happen to humanity.” If that looks like sitting on the fence, I’ll go further: it could be both.
Throughout history there have been myths and legends about automatons, and the concept of machines that think is an ancient one. But the term ‘Artificial Intelligence’ dates from a conference at Dartmouth College, New Hampshire, in 1956, where John McCarthy proposed it as the generic name for the field. Those who attended became its leaders for the next twenty years or so. Yet despite billions of dollars of investment, progress was slow, hampered by the limited capacity and speed of the computers of the day.
The massive increase in the speed and processing power of modern computers has changed that: machines can now process vast quantities of data far faster. As a consequence, this fourth revolution is likely to be ten times faster and 300 times the scale of the previous ones, for roughly 3,000 times the impact.
It is not just the processing of data. AI uses more sophisticated sets of instructions, what we know as ‘algorithms’, which allow machines to mimic what we think of as uniquely human capabilities: the application of intuition, experience or common sense, rather than pure logic.
Machines have already been designed to beat humans at chess and other complex games. This was done by throwing sheer processing power at the problem, so that the machine could ‘think’ through every possible permutation and select the one giving the best outcome. But that still relied on explicit coding. Now machines are starting to learn, which enables not just the automation of boring, repetitive jobs but also threatens to disrupt jobs we thought were reserved for humans. Society at large should no doubt welcome the replacement of humans by robots in dangerous or unedifying jobs. Robots can safely mine uranium in Australia, where previously the death rate was unacceptably high. Well over a million people are still killed every year on the world’s roads, and most accidents are caused by human error. But even the highly paid traditional professions of law, accountancy and medicine are at risk.
Much of the basic work in the law is repetitive and predictable. Junior or even trainee lawyers perform the grunt work of research, or of writing wills or conveyances, but AI is now being used for much of that. In accountancy, again, much of the routine work done by audit firms can be done more quickly and cheaply by robots. In medicine, robots study X-rays, MRI scans, medical research papers and other data, and pick up signs of disease that doctors sometimes miss. Lord Darzi, the surgeon who pioneered keyhole surgery, says: “Robots are more precise, have greater range of movement in keyhole surgery and no hand tremor, which makes delicate stitching easier.”
One might argue that we will still need experienced lawyers, accountants and surgeons to deal with the more complex challenges. But where will they get their experience? Traditionally all these professions have run a closed shop in which would-be professionals had to qualify both by examination and by serving articles, in effect an apprenticeship. If the routine work is done by robots, how will trainees learn their craft and so gain that experience and wisdom? On the other hand, if this brings down the very high fees these professionals have become accustomed to charging, then for the client that is perhaps a good thing.
AI is complex, and there are different stages in its development. We have used Automation of manual and cognitive tasks for a long time: this does not involve new ways of doing things, it simply automates existing tasks. Then comes Assisted Intelligence, where systems help humans to make decisions or take actions; these are hard-wired systems that do not learn from their interactions. From there we move to Augmented Intelligence, where systems augment human decision-making and continuously learn from their interactions with humans and the environment. Finally, Autonomous Intelligence consists of systems that can adapt to different situations and act autonomously, without human assistance. It is unlikely that everything will move as far as this fourth category, as it would be too expensive.
Similarly, a range of technologies is covered by these terms. AI is not just automation. It is a branch of computer science that attempts to build systems that perform tasks normally requiring human intelligence. It includes sub-branches such as machine learning, speech recognition, visual perception and natural language processing, among others.
· Natural Language Processing is the branch of AI that deals with analysing, understanding and generating the languages humans use naturally, so that we can interface with computers in both written and spoken contexts.
· Machine Learning is about developing computer programs that improve automatically with experience, i.e. they “learn”. Typically, the more data you train them with, the better they perform.
· Deep Learning is a type of machine learning that uses multi-layered neural networks to “learn”.
· Speech recognition is the technology by which sounds, words or phrases spoken by humans are converted into electrical signals and assigned meaning. Each human voice is different, and identical words can carry different meanings when spoken with different inflections or in different contexts.[i]
· Machine vision is essentially the ability of a computer to “see”. It employs one or more video cameras, analogue-to-digital conversion and digital signal processing to “recognise” people, places and things.
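The claim in the Machine Learning bullet, that performance typically improves with more training data, can be illustrated with a toy example. The sketch below is plain Python with no libraries; the hidden rule, the function names and the data sizes are all invented for illustration. It fits a straight line by ordinary least squares and measures the average prediction error on unseen data as the training set grows.

```python
import random

random.seed(42)

def make_data(n):
    """Generate n noisy samples of the hidden rule y = 3x + 7."""
    data = []
    for _ in range(n):
        x = random.uniform(0, 10)
        y = 3 * x + 7 + random.gauss(0, 1.0)  # true rule plus noise
        data.append((x, y))
    return data

def fit_line(data):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    cov = sum((x - mx) * (y - my) for x, y in data)
    var = sum((x - mx) ** 2 for x, _ in data)
    a = cov / var
    b = my - a * mx
    return a, b

def avg_error(model, data):
    """Mean absolute prediction error on a data set."""
    a, b = model
    return sum(abs((a * x + b) - y) for x, y in data) / len(data)

# Held-out data the model never sees during training.
test_set = make_data(200)

for n in (5, 50, 500):
    model = fit_line(make_data(n))
    print(n, "training examples -> error", round(avg_error(model, test_set), 3))
```

With more training examples, the fitted slope and intercept settle closer to the hidden rule (3 and 7), so the error on unseen data generally shrinks. That is the sense in which such a program “learns”: nobody codes the rule explicitly, it is recovered from the data.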
There are then seven major applications of these technologies, and these will have the greatest impact on business over the next few years. According to PwC Data and Analytics, the global GDP impact of AI through 2030 will be US$15.7trn. The surprising factor is that they foresee Consumption contributing 60% of this impact and Productivity just 40%, because of new types of consumption based on personalisation empowered by 3D printing. So while most of the talk is about the impact on jobs, perhaps there will be more impact on areas like retail. Less surprisingly, they forecast considerably more impact in China (nearly half the total, and 26% of its GDP) than in Europe, at just $2.5trn.[ii]
PwC’s analysis suggests that up to 30% of UK jobs could be at high risk of automation by the early 2030s. The proportion could be exceptionally high in transportation and storage (56%), manufacturing (46%) and wholesale and retail (44%). Between 1990 and 2007, each robot added to US manufacturing resulted in the loss, on average, of 6.2 human jobs. Uber argues that its model will mean fewer cars on the road and so less pollution, but the opposite is happening: New Yorkers are taking so many Uber and Lyft rides that the number of people using the subway is falling for the first time since the financial crisis, and traffic gridlock is increasing. The City of London has similar traffic problems, caused by all the tiny one-parcel deliveries ordered from Amazon and Ocado by City workers. Robots called SAM (Semi-Automated Mason) are already beginning to replace bricklayers in America and will arrive soon in the UK; they can lay up to 3,000 bricks per day, compared with the human average of 500.
Economists like to say that with this kind of disruption new jobs are created, and they may well be jobs we cannot even imagine today. The problem is the speed of change: by the time an experienced worker’s trade is lost, he or she may be too old to retrain for the new jobs. In 1858, the year of the Great Stink in London, there was a severe outbreak of cholera. The cause was correctly identified as the poor treatment of sewage, and in response the great engineer Sir Joseph Bazalgette built London’s central sewer system. It was an undoubted success and greatly improved the health of the city. But the night-soil workers never worked again.