The concern today around increased automation leading to job losses — or society in general transforming for the worse — is not new. In fact, the debate goes back at least five centuries, from the time the Gutenberg printing press was invented (earlier instances no doubt exist, now lost in the mists of time).
In the late 15th century, the scribes’ guild of Paris successfully lobbied to delay the introduction of the printing press into their city for 20 years, amid widespread worry that mechanised printing would put entire rooms full of scribes out of work. In the event, the displacement happened only gradually, and the printing press itself gave rise to thousands of new job functions that the fearmongers had not foreseen.
A few centuries later, the Luddite movement gained prominence when British artisans opposed the mechanisation of the textile industry, again fearing the loss of jobs. After a spell of rioting in 1811, the revolt was quelled, and the textile industry since then has broadened in scope, variety, and reach to provide livelihood to millions across the globe in a complex supply-chain web.
In the early 1920s, Henry Ford was confronted with the very same question about the moving assembly line he had introduced a decade earlier, to which he replied:
“For when were men ever really put out of work by the bettering of industrial processes? The stage-coach drivers lost their jobs with the coming of the railways. Should we have prohibited the railways and kept the stage-coach drivers? In our own experience a new place always opens for a man as soon as better processes have taken his old job.”
So what of the concerns that surface today about whether AI, robotic process automation (RPA), machine learning, and other automation tools will eliminate jobs?
Some possible changes
Progress in science and technology is inexorable, and this will lead to entirely new industries opening up. Novel ecosystems will be built around innovative products, solutions and lifestyles – just as the automobile industry that ballooned after Ford’s assembly line innovation has provided millions of jobs globally over the past century. Or the printing, publishing and bookbinding industry that came into being after Gutenberg’s printing press was invented.
We are in the midst of a revolution as significant as the one of Gutenberg’s time: the digitalisation of information storage, processing, and retrieval. It seems that all the information we need will eventually be available digitally, displacing printed books. The transition is evident even now: online directories have supplanted the bulky print editions of the Yellow Pages, and newspaper circulation has declined steadily for years. But does this really mean that the printed book will disappear forever?
Change of focus
It is not a zero-sum game. Gutenberg did not kill the manuscript outright – even today, craftsmen find joy in producing books entirely by hand, transforming what was once a utilitarian trade into a highly valued art form that commands high prices in the collectible-book market. With the advent of 3D printing, the manufacturing industry may likewise become more fragmented, allowing smaller, more specialised cottage industries to create bespoke goods for specific customers or communities. The focus on quality and craftsmanship will be renewed, with near-infinite customisation the order of the day – an ironic throwback to pre-industrial times, when every hand-crafted product was unique and often tailored to personal tastes.
In the same way, automation will transform industries so that routine work is done more efficiently, leaving the field wide open for new and unforeseen job opportunities in markets that do not yet exist.
It is often forgotten that the people who will supposedly be rendered jobless are also the proponents and consumers of these new technologies – a factory worker who fears the threat of automation will still eagerly adopt new software if it means less manual inventory management. This may free up time for leisure or for other, more creative work.
Consider for example the case of a creative technologist at MediaMonks training a neural network to create virtual landscapes. This “low-level creative work”, as he puts it, is often repetitive and menial — his experiment with AI augments, rather than replaces, his value to the company.
A technology democracy?
Technology revolutions present rare opportunities for agile and responsive businesses, nations, and individuals. The economy and the job market will adjust, but it is the job of policy-makers, educators, and business leaders of the incumbent generation to make this transition as smooth as possible. The issue is not so much whether jobs will be lost, but whether employers and employees alike are ready to cope with new roles that demand new skill-sets.
As automation removes the need for skills such as pressing buttons, packing boxes, or sifting through mountains of information to produce a single statistic, we are likely to see a return to the highly specialised vocational training of the past, with apprenticeships that ensure the quality of training rather than exams designed to maximise a human being’s processing speed, memory capacity, and data-transmission rate.
As automation grows, so does the democratisation of the consumer economy – EVRYTHNG, for example, is a platform that digitises the identities of physical products and helps both the consumer and the producer gain deep visibility into the entire supply chain and product life cycle. The possibilities that could be realised through these insights are endless, from control over recycling to increased security through exposing counterfeiting.
Artificial intelligence and machine learning
Artificial Intelligence (AI) will automate many of the routine tasks that humans are currently compelled to do. But engineering AI algorithms, conceiving new applications for AI, and deploying AI to solve the right problem by asking the right questions will remain a very human task.
AI will allow a granular level of personalisation, but at scale. According to Relay42, for example, AI can be used in very specific contexts to create greater efficiencies by smartening relevance en masse: businesses can apply AI models at an industry level for marketing, making billions of targeted, data-driven decisions in real time. Idomoo likewise uses automation to create personalisation at scale, in its case delivering videos targeted to the individual consumer’s unique tastes and needs.
Several IT companies have begun a strong push to integrate AI into healthcare. These services will be provisioned via the cloud, giving mission-critical personnel reliable, scalable 24/7 access. They will let healthcare professionals focus on providing the highest quality of care rather than on laborious administrative tasks. In an AI-enabled healthcare future, professionals with insight and intuition will be greatly valued as the mechanical and repetitive aspects of healthcare are automated.
On the nature of AI
The questions around AI and its ultimate capabilities merge with the questions that concern the nature of human intelligence and of sentience.
Imagine an AI system able to store and process all the information collectively available to humans and man-made machines at present. Would such a system be able to synthesise all that information into new scientific discoveries, or would it be limited to finding solutions to well-defined problem sets? The intellectual sleight of hand required to invent calculus, for example – of cheating infinity – seems beyond the capability of a system fundamentally built on very complex linear algebra and statistics.
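The claim that such systems rest on linear algebra and statistics can be made concrete. The sketch below (illustrative only; the weights are random stand-ins for learned parameters, and none of it comes from the article) shows that a single neural-network layer is nothing more than a matrix multiplication, a vector addition, and one fixed elementwise function:

```python
import numpy as np

rng = np.random.default_rng(0)

# A single "neural" layer: an affine map followed by a nonlinearity.
# In a trained network, W and b would be learned from data; here they
# are random placeholders to keep the sketch self-contained.
W = rng.standard_normal((4, 3))  # weight matrix: 3 inputs -> 4 outputs
b = np.zeros(4)                  # bias vector

def layer(x):
    # Everything here is linear algebra plus one elementwise function
    # (the ReLU activation, which clamps negatives to zero).
    return np.maximum(0.0, W @ x + b)

x = np.array([1.0, -2.0, 0.5])   # a 3-dimensional input
y = layer(x)
print(y.shape)                   # a 4-dimensional output vector
```

Stacking many such layers, and tuning the weights statistically against data, is the core of modern deep learning – which is precisely why the question of whether genuine invention can emerge from this machinery is so open.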
Will AI be able to discover new laws of nature by probing the nature of reality, the way pure mathematicians or physicists do through flashes of insight? Or will it merely help us perfect applications of human-discovered laws through technical solutions, by way of finding more efficient, more optimal algorithms?
Where does brute-force computational power end and intuition or creativity begin? Is there even a substantial, qualitative difference between the two, or is it simply a continuum, or the latter an “emergent” property of the former? And will we even be able to tell the difference between the two, when our insights into the nature of our own consciousness and the process of creativity and intuition are so muddy?
This article was first published on e27, on September 28, 2018.