Artificial Intelligence has been driving major change in the world of work across different domains. However, there are still things AI cannot do, and those gaps will define the job roles of the future.
In the last few months, we have come across numerous examples of Artificial Intelligence (AI) creating life-like pictures, stunning paintings, abstract art, and even writing meaningful contextual text or composing music, besides, of course, performing a slew of tasks that one always thought couldn’t be done without human creative skills. It is now even used to create videos from text. Make-A-Video is a new AI technology that enables individuals to convert text prompts into brief, high-quality video snippets.
Make-A-Video builds on recent developments in Meta AI’s research on generative technologies, and follows the earlier Make-A-Scene, a multimodal generative AI technique that gives users more control over the material they create. With Make-A-Scene, Meta showed how words, lines of text, and freeform sketches could be used to produce lifelike graphics and artwork fit for picture books.
DALL-E (stylized as DALL·E) and DALL-E 2 are machine learning models developed by OpenAI to generate digital images from natural language descriptions. In April 2022, OpenAI announced DALL-E 2, a successor designed to generate more realistic images at higher resolutions that “can combine concepts, attributes, and styles.” The ability to create images from a simple natural-language description of the visual will eventually disrupt photography, as users will be able to create any image of their choice.
For camera manufacturers, this is a fresh disruption, or it could be an opportunity to use the technology to create the next generation of cameras that take pictures of real things and then blend them with the users’ imagination to create something entirely new. It could be a new era of the real and the imaginary creating the surreal! For mobile cameras, too, this can be a creative disruption.
AI was once considered good only at detecting patterns, analyzing data, and using Big Data to forecast what comes next. But this view is being challenged daily as AI disrupts what humans have been doing so far and redefines the Future of Work. Nevertheless, there are still certain things that AI cannot do yet.
AI’s main advantage over humans lies in its ability to detect incredibly subtle patterns within large quantities of data. Take the example of loan underwriting. While a human underwriter will look at only a handful of measures when deciding whether to approve your loan application (your net worth, income, home, job, and so on), an AI algorithm could take in thousands of variables — ranging from public records, your purchases, your healthcare records, and what apps and devices you use (with your consent) — in milliseconds, and come up with a far more accurate assessment of your application.
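To make the contrast concrete, here is a minimal toy sketch of how such a scoring model works. This is not a real underwriting system: the feature names, values, and weights are all invented for illustration, and a production model would learn its weights from historical data over thousands of variables rather than a handful.

```python
import math

def approval_score(features, weights, bias=0.0):
    """Combine any number of weighted features into a score in (0, 1)."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes the sum to a probability-like score

# A human underwriter might weigh only a handful of measures...
human_view = {"income": 0.8, "net_worth": 0.6, "employment_years": 0.5}

# ...while a model can ingest far more signals at once (stand-ins shown here).
model_view = {**human_view,
              "on_time_payments": 0.9,
              "app_usage_signal": 0.3,
              "public_record_flags": -0.2}

weights = {name: 1.0 for name in model_view}  # equal weights, purely illustrative
score = approval_score(model_view, weights)
print(round(score, 3))
```

The point is not the arithmetic but the scale: the same weighted-sum-and-squash structure works identically whether the dictionary holds three features or three thousand, which is why an algorithm can fold in signals no human reviewer could juggle.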
AI will displace routine jobs
Such algorithms will displace routine white-collar work easily, just as software has steadily taken over routine white-collar tasks, such as bookkeeping and data entry. In “The Job Savior,” we saw examples of affected white-collar workers ranging from bookkeepers to insurance underwriters. Combined with robotics, AI will also displace increasingly complex types of blue-collar work. By 2041, warehouse pickers — who perform routine tasks — will have long been displaced; many construction workers will have been displaced as building practices shift toward prefabricated components built by robots that are easy to assemble en masse.
What AI can’t do yet
There are three capabilities in which AI falls short, and which it will likely still struggle to master even in 2041:
Creativity. AI cannot create, conceptualize, or plan strategically. While AI is great at optimizing for a narrow objective, it cannot choose its own goals or think creatively. Nor can AI think across domains or apply common sense.
Empathy. AI cannot feel or interact with feelings like empathy and compassion. Therefore, AI cannot make another person feel understood and cared for. Even if AI improves in this area, it will be extremely difficult to get the technology to where humans feel comfortable interacting with robots in situations that call for care and empathy, or what we might call “human-touch services.”
Dexterity. AI and robotics cannot accomplish complex physical work that requires dexterity or precise hand-eye coordination. AI can’t deal with unknown and unstructured spaces, especially ones it hasn’t observed.
The future of jobs
What does all this mean for the future of jobs? Jobs that are asocial and routine, such as telemarketers or insurance adjusters, are likely to be taken over in their entirety. For highly social but routine jobs, humans and AI will work together, each contributing expertise. For example, AI could take care of grading routine homework and exams in the future classroom and even offer standardized lessons and individualized drills. At the same time, the human teacher would focus on being an empathetic mentor who teaches learning by doing, supervises group projects that develop emotional intelligence, and provides personalized coaching.
Augmenting human capabilities
For jobs that are creative but asocial, human creativity will be amplified by AI tools. For example, a scientist can use AI tools to accelerate the speed of drug discovery. Finally, the jobs requiring creativity and social skills, such as strategy-heavy executive roles, are where humans will shine. However, there will still be millions of jobs that will be at risk as the use of AI becomes even more ubiquitous.
People in endangered jobs should be warned well in advance and encouraged to learn new skills. In addition, new AI tools will require human operators. We can help people acquire these new skills and prepare for this new world of work. Besides relearning skills, we need to recalibrate what today’s jobs look like with the help of AI, moving toward a human-AI symbiosis.
Specific AI tools will be customized for each profession and application — for example, AI-based molecule generation for pharmaceuticals, advertising planning for marketing, or fact-checking for journalism.
The human touch
A deeper interdependence between AI optimizations and the “human touch” will reinvent many jobs and create new ones. AI will take care of routine tasks in tandem with humans, who will carry out the ones that require warmth and compassion. For example, the future doctor will still be the primary point of contact trusted by the patient but will rely on AI diagnostic tools to determine the best treatment. This will redirect the doctor’s role to that of a compassionate caregiver, giving them more time with their patients.
AI experts should spend time with managers and employees and explain what AI can and can’t do. These should be application- and domain-specific discussions since AI’s capabilities are broad and can be used in many different ways. The experts and the managers together should, in most cases, ease AI systems into a job task-by-task as opposed to taking any sort of “big bang” approach. It’s also not a bad idea to, as Morgan Stanley did, give employees some say in whether and when they adopt AI capabilities in their jobs, at least in the initial phases of deployment.
(Abhijit Roy is a technology explainer and business journalist. He has worked with The Straits Times of Singapore, Business Today, The Economic Times and The Telegraph, and has also worked with PwC, IBM, Wipro and Ericsson.)
(Disclaimer: The views expressed in the article above are those of the author and do not necessarily represent or reflect the views of Autofintechs.com. Unless otherwise noted, the author is writing in his/her personal capacity. They are not intended and should not be taken to represent the official ideas, attitudes, or policies of any agency or institution.)