AI’s Threats to Jobs and Human Happiness Are Real
But short-term job chaos will give way to long-term prosperity, says AI expert Kai-Fu Lee
Renowned computer scientist and AI expert Kai-Fu Lee sees likely disruption over the coming 15 to 20 years, owing to “smart” systems creating jobs in fields that AI-displaced workers may not be well trained to handle.
There’s a movement afoot to counter the dystopian and apocalyptic narratives of artificial intelligence. Some people in the field are concerned that the frequent talk of AI as an existential risk to humanity is poisoning the public against the technology and are deliberately setting out more hopeful narratives. One such effort is a book that came out last fall called AI 2041: Ten Visions for Our Future.
The book is cowritten by Kai-Fu Lee, an AI expert who leads the venture capital firm Sinovation Ventures, and Chen Qiufan, a science fiction author known for his novel Waste Tide. It has an interesting format. Each chapter starts with a science fiction story depicting some aspect of AI in society in the year 2041 (such as deepfakes, self-driving cars, and AI-enhanced education), which is followed by an analysis section by Lee that talks about the technology in question and the trends today that may lead to that envisioned future. It’s not a utopian vision, but the stories generally show humanity grappling productively with the issues raised by ever-advancing AI.
IEEE Spectrum spoke to Lee about the book, focusing on the last few chapters, which take on the big issues of job displacement, the need for new economic models, and the search for meaning and happiness in an age of abundance. Lee argues that technologists need to give serious thought to such societal impacts, instead of thinking only about the technology.
The science fiction stories are set in 2041, by which time you expect AI to have already caused a lot of disruption to the job market. What types of jobs do you think will be displaced by then?
Kai-Fu Lee: Contrary to what a lot of people think, AI is actually just a piece of software that does routine work extremely well. So the jobs that will be the most challenged will be those that are routine and repetitive—and that includes both blue-collar and white-collar work. So obviously jobs like assembly line workers and people who operate the same equipment over and over again. And in terms of white-collar work, many entry-level jobs in accounting, paralegal, and other jobs where you’re repetitively moving data from one place to another, and jobs where you’re routinely dealing with people, such as customer-service jobs. Those are going to be the most challenged. If we add these up, it will be a very substantial portion of all jobs, even without major breakthroughs in AI—on the order of 40 to 50 percent.
The jobs that are most secure are those that require imagination, creativity, or empathy. And until AI gets good enough, there will also be craftsman jobs that require dexterity and a high level of hand-eye coordination. Those jobs will be secure for a while, but AI will improve and eventually take those over as well.
How do you imagine this trend is changing the engineering profession?
Lee: I think engineering is largely cerebral and somewhat creative work that requires analytical skills and deep understanding of problems. And those are generally hard for AI.
But if you’re a software engineer and most of your job is looking for pieces of code and copy-pasting them together—those jobs are in danger. And if you’re doing routine testing of software, those jobs are in danger too. If you’re writing a piece of code and it’s original creative work, but you know that this kind of code has been done before and can be done again, those jobs will gradually be challenged as well. For people in the engineering profession, this will push us towards more of an analytical architect role where we deeply understand the problems that are being solved, ideally problems that have complex characteristics and measurements. The ideal combination in most professions will be a human that has unique human capabilities managing a bunch of AI that do the routine parts.
It reminds me of the Ph.D. thesis of Charles Simonyi, the person who created Microsoft Word. He did an experiment to see what would happen if you have a really smart architect who can divvy up the job of writing a piece of code into well-contained modules that are easy to understand and well defined, and then outsource each module to an average engineer. Will the resulting product be good? It was good. We’re talking about the same thing, except we’re not outsourcing to the average engineer, who will have been replaced by AI. That superengineer will be able to delegate the work to a bunch of AI, resulting in a creative symbiosis. But there won’t be very many of these architect jobs.
In the book, you say that an entirely new social contract is needed. One problem is that there will be fewer entry-level jobs, but there still needs to be a way for people to gain skills. Can you imagine a solution for engineering?
Lee: Let’s say someone is talented and could become an architect, but that person just graduated from college and isn’t there yet. If they apply for an entry-level programming job and they’re competing with AI, they might lose out to the AI. That would be really bad: not only would it hurt the person’s self-confidence, but society would also lose the talent of that architect, which takes years of experience to build up.
But imagine if the company says, “We’re going to employ you anyway, even though you’re not as good as AI. We’re going to give you tasks, and we’ll have AI work alongside you and correct your errors, so you can learn from it and improve.” If a thousand people go through this entry-level practical training, maybe a hundred emerge to be really good and are on their way to becoming architects. Maybe the other 900 will take longer and struggle, or maybe they’ll be content to continue doing the work, passing time while they still have a chance to improve. Maybe some will say, “Hey, this is really not for me. I’m not reaching the architect level. I’m going to go become a photographer or artist or whatever.”
Why do you think that this round of automation is different from those that came before in history, when jobs were both destroyed and created by automation?
Lee: First of all, I do think AI will both destroy and create jobs. I just can’t enumerate which jobs and how many. I tend to be an optimist and believe in the wisdom and the will of the human race. Eventually, we’ll figure out a bunch of new jobs. Maybe those jobs don’t exist today and have to be invented; maybe some of those jobs will be service jobs, human-connection jobs. I would say that every technology so far has ended up making society better, and there has never been a problem of absorbing the job losses. If you look at a 30-year horizon, I’m optimistic that there will not be a net job loss, but possibly a net gain, or possibly equal. And we can always consider a four-day work week and things like that. So long-term, I’m optimistic.
Now to answer your question directly: short-term, I am worried. And the reason is that none of the previous technology revolutions have tried explicitly to replace people. No matter how people think about it, every AI algorithm is trying to display intelligence and therefore be able to do what people do. Maybe not an entire job, but some task. So naturally there will be a short-term drop in employment when automation and AI start to work well.
“If you expect an assembly-line worker to become a robot-repair person, it isn’t going to be so easy.”
—Kai-Fu Lee, Sinovation Ventures
Autonomous vehicles are an explicit effort to replace drivers. A lot of people in the industry will say, “Oh no, we need a backup driver in the truck to make it safer, so we won’t displace jobs.” Or they’ll say that when we install robots in the factory, the factory workers are elevated to a higher-level job. But I think they’re just sugarcoating the reality.
Let’s say over a period of 20 years, with the advent of AI, we lose x number of jobs, and we also gain x jobs; let’s say the loss and gain are the same. The outcome is not that the society remains in equilibrium, because the jobs being lost are the most routine and unskilled. And the jobs being created are much more likely to be skilled and complex jobs that require much more training. If you expect an assembly-line worker to become a robot-repair person, it isn’t going to be so easy. That’s why I think the next 15 years or 20 years will be very chaotic. We need a lot of wisdom and long-term vision and decisiveness to overcome these problems.
There are some interesting experiments going on with universal basic income (UBI), like Sam Altman’s ambitious idea for Worldcoin. But from the book, it seems like you don’t think that UBI is the answer. Is that correct?
Lee: UBI may be necessary, but it’s definitely not sufficient. We’re going to be in a world of very serious wealth inequality, and the people losing their jobs won’t have the experience or the education to get the right kinds of training. Unless we subsidize and help these people along, the inequality will be exacerbated. So how do we make them whole? One way is to make sure they don’t have to worry about subsistence. That’s where I think universal basic income comes into play by making sure nobody goes without food, shelter, or water. I think that level of universal basic income is good.
As I mentioned before, the people who are most devastated, people who don’t have skills, are going to need a lot of help. But that help isn’t just money. If you just give people money, a wonderful apartment, really great food, Internet, games, and even extra allowance to spend, they are much more likely to say, “Well, I’ll just stay home and play games. I’ll go into the metaverse.” They may even turn to alcohol or substance abuse, because those are the easiest things to do.
So what else do they need?
Lee: Imagine the mind-set of a person whose job was taken away by automation. That person is bound to be thinking, “Wow, everything I know how to do, AI can do. Everything I learn, AI will be able to do. So why should I take the universal basic income and apply that to learning?” And even if that person does decide to get training, how can they know what to get training on? Imagine I’m an assembly-line worker and I lost my job. I might think: Truck driver, that’s a highly paid job. I’ll do that. But then in five years those jobs are going to be gone. A robot-repair job would be much more sustainable than truck driving, but the person who just lost a job doesn’t know it.
So the point I make in the book is: To help people stay gainfully employed and have hope for themselves, it’s important that they get guidance on what jobs they can do. First of all, the job should give them a sense of contribution, because then at least we eliminate the possibility of social unrest. Second, the job should be interesting, so the person wants to do it. Third, if possible, the job should have economic value.
Why do you put economic value last in that list?
Lee: Most people think jobs need to have economic value. If you’re making cars, the cars are sold. If you’re writing books, the books are sold. If you just volunteer and take care of old people, you’re not creating economic value. If we stay in that mentality, that would be very unfortunate, because we may very well be in a time when what is truly valuable to society is people taking care of each other. That might be the glue that keeps society going.
More thought should go into how to deal with the likely anxiety and depression and the sense of loss that people will have when their jobs are taken and they don’t know what to do. What they need is not just a bunch of money, but a combination of subsistence, training, and help finding a new beginning. Who cares if they create economic value? Because as the last chapter states, I believe we’re going to reach the era of plenitude. We’re not going to be in a situation of incredible scarcity where everyone’s fighting each other in a zero-sum game. So we should not be obsessed with making sure everyone contributes economically, but making sure that people feel good about themselves.
I want to talk about the last chapter. It’s a very optimistic vision of plenitude and abundance. I’ve been thinking of scenarios from climate-change models that predict devastating physical impacts by 2041, with millions of refugees on the move. I have trouble harmonizing these two different ideas of the future. Did you think about climate change when you were working on that chapter?
Lee: Well, there are others who have written about the worst-case scenario. I would say what we wrote is a good-case scenario—I don’t think it’s the best case because there are still challenges and frustrations and things that are imperfect. I tried to target 80 percent good in the book. I think that’s the kind of optimism we need to counterbalance the dystopian narratives that are more prevalent.
The worst case for climate is horrible, but I see a few strong reasons for optimism. One is that green energy is quickly becoming economical. In the past, why didn’t people go for green energy? Because fossil fuels were cheaper and more convenient, so people gained for themselves and hurt the environment. The key thing that will turn it around is that, first, governments need to have catalyst policies such as subsidized electric vehicles. That is the important first step. And then I think green energy needs to become economical. Now we’re at the point where, for example, solar plus lithium batteries, not even the most advanced batteries, are already becoming cheaper than fossil fuel. So there are reasons for optimism.
I liked that the book also got into philosophical questions like: What is happiness in the era of AI? Why did you want to get into that more abstract realm?
Lee: I think we need to slowly move away from obsession with money. Money as a metric of happiness and success is going to become more and more outdated, because we’re entering a world where there’s much greater plenitude. But what is the right metric? What does it really mean for us to be happy? We now know that having more money isn’t the answer, but what is the right answer?
AI has been used so far mainly to help large Internet companies make money. They use AI to show people videos in such a way that the company makes the most money. That’s what has led us to the current social media and streaming video that many people are unhappy about. But is there a way for AI to show people video and content so that they’re happier or more intelligent or more well liked? AI is a great tool, and it’s such a pity that it’s being used by large Internet companies that say, “How do we show people stuff so we make more money?” If we could have some definitions of happiness, well-likedness, intelligence, knowledgeableness of individuals, then we can turn AI into a tool of education and betterment for each of us individually in ways that are meaningful to us. This can be delivered using the same technology that is doing mostly monetization for large companies today.
Eliza Strickland is a senior editor at IEEE Spectrum, where she covers AI, biomedical engineering, and other topics. She holds a master’s degree in journalism from Columbia University.
Andrew Ng, AI Minimalist

The AI pioneer says it’s time for smart-sized, “data-centric” solutions to big issues
Andrew Ng was involved in the rise of massive deep learning models trained on vast amounts of data, but now he’s preaching small-data solutions.
Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.
Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.
The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?
Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.
When you say you want a foundation model for computer vision, what do you mean by that?
Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.
What needs to happen for someone to build a foundation model for video?
Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.
Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.
It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.
Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.
“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
—Andrew Ng, CEO & Founder, Landing AI
I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.
I expect they’re both convinced now.
Ng: I think so, yes.
Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”
How do you define data-centric AI, and why do you consider it a movement?
Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.
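To make that inversion concrete, here is a minimal, runnable Python sketch (a toy illustration, not Landing AI’s tooling): the model is a stock logistic regression that never changes, and each round of work goes into the labels, using out-of-fold confidence to flag the examples most worth reviewing.

```python
# Data-centric loop, sketched: hold the model code fixed, iterate on the data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y_clean = (X[:, 0] > 0).astype(int)           # ground truth for the toy task
y = y_clean.copy()
flipped = rng.choice(300, size=20, replace=False)
y[flipped] = 1 - y[flipped]                   # simulate 20 labeling mistakes

model = LogisticRegression()                  # the "code" is held fixed

for round_num in range(3):
    # Out-of-fold probability assigned to each example's *given* label;
    # low values mark labels the model consistently disagrees with.
    proba = cross_val_predict(model, X, y, cv=5, method="predict_proba")
    conf = proba[np.arange(len(y)), y]
    suspects = np.argsort(conf)[:10]          # data work: review the worst labels
    y[suspects] = y_clean[suspects]           # stand-in for human relabeling
    acc = (cross_val_predict(model, X, y, cv=5) == y_clean).mean()
    print(f"round {round_num}: cross-validated accuracy {acc:.3f}")
```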
When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make them a systematic engineering discipline.
The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.
You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?
Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.
When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?
Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.
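For concreteness, here is a hedged sketch of that recipe using torchvision’s off-the-shelf RetinaNet (Landing AI’s own flavor is not public): load COCO-pretrained weights, swap the classification head for a defect-versus-background task, and fine-tune. The single random image stands in for the roughly 50 carefully labeled examples.

```python
# Fine-tuning a pretrained detector on a small, consistently labeled set.
import torch
import torchvision
from torchvision.models.detection.retinanet import RetinaNetClassificationHead

model = torchvision.models.detection.retinanet_resnet50_fpn(weights="DEFAULT")

# Re-head the pretrained network for our classes (background + "defect").
num_classes = 2
model.head.classification_head = RetinaNetClassificationHead(
    in_channels=model.backbone.out_channels,
    num_anchors=model.head.classification_head.num_anchors,
    num_classes=num_classes,
)

model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

# One toy labeled example; a real run would loop over the small dataset.
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[48.0, 60.0, 120.0, 140.0]]),
            "labels": torch.tensor([1])}]

loss_dict = model(images, targets)   # classification + box-regression losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```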
“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
—Andrew Ng
For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
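A minimal sketch of that kind of tool, assuming a toy annotation format of (image, annotator, label) triples: surface every image whose labels disagree across annotators, so those get relabeled first.

```python
# Flag items whose labels disagree across annotators for priority review.
from collections import defaultdict

annotations = [  # (image_id, annotator, label) -- toy data
    ("img_001", "a1", "scratch"), ("img_001", "a2", "scratch"),
    ("img_002", "a1", "dent"),    ("img_002", "a2", "scratch"),
    ("img_003", "a1", "pit"),     ("img_003", "a2", "pit"),
]

labels_by_image = defaultdict(set)
for image_id, _, label in annotations:
    labels_by_image[image_id].add(label)

inconsistent = [i for i, labels in labels_by_image.items() if len(labels) > 1]
print(inconsistent)  # ['img_002'] -- route these for consistent relabeling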
Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?
Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.
One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.
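As a toy illustration of engineering a subset rather than the architecture: measure accuracy per group, then upweight just the slice where the model underperforms. The synthetic group attribute and the 5x weight here are assumptions for the demo, not a recommended recipe.

```python
# Per-group audit, then a targeted data-side fix via reweighting.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
group = (rng.random(1000) < 0.1).astype(int)        # 10 percent minority slice
y = ((X[:, 0] + 1.5 * group) > 0).astype(int)       # the slice behaves differently

model = LogisticRegression().fit(X, y)
for g in (0, 1):
    m = group == g
    print(f"group {g}: accuracy {model.score(X[m], y[m]):.2f}")

# Targeted fix: upweight the weak slice instead of redesigning the model.
weights = np.where(group == 1, 5.0, 1.0)
reweighted = LogisticRegression().fit(X, y, sample_weight=weights)
for g in (0, 1):
    m = group == g
    print(f"group {g} after reweighting: accuracy {reweighted.score(X[m], y[m]):.2f}")
```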
When you talk about engineering the data, what do you mean exactly?
Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.
For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
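A sketch of that second kind of tool, on toy data: rank classes by validation error rate to decide where collecting more examples would pay off, rather than collecting more of everything. Class 3 is seeded as the weak one.

```python
# Rank classes by error rate to target data collection.
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.integers(0, 5, size=500)                 # toy validation labels
y_pred = y_true.copy()
# Seed class 3 with a much higher error rate than the others.
wrong = rng.random(500) < np.where(y_true == 3, 0.4, 0.05)
y_pred[wrong] = (y_true[wrong] + rng.integers(1, 5, size=wrong.sum())) % 5

for c in range(5):
    m = y_true == c
    err = (y_pred[m] != y_true[m]).mean()
    print(f"class {c}: error {err:.2f} over {m.sum()} examples")
# Collect (or synthesize) more data for the class with the highest error.
```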
What about using synthetic data, is that often a good solution?
Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.
Do you mean that synthetic data would allow you to try the model on more data sets?
Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, or other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
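Here is a hedged sketch of the targeted version of that idea, using classical augmentation as a stand-in for full synthetic generation: expand only the weak class, leaving well-performing classes alone. The directory paths and transform choices are assumptions for illustration.

```python
# Targeted expansion of one under-performing class via augmentation.
import glob, os
from PIL import Image
import torchvision.transforms as T

augment = T.Compose([
    T.RandomRotation(15),
    T.ColorJitter(brightness=0.3, contrast=0.3),
    T.RandomResizedCrop(512, scale=(0.8, 1.0)),
])

src_dir = "data/defects/pit_mark"          # hypothetical weak-class folder
out_dir = "data/defects/pit_mark_aug"
os.makedirs(out_dir, exist_ok=True)

for path in glob.glob(os.path.join(src_dir, "*.png")):
    img = Image.open(path).convert("RGB")
    stem = os.path.splitext(os.path.basename(path))[0]
    for i in range(5):                      # five new variants per original
        augment(img).save(os.path.join(out_dir, f"{stem}_aug{i}.png"))
```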
“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
—Andrew Ng
Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?
Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.
One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, and when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.
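The LandingLens internals are not public, but that final step can be sketched generically: export the trained network to TorchScript so a factory edge device can run it without the Python training stack. The model and file name here are placeholders.

```python
# Generic sketch of packaging a trained model for an edge device.
import torch
import torchvision

model = torchvision.models.mobilenet_v3_small(weights="DEFAULT")
model.eval()

# TorchScript makes the model loadable from C++/embedded runtimes.
example = torch.rand(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)
scripted.save("inspector_edge.pt")
```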
How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?
Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
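A minimal sketch of such a drift flag, under the assumption that you monitor a scalar feature (say, mean image brightness) per production batch: compare recent values against a training-time reference with a two-sample Kolmogorov-Smirnov test.

```python
# Simple distribution-shift flag on one monitored feature.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=5000)   # feature values at training time
recent = rng.normal(0.4, 1.0, size=500)       # values from the latest shift

stat, p_value = ks_2samp(reference, recent)
if p_value < 0.01:
    print(f"drift flagged (KS statistic {stat:.3f}); queue images for review")
```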
In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?
So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.
Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.
Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?
Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.
This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
