The Future of AI and the Economy

The Debate Over Automation and Wages

Gus Docker: Welcome to the Future of Life Institute podcast. I'm Gus Docker, and I'm here with Anton Korinek, a professor of economics at the University of Virginia and the Economics of AI Lead at the Centre for the Governance of AI.
Anton Korinek: Hi, thanks for having me.
Gus: You're probably the perfect person to ask - how do automation and wages affect each other?
Anton: That's the billion-dollar question that economists have been debating for at least the past 200 years. During the Industrial Revolution, there was a big debate between the Luddites and the emerging economics profession and entrepreneurial class about whether automation is good or bad for wages.
Economists have argued that automation is ultimately good for the economy because it makes us wealthier. But from the perspective of an individual worker whose job gets automated, it's unambiguously bad. The big question is how to reconcile these opposing perspectives.
The standard answer is that while automation is painful for the individual, it allows the economy to produce more with less, making it more efficient. After some adjustment period, displaced workers can switch to more productive jobs that generate higher incomes.
Now we're facing the age of AI, and job automation is one of the greatest public concerns. Economists have jumped to their natural reaction - we need automation and technological progress for the economy to grow and for workers to ultimately be better off. But the big question is whether this time is different.
Gus: You have a model of how to think about artificial intelligence and wages, where one of the things you conclude is that human wages depend on the pace of automation. Why is it that wages will rise if automation happens slowly?
Anton: That's one of the really surprising findings. The idea is that a lot of people in Silicon Valley have this notion that at some point in the future, machines will be able to do literally everything that humans can do. If that's the case, then from an economic perspective, the wages of workers are going to be at the same level as the cost of the machines that can do the same things.
However, the big question is what our economy will look like during the transition to that future as we progressively automate more and more. If we have a little bit of automation, that's painful for the automated workers, but as long as we accumulate enough capital, it's actually beneficial for the rest of the workers because it makes their contribution to the economy comparatively more valuable.
If we automate a little bit, we can suddenly produce the automated goods much cheaper, and that means all the other goods rise in value, and the labor producing those other goods and services will rise in value. There's this race between automation and what is still left for humans, and that determines the level of wages.
Gus: So you're saying that if automation happens gradually, it can actually increase wages, but if it happens too quickly, it could decrease wages. Why is that?
Anton: Exactly. If we automate very quickly and displace what humans can do, but we haven't yet accumulated the machines that can produce the automated things cheaply, then the economy doesn't grow very much, yet labor is already devalued and displaced. In that case, wages are likely to decline.
On the other hand, if we have sufficient capital accumulation - we produce lots of machines that can perform the automated tasks cheaply - then the value of human labor in the remaining tasks will go up, and humans will be better off.
So we humans, and our wages, are better off in a scenario where we implement AI technology gradually, and worse off in one where we jump quite suddenly to human-level intelligence across the board.
Gus: You describe wages as a race between automation and capital accumulation. How do these variables fit together?
Anton: Automation helps us with our wages because if you automate something, you can suddenly produce the automated goods or services much cheaper, and that means the value of the remaining goods rises, and the value of the labor producing the remaining goods rises.
However, we need the requisite capital - the machines that can do things cheaply - in order for the productivity benefits from automation to materialize. If we don't have that capital, then the benefits from automation don't really materialize.
So when I speak of this race between automation and capital accumulation, I mean the following: if we automate very quickly and displace what humans can do, but we haven't yet accumulated the machines that can produce the automated things cheaply, then the economy doesn't grow very much, yet labor is already devalued and displaced. In that case, wages are likely to decline.
On the other hand, if we have sufficient capital accumulation - if we produce lots of machines that can perform the automated tasks cheaply - then the value of human labor in the remaining tasks will go up, and humans will be better off.
Gus: You describe the effects of automation as first increasing wages and then decreasing wages. Why would the effect be that way?
Anton: If we are in an economy with very little automation, then there is a lot of low-hanging fruit. If we automate just a little bit, we'll reap very high productivity gains, and those gains will ultimately benefit all workers.
On the other hand, if we are in an economy where there's just very little left for humans - if we displace even more of what is left - then the displacing force of automation will predominate, and the productivity gains will not filter through to the workers.
So there's this hump-shaped relationship between automation and wages. If we have just a little bit of automation, then more automation helps workers. If we already have a lot of automation, and in particular as we approach the very last tasks left for humans, then automation hurts workers.
This all assumes that there is a complexity ceiling beyond which human workers cannot go. There's a debate about whether humans are limited in the complexity of the tasks we can perform, or whether we can solve tasks of ever-increasing complexity without bound.
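This hump-shaped relationship can be illustrated with a toy task-based model. The sketch below is a minimal illustration in the spirit of standard task-based models of automation, not Korinek's actual model; the Cobb-Douglas aggregator, the `wage` function, and all parameter values are assumptions chosen purely for exposition.

```python
import numpy as np

def wage(beta, K, L=1.0):
    """Wage in a toy task-based model of automation.

    A unit continuum of tasks: a fraction beta is automated and produced
    with capital K; the rest is produced with labor L. With a Cobb-Douglas
    aggregator over tasks, output is
        Y = (K / beta)**beta * (L / (1 - beta))**(1 - beta)
    and the wage is labor's marginal product, w = (1 - beta) * Y / L.
    """
    Y = (K / beta) ** beta * (L / (1 - beta)) ** (1 - beta)
    return (1 - beta) * Y / L

betas = np.linspace(0.05, 0.95, 19)

# Ample capital: wages rise as early automation makes automated goods
# cheap (raising the value of the remaining human tasks), then collapse
# as the last tasks are automated away -- the hump shape.
w_rich = wage(betas, K=10.0)

# Scarce capital: the displacement force dominates sooner and the peak
# wage is far lower -- automation winning the race against accumulation.
w_poor = wage(betas, K=1.0)

print(f"K=10: peak wage {w_rich.max():.2f} at beta={betas[w_rich.argmax()]:.2f}")
print(f"K=1:  peak wage {w_poor.max():.2f} at beta={betas[w_poor.argmax()]:.2f}")
```

In this toy parameterization, the capital-rich economy enjoys a high wage peak around the midpoint of automation, while the capital-poor economy peaks earlier and far lower, which is one way to visualize the race between automation and capital accumulation.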
Gus: How do we measure the complexity of tasks that humans perform, and how do we compare that to the complexity of the same tasks performed by computers?
Anton: The ultimate measure of complexity that's relevant for machines is computational complexity - how many computations, how many floating-point operations, we need to execute in order to produce some result.
For us humans, the computational complexity of a task is much harder to wrap our minds around. Some things seem very easy to us because we can do them, like walking, but are actually quite complex from a computational perspective. Other things that are very simple for computers, like adding up two 10-digit numbers, are very hard for our brains.
This leads to interesting conclusions about which jobs could be automated and in which order. Moravec's paradox suggests that being a professor of mathematics might be automatable before being a dance instructor, even though the latter seems simpler. The human brain is highly optimized for certain tasks, while machines can be fine-tuned for whatever specific task we need them to be efficient at.
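Moravec's paradox can be made concrete by counting operations. The comparison below is purely illustrative: the inverted pendulum is a crude stand-in for the sensorimotor problem of balance, and the controller gains, time step, and operation counts are all invented for the sketch. A 10-digit addition is a single arithmetic operation, while even this crude balance controller churns through thousands of floating-point operations per simulated second.

```python
import math

# One 10-digit addition: a single arithmetic operation, trivial for a machine.
total = 4_837_291_056 + 9_182_736_450

# Balancing an inverted pendulum -- a crude stand-in for walking -- with a
# PD feedback controller stepped 1000 times per simulated second.
g, length, dt = 9.81, 1.0, 0.001   # gravity (m/s^2), pendulum length (m), step (s)
theta, omega = 0.1, 0.0            # initial tilt (rad) and angular velocity (rad/s)
ops = 0
for _ in range(1000):              # one simulated second
    torque = -20.0 * theta - 5.0 * omega        # PD feedback
    alpha = (g / length) * math.sin(theta) + torque
    omega += alpha * dt
    theta += omega * dt
    ops += 6                       # rough count of float operations per step

print(f"one addition vs ~{ops} float operations to stay upright for a second")
print(f"final tilt: {theta:.4f} rad")
```

The point is not the specific numbers but the asymmetry: the task that feels hard to a human brain costs the machine one instruction, while the task a toddler masters effortlessly demands a continuous, high-frequency stream of computation.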
Gus: You're worried about the political instability that could result if human labor becomes economically redundant. Why is that, and what are some potential solutions?
Anton: If labor becomes economically redundant, meaning humans can no longer earn enough in wages to sustain themselves, that would have both good and bad consequences. The good: the greatest bottleneck in our economy - the availability of labor - would be lifted, and our economies could grow significantly faster.
The bad: labor is not only a factor of production; it is also where most of us get our income and a significant part of our meaning and life satisfaction. Undermining people's incomes and sense of meaning could create fundamental challenges for our societies and lead to significant political turmoil.
A potential solution to the income problem could be a universal basic income (UBI). But the meaning challenge is harder: a UBI wouldn't address the fact that people may no longer derive meaning from work if machines can do it better and cheaper.
There don't seem to be any super appealing, easy solutions. Our best bet is a gradual transition that gives us time to devise solutions for the income distribution problem, like a "seed UBI" that can automatically ramp up if significant job displacement occurs. But the challenge of maintaining meaning and purpose in an AI-powered world remains a difficult one.
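The "seed UBI" idea can be sketched as an automatic trigger rule. Everything in the sketch below is a hypothetical illustration, not a description of Korinek's proposal: the labor-share trigger, the 60% baseline, and the $1,500 full payment are invented parameters.

```python
def seed_ubi_payment(labor_share, baseline_share=0.60, full_payment=1500.0):
    """Monthly payment under a hypothetical automatic ramp-up rule.

    The program pays nothing while labor's share of national income sits
    at its historical baseline, ramps up linearly as the share falls, and
    reaches the full payment if labor income disappears entirely.
    """
    shortfall = max(0.0, baseline_share - labor_share)
    ramp = min(1.0, shortfall / baseline_share)  # fraction of full payment
    return full_payment * ramp

print(seed_ubi_payment(0.60))  # no displacement: the program stays dormant -> 0.0
print(seed_ubi_payment(0.30))  # labor share halves: half-strength payment -> 750.0
print(seed_ubi_payment(0.00))  # labor income vanishes: full payment -> 1500.0
```

The design choice such a rule embodies is exactly the gradualism Korinek advocates: the program exists before it is needed, so no political emergency is required to activate it once displacement materializes.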
Gus: What are the implications of increasing market concentration in the AI industry, and how might regulation help address the risks?
Anton: The AI industry has a tendency towards natural monopoly due to the very high fixed costs of training large language models. It's socially desirable to have a relatively small number of players to avoid needless duplication of effort.
However, we still want some competition to reap its benefits. Regulators should pay close attention to vertical integration, where one company encompasses multiple steps of the AI value chain. This can create lock-in and give the company pricing power over consumers.
Competition authorities can lean against this by scrutinizing takeovers and vertical investments, and by mandating open standards so startups can more easily integrate with the big players' systems.
The bigger concern is the concentration of power that could come with a dominant AI player. Once we approach human-level AI, the power that it embodies could surpass that of humanity. This is why AI alignment - ensuring AI acts in human interests - is so critical. The best solution may be a collective, moonshot effort by nation states to create aligned AI systems that we all benefit from, rather than having it developed by competing companies.
Gus: Thank you, Anton, for this insightful discussion on the future of AI and the economy.
Anton: Thank you for having me.

The Impact of Large Language Models on Economics: A Call to Action

The integration of Large Language Models (LLMs) into various fields has revolutionized the way we approach complex problems. Economists, in particular, are now faced with the daunting task of understanding the capabilities and limitations of these AI tools. It is essential that economists not only familiarize themselves with LLMs but also critically evaluate their impact on economic research.
The potential of LLMs lies in their ability to process and analyze vast amounts of text, generate reports, and produce predictions. However, it is crucial to acknowledge their limitations. LLMs are not infallible: they can perpetuate biases present in their training data, and they can produce fluent but incorrect output, making their results vulnerable to errors and inconsistencies.
As economists, it is our responsibility to ensure that LLMs are used ethically and responsibly. This requires critical thinking: examining the underlying assumptions, data sources, and algorithms behind these tools. We must also be aware of the consequences of relying solely on LLMs, such as the perpetuation of existing biases and the loss of human nuance.
One of the most significant challenges facing economists is the impact of LLMs on labor markets, inequality, and productivity. As these tools become increasingly prevalent, it is essential that we examine the potential effects on employment, wages, and social mobility. Will LLMs exacerbate existing inequalities, or can they provide a means to address them?
Furthermore, the integration of LLMs into economic research raises fundamental questions about the nature of economic research itself. Will these tools enable economists to provide more accurate and precise predictions, or will they introduce new biases and uncertainties? How will LLMs change the way we approach economic modeling and analysis?
In conclusion, the impact of LLMs on economics is a complex and multifaceted issue that requires careful consideration. As economists, we must take a proactive approach to understanding the capabilities and limitations of these tools. By critically evaluating the potential benefits and drawbacks of LLMs, we can ensure that they are used responsibly and ethically. The future of economics is uncertain, but one thing is clear: the integration of LLMs will require a new wave of critical thinking and innovation from economists.