Both William Nordhaus and Paul Romer are deserving of a Nobel Prize in Economics, but I was not expecting them to win it during the same year. The Nobel committee found a way to glue them together. Nordhaus won the prize “for integrating climate change into long-run macroeconomic analysis,” while Romer won the prize “for integrating technological innovations into long-run macroeconomic analysis.” Yes, the words “climate change” and “technological innovations” might seem to suggest that they worked on different topics. But with the help of “integrating ... into long-run macroeconomic analysis,” Nordhaus and Romer are now indissolubly joined as winners of the 2018 Nobel Prize.
Each year, the Nobel committee releases two essays describing the work of the winner: for the general reader, they offer "Popular Science Background: Integrating nature and knowledge into economics"; for those who speak some economics and don't mind an essay with some algebra in the explanations, there is "Scientific Background: Economic growth, technological change, and climate change." I'll draw on both essays here. But I'll take the easy way out and just discuss the two authors one at a time, rather than trying to glue their contributions together.
Back in the 1970s, the federal government had just recently taken on a primary role in setting and enforcing environmental laws, with a set of amendments in 1970 that greatly expanded the reach of the Clean Air Act and another set of amendments in 1972 that greatly expanded the reach of the Clean Water Act. As far back as the mid-1970s, William Nordhaus was estimating models of energy consumption that explored the lowest-cost ways of keeping CO2 concentrations low in seven different "reservoirs" of carbon: "(i) the troposphere (< 10 kilometers), (ii) the stratosphere, (iii) the upper layers of the ocean (0–60 meters), (iv) the deep ocean (> 60 meters), (v) the short-term biosphere, (vi) the long-term biosphere, and (vii) the marine biosphere."
By the early 1990s, Nordhaus was creating what are called "Integrated Assessment Models," which have become the primary analytical tool for looking at climate change. An IAM breaks up the task of analyzing climate change into three "modules", which the Nobel committee describes in this way:
A carbon-circulation module: This describes how global CO2 emissions influence CO2 concentration in the atmosphere. It reflects basic chemistry and describes how CO2 emissions circulate between three carbon reservoirs: the atmosphere; the ocean surface and the biosphere; and the deep oceans. The module’s output is a time path of atmospheric CO2 concentration.
A climate module: This describes how the atmospheric concentration of CO2 and other greenhouse gases affects the balance of energy flows to and from Earth. It reflects basic physics and describes changes in the global energy budget over time. The module’s output is a time path for global temperature, the key measure of climate change.
An economic-growth module: This describes a global market economy that produces goods using capital and labour, along with energy, as inputs. One portion of this energy comes from fossil fuel, which generates CO2 emissions. This module describes how different climate policies – such as taxes or carbon credits – affect the economy and its CO2 emissions. The module’s output is a time path of GDP, welfare and global CO2 emissions, as well as a time path of the damage caused by climate change.
A number of different IAMs now exist. The usefulness of the framework is that one can plug in a range of assumptions--how much energy will an economy use, how will this affect CO2 in the atmosphere, how will it affect overall climate--and develop a sense of which factors or assumptions matter most or least. These are quantitative models: that is, you can plug in a policy like a carbon tax, and then trace through its economic and environmental effects, and consider costs and benefits. Nordhaus offers a readable overview of how this work has developed here, with citations to the underlying academic references.
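The three-module structure is concrete enough to sketch in code. Here is a deliberately tiny toy version that chains a carbon step, a climate step, and an economy step; every parameter value below is a made-up placeholder for illustration, not the calibration of Nordhaus's DICE model or any other actual IAM.

```python
import math

# A stylized sketch of the three IAM modules. All numbers are illustrative.

def carbon_module(emissions_gtc, conc_ppm, airborne_fraction=0.5, gtc_per_ppm=2.13):
    """Carbon circulation: a fraction of emissions stays in the atmosphere."""
    return conc_ppm + airborne_fraction * emissions_gtc / gtc_per_ppm

def climate_module(conc_ppm, preindustrial_ppm=280.0, sensitivity=3.0):
    """Climate: warming of 'sensitivity' degrees C per doubling of CO2."""
    return sensitivity * math.log2(conc_ppm / preindustrial_ppm)

def economy_module(gdp, temp_c, growth=0.02, damage_coef=0.0024, carbon_intensity=0.1):
    """Economy: GDP grows, quadratic climate damages subtract output,
    and emissions are proportional to GDP."""
    damages = damage_coef * temp_c ** 2           # fraction of GDP lost
    gdp_next = gdp * (1 + growth) * (1 - damages)
    emissions = carbon_intensity * gdp_next       # GtC per year
    return gdp_next, emissions, damages

# Chain the modules over ten decades.
gdp, conc, emissions = 80.0, 410.0, 10.0   # trillion $, ppm, GtC/year
for _ in range(10):
    conc = carbon_module(10 * emissions, conc)   # a decade of emissions
    temp = climate_module(conc)
    gdp, emissions, damages = economy_module(gdp, temp)

print(round(conc, 1), round(temp, 2), round(100 * damages, 2))
```

The point of the exercise is the one in the text: each module's output is the next module's input, so swapping in a different assumption (a higher climate sensitivity, a lower carbon intensity after a carbon tax) propagates through to GDP, warming, and damages.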
When I was first being indoctrinated into economics in the late 1970s, the prevailing theories of economic growth were based on the work of Robert Solow (Nobel '87). A couple of implications of Solow's model are relevant here. One is that in Solow's approach, the researcher calculated increases in inputs of labor and capital for an economy, and then figured out whether those rising inputs of labor and capital could plausibly explain the overall rise in the overall amount of economic output. In these calculations for the US economy, economic output was rising faster than could be explained by the growth of labor and capital and so the additional residual amount was said to have resulted from a change in "productivity" or "technology" which needed to be understood in the broadest sense to include not just explicit scientific inventions, but all ways of rearranging inputs to get more output.
This approach was clearly useful, and also clearly limited. Another economist, Moses Abramovitz, liked to say that because it measured technology as the leftover residual from what could not be explained through increases in labor and capital, the discussion of productivity that resulted was "a measure of our ignorance." Others sometimes referred to economic growth in this theory as "manna from heaven," falling upon the economy without much explanation. Others said that technology in this model was a "black box"--meaning that the question of how new technology was created was assumed rather than explained.
Solow and other growth theorists working with this approach did derive some predictions about rates of economic growth. For example, they argued that growth depended on rates of investment, and that economies would experience diminishing returns as their capital stock increased. Thus, a low-income country with a low level of capital stock would have higher returns from investment than a high-income country with a high level of capital stock.
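The diminishing-returns prediction follows from the curvature of the production function. A quick numerical check, using a standard textbook form y = k^alpha (the exponent here is an illustrative choice, not an estimate):

```python
# Diminishing returns to capital with a production function y = k ** alpha:
# each extra unit of capital raises output by less than the one before.

def marginal_product(k, alpha=0.3, dk=1e-6):
    """Numerical marginal product of capital at capital stock k."""
    f = lambda x: x ** alpha
    return (f(k + dk) - f(k)) / dk

# A capital-poor economy gets a bigger payoff from the same investment
# than a capital-rich one:
print(marginal_product(1.0), marginal_product(100.0))
```

This is the mechanism behind the convergence prediction in the text, and it is precisely the mechanism that Romer noticed did not fit the facts for many low-income countries.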
But as Paul Romer noted when he began working on technology and economic growth in the 1980s, this theory of productivity growth seemed inadequate. There were many examples of low-income countries that were growing quickly, but also many examples of low-income countries growing moderately, slowly, or even shrinking. Something more than capital investment seemed important here.
From the Nobel "popular science" report:
"Romer’s biggest achievement was to open this black box and show how ideas for new goods and services – produced by new technologies – can be created in the market economy. He also demonstrated how such endogenous technological change can shape growth, and which policies are necessary for this process to work well. Romer’s contributions had a massive impact on the field of economics. His theoretical explanation laid the foundation for research on endogenous growth and the debates generated by his country-wise growth comparisons have ignited new and vibrant empirical research. ...
"Romer believed that a market model for idea creation must allow for the fact that the production of new goods, which are based on ideas, usually has rapidly declining costs: the first blueprint has a large fixed cost, but replication/reproduction has small marginal costs. Such a cost structure requires that firms charge a markup, i.e. setting the price above the marginal cost, so they recoup the initial fixed cost. Firms must therefore have some monopoly power, which is only possible for sufficiently excludable ideas. Romer also showed that growth driven by the accumulation of ideas, unlike growth driven by the accumulation of physical capital, does not have to experience decreasing returns. In other words, ideas-driven growth can be sustained over time."
Romer's approach is often described as an "endogenous growth" model. The earlier Solow-style approach demonstrated the critical importance of growth in technology and productivity, by showing that it was impossible to explain actual long-run macroeconomic patterns without taking them into account. A Romer-style approach then seeks to explore the determinants of growth, with an emphasis on the economic power of producing and using ideas.
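The cost structure in the Nobel committee's description above is easy to put in numbers: a large one-time fixed cost for the first blueprint, then a tiny marginal cost per copy, so average cost falls toward marginal cost as output grows. The dollar figures below are invented for illustration:

```python
# Romer-style cost structure for an idea-based good: a big fixed cost
# for the first blueprint, then a small marginal cost per copy.

def average_cost(quantity, fixed_cost=1_000_000.0, marginal_cost=1.0):
    """Average cost per unit falls toward marginal cost as quantity grows."""
    return fixed_cost / quantity + marginal_cost

# Average cost keeps declining; a price equal to marginal cost would
# never recover the blueprint cost, which is why some markup (and hence
# some monopoly power over an excludable idea) is required.
for q in (1_000, 100_000, 10_000_000):
    print(q, round(average_cost(q), 2))
```

Pricing at marginal cost loses exactly the fixed cost no matter how many copies are sold, which is the committee's point about markups and excludability in the quoted passage.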
Oddly enough, Nordhaus and Romer published essays on the topics that won the Nobel prize in consecutive issues of the Journal of Economic Perspectives in Fall 1993 and Winter 1994 (full disclosure: I have worked as Managing Editor of the JEP since the start of the journal in 1987). For those who want a dose of the old stuff:
A version of this article first appeared on Conversable Economist.
Timothy Taylor is an American economist. He is managing editor of the Journal of Economic Perspectives, a quarterly academic journal produced at Macalester College and published by the American Economic Association. Taylor received his Bachelor of Arts degree from Haverford College and a master's degree in economics from Stanford University. At Stanford, he was winner of the award for excellent teaching in a large class (more than 30 students) given by the Associated Students of Stanford University. At the University of Minnesota, he was named a Distinguished Lecturer by the Department of Economics and voted Teacher of the Year by the master's degree students at the Hubert H. Humphrey Institute of Public Affairs. Taylor has been a guest speaker for groups of teachers of high school economics, visiting diplomats from eastern Europe, talk-radio shows, and community groups. From 1989 to 1997, Professor Taylor wrote an economics opinion column for the San Jose Mercury News. He has published multiple lectures on economics through The Teaching Company. With Rudolph Penner and Isabel Sawhill, he is co-author of Updating America's Social Contract (2000), whose first chapter provided an early radical centrist perspective, "An Agenda for the Radical Middle". Taylor is also the author of The Instant Economist: Everything You Need to Know About How the Economy Works, published by the Penguin Group in 2012. The fourth edition of Taylor's Principles of Economics textbook was published by Textbook Media in 2017.